DOI: 10.1145/1363686.1363848

PHALANX: a graph-theoretic framework for test case prioritization

Published: 16 March 2008

Abstract

Test case prioritization for regression testing can be performed using different metrics (e.g., statement coverage, path coverage) depending on the application context. Different metrics in turn require different prioritization schemes (e.g., maximum coverage, dissimilar paths covered), so supporting a variety of metrics and schemes introduces significant algorithmic and implementation complexity into the testing process. In this paper, we present a novel approach to the test case prioritization problem that addresses this limitation. We devise a framework, Phalanx, that separates two distinct aspects of the problem: metrics that define ordering relations among test cases, and mechanisms that implement these metrics on test suites. We abstract the metric information into a test-case dissimilarity graph -- a weighted graph in which nodes represent test cases and edge weights specify user-defined proximity measures between test cases. We argue that a declustered linearization of the nodes in this graph yields a desirable prioritization, since it ensures that dissimilar test cases are applied first. We explore two mechanisms for declustering the test-case dissimilarity graph -- Fiedler (spectral) ordering and a greedy approach. We implement both orderings in Phalanx, a highly flexible and customizable testbed, and demonstrate excellent performance for test-case prioritization. Our experiments on test suites from the Subject Infrastructure Repository (SIR) show that a variety of user-defined metrics can be easily incorporated into Phalanx.
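The two declustering mechanisms named in the abstract can be illustrated concretely. The sketch below is not taken from the paper: it assumes a symmetric NumPy matrix `W` of pairwise test-case dissimilarities standing in for the dissimilarity graph, computes the Fiedler ordering as the sort order of the eigenvector for the second-smallest Laplacian eigenvalue, and uses a common farthest-point rule for the greedy pass (the paper's exact greedy criterion is not reproduced here).

```python
import numpy as np

def fiedler_order(W):
    """Linearize graph nodes along the Fiedler vector.

    W is a symmetric (n, n) matrix of edge weights; here the weights
    play the role of pairwise dissimilarities between test cases
    (an illustrative stand-in for the paper's dissimilarity graph).
    """
    D = np.diag(W.sum(axis=1))      # degree matrix
    L = D - W                       # graph Laplacian
    vals, vecs = np.linalg.eigh(L)  # eigenpairs, ascending eigenvalues
    fiedler = vecs[:, 1]            # eigenvector of 2nd-smallest eigenvalue
    return np.argsort(fiedler)      # node order along the Fiedler vector

def greedy_decluster(W):
    """Greedy declustering: schedule next the test case whose minimum
    dissimilarity to the already-scheduled tests is largest (a common
    farthest-point heuristic; the paper's greedy rule may differ).
    """
    n = len(W)
    order = [int(np.argmax(W.sum(axis=1)))]  # seed: most dissimilar overall
    remaining = set(range(n)) - set(order)
    while remaining:
        nxt = max(remaining, key=lambda j: min(W[j, i] for i in order))
        order.append(nxt)
        remaining.remove(nxt)
    return order

# Toy example: tests 0 and 1 are highly dissimilar to each other,
# as are tests 2 and 3; cross-pair dissimilarity is low.
W = np.array([[0., 3., 1., 1.],
              [3., 0., 1., 1.],
              [1., 1., 0., 3.],
              [1., 1., 3., 0.]])

print(fiedler_order(W))     # a permutation of 0..3
print(greedy_decluster(W))  # begins with a highly dissimilar pair
```

Because heavy edges encode dissimilarity, the Fiedler linearization pulls strongly dissimilar tests toward adjacent positions, so an early prefix of the ordering already exercises diverse behavior -- the declustering intuition the abstract appeals to.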



Published In

SAC '08: Proceedings of the 2008 ACM symposium on Applied computing
March 2008
2586 pages
ISBN:9781595937537
DOI:10.1145/1363686
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. program analysis
  2. regression testing
  3. test prioritization

Qualifiers

  • Research-article

Conference

SAC '08: The 2008 ACM Symposium on Applied Computing
March 16-20, 2008
Fortaleza, Ceará, Brazil

Acceptance Rates

Overall acceptance rate: 1,650 of 6,669 submissions (25%)

Cited By

  • (2023) Investigating Execution Trace Embedding for Test Case Prioritization. 2023 IEEE 23rd International Conference on Software Quality, Reliability, and Security (QRS), pp. 279-290, 22 Oct 2023. DOI: 10.1109/QRS60937.2023.00036
  • (2023) A Systematic Literature Review on Test Case Prioritization Techniques. Agile Software Development, ch. 7, pp. 101-159, 8 Feb 2023. DOI: 10.1002/9781119896838.ch7
  • (2022) Learning to Prioritize Test Cases for Computer Aided Design Software via Quantifying Functional Units. Applied Sciences, 12(20):10414, 15 Oct 2022. DOI: 10.3390/app122010414
  • (2018) Graphite: A Greedy Graph-Based Technique for Regression Test Case Prioritization. 2018 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW), pp. 245-251, Oct 2018. DOI: 10.1109/ISSREW.2018.00014
  • (2016) Prioritizing test cases for early detection of refactoring faults. Software Testing, Verification & Reliability, 26(5):402-426, 1 Aug 2016. DOI: 10.1002/stvr.1603
  • (2015) Exploring Test Suite Diversification and Code Coverage in Multi-Objective Test Case Selection. 2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST), pp. 1-10, Apr 2015. DOI: 10.1109/ICST.2015.7102588
  • (2015) An effective test case prioritization method based on fault severity. 2015 6th IEEE International Conference on Software Engineering and Service Science (ICSESS), pp. 737-741, Sep 2015. DOI: 10.1109/ICSESS.2015.7339162
  • (2014) RDCC: An effective test case prioritization framework using software requirements, design and source code collaboration. 2014 17th International Conference on Computer and Information Technology (ICCIT), pp. 75-80, Dec 2014. DOI: 10.1109/ICCITechn.2014.7073072
  • (2014) Static test case prioritization using topic models. Empirical Software Engineering, 19(1):182-212, 1 Feb 2014. DOI: 10.1007/s10664-012-9219-7
  • (2013) Achieving scalable model-based testing through test case diversity. ACM Transactions on Software Engineering and Methodology, 22(1):1-42, 4 Mar 2013. DOI: 10.1145/2430536.2430540
