
An empirical study of regression test selection techniques

Published: 01 April 2001

Abstract

Regression testing is the process of validating modified software to detect whether new errors have been introduced into previously tested code and to provide confidence that modifications are correct. Since regression testing is an expensive process, researchers have proposed regression test selection techniques as a way to reduce some of this expense. These techniques attempt to reduce costs by selecting and running only a subset of the test cases in a program's existing test suite. Although there have been some analytical and empirical evaluations of individual techniques, to our knowledge only one comparative study, focusing on one aspect of two of these techniques, has been reported in the literature. We conducted an experiment to examine the relative costs and benefits of several regression test selection techniques. The experiment examined five techniques for reusing test cases, focusing on their relative abilities to reduce regression testing effort and uncover faults in modified programs. Our results highlight several differences between the techniques, and expose essential trade-offs that should be considered when choosing a technique for practical application.
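The core idea behind the selection techniques the abstract describes can be illustrated with a minimal sketch. The code below is hypothetical and not the paper's implementation: it shows a coverage-based selection in the spirit of "safe" techniques, which select every test whose execution trace touches a program entity modified between versions. The test names and entity names are invented for illustration.

```python
# Hypothetical sketch of modification-based regression test selection:
# select only the tests whose coverage intersects the set of changed
# program entities (e.g. functions or basic blocks).

def select_tests(coverage, modified_entities):
    """Return the subset of tests whose coverage intersects the change set.

    coverage: dict mapping test name -> set of entities the test covers.
    modified_entities: entities changed between the two program versions.
    """
    changed = set(modified_entities)
    return {test for test, covered in coverage.items() if covered & changed}

# Toy example: only t1 and t3 exercise the modified function "parse",
# so only they need to be rerun on the new version.
coverage = {
    "t1": {"parse", "eval"},
    "t2": {"eval", "print"},
    "t3": {"parse"},
}
selected = select_tests(coverage, {"parse"})
```

Under the (strong) assumption that per-test coverage data from the old version is available and accurate, such a technique never discards a test that could reveal a fault in the modified code, which is the "safety" property the paper's experiment evaluates against cost.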



Reviews

Mauro Pezze

Regression testing, i.e., the process of validating modified software, is an expensive and industrially relevant activity. One of the main problems in regression testing is selecting a suitable subset of test cases to be executed on the new version under test. Many solutions have been proposed and independently evaluated, but the choice of a suitable method for a specific application domain has not yet been thoroughly investigated. This paper proposes a set of parameters for comparing different techniques for selecting regression test suites, describes an experimental framework for their evaluation, and offers a first evaluation of the main classes of regression testing techniques. Techniques are evaluated by comparing effectiveness (i.e., the ability to reveal defects) against costs. The experimental framework allows the effectiveness and costs of different techniques to be evaluated. The paper evaluates: minimization techniques, which attempt to select a minimal set of test cases; dataflow techniques, which select test cases that exercise data interactions affected by modifications; safe techniques, which guarantee that no fault that could be revealed goes undetected; random techniques, which randomly select a subset of tests; and retest-all techniques, which require all tests to be executed. Some results confirm intuitive expectations while providing experimental data; for example, they confirm that minimization techniques have the lowest fault-detection effectiveness. Others are less intuitive and thus even more valuable: for example, the experiments show that test suite reduction depends dramatically on the size of the program (from 5% for small programs up to 99% for large programs). This paper can be of interest both to researchers working in software testing and experimental software engineering, and to practitioners.
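The minimization class the review mentions can likewise be sketched in a few lines. This is a hypothetical illustration, not the techniques studied in the paper: it uses a greedy set-cover heuristic to pick a small set of tests that still covers every coverage requirement affected by a modification. The test and requirement names are invented.

```python
# Hypothetical sketch of a minimization technique: greedily choose tests
# until every affected coverage requirement is covered (greedy set cover).
# Greedy selection yields a small, but not necessarily minimal, set.

def minimize(coverage, requirements):
    """Greedily choose tests covering all requirements.

    coverage: dict mapping test name -> set of requirements it covers.
    requirements: coverage requirements affected by the modification.
    """
    uncovered = set(requirements)
    chosen = []
    while uncovered:
        # Pick the test covering the most still-uncovered requirements.
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        if not coverage[best] & uncovered:
            break  # remaining requirements cannot be covered by any test
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

# Toy example: two tests suffice to cover all three requirements.
coverage = {
    "t1": {"r1", "r2"},
    "t2": {"r2", "r3"},
    "t3": {"r3"},
}
chosen = minimize(coverage, {"r1", "r2", "r3"})
```

Aggressively shrinking the suite this way is exactly why, as the review notes, minimization showed the lowest fault-detection effectiveness in the experiment: tests that are redundant with respect to coverage may still be the only ones that expose a particular fault.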


Information & Contributors

Information

Published In

ACM Transactions on Software Engineering and Methodology  Volume 10, Issue 2
April 2001
106 pages
ISSN:1049-331X
EISSN:1557-7392
DOI:10.1145/367008
Editor: Alex van Lamsweerde

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published in TOSEM Volume 10, Issue 2


Author Tags

  1. empirical study
  2. regression testing
  3. selective retest

Qualifiers

  • Article


Bibliometrics

Article Metrics

  • Downloads (Last 12 months)68
  • Downloads (Last 6 weeks)5
Reflects downloads up to 16 Oct 2024



Cited By

  • (2024)Efficient Incremental Code Coverage Analysis for Regression Test SuitesProceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering10.1145/3691620.3695551(1882-1894)Online publication date: 27-Oct-2024
  • (2024)Hybrid Regression Test Selection by Integrating File and Method DependencesProceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering10.1145/3691620.3695525(1557-1569)Online publication date: 27-Oct-2024
  • (2024)Bugfox: A Trace-Based Analyzer for Localizing the Cause of Software Regression in JavaScriptProceedings of the 17th ACM SIGPLAN International Conference on Software Language Engineering10.1145/3687997.3695648(224-233)Online publication date: 17-Oct-2024
  • (2024)Novel Approach: Prioritizing Test Cases Based on a Comprehensive Taxonomy of Product MetricsProceedings of the 2024 10th International Conference on Computer Technology Applications10.1145/3674558.3674559(1-7)Online publication date: 15-May-2024
  • (2024)Selecting “good” regression tests based on a classification of side-effects2024 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)10.1109/ICSTW60967.2024.00061(309-317)Online publication date: 27-May-2024
  • (2024)Fault localization by abstract interpretation and its applicationsJournal of Computer Languages10.1016/j.cola.2024.10128880(101288)Online publication date: Aug-2024
  • (2023)Test Case Selection through Novel Methodologies for Software Application DevelopmentsSymmetry10.3390/sym1510195915:10(1959)Online publication date: 23-Oct-2023
  • (2023)An Approach to Regression Testing Selection based on Code Changes and SmellsProceedings of the 8th Brazilian Symposium on Systematic and Automated Software Testing10.1145/3624032.3624036(25-34)Online publication date: 25-Sep-2023
  • (2023)Search-based Test Case Selection for PLC Systems using Functional Block Diagram Programs2023 IEEE 34th International Symposium on Software Reliability Engineering (ISSRE)10.1109/ISSRE59848.2023.00040(228-239)Online publication date: 9-Oct-2023
  • (2023)eMOP: A Maven Plugin for Evolution-Aware Runtime VerificationRuntime Verification10.1007/978-3-031-44267-4_20(363-375)Online publication date: 3-Oct-2023
