DOI: 10.1145/2610384.2628058

CoREBench: studying complexity of regression errors

Published: 21 July 2014

Abstract

Intuitively, we know that some software errors are more complex than others. If an error can be fixed by changing a single faulty statement, it is a simple error; the more substantial the fix must be, the more complex we consider the error.
In this work, we formally define and quantify the complexity of an error with respect to the complexity of the error's least complex, correct fix. As a concrete measure of the complexity of such fixes, we introduce Cyclomatic Change Complexity, which is inspired by existing program complexity metrics.
Moreover, we introduce CoREBench, a collection of 70 regression errors systematically extracted from several open-source C projects, and compare their complexity with that of the seeded errors in the two most popular error benchmarks, SIR and the Siemens Suite. We find that seeded errors are significantly less complex, i.e., they require significantly less substantial fixes, than actual regression errors. For example, more than 42% of the seeded errors are simple, compared to 8% of the actual ones. This is a concern for the external validity of studies based on seeded errors, and we therefore propose CoREBench for the controlled study of regression testing, debugging, and repair techniques.
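
For intuition on what a McCabe-inspired change metric measures, the sketch below is a minimal, hypothetical illustration, not the paper's actual definition of Cyclomatic Change Complexity: it scores a fix by counting the decision points in its added C lines plus one, in the spirit of McCabe's cyclomatic complexity. The function name and the token set are illustrative assumptions. Under this reading, a one-statement fix scores the minimum of 1, matching the abstract's notion of a "simple" error, while a fix that introduces branching scores higher.

import re

# Hypothetical sketch only: the paper's formal Cyclomatic Change
# Complexity definition is not reproduced on this page. Tokens that
# open an extra branch in C: if/for/while/case, the short-circuit
# operators, and the ternary operator.
DECISION_TOKENS = re.compile(r"\b(?:if|for|while|case)\b|&&|\|\||\?")

def cyclomatic_change_complexity(added_lines):
    """McCabe-style score of a fix: decision points in its added lines + 1."""
    decisions = sum(len(DECISION_TOKENS.findall(line)) for line in added_lines)
    return decisions + 1  # one linear path, plus one per branch

# A one-statement fix is "simple": complexity 1.
simple_fix = ["x = y + 1;"]

# A fix that introduces branching is more complex.
complex_fix = [
    "if (ptr == NULL || len == 0)",
    "    return -1;",
    "for (i = 0; i < len; i++)",
    "    total += buf[i];",
]

print(cyclomatic_change_complexity(simple_fix))   # -> 1
print(cyclomatic_change_complexity(complex_fix))  # -> 4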




Information & Contributors

Information

Published In

ISSTA 2014: Proceedings of the 2014 International Symposium on Software Testing and Analysis
July 2014, 460 pages
ISBN: 9781450326452
DOI: 10.1145/2610384


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 21 July 2014


Author Tags

  1. Coupling Effect
  2. Error Complexity
  3. Regression

Qualifiers

  • Research-article

Conference

ISSTA '14

Acceptance Rates

Overall acceptance rate: 58 of 213 submissions (27%)



Bibliometrics & Citations

Bibliometrics

Article Metrics

  • Downloads (Last 12 months)74
  • Downloads (Last 6 weeks)5
Reflects downloads up to 23 Jan 2025

Citations

Cited By
  • (2025) Top-down: A better strategy for incremental covering array generation. Information and Software Technology, 178 (107601), Feb 2025. DOI: 10.1016/j.infsof.2024.107601
  • (2024) Patch Correctness Assessment: A Survey. ACM Transactions on Software Engineering and Methodology, 34(2), 1-50, 8 Nov 2024. DOI: 10.1145/3702972
  • (2024) C2D2: Extracting Critical Changes for Real-World Bugs with Dependency-Sensitive Delta Debugging. Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis, 300-312, 11 Sep 2024. DOI: 10.1145/3650212.3652129
  • (2024) ROBUST: 221 bugs in the Robot Operating System. Empirical Software Engineering, 29(3), 23 Mar 2024. DOI: 10.1007/s10664-024-10440-0
  • (2023) PatchVerif. Proceedings of the 32nd USENIX Conference on Security Symposium, 3011-3028, 9 Aug 2023. DOI: 10.5555/3620237.3620406
  • (2023) A Large-Scale Empirical Review of Patch Correctness Checking Approaches. Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 1203-1215, 30 Nov 2023. DOI: 10.1145/3611643.3616331
  • (2023) Seeing the Whole Elephant: Systematically Understanding and Uncovering Evaluation Biases in Automated Program Repair. ACM Transactions on Software Engineering and Methodology, 32(3), 1-37, 27 Apr 2023. DOI: 10.1145/3561382
  • (2023) RUSPATCH: Towards Timely and Effectively Patching Rust Applications. 2023 IEEE 23rd International Conference on Software Quality, Reliability, and Security (QRS), 517-528, 22 Oct 2023. DOI: 10.1109/QRS60937.2023.00057
  • (2023) Evaluating the Impact of Experimental Assumptions in Automated Fault Localization. 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), 159-171, May 2023. DOI: 10.1109/ICSE48619.2023.00025
  • (2023) Regression Fuzzing for Deep Learning Systems. Proceedings of the 45th International Conference on Software Engineering, 82-94, 14 May 2023. DOI: 10.1109/ICSE48619.2023.00019
