DOI: 10.1109/ICSM.2015.7332456
Article

An empirical study of bugs in test code

Published: 29 September 2015

Abstract

Testing aims at detecting (regression) bugs in production code. However, test code is just as likely to contain bugs as the code it tests. Buggy test cases can silently miss bugs in the production code or loudly ring false alarms when the production code is correct. We present the first empirical study of bugs in test code to characterize their prevalence and root cause categories. We mine the bug repositories and version control systems of 211 Apache Software Foundation (ASF) projects and find 5,556 test-related bug reports. We (1) compare properties of test bugs with production bugs, such as active time and fixing effort needed, and (2) qualitatively study 443 randomly sampled test bug reports in detail and categorize them based on their impact and root causes. Our results show that (1) around half of all the projects had bugs in their test code; (2) the majority of test bugs are false alarms, i.e., the test fails while the production code is correct, while a minority result in silent horrors, i.e., the test passes while the production code is incorrect; (3) incorrect and missing assertions are the dominant root cause of silent horror bugs; (4) semantic (25%), flaky (21%), and environment-related (18%) bugs are the dominant root cause categories of false alarms; (5) the majority of false alarm bugs happen in the exercise portion of the tests; and (6) developers contribute more actively to fixing test bugs, and test bugs are fixed sooner than production bugs. In addition, we evaluate whether existing bug detection tools can detect bugs in test code.
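
To make the two failure categories concrete, here is a minimal, hypothetical JUnit 4 sketch (the Calculator class, the seeded defect, and the test names are illustrative assumptions, not examples taken from the studied ASF projects). It contrasts a silent horror caused by a missing assertion with a flaky false alarm caused by a timing assumption:

import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

// Hypothetical production class with a seeded defect, for illustration only.
class Calculator {
    int add(int a, int b) {
        return a - b; // production bug: should be a + b
    }
}

public class CalculatorTest {

    // Silent horror: the test exercises add() but never asserts on the
    // result, so it passes even though the production code is wrong.
    @Test
    public void addTwoNumbersSilently() {
        new Calculator().add(2, 3); // result discarded; missing assertion
    }

    // Fixing the test bug (adding the assertion) exposes the production bug.
    @Test
    public void addTwoNumbersWithAssertion() {
        assertEquals(5, new Calculator().add(2, 3)); // now fails as it should
    }

    // False alarm (flaky): the assertion depends on wall-clock timing, so the
    // test can fail under load even when the production code is correct.
    @Test
    public void flakyTimingAssumption() throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread.sleep(10);
        long elapsed = System.currentTimeMillis() - start;
        assertTrue(elapsed < 15); // brittle timing bound
    }
}

The first test passes despite the production defect, the second exposes it once the assertion is added, and the third can fail intermittently even though nothing in the production code is wrong.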



Information & Contributors


Published In

ICSME '15: Proceedings of the 2015 IEEE International Conference on Software Maintenance and Evolution (ICSME)
September 2015
611 pages
ISBN: 9781467375320

Publisher

IEEE Computer Society

United States

Publication History

Published: 29 September 2015

Qualifiers

  • Article



Cited By

  • (2023) Flaky Tests in UI: Understanding Causes and Applying Correction Strategies. Proceedings of the XXXVII Brazilian Symposium on Software Engineering, 398-406. DOI: 10.1145/3613372.3613406. Online publication date: 25-Sep-2023.
  • (2023) Silent bugs in deep learning frameworks: an empirical study of Keras and TensorFlow. Empirical Software Engineering, 29(1). DOI: 10.1007/s10664-023-10389-6. Online publication date: 29-Nov-2023.
  • (2023) A Literature Survey of Assertions in Software Testing. Engineering of Computer-Based Systems, 75-96. DOI: 10.1007/978-3-031-49252-5_8. Online publication date: 16-Oct-2023.
  • (2021) Investigating Test Smells in JavaScript Test Code. Proceedings of the 6th Brazilian Symposium on Systematic and Automated Software Testing, 36-45. DOI: 10.1145/3482909.3482915. Online publication date: 27-Sep-2021.
  • (2021) A Survey of Flaky Tests. ACM Transactions on Software Engineering and Methodology, 31(1), 1-74. DOI: 10.1145/3476105. Online publication date: 26-Oct-2021.
  • (2021) Quantifying no-fault-found test failures to prioritize inspection of flaky tests at Ericsson. Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 1371-1380. DOI: 10.1145/3468264.3473930. Online publication date: 20-Aug-2021.
  • (2021) Verifying Determinism in Sequential Programs. Proceedings of the 43rd International Conference on Software Engineering, 37-49. DOI: 10.1109/ICSE43902.2021.00017. Online publication date: 22-May-2021.
  • (2020) Intermittently failing tests in the embedded systems domain. Proceedings of the 29th ACM SIGSOFT International Symposium on Software Testing and Analysis, 337-348. DOI: 10.1145/3395363.3397359. Online publication date: 18-Jul-2020.
  • (2020) Testing of Mobile Applications in the Wild. Proceedings of the 28th International Conference on Program Comprehension, 296-307. DOI: 10.1145/3387904.3389256. Online publication date: 13-Jul-2020.
  • (2020) What is the Vocabulary of Flaky Tests? Proceedings of the 17th International Conference on Mining Software Repositories, 492-502. DOI: 10.1145/3379597.3387482. Online publication date: 29-Jun-2020.
