
An empirical study of dormant bugs

Published: 31 May 2014

Abstract

Over the past decade, several research efforts have studied the quality of software systems by looking at post-release bugs. However, these studies do not account for bugs that remain dormant, i.e., bugs that are introduced in one version of a software system but are not found until much later, sometimes years and many versions afterward. Such dormant bugs skew our understanding of software quality. In this paper, we compare dormant bugs against non-dormant bugs using data from 20 open-source Apache Software Foundation systems. We find that 33% of the bugs introduced in a version are not reported until much later (i.e., they surface in future versions as dormant bugs). Moreover, we find that 18.9% of the bugs reported in a version were not introduced in that version (i.e., they are dormant bugs from prior versions). In short, using reported bugs to judge the quality of a specific version can be misleading. Exploring the fix process for dormant bugs, we find that they are fixed faster than non-dormant bugs (median fix time of 5 days versus 8 days) and are fixed by more experienced developers (the median commit count of developers who fix dormant bugs is 169% higher). Our results highlight that dormant bugs differ from non-dormant bugs in many respects and that future research on software quality should carefully study and consider dormant bugs.
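To make the paper's notion of dormancy concrete, the sketch below shows one way to classify bugs and recompute the kind of summary statistics the abstract reports, assuming each bug record carries the release that introduced it and the release it was reported against. The Bug record, the integer release indices, and the example issue IDs are hypothetical illustrations, not the authors' actual dataset schema or tooling.

```python
from dataclasses import dataclass
from statistics import median

# NOTE: minimal sketch only; field names and release indices are assumed
# for illustration, not taken from the paper's dataset.
@dataclass
class Bug:
    bug_id: str
    introduced_in: int  # index of the release that introduced the bug
    reported_in: int    # index of the release the bug was reported against
    fix_days: float     # days from bug report to fix

def is_dormant(bug: Bug) -> bool:
    """Dormant: reported against a later release than the one that introduced it."""
    return bug.reported_in > bug.introduced_in

def summarize(bugs: list[Bug]) -> dict[str, float]:
    """Recompute the kinds of quantities the abstract reports for a set of bugs."""
    dormant = [b for b in bugs if is_dormant(b)]
    non_dormant = [b for b in bugs if not is_dormant(b)]
    return {
        # share of all bugs that slept through at least one release
        "dormant_ratio": len(dormant) / len(bugs),
        "median_fix_days_dormant": median(b.fix_days for b in dormant),
        "median_fix_days_non_dormant": median(b.fix_days for b in non_dormant),
    }

if __name__ == "__main__":
    bugs = [
        Bug("HTTPD-1", introduced_in=1, reported_in=3, fix_days=4),  # dormant
        Bug("HTTPD-2", introduced_in=2, reported_in=2, fix_days=9),  # non-dormant
        Bug("DERBY-3", introduced_in=1, reported_in=4, fix_days=6),  # dormant
        Bug("DERBY-4", introduced_in=3, reported_in=3, fix_days=7),  # non-dormant
    ]
    print(summarize(bugs))
```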




Information

Published In

cover image ACM Conferences
MSR 2014: Proceedings of the 11th Working Conference on Mining Software Repositories
May 2014
427 pages
ISBN:9781450328630
DOI:10.1145/2597073
  • General Chair: Premkumar Devanbu
  • Program Chairs: Sung Kim, Martin Pinzger

In-Cooperation

  • TCSE: IEEE Computer Society Technical Council on Software Engineering

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 31 May 2014


Author Tags

  1. Empirical Study
  2. Software Bugs
  3. Software Quality

Qualifiers

  • Article

Conference

ICSE '14


Cited By

  • (2024) Understanding Vulnerability Inducing Commits of the Linux Kernel. ACM Transactions on Software Engineering and Methodology 33(7):1–28. DOI: 10.1145/3672452. Online publication date: 14-Jun-2024.
  • (2023) Characterizing Issue Management in Runtime Systems. Proceedings of the 33rd Annual International Conference on Computer Science and Software Engineering, 54–63. DOI: 10.5555/3615924.3615930. Online publication date: 11-Sep-2023.
  • (2023) Test Flakiness Across Programming Languages. IEEE Transactions on Software Engineering 49(4):2039–2052. DOI: 10.1109/TSE.2022.3208864. Online publication date: 1-Apr-2023.
  • (2023) A Procedure to Continuously Evaluate Predictive Performance of Just-In-Time Software Defect Prediction Models During Software Development. IEEE Transactions on Software Engineering 49(2):646–666. DOI: 10.1109/TSE.2022.3158831. Online publication date: 1-Feb-2023.
  • (2023) Inconsistent Defect Labels: Essence, Causes, and Influence. IEEE Transactions on Software Engineering 49(2):586–610. DOI: 10.1109/TSE.2022.3156787. Online publication date: 1-Feb-2023.
  • (2023) Uncovering the Hidden Risks: The Importance of Predicting Bugginess in Untouched Methods. 2023 IEEE 23rd International Working Conference on Source Code Analysis and Manipulation (SCAM), 277–282. DOI: 10.1109/SCAM59687.2023.00039. Online publication date: 2-Oct-2023.
  • (2023) Understanding Bugs in Rust Compilers. 2023 IEEE 23rd International Conference on Software Quality, Reliability, and Security (QRS), 138–149. DOI: 10.1109/QRS60937.2023.00023. Online publication date: 22-Oct-2023.
  • (2023) BugRadar. Information and Software Technology 162. DOI: 10.1016/j.infsof.2023.107274. Online publication date: 1-Oct-2023.
  • (2023) On the validity of retrospective predictive performance evaluation procedures in just-in-time software defect prediction. Empirical Software Engineering 28(5). DOI: 10.1007/s10664-023-10341-8. Online publication date: 18-Sep-2023.
  • (2023) Enhancing the defectiveness prediction of methods and classes via JIT. Empirical Software Engineering 28(2). DOI: 10.1007/s10664-022-10261-z. Online publication date: 31-Jan-2023.
  • Show More Cited By
