DOI: 10.1145/2591062.2591122
Article

On failure classification: the impact of "getting it wrong"

Published: 31 May 2014

Abstract

Bug classification is a well-established practice that supports important activities such as enhancing verification and validation (V&V) efficiency and effectiveness. The state of the practice is manual, and hence classification errors occur. This paper investigates the sensitivity of the value of bug classification (specifically, failure type classification) to its error rate, i.e., the degree to which misclassified historic bugs decrease V&V effectiveness (the ability to find bugs of a failure type of interest). Results from the analysis of an industrial database of more than 3,000 bugs show that the impact of the classification error rate on V&V effectiveness varies significantly with failure type. Specifically, there are failure types for which a 5% classification error can decrease the ability to find them by 66%. Conversely, there are failure types for which V&V effectiveness is robust to very high error rates. These results show the utility of future research aimed at: 1) providing better tool support for decreasing human errors in classifying the failure type of bugs, 2) providing more robust approaches for the selection of V&V techniques, and 3) including robustness as an important criterion when evaluating technologies.
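The abstract's central observation is that the same classification error rate can hurt some failure types far more than others. As a rough illustration of why that can happen, the Python sketch below uses hypothetical failure types, prevalences, and a uniform misclassification model (none of which come from the paper) to show how quickly the set of bugs labelled with a rare failure type fills with misclassified records compared to a common type; this is the kind of type-dependent sensitivity the results describe, not the paper's actual analysis.

```python
# Illustrative sketch only: hypothetical failure types, prevalences, and a
# uniform misclassification model; this is NOT the paper's method or data.

FAILURE_TYPES = {"common_type": 0.60, "rare_type": 0.05}  # hypothetical share of all bugs
N_TYPES = 4  # assume four failure types overall; errors spread uniformly over the other types

def labelled_set_quality(share: float, error_rate: float) -> tuple[float, float]:
    """Precision and recall of the set of bugs labelled with a given failure type,
    when each historic record is misclassified with probability `error_rate`."""
    true_pos = share * (1.0 - error_rate)                    # bugs of the type, labelled correctly
    false_pos = (1.0 - share) * error_rate / (N_TYPES - 1)   # other types leaking into this label
    precision = true_pos / (true_pos + false_pos)
    recall = 1.0 - error_rate                                # fraction of the type still carrying its own label
    return precision, recall

for error_rate in (0.05, 0.20):
    for name, share in FAILURE_TYPES.items():
        precision, recall = labelled_set_quality(share, error_rate)
        print(f"{name:12s} error={error_rate:4.0%}  precision={precision:.2f}  recall={recall:.2f}")
```

Under these made-up numbers, a 5% error rate leaves the common type's labelled set about 99% genuine but drops the rare type's to roughly 75%, and at a 20% error rate the rare type's labelled set is mostly misclassified records. If V&V techniques are selected from that labelled history, rare failure types suffer first, which is consistent with the robustness concerns raised in the abstract.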





    Published In

ICSE Companion 2014: Companion Proceedings of the 36th International Conference on Software Engineering
May 2014, 741 pages
ISBN: 9781450327688
DOI: 10.1145/2591062


In-Cooperation

• TCSE: IEEE Computer Society's Technical Council on Software Engineering

Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. Bug classification
    2. human factor
    3. metrics
    4. software quality
    5. testing
    6. verification and validation


    Conference

    ICSE '14

    Acceptance Rates

Overall Acceptance Rate: 276 of 1,856 submissions, 15%



    Cited By

• (2017) Towards an understanding of change types in bug fixing code. Information and Software Technology 86:C, 37-53. DOI: 10.1016/j.infsof.2017.02.003. Online publication date: 1-Jun-2017
• (2015) The Art and Science of Analyzing Software Data. Online publication date: 15-Sep-2015
• (2014) Achieving and Maintaining CMMI Maturity Level 5 in a Small Organization. IEEE Software 31(5), 80-86. DOI: 10.1109/MS.2014.17. Online publication date: Sep-2014
• (2014) Data, Data Everywhere... IEEE Software 31(5), 4-7. DOI: 10.1109/MS.2014.110. Online publication date: Sep-2014
