DOI: 10.5555/381473.381488

Evaluating the accuracy of defect estimation models based on inspection data from two inspection cycles

Published: 01 July 2001

Abstract

Defect content estimation techniques (DCETs) use defect data from inspections to estimate the total number of defects in a document and thus to evaluate the development process. For inspections that yield few data points, DCETs reportedly underestimate the number of defects. If there is a second inspection cycle, the additional defect data is expected to increase estimation accuracy.
In this paper we consider three scenarios for combining data sets from the inspection-reinspection process. We evaluate these approaches with data from an experiment in a university environment, in which 31 teams inspected and reinspected a software requirements document.
The main finding of the experiment was that reinspection data improved estimation accuracy. With the best combination approach, all examined estimators yielded estimates that were on average within 20% of the true value, and all estimates stayed within 40% of the true value.
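To make the estimation idea concrete, the following is a minimal sketch, not the estimators evaluated in the paper: a generic two-sample capture-recapture estimate in which the "samples" are the defects found in the first inspection cycle and in the reinspection, plus the relative-error measure used to judge how far an estimate lies from the true defect count. The Chapman-corrected Lincoln-Petersen estimator and all defect IDs below are illustrative assumptions.

```python
# Illustrative sketch only: a two-sample capture-recapture estimate of total
# defect content from inspection and reinspection data. The estimator form
# (Chapman-corrected Lincoln-Petersen) and the defect IDs are assumptions,
# not the specific DCETs studied in the paper.

def estimate_defect_content(cycle1: set, cycle2: set) -> float:
    """Chapman-corrected Lincoln-Petersen estimate of the total defect count."""
    n1, n2 = len(cycle1), len(cycle2)
    overlap = len(cycle1 & cycle2)  # defects detected in both cycles
    return (n1 + 1) * (n2 + 1) / (overlap + 1) - 1

def relative_error(estimate: float, true_total: int) -> float:
    """Signed deviation of the estimate relative to the true defect count."""
    return (estimate - true_total) / true_total

if __name__ == "__main__":
    # Hypothetical defect IDs logged by one team in the two inspection cycles.
    first_cycle = {"D01", "D02", "D03", "D05", "D08", "D09", "D12"}
    reinspection = {"D02", "D03", "D04", "D07", "D08", "D11"}

    estimate = estimate_defect_content(first_cycle, reinspection)
    print(f"estimated total defects: {estimate:.1f}")              # 13.0
    # If the requirements document actually contained 15 defects:
    print(f"relative error: {relative_error(estimate, 15):+.0%}")  # -13%
```

In this toy run the estimate of 13 defects against a true count of 15 corresponds to a relative error of about -13%, i.e., within the 20% band the abstract reports for the best combination approach.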



Published In

ICSE '01: Proceedings of the 23rd International Conference on Software Engineering
July 2001
844 pages
ISBN: 0-7695-1050-7


Publisher

IEEE Computer Society

United States


Qualifiers

  • Article

Conference

ICSE '01: 23rd International Conference on Software Engineering
May 12 - 19, 2001
Toronto, Ontario, Canada

Acceptance Rates

ICSE '01 paper acceptance rate: 47 of 268 submissions (18%)
Overall acceptance rate: 276 of 1,856 submissions (15%)

