
Selecting Best Practices for Effort Estimation

Published: 01 November 2006

Abstract

Effort estimation often requires generalizing from a small number of historical projects. Generalization from such limited experience is an inherently underconstrained problem. Hence, the learned effort models can exhibit large deviations that prevent standard statistical methods (e.g., t-tests) from distinguishing the performance of alternative effort-estimation methods. The COSEEKMO effort-modeling workbench applies a set of heuristic rejection rules to comparatively assess results from alternative models. Using these rules, and despite the presence of large deviations, COSEEKMO can rank alternative methods for generating effort models. Based on our experiments with COSEEKMO, we advise a new view on supposed “best practices” in model-based effort estimation: 1) Each such practice should be viewed as a candidate technique which may or may not be useful in a particular domain, and 2) tools like COSEEKMO should be used to help analysts explore and select the best method for a particular domain.
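To make the ranking idea concrete, the Python sketch below shows one way heuristic rejection rules could compare effort models whose error distributions have large deviations: a method is rejected when a rival is clearly more accurate, and near-ties are broken in favour of the more stable method. This is an illustration under assumed rules, not COSEEKMO's published rule set; the function names, the 0.05 margin, and the sample error values are all hypothetical.

    # A minimal, hypothetical sketch (not COSEEKMO's actual rules): rank candidate
    # effort-estimation methods by pairwise comparison of their error samples.
    from statistics import mean, stdev

    def rejected(errors_a, errors_b, margin=0.05):
        """True if method A loses to method B under the assumed rules."""
        mean_a, mean_b = mean(errors_a), mean(errors_b)
        if abs(mean_a - mean_b) > margin:          # clear accuracy difference
            return mean_a > mean_b
        return stdev(errors_a) > stdev(errors_b)   # near-tie: prefer smaller deviation

    def rank(methods):
        """methods maps a name to its magnitude-of-relative-error (MRE) samples."""
        losses = {a: sum(rejected(methods[a], methods[b])
                         for b in methods if b != a)
                  for a in methods}
        return sorted(methods, key=losses.get)     # fewest losses ranked first

    # Hypothetical cross-validation MREs for three candidate effort models.
    candidates = {
        "regression_all_features": [0.45, 0.52, 0.38, 0.61, 0.47],
        "regression_subset":       [0.31, 0.28, 0.40, 0.33, 0.29],
        "nearest_neighbor":        [0.35, 0.30, 0.36, 0.41, 0.34],
    }
    print(rank(candidates))   # e.g. ['regression_subset', 'nearest_neighbor', ...]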



Published In

IEEE Transactions on Software Engineering, Volume 32, Issue 11
November 2006
79 pages

Publisher

IEEE Press

Author Tags

  1. COCOMO
  2. Model-based effort estimation
  3. Data mining
  4. Deviation

Qualifiers

  • Research-article

Cited By
Cited By

  • (2023) Dynamic Prediction of Delays in Software Projects using Delay Patterns and Bayesian Modeling. Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pp. 1012-1023. DOI: 10.1145/3611643.3616328. Online publication date: 30-Nov-2023.
  • (2023) CoBRA without experts. Journal of Software: Evolution and Process, vol. 35, no. 12. DOI: 10.1002/smr.2569. Online publication date: 25-Apr-2023.
  • (2023) A soft computing approach for software defect density prediction. Journal of Software: Evolution and Process, vol. 36, no. 4. DOI: 10.1002/smr.2553. Online publication date: 14-Mar-2023.
  • (2022) OSS Effort Estimation Using Software Features Similarity and Developer Activity-Based Metrics. ACM Transactions on Software Engineering and Methodology, vol. 31, no. 2, pp. 1-35. DOI: 10.1145/3485819. Online publication date: 4-Mar-2022.
  • (2022) Locally weighted regression with different kernel smoothers for software effort estimation. Science of Computer Programming, vol. 214. DOI: 10.1016/j.scico.2021.102744. Online publication date: 1-Feb-2022.
  • (2022) An extended study on applicability and performance of homogeneous cross-project defect prediction approaches under homogeneous cross-company effort estimation situation. Empirical Software Engineering, vol. 27, no. 2. DOI: 10.1007/s10664-021-10103-4. Online publication date: 27-Jan-2022.
  • (2021) Impact Factors and Best Practices to Improve Effort Estimation Strategies and Practices in DevOps. Proceedings of the 11th International Conference on Information Communication and Management, pp. 11-17. DOI: 10.1145/3484399.3484401. Online publication date: 12-Aug-2021.
  • (2021) Modeling team dynamics for the characterization and prediction of delays in user stories. Proceedings of the 36th IEEE/ACM International Conference on Automated Software Engineering, pp. 991-1002. DOI: 10.1109/ASE51524.2021.9678939. Online publication date: 15-Nov-2021.
  • (2021) On the value of filter feature selection techniques in homogeneous ensembles effort estimation. Journal of Software: Evolution and Process, vol. 33, no. 6. DOI: 10.1002/smr.2343. Online publication date: 1-Jun-2021.
  • (2020) An exploratory study on applicability of cross project defect prediction approaches to cross-company effort estimation. Proceedings of the 16th ACM International Conference on Predictive Models and Data Analytics in Software Engineering, pp. 71-80. DOI: 10.1145/3416508.3417118. Online publication date: 8-Nov-2020.
