
On effort-aware metrics for defect prediction

Published: 01 November 2022

Abstract

Context

Advances in defect prediction models, aka classifiers, have been validated via accuracy metrics. Effort-aware metrics (EAMs) capture the benefit a classifier provides by accurately ranking defective entities such as classes or methods. PofB is an EAM that models a user who inspects entities by following the classifier-provided ranking of the probability that each entity is defective. Despite the importance of EAMs, no study has investigated EAM trends and validity.

Aim

The aim of this paper is twofold: (1) we reveal issues in EAM usage, and (2) we propose and evaluate a normalization of PofBs (aka NPofBs), based on ranking defective entities by predicted defect density.
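
To make the two rankings concrete, the following Python sketch computes PofB@X, i.e., the percentage of defects found after inspecting X% of the lines of code while following the classifier's probability ranking, and its normalized variant NPofB@X, which ranks by predicted defect density (probability divided by size). The Entity fields, the density formula, and the handling of the entity that straddles the budget cutoff are illustrative assumptions based on the abstract, not the paper's implementation or its accompanying tool.

# Illustrative sketch (not the authors' tool): PofB@X is the fraction of
# defects found after inspecting X% of the total LOC following the
# classifier's probability ranking; NPofB@X instead ranks by predicted
# defect density (probability / LOC). Fields and conventions are assumed.
from dataclasses import dataclass

@dataclass
class Entity:
    loc: int            # entity size in lines of code
    defects: int        # ground-truth defect count
    probability: float  # classifier-predicted probability of defectiveness

def pofb(entities, budget=0.20, normalized=False):
    # Rank by raw probability (PofB) or by density (NPofB).
    key = (lambda e: e.probability / e.loc) if normalized else (lambda e: e.probability)
    ranked = sorted(entities, key=key, reverse=True)
    loc_budget = budget * sum(e.loc for e in ranked)
    total_defects = sum(e.defects for e in ranked)
    inspected, found = 0, 0
    for e in ranked:
        if inspected >= loc_budget:
            break
        # Note: the entity straddling the cutoff is inspected in full here;
        # implementations differ on how they handle this boundary case.
        inspected += e.loc
        found += e.defects
    return found / total_defects if total_defects else 0.0

# Two classes with the same predicted probability but different sizes:
entities = [Entity(loc=900, defects=1, probability=0.9),
            Entity(loc=100, defects=1, probability=0.9)]
print(pofb(entities))                   # 0.5: the budget is spent on the large class
print(pofb(entities, normalized=True))  # 1.0: the small, dense class is ranked first

With a tight inspection budget, the density ranking surfaces small defect-dense entities first, so more defects are found per inspected line, which is consistent with the paper's finding that NPofBs are substantially higher than PofBs.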

Method

We perform a systematic mapping study featuring 152 primary studies in major journals, and an empirical study featuring 10 EAMs, 10 classifiers, two industrial projects, and 12 open-source projects.

Results

Our systematic mapping study reveals that most studies using EAMs report only a single EAM (e.g., PofB20) and that some studies mismatch EAM names. The main result of our empirical study is that NPofBs are statistically significantly higher than PofBs, by orders of magnitude.

Conclusions

In conclusion, the proposed normalization of PofBs (i) increases the realism of results, as it reflects a better use of classifiers, and (ii) promotes the practical adoption of prediction models in industry, as it demonstrates higher benefits. Finally, we provide a tool to compute EAMs, to support researchers in avoiding past issues in EAM usage.


Cited By

  • (2024) An Extensive Comparison of Static Application Security Testing Tools. Proceedings of the 28th International Conference on Evaluation and Assessment in Software Engineering, pp 69–78. https://doi.org/10.1145/3661167.3661199. Online publication date: 18-Jun-2024.
  • (2024) Just-in-Time crash prediction for mobile apps. Empirical Software Engineering 29(3). https://doi.org/10.1007/s10664-024-10455-7. Online publication date: 8-May-2024.

    Published In

Empirical Software Engineering, Volume 27, Issue 6, Nov 2022, 1651 pages

    Publisher

    Kluwer Academic Publishers

    United States

    Publication History

    Published: 01 November 2022
    Accepted: 31 May 2022

    Author Tags

    1. Defect prediction
    2. Accuracy metrics
    3. Effort-aware metrics

    Qualifiers

    • Research-article

    Funding Sources

    • Università degli Studi di Roma Tor Vergata
