DOI: 10.1109/ESEM.2017.14

What if I had no smells?

Published: 09 November 2017

Abstract

What would have happened if I had not had any code smells? This is an interesting question that, to the best of our knowledge, no previous study has tried to answer. In this paper, we present a method for implementing a what-if scenario analysis that estimates the number of defective files in the absence of smells. Our industrial case study shows that 20% of the total defective files were likely avoidable by avoiding smells. Such an estimate must be used with due care, though, as it is based on a hypothetical history (i.e., zero smells but the same process and product change characteristics). Specifically, the number of defective files could even increase for some types of smells. In addition, we note that in some circumstances, accepting code with smells might still be a good option for a company.
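The what-if analysis described in the abstract can be sketched as follows: fit a defect-prediction model on the observed (factual) history of per-file features, then re-apply the same model to a counterfactual history in which smell counts are set to zero while all other process and product change characteristics stay fixed; the gap between the two predicted defect counts is the number of likely avoidable defective files. Everything below is an illustrative assumption, not the paper's actual setup: the feature set, the synthetic data, and the logistic model are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical per-file features (invented for this sketch):
# number of code smells and change churn.
smells = rng.poisson(2.0, n).astype(float)
churn = rng.poisson(5.0, n).astype(float)
X = np.column_stack([np.ones(n), smells, churn])  # intercept + features

# Synthetic "history": defect odds grow with smells and churn.
true_w = np.array([-3.0, 0.6, 0.2])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ true_w))).astype(float)

# Fit a logistic-regression defect model by gradient descent.
w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

def predicted_defective(features):
    """Count files the fitted model classifies as defective."""
    return int((1 / (1 + np.exp(-features @ w)) > 0.5).sum())

# Factual scenario: defective files predicted from observed features.
factual = predicted_defective(X)

# Counterfactual scenario: same files and same churn, but zero smells.
X_nosmell = X.copy()
X_nosmell[:, 1] = 0.0
counterfactual = predicted_defective(X_nosmell)

print("predicted defective files:", factual)
print("predicted defective files without smells:", counterfactual)
print("likely avoidable:", factual - counterfactual)
```

Note that, as the abstract warns, the counterfactual count is not guaranteed to be lower in general: a model can associate some smell types with fewer defects, in which case zeroing them raises the predicted count.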




Published In

ESEM '17: Proceedings of the 11th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, November 2017, 481 pages. ISBN: 9781509040391


Publisher

IEEE Press


Author Tags

  1. code smells
  2. machine learning
  3. software estimation
  4. technical debt

Qualifiers

  • Research-article

Conference

ESEM '17

Acceptance Rates

Overall Acceptance Rate 130 of 594 submissions, 22%


Cited By

  • (2022) The Impact of Dormant Defects on Defect Prediction: A Study of 19 Apache Projects. ACM Transactions on Software Engineering and Methodology, 31(1), 1–26. DOI: 10.1145/3467895
  • (2021) Leveraging the Defects Life Cycle to Label Affected Versions and Defective Classes. ACM Transactions on Software Engineering and Methodology, 30(2), 1–35. DOI: 10.1145/3433928
  • (2020) A preliminary study on the adequacy of static analysis warnings with respect to code smell prediction. Proceedings of the 4th ACM SIGSOFT International Workshop on Machine-Learning Techniques for Software-Quality Evaluation, 1–6. DOI: 10.1145/3416505.3423559
  • (2020) On the diffuseness of technical debt items and accuracy of remediation time when using SonarQube. Information and Software Technology, 128. DOI: 10.1016/j.infsof.2020.106377
  • (2019) Towards surgically-precise technical debt estimation: early results and research roadmap. Proceedings of the 3rd ACM SIGSOFT International Workshop on Machine Learning Techniques for Software Quality Evaluation, 37–42. DOI: 10.1145/3340482.3342747
  • (2018) Facilitating feasibility analysis: the pilot defects prediction dataset maker. Proceedings of the 4th ACM SIGSOFT International Workshop on Software Analytics, 15–18. DOI: 10.1145/3278142.3278147