DOI: 10.1145/1852786.1852797
Research article

Building empirical support for automated code smell detection

Published: 16 September 2010
Abstract

    Identifying refactoring opportunities in software systems is an important activity in today's agile development environments. The concept of code smells has been proposed to characterize different types of design shortcomings in code, and metric-based detection algorithms claim to identify the "smelly" components automatically. This paper presents the results of an empirical study performed in a commercial environment. The study investigates how professional software developers detect god class code smells and compares their judgments to automatic classification. The results show that, even though the subjects perceive detecting god classes as an easy task, their agreement on the classification is low. Misplaced methods are a strong driver for subjects to identify a class as a god class. Previously proposed metric-based detection approaches performed well compared to the human classification. These results lead to the conclusion that an automated metric-based pre-selection decreases the effort spent on manual code inspections, and that automatic detection accompanied by a manual review increases the overall confidence in the results of metric-based classifiers.
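    The kind of metric-based god class detection the abstract refers to can be sketched with a small example. The metrics (ATFD, WMC, TCC) and the shape of the rule follow the well-known detection strategy of Lanza and Marinescu; the concrete threshold values below are illustrative assumptions, not the calibrated values used in the study.

```python
from dataclasses import dataclass

# Illustrative thresholds (assumptions): published detection strategies
# calibrate these per project ("FEW", "VERY HIGH", "ONE THIRD").
FEW = 5
WMC_VERY_HIGH = 47
ONE_THIRD = 1 / 3

@dataclass
class ClassMetrics:
    atfd: int    # Access To Foreign Data: attributes of other classes used
    wmc: int     # Weighted Method Count: sum of method complexities
    tcc: float   # Tight Class Cohesion: 0.0 (none) .. 1.0 (fully cohesive)

def is_god_class(m: ClassMetrics) -> bool:
    """Flag a god class candidate: a complex, incohesive class that
    reaches heavily into the data of other classes."""
    return m.atfd > FEW and m.wmc >= WMC_VERY_HIGH and m.tcc < ONE_THIRD

# A large, incohesive class using much foreign data is flagged;
# a cohesive, moderately complex class is not.
suspect = ClassMetrics(atfd=12, wmc=80, tcc=0.1)
clean = ClassMetrics(atfd=1, wmc=20, tcc=0.6)
```

    A classifier like this yields the candidate list that, per the paper's conclusion, a manual review would then confirm or reject.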



      Published In

      ESEM '10: Proceedings of the 2010 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement
      September 2010
      423 pages
      ISBN:9781450300391
      DOI:10.1145/1852786

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. code inspection
      2. code smells
      3. empirical study
      4. god class
      5. maintainability

      Qualifiers

      • Research-article

      Conference

      ESEM '10

      Acceptance Rates

      ESEM '10 Paper Acceptance Rate: 30 of 102 submissions, 29%
      Overall Acceptance Rate: 130 of 594 submissions, 22%


      Cited By

      • (2024) Study of Code Smells: A Review and Research Agenda. International Journal of Mathematical, Engineering and Management Sciences 9(3), 472-498. DOI: 10.33889/IJMEMS.2024.9.3.025. Online publication date: 1-Jun-2024.
      • (2024) An Insight into Code Smell Detection Tool. Reliability Engineering for Industrial Processes, 245-273. DOI: 10.1007/978-3-031-55048-5_17. Online publication date: 23-Apr-2024.
      • (2023) Towards a systematic approach to manual annotation of code smells. Science of Computer Programming, 102999. DOI: 10.1016/j.scico.2023.102999. Online publication date: Jul-2023.
      • (2023) Yet Another Model! A Study on Model's Similarities for Defect and Code Smells. Fundamental Approaches to Software Engineering, 282-305. DOI: 10.1007/978-3-031-30826-0_16. Online publication date: 22-Apr-2023.
      • (2022) Merging smell detectors. Proceedings of the International Conference on Technical Debt, 61-65. DOI: 10.1145/3524843.3528089. Online publication date: 16-May-2022.
      • (2022) How to improve deep learning for software analytics. Proceedings of the 19th International Conference on Mining Software Repositories, 156-166. DOI: 10.1145/3524842.3528458. Online publication date: 23-May-2022.
      • (2022) Studying Duplicate Logging Statements and Their Relationships With Code Clones. IEEE Transactions on Software Engineering 48(7), 2476-2494. DOI: 10.1109/TSE.2021.3060918. Online publication date: 1-Jul-2022.
      • (2022) Developers' perception matters: machine learning to detect developer-sensitive smells. Empirical Software Engineering 27(7). DOI: 10.1007/s10664-022-10234-2. Online publication date: 12-Oct-2022.
      • (2021) Evaluating Human versus Machine Learning Performance in a LegalTech Problem. Applied Sciences 12(1), 297. DOI: 10.3390/app12010297. Online publication date: 29-Dec-2021.
      • (2021) A Systematic Literature Review on Bad Smells - 5 W's: Which, When, What, Who, Where. IEEE Transactions on Software Engineering 47(1), 17-66. DOI: 10.1109/TSE.2018.2880977. Online publication date: 1-Jan-2021.
