Evaluating the Effectiveness of Regression Test Suites for Extract Method Validation

Published: 03 October 2022 · DOI: 10.1145/3559744.3559745

Abstract

Refactoring edits aim to improve structural aspects of a system without changing its external behavior. However, while trying to perform a safe edit, a developer might introduce refactoring faults. To avoid such faults, developers often use test suites to validate refactoring edits; however, depending on the quality of a test suite, its verdict may be misleading. In this work, we first present an empirical study that investigates the effectiveness of test suites (manually created and automatically generated) for detecting Extract Method refactoring faults. We found that manual suites detected 61.9% of the injected faults, while generated suites detected only 46.7% (Randoop) and 55.8% (EvoSuite). We then propose a new approach for evaluating the quality of a test suite with respect to detecting refactoring faults. The approach is implemented in a prototype tool that focuses on two types of Extract Method faults. We demonstrate its applicability in a second empirical study that measured the quality of test suites from three open-source projects.
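To illustrate the kind of fault the study targets, consider a minimal, hypothetical Java/JUnit 5 sketch (not taken from the paper or its tool; the class names, the injected fault, and the tests are illustrative assumptions). An Extract Method edit accidentally drops a branch of the original logic, and only a test that exercises that branch reveals the fault:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Original method: computes a discounted price, never below a 10.0 minimum.
class PricingOriginal {
    double finalPrice(double price, double discount) {
        double discounted = price - price * discount;
        if (discounted < 10.0) {
            discounted = 10.0;
        }
        return discounted;
    }
}

// Faulty Extract Method: the discount computation was extracted into
// applyDiscount, but the minimum-price check was accidentally dropped,
// so external behavior changes for heavily discounted prices.
class PricingRefactored {
    double finalPrice(double price, double discount) {
        return applyDiscount(price, discount);
    }

    private double applyDiscount(double price, double discount) {
        return price - price * discount;
    }
}

// A regression suite only detects the fault if it covers the changed path:
// testRegularDiscount passes on both versions, while testMinimumPrice fails
// on PricingRefactored and thus exposes the refactoring fault.
class PricingTest {
    @Test
    void testRegularDiscount() {
        assertEquals(90.0, new PricingRefactored().finalPrice(100.0, 0.10), 1e-9);
    }

    @Test
    void testMinimumPrice() {
        assertEquals(10.0, new PricingRefactored().finalPrice(12.0, 0.50), 1e-9);
    }
}

Intuitively, a suite containing only testRegularDiscount would give a misleading passing verdict for this faulty edit, which is the kind of weakness the proposed quality evaluation is meant to surface.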

Published In

SAST '22: Proceedings of the 7th Brazilian Symposium on Systematic and Automated Software Testing
October 2022, 78 pages
ISBN: 9781450397537
DOI: 10.1145/3559744
Publisher: Association for Computing Machinery, New York, NY, United States

Author Tags

fault, refactoring, test suite


Conference

SAST 2022
Overall Acceptance Rate: 45 of 92 submissions (49%)
