DOI: 10.1145/3663529.3663817
research-article
Open access

ExLi: An Inline-Test Generation Tool for Java

Published: 10 July 2024

Abstract

We present ExLi, a tool that automatically generates inline tests, which were recently proposed for statement-level code validation. ExLi is the first tool to support retrofitting inline tests onto existing codebases, with the goal of increasing adoption of this type of test. ExLi first extracts inline tests from unit tests that validate the methods enclosing the target statement under test. Then, ExLi uses a coverage-then-mutants-based approach to minimize the set of initially generated inline tests while preserving their fault-detection capability. ExLi works for Java, and we use it to generate inline tests for 645 target statements in 31 open-source projects. ExLi reduces the 27,415 initially generated inline tests to 873. ExLi also improves the fault-detection capability of the unit test suites from which the inline tests are generated: the final set of inline tests kills up to 24.4% more mutants on target statements than developer-written and automatically generated unit tests. ExLi is open-sourced at https://github.com/EngineeringSoftware/exli, and a video demo is available at https://youtu.be/qaEB4qDeds4.
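To make the abstract concrete, the sketch below illustrates the idea of an inline test: a statement-level check placed immediately after the statement it validates, pairing a concrete input with its expected output (the kind of pair ExLi extracts from unit-test executions). The `inlineCheck` helper here is a hypothetical stand-in written for this sketch; the actual ITest/ExLi framework instruments the enclosing statement rather than taking a lambda.

```java
import java.util.function.Function;

public class InlineTestSketch {
    // Hypothetical inline-test helper (illustrative only, not the real ITest API):
    // re-runs the target statement's logic on a given input and checks the result.
    static <T, R> void inlineCheck(T given, Function<T, R> statement, R expected) {
        R actual = statement.apply(given);
        if (!actual.equals(expected)) {
            throw new AssertionError("inline test failed: " + actual + " != " + expected);
        }
    }

    static String domainOf(String email) {
        // Target statement under test: extract the domain part of an email address.
        String domain = email.substring(email.indexOf('@') + 1);
        // Inline test placed right next to the statement it validates,
        // with a concrete input/output pair.
        inlineCheck("user@example.com",
                    e -> e.substring(e.indexOf('@') + 1),
                    "example.com");
        return domain;
    }

    public static void main(String[] args) {
        System.out.println(domainOf("dev@acm.org")); // prints acm.org
    }
}
```

The minimization step described in the abstract would then keep only the inline tests whose input/output pairs add coverage or kill additional mutants on the target statement.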



Published In

FSE 2024: Companion Proceedings of the 32nd ACM International Conference on the Foundations of Software Engineering
July 2024
715 pages
ISBN:9798400706585
DOI:10.1145/3663529
This work is licensed under a Creative Commons Attribution-NoDerivs International 4.0 License.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Inline tests
  2. automatic test generation
  3. test carving
  4. unit tests

Qualifiers

  • Research-article

Funding Sources

  • Intel Rising Star Faculty Award
  • Google Cyber NYC Institutional Research Award
  • US National Science Foundation

Conference

FSE '24
Acceptance Rates

Overall Acceptance Rate 112 of 543 submissions, 21%
