DOI: 10.1145/2642937.2642986

Automated unit test generation for classes with environment dependencies

Published: 15 September 2014
Abstract

    Automated test generation for object-oriented software typically consists of producing sequences of calls aiming at high code coverage. In practice, the success of this process may be inhibited when classes interact with their environment, such as the file system, the network, or user interactions. This leads to two major problems: First, code that depends on the environment sometimes cannot be fully covered simply by generating sequences of calls to a class under test, for example when execution of a branch depends on the contents of a file. Second, even if environment-dependent code can be covered, the resulting tests may be unstable, i.e., they pass when first generated but may fail when executed in a different environment. For example, tests on classes that make use of the system time may have failing assertions if the tests are executed at a different time than when they were generated.
    In this paper, we apply bytecode instrumentation to automatically separate code from its environmental dependencies, and we extend the EvoSuite Java test generation tool so that it can explicitly set the state of the environment as part of the sequences of calls it generates. Using a prototype implementation, which handles a wide range of environmental interactions such as the file system, console inputs, and many non-deterministic functions of the Java virtual machine (JVM), we performed experiments on 100 Java projects randomly selected from SourceForge (the SF100 corpus). The results show significantly improved code coverage, in some cases with gains on the order of +80% to +90%. Furthermore, our techniques reduce the number of unstable tests by more than 50%.



    Published In

    cover image ACM Conferences
    ASE '14: Proceedings of the 29th ACM/IEEE International Conference on Automated Software Engineering
    September 2014
    934 pages
    ISBN:9781450330138
    DOI:10.1145/2642937
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Sponsors

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 15 September 2014

    Permissions

    Request permissions for this article.

    Check for updates

    Author Tags

    1. automated test generation
    2. environment
    3. unit testing

    Qualifiers

    • Research-article

    Funding Sources

    Conference

    ASE '14
    Sponsor:

    Acceptance Rates

    ASE '14 Paper Acceptance Rate 82 of 337 submissions, 24%;
    Overall Acceptance Rate 82 of 337 submissions, 24%


    Cited By

    • (2023) StubCoder: Automated Generation and Repair of Stub Code for Mock Objects. ACM Transactions on Software Engineering and Methodology 33(1):1-31, 21 Aug 2023. DOI: 10.1145/3617171
    • (2023) Outside the Sandbox: A Study of Input/Output Methods in Java. Proceedings of the 27th International Conference on Evaluation and Assessment in Software Engineering, 253-258, 14 Jun 2023. DOI: 10.1145/3593434.3593501
    • (2023) Effective Concurrency Testing for Go via Directional Primitive-Constrained Interleaving Exploration. 38th IEEE/ACM International Conference on Automated Software Engineering (ASE), 1364-1376, 11 Sep 2023. DOI: 10.1109/ASE56229.2023.00086
    • (2023) Classroom Practice with Learning Support System for Program Design Using Mock Technique Based on Testability. SN Computer Science 4(5), 17 Aug 2023. DOI: 10.1007/s42979-023-02096-2
    • (2023) Search-Based Mock Generation of External Web Service Interactions. Search-Based Software Engineering, 52-66, 4 Dec 2023. DOI: 10.1007/978-3-031-48796-5_4
    • (2022) Evolution-aware detection of order-dependent flaky tests. Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis, 114-125, 18 Jul 2022. DOI: 10.1145/3533767.3534404
    • (2022) Record and replay of online traffic for microservices with automatic mocking point identification. Proceedings of the 44th International Conference on Software Engineering: Software Engineering in Practice, 221-230, 21 May 2022. DOI: 10.1145/3510457.3513029
    • (2022) Repairing order-dependent flaky tests via test generation. Proceedings of the 44th International Conference on Software Engineering, 1881-1892, 21 May 2022. DOI: 10.1145/3510003.3510173
    • (2022) Testing Self-Adaptive Software With Probabilistic Guarantees on Performance Metrics: Extended and Comparative Results. IEEE Transactions on Software Engineering 48(9):3554-3572, 1 Sep 2022. DOI: 10.1109/TSE.2021.3101130
    • (2022) Will Dependency Conflicts Affect My Program's Semantics? IEEE Transactions on Software Engineering 48(7):2295-2316, 1 Jul 2022. DOI: 10.1109/TSE.2021.3057767
