DOI: 10.1145/2771783.2771805

Feedback-controlled random test generation

Published: 13 July 2015
Abstract

    Feedback-directed random test generation is a widely used technique for generating random method sequences; it leverages feedback from previously generated sequences to guide further generation. However, the validity of this feedback guidance has not previously been challenged. In this paper, we investigate the characteristics of feedback-directed random test generation and propose a method that exploits the resulting insight: excessive feedback limits the diversity of the generated tests. First, we show that the feedback loop of the feedback-directed generation algorithm is a positive feedback loop, which amplifies any bias that emerges in the candidate value pool. This over-directs the generation and limits the diversity of the generated tests, so limiting the amount of feedback can improve their diversity and effectiveness. Second, we propose a method named feedback-controlled random test generation, which aggressively controls the feedback in order to promote the diversity of the generated tests. Experiments on eight different real-world application libraries indicate that our method increases branch coverage by 78% to 204% over the original feedback-directed algorithm on large-scale utility libraries.
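    To make the feedback loop concrete, here is a minimal, self-contained Python sketch of the scheme on a toy API. It is an illustration only, not the paper's implementation: BoundedStack, max_feedback, and the diversity proxy at the end are hypothetical stand-ins, and the paper's actual control mechanism (and its branch-coverage evaluation) is more sophisticated. What the sketch does show is the structural point from the abstract: valid sequences are fed back into the pool and reused as building blocks, so any early bias in the pool compounds over iterations unless the feedback is capped.

```python
import random


class BoundedStack:
    """Toy class under test (hypothetical; stands in for a real library API)."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self.items = []

    def push(self, x):
        if len(self.items) >= self.capacity:
            raise OverflowError("stack full")
        self.items.append(x)

    def pop(self):
        if not self.items:
            raise IndexError("empty stack")
        return self.items.pop()


def execute(sequence):
    """Replay a method sequence on a fresh object; True if no call raises."""
    stack = BoundedStack()
    try:
        for op, arg in sequence:
            if op == "push":
                stack.push(arg)
            else:
                stack.pop()
        return True
    except Exception:
        return False


def generate(iterations, max_feedback, seed=0):
    """Feedback-directed generation: extend a randomly chosen pooled sequence
    by one call, and feed valid results back into the pool. `max_feedback`
    caps how many sequences are ever fed back; an effectively unbounded cap
    reproduces the uncontrolled positive feedback loop."""
    rng = random.Random(seed)
    pool = [[]]       # seed pool: just the empty sequence
    generated = []
    fed_back = 0
    for _ in range(iterations):
        # Sequences that dominate the pool are chosen (and extended) more
        # often, which is exactly how bias in the pool amplifies itself.
        base = rng.choice(pool)
        op = rng.choice(["push", "pop"])
        arg = rng.randint(0, 9) if op == "push" else None
        candidate = base + [(op, arg)]
        generated.append(candidate)
        if execute(candidate) and fed_back < max_feedback:
            pool.append(candidate)   # the feedback step
            fed_back += 1
    return generated


if __name__ == "__main__":
    # Crude diversity proxy (distinct sequences); the paper measures
    # branch coverage on real libraries instead.
    for cap in (10**9, 50):
        tests = generate(2000, max_feedback=cap)
        print(f"cap={cap}: {len({tuple(t) for t in tests})} distinct of {len(tests)}")
```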




    Published In

    ISSTA 2015: Proceedings of the 2015 International Symposium on Software Testing and Analysis
    July 2015
    447 pages
    ISBN: 9781450336208
    DOI: 10.1145/2771783
    • General Chair: Michal Young
    • Program Chair: Tao Xie


    Publisher

    Association for Computing Machinery, New York, NY, United States


    Badges

    • Best Artifact

    Author Tags

    1. Diversity
    2. Random testing
    3. Test generation

    Qualifiers

    • Research-article

    Conference

    ISSTA '15

    Acceptance Rates

    Overall acceptance rate: 58 of 213 submissions (27%)



    Cited By

    • (2022) Testing Self-Adaptive Software With Probabilistic Guarantees on Performance Metrics: Extended and Comparative Results. IEEE Transactions on Software Engineering, 48(9):3554-3572. DOI: 10.1109/TSE.2021.3101130. Online publication date: 1-Sep-2022.
    • (2022) Baton: Symphony of Random Testing and Concolic Testing through Machine Learning and Taint Analysis. Science China Information Sciences, 66(3). DOI: 10.1007/s11432-020-3403-2. Online publication date: 11-Nov-2022.
    • (2021) Confuzzion: A Java Virtual Machine Fuzzer for Type Confusion Vulnerabilities. 2021 IEEE 21st International Conference on Software Quality, Reliability and Security (QRS), pages 586-597. DOI: 10.1109/QRS54544.2021.00069. Online publication date: Dec-2021.
    • (2021) EvoSpex: An Evolutionary Algorithm for Learning Postconditions. 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE), pages 1223-1235. DOI: 10.1109/ICSE43902.2021.00112. Online publication date: May-2021.
    • (2020) Using Relative Lines of Code to Guide Automated Test Generation for Python. ACM Transactions on Software Engineering and Methodology, 29(4):1-38. DOI: 10.1145/3408896. Online publication date: 26-Sep-2020.
    • (2020) Testing Self-Adaptive Software with Probabilistic Guarantees on Performance Metrics. Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE), pages 1002-1014. DOI: 10.1145/3368089.3409685. Online publication date: 8-Nov-2020.
    • (2018) Fuzzing: State of the Art. IEEE Transactions on Reliability, 67(3):1199-1218. DOI: 10.1109/TR.2018.2834476. Online publication date: Sep-2018.
    • (2018) User Feedback-Based Test Suite Management: A Research Framework. Proceedings of the First International Conference on Smart System, Innovations and Computing, pages 825-830. DOI: 10.1007/978-981-10-5828-8_78. Online publication date: 9-Jan-2018.
    • (2017) Generating Tests by Example. Verification, Model Checking, and Abstract Interpretation (VMCAI), pages 406-429. DOI: 10.1007/978-3-319-73721-8_19. Online publication date: 29-Dec-2017.
    • (2016) A Framework for Test Data Generation of Object-Oriented Programs Based on Complete Testing Chain. 2016 17th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), pages 391-397. DOI: 10.1109/SNPD.2016.7515930. Online publication date: May-2016.
