DOI: 10.1145/3092703.3098220 (short paper, ISSTA conference proceedings)

A suite of tools for making effective use of automatically generated tests

Published: 10 July 2017

Abstract

Automated test generation tools (we hope) produce failing tests from time to time. In a world of fault-free code this would not be true, but in such a world we would not need automated test generation tools. Failing tests are generally the most valuable products of the testing process, and users need tools that extract their full value. This paper describes the tools provided by the TSTL testing language for making use of tests (not limited to failing tests). In addition to the usual tools for simple delta-debugging and executing tests as regressions, TSTL provides tools for 1) minimizing tests by criteria other than failure, such as code coverage, 2) normalizing tests to achieve greater reduction and canonicalization than delta-debugging alone provides, 3) generalizing tests to describe the neighborhood of similar tests that fail in the same fashion, and 4) avoiding slippage, where delta-debugging causes a failing test to change its underlying fault. These capabilities are available both through easy-to-use command-line tools and via a powerful API that supports more complex custom test manipulations.
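The reduction-by-arbitrary-criteria idea in point 1 can be illustrated with a minimal sketch of Zeller-style greedy delta-debugging. This is not TSTL's actual implementation, just the underlying algorithm: `pred` stands in for whatever property must be preserved, so passing a failure check gives classic failure-preserving reduction, while passing a coverage-preserving or failure-signature-preserving predicate gives minimization by coverage or slippage-resistant reduction instead. All names here are illustrative, not TSTL API.

```python
def ddmin(test, pred):
    """Greedily shrink `test` (a list of steps) while pred(test) stays True.

    `pred` is any preserved property: "the test fails", "coverage is
    unchanged", or "the failure signature is unchanged" (to avoid slippage).
    """
    n = 2  # number of chunks to try removing
    while len(test) >= 2:
        chunk = max(1, len(test) // n)
        reduced = False
        for i in range(0, len(test), chunk):
            # Candidate: the test with one chunk of steps deleted.
            candidate = test[:i] + test[i + chunk:]
            if pred(candidate):
                test, n, reduced = candidate, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(test):
                break          # already at single-step granularity
            n = min(len(test), n * 2)  # retry with finer granularity
    return test

# Toy example: the "failure" needs steps 3 and 7 to both be present.
failing = list(range(10))
print(ddmin(failing, lambda t: 3 in t and 7 in t))  # -> [3, 7]
```

The same skeleton explains why slippage happens: if `pred` is merely "some failure occurs", a candidate can pass the check while triggering a different fault, which is why TSTL-style tools let the predicate pin down the specific failure being reduced.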

Contents

Abstract
1 Introduction
2 A Brief TSTL Primer
3 The Basic TSTL Test Tools
4 Avoiding Slippage
5 API Access to Tool Functionality
6 Related Work
7 Conclusions and Future Work
References


Published In

ISSTA 2017: Proceedings of the 26th ACM SIGSOFT International Symposium on Software Testing and Analysis
July 2017, 447 pages
ISBN: 978-1-4503-5076-1
DOI: 10.1145/3092703
General Chair: Tevfik Bultan; Program Chair: Koushik Sen

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. generalization
  2. normalization
  3. semantic simplification
  4. slippage
  5. test reduction

Qualifiers

  • Short-paper

Conference

ISSTA '17

Acceptance Rates

Overall Acceptance Rate: 58 of 213 submissions, 27%
