DOI: 10.1145/1181775.1181806
Article

Carving differential unit test cases from system test cases

Published: 05 November 2006

Abstract

Unit test cases are focused and efficient. System tests are effective at exercising complex usage patterns. Differential unit tests (DUTs) are a hybrid of unit and system tests. They are generated by carving, from the execution of a system test case, the components that influence the behavior of the target unit, and then re-assembling those components so that the unit can be exercised as it was by the system test. We conjecture that DUTs retain some of the advantages of unit tests, can be automatically and inexpensively generated, and have the potential to reveal faults related to intricate system executions. In this paper we present a framework for automatically carving and replaying DUTs that accommodates a wide variety of strategies, we implement an instance of the framework with several techniques to mitigate test cost and enhance flexibility, and we empirically assess the efficacy of carving and replaying DUTs.
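The carving-and-replay idea from the abstract can be sketched concretely. The Java fragment below is a minimal illustration under stated assumptions, not the paper's actual framework: it assumes the carved state can be captured by object serialization at the target method's boundary (the XStream toXML/fromXML calls are the only real API used here) and replayed by reflective re-invocation; the class and method names (CarveReplaySketch, carve, replay) and the snapshot layout are hypothetical.

    // Minimal sketch of serialization-based carving and replay. Hypothetical
    // names throughout; only the XStream and reflection calls are real APIs.
    import com.thoughtworks.xstream.XStream;
    import java.io.*;
    import java.lang.reflect.Method;
    import java.util.Objects;

    public class CarveReplaySketch {
        private static final XStream XSTREAM = new XStream();

        // Carving: while the system test runs, snapshot the receiver and
        // arguments at the target method's entry and the observed result at
        // its exit, serializing them so the unit can later run in isolation.
        public static void carve(File out, Object receiver, String method,
                                 Object[] args, Object observed) throws IOException {
            try (Writer w = new FileWriter(out)) {
                XSTREAM.toXML(new Object[] { receiver, method, args, observed }, w);
            }
        }

        // Replay: rebuild the carved state, re-invoke the target method on
        // the (possibly modified) unit, and compare against the recorded
        // result, which serves as the differential oracle. Returns true if
        // the unit's behavior is unchanged.
        public static boolean replay(File in) throws Exception {
            Object[] s;
            try (Reader r = new FileReader(in)) {
                s = (Object[]) XSTREAM.fromXML(r);
            }
            Object receiver = s[0];
            Object[] args = (Object[]) s[2];
            Class<?>[] types = new Class<?>[args.length];
            for (int i = 0; i < args.length; i++) {
                types[i] = args[i].getClass(); // simplification: exact runtime types
            }
            Method m = receiver.getClass().getMethod((String) s[1], types);
            return Objects.equals(s[3], m.invoke(receiver, args));
        }
    }

A snapshot carved during a system-test run can then be replayed as a standalone JUnit-style check against a modified version of the unit. The paper's framework additionally addresses what this sketch ignores, such as bounding how much reachable state is carved and keeping replay costs low, per the abstract's cost-mitigation techniques.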




Published In

SIGSOFT '06/FSE-14: Proceedings of the 14th ACM SIGSOFT international symposium on Foundations of software engineering
November 2006
298 pages
ISBN: 1595934685
DOI: 10.1145/1181775


Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. automated test generation
  2. carving and replay
  3. regression testing

Qualifiers

  • Article

Conference

SIGSOFT '06/FSE-14

Acceptance Rates

Overall Acceptance Rate 17 of 128 submissions, 13%

Article Metrics

  • Downloads (Last 12 months): 37
  • Downloads (Last 6 weeks): 2
Reflects downloads up to 03 Oct 2024


Cited By

  • (2024) Atlas: Automating Cross-Language Fuzzing on Android Closed-Source Libraries. Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis, pp. 350-362. DOI: 10.1145/3650212.3652133. Online publication date: 11-Sep-2024.
  • (2024) Understandable Test Generation Through Capture/Replay and LLMs. Proceedings of the 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings, pp. 261-263. DOI: 10.1145/3639478.3639789. Online publication date: 14-Apr-2024.
  • (2024) Test Case Generation for Python Libraries Using Dependent Projects' Test-Suites. 2024 IEEE International Conference on Software Analysis, Evolution and Reengineering - Companion (SANER-C), pp. 167-174. DOI: 10.1109/SANER-C62648.2024.00029. Online publication date: 12-Mar-2024.
  • (2024) Detecting semantic conflicts with unit tests. Journal of Systems and Software, 214:112070. DOI: 10.1016/j.jss.2024.112070. Online publication date: Aug-2024.
  • (2023) StubCoder: Automated Generation and Repair of Stub Code for Mock Objects. ACM Transactions on Software Engineering and Methodology, 33(1):1-31. DOI: 10.1145/3617171. Online publication date: 21-Aug-2023.
  • (2023) Generating Understandable Unit Tests through End-to-End Test Scenario Carving. 2023 IEEE 23rd International Working Conference on Source Code Analysis and Manipulation (SCAM), pp. 107-118. DOI: 10.1109/SCAM59687.2023.00021. Online publication date: 2-Oct-2023.
  • (2023) HasBugs - Handpicked Haskell Bugs. 2023 IEEE/ACM 20th International Conference on Mining Software Repositories (MSR), pp. 223-227. DOI: 10.1109/MSR59073.2023.00040. Online publication date: May-2023.
  • (2023) Action-Based Test Carving for Android Apps. 2023 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), pp. 107-116. DOI: 10.1109/ICSTW58534.2023.00032. Online publication date: Apr-2023.
  • (2023) Artisan: An Action-Based Test Carving Tool for Android Apps. 2023 IEEE International Conference on Software Maintenance and Evolution (ICSME), pp. 580-585. DOI: 10.1109/ICSME58846.2023.00076. Online publication date: 1-Oct-2023.
  • (2022) Test mimicry to assess the exploitability of library vulnerabilities. Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis, pp. 276-288. DOI: 10.1145/3533767.3534398. Online publication date: 18-Jul-2022.
