DOI: 10.1145/2393596.2393636

CarFast: achieving higher statement coverage faster

Published: 11 November 2012

Abstract

Test coverage is an important metric of software quality, since it indicates how thoroughly an application has been tested. In industry, test coverage is often measured as statement coverage. A fundamental problem of software testing is how to achieve higher statement coverage faster. The problem is difficult because it requires testers to cleverly find input data that steers execution sooner toward sections of application code that contain more statements.
We created a novel, fully automatic approach for aChieving higher stAtement coveRage FASTer (CarFast), which we implemented and evaluated on twelve generated Java applications whose sizes range from 300 LOC to one million LOC. We compared CarFast with several popular test case generation techniques, including pure random testing, adaptive random testing, and Directed Automated Random Testing (DART). Our results indicate with strong statistical significance that when execution time is measured as the number of runs of the application on different input test data, CarFast outperforms the competing approaches on most subject applications.
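The steering idea the abstract describes — preferring inputs that drive execution toward code regions containing more statements — can be illustrated with a minimal, hypothetical sketch. This is not the paper's actual algorithm: the toy `program`, its statement ids, and the greedy candidate-pool heuristic are all invented here for illustration.

```python
import random

def program(x):
    """Toy program under test; returns the set of statement ids it executes."""
    covered = {1}                      # entry statement
    if x % 2 == 0:
        covered |= {2, 3}              # small branch: 2 statements
    else:
        covered |= {4, 5, 6, 7, 8}     # large branch: 5 statements
    if x > 100:
        covered |= {9, 10}
    return covered

def greedy_coverage_run(budget=20, pool_size=10, seed=0):
    """Greedily pick, from each random pool, the input adding the most
    not-yet-covered statements; stop once all 10 statements are covered."""
    rng = random.Random(seed)
    covered = set()
    for _ in range(budget):
        candidates = [rng.randint(0, 200) for _ in range(pool_size)]
        best = max(candidates, key=lambda x: len(program(x) - covered))
        covered |= program(best)
        if len(covered) == 10:         # 10 = total statements in the toy program
            break
    return covered

print(sorted(greedy_coverage_run()))
```

Note that this sketch executes every candidate just to measure its coverage, which a real tool would avoid; it only conveys the bias toward statement-rich regions that the abstract describes, not how CarFast computes that bias.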




Published In

FSE '12: Proceedings of the ACM SIGSOFT 20th International Symposium on the Foundations of Software Engineering
November 2012
494 pages
ISBN: 9781450316149
DOI: 10.1145/2393596

Publisher

Association for Computing Machinery, New York, NY, United States



Author Tags

  1. experimentation
  2. statement coverage
  3. testing

Qualifiers

  • Research-article

Conference

SIGSOFT/FSE'12

Acceptance Rates

Overall acceptance rate: 17 of 128 submissions (13%)

Cited By

  • (2024) Automated Test Case Generation for Path Coverage by Using Multi-Objective Particle Swarm Optimization Algorithm with Reinforcement Learning and Relationship Matrix Strategies. International Journal of Software Engineering and Knowledge Engineering, pages 1-29. DOI: 10.1142/S0218194024500189. Online publication date: 22 May 2024.
  • (2023) AutoLog: A Log Sequence Synthesis Framework for Anomaly Detection. 2023 38th IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 497-509. DOI: 10.1109/ASE56229.2023.00133. Online publication date: 11 September 2023.
  • (2023) Automatic Test Data Generation Symbolic and Concolic Executions. Software Testing Automation, pages 503-542. DOI: 10.1007/978-3-031-22057-9_13. Online publication date: 25 March 2023.
  • (2022) Enhancing Dynamic Symbolic Execution by Automatically Learning Search Heuristics. IEEE Transactions on Software Engineering, 48(9):3640-3663. DOI: 10.1109/TSE.2021.3101870. Online publication date: 1 September 2022.
  • (2022) ConEx: Efficient Exploration of Big-Data System Configurations for Better Performance. IEEE Transactions on Software Engineering, 48(3):893-909. DOI: 10.1109/TSE.2020.3007560. Online publication date: 1 March 2022.
  • (2021) Symbolic value-flow static analysis: deep, precise, complete modeling of Ethereum smart contracts. Proceedings of the ACM on Programming Languages, 5(OOPSLA):1-30. DOI: 10.1145/3485540. Online publication date: 15 October 2021.
  • (2021) Generating Test Cases from Requirements: A Case Study in Railway Control System Domain. 2021 International Symposium on Theoretical Aspects of Software Engineering (TASE), pages 183-190. DOI: 10.1109/TASE52547.2021.00029. Online publication date: August 2021.
  • (2019) Structural Coverage Analysis Methods. Code Generation, Analysis Tools, and Testing for Quality, pages 36-63. DOI: 10.4018/978-1-5225-7455-2.ch002. Online publication date: 2019.
  • (2019) Concolic testing with adaptively changing search heuristics. Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pages 235-245. DOI: 10.1145/3338906.3338964. Online publication date: 12 August 2019.
  • (2019) Efficient concolic testing of MPI applications. Proceedings of the 28th International Conference on Compiler Construction, pages 193-204. DOI: 10.1145/3302516.3307353. Online publication date: 16 February 2019.
