Open access

Efficient dependency detection for safe Java test acceleration

Published: 30 August 2015

Abstract

Slow builds remain a plague for software developers. The frequency with which code can be built (compiled, tested, and packaged) directly impacts the productivity of developers: longer build times mean a longer wait before determining if a change to the application being built was successful. We have discovered that in the case of some languages, such as Java, the majority of build time is spent running tests, where dependencies between individual tests are complicated to discover, making many existing test acceleration techniques unsound to deploy in practice. Without knowledge of which tests are dependent on others, we cannot safely parallelize the execution of the tests, nor can we perform incremental testing (i.e., execute only a subset of an application's tests for each build). The previous techniques for detecting these dependencies did not scale to large test suites: given a test suite that normally ran in two hours, the best-case running scenario for the previous tool would have taken over 422 CPU days to find dependencies between all test methods (and would not soundly find all dependencies); on the same project, the exhaustive technique (to find all dependencies) would have taken over 10^300 years. We present a novel approach to detecting all dependencies between test cases in large projects that can enable safe exploitation of parallelism and test selection with a modest analysis cost.
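To make the problem concrete, the following is a minimal, hypothetical JUnit 4 sketch (not taken from the paper; the class and test names are invented) of the kind of hidden dependency the approach is meant to detect: two tests coupled only through mutable static state, which pass when the polluting test happens to run first but fail once the suite is reordered, subsetted for incremental testing, or split across parallel workers.

    import static org.junit.Assert.assertEquals;

    import java.util.HashMap;
    import java.util.Map;

    import org.junit.Test;

    // Hypothetical example of an order-dependent test pair. Nothing in the
    // code declares the dependency, so a tool must discover it before the
    // suite can be safely parallelized or run incrementally.
    public class ConfigTest {

        // Shared mutable static state: the hidden channel between the tests.
        static Map<String, String> config = new HashMap<>();

        @Test
        public void testLoadDefaults() {
            config.put("timeout", "30"); // pollutes global state
            assertEquals("30", config.get("timeout"));
        }

        @Test
        public void testTimeoutIsConfigured() {
            // Silently assumes testLoadDefaults already ran in the same JVM.
            // Run this test alone, in a different order, or on a different
            // worker, and it fails even though the code under test is fine.
            assertEquals("30", config.get("timeout"));
        }
    }

The scale mentioned in the abstract follows from simple combinatorics: a suite of n test methods has n! possible execution orders, and 170! already exceeds 10^306, so exhaustively exercising every order of even a few-hundred-test suite is infeasible, which is consistent with the abstract's figure of over 10^300 years for the exhaustive technique.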


Information & Contributors

Published In

ESEC/FSE 2015: Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering
August 2015
1068 pages
ISBN:9781450336758
DOI:10.1145/2786805
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 30 August 2015

Author Tags

  1. Test dependence
  2. detection algorithms
  3. empirical studies

Qualifiers

  • Research-article

Conference

ESEC/FSE'15

Acceptance Rates

Overall Acceptance Rate 112 of 543 submissions, 21%

Bibliometrics & Citations

Article Metrics

  • Downloads (Last 12 months): 192
  • Downloads (Last 6 weeks): 34

Reflects downloads up to 25 Jan 2025

Citations

Cited By
  • (2025) Contrasting test selection, prioritization, and batch testing at scale. Empirical Software Engineering 30:1. DOI: 10.1007/s10664-024-10589-8. Online publication date: 1-Feb-2025.
  • (2024) Efficient Detection of Test Interference in C Projects. Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering, pages 166-178. DOI: 10.1145/3691620.3694995. Online publication date: 27-Oct-2024.
  • (2024) A Mutation-Guided Assessment of Acceleration Approaches for Continuous Integration: An Empirical Study of YourBase. Proceedings of the 21st International Conference on Mining Software Repositories, pages 556-568. DOI: 10.1145/3643991.3644914. Online publication date: 15-Apr-2024.
  • (2024) The Lost World: Characterizing and Detecting Undiscovered Test Smells. ACM Transactions on Software Engineering and Methodology 33:3, pages 1-32. DOI: 10.1145/3631973. Online publication date: 15-Mar-2024.
  • (2024) How Trustworthy Is Your Continuous Integration (CI) Accelerator?: A Comparison of the Trustworthiness of CI Acceleration Products. IEEE Software 41:6, pages 82-90. DOI: 10.1109/MS.2024.3395616. Online publication date: 1-Nov-2024.
  • (2024) A survey of Detecting Flakiness in Automated Test Regression Suite. 2024 21st Learning and Technology Conference (L&T), pages 330-336. DOI: 10.1109/LT60077.2024.10469624. Online publication date: 15-Jan-2024.
  • (2024) Automatically Reproducing Timing-Dependent Flaky-Test Failures. 2024 IEEE Conference on Software Testing, Verification and Validation (ICST), pages 269-280. DOI: 10.1109/ICST60714.2024.00032. Online publication date: 27-May-2024.
  • (2024) Test Code Flakiness in Mobile Apps: The Developer's Perspective. Information and Software Technology 168, article 107394. DOI: 10.1016/j.infsof.2023.107394. Online publication date: Apr-2024.
  • (2023) Accelerating Continuous Integration with Parallel Batch Testing. Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pages 55-67. DOI: 10.1145/3611643.3616255. Online publication date: 30-Nov-2023.
  • (2023) Systematically Producing Test Orders to Detect Order-Dependent Flaky Tests. Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, pages 627-638. DOI: 10.1145/3597926.3598083. Online publication date: 12-Jul-2023.
