DOI: 10.1145/2816707.2816718

Tracking down performance variation against source code evolution

Published: 21 October 2015

Abstract

Little is known about how software performance evolves across software revisions. This situation is serious because (i) most performance variations appear to be introduced accidentally, and (ii) addressing a performance regression is challenging, especially when functional code has since been stacked on top of it. This paper reports an empirical study of the performance evolution of 19 applications, totaling over 19 MLOC; running our 49 benchmarks took 52 days. By relating performance variation to source code revisions, we found that (i) 1 out of every 3 application revisions introduces a performance variation, (ii) performance variations can be classified into 9 patterns, and (iii) the most prominent cause of performance regression involves loops and collections. We carefully describe the patterns we identified and detail how we addressed the numerous challenges we faced in completing our experiment.
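
The abstract's finding that loops over collections are the most prominent cause of performance regression is easy to picture with a small, self-contained sketch. The example below is not taken from the paper; the function names, data sizes, and timing harness are hypothetical and only illustrate the kind of revision-to-revision slowdown that per-revision benchmarking can surface: a change that replaces a set-based membership test with a list-based one inside a loop, turning linear work into quadratic work.

```python
# Illustrative only -- not code from the paper. Two hypothetical revisions of
# the same routine show a "loops and collections" regression: revision 2 tests
# membership against a list inside a loop, turning O(n) work into O(n^2).
import time


def unique_rev1(items):
    """Revision 1: O(1) membership checks against a set."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result


def unique_rev2(items):
    """Revision 2: an innocent-looking 'cleanup' that checks membership
    against the result list itself -- O(n) per check, O(n^2) overall."""
    result = []
    for item in items:
        if item not in result:
            result.append(item)
    return result


def best_of(fn, data, runs=5):
    """Minimal benchmark harness: best wall-clock time over `runs` runs."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        fn(data)
        best = min(best, time.perf_counter() - start)
    return best


if __name__ == "__main__":
    data = list(range(5_000)) * 2  # 10,000 elements, half of them duplicates
    for revision in (unique_rev1, unique_rev2):
        print(f"{revision.__name__}: {best_of(revision, data):.4f} s")
```

Run once per revision, a harness of this kind is enough to flag the slowdown; attributing it to the offending source change is the harder problem the paper addresses.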




Published In

DLS 2015: Proceedings of the 11th Symposium on Dynamic Languages
October 2015
176 pages
ISBN: 9781450336901
DOI: 10.1145/2816707
  • Also published in: ACM SIGPLAN Notices, Volume 51, Issue 2 (DLS '15)
    February 2016, 176 pages
    ISSN: 0362-1340
    EISSN: 1558-1160
    DOI: 10.1145/2936313
    Editor: Andy Gill
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. performance analysis
  2. performance evolution
  3. performance variation

Qualifiers

  • Research-article

Conference

SPLASH '15

Acceptance Rates

Overall acceptance rate: 32 of 77 submissions (42%)

Bibliometrics & Citations

Article Metrics

  • Downloads (last 12 months): 9
  • Downloads (last 6 weeks): 2
Reflects downloads up to 25 Jan 2025


Cited By

  • (2022) Automated Identification of Performance Changes at Code Level. 2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS), pages 916–925. DOI: 10.1109/QRS57517.2022.00096. Online publication date: Dec 2022.
  • (2021) Predicting unstable software benchmarks using static source code features. Empirical Software Engineering, 26(6). DOI: 10.1007/s10664-021-09996-y. Online publication date: 18 Aug 2021.
  • (2021) How Do Developers Use the Java Stream API? Computational Science and Its Applications – ICCSA 2021, pages 323–335. DOI: 10.1007/978-3-030-87007-2_23. Online publication date: 11 Sep 2021.
  • (2020) Using black-box performance models to detect performance regressions under varying workloads: an empirical study. Empirical Software Engineering. DOI: 10.1007/s10664-020-09866-z. Online publication date: 28 Aug 2020.
  • (2019) Enhancing Commit Graphs with Visual Runtime Clues. 2019 Working Conference on Software Visualization (VISSOFT), pages 28–32. DOI: 10.1109/VISSOFT.2019.00012. Online publication date: Sep 2019.
  • (2019) Analyzing performance-aware code changes in software development process. Proceedings of the 27th International Conference on Program Comprehension, pages 300–310. DOI: 10.1109/ICPC.2019.00049. Online publication date: 25 May 2019.
  • (2018) An evaluation of open-source software microbenchmark suites for continuous performance assessment. Proceedings of the 15th International Conference on Mining Software Repositories, pages 119–130. DOI: 10.1145/3196398.3196407. Online publication date: 28 May 2018.
  • (2018) How to Detect Performance Changes in Software History. Companion of the 2018 ACM/SPEC International Conference on Performance Engineering, pages 183–188. DOI: 10.1145/3185768.3186404. Online publication date: 2 Apr 2018.
  • (2017) An Exploratory Study of the State of Practice of Performance Testing in Java-Based Open Source Projects. Proceedings of the 8th ACM/SPEC International Conference on Performance Engineering, pages 373–384. DOI: 10.1145/3030207.3030213. Online publication date: 17 Apr 2017.
  • (2016) Dynamically Composing Collection Operations through Collection Promises. Proceedings of the 11th edition of the International Workshop on Smalltalk Technologies, pages 1–5. DOI: 10.1145/2991041.2991049. Online publication date: 23 Aug 2016.
