DOI: 10.1145/3578245.3585025
Open access

GraalVM Compiler Benchmark Results Dataset (Data Artifact)

Published: 15 April 2023
    Abstract

    Systematic testing of software performance during development is a persistent challenge, made increasingly important because mass software deployment magnifies even small savings. In practice, such systematic performance evaluation requires an efficient and reliable measurement procedure integrated into the development environment, coupled with automated evaluation of the measurement results and compact reporting of detected performance anomalies.
    A realistic evaluation of research contributions to systematic software performance testing benefits from measurement data collected during long-term development activities in a well-documented context. This paper presents a data artifact that aggregates more than 70 machine-years of performance measurements gathered over 7 years of development of the GraalVM Compiler Project, aiming to reduce the cost of evaluating research contributions in this and similar contexts.
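    To illustrate the automated evaluation step described above, the following minimal sketch compares per-iteration execution times of one benchmark between two compiler versions using a nonparametric test. The file layout, the "time_ms" column name, and the thresholds are assumptions made for this example; they do not describe the actual structure of the published artifact.

        # Hypothetical sketch: compare one benchmark across two compiler versions.
        # File paths and the "time_ms" column are illustrative assumptions,
        # not the format of the published dataset.
        import pandas as pd
        from scipy.stats import mannwhitneyu

        old = pd.read_csv("measurements/version_A/scala-kmeans.csv")
        new = pd.read_csv("measurements/version_B/scala-kmeans.csv")

        # Nonparametric test, since benchmark timings are rarely normally distributed.
        _, p_value = mannwhitneyu(old["time_ms"], new["time_ms"])
        change = new["time_ms"].median() / old["time_ms"].median() - 1.0

        # Report only changes that are both statistically and practically noticeable.
        if p_value < 0.01 and abs(change) > 0.05:
            print(f"Possible performance change: {change:+.1%} (p = {p_value:.3g})")
        else:
            print("No notable change detected.")

    A real evaluation pipeline would iterate over all benchmarks and platforms in the dataset and would need to control for multiple comparisons.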

    Published In

    ICPE '23 Companion: Companion of the 2023 ACM/SPEC International Conference on Performance Engineering
    April 2023
    421 pages
    ISBN: 9798400700729
    DOI: 10.1145/3578245
    This work is licensed under a Creative Commons Attribution 4.0 International License.

    Publisher

    Association for Computing Machinery, New York, NY, United States

    Author Tags

    1. benchmark
    2. compiler
    3. dataset

    Qualifiers

    • Short paper

    Conference

    ICPE '23

    Acceptance Rates

    Overall acceptance rate: 252 of 851 submissions (30%)
