DOI: 10.1145/2989225.2989232
Research article

Cross-language compiler benchmarking: are we fast yet?

Published: 01 November 2016

Abstract

Comparing the performance of programming languages is difficult because they differ in many aspects, including preferred programming abstractions, available frameworks, and their runtime systems. Nonetheless, the question of relative performance comes up repeatedly in the research community, in industry, and among a wider audience of enthusiasts.
This paper presents 14 benchmarks and a novel methodology to assess compiler effectiveness across language implementations. Using a common set of language abstractions, the benchmarks are implemented in Java, JavaScript, Ruby, Crystal, Newspeak, and Smalltalk. Using language-agnostic metrics, we show that the benchmarks exhibit a wide range of characteristics. Using four different languages on top of the same compiler, we show that the benchmarks perform similarly and therefore allow for a comparison of compiler effectiveness across languages. Based on anecdotal evidence, we argue that these benchmarks help language implementers to identify performance bugs and optimization potential by comparing to other language implementations.
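To make the cross-language setup concrete, the sketch below illustrates the kind of benchmark structure this methodology implies: each workload is written against a small common core of language abstractions (objects, polymorphic dispatch, arrays, loops) and paired with a result check, so the same code shape can be ported faithfully to every language and a harness can time repeated iterations. This is a minimal sketch in Java under stated assumptions; the names Benchmark, benchmark(), verifyResult(), Sum, and Harness are illustrative only and not necessarily the paper's actual API.

// Minimal sketch of a cross-language benchmark harness (names are illustrative).
abstract class Benchmark {
    // Runs one iteration of the workload and returns its result.
    abstract Object benchmark();

    // Verifies the result so that an incorrectly ported benchmark fails loudly
    // instead of being timed.
    abstract boolean verifyResult(Object result);
}

// A deliberately tiny workload using only abstractions available in all
// target languages: object dispatch, an array, and a loop.
final class Sum extends Benchmark {
    @Override
    Object benchmark() {
        int[] data = new int[10_000];
        for (int i = 0; i < data.length; i++) {
            data[i] = i;
        }
        long sum = 0;
        for (int v : data) {
            sum += v;
        }
        return sum;
    }

    @Override
    boolean verifyResult(Object result) {
        // 0 + 1 + ... + 9999 = 49,995,000
        return (Long) result == 49_995_000L;
    }
}

public final class Harness {
    public static void main(String[] args) {
        int iterations = args.length > 0 ? Integer.parseInt(args[0]) : 100;
        Benchmark bench = new Sum();
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            Object result = bench.benchmark();
            long elapsedUs = (System.nanoTime() - start) / 1_000;
            if (!bench.verifyResult(result)) {
                throw new IllegalStateException("benchmark produced a wrong result");
            }
            System.out.println("Sum: iteration " + (i + 1) + " ran for " + elapsedUs + " us");
        }
    }
}

Run, for example, as java Harness 500 to collect 500 in-process iterations; when the goal is to compare peak, compiled performance, a harness like this would typically discard early iterations as just-in-time compilation warm-up.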



    Published In

    DLS 2016: Proceedings of the 12th Symposium on Dynamic Languages
    November 2016
    131 pages
    ISBN:9781450344456
    DOI:10.1145/2989225

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. Benchmarking
    2. Languages
    3. Virtual Machines

    Qualifiers

    • Research-article

    Funding Sources

    • Austrian Science Fund (FWF)

    Conference

    SPLASH '16

    Acceptance Rates

Overall acceptance rate: 32 of 77 submissions (42%)



    Cited By

• (2024) Interactive Programming for Microcontrollers by Offloading Dynamic Incremental Compilation. Proceedings of the 21st ACM SIGPLAN International Conference on Managed Programming Languages and Runtimes, pp. 28–40. DOI: 10.1145/3679007.3685062 (online 13 Sep 2024)
• (2024) Towards Realistic Results for Instrumentation-Based Profilers for JIT-Compiled Systems. Proceedings of the 21st ACM SIGPLAN International Conference on Managed Programming Languages and Runtimes, pp. 82–89. DOI: 10.1145/3679007.3685058 (online 13 Sep 2024)
• (2023) CHERI Performance Enhancement for a Bytecode Interpreter. Proceedings of the 15th ACM SIGPLAN International Workshop on Virtual Machines and Intermediate Languages, pp. 1–10. DOI: 10.1145/3623507.3623552 (online 18 Oct 2023)
• (2023) AST vs. Bytecode: Interpreters in the Age of Meta-Compilation. Proceedings of the ACM on Programming Languages 7(OOPSLA2), pp. 318–346. DOI: 10.1145/3622808 (online 16 Oct 2023)
• (2023) Don't Trust Your Profiler: An Empirical Study on the Precision and Accuracy of Java Profilers (Poster Abstract). Proceedings of the 20th ACM SIGPLAN International Conference on Managed Programming Languages and Runtimes, pp. 181–182. DOI: 10.1145/3617651.3624307 (online 19 Oct 2023)
• (2023) Don't Trust Your Profiler: An Empirical Study on the Precision and Accuracy of Java Profilers. Proceedings of the 20th ACM SIGPLAN International Conference on Managed Programming Languages and Runtimes, pp. 100–113. DOI: 10.1145/3617651.3622985 (online 19 Oct 2023)
• (2023) Collecting Cyclic Garbage across Foreign Function Interfaces: Who Takes the Last Piece of Cake? Proceedings of the ACM on Programming Languages 7(PLDI), pp. 591–614. DOI: 10.1145/3591244 (online 6 Jun 2023)
• (2023) GTP Benchmarks for Gradual Typing Performance. Proceedings of the 2023 ACM Conference on Reproducibility and Replicability, pp. 102–114. DOI: 10.1145/3589806.3600034 (online 27 Jun 2023)
• (2023) Optimizing the Order of Bytecode Handlers in Interpreters using a Genetic Algorithm. Proceedings of the 38th ACM/SIGAPP Symposium on Applied Computing, pp. 1384–1393. DOI: 10.1145/3555776.3577712 (online 27 Mar 2023)
• (2022) Generating Virtual Machine Code of JavaScript Engine for Embedded Systems. Journal of Information Processing 30, pp. 679–693. DOI: 10.2197/ipsjjip.30.679 (online 2022)
