DOI: 10.1145/3578360
CC 2023: Proceedings of the 32nd ACM SIGPLAN International Conference on Compiler Construction
ACM 2023 Proceeding
Publisher:
  • Association for Computing Machinery, New York, NY, United States
Conference:
CC '23: 32nd ACM SIGPLAN International Conference on Compiler Construction, Montréal, QC, Canada, February 25–26, 2023
ISBN:
979-8-4007-0088-0
Published:
17 February 2023
Sponsors:
  • SIGPLAN
Abstract

Welcome to the 32nd ACM SIGPLAN International Conference on Compiler Construction (CC 2023), held in Montréal, Québec, Canada over February 25–26, 2023. As in the previous eight years, CC is held jointly with the International Symposium on Code Generation and Optimization (CGO), the Symposium on Principles and Practice of Parallel Programming (PPoPP), and the International Symposium on High-Performance Computer Architecture (HPCA). Colocation of these four conferences creates an exciting opportunity for a broad range of researchers in the areas of compilation, optimization, parallelism, and computer architecture to interact and explore collaborative research opportunities.

SESSION: Vector and Parallelism
Java Vector API: Benchmarking and Performance Analysis

The Java Vector API is a new module introduced in Java 16, allowing developers to concisely express vector computations. The API promises both high performance, achieved via the runtime compilation of vector operations to hardware vector instructions, ...
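
As a minimal, hedged illustration of the API this abstract refers to (an assumed example, not one of the paper's benchmarks), the sketch below expresses a SAXPY kernel with the incubating jdk.incubator.vector module; it needs JDK 16 or later with --add-modules jdk.incubator.vector.

    import jdk.incubator.vector.FloatVector;
    import jdk.incubator.vector.VectorSpecies;

    public class VectorApiSketch {
        private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

        // y[i] = a * x[i] + y[i], processed SPECIES.length() lanes at a time.
        static void saxpy(float a, float[] x, float[] y) {
            int i = 0;
            int upper = SPECIES.loopBound(x.length);
            for (; i < upper; i += SPECIES.length()) {
                FloatVector vx = FloatVector.fromArray(SPECIES, x, i);
                FloatVector vy = FloatVector.fromArray(SPECIES, y, i);
                vx.mul(a).add(vy).intoArray(y, i);
            }
            for (; i < x.length; i++) {   // scalar tail for the leftover elements
                y[i] = a * x[i] + y[i];
            }
        }
    }

When the runtime compiler kicks in, the loop body is typically lowered to the platform's SIMD instructions, which is the performance promise the paper evaluates.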

research-article
Compiling Discrete Probabilistic Programs for Vectorized Exact Inference

Probabilistic programming languages (PPLs) are essential for reasoning under uncertainty. Even though many real-world probabilistic programs involve discrete distributions, the state-of-the-art PPLs are suboptimal for a large class of tasks dealing ...

research-article
Open Access
A Multi-threaded Fast Hardware Compiler for HDLs

A set of new Hardware Description Languages (HDLs) is emerging to ease hardware design. HDL compilation time is a major bottleneck in the designer’s productivity. Moreover, as the HDLs are developed independently, the possibility to share ...

SESSION: Scheduling and Tuning
Efficiently Learning Locality Optimizations by Decomposing Transformation Domains

Optimizing compilers for efficient machine learning are more important than ever due to the rising ubiquity of the application domain in numerous facets of life. Predictive model-guided compiler optimization is sometimes used to derive sequences of ...

A Deep Learning Model for Loop Interchange

Loop interchange is an important code optimization that improves data locality and extracts parallelism. While previous research in compilers has tried to automate the selection of which loops to interchange, existing methods have an important ...
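
As a hand-written sketch of the transformation this paper automates (the paper's contribution is the model that decides when to interchange, not the rewrite itself), the two Java methods below traverse the same 2D array before and after interchanging the loops; the interchanged version walks each row contiguously and so has better spatial locality.

    // Before interchange: the inner loop varies the row index, so consecutive
    // accesses jump between rows and hit a different cache line almost every time.
    static void beforeInterchange(double[][] a) {
        for (int j = 0; j < a[0].length; j++)
            for (int i = 0; i < a.length; i++)
                a[i][j] *= 2.0;
    }

    // After interchange: the inner loop varies the column index, so consecutive
    // accesses walk one row with unit stride and reuse cache lines.
    static void afterInterchange(double[][] a) {
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[0].length; j++)
                a[i][j] *= 2.0;
    }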

(De/Re)-Compositions Expressed Systematically via MDH-Based Schedules

We introduce a new scheduling language, based on the formalism of Multi-Dimensional Homomorphisms (MDH). In contrast to existing scheduling languages, our MDH-based language is designed to systematically "de-compose" computations for the memory and core ...

SESSION: Code Generation and Synthesis
research-article
Public Access
A Sound and Complete Algorithm for Code Generation in Distance-Based ISA

The single-thread performance of a processor core is essential even in the multicore era. However, increasing the processing width of a core to improve the single-thread performance leads to a super-linear increase in power consumption. To overcome ...

Matching Linear Algebra and Tensor Code to Specialized Hardware Accelerators

Dedicated tensor accelerators demonstrate the importance of linear algebra in modern applications. Such accelerators have the potential for impressive performance gains, but require programmers to rewrite code using vendor APIs - a barrier to wider ...

research-article
Open Access
Torchy: A Tracing JIT Compiler for PyTorch

Machine learning (ML) models keep getting larger and more complex. Whereas models used to be represented by static data-flow graphs, they are now implemented via arbitrary Python code. Eager-mode frameworks, such as PyTorch, are now the ...

SESSION: Backend
A Symbolic Emulator for Shuffle Synthesis on the NVIDIA PTX Code

Various kinds of applications take advantage of GPUs through automation tools that attempt to automatically exploit the available performance of the GPU's parallel architecture. Directive-based programming models, such as OpenACC, are one such method ...

Register Allocation for Compressed ISAs in LLVM

We present an adaptation to the LLVM greedy register allocator to improve code density for compressed RISC ISAs.

Many RISC architectures have extensions defining smaller encodings for common instructions, typically 16 rather than 32 bits wide. However,...

RL4ReAl: Reinforcement Learning for Register Allocation

We aim to automate decades of research and experience in register allocation, leveraging machine learning. We tackle this problem by embedding a multi-agent reinforcement learning algorithm within LLVM, training it with state-of-the-art techniques. ...

SESSION: Code Size and Bugs
research-article
Open Access
Automatically Localizing Dynamic Code Generation Bugs in JIT Compiler Back-End

Just-in-Time (JIT) compilers are ubiquitous in modern computing systems and are used in a wide variety of software. Dynamic code generation bugs, where the JIT compiler silently emits incorrect code, can result in exploitable vulnerabilities. They,...

HyBF: A Hybrid Branch Fusion Strategy for Code Size Reduction

Binary code size is a first-class design consideration in many computing domains and a critical factor in many more, but compiler optimizations targeting code size are few and often limited in functionality. When size reduction opportunities are left ...

research-article
Linker Code Size Optimization for Native Mobile Applications

Modern mobile applications have grown rapidly in binary size, which restricts user growth and hinders updates for existing users. Thus, reducing the binary size is important for application developers. Recent studies have shown the possibility of ...

SESSION: Domain Specific Languages
research-article
Building a Compiled Query Engine in Python

The simplicity of Python and its rich set of libraries have made it the most popular language for data science. Moreover, the interpreted nature of Python offers an easy debugging experience for developers. However, it comes at the price of poor ...

research-article
Open Access
Codon: A Compiler for High-Performance Pythonic Applications and DSLs

Domain-specific languages (DSLs) are able to provide intuitive high-level abstractions that are easy to work with while attaining better performance than general-purpose languages. Yet, implementing new DSLs is a burdensome task. As a result, new DSLs ...

MOD2IR: High-Performance Code Generation for a Biophysically Detailed Neuronal Simulation DSL

Advances in computational capabilities and large volumes of experimental data have established computer simulations of brain tissue models as an important pillar in modern neuroscience. Alongside, a variety of domain specific languages (DSLs) have been ...

SESSION: Optimizations
A Hotspot-Driven Semi-automated Competitive Analysis Framework for Identifying Compiler Key Optimizations

High-performance compilers play an important role in improving the run-time performance of a program, and it is hard and time-consuming to identify, with traditional program analysis, the key optimizations implemented in a high-performance compiler. In ...

LAGrad: Statically Optimized Differentiable Programming in MLIR

Automatic differentiation (AD) is a central algorithm in deep learning and the emerging field of differentiable programming. However, the performance of AD remains a significant bottleneck in these fields. Training large models requires repeatedly ...

research-article
Lazy Evaluation for the Lazy: Automatically Transforming Call-by-Value into Call-by-Need

This paper introduces lazification, a code transformation technique that replaces strict with lazy evaluation of function parameters whenever such modification is deemed profitable. The transformation is designed for an imperative, low-level program ...
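
The paper's lazification operates automatically on a low-level imperative representation; purely as a hand-written sketch of the underlying idea (with assumed names, not the paper's API), the Java code below turns a strictly evaluated parameter into a memoized thunk, so the expensive argument is computed at most once and only if the callee actually needs it.

    import java.util.function.Supplier;

    class LazificationSketch {
        // Call-by-value: the caller always evaluates 'expensive', even when 'flag' is true.
        static int strictChoose(boolean flag, int cheap, int expensive) {
            return flag ? cheap : expensive;
        }

        // Call-by-need: the parameter becomes a thunk that is forced only on the path that uses it.
        static int lazyChoose(boolean flag, int cheap, Supplier<Integer> expensive) {
            return flag ? cheap : expensive.get();
        }

        // Memoization turns call-by-name into call-by-need: the thunk runs at most once.
        static Supplier<Integer> memoize(Supplier<Integer> thunk) {
            return new Supplier<Integer>() {
                private Integer cached;
                public Integer get() {
                    if (cached == null) cached = thunk.get();
                    return cached;
                }
            };
        }
    }

A call such as lazyChoose(true, 1, memoize(() -> slowComputation())) never runs the (hypothetical) slowComputation(), which is the kind of saved work that makes such a rewrite profitable.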

Contributors
  • McGill University
  • University of Waterloo

Index Terms

  1. Proceedings of the 32nd ACM SIGPLAN International Conference on Compiler Construction
