Since 2006, CASES has served as a flagship forum for researchers and practitioners working at the intersection of the distinct yet overlapping domains of compilers, computer architecture, and hardware synthesis. This year marks its eighteenth edition, held as part of Embedded Systems Week.
Proceeding Downloads
Designing an Edge Inferencing Accelerator Using High-Level Synthesis
Convolutional neural networks are computationally intensive. A single inference can require billions of multiply/accumulate operations. In the datacenter, where ample power, space, and cooling are available, high powered CPUs or GPUs can be used. However,...
Work-in-Progress: Error-Compensation-Based Energy-Efficient MAC Unit for CNNs
Approximate circuits sacrifice accuracy in exchange for energy efficiency and have been widely used in hardware deployments of neural networks (NNs). Since convolution accounts for most of the power consumption in NNs, it is necessary to design an ...
Work-in-Progress: QRCNN: Scalable CNNs
Dropping features/kernels in the convolutional layers of convolutional neural networks is a popular form of structured pruning for reducing computational load, but it comes at the cost of retraining and performance loss. In this work, we propose ...
Work-in-Progress: Towards Evaluating CNNs Against Integrity Attacks on Multi-tenant Computation
We present an infrastructure for evaluating CNN models for vulnerability against a variety of integrity attacks. Our focus is on attacks that corrupt CNN computations with an impact on prediction/classification accuracy. The attack model encompasses a ...
WIP: Automatic DNN Deployment on Heterogeneous Platforms: the GAP9 Case Study
Emerging Artificial-Intelligence-enabled System-on-Chips (AI-SoCs) combine a flexible microcontroller with parallel Digital Signal Processors (DSP) and heterogeneous acceleration capabilities. In this Work-in-Progress paper, we focus on the GAP9 RISC-V ...
Special Session - Non-Volatile Memories: Challenges and Opportunities for Embedded System Architectures with Focus on Machine Learning Applications
- Jörg Henkel,
- Lokesh Siddhu,
- Lars Bauer,
- Jürgen Teich,
- Stefan Wildermann,
- Mehdi Tahoori,
- Mahta Mayahinia,
- Jerónimo Castrillón,
- Asif Ali Khan,
- Hamid Farzaneh,
- João Paulo C. De Lima,
- Jian-Jia Chen,
- Christian Hakert,
- Kuan-Hsun Chen,
- Chia-Lin Yang,
- Hsiang-Yun Cheng
This paper explores the challenges and opportunities of integrating non-volatile memories (NVMs) into embedded systems for machine learning. NVMs offer advantages such as increased memory density, lower power consumption, non-volatility, and compute-in-...
Index Terms
- Proceedings of the International Conference on Compilers, Architecture, and Synthesis for Embedded Systems