DOI: 10.1145/3557988.3569715

An Automated Design Methodology for Computational SRAM Dedicated to Highly Data-Centric Applications: Invited Paper

Published: 27 January 2023

Abstract

To meet the performance requirements of highly data-centric applications (e.g., edge AI or lattice-based cryptography), Computational SRAM (C-SRAM), a new type of computational memory, was designed as a key element of an emerging computing paradigm called near-memory computing. For this type of application, C-SRAM has been specialized to perform low-latency vector operations in order to limit energy-intensive data transfers with the processor or dedicated processing units. This paper presents a design methodology that aims to make the C-SRAM design flow as simple as possible by automating the configuration of the memory part (e.g., number of SRAM cuts and access ports) according to system constraints (e.g., instruction frequency or memory capacity) and the available off-the-shelf SRAM compilers. To fairly quantify the benefits of the proposed memory selector, it has been evaluated with three CMOS process technologies from two foundries. The results show that this memory selection methodology determines the best memory configuration regardless of the CMOS process technology and the targeted trade-off between area and power consumption. Furthermore, we also show how this methodology can be used to efficiently assess the level of design optimization of the SRAM compilers available in a targeted CMOS process technology.
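The abstract describes automating the choice of an SRAM macro configuration from off-the-shelf compiler offerings under system constraints. Since the paper itself is not reproduced here, the sketch below is only an illustration of that general idea and not the authors' tool: it enumerates a hypothetical macro library (all names and figures are invented placeholders), keeps only candidates that meet the instruction frequency, sizes the number of cuts to reach the required capacity, and picks the option minimizing a weighted area/energy cost.

```python
# Illustrative sketch only (not the paper's actual tool): a brute-force
# "memory selector" that picks an SRAM macro and a number of cuts meeting
# capacity and frequency constraints while minimizing a weighted
# area/energy cost. All macro names and figures below are invented
# placeholders, not real SRAM compiler data.

from dataclasses import dataclass


@dataclass(frozen=True)
class SramMacro:
    name: str        # macro identifier from the (hypothetical) compiler
    words: int       # number of addressable words
    bits: int        # word width in bits
    area_um2: float  # macro area in um^2
    dyn_pj: float    # dynamic energy per access in pJ
    max_mhz: float   # maximum access frequency in MHz


def select_configuration(macros, capacity_bits, freq_mhz, alpha=0.5, max_cuts=16):
    """Return (macro, n_cuts, cost) minimizing alpha*area + (1-alpha)*energy."""
    best = None
    for m in macros:
        if m.max_mhz < freq_mhz:
            continue  # cannot sustain the required instruction frequency
        per_cut_bits = m.words * m.bits
        for n_cuts in range(1, max_cuts + 1):
            if n_cuts * per_cut_bits < capacity_bits:
                continue  # not enough total capacity yet, add another cut
            area = n_cuts * m.area_um2
            energy = m.dyn_pj  # one cut is accessed per vector operation
            cost = alpha * area + (1.0 - alpha) * energy * 1e3  # crude unit scaling
            if best is None or cost < best[2]:
                best = (m, n_cuts, cost)
            break  # adding more cuts only increases area for this macro
    return best


if __name__ == "__main__":
    # Hypothetical macro library standing in for an SRAM compiler's output.
    library = [
        SramMacro("hd_2kx32", 2048, 32, 12000.0, 5.2, 800.0),
        SramMacro("hs_1kx64", 1024, 64, 9500.0, 6.8, 1100.0),
        SramMacro("hp_512x128", 512, 128, 8200.0, 8.9, 1400.0),
    ]
    # Example constraints: 64 KiB of storage accessed at 1 GHz.
    choice = select_configuration(library, capacity_bits=64 * 1024 * 8, freq_mhz=1000.0)
    if choice is None:
        print("no feasible configuration in this library")
    else:
        macro, n_cuts, cost = choice
        print(f"selected {macro.name} x {n_cuts} cuts (cost = {cost:.1f})")
```

The weight alpha stands in for the area-versus-power trade-off mentioned in the abstract; a real flow would replace the toy cost terms with the figures reported by the targeted SRAM compilers.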


Cited By

  • Compute-In-Place Serial FeRAM: Enhancing Performance, Efficiency and Adaptability in Critical Embedded Systems. 2023 IFIP/IEEE 31st International Conference on Very Large Scale Integration (VLSI-SoC), pp. 1-6. Online publication date: 16 October 2023. DOI: 10.1109/VLSI-SoC57769.2023.10321864

Published In

SLIP '22: Proceedings of the 24th ACM/IEEE Workshop on System Level Interconnect Pathfinding
November 2022
46 pages
ISBN: 9781450395366
DOI: 10.1145/3557988
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.


In-Cooperation

  • IEEE CAS
  • IEEE CEDA

Publisher

Association for Computing Machinery

New York, NY, United States


Qualifiers

  • Invited-talk

Conference

ICCAD '22

Acceptance Rates

Overall Acceptance Rate 6 of 8 submissions, 75%


Article Metrics

  • Downloads (last 12 months): 13
  • Downloads (last 6 weeks): 0

Reflects downloads up to 16 Feb 2025

