DOI: 10.1145/3583781.3590241
Research article · Open access

Examining the Role and Limits of Batchnorm Optimization to Mitigate Diverse Hardware-noise in In-memory Computing

Published: 05 June 2023

Abstract

In-Memory Computing (IMC) platforms such as analog crossbars are gaining attention because they accelerate low-precision Deep Neural Networks (DNNs) with high area and compute efficiency. However, the intrinsic non-idealities in crossbars, which are often non-deterministic and non-linear, degrade the performance of the deployed DNNs. Beyond quantization errors, the non-idealities most frequently encountered during inference include crossbar circuit-level parasitic resistances and device-level non-idealities such as stochastic read noise and temporal drift. In this work, our goal is to closely examine the distortions these non-idealities cause in the dot-product operations of analog crossbars, and to explore the feasibility of a nearly training-less solution: crossbar-aware fine-tuning of batchnorm parameters in real time to mitigate their impact. This reduces the hardware cost, in terms of memory and training energy, of IMC noise-aware retraining of the DNN weights on crossbars.
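As a rough illustration of the batchnorm-adaptation idea in the abstract, the following NumPy sketch models crossbar read noise as additive Gaussian noise on a matrix-vector product (a deliberate simplification of the paper's device- and circuit-level noise models) and then re-estimates the batchnorm statistics on the noisy pre-activations while keeping all weights frozen. All names, shapes, and constants here are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ideal crossbar weights, and a hypothetical noisy matrix-vector product in
# which additive Gaussian read noise stands in for crossbar non-idealities.
W = rng.normal(size=(32, 16))

def crossbar_matvec(x, noise_std=0.05):
    # y = Wx plus stochastic read noise on each output line
    return W @ x + noise_std * rng.normal(size=W.shape[0])

# Batchnorm-only adaptation: instead of retraining W on-chip, re-estimate the
# per-channel mean/variance of the *noisy* pre-activations on a small
# calibration set, keeping the original affine parameters (gamma, beta) and
# all DNN weights frozen.
X_cal = rng.normal(size=(256, 16))                     # calibration inputs
Y_cal = np.stack([crossbar_matvec(x) for x in X_cal])  # noisy pre-activations
mu, var = Y_cal.mean(axis=0), Y_cal.var(axis=0)        # adapted BN statistics

gamma, beta = np.ones(32), np.zeros(32)                # original BN affine params

def batchnorm(y, eps=1e-5):
    # Normalize with the crossbar-aware statistics, then apply the affine map.
    return gamma * (y - mu) / np.sqrt(var + eps) + beta

out = batchnorm(crossbar_matvec(X_cal[0]))             # normalized noisy output
```

Because only the batchnorm statistics (and optionally gamma/beta) change, the calibration touches far fewer parameters than weight retraining, which is the source of the memory and training-energy savings the abstract refers to.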


Cited By

  • (2024) A Readout Scheme for PCM-Based Analog In-Memory Computing With Drift Compensation Through Reference Conductance Tracking. IEEE Open Journal of the Solid-State Circuits Society 4, 69-82. DOI: 10.1109/OJSSCS.2024.3432468. Online publication date: 2024.
  • (2024) Are SNNs Truly Energy-efficient? — A Hardware Perspective. ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 13311-13315. DOI: 10.1109/ICASSP48485.2024.10448269. Online publication date: 14-Apr-2024.
  • (2024) Fast and robust analog in-memory deep neural network training. Nature Communications 15:1. DOI: 10.1038/s41467-024-51221-z. Online publication date: 20-Aug-2024.


    Published In

    GLSVLSI '23: Proceedings of the Great Lakes Symposium on VLSI 2023
    June 2023
    731 pages
    ISBN:9798400701252
    DOI:10.1145/3583781
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. batchnorm adaptation
    2. energy- & memory-efficiencies
    3. in-memory computing
    4. memristive crossbars
    5. non-idealities

    Qualifiers

    • Research-article

    Funding Sources

    • DARPA AI Exploration (AIE) program
    • DoE MMICC center SEA-CROGS
    • CoCoSys
    • Google Research Scholar Award
    • National Science Foundation CAREER Award
    • TII (Abu Dhabi)

    Conference

GLSVLSI '23: Great Lakes Symposium on VLSI 2023
June 5-7, 2023
Knoxville, TN, USA

    Acceptance Rates

    Overall Acceptance Rate 312 of 1,156 submissions, 27%


