Research article · Public Access
DOI: 10.1145/3386263.3406910
Redundant Neurons and Shared Redundant Synapses for Robust Memristor-based DNNs with Reduced Overhead

Published: 07 September 2020

Abstract

The dominant computational workload in the inference phase of deep neural networks (DNNs) is matrix-vector multiplication. An emerging solution for accelerating the inference phase is to perform analog matrix-vector multiplication using memristor crossbar arrays (MCAs). A key challenge is that stuck-at-fault defects may degrade the classification accuracy of memristor-based DNNs. A common technique for reducing the negative impact of stuck-at-faults is to utilize redundant synapses, i.e., each row in a weight matrix is realized using two (or r) parallel rows in an MCA. In this paper, we propose to handle stuck-at-faults by inserting redundant neurons and by sharing redundant synapses. The first technique inserts redundant neurons to surgically repair neurons connected to rows and columns in the MCAs with many stuck-at-faults. The second technique shares redundant synapses between different neurons to reduce the hardware overhead, generalizing the (1:r) synapse redundancy of previous studies to (q:r) synapse redundancy. The experimental results demonstrate new trade-offs between robustness and hardware overhead without requiring the neural networks to be retrained. Compared with the state of the art, the power and area overhead for a neural network can be reduced by up to 16% and 25%, respectively.
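The (1:r) redundant-synapse scheme that the abstract builds on can be illustrated with a small numerical sketch. This is a toy model under stated assumptions, not the paper's actual hardware mapping: weights are kept in a normalized [0, 1] range rather than mapped to conductances, each logical weight row is copied onto r parallel crossbar rows, stuck-at faults freeze random cells at 0 or 1, and the read-out averages the r redundant copies of each row.

```python
import numpy as np

rng = np.random.default_rng(0)

def mvm_with_redundancy(W, x, r, fault_rate, rng):
    """Map each logical row of W onto r parallel crossbar rows,
    inject stuck-at faults (a cell frozen at 0 or 1), and read
    out the average of the r redundant rows as the row's value."""
    m, n = W.shape
    # r physical copies of every logical row: [w0, w0, ..., w1, w1, ...]
    phys = np.repeat(W, r, axis=0)                    # shape (m*r, n)
    # each physical cell is independently stuck with prob. fault_rate
    mask = rng.random(phys.shape) < fault_rate
    stuck = rng.choice([0.0, 1.0], size=phys.shape)   # stuck-at-0 or stuck-at-1
    phys = np.where(mask, stuck, phys)
    # analog MVM per physical row, then average the r copies per logical row
    y_phys = phys @ x                                 # shape (m*r,)
    return y_phys.reshape(m, r).mean(axis=1)

m, n = 32, 64
W = rng.random((m, n))
x = rng.random(n)
ideal = W @ x

err1 = np.abs(mvm_with_redundancy(W, x, r=1, fault_rate=0.05, rng=rng) - ideal).mean()
err2 = np.abs(mvm_with_redundancy(W, x, r=2, fault_rate=0.05, rng=rng) - ideal).mean()
print(f"mean |error|, r=1: {err1:.4f}  r=2: {err2:.4f}")
```

Averaging r independently faulted copies of a row shrinks the read-out error, which is why redundancy helps; the paper's (q:r) generalization then amortizes those redundant rows across q logical rows to cut the overhead that this naive duplication incurs.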

Supplementary Material

MP4 File (3386263.3406910.mp4)
Video presentation of the paper.


Cited By

  • FAMCroNA: Fault Analysis in Memristive Crossbars for Neuromorphic Applications. Journal of Electronic Testing, 38(2):145–163, 2022. DOI: 10.1007/s10836-022-06001-2

          Published In

          GLSVLSI '20: Proceedings of the 2020 Great Lakes Symposium on VLSI
          September 2020, 597 pages
          ISBN: 9781450379441
          DOI: 10.1145/3386263

          Publisher

          Association for Computing Machinery, New York, NY, United States


          Author Tags

          1. deep neural networks
          2. in-memory computing
          3. memristor crossbar arrays
          4. stuck-at-fault defects


          Conference

          GLSVLSI '20: Great Lakes Symposium on VLSI 2020
          September 7–9, 2020, Virtual Event, China

          Acceptance Rates

          Overall Acceptance Rate 312 of 1,156 submissions, 27%
