DOI: 10.1145/2627369.2627613
Research article

AxNN: energy-efficient neuromorphic systems using approximate computing

Published: 11 August 2014

Abstract

Neuromorphic algorithms, which comprise highly complex, large-scale networks of artificial neurons, are increasingly used for a variety of recognition, classification, search, and vision tasks. However, their computational and energy requirements can be quite high, making energy-efficient implementations of great interest.
We propose a new approach to design energy-efficient hardware implementations of large-scale neural networks (NNs) using approximate computing. Our work is motivated by the observations that (i) NNs are used in applications where less-than-perfect results are acceptable, and often inevitable, and (ii) they are highly resilient to inexactness in many (but not all) of their constituent computations. We make two key contributions. First, we propose a method to transform any given NN into an Approximate Neural Network (AxNN). This is performed by (i) adapting the backpropagation technique, which is commonly used to train these networks, to quantify the impact of approximating each neuron on the overall network quality (e.g., classification accuracy), and (ii) selectively approximating those neurons that impact network quality the least. Further, we make the key observation that training is a naturally error-healing process that can be used to mitigate the impact of approximations to neurons. Therefore, we incrementally retrain the network with the approximations in place, reclaiming a significant portion of the quality ceded by approximations. As a second contribution, we propose a programmable and quality-configurable neuromorphic processing engine (qcNPE), which utilizes arrays of specialized processing elements that execute neuron computations with dynamically configurable accuracies and can be used to execute AxNNs from diverse applications. We evaluated the proposed approach by constructing AxNNs for six recognition applications (ranging in complexity from 12 to 47,818 neurons and 160 to 3,155,968 connections) and executing them on two different platforms: a qcNPE implementation containing 272 processing elements in 45 nm technology, and a commodity Intel Xeon server. Our results demonstrate 1.14X-1.92X energy benefits for virtually no loss (< 0.5%) in output quality, and even higher improvements (up to 2.3X) when some loss (up to 7.5%) in output quality is acceptable.
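The characterize-approximate-retrain loop described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a toy two-layer network on synthetic data, uses the average backpropagated error signal as the per-neuron sensitivity score, and stands in for the paper's hardware approximations with coarse weight quantization. All names, data, and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (assumed for illustration): classify whether features sum > 0.
X = rng.normal(size=(256, 8))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)

# Tiny 2-layer MLP: 8 inputs -> 16 tanh hidden neurons -> 1 sigmoid output.
W1 = rng.normal(scale=0.5, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                  # hidden activations
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    return h, out

def train_step(lr=0.5):
    """One full-batch backprop step (BCE loss); returns hidden deltas."""
    global W1, b1, W2, b2
    h, out = forward(X)
    d_out = out - y                           # dLoss/d(logit) for BCE+sigmoid
    d_h = (d_out @ W2.T) * (1.0 - h**2)       # backprop through tanh
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)
    return d_h

# Step 1: train, then use backprop signals to score each hidden neuron's
# contribution to output quality (|delta| * |activation| as a simple proxy).
for _ in range(500):
    d_h = train_step()
sensitivity = np.abs(d_h).mean(axis=0) * np.abs(forward(X)[0]).mean(axis=0)

# Step 2: approximate the least-sensitive half of the neurons by coarsely
# quantizing their incoming weights (a stand-in for reduced precision).
approx = np.argsort(sensitivity)[: 16 // 2]
def quantize(w, step=0.25):
    return np.round(w / step) * step
W1[:, approx] = quantize(W1[:, approx])

# Step 3: incremental retraining with the approximation held in place
# "heals" much of the quality ceded to the approximation.
for _ in range(200):
    train_step()
    W1[:, approx] = quantize(W1[:, approx])

_, out = forward(X)
acc = ((out > 0.5) == (y > 0.5)).mean()
print(f"accuracy with half the neurons approximated: {acc:.2f}")
```

The same skeleton extends to the paper's setting by replacing the quantization step with whatever approximation knob the hardware exposes (e.g., dynamically configured neuron accuracy on a qcNPE-like engine) and re-ranking neurons as retraining shifts their sensitivities.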




    Published In

ISLPED '14: Proceedings of the 2014 International Symposium on Low Power Electronics and Design
    August 2014
    398 pages
    ISBN:9781450329750
    DOI:10.1145/2627369


Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. approximate computing
    2. energy efficiency
    3. large-scale neural networks
    4. neuromorphic systems

    Qualifiers

    • Research-article

    Conference

ISLPED '14

    Acceptance Rates

ISLPED '14 paper acceptance rate: 63 of 184 submissions (34%)
Overall acceptance rate: 398 of 1,159 submissions (34%)

    Article Metrics

    • Downloads (Last 12 months)77
    • Downloads (Last 6 weeks)6
    Reflects downloads up to 09 Nov 2024


    Cited By

• (2024) Transfer Adversarial Attacks through Approximate Computing. Proceedings of the 19th International Conference on Availability, Reliability and Security, pp. 1-6. DOI: 10.1145/3664476.3670449. Online publication date: 30-Jul-2024.
• (2024) On the Commutative Operation of Approximate CMOS Ripple Carry Adders (RCAs). IEEE Transactions on Nanotechnology, 23:265-273. DOI: 10.1109/TNANO.2023.3342844. Online publication date: 2024.
• (2024) MARLIN: A Co-Design Methodology for Approximate ReconfigurabLe Inference of Neural Networks at the Edge. IEEE Transactions on Circuits and Systems I: Regular Papers, 71(5):2105-2118. DOI: 10.1109/TCSI.2024.3365952. Online publication date: May-2024.
• (2024) SimBU: Self-Similarity-Based Hybrid Binary-Unary Computing for Nonlinear Functions. IEEE Transactions on Computers, 73(9):2192-2205. DOI: 10.1109/TC.2024.3398512. Online publication date: 1-Sep-2024.
• (2024) Rapid Emulation of Approximate DNN Accelerators. 2024 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1-5. DOI: 10.1109/ISCAS58744.2024.10558108. Online publication date: 19-May-2024.
• (2024) An Efficient Hardware Implementation of Spiking Neural Network Using Approximate Izhikevich Neuron. 2024 9th International Conference on Integrated Circuits, Design, and Verification (ICDV), pp. 13-18. DOI: 10.1109/ICDV61346.2024.10616602. Online publication date: 6-Jun-2024.
• (2024) FlipBit: Approximate Flash Memory for IoT Devices. 2024 IEEE International Symposium on High-Performance Computer Architecture (HPCA), pp. 876-890. DOI: 10.1109/HPCA57654.2024.00072. Online publication date: 2-Mar-2024.
• (2024) Approximate Fault-Tolerant Neural Network Systems. 2024 IEEE European Test Symposium (ETS), pp. 1-10. DOI: 10.1109/ETS61313.2024.10567290. Online publication date: 20-May-2024.
• (2024) Ineffectiveness of Digital Transformations for Detecting Adversarial Attacks Against Quantized and Approximate CNNs. 2024 IEEE International Conference on Cyber Security and Resilience (CSR), pp. 290-295. DOI: 10.1109/CSR61664.2024.10679345. Online publication date: 2-Sep-2024.
• (2024) Design Wireless Communication Circuits and Systems Using Approximate Computing. Design and Applications of Emerging Computer Systems, pp. 531-565. DOI: 10.1007/978-3-031-42478-6_20. Online publication date: 14-Jan-2024.
