DOI: 10.1145/3338852.3339874

Reduction of neural network circuits by constant and nearly constant signal propagation

Published: 26 August 2019

Abstract

This work focuses on optimizing circuits that represent neural networks (NNs) in the form of and-inverter graphs (AIGs). The optimization analyzes the training set of the neural network to find bit values that are constant at the primary inputs. These constants are then propagated through the AIG, which removes unnecessary nodes. Furthermore, a trade-off between neural network accuracy and circuit reduction under constant propagation is investigated by also replacing inputs that are very likely to be zero or one with constants. The experimental results show a significant reduction in circuit size with negligible loss in accuracy.
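
As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below assumes a binarized training set and a toy AIG encoding: it marks primary-input bits that are always, or almost always, 0 or 1, and then propagates those constants through the AND nodes to find nodes that become constant and can be removed. All names, the node encoding, and the threshold parameter are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): names, the AIG encoding,
# and the `threshold` parameter are illustrative assumptions.
from dataclasses import dataclass


def classify_inputs(bit_vectors, threshold=0.99):
    """Map input index -> 0/1 for bits that are (nearly) constant.

    bit_vectors: equal-length sequences of 0/1 primary-input values
    observed over the training set.  An input whose frequency of 1 is
    at least `threshold` (or at most 1 - threshold) is tied to a constant.
    """
    n = len(bit_vectors[0])
    ones = [0] * n
    for vec in bit_vectors:
        for i, bit in enumerate(vec):
            ones[i] += bit
    constants = {}
    for i in range(n):
        p1 = ones[i] / len(bit_vectors)
        if p1 >= threshold:
            constants[i] = 1
        elif p1 <= 1.0 - threshold:
            constants[i] = 0
    return constants


@dataclass
class AndNode:
    # Two fanins, each a (node_id, inverted) pair; ids below num_inputs
    # refer to primary inputs, and ids are assumed topologically ordered.
    fanin0: tuple
    fanin1: tuple


def propagate_constants(nodes, const_inputs, num_inputs):
    """Return node_id -> 0, 1, or None (node is not constant).

    AND nodes whose value becomes 0 or 1 can be removed from the circuit
    and replaced by a constant driver.
    """
    value = {i: const_inputs.get(i) for i in range(num_inputs)}
    for nid in sorted(nodes):
        lits = []
        for fid, inv in (nodes[nid].fanin0, nodes[nid].fanin1):
            v = value.get(fid)
            lits.append(None if v is None else v ^ inv)
        if 0 in lits:            # AND(0, x) = 0
            value[nid] = 0
        elif lits == [1, 1]:     # AND(1, 1) = 1
            value[nid] = 1
        else:                    # node survives (may still simplify to a buffer)
            value[nid] = None
    return value


# Tiny example: if input 2 is (nearly) always 0 in the training data,
# node 3 = AND(in0, in2) collapses to constant 0 and can be removed.
# nodes = {3: AndNode((0, 0), (2, 0)), 4: AndNode((3, 1), (1, 0))}
# propagate_constants(nodes, {2: 0}, num_inputs=3)  -> {..., 3: 0, 4: None}
```

In this sketch, setting the threshold to 1.0 ties off only strictly constant input bits, while lowering it also replaces nearly constant bits, which corresponds to the accuracy-versus-reduction trade-off investigated in the paper.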


Cited By

  • (2022) Optimizing machine learning logic circuits with constant signal propagation. Integration, the VLSI Journal 87(C), 293-305. https://doi.org/10.1016/j.vlsi.2022.08.004. Online publication date: 1-Nov-2022.
  • (2021) Exploring Constant Signal Propagation to Optimize Neural Network Circuits. In 2021 34th SBC/SBMicro/IEEE/ACM Symposium on Integrated Circuits and Systems Design (SBCCI), 1-6. https://doi.org/10.1109/SBCCI53441.2021.9529971. Online publication date: 23-Aug-2021.


      Published In

      SBCCI '19: Proceedings of the 32nd Symposium on Integrated Circuits and Systems Design
      August 2019
      204 pages
      ISBN:9781450368445
      DOI:10.1145/3338852
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 26 August 2019


      Author Tags

      1. and-inverter graph
      2. logic synthesis
      3. neural networks

      Qualifiers

      • Research-article

      Funding Sources

      • Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
      • Conselho Nacional de Desenvolvimento Científico e Tecnológico

      Conference

      SBCCI '19

      Acceptance Rates

      Overall Acceptance Rate 133 of 347 submissions, 38%

