DOI: 10.1145/3526241.3530315

Reducing Power Consumption using Approximate Encoding for CNN Accelerators at the Edge

Published: 06 June 2022

Abstract

Convolutional neural networks (CNNs) have demonstrated significant potential across a range of applications due to their superior accuracy. Edge inference, in which inference is performed locally on embedded systems with limited power budgets, is studied for its energy efficiency. This work proposes an approximate encoder that reduces switching activity, and thereby power consumption, in CNN accelerators at the edge. The proposed encoder approximates data by pattern matching: the current data word is compared against a comparison pattern, and software determines both the value of the comparison pattern and whether the encoder is enabled. Experiments with LeNet5 on the CIFAR-10 dataset show that, depending on the comparison pattern, the proposed encoder reduces the power consumption of a CNN accelerator by up to 21.5% with a 1.59% degradation in inference accuracy.
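The encoding idea described above, replacing a data word with a software-chosen comparison pattern when the two are close so that consecutive bus words repeat and bit transitions drop, can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the Hamming-distance threshold, the 8-bit word width, and all function names are assumptions.

```python
# Hypothetical sketch of approximate encoding for low switching activity.
# Assumption: if an incoming word is within a small Hamming distance of a
# software-chosen comparison pattern, the encoder transmits the pattern
# instead, so consecutive bus words repeat and cause fewer bit toggles.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two words."""
    return bin(a ^ b).count("1")

def approx_encode(words, pattern, max_dist=2):
    """Replace words close to `pattern` (threshold value is an assumption)."""
    return [pattern if hamming(w, pattern) <= max_dist else w for w in words]

def switching_activity(words):
    """Total bit transitions between consecutive words on the bus."""
    return sum(hamming(a, b) for a, b in zip(words, words[1:]))

data = [0b10110100, 0b10110110, 0b10110101, 0b10110000]
encoded = approx_encode(data, pattern=0b10110100)

print(switching_activity(data))     # 5 toggles on the raw stream
print(switching_activity(encoded))  # 0 toggles after approximate encoding
```

The accuracy cost reported in the abstract corresponds to the information lost when a near-match word is replaced by the pattern; widening the distance threshold would trade more power savings for more degradation.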


Cited By

  • Low-Power Bus Encoding by Ternary LWC and Quaternary Transition Signaling: From Initial Concept to Circuit Design. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 32(4):682-694, Apr. 2024. DOI: 10.1109/TVLSI.2023.3337277
  • Approximation Opportunities in Edge Computing Hardware: A Systematic Literature Review. ACM Computing Surveys, 55(12):1-49, Mar. 2023. DOI: 10.1145/3572772


      Published In

      GLSVLSI '22: Proceedings of the Great Lakes Symposium on VLSI 2022
      June 2022
      560 pages
      ISBN:9781450393225
      DOI:10.1145/3526241

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. approximate encoding
      2. convolutional neural network
      3. low-power CNN accelerator
      4. reducing switching activity

      Qualifiers

      • Research-article

      Funding Sources

      • Fukuoka University Internal Research Competitive Funds
      • JSPS KAKENHI Grant

      Conference

      GLSVLSI '22

      Acceptance Rates

      Overall Acceptance Rate 312 of 1,156 submissions, 27%
