DOI: 10.1145/3583781.3590213 | GLSVLSI Conference Proceedings
research-article
Public Access

Accelerating Low Bit-width Neural Networks at the Edge, PIM or FPGA: A Comparative Study

Published: 05 June 2023 Publication History
  • Abstract

    Deep Neural Network (DNN) acceleration with digital Processing-in-Memory (PIM) platforms at the edge is an actively explored domain with great potential not only to address the memory-wall bottleneck but also to offer orders-of-magnitude performance improvement over the von Neumann architecture. On the other side, FPGA-based edge computing has been pursued as a potential solution for accelerating compute-intensive workloads. In this work, adopting low-bit-width neural networks, we perform a thorough comparative analysis of inference performance across a recent processing-in-SRAM tape-out, a low-resource FPGA board, and a high-performance GPU, to provide a guideline for the research community. We explore and highlight the key architectural constraints of these edge candidates that impact their overall performance. Our experimental data demonstrate that the processing-in-SRAM design obtains up to ~160x speed-up and up to 228x higher efficiency (img/s/W) compared with the FPGA under test on the CIFAR-10 dataset.
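    The low-bit-width networks the abstract refers to (binary-weight schemes in the XNOR-Net / BinaryConnect family) replace the dominant multiply-accumulate with bitwise XNOR plus popcount, which is exactly the operation that both in-SRAM compute macros and FPGA LUT fabrics handle cheaply. A minimal illustrative sketch of that reduction (not code from the paper):

```python
def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two {-1,+1}^n vectors packed as n-bit integers
    (bit value 1 encodes +1, bit value 0 encodes -1)."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # 1 wherever the signs agree
    popcnt = bin(xnor).count("1")               # count of +1 partial products
    return 2 * popcnt - n                       # equals sum(a_i * b_i)

# Cross-check against plain +/-1 arithmetic:
a = [+1, -1, +1, +1]
b = [+1, +1, -1, +1]
pack = lambda v: sum((x > 0) << i for i, x in enumerate(v))
assert binary_dot(pack(a), pack(b), 4) == sum(x * y for x, y in zip(a, b))
```

    Because the whole inner loop collapses to wide XNOR and a population count, throughput is set by memory bandwidth and bit-level parallelism rather than multiplier count, which is why the PIM-versus-FPGA comparison hinges on architectural constraints rather than raw FLOPS.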


    Cited By

    • (2023) "High Performance VLSI Architecture for Real-Time Video Edge Detection," 2023 Second International Conference on Advances in Computational Intelligence and Communication (ICACIC), pp. 1-6. DOI: 10.1109/ICACIC59454.2023.10435054. Online publication date: 7-Dec-2023.


      Published In

      GLSVLSI '23: Proceedings of the Great Lakes Symposium on VLSI 2023
      June 2023
      731 pages
      ISBN:9798400701252
      DOI:10.1145/3583781


      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. deep neural networks
      2. fpga
      3. processing-in-memory
      4. sram


      Conference

      GLSVLSI '23: Great Lakes Symposium on VLSI 2023
      June 5-7, 2023
      Knoxville, TN, USA

      Acceptance Rates

      Overall Acceptance Rate 312 of 1,156 submissions, 27%

      Article Metrics


      • Downloads (Last 12 months)156
      • Downloads (Last 6 weeks)18
      Reflects downloads up to 27 Jul 2024
