DOI: 10.1109/ASP-DAC58780.2024.10473935

Hardware-Software Co-Design of a Collaborative DNN Accelerator for 3D Stacked Memories with Multi-Channel Data

Published: 03 April 2024

Abstract

Hardware accelerators are preferred over general-purpose processors for processing Deep Neural Networks (DNNs), as the latter suffer from the power and memory walls. However, hardware accelerators designed as a logic chip separate from the memory still suffer from the memory wall. Processing-in-memory accelerators, which attempt to overcome the memory wall by building compute elements into the memory structures themselves, are highly constrained by the memory manufacturing process. Near-data-processing (NDP) based hardware accelerator design is an alternative paradigm that can combine the high bandwidth and low access energy of processing-in-memory with the design flexibility of a separate logic chip. However, NDP faces area, dataflow, and thermal constraints that hinder high-throughput designs. In this work, we propose an HBM3-based NDP accelerator that tackles these constraints through a hardware-software co-design approach. The proposed design occupies only 50% of the area, delivers a speed-up of 3×, and is about 6× more energy efficient than a state-of-the-art NDP hardware accelerator for inference workloads such as AlexNet, MobileNet, ResNet, and VGG, without loss of accuracy.



        Published In

        ASPDAC '24: Proceedings of the 29th Asia and South Pacific Design Automation Conference
        January 2024
        1008 pages
ISBN: 9798350393545
DOI: 10.1109/3655039


        Publisher

        IEEE Press


        Author Tags

1. hardware-software co-design
2. deep neural network accelerator
3. HBM3

        Qualifiers

        • Research-article

        Conference

ASPDAC '24: 29th Asia and South Pacific Design Automation Conference
January 22-25, 2024
Incheon, Republic of Korea

        Acceptance Rates

Overall Acceptance Rate: 466 of 1,454 submissions (32%)
