DOI: 10.1145/3665314.3670832

Hardware Acceleration of Inference on Dynamic GNNs

Published: 09 September 2024

Abstract

Dynamic graph neural networks (DGNNs) play a crucial role in applications that require inference on graph-structured data whose connectivity and features evolve over time. The proposed platform integrates the graph neural network (GNN) and recurrent neural network (RNN) components of DGNNs into a unified design that captures both spatial and temporal information. Novel contributions include optimized cache reuse, a novel caching policy, and efficient GNN-RNN pipelining. Average energy efficiency gains of 8393X, 183X, and 87X–10X, and inference speedups of 1796X, 77X, and 21X–2.4X, over an Intel Xeon Gold CPU, an NVIDIA V100 GPU, and prior approaches, respectively, are demonstrated across multiple graph datasets and multiple DGNNs.
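
To make the GNN-RNN structure targeted by the accelerator concrete, the sketch below shows one inference step of a snapshot-based dynamic GNN in plain PyTorch: a graph-convolution (spatial) layer whose node embeddings feed a GRU cell that carries per-node temporal state across snapshots. This is a minimal illustrative model under assumed layer sizes and a row-normalized adjacency, not the paper's accelerator or its exact DGNN workloads.

```python
# Minimal sketch of one dynamic-GNN inference step (spatial GNN + temporal RNN).
# Shapes, sizes, and the adjacency normalization are illustrative assumptions.
import torch
import torch.nn as nn


class TinyDGNNStep(nn.Module):
    def __init__(self, in_feats: int, hid_feats: int):
        super().__init__()
        self.gnn_lin = nn.Linear(in_feats, hid_feats)  # GNN weighting
        self.gru = nn.GRUCell(hid_feats, hid_feats)    # RNN temporal update

    def forward(self, adj_norm: torch.Tensor, x: torch.Tensor,
                h_prev: torch.Tensor) -> torch.Tensor:
        # Spatial step: aggregate neighbor features with a normalized adjacency
        # matrix, then apply the layer weights (the GNN half of the pipeline).
        z = torch.relu(adj_norm @ self.gnn_lin(x))
        # Temporal step: update per-node hidden state across snapshots
        # (the RNN half of the pipeline).
        return self.gru(z, h_prev)


if __name__ == "__main__":
    num_nodes, in_feats, hid_feats = 5, 8, 16
    model = TinyDGNNStep(in_feats, hid_feats)
    # Random row-stochastic adjacency standing in for a normalized graph.
    adj = torch.softmax(torch.rand(num_nodes, num_nodes), dim=1)
    h = torch.zeros(num_nodes, hid_feats)
    for _ in range(2):  # iterate over consecutive graph snapshots
        x = torch.rand(num_nodes, in_feats)
        h = model(adj, x, h)
    print(h.shape)  # torch.Size([5, 16])
```

In hardware, the aggregation and weighting of the spatial step can be overlapped with the gated updates of the temporal step across snapshots, which is the kind of GNN-RNN pipelining the abstract refers to.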

    Published In

    ISLPED '24: Proceedings of the 29th ACM/IEEE International Symposium on Low Power Electronics and Design
August 2024, 384 pages
ISBN: 9798400706882
DOI: 10.1145/3665314

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. dynamic graphs
    2. GNN
    3. RNN
    4. hardware accelerator

    Qualifiers

    • Research-article

    Conference

    ISLPED '24

    Acceptance Rates

    Overall Acceptance Rate 398 of 1,159 submissions, 34%
