Distributed Graph Neural Network Training: A Survey

Published: 10 April 2024

Abstract

    Graph neural networks (GNNs) are a class of deep learning models that are trained on graphs and have been successfully applied in various domains. Despite their effectiveness, it remains challenging for GNNs to scale efficiently to large graphs. As a remedy, distributed computing is a promising solution for training large-scale GNNs, since it provides abundant computing resources. However, the dependencies imposed by the graph structure make high-efficiency distributed GNN training difficult to achieve, as training suffers from massive communication and workload imbalance. In recent years, much effort has been devoted to distributed GNN training, and an array of training algorithms and systems have been proposed. Yet, a systematic review of the optimization techniques for the distributed execution of GNN training is still lacking. In this survey, we analyze three major challenges in distributed GNN training: massive feature communication, the loss of model accuracy, and workload imbalance. We then introduce a new taxonomy of the optimization techniques that address these challenges, classifying existing techniques into four categories: GNN data partition, GNN batch generation, GNN execution model, and GNN communication protocol. We carefully discuss the techniques in each category. Finally, we summarize existing distributed GNN systems for multiple graphics processing units (GPUs), GPU clusters, and central processing unit (CPU) clusters, and discuss future directions for distributed GNN training.
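
    To make the taxonomy concrete, the sketch below (illustrative only, not taken from the survey) marks where each of the four categories appears in one distributed mini-batch training loop. It is PyTorch-style Python; partition_graph, sample_batch, fetch_remote_features, and the GNN model are hypothetical placeholders for whatever partitioner, sampler, feature store, and architecture a real system provides.

    ```python
    # Illustrative sketch only: how the survey's four optimization categories
    # map onto one distributed mini-batch training step. partition_graph,
    # sample_batch, fetch_remote_features, and GNN are hypothetical helpers,
    # not the API of any specific system.
    import torch
    import torch.distributed as dist
    import torch.nn.functional as F

    def train_worker(rank, world_size, graph, features, labels, epochs=10):
        dist.init_process_group("gloo", rank=rank, world_size=world_size)

        # (1) GNN data partition: each worker owns one subgraph; edges cut
        #     by the partitioner determine which neighbor features are remote.
        local_part = partition_graph(graph, world_size)[rank]

        model = GNN(in_dim=features.shape[1], hidden_dim=64,
                    out_dim=int(labels.max()) + 1)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)

        for _ in range(epochs):
            # (2) GNN batch generation: sample training nodes plus their
            #     multi-hop neighborhoods from the local partition.
            batch = sample_batch(local_part, fanouts=[10, 10])

            # (4) GNN communication protocol: pull input features of sampled
            #     neighbors stored on remote workers -- typically the dominant
            #     cost that the surveyed techniques try to reduce.
            x = fetch_remote_features(features, batch.input_nodes)

            # (3) GNN execution model: local forward/backward, followed by a
            #     synchronous gradient all-reduce (data parallelism).
            loss = F.cross_entropy(model(batch.subgraph, x),
                                   labels[batch.output_nodes])
            opt.zero_grad()
            loss.backward()
            for p in model.parameters():
                if p.grad is not None:
                    dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
                    p.grad /= world_size
            opt.step()
    ```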

    Supplementary Material

    csur-2022-0818-File003 (csur-2022-0818-file003.zip)
    Supplementary material

    Cited By

    • Nutrition-Related Knowledge Graph Neural Network for Food Recommendation. Foods 13, 13 (2024), 2144. DOI: 10.3390/foods13132144. Online publication date: 5-Jul-2024.
    • Graph Convolutional Spectral Clustering for Electricity Market Data Clustering. Applied Sciences 14, 12 (2024), 5263. DOI: 10.3390/app14125263. Online publication date: 18-Jun-2024.
    • Synergies Between Graph Data Management and Machine Learning in Graph Data Pipeline. In 2024 IEEE 40th International Conference on Data Engineering (ICDE). 5655–5656. DOI: 10.1109/ICDE60146.2024.00457. Online publication date: 13-May-2024.

    Published In

    ACM Computing Surveys, Volume 56, Issue 8
    August 2024
    963 pages
    ISSN: 0360-0300
    EISSN: 1557-7341
    DOI: 10.1145/3613627
    Editors: David Atienza, Michela Milano

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 10 April 2024
    Online AM: 16 February 2024
    Accepted: 31 January 2024
    Revised: 09 December 2023
    Received: 31 October 2022
    Published in CSUR Volume 56, Issue 8

    Author Tags

    1. Surveys and overviews
    2. distributed GNN training
    3. graph data management
    4. communication optimization
    5. distributed GNN systems

    Qualifiers

    • Survey

    Funding Sources

    • National Science and Technology Major Project
    • National Natural Science Foundation of China
    • Beijing Nova Program
    • Xiaomi Young Talents Program
    • National Science Foundation of China (NSFC)
    • Hong Kong RGC GRF Project
    • CRF Project
    • AOE Project
    • RIF Project
    • Theme-based project
    • Guangdong Basic and Applied Basic Research Foundation
    • Hong Kong ITC ITF
    • Microsoft Research Asia Collaborative Research Grant, HKUST-Webank joint research lab grant and HKUST Global Strategic Partnership Fund

    Article Metrics

    • Downloads (last 12 months): 1,538
    • Downloads (last 6 weeks): 294

    Reflects downloads up to 26 July 2024
