DOI: 10.1145/3485447.3512044
Research Article
Open Access

GRAND+: Scalable Graph Random Neural Networks

Published: 25 April 2022

Abstract

Graph neural networks (GNNs) have been widely adopted for semi-supervised learning on graphs. A recent study shows that the graph random neural network (GRAND) model can achieve state-of-the-art performance for this problem. However, GRAND has difficulty handling large-scale graphs, since its effectiveness relies on computationally expensive data augmentation procedures. In this work, we present GRAND+, a scalable and high-performance GNN framework for semi-supervised graph learning. To address the scalability issue, we develop a generalized forward push (GFPush) algorithm in GRAND+ to pre-compute a general propagation matrix, which is then used to perform graph data augmentation in a mini-batch manner. We show that the low time and space complexities of GFPush enable GRAND+ to scale efficiently to large graphs. Furthermore, we introduce a confidence-aware consistency loss into the model optimization of GRAND+, which further improves its generalization. We conduct extensive experiments on seven public datasets of different sizes. The results demonstrate that GRAND+ 1) scales to large graphs with less running time than existing scalable GNNs, and 2) offers consistent accuracy improvements over both full-batch and scalable GNNs across all datasets.
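The generalized forward push idea builds on the classic local-push approximation of personalized PageRank: probability mass is moved from a residue vector into a reserve vector one node at a time, touching only the neighborhood of the source node rather than the whole graph. The sketch below shows that classic forward push in Python as a minimal illustration of the family of algorithms GFPush generalizes; it is not the paper's implementation, and the names `adj`, `alpha`, and `eps` are illustrative assumptions.

```python
# Minimal sketch (not the paper's GFPush): classic forward push for
# approximating one row of a personalized-PageRank-style propagation matrix.
# `adj` maps each node to its neighbor list; `alpha` (teleport probability)
# and `eps` (push threshold) are assumed, illustrative parameters.
from collections import defaultdict, deque

def forward_push(adj, source, alpha=0.15, eps=1e-4):
    reserve = defaultdict(float)   # approximated row of the propagation matrix
    residue = defaultdict(float)   # mass still waiting to be pushed
    residue[source] = 1.0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        deg_u = len(adj[u])
        # Only push from nodes holding enough residue relative to their degree;
        # this degree-scaled threshold bounds the total work done.
        if deg_u == 0 or residue[u] < eps * deg_u:
            continue
        mass, residue[u] = residue[u], 0.0
        reserve[u] += alpha * mass            # keep an alpha fraction at u
        for v in adj[u]:                      # spread the rest over neighbors
            residue[v] += (1.0 - alpha) * mass / deg_u
            if adj[v] and residue[v] >= eps * len(adj[v]):
                queue.append(v)
    return dict(reserve)
```

For example, on `adj = {0: [1, 2], 1: [0], 2: [0]}`, `forward_push(adj, 0)` returns a sparse approximation of node 0's propagation row. Because each push touches only a node's immediate neighbors and stops once residues fall below the threshold, the number of pushes is bounded roughly by O(1/(alpha * eps)) independently of graph size, which is the kind of locality property that makes per-row pre-computation feasible on large graphs.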
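The confidence-aware consistency loss follows the general consistency-regularization pattern: predictions from multiple random augmentations of the same nodes are pulled toward their sharpened average, while low-confidence nodes are masked out so they do not propagate noisy pseudo-labels. Below is a hedged PyTorch sketch of this pattern; the tensor layout, the temperature, the threshold, and the squared-error distance are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch of a confidence-aware consistency loss in the spirit of
# GRAND+. `probs_list` holds S softmax outputs of shape [batch, classes],
# one per random augmentation; `temperature` and `threshold` are assumed
# illustrative hyperparameter names.
import torch

def consistency_loss(probs_list, temperature=0.5, threshold=0.7):
    avg = torch.stack(probs_list).mean(dim=0)          # average prediction
    sharpened = avg.pow(1.0 / temperature)             # sharpen toward one class
    sharpened = sharpened / sharpened.sum(dim=1, keepdim=True)
    # Confidence mask: only nodes whose averaged prediction is confident
    # enough contribute to the consistency term.
    mask = (avg.max(dim=1).values > threshold).float()
    loss = 0.0
    for probs in probs_list:
        per_node = ((probs - sharpened.detach()) ** 2).sum(dim=1)
        loss = loss + (per_node * mask).sum() / mask.sum().clamp(min=1.0)
    return loss / len(probs_list)
```

Detaching the sharpened target here is a common stabilization choice in consistency training, assumed for this sketch rather than taken from the paper.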




Information

Published In

WWW '22: Proceedings of the ACM Web Conference 2022
April 2022
3764 pages
ISBN:9781450390965
DOI:10.1145/3485447


Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 25 April 2022


Author Tags

  1. Graph Neural Networks
  2. Scalability
  3. Semi-Supervised Learning

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • NSFC for Distinguished Young Scholar

Conference

WWW '22: The ACM Web Conference 2022
April 25 - 29, 2022
Virtual Event, Lyon, France

Acceptance Rates

Overall Acceptance Rate 1,899 of 8,196 submissions, 23%


Bibliometrics & Citations

Article Metrics

  • Downloads (last 12 months): 361
  • Downloads (last 6 weeks): 39
Reflects downloads up to 30 Aug 2024


Citations

Cited By

  • (2024) LightDiC: A Simple Yet Effective Approach for Large-Scale Digraph Representation Learning. Proceedings of the VLDB Endowment, 17(7), 1542-1551. DOI: 10.14778/3654621.3654623. Online publication date: 30-May-2024.
  • (2024) Rethinking Node-wise Propagation for Large-scale Graph Learning. Proceedings of the ACM Web Conference 2024, 560-569. DOI: 10.1145/3589334.3645450. Online publication date: 13-May-2024.
  • (2023) RSC. Proceedings of the 40th International Conference on Machine Learning, 21951-21968. DOI: 10.5555/3618408.3619318. Online publication date: 23-Jul-2023.
  • (2023) FedGTA: Topology-Aware Averaging for Federated Graph Learning. Proceedings of the VLDB Endowment, 17(1), 41-50. DOI: 10.14778/3617838.3617842. Online publication date: 1-Sep-2023.
  • (2023) Towards a Better Tradeoff between Quality and Efficiency of Community Detection: An Inductive Embedding Method across Graphs. ACM Transactions on Knowledge Discovery from Data, 17(9), 1-34. DOI: 10.1145/3596605. Online publication date: 15-Jun-2023.
  • (2023) Label Efficient Regularization and Propagation for Graph Node Classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(12), 14856-14871. DOI: 10.1109/TPAMI.2023.3309970. Online publication date: 30-Aug-2023.
  • (2023) DropConn: Dropout Connection Based Random GNNs for Molecular Property Prediction. IEEE Transactions on Knowledge and Data Engineering, 36(2), 518-529. DOI: 10.1109/TKDE.2023.3290032. Online publication date: 27-Jun-2023.
  • (2023) Semi-Supervised Social Bot Detection with Initial Residual Relation Attention Networks. Machine Learning and Knowledge Discovery in Databases: Applied Data Science and Demo Track, 207-224. DOI: 10.1007/978-3-031-43427-3_13. Online publication date: 18-Sep-2023.
  • (2022) Data Augmentation for Deep Graph Learning. ACM SIGKDD Explorations Newsletter, 24(2), 61-77. DOI: 10.1145/3575637.3575646. Online publication date: 8-Dec-2022.
  • (2022) MixOT: Graph Representation Learning based on Mix-order Sampling and Transport Aggregator for Social Networks. 2022 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Scalable Computing & Communications, Digital Twin, Privacy Computing, Metaverse, Autonomous & Trusted Vehicles (SmartWorld/UIC/ScalCom/DigitalTwin/PriComp/Meta), 1817-1824. DOI: 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00264. Online publication date: Dec-2022.
