Abstract
The burdensome training costs on large-scale graphs have sparked significant interest in graph condensation, which tunes Graph Neural Networks (GNNs) on a small condensed graph that substitutes for the large-scale original graph. Existing methods primarily focus on aligning key metrics between the condensed and original graphs, such as gradients, output distributions, and training trajectories of GNNs, and achieve satisfactory performance on downstream tasks. However, these complex metrics require intricate external parameters and can disrupt the optimization of the condensed graph, making the condensation process demanding and unstable. Motivated by the recent success of simplified models across various domains, we propose a simplified approach to metric alignment in graph condensation, aiming to shed the unnecessary complexity inherited from intricate metrics. We introduce the Simple Graph Condensation (SimGC) framework, which aligns the condensed graph with the original graph from the input layer to the prediction layer, guided by a Simple Graph Convolution (SGC) model pre-trained on the original graph. Importantly, SimGC eliminates external parameters and retains only the target condensed graph during the condensation process. This straightforward yet effective strategy achieves a speedup of up to 10 times over existing graph condensation methods while performing on par with state-of-the-art baselines. Comprehensive experiments on seven benchmark datasets demonstrate the effectiveness of SimGC in prediction accuracy, condensation time, and generalization capability. Our code is available at https://github.com/BangHonor/SimGC.
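To make the layer-wise alignment idea concrete, the minimal PyTorch sketch below is our own illustration under stated assumptions, not the authors' released implementation: it shows SGC-style parameter-free propagation (the normalized adjacency applied K times to the features) and a hypothetical alignment loss that matches the mean embedding of the condensed graph to that of the original graph at every propagation step. Function names and the choice of a mean-embedding MSE objective are assumptions made for illustration only.

```python
import torch
import torch.nn.functional as F

def sgc_precompute(adj, feat, k=2):
    """SGC-style K-step propagation: X_hat = S^K X, where S is the
    symmetrically normalized adjacency with self-loops (dense tensors assumed).
    Returns the embedding after each step so layers can be aligned one by one."""
    n = adj.size(0)
    adj = adj + torch.eye(n, device=adj.device)          # add self-loops
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    s = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)
    layers = []
    for _ in range(k):
        feat = s @ feat                                  # one propagation step
        layers.append(feat)
    return layers

def alignment_loss(orig_layers, cond_layers):
    """Hypothetical layer-wise alignment: match the mean embedding of the
    condensed graph to that of the original graph at every propagation step."""
    return sum(F.mse_loss(c.mean(dim=0), o.mean(dim=0))
               for o, c in zip(orig_layers, cond_layers))
```

In this reading, only the condensed graph's features and adjacency carry gradients, so no external networks or parameters are introduced beyond the condensed graph itself.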
Acknowledgements
This research was supported by the Joint Funds of the Zhejiang Provincial Natural Science Foundation of China (No. LHZSD24F020001), Zhejiang Province “JianBingLingYan+X” Research and Development Plan (No. 2024C01114), Ningbo Natural Science Foundation (No. 2023J281), and Zhejiang Province High-Level Talents Special Support Program “Leading Talent of Technological Innovation of Ten-Thousands Talents Program” (No. 2022R52046).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Xiao, Z., Wang, Y., Liu, S., Wang, H., Song, M., Zheng, T. (2024). Simple Graph Condensation. In: Bifet, A., Davis, J., Krilavičius, T., Kull, M., Ntoutsi, E., Žliobaitė, I. (eds) Machine Learning and Knowledge Discovery in Databases. Research Track. ECML PKDD 2024. Lecture Notes in Computer Science, vol. 14942. Springer, Cham. https://doi.org/10.1007/978-3-031-70344-7_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-70343-0
Online ISBN: 978-3-031-70344-7