
Simple Graph Condensation

  • Conference paper
Machine Learning and Knowledge Discovery in Databases. Research Track (ECML PKDD 2024)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14942)


Abstract

The burdensome training costs on large-scale graphs have aroused significant interest in graph condensation, which involves tuning Graph Neural Networks (GNNs) on a small condensed graph for use on the large-scale original graph. Existing methods primarily focus on aligning key metrics between the condensed and original graphs, such as gradients, output distributions, and trajectories of GNNs, yielding satisfactory performance on downstream tasks. However, these complex metrics necessitate intricate external parameters and can potentially disrupt the optimization process of the condensed graph, making the condensation process highly demanding and unstable. Motivated by the recent success of simplified models across various domains, we propose a simplified approach to metric alignment in graph condensation, aiming to reduce unnecessary complexity inherited from intricate metrics. We introduce the Simple Graph Condensation (SimGC) framework, which aligns the condensed graph with the original graph from the input layer to the prediction layer, guided by a pre-trained Simple Graph Convolution (SGC) model on the original graph. Importantly, SimGC eliminates external parameters and exclusively retains the target condensed graph during the condensation process. This straightforward yet effective strategy achieves a speedup of up to ten times over existing graph condensation methods while performing on par with state-of-the-art baselines. Comprehensive experiments on seven benchmark datasets demonstrate the effectiveness of SimGC in prediction accuracy, condensation time, and generalization capability. Our code is available at https://github.com/BangHonor/SimGC.
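For context on the backbone named in the abstract: SGC (Wu et al., ICML 2019) removes the nonlinearities between GCN layers, so its graph-dependent part collapses into k rounds of feature smoothing with the symmetrically normalized adjacency matrix, followed by a single linear classifier. The snippet below is a minimal dense-matrix sketch of that propagation step only, written for this page (the function name and toy graph are illustrative, not the authors' SimGC code):

    import torch

    def sgc_propagate(adj: torch.Tensor, features: torch.Tensor, k: int = 2) -> torch.Tensor:
        # SGC propagation: X' = (D^{-1/2} (A + I) D^{-1/2})^k X, with no nonlinearities.
        n = adj.size(0)
        a_hat = adj + torch.eye(n)                  # add self-loops
        deg = a_hat.sum(dim=1)                      # degree vector of A + I
        d_inv_sqrt = torch.diag(deg.pow(-0.5))
        norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt  # symmetric normalization
        x = features
        for _ in range(k):                          # k smoothing steps
            x = norm_adj @ x
        return x

    # Toy usage: a 4-node path graph with 8-dimensional random features.
    adj = torch.tensor([[0., 1., 0., 0.],
                        [1., 0., 1., 0.],
                        [0., 1., 0., 1.],
                        [0., 0., 1., 0.]])
    x = torch.randn(4, 8)
    print(sgc_propagate(adj, x, k=2).shape)  # torch.Size([4, 8])

Per the abstract, SimGC pre-trains such an SGC model on the original graph and then optimizes the condensed graph so that its representations and predictions, from the input layer up to the prediction layer, align with those of the original graph.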



Acknowledgements

This research was supported by the Joint Funds of the Zhejiang Provincial Natural Science Foundation of China (No. LHZSD24F020001), Zhejiang Province “JianBingLingYan+X” Research and Development Plan (No. 2024C01114), Ningbo Natural Science Foundation (No. 2023J281), and Zhejiang Province High-Level Talents Special Support Program “Leading Talent of Technological Innovation of Ten-Thousands Talents Program” (No. 2022R52046).

Author information


Corresponding author

Correspondence to Huiqiong Wang.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Xiao, Z., Wang, Y., Liu, S., Wang, H., Song, M., Zheng, T. (2024). Simple Graph Condensation. In: Bifet, A., Davis, J., Krilavičius, T., Kull, M., Ntoutsi, E., Žliobaitė, I. (eds) Machine Learning and Knowledge Discovery in Databases. Research Track. ECML PKDD 2024. Lecture Notes in Computer Science (LNAI), vol. 14942. Springer, Cham. https://doi.org/10.1007/978-3-031-70344-7_4


  • DOI: https://doi.org/10.1007/978-3-031-70344-7_4


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-70343-0

  • Online ISBN: 978-3-031-70344-7

  • eBook Packages: Computer Science, Computer Science (R0)
