DOI: 10.5555/3618408.3619148

Featured graph coarsening with similarity guarantees

Published: 23 July 2023

Abstract

Graph coarsening is a dimensionality reduction technique that aims to learn a smaller, tractable graph while preserving the properties of the original input graph. However, many real-world graphs also have features or contexts associated with each node. Existing graph coarsening methods do not consider these node features and rely solely on a graph matrix (e.g., the adjacency or Laplacian matrix) to coarsen graphs, while the few recent deep learning-based methods that do use both the node features and the graph matrix are designed for specific downstream tasks. In this paper, we introduce a novel optimization-based framework for graph coarsening that takes both the graph matrix and the node features as input and jointly learns the coarsened graph matrix and the coarsened feature matrix while ensuring desired properties. To the best of our knowledge, this is the first work that guarantees that the learned coarsened graph is ε ∈ [0, 1) similar to the original graph. Extensive experiments with both real and synthetic benchmark datasets demonstrate the proposed framework's efficacy and applicability for numerous graph-based applications, including graph clustering, node classification, stochastic block model identification, and graph summarization.
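To make the setting concrete, the sketch below illustrates the basic coarsening operation that such frameworks build on: once a mapping from the n original nodes to k supernodes is chosen, a column-normalized loading matrix P yields both the coarsened Laplacian P^T L P and the coarsened feature matrix P^T X. This is only an illustrative sketch under assumed conventions (the function name coarsen and the fixed hard partition are our own), not the paper's joint optimization, which learns the coarsened matrices rather than taking the assignment as given.

    import numpy as np

    # Illustrative sketch only (assumed, not the paper's method): coarsen a
    # graph Laplacian L (n x n) and node feature matrix X (n x d) using a
    # hard partition of the n nodes into k supernodes, encoded by a
    # column-normalized loading matrix P, so that L_c = P^T L P, X_c = P^T X.
    def coarsen(L, X, assignment, k):
        n = L.shape[0]
        P = np.zeros((n, k))
        P[np.arange(n), assignment] = 1.0   # node i loads onto its supernode
        P /= np.sqrt(P.sum(axis=0))         # normalize each supernode column
        return P.T @ L @ P, P.T @ X         # coarsened Laplacian and features

    # Toy example: a 6-node path graph with 4-dimensional node features,
    # coarsened to 3 supernodes of two consecutive nodes each.
    A = np.diag(np.ones(5), 1)
    A = A + A.T                             # adjacency of the path graph
    L = np.diag(A.sum(axis=1)) - A          # combinatorial Laplacian
    X = np.random.rand(6, 4)
    L_c, X_c = coarsen(L, X, np.array([0, 0, 1, 1, 2, 2]), k=3)
    print(L_c.shape, X_c.shape)             # (3, 3) (3, 4)

In this toy construction, the spectral properties of L_c depend entirely on the chosen partition; the framework described above instead treats the coarsened graph and feature matrices as optimization variables, which is what allows a similarity guarantee to be enforced rather than merely checked after the fact.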


Published In

ICML'23: Proceedings of the 40th International Conference on Machine Learning
July 2023
43479 pages

Publisher

JMLR.org


