
Generation-based Multi-view Contrast for Self-supervised Graph Representation Learning

Published: 26 March 2024

Abstract

Graph contrastive learning has achieved remarkable results in the self-supervised representation learning of graph-structured data. Most graph contrastive learning methods construct contrastive samples by applying a perturbation function (i.e., perturbing the nodes or edges) to the original graph. However, such perturbation-based data augmentation randomly alters the inherent information (e.g., attributes or structure) of the graph. Consequently, after embedding the nodes of the perturbed graph, neither the validity of the contrastive samples nor the resulting performance of graph contrastive learning can be guaranteed. To this end, in this article, we propose a novel generation-based multi-view contrastive learning framework (GMVC) for self-supervised graph representation learning, which generates contrastive samples with a learned generator rather than a perturbation function. Specifically, after embedding the nodes of the original graph, we first employ random walks in the neighborhood of each anchor node to sample multiple relevant node sequences. We then utilize a Transformer to generate the representations of the anchor node's contrastive samples from the features and structures of the sampled node sequences. Finally, by maximizing the consistency between the anchor view and the generated views, we force the model to effectively encode graph information into the node embeddings. Extensive experiments on node classification and link prediction tasks over eight benchmark datasets verify the effectiveness of our generation-based multi-view graph contrastive learning method.
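To make the three-stage pipeline concrete (neighborhood random walks, a Transformer-based view generator, and a consistency objective), the following is a minimal PyTorch sketch. All specifics here are illustrative assumptions rather than the authors' implementation: the walk length, the use of a plain embedding table in place of a GNN encoder, a standard TransformerEncoder as the generator, and an InfoNCE-style loss as the consistency objective.

```python
# Hypothetical sketch of a GMVC-style pipeline; names and hyperparameters
# are illustrative, not taken from the paper's implementation.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

def random_walk(adj, start, length):
    """Sample one fixed-length random walk in the neighborhood of `start`."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(random.choice(adj[walk[-1]]))
    return walk

class ViewGenerator(nn.Module):
    """Transformer that turns a sampled node sequence into one generated view."""
    def __init__(self, dim, heads=2, layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, seq_emb):
        # seq_emb: (num_nodes, walk_length, dim) -> one pooled view per anchor
        return self.encoder(seq_emb).mean(dim=1)

def consistency_loss(anchor, generated, tau=0.5):
    """InfoNCE-style loss: each anchor's positive is its own generated view."""
    a = F.normalize(anchor, dim=1)
    g = F.normalize(generated, dim=1)
    logits = a @ g.t() / tau              # (N, N) cosine-similarity logits
    labels = torch.arange(a.size(0))      # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage: 5 nodes on a ring graph, 8-dim embeddings, walks of length 4.
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
emb = nn.Embedding(5, 8)                  # stands in for a GNN node encoder
gen = ViewGenerator(dim=8)
walks = torch.tensor([random_walk(adj, i, 4) for i in range(5)])
loss = consistency_loss(emb.weight, gen(emb(walks)))
loss.backward()
```

One design point worth noting: because the views are generated from sampled neighborhoods rather than from a perturbed copy of the graph, the original attributes and structure are never corrupted, which is the abstract's stated motivation for replacing perturbation-based augmentation.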

    Published In

    ACM Transactions on Knowledge Discovery from Data, Volume 18, Issue 5
    June 2024, 699 pages
    EISSN: 1556-472X
    DOI: 10.1145/3613659

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 26 March 2024
    Online AM: 09 February 2024
    Accepted: 28 January 2024
    Revised: 16 October 2023
    Received: 29 December 2022
    Published in TKDD Volume 18, Issue 5

    Author Tags

    1. Graph representation learning
    2. contrastive learning
    3. multi-view generation

    Qualifiers

    • Research-article
