Graph Self-Supervised Learning: A Survey

Published: 01 June 2023

Abstract

Deep learning on graphs has attracted significant interest recently. However, most existing work has focused on (semi-)supervised learning, which suffers from heavy label reliance, poor generalization, and weak robustness. To address these issues, self-supervised learning (SSL), which extracts informative knowledge through well-designed pretext tasks without relying on manual labels, has become a promising and trending learning paradigm for graph data. Unlike SSL in other domains such as computer vision and natural language processing, SSL on graphs has its own distinct background, design ideas, and taxonomies. Under the umbrella of graph self-supervised learning, we present a timely and comprehensive review of existing approaches that employ SSL techniques on graph data. We construct a unified framework that mathematically formalizes the paradigm of graph SSL. According to the objectives of their pretext tasks, we divide these approaches into four categories: generation-based, auxiliary property-based, contrast-based, and hybrid approaches. We further describe applications of graph SSL across various research fields and summarize the commonly used datasets, evaluation benchmarks, performance comparisons, and open-source code of graph SSL. Finally, we discuss the remaining challenges and potential future directions in this research field.
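As a concrete illustration of such a unified framework, the LaTeX sketch below formalizes the common two-stage graph SSL pipeline (self-supervised pre-training followed by supervised fine-tuning) and shows how a contrast-based pretext task can instantiate the SSL loss. The notation here ($f_\theta$ for the graph encoder, $p_\phi$ for the pretext decoder, $q_\psi$ for the downstream head, InfoNCE as the contrastive objective) is chosen for illustration and need not match the survey's exact formulation.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% A minimal sketch of the two-stage graph SSL paradigm. Notation is
% illustrative and may differ from the survey's own symbols:
%   f_theta : graph encoder      p_phi : pretext decoder
%   q_psi   : downstream head    D     : unlabeled graph dataset
\begin{align}
  % Stage 1: self-supervised pre-training via a pretext task
  \theta^{*},\, \phi^{*}
    &= \operatorname*{arg\,min}_{\theta,\phi}\;
       \mathcal{L}_{\mathrm{ssl}}\bigl(f_{\theta}, p_{\phi}, \mathcal{D}\bigr) \\
  % Stage 2: supervised fine-tuning on a downstream graph G with
  % labels y, initializing theta from the pre-trained theta*
  \theta^{**},\, \psi^{*}
    &= \operatorname*{arg\,min}_{\theta,\psi}\;
       \mathcal{L}_{\mathrm{sup}}\bigl(f_{\theta}, q_{\psi}, \mathcal{G}, y\bigr)
\end{align}
% Example: a contrast-based pretext task can instantiate L_ssl as an
% InfoNCE objective over node embeddings z_i and z_i' taken from two
% augmented views of the same graph; tau is a temperature parameter.
\begin{equation}
  \mathcal{L}_{\mathrm{ssl}}
    = -\frac{1}{N} \sum_{i=1}^{N}
      \log \frac{\exp\bigl(\operatorname{sim}(z_i, z_i')/\tau\bigr)}
                {\sum_{j=1}^{N} \exp\bigl(\operatorname{sim}(z_i, z_j')/\tau\bigr)}
\end{equation}
\end{document}

Under the same template, a generation-based method would instead instantiate the SSL loss with a reconstruction objective (e.g., over masked node features or removed edges), while an auxiliary property-based method would use a prediction loss over structural pseudo-labels such as node degrees or cluster assignments.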

Information & Contributors

Information

Published In

IEEE Transactions on Knowledge and Data Engineering, Volume 35, Issue 6
June 2023
1074 pages

Publisher

IEEE Educational Activities Department

United States

Publication History

Published: 01 June 2023

Qualifiers

  • Research-article


Cited By

  • (2024) Bootstrap Deep Metric for Seed Expansion in Attributed Networks. Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1629–1638. DOI: 10.1145/3626772.3657687. Online publication date: 10 Jul 2024.
  • (2024) Self-Supervised Node Representation Learning via Node-to-Neighbourhood Alignment. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 46, no. 6, pp. 4218–4233. DOI: 10.1109/TPAMI.2024.3358541. Online publication date: 25 Jan 2024.
  • (2024) Semi-Supervised Graph Contrastive Learning With Virtual Adversarial Augmentation. IEEE Transactions on Knowledge and Data Engineering, vol. 36, no. 8, pp. 4232–4244. DOI: 10.1109/TKDE.2024.3366396. Online publication date: 1 Aug 2024.
  • (2024) RARE: Robust Masked Graph Autoencoder. IEEE Transactions on Knowledge and Data Engineering, vol. 36, no. 10, pp. 5340–5353. DOI: 10.1109/TKDE.2023.3335222. Online publication date: 1 Oct 2024.
  • (2024) Network Controllability Perspectives on Graph Representation. IEEE Transactions on Knowledge and Data Engineering, vol. 36, no. 8, pp. 4116–4128. DOI: 10.1109/TKDE.2023.3331318. Online publication date: 1 Aug 2024.
  • (2024) DIOR: Learning to Hash With Label Noise Via Dual Partition and Contrastive Learning. IEEE Transactions on Knowledge and Data Engineering, vol. 36, no. 4, pp. 1502–1517. DOI: 10.1109/TKDE.2023.3312109. Online publication date: 1 Apr 2024.
  • (2024) Hierarchical Aggregations for High-Dimensional Multiplex Graph Embedding. IEEE Transactions on Knowledge and Data Engineering, vol. 36, no. 4, pp. 1624–1637. DOI: 10.1109/TKDE.2023.3305809. Online publication date: 1 Apr 2024.
  • (2024) Semi-supervised domain adaptation on graphs with contrastive learning and minimax entropy. Neurocomputing, vol. 580. DOI: 10.1016/j.neucom.2024.127469. Online publication date: 1 May 2024.
  • (2024) Contrastive Hawkes graph neural networks with dynamic sampling for event prediction. Neurocomputing, vol. 575. DOI: 10.1016/j.neucom.2024.127265. Online publication date: 28 Mar 2024.
  • (2024) Spatiotemporal feature learning for no-reference gaming content video quality assessment. Journal of Visual Communication and Image Representation, vol. 100. DOI: 10.1016/j.jvcir.2024.104118. Online publication date: 17 Jul 2024.
