Abstract
Graph Neural Networks (GNNs) have been shown to be vulnerable to adversarial examples in many works, which has drawn substantial research attention to their robustness and security. However, the reasons for the success of adversarial attacks and the intrinsic vulnerability of GNNs still remain unclear. The work presented here outlines an empirical study that further investigates these observations and provides several insights. Experimental results, analyzed across a variety of benchmark GNNs on two datasets, indicate that GNNs are indeed sensitive to adversarial attacks due to their non-robust message-passing functions. To expose the adversarial patterns, we introduce two measurements that depict the randomness of node labels and features in a graph, and observe that the neighborhood entropy increases significantly under adversarial attacks. Furthermore, we find that adversarially manipulated graphs typically tend to be much denser and of higher rank, with most of the inserted edges deliberately linking dissimilar nodes. The stronger the attack (e.g., Metattack), the more apparent these patterns become. In summary, our findings shed light on understanding adversarial attacks on graph data and suggest potential directions for enhancing the robustness of GNNs.
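As a rough illustration of the two kinds of measurements the abstract mentions, the sketch below computes a neighborhood label entropy and the rank of the adjacency matrix for a small graph. This is only a minimal sketch: the paper's exact definitions may differ, and the function names (`neighborhood_label_entropy`, `adjacency_rank`) and the use of Shannon entropy over neighbor labels are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import networkx as nx

def neighborhood_label_entropy(G, labels):
    """Average Shannon entropy of the label distribution over each node's
    neighborhood. An illustrative proxy for the paper's measurement."""
    entropies = []
    for v in G.nodes:
        neigh = list(G.neighbors(v))
        if not neigh:
            continue  # skip singleton nodes
        counts = np.bincount([labels[u] for u in neigh])
        p = counts[counts > 0] / counts.sum()
        entropies.append(-(p * np.log2(p)).sum())
    return float(np.mean(entropies))

def adjacency_rank(G):
    """Rank of the dense adjacency matrix (feasible only for small graphs)."""
    A = nx.to_numpy_array(G)
    return int(np.linalg.matrix_rank(A))

# Toy comparison on a clean graph vs. one with adversarial-style edges
# inserted between differently labeled (dissimilar) nodes.
G = nx.karate_club_graph()
labels = {v: int(G.nodes[v]["club"] == "Officer") for v in G.nodes}
print(neighborhood_label_entropy(G, labels), adjacency_rank(G))

G_atk = G.copy()
rng = np.random.default_rng(0)
nodes = list(G.nodes)
added = 0
while added < 20:
    u, v = map(int, rng.choice(nodes, size=2, replace=False))
    if labels[u] != labels[v] and not G_atk.has_edge(u, v):
        G_atk.add_edge(u, v)
        added += 1
print(neighborhood_label_entropy(G_atk, labels), adjacency_rank(G_atk))
```

On this toy example, both quantities rise after the dissimilar-node edges are added, mirroring the denser, higher-rank, higher-entropy pattern the abstract describes for attacked graphs.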
Notes
- 1.
The number of edges in the graphs perturbed by RAND and DICE remains almost unchanged, since these attacks add or remove edges with equal probability (a minimal sketch of a DICE-style perturbation follows these notes).
- 2.
In fact, removing edges may produce singleton nodes, and it is also less beneficial for attacks.
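For concreteness, here is a minimal sketch of a DICE-style perturbation as characterized in note 1, assuming the standard DICE semantics of deleting edges internally (between same-label nodes) and connecting externally (between different-label nodes); this is an assumption for illustration, not the authors' implementation.

```python
import random
import networkx as nx

def dice_perturb(G, labels, n_perturbations, seed=0):
    """DICE-style perturbation: with equal probability, delete an edge
    between same-label nodes or add one between different-label nodes,
    so the total edge count stays roughly unchanged (cf. note 1)."""
    rng = random.Random(seed)
    G = G.copy()
    nodes = list(G.nodes)
    done = 0
    while done < n_perturbations:
        if rng.random() < 0.5:  # delete internally
            u, v = rng.choice(list(G.edges))
            # Keep degrees above 1 to avoid creating singletons (cf. note 2).
            if labels[u] == labels[v] and G.degree(u) > 1 and G.degree(v) > 1:
                G.remove_edge(u, v)
                done += 1
        else:  # connect externally
            u, v = rng.sample(nodes, 2)
            if labels[u] != labels[v] and not G.has_edge(u, v):
                G.add_edge(u, v)
                done += 1
    return G
```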
References
Bojchevski, A., Günnemann, S.: Adversarial attacks on node embeddings via graph poisoning. In: ICML, pp. 695–704 (2019)
Cai, D., Shao, Z., He, X., Yan, X., Han, J.: Mining hidden community in heterogeneous social networks. In: Proceedings of the 3rd International Workshop on Link Discovery, pp. 58–65. ACM (2005)
Chang, H., et al.: A restricted black-box adversarial framework towards attacking graph embedding models. In: AAAI, pp. 3389–3396. AAAI Press (2020)
Chen, L., et al.: A survey of adversarial learning on graph. arXiv preprint arXiv:2003.05730 (2020)
Chen, L., Liu, Y., He, X., Gao, L., Zheng, Z.: Matching user with item set: collaborative bundle recommendation with deep attention network. In: IJCAI, pp. 2095–2101 (2019)
Chen, L., Liu, Y., Zheng, Z., Yu, P.: Heterogeneous neural attentive factorization machine for rating prediction. In: CIKM, pp. 833–842. ACM (2018)
Chiang, W.L., Liu, X., Si, S., Li, Y., Bengio, S., Hsieh, C.J.: Cluster-GCN: an efficient algorithm for training deep and large graph convolutional networks. In: KDD, pp. 257–266 (2019)
Dai, H., et al.: Adversarial attack on graph structured data. In: ICML, pp. 1123–1132. PMLR (2018)
Entezari, N., Al-Sayouri, S.A., Darvishzadeh, A., Papalexakis, E.E.: All you need is low (rank): defending against adversarial attacks on graphs. In: WSDM, pp. 169–177 (2020)
Errica, F., Podda, M., Bacciu, D., Micheli, A.: A fair comparison of graph neural networks for graph classification. In: ICLR (2020)
Gilmer, J., Schoenholz, S.S., Riley, P.F., Vinyals, O., Dahl, G.E.: Neural message passing for quantum chemistry. In: ICML, pp. 1263–1272. JMLR.org (2017)
Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: ICLR (2015)
Hamilton, W., Ying, Z., Leskovec, J.: Inductive representation learning on large graphs. In: NIPS, pp. 1024–1034 (2017)
Jin, W., Li, Y., Xu, H., Wang, Y., Tang, J.: Adversarial attacks and defenses on graphs: a review and empirical study. arXiv preprint arXiv:2003.00653 (2020)
Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. In: ICLR (2017)
Li, J., Zhang, H., Han, Z., Rong, Y., Cheng, H., Huang, J.: Adversarial attack on community detection by hiding individuals. In: WWW, pp. 917–927. ACM/IW3C2 (2020)
Loukas, A.: What graph neural networks cannot learn: depth vs width. In: ICLR. OpenReview.net (2020)
Nickel, M., Rosasco, L., Poggio, T.: Holographic embeddings of knowledge graphs. In: Thirtieth AAAI Conference on Artificial Intelligence (2016)
Sen, P., Namata, G., Bilgic, M., Getoor, L., Galligher, B., Eliassi-Rad, T.: Collective classification in network data. AI Mag. 29(3), 93 (2008)
Sun, L., Dou, Y., Yang, C., Wang, J., Yu, P.S., Li, B.: Adversarial attack and defense on graph data: a survey. arXiv preprint arXiv:1812.10528 (2018)
Wu, F., de Souza Jr., A.H., Zhang, T., Fifty, C., Yu, T., Weinberger, K.Q.: Simplifying graph convolutional networks. In: ICML, vol. 97, pp. 6861–6871. PMLR (2019)
Wu, H., Wang, C., Tyshetskiy, Y., Docherty, A., Lu, K., Zhu, L.: Adversarial examples for graph data: deep insights into attack and defense. In: IJCAI, pp. 4816–4823 (2019)
Xie, Y., Li, S., Yang, C., Wong, R.C., Han, J.: When do GNNs work: understanding and improving neighborhood aggregation. In: IJCAI, pp. 1303–1309 (2020)
Xu, K., et al.: Topology attack and defense for graph neural networks: an optimization perspective. In: IJCAI, pp. 3961–3967 (2019)
Xu, K., Hu, W., Leskovec, J., Jegelka, S.: How powerful are graph neural networks? In: ICLR (2019)
Ye, F., Liu, J., Chen, C., Ling, G., Zheng, Z., Zhou, Y.: Identifying influential individuals on large-scale social networks: a community based approach. IEEE Access 6, 47240–47257 (2018)
Zhu, D., Zhang, Z., Cui, P., Zhu, W.: Robust graph convolutional networks against adversarial attacks. In: KDD, pp. 1399–1407. ACM (2019)
Zügner, D., Akbarnejad, A., Günnemann, S.: Adversarial attacks on neural networks for graph data. In: KDD, pp. 2847–2856. ACM (2018)
Zügner, D., Günnemann, S.: Adversarial attacks on graph neural networks via meta learning. In: ICLR (2019)
Acknowledgements
The work described in this paper was supported by the Key-Area Research and Development Program of Guangdong Province (2020B010165003), the National Natural Science Foundation of China (61702568, U1711267), the Guangdong Basic and Applied Basic Research Foundation (2020A1515010831), the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (2017ZT07X355), and the Key Research and Development Program of Guangdong Province of China (2018B030325001).
Copyright information
© 2021 Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Li, J., Gu, Z., Peng, Q., Xu, K., Chen, L., Zheng, Z. (2021). Deep Insights into Graph Adversarial Learning: An Empirical Study Perspective. In: Wang, Y. (eds) Human Brain and Artificial Intelligence. HBAI 2021. Communications in Computer and Information Science, vol 1369. Springer, Singapore. https://doi.org/10.1007/978-981-16-1288-6_6
DOI: https://doi.org/10.1007/978-981-16-1288-6_6
Publisher Name: Springer, Singapore
Print ISBN: 978-981-16-1287-9
Online ISBN: 978-981-16-1288-6
eBook Packages: Computer Science, Computer Science (R0)