
Deep Insights into Graph Adversarial Learning: An Empirical Study Perspective

Conference paper in Human Brain and Artificial Intelligence (HBAI 2021).

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1369).

Abstract

Graph Neural Networks (GNNs) have been shown to be vulnerable to adversarial examples in many works, which has led researchers to pay substantial attention to their robustness and security. However, the reasons for the success of adversarial attacks and the intrinsic vulnerability of GNNs remain unclear. The work presented here outlines an empirical study that further investigates these observations and provides several insights. Experimental results, analyzed across a variety of benchmark GNNs on two datasets, indicate that GNNs are sensitive to adversarial attacks due to their non-robust message functions. To expose the adversarial patterns, we introduce two measurements that capture the randomness of node labels and features in a graph, and observe that the neighborhood entropy increases significantly under adversarial attacks. Furthermore, we find that adversarially manipulated graphs tend to be denser and of higher rank, with most dissimilar nodes intentionally linked. The stronger the attack (e.g., Metattack), the more apparent these patterns become. In summary, our findings shed light on understanding adversarial attacks on graph data and point toward potential advances in enhancing the robustness of GNNs.
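The neighborhood-entropy measurement described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' exact definition: for each node, compute the Shannon entropy of the label distribution over its neighbors, then average across the graph. The function name and the dict-based graph representation are assumptions for this sketch.

```python
import math
from collections import Counter

def neighborhood_label_entropy(adj, labels):
    """Average Shannon entropy of neighbor-label distributions.

    adj: dict mapping node -> iterable of neighbor nodes
    labels: dict mapping node -> class label
    """
    entropies = []
    for node, neighbors in adj.items():
        counts = Counter(labels[n] for n in neighbors)
        total = sum(counts.values())
        if total == 0:
            continue  # skip isolated nodes
        # Shannon entropy of the neighbor-label distribution for this node
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        entropies.append(h)
    return sum(entropies) / len(entropies) if entropies else 0.0
```

In a perfectly homophilous neighborhood (all neighbors share one label) the entropy is zero; adversarial edges that link dissimilar nodes mix the neighbor labels and drive the entropy up, which matches the pattern the paper reports.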


Notes

  1. The edges of graphs perturbed by RAND and DICE remain almost unchanged, since these methods randomly choose to add or remove edges with equal probability.

  2. In fact, removing edges may result in singleton nodes, and it is also not beneficial for attacks.
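The behavior in Note 1 can be illustrated with a minimal RAND-style perturbation. This is a sketch under assumptions (function name, undirected edges stored as sorted tuples), not the implementation used in the paper: each perturbation step adds or removes an edge with probability 1/2, so the edge count performs a symmetric random walk and stays close to the original on average.

```python
import random

def rand_perturb(edges, num_nodes, budget, seed=0):
    """Randomly add or remove `budget` edges with equal probability.

    edges: iterable of (u, v) tuples with u < v (undirected graph)
    num_nodes: number of nodes, labeled 0..num_nodes-1
    budget: number of perturbation steps
    """
    rng = random.Random(seed)
    edge_set = set(edges)
    for _ in range(budget):
        if rng.random() < 0.5 and edge_set:
            # remove a uniformly chosen existing edge
            edge_set.remove(rng.choice(sorted(edge_set)))
        else:
            # add a random (possibly already present) edge
            u, v = rng.sample(range(num_nodes), 2)
            edge_set.add((min(u, v), max(u, v)))
    return edge_set
```

Because additions and removals are equiprobable, the final edge count differs from the original by at most `budget` and by roughly zero in expectation, which is why such random baselines barely change the graph's edge statistics.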


Acknowledgements

The work described in this paper was supported by the Key-Area Research and Development Program of Guangdong Province (2020B010165003), the National Natural Science Foundation of China (61702568, U1711267), the Guangdong Basic and Applied Basic Research Foundation (2020A1515010831), the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (2017ZT07X355), and the Key Research and Development Program of Guangdong Province of China (2018B030325001).

Author information


Correspondence to Liang Chen.


Copyright information

© 2021 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Li, J., Gu, Z., Peng, Q., Xu, K., Chen, L., Zheng, Z. (2021). Deep Insights into Graph Adversarial Learning: An Empirical Study Perspective. In: Wang, Y. (eds) Human Brain and Artificial Intelligence. HBAI 2021. Communications in Computer and Information Science, vol 1369. Springer, Singapore. https://doi.org/10.1007/978-981-16-1288-6_6


  • DOI: https://doi.org/10.1007/978-981-16-1288-6_6

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-16-1287-9

  • Online ISBN: 978-981-16-1288-6

  • eBook Packages: Computer Science (R0)
