
Adversarial Attacks on Graph Neural Networks: Perturbations and their Patterns

Published: 21 June 2020

Abstract

Deep learning models for graphs have achieved strong performance on the task of node classification. Despite their proliferation, little is known about their robustness to adversarial attacks. Yet, in domains where they are likely to be used, e.g., the web, adversaries are common. Can deep learning models for graphs be easily fooled? In this work, we present a study of adversarial attacks on attributed graphs, focusing specifically on models that exploit ideas of graph convolutions. In addition to attacks at test time, we tackle the more challenging class of poisoning/causative attacks, which target the training phase of a machine learning model. We generate adversarial perturbations of the nodes' features and the graph structure, thus taking the dependencies between instances into account. Moreover, we ensure that the perturbations remain unnoticeable by preserving important data characteristics. To cope with the underlying discrete domain, we propose Nettack, an efficient algorithm exploiting incremental computations. Our experimental study shows that the accuracy of node classification drops significantly even under only a few perturbations. Moreover, our attacks are transferable: the learned attacks generalize to other state-of-the-art node classification models and to unsupervised approaches, and they succeed even when only limited knowledge about the graph is available. For the first time, we identify important patterns of adversarial attacks on graph neural networks (GNNs), a first step toward being able to detect such attacks.
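The attack strategy summarized above can be illustrated with a minimal sketch: given a surrogate linearized two-layer GCN, greedily apply the single edge flip that most reduces the target node's classification margin, up to a fixed budget. Everything below is an illustrative assumption for clarity, not the paper's implementation: the function names and toy setup are hypothetical, the candidate scoring here naively recomputes the full forward pass (whereas Nettack uses incremental score updates), and the unnoticeability constraints (e.g., preserving the degree distribution) and feature perturbations are omitted.

```python
import numpy as np

def gcn_logits(A, X, W):
    # Linearized two-layer GCN surrogate: logits = (D^-1/2 (A+I) D^-1/2)^2 X W.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))  # symmetric normalization
    return A_norm @ A_norm @ X @ W

def margin(logits, target, label):
    # Margin of the target node: true-class score minus the best
    # competing class (negative => the node is misclassified).
    others = np.delete(logits[target], label)
    return logits[target, label] - others.max()

def greedy_structure_attack(A, X, W, target, label, budget):
    # Greedily flip the edge incident to the target that most reduces
    # the target's margin; one flip per iteration, at most `budget` flips.
    A = A.copy()
    n = A.shape[0]
    for _ in range(budget):
        best_m = margin(gcn_logits(A, X, W), target, label)
        best_v = None
        for v in range(n):
            if v == target:
                continue
            A[target, v] = A[v, target] = 1 - A[target, v]  # try flip
            m = margin(gcn_logits(A, X, W), target, label)
            A[target, v] = A[v, target] = 1 - A[target, v]  # undo flip
            if m < best_m:
                best_m, best_v = m, v
        if best_v is None:  # no flip improves the attack objective
            break
        A[target, best_v] = A[best_v, target] = 1 - A[target, best_v]
    return A
```

On a toy graph of two triangles with class-indicative features, a budget of two flips already lowers the target's margin below its clean value, mirroring the paper's finding that few perturbations suffice.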



Published In

cover image ACM Transactions on Knowledge Discovery from Data
ACM Transactions on Knowledge Discovery from Data  Volume 14, Issue 5
Special Issue on KDD 2018, Regular Papers and Survey Paper
October 2020
376 pages
ISSN:1556-4681
EISSN:1556-472X
DOI:10.1145/3407672
Issue’s Table of Contents
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 21 June 2020
Online AM: 07 May 2020
Accepted: 01 April 2020
Revised: 01 March 2020
Received: 01 March 2019
Published in TKDD Volume 14, Issue 5


Author Tags

  1. Relational data
  2. adversarial attacks
  3. graph neural networks
  4. poisoning attacks

Qualifiers

  • Research-article
  • Research
  • Refereed

Article Metrics

  • Downloads (last 12 months): 401
  • Downloads (last 6 weeks): 40

Reflects downloads up to 22 Sep 2024

Cited By

  • (2024) An Imperceptible and Owner-unique Watermarking Method for Graph Neural Networks. Proceedings of the ACM Turing Award Celebration Conference - China 2024, 108-113. DOI: 10.1145/3674399.3674443. Online publication date: 5-Jul-2024.
  • (2024) AdverSPAM: Adversarial SPam Account Manipulation in Online Social Networks. ACM Transactions on Privacy and Security 27, 2, 1-31. DOI: 10.1145/3643563. Online publication date: 26-Jan-2024.
  • (2024) Improving Robustness of Hyperbolic Neural Networks by Lipschitz Analysis. Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 1713-1724. DOI: 10.1145/3637528.3671875. Online publication date: 25-Aug-2024.
  • (2024) Fast Inference of Removal-Based Node Influence. Proceedings of the ACM Web Conference 2024, 422-433. DOI: 10.1145/3589334.3645389. Online publication date: 13-May-2024.
  • (2024) Adversarial Attacks on Graph Neural Networks Based Spatial Resource Management in P2P Wireless Communications. IEEE Transactions on Vehicular Technology 73, 6, 8847-8863. DOI: 10.1109/TVT.2024.3360145. Online publication date: Jun-2024.
  • (2024) A New Strategy of Graph Structure Attack: Multi-View Perturbation Candidate Edge Learning. IEEE Transactions on Network Science and Engineering 11, 5, 4158-4168. DOI: 10.1109/TNSE.2024.3400860. Online publication date: Sep-2024.
  • (2024) Graph Adversarial Immunization for Certifiable Robustness. IEEE Transactions on Knowledge and Data Engineering 36, 4, 1597-1610. DOI: 10.1109/TKDE.2023.3311105. Online publication date: Apr-2024.
  • (2024) Detecting Targets of Graph Adversarial Attacks With Edge and Feature Perturbations. IEEE Transactions on Computational Social Systems 11, 3, 3218-3231. DOI: 10.1109/TCSS.2023.3344642. Online publication date: Jun-2024.
  • (2024) GrOVe: Ownership Verification of Graph Neural Networks using Embeddings. 2024 IEEE Symposium on Security and Privacy (SP), 2460-2477. DOI: 10.1109/SP54263.2024.00050. Online publication date: 19-May-2024.
  • (2024) Black-Box Attacks on Graph Neural Networks via White-Box Methods With Performance Guarantees. IEEE Internet of Things Journal 11, 10, 18193-18204. DOI: 10.1109/JIOT.2024.3360982. Online publication date: 15-May-2024.
