SAM: Query-efficient Adversarial Attacks against Graph Neural Networks

Published: 13 November 2023

Abstract

Recent studies indicate that Graph Neural Networks (GNNs) are vulnerable to adversarial attacks. In particular, adversarially perturbing the graph structure, e.g., flipping edges, can cause a marked degradation of GNNs’ accuracy. In practice, efficiency and stealthiness are two key criteria for evaluating an attack method. However, most prevailing graph structure-based attack methods are query-intensive, which limits their practical use. Furthermore, while the stealthiness of perturbations has been discussed in previous studies, most of them focus on attacks targeting a single node. To fill this research gap, this article presents SAM (Saturation adversarial Attack with Meta-gradient), a global attack method against GNNs. We first propose an enhanced meta-learning-based optimization method to obtain useful gradient information about graph structural perturbations. Then, leveraging the notion of saturation attack, we devise an effective algorithm that determines the perturbations from the derived meta-gradients. In addition, to ensure stealthiness, we introduce a similarity constraint that suppresses the number of perturbed edges. Thorough experiments demonstrate that our method effectively degrades the accuracy of GNNs with a small number of queries, and that, while achieving a higher misclassification rate, the perturbations it produces remain unnoticeable.
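
To make the meta-gradient idea in the abstract concrete, the sketch below shows a generic, simplified version of the approach: differentiating an attack loss through a short, unrolled training run of a surrogate two-layer GCN (in PyTorch) and greedily selecting edge flips by their first-order scores. This is not the paper's SAM algorithm; the surrogate architecture, the fixed flip budget, and all function names and hyperparameters are illustrative assumptions, and SAM additionally relies on a saturation-attack strategy and a similarity constraint to choose and limit the perturbations.

```python
# Hedged sketch of a meta-gradient structure attack; names and settings are
# illustrative, not the paper's implementation.
import torch
import torch.nn.functional as F


def normalize_adj(adj):
    """Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}."""
    adj = adj + torch.eye(adj.size(0), device=adj.device)
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)


def gcn_logits(adj_norm, x, w1, w2):
    """Two-layer GCN surrogate: A' ReLU(A' X W1) W2."""
    return adj_norm @ torch.relu(adj_norm @ x @ w1) @ w2


def meta_gradient(adj, x, labels, train_mask,
                  hidden=16, inner_steps=10, inner_lr=0.1):
    """Unroll surrogate training and return d(attack loss)/d(adjacency).

    The attack loss here is simply the surrogate's training loss after the
    unrolled updates; the attacker wants to maximize it.
    """
    adj = adj.detach().clone().requires_grad_(True)
    n_feat, n_class = x.size(1), int(labels.max()) + 1
    w1 = (0.1 * torch.randn(n_feat, hidden)).requires_grad_(True)
    w2 = (0.1 * torch.randn(hidden, n_class)).requires_grad_(True)

    adj_norm = normalize_adj(adj)
    for _ in range(inner_steps):
        loss = F.cross_entropy(gcn_logits(adj_norm, x, w1, w2)[train_mask],
                               labels[train_mask])
        # create_graph=True keeps the inner updates differentiable w.r.t. adj.
        g1, g2 = torch.autograd.grad(loss, (w1, w2), create_graph=True)
        w1, w2 = w1 - inner_lr * g1, w2 - inner_lr * g2

    attack_loss = F.cross_entropy(gcn_logits(adj_norm, x, w1, w2)[train_mask],
                                  labels[train_mask])
    return torch.autograd.grad(attack_loss, adj)[0]


def greedy_flips(adj, meta_grad, budget):
    """Pick up to `budget` edge flips with the largest estimated gain.

    Flipping (i, j) moves A_ij towards 1 if the edge is absent and towards 0
    if it is present, so the first-order gain is grad * (1 - 2 * A_ij).
    """
    scores = meta_grad * (1 - 2 * adj)
    scores = torch.triu(scores, diagonal=1)   # undirected graph, no self-loops
    top = torch.topk(scores.flatten(), budget).indices
    return [divmod(int(i), adj.size(0)) for i in top]
```

A full poisoning attack would typically recompute the meta-gradient after each flip (or each small batch of flips) and symmetrize the adjacency matrix before retraining the victim model; the sketch keeps a single gradient pass and a hard flip budget for brevity, whereas the paper bounds the perturbations with a similarity constraint.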


Cited By

  • (2024) GMFITD: Graph Meta-Learning for Effective Few-Shot Insider Threat Detection. IEEE Transactions on Information Forensics and Security 19, 7161–7175. DOI: 10.1109/TIFS.2024.3430106. Online publication date: 1-Jan-2024.


Published In

ACM Transactions on Privacy and Security, Volume 26, Issue 4
November 2023
260 pages
ISSN: 2471-2566
EISSN: 2471-2574
DOI: 10.1145/3614236
  • Editor: Ninghui Li

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 13 November 2023
Online AM: 27 July 2023
Accepted: 17 July 2023
Revised: 29 May 2023
Received: 05 November 2022
Published in TOPS Volume 26, Issue 4


Author Tags

  1. Adversarial attack
  2. poisoning attack
  3. graph neural network
  4. topology attack

Qualifiers

  • Research-article

Funding Sources

  • Australian Research Council (ARC)

