Research Article | Public Access | DOI: 10.1145/3450569.3463560

Backdoor Attacks to Graph Neural Networks

Published: 11 June 2021

Abstract

In this work, we propose the first backdoor attack on graph neural networks (GNNs). Specifically, we propose a subgraph-based backdoor attack on GNNs for graph classification. In our backdoor attack, a GNN classifier predicts an attacker-chosen target label for a testing graph once a predefined subgraph is injected into the testing graph. Our empirical results on three real-world graph datasets show that our backdoor attacks are effective while having a small impact on a GNN's prediction accuracy for clean testing graphs. Moreover, we generalize a randomized-smoothing-based certified defense to defend against our backdoor attacks. Our empirical results show that the defense is effective in some cases but ineffective in others, highlighting the need for new defenses against our backdoor attacks.
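To make the attack concrete, here is a minimal Python sketch (using networkx) of the subgraph-trigger idea the abstract describes. The helper names (make_trigger, inject_trigger, poison) and the parameter values (trigger size, density, poisoning fraction) are illustrative assumptions, not the paper's exact design:

```python
import random
import networkx as nx

def make_trigger(trigger_size=5, density=0.8, seed=0):
    """A fixed Erdos-Renyi subgraph used as the backdoor trigger (assumed shape)."""
    return nx.gnp_random_graph(trigger_size, density, seed=seed)

def inject_trigger(graph, trigger, rng):
    """Embed the trigger: pick random victim nodes and rewire the edges
    among them to match the trigger's edge pattern."""
    g = graph.copy()
    victims = rng.sample(list(g.nodes()), trigger.number_of_nodes())
    # Drop the existing edges among the victim nodes ...
    g.remove_edges_from(list(g.subgraph(victims).edges()))
    # ... and add edges mirroring the trigger's structure.
    g.add_edges_from((victims[i], victims[j]) for i, j in trigger.edges())
    return g

def poison(dataset, target_label, poison_frac=0.05, seed=0):
    """Inject the trigger into a fraction of training graphs and relabel
    them with the attacker-chosen target label."""
    rng = random.Random(seed)
    trigger = make_trigger()
    return [
        (inject_trigger(g, trigger, rng), target_label)
        if rng.random() < poison_frac else (g, y)
        for g, y in dataset
    ]
```

A GNN trained on the poisoned dataset tends to associate the trigger pattern with the target label, so injecting the same subgraph into a clean testing graph at test time flips its prediction.

The randomized-smoothing defense can be sketched in the same spirit: classify many randomly subsampled copies of a testing graph and take a majority vote, so that a small injected trigger survives in only a fraction of the samples. This is one plausible instantiation under assumed knobs (keep_prob, n_samples); the certification step that turns vote counts into a provable robustness bound is omitted:

```python
import random
from collections import Counter

import networkx as nx

def subsample(graph, keep_prob, rng):
    """Keep each edge of the graph independently with probability keep_prob."""
    g = nx.Graph()
    g.add_nodes_from(graph.nodes(data=True))
    g.add_edges_from(e for e in graph.edges() if rng.random() < keep_prob)
    return g

def smoothed_predict(base_classifier, graph, keep_prob=0.5, n_samples=100, seed=0):
    """Majority vote of the base GNN classifier over randomized subsamples."""
    rng = random.Random(seed)
    votes = Counter(
        base_classifier(subsample(graph, keep_prob, rng)) for _ in range(n_samples)
    )
    return votes.most_common(1)[0][0]
```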



Published In

SACMAT '21: Proceedings of the 26th ACM Symposium on Access Control Models and Technologies
June 2021, 194 pages
ISBN: 9781450383653
DOI: 10.1145/3450569

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. backdoor attack
  2. graph neural networks


Conference

SACMAT '21

Acceptance Rates

Overall Acceptance Rate: 177 of 597 submissions, 30%

Article Metrics

  • Downloads (last 12 months): 835
  • Downloads (last 6 weeks): 112
Reflects downloads up to 03 Oct 2024

Cited By
  • (2024) Backdoor Attacks on Graph Neural Networks Trained with Data Augmentation. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E107.A(3), 355-358. DOI: 10.1587/transfun.2023CIL0007. Online publication date: 1-Mar-2024.
  • (2024) Survey of Federated Learning Models for Spatial-Temporal Mobility Applications. ACM Transactions on Spatial Algorithms and Systems, 10(3), 1-39. DOI: 10.1145/3666089. Online publication date: 1-Jun-2024.
  • (2024) Cross-Context Backdoor Attacks against Graph Prompt Learning. Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2094-2105. DOI: 10.1145/3637528.3671956. Online publication date: 25-Aug-2024.
  • (2024) Rethinking Graph Backdoor Attacks: A Distribution-Preserving Perspective. Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 4386-4397. DOI: 10.1145/3637528.3671910. Online publication date: 25-Aug-2024.
  • (2024) Unveiling the Threat: Investigating Distributed and Centralized Backdoor Attacks in Federated Graph Neural Networks. Digital Threats: Research and Practice, 5(2), 1-29. DOI: 10.1145/3633206. Online publication date: 20-Jun-2024.
  • (2024) Efficient, Direct, and Restricted Black-Box Graph Evasion Attacks to Any-Layer Graph Neural Networks via Influence Function. Proceedings of the 17th ACM International Conference on Web Search and Data Mining, 693-701. DOI: 10.1145/3616855.3635826. Online publication date: 4-Mar-2024.
  • (2024) GNNFingers: A Fingerprinting Framework for Verifying Ownerships of Graph Neural Networks. Proceedings of the ACM Web Conference 2024, 652-663. DOI: 10.1145/3589334.3645489. Online publication date: 13-May-2024.
  • (2024) Dyn-Backdoor: Backdoor Attack on Dynamic Link Prediction. IEEE Transactions on Network Science and Engineering, 11(1), 525-542. DOI: 10.1109/TNSE.2023.3301673. Online publication date: Jan-2024.
  • (2024) Backdoor Learning: A Survey. IEEE Transactions on Neural Networks and Learning Systems, 35(1), 5-22. DOI: 10.1109/TNNLS.2022.3182979. Online publication date: Jan-2024.
  • (2024) Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs. IEEE Transactions on Computational Social Systems, 11(2), 2479-2493. DOI: 10.1109/TCSS.2023.3267094. Online publication date: Apr-2024.
