DOI: 10.1145/3336191.3371851
research-article
Public Access

Transferring Robustness for Graph Neural Network Against Poisoning Attacks

Published: 22 January 2020
Abstract

Graph neural networks (GNNs) are widely used in many applications. However, their robustness against adversarial attacks has been criticized. Prior studies show that unnoticeable modifications to graph topology or node features can significantly reduce the performance of GNNs. Designing graph neural networks that are robust to poisoning attacks is very challenging, and several efforts have been made. Existing work aims to reduce the negative impact of adversarial edges using only the poisoned graph, which is sub-optimal because it fails to discriminate adversarial edges from normal ones. On the other hand, clean graphs from domains similar to that of the target poisoned graph are usually available in the real world. By perturbing these clean graphs, we create supervised knowledge for learning to detect adversarial edges, which elevates the robustness of GNNs. However, this potential of clean graphs is neglected by existing work. To this end, we investigate a novel problem of improving the robustness of GNNs against poisoning attacks by exploring clean graphs. Specifically, we propose PA-GNN, which relies on a penalized aggregation mechanism that directly restricts the negative impact of adversarial edges by assigning them lower attention coefficients. To optimize PA-GNN for a poisoned graph, we design a meta-optimization algorithm that trains PA-GNN to penalize perturbations using clean graphs and their adversarial counterparts, and transfers this ability to improve the robustness of PA-GNN on the poisoned graph. Experimental results on four real-world datasets demonstrate the robustness of PA-GNN against poisoning attacks on graphs.
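    The abstract does not specify the exact loss or training procedure, but the penalized-aggregation idea can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the authors' implementation: the GAT-style layer, the hinge-style penalty, and names such as `adversarial_mask` and `margin` are invented for this sketch; only the high-level idea (assigning lower attention coefficients to known adversarial edges, with that ability learned from perturbed clean graphs) comes from the paper.

```python
# Minimal sketch of a penalized aggregation mechanism, assuming a GAT-style
# layer and a hinge-style penalty. Illustration only; the exact loss used by
# PA-GNN is defined in the paper, not here.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PenalizedAttentionLayer(nn.Module):
    """Attention-based aggregation whose per-edge logits can be penalized."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, edge_index):
        # x: (N, in_dim) node features; edge_index: (2, E) source/target indices.
        h = self.W(x)
        src, dst = edge_index
        logits = F.leaky_relu(self.a(torch.cat([h[src], h[dst]], dim=-1))).squeeze(-1)

        # Softmax over the incoming edges of each target node.
        alpha = torch.zeros_like(logits)
        for node in dst.unique():
            mask = dst == node
            alpha[mask] = F.softmax(logits[mask], dim=0)

        # Aggregate neighbor messages weighted by attention coefficients.
        out = torch.zeros_like(h)
        out.index_add_(0, dst, alpha.unsqueeze(-1) * h[src])
        return out, logits


def attention_penalty(logits, adversarial_mask, margin=1.0):
    # Hypothetical hinge penalty: push logits of known adversarial edges at
    # least `margin` below the mean logit of the normal edges.
    if adversarial_mask.sum() == 0:
        return logits.new_zeros(())
    normal_mean = logits[~adversarial_mask].mean()
    return F.relu(logits[adversarial_mask] - normal_mean + margin).mean()


# Toy usage on a tiny graph with one edge labeled as adversarial.
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 0, 1, 2]])
adversarial_mask = torch.tensor([False, False, True, False])

layer = PenalizedAttentionLayer(8, 16)
out, logits = layer(x, edge_index)
loss = attention_penalty(logits, adversarial_mask)  # added to the task loss during training
loss.backward()
```

    In the paper's setup, a penalty of this kind would be combined with the node-classification loss on clean graphs and their perturbed counterparts inside a MAML-style meta-optimization loop, and the resulting model would then be fine-tuned on the target poisoned graph; the precise formulation is given in the paper, not in this sketch.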


    Published In

    WSDM '20: Proceedings of the 13th International Conference on Web Search and Data Mining
    January 2020
    950 pages
    ISBN:9781450368223
    DOI:10.1145/3336191
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].


    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. adversarial defense
    2. robust graph neural networks

    Qualifiers

    • Research-article


    Conference

    WSDM '20

    Acceptance Rates

    Overall Acceptance Rate 498 of 2,863 submissions, 17%
