Research Article
DOI: 10.1145/3485447.3512179

Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation

Published: 25 April 2022
Abstract

Graph contrastive learning is the state-of-the-art unsupervised graph representation learning framework and has shown performance comparable with supervised approaches. However, whether graph contrastive learning is robust to adversarial attacks remains an open problem, because most existing graph adversarial attacks are supervised models: they rely heavily on labels and can therefore evaluate graph contrastive learning only in specific scenarios. For unsupervised graph representation methods such as graph contrastive learning, labels are difficult to acquire in real-world scenarios, which makes traditional supervised graph attack methods hard to apply when testing their robustness. In this paper, we propose a novel unsupervised, gradient-based adversarial attack on graph contrastive learning that does not rely on labels. We compute the gradients of the adjacency matrices of the two views and flip edges by gradient ascent to maximize the contrastive loss. In this way, we fully exploit the multiple views generated by graph contrastive learning models and pick the most informative edges without knowing their labels, which allows our attack to adapt to a wider range of downstream tasks. Extensive experiments show that our attack outperforms unsupervised baseline attacks and performs comparably with supervised attacks on multiple downstream tasks, including node classification and link prediction. We further show that our attack transfers to other graph representation models as well.
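
The core of the attack described in the abstract is unsupervised gradient ascent on the graph structure: generate two augmented views, back-propagate the contrastive loss to a relaxed adjacency matrix, and flip the edges whose gradients most increase that loss. Below is a minimal PyTorch sketch of this idea; the encoder, augmentation scheme, loss, and all names (simple_gcn_encode, nt_xent_loss, poison_step) are simplified assumptions for illustration, not the authors' released implementation.

```python
# Minimal sketch (illustrative only): back-propagate a contrastive loss to the adjacency
# matrix of two augmented views and flip the highest-gradient entries to maximize the loss.
import torch
import torch.nn.functional as F


def simple_gcn_encode(adj, x, w1, w2):
    """Two-layer GCN-style encoder with symmetric normalization (hypothetical surrogate)."""
    a_hat = adj + torch.eye(adj.size(0))                       # add self-loops
    d_inv_sqrt = a_hat.sum(dim=1).clamp(min=1e-8).pow(-0.5)
    a_norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
    h = torch.relu(a_norm @ x @ w1)
    return a_norm @ h @ w2


def nt_xent_loss(z1, z2, tau=0.5):
    """InfoNCE-style loss: the same node in the two views is the positive pair."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                                    # (N, N) cross-view similarities
    return F.cross_entropy(sim, torch.arange(z1.size(0)))


def poison_step(adj, feats, weights, drop_p=0.2, n_flips=10):
    """Flip the n_flips entries of adj whose change most increases the contrastive loss."""
    w1, w2 = weights
    adj = adj.clone().requires_grad_(True)

    # Two stochastic views via random edge masking (a common GCL augmentation).
    mask1 = (torch.rand_like(adj) > drop_p).float()
    mask2 = (torch.rand_like(adj) > drop_p).float()
    z1 = simple_gcn_encode(adj * mask1, feats, w1, w2)
    z2 = simple_gcn_encode(adj * mask2, feats, w1, w2)

    nt_xent_loss(z1, z2).backward()                            # gradients w.r.t. adj

    with torch.no_grad():
        grad = adj.grad
        # Adding an absent edge helps if its gradient is positive; removing an
        # existing edge helps if its gradient is negative.
        score = grad * (1 - 2 * adj.detach())
        score.fill_diagonal_(float("-inf"))                    # never flip self-loops
        flat_idx = torch.topk(score.flatten(), n_flips).indices
        rows = torch.div(flat_idx, adj.size(0), rounding_mode="floor")
        cols = flat_idx % adj.size(0)
        poisoned = adj.detach().clone()
        poisoned[rows, cols] = 1 - poisoned[rows, cols]        # flip selected edges
    return poisoned


# Example with random (hypothetical) surrogate weights:
# n, f = 100, 16
# a = (torch.rand(n, n) < 0.05).float()
# a = ((a + a.t()) > 0).float()                                # symmetrize
# x = torch.randn(n, f)
# w1, w2 = torch.randn(f, 32), torch.randn(32, 32)
# poisoned_adj = poison_step(a, x, (w1, w2))
```

In a complete attack, edge flips on an undirected graph would be applied symmetrically, and the procedure would typically be repeated under a fixed perturbation budget against a retrained surrogate encoder.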

Published In

WWW '22: Proceedings of the ACM Web Conference 2022
April 2022, 3764 pages
ISBN: 978-1-4503-9096-5
DOI: 10.1145/3485447
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

        Publisher

Association for Computing Machinery, New York, NY, United States

        Publication History

        Published: 25 April 2022

        Author Tags

1. Adversarial Attack
        2. Graph Contrastive Learning
        3. Graph Representation Learning

        Qualifiers

        • Research-article
        • Research
        • Refereed limited

        Conference

WWW '22: The ACM Web Conference 2022
April 25 - 29, 2022
Virtual Event, Lyon, France

        Acceptance Rates

        Overall Acceptance Rate 1,899 of 8,196 submissions, 23%

        Bibliometrics & Citations

        Bibliometrics

        Article Metrics

• Downloads (Last 12 months): 152
• Downloads (Last 6 weeks): 8
        Reflects downloads up to 27 Jul 2024

        Citations

        Cited By

• (2024) How to Design Reinforcement Learning Methods for the Edge: An Integrated Approach toward Intelligent Decision Making. Electronics 13(7), 1281. DOI: 10.3390/electronics13071281. Online publication date: 29-Mar-2024.
• (2024) MCGCL: Adversarial attack on graph contrastive learning based on momentum gradient candidates. PLOS ONE 19(6), e0302327. DOI: 10.1371/journal.pone.0302327. Online publication date: 6-Jun-2024.
• (2024) Fast Inference of Removal-Based Node Influence. Proceedings of the ACM on Web Conference 2024, 422-433. DOI: 10.1145/3589334.3645389. Online publication date: 13-May-2024.
• (2024) Toward Enhanced Robustness in Unsupervised Graph Representation Learning: A Graph Information Bottleneck Perspective. IEEE Transactions on Knowledge and Data Engineering 36(8), 4290-4303. DOI: 10.1109/TKDE.2023.3330684. Online publication date: Aug-2024.
• (2024) DeepInsight: Topology Changes Assisting Detection of Adversarial Samples on Graphs. IEEE Transactions on Computational Social Systems 11(1), 76-88. DOI: 10.1109/TCSS.2022.3213329. Online publication date: Mar-2024.
• (2024) EdgePruner: Poisoned Edge Pruning in Graph Contrastive Learning. 2024 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 309-326. DOI: 10.1109/SaTML59370.2024.00022. Online publication date: 9-Apr-2024.
• (2024) Black-Box Attacks Against Signed Graph Analysis via Balance Poisoning. 2024 International Conference on Computing, Networking and Communications (ICNC), 530-535. DOI: 10.1109/ICNC59896.2024.10556322. Online publication date: 19-Feb-2024.
• (2024) Towards More Effective and Transferable Poisoning Attacks against Link Prediction on Graphs. 2024 27th International Conference on Computer Supported Cooperative Work in Design (CSCWD), 525-530. DOI: 10.1109/CSCWD61410.2024.10580418. Online publication date: 8-May-2024.
• (2024) A graph transformer defence against graph perturbation by a flexible-pass filter. Information Fusion 107. DOI: 10.1016/j.inffus.2024.102296. Online publication date: 2-Jul-2024.
• (2024) Graph neural networks: a survey on the links between privacy and security. Artificial Intelligence Review 57(2). DOI: 10.1007/s10462-023-10656-4. Online publication date: 8-Feb-2024.
