
Unveiling the Role of Message Passing in Dual-Privacy Preservation on GNNs

Published: 21 October 2023 Publication History
    Abstract

    Graph Neural Networks (GNNs) are powerful tools for learning representations on graphs, such as social networks. However, their vulnerability to privacy inference attacks restricts their practicality, especially in high-stakes domains. To address this issue, privacy-preserving GNNs have been proposed, focusing on preserving node and/or link privacy. This work takes a step back and investigates how GNNs contribute to privacy leakage. Through theoretical analysis and simulations, we identify message passing under structural bias as the core component that allows GNNs to propagate and amplify privacy leakage. Building upon these findings, we propose a principled privacy-preserving GNN framework that effectively safeguards both node and link privacy, referred to as dual-privacy preservation. The framework comprises three major modules: a Sensitive Information Obfuscation Module that removes sensitive information from node embeddings, a Dynamic Structure Debiasing Module that dynamically corrects the structural bias, and an Adversarial Learning Module that optimizes the privacy-utility trade-off. Experimental results on four benchmark datasets validate the effectiveness of the proposed model in protecting both node and link privacy while preserving high utility for downstream tasks, such as node classification.
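The abstract's central observation, that message passing propagates a node's sensitive information into its neighbors' embeddings, can be illustrated with a minimal sketch. This is not the paper's model; the toy graph, features, and mean-aggregation rule are illustrative assumptions only:

```python
import numpy as np

# Toy illustration: one round of mean-aggregation message passing on a
# 4-node graph. Node 0 holds a "sensitive" feature (column 0); after one
# layer, that feature appears in its neighbors' embeddings, which is the
# propagation effect described in the abstract.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
], dtype=float)

# Features: column 0 is a sensitive attribute held only by node 0.
X = np.array([
    [1.0, 0.2],
    [0.0, 0.5],
    [0.0, 0.9],
    [0.0, 0.4],
])

deg = A.sum(axis=1, keepdims=True)
H = (A / deg) @ X  # mean over neighbors: one message-passing layer

# Nodes 1 and 2 (node 0's neighbors) now carry half of its sensitive value.
print(H[:, 0])  # [0.  0.5 0.5 0. ]
```

Under a homophilous (structurally biased) graph, repeating this layer concentrates the sensitive signal within a group, which is why an inference attacker can recover node attributes or links from the learned embeddings.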



      Published In

      CIKM '23: Proceedings of the 32nd ACM International Conference on Information and Knowledge Management
      October 2023
      5508 pages
      ISBN: 9798400701245
      DOI: 10.1145/3583780

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Author Tags

      1. graph neural networks
      2. privacy preservation
      3. structural bias

      Acceptance Rates

      Overall Acceptance Rate 1,861 of 8,427 submissions, 22%
