DOI: 10.1145/3548606.3560662

Group Property Inference Attacks Against Graph Neural Networks

Published: 07 November 2022

Abstract

Recent research has shown that machine learning (ML) models are vulnerable to privacy attacks that leak information about the training data. In this work, we consider Graph Neural Networks (GNNs) as the target model and focus on a particular type of privacy attack, the property inference attack (PIA), which infers sensitive properties of the training graph through access to the GNN. While existing work has investigated PIAs against graph-level properties (e.g., node degree and graph density), we are the first to perform a systematic study of group property inference attacks (GPIAs), which infer the distribution of particular groups of nodes and links in the training graph (e.g., that there are more links between male nodes than between female nodes). First, we consider a taxonomy of threat models with various types of adversary knowledge, and design six different attacks for these settings. Second, we demonstrate the effectiveness of these attacks through extensive experiments on three representative GNN models and three real-world graphs. Third, we analyze the underlying factors that contribute to GPIA's success, and show that a GNN model trained on a graph with the target property differs, in its model parameters and/or model outputs, from one trained on a graph without the property, which enables the adversary to infer the existence of the property. Further, we design a set of defense mechanisms against GPIA, and demonstrate empirically that these mechanisms effectively reduce attack accuracy with only a small loss in GNN model accuracy.
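To make the attack setting described above concrete, the sketch below illustrates one common way a black-box property inference attack of this kind can be instantiated: train shadow GNNs on auxiliary graphs with and without the target group property, summarize each shadow model's outputs into a fixed-length feature vector, and fit a meta-classifier that predicts the property from those features. This is a minimal illustration under assumptions, not the authors' implementation; the helpers train_shadow_gnn and sample_shadow_graph are hypothetical stand-ins for the adversary's shadow-training and graph-sampling steps.

```python
# Minimal sketch of a black-box group property inference attack via shadow
# models and a meta-classifier. `train_shadow_gnn` and `sample_shadow_graph`
# are hypothetical adversary-side helpers, not functions from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def summarize_posteriors(posteriors: np.ndarray, k: int = 10) -> np.ndarray:
    """Aggregate per-node posteriors (n_nodes x n_classes) into a fixed-length vector."""
    top_conf = np.sort(posteriors.max(axis=1))[-k:]           # k most confident nodes
    return np.concatenate([posteriors.mean(axis=0),           # class-wise mean
                           posteriors.std(axis=0),            # class-wise spread
                           top_conf])


def build_attack_classifier(train_shadow_gnn, sample_shadow_graph, n_shadow=50):
    """Train a meta-classifier that maps GNN output statistics to the binary property."""
    features, labels = [], []
    for has_property in (0, 1):
        for _ in range(n_shadow):
            graph = sample_shadow_graph(has_property)   # shadow graph with/without the property
            shadow_gnn = train_shadow_gnn(graph)        # same task/architecture as the target
            posteriors = shadow_gnn(graph)              # black-box node posteriors
            features.append(summarize_posteriors(posteriors))
            labels.append(has_property)
    attack = RandomForestClassifier(n_estimators=200)
    attack.fit(np.asarray(features), np.asarray(labels))
    return attack


# Usage (hypothetical): given black-box access to the target GNN and an auxiliary
# graph from a similar distribution,
#   attack.predict([summarize_posteriors(target_gnn(aux_graph))])
# returns the inferred value of the group property.
```

A white-box variant would replace the posterior summary with statistics of the shadow models' parameters, reflecting the parameter-level dissimilarity noted in the abstract.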

    Published In

    CCS '22: Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security
    November 2022
    3598 pages
ISBN: 9781450394505
DOI: 10.1145/3548606

    Publisher

Association for Computing Machinery, New York, NY, United States

    Author Tags

    1. graph neural networks
    2. privacy attacks and defense
    3. property inference attack
4. trustworthy machine learning

    Qualifiers

    • Research-article

    Funding Sources

    • National Science Foundation

    Conference

    CCS '22
Sponsor: SIGSAC

    Acceptance Rates

Overall Acceptance Rate: 1,261 of 6,999 submissions, 18%

    Cited By

• (2024) LinkGuard: Link Locally Privacy-Preserving Graph Neural Networks with Integrated Denoising and Private Learning. Companion Proceedings of the ACM Web Conference 2024, 593-596. DOI: 10.1145/3589335.3651533. Online publication date: 13-May-2024.
• (2024) GrOVe: Ownership Verification of Graph Neural Networks using Embeddings. 2024 IEEE Symposium on Security and Privacy (SP), 2460-2477. DOI: 10.1109/SP54263.2024.00050. Online publication date: 19-May-2024.
• (2024) Trustworthy Graph Neural Networks: Aspects, Methods, and Trends. Proceedings of the IEEE, 112(2), 97-139. DOI: 10.1109/JPROC.2024.3369017. Online publication date: Feb-2024.
• (2024) Differential privacy in deep learning: A literature survey. Neurocomputing, 589, 127663. DOI: 10.1016/j.neucom.2024.127663. Online publication date: Jul-2024.
• (2024) Graph neural networks: a survey on the links between privacy and security. Artificial Intelligence Review, 57(2). DOI: 10.1007/s10462-023-10656-4. Online publication date: 8-Feb-2024.
• (2024) Attesting Distributional Properties of Training Data for Machine Learning. Computer Security – ESORICS 2024, 3-23. DOI: 10.1007/978-3-031-70879-4_1. Online publication date: 5-Sep-2024.
• (2023) Link Membership Inference Attacks against Unsupervised Graph Representation Learning. Proceedings of the 39th Annual Computer Security Applications Conference, 477-491. DOI: 10.1145/3627106.3627115. Online publication date: 4-Dec-2023.
• (2023) "Get in Researchers; We're Measuring Reproducibility": A Reproducibility Study of Machine Learning Papers in Tier 1 Security Conferences. Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, 3433-3459. DOI: 10.1145/3576915.3623130. Online publication date: 15-Nov-2023.
• (2023) Turning Privacy-preserving Mechanisms against Federated Learning. Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, 1482-1495. DOI: 10.1145/3576915.3623114. Online publication date: 15-Nov-2023.
• (2023) Dissecting Distribution Inference. 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 150-164. DOI: 10.1109/SaTML54575.2023.00019. Online publication date: Feb-2023.
