DOI: 10.1145/3576915.3623114

Turning Privacy-preserving Mechanisms against Federated Learning

Published: 21 November 2023

Abstract

Recently, researchers have successfully employed Graph Neural Networks (GNNs) to build enhanced recommender systems, owing to their capability to learn patterns from the interactions between the involved entities. Moreover, prior studies have adopted federated learning as the primary solution for enabling a native privacy-preserving mechanism: global GNN models are constructed without collecting sensitive data in a single computation unit. Still, privacy issues may arise, as analyzing the local model updates produced by federated clients can reveal information about sensitive local data. For this reason, researchers have proposed solutions that combine federated learning with Differential Privacy strategies and with community-driven approaches, which combine data from neighbor clients to make individual local updates less dependent on local sensitive data.
In this paper, we identify a crucial security flaw in such a configuration and design an attack capable of deceiving state-of-the-art defenses for federated learning. The proposed attack has two operating modes: the first inhibits the convergence of the global model (Adversarial Mode), and the second injects deceptive ratings into the global federated model (Backdoor Mode). The experimental results show the effectiveness of our attack in both modes: it causes an average performance degradation of 60% across all Adversarial Mode tests and produces fully effective backdoors in 93% of the Backdoor Mode tests.
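The defenses the abstract refers to combine federated averaging with Differential Privacy, where each client's update is clipped to a norm bound and Gaussian noise is added at aggregation time. As a rough illustration of that baseline mechanism only (this is a minimal sketch, not the paper's protocol or attack; all function names and parameter values here are hypothetical):

```python
import math
import random

def l2_norm(vec):
    """Euclidean norm of a flat parameter-update vector."""
    return math.sqrt(sum(x * x for x in vec))

def clip_update(update, clip_norm):
    """Scale a client's update so its L2 norm is at most clip_norm (DP clipping step)."""
    norm = l2_norm(update)
    if norm > clip_norm:
        scale = clip_norm / norm
        return [x * scale for x in update]
    return update

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=0.5, seed=0):
    """Average the clipped client updates, then perturb the mean with Gaussian noise.

    The clipping bounds each client's contribution, so the L2 sensitivity of the
    mean is clip_norm / n; the Gaussian mechanism adds noise proportional to it.
    """
    rng = random.Random(seed)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    n, dim = len(clipped), len(clipped[0])
    mean = [sum(u[i] for u in clipped) / n for i in range(dim)]
    sigma = noise_multiplier * clip_norm / n
    return [m + rng.gauss(0.0, sigma) for m in mean]
```

The attack described in the paper targets exactly this kind of pipeline: because the noise and the community-driven mixing decouple the aggregate from any single client's data, a malicious client can hide crafted updates inside the expected perturbation envelope.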




Published In
      CCS '23: Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security
      November 2023
      3722 pages
ISBN: 9798400700507
DOI: 10.1145/3576915

Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

1. federated learning
2. graph neural network
3. model poisoning
4. privacy
5. recommender systems

      Qualifiers

      • Research-article

      Conference

      CCS '23

      Acceptance Rates

      Overall Acceptance Rate 1,261 of 6,999 submissions, 18%



      Article Metrics

• Downloads (last 12 months): 747
• Downloads (last 6 weeks): 46
Reflects downloads up to 23 Dec 2024

Cited By
• Bidirectional Corrective Model-Contrastive Federated Adversarial Training. Electronics 13(18):3745, 2024. DOI: 10.3390/electronics13183745
• Eyes on Federated Recommendation: Targeted Poisoning With Competition and Its Mitigation. IEEE Transactions on Information Forensics and Security 19:10173-10188, 2024. DOI: 10.1109/TIFS.2024.3488500
• An Overview of Trustworthy AI: Advances in IP Protection, Privacy-preserving Federated Learning, Security Verification, and GAI Safety Alignment. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2024. DOI: 10.1109/JETCAS.2024.3477348
• Enhancing Object Detection with Hybrid Dataset in Manufacturing Environments: Comparing Federated Learning to Conventional Techniques. 2024 1st International Conference on Innovative Engineering Sciences and Technological Research (ICIESTR), pp. 1-6, 2024. DOI: 10.1109/ICIESTR60916.2024.10798269
• Applying AI in the Area of Automation Systems: Overview and Challenges. 2024 IEEE 29th International Conference on Emerging Technologies and Factory Automation (ETFA), pp. 1-8, 2024. DOI: 10.1109/ETFA61755.2024.10711078
• Efficient integer division computation protocols based on partial homomorphic encryption. Cluster Computing 27(9):12091-12102, 2024. DOI: 10.1007/s10586-024-04589-y
