
A Game-theoretic Framework for Privacy-preserving Federated Learning

Published: 17 May 2024

Abstract

In federated learning (FL), benign participants collaborate to optimize a global model. However, the risk of privacy leakage cannot be ignored in the presence of semi-honest adversaries. Existing research has focused either on designing protection mechanisms or on inventing attack mechanisms. While the battle between defenders and attackers seems never-ending, we are concerned with one critical question: is it possible to prevent potential attacks in advance? To address this, we propose the first game-theoretic framework that models both FL defenders and attackers in terms of their respective payoffs, which comprise computational costs, FL model utilities, and privacy leakage risks. We name this game the federated learning privacy game (FLPG), in which neither defenders nor attackers are aware of all participants' payoffs. To handle this incomplete information, we associate the FLPG with an oracle that has two responsibilities. First, the oracle provides lower and upper bounds on the players' payoffs. Second, the oracle acts as a correlation device, privately suggesting an action to each player. With this framework, we analyze the optimal strategies of defenders and attackers, and we derive conditions under which the attacker, as a rational decision-maker, should always follow the oracle's suggestion not to attack.
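To make the oracle's two roles concrete, the minimal Python sketch below illustrates the kind of incentive check that can make a correlated "do not attack" suggestion self-enforcing when payoffs are known only through oracle-provided intervals. The names Interval and attacker_follows_no_attack, and all numbers, are hypothetical illustrations under that assumption, not the paper's formal model.

from dataclasses import dataclass

@dataclass
class Interval:
    lo: float  # oracle-provided lower bound of a payoff
    hi: float  # oracle-provided upper bound of a payoff

def attacker_follows_no_attack(
    gain_if_attack: Interval,   # privacy-leakage gain minus attack cost
    payoff_if_idle: Interval,   # payoff from staying benign (e.g., model utility)
) -> bool:
    """True if, even in the worst case, complying with the suggestion
    not to attack is at least as good as the best case of deviating."""
    return payoff_if_idle.lo >= gain_if_attack.hi

if __name__ == "__main__":
    # Illustrative numbers: a strong defense makes attacking costly and unrewarding.
    attack = Interval(lo=-1.0, hi=0.2)   # leakage gain rarely exceeds attack cost
    idle = Interval(lo=0.5, hi=1.0)      # benign participation still yields utility
    print(attacker_follows_no_attack(attack, idle))  # True: suggestion is self-enforcing

Under this worst-case comparison, whenever the defender's protection mechanism drives the attacker's best-case net gain from attacking below the worst-case payoff of remaining benign, a rational attacker has no incentive to deviate from the oracle's suggestion.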

Cited By

  • (2024) A Meta-Learning Framework for Tuning Parameters of Protection Mechanisms in Trustworthy Federated Learning. ACM Transactions on Intelligent Systems and Technology 15, 3, 1–36. https://doi.org/10.1145/3652612 (online 18 Mar 2024)
  • (2024) A Multifaceted Survey on Federated Learning: Fundamentals, Paradigm Shifts, Practical Issues, Recent Developments, Partnerships, Trade-Offs, Trustworthiness, and Ways Forward. IEEE Access 12, 84643–84679. https://doi.org/10.1109/ACCESS.2024.3413069
  • (2024) Review on Privacy Preservation Techniques and Challenges in IoT Devices. In Advancements in Smart Computing and Information Security, 81–89. https://doi.org/10.1007/978-3-031-59100-6_8 (online 2 May 2024)

Published In

ACM Transactions on Intelligent Systems and Technology, Volume 15, Issue 3
June 2024
646 pages
ISSN: 2157-6904
EISSN: 2157-6912
DOI: 10.1145/3613609
  • Editor: Huan Liu

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 17 May 2024
Online AM: 10 April 2024
Accepted: 06 December 2023
Revised: 25 October 2023
Received: 11 April 2023
Published in TIST Volume 15, Issue 3

Author Tags

  1. Federated learning
  2. privacy
  3. utility
  4. efficiency
  5. trade-off
  6. divergence
  7. optimization

Qualifiers

  • Research-article

Article Metrics

  • Downloads (last 12 months): 290
  • Downloads (last 6 weeks): 39

Reflects downloads up to 18 Aug 2024.
