A Meta-Learning Framework for Tuning Parameters of Protection Mechanisms in Trustworthy Federated Learning

Published: 17 May 2024

Abstract

    Trustworthy federated learning typically leverages protection mechanisms to guarantee privacy. However, protection mechanisms inevitably introduce utility loss or efficiency reduction while protecting data privacy. Therefore, protection mechanisms and their parameters should be chosen carefully to strike an optimal tradeoff among privacy leakage, utility loss, and efficiency reduction. To this end, federated learning practitioners need tools to measure the three factors and to optimize the tradeoff among them, so that they can choose the protection mechanism most appropriate to the application at hand. Motivated by this requirement, we propose a framework that (1) formulates trustworthy federated learning as the problem of finding a protection mechanism that optimizes the tradeoff among privacy leakage, utility loss, and efficiency reduction and (2) formally defines bounded measurements of the three factors. We then propose a meta-learning algorithm to approximate this optimization problem and find optimal protection parameters for representative protection mechanisms, including randomization, homomorphic encryption, secret sharing, and compression. We further design estimation algorithms to estimate these optimal protection parameters in a practical horizontal federated learning setting and provide a theoretical analysis of the estimation error.
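    The optimization the abstract describes can be sketched as a scalarized objective: a weighted sum of privacy leakage, utility loss, and efficiency reduction, minimized over a protection parameter. The toy sketch below does this for a randomization (noise-adding) mechanism with noise scale `sigma`. The three measurement functions are hypothetical stand-ins for the paper's formally bounded measurements, and the grid search stands in for the meta-learning algorithm; only the qualitative trends (more noise means less leakage but more distortion) are preserved.

    ```python
    import math

    # Hypothetical proxies for the three factors of a noise-adding
    # (randomization) mechanism with noise scale sigma. The paper defines
    # formal bounded measurements; these only mimic the qualitative trends.

    def privacy_leakage(sigma):
        # Leakage decays as the injected noise grows.
        return math.exp(-sigma)

    def utility_loss(sigma):
        # Model distortion grows with the noise scale.
        return 0.1 * sigma ** 2

    def efficiency_reduction(sigma):
        # Sampling noise is cheap; treat the overhead as a small constant.
        return 0.05

    def tradeoff_objective(sigma, w_priv=1.0, w_util=1.0, w_eff=1.0):
        # Scalarized tradeoff: lower is better on all three factors.
        return (w_priv * privacy_leakage(sigma)
                + w_util * utility_loss(sigma)
                + w_eff * efficiency_reduction(sigma))

    def tune_sigma(grid, **weights):
        # Exhaustive search over candidate parameters, standing in for
        # the meta-learning procedure that approximates the optimization.
        return min(grid, key=lambda s: tradeoff_objective(s, **weights))

    grid = [0.1 * k for k in range(1, 101)]  # candidate noise scales in (0, 10]
    best_sigma = tune_sigma(grid)            # interior optimum near sigma ~ 1.3
    ```

    Raising `w_priv` pushes the chosen `sigma` upward (stronger protection at higher utility cost), which is the kind of application-specific weighting the framework is meant to support.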


    Cited By

    • (2024) A Game-theoretic Framework for Privacy-preserving Federated Learning. ACM Transactions on Intelligent Systems and Technology 15, 3 (2024), 1–35. DOI: 10.1145/3656049. Online publication date: 10 April 2024.

        Published In

        ACM Transactions on Intelligent Systems and Technology, Volume 15, Issue 3
        June 2024
        646 pages
        ISSN:2157-6904
        EISSN:2157-6912
        DOI: 10.1145/3613609
        Editor: Huan Liu

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        Published: 17 May 2024
        Online AM: 18 March 2024
        Accepted: 04 February 2024
        Revised: 08 January 2024
        Received: 02 June 2023
        Published in TIST Volume 15, Issue 3


        Author Tags

        1. Federated learning
        2. privacy
        3. utility
        4. efficiency
        5. tradeoff
        6. divergence
        7. optimization

        Qualifiers

        • Research-article

        Funding Sources

        • National Science and Technology Major Project

