Efficient, Private and Robust Federated Learning

Published: 06 December 2021

Abstract

Federated learning (FL) has demonstrated tremendous success in various mission-critical, large-scale scenarios. However, this promising distributed learning paradigm remains vulnerable to privacy inference and Byzantine attacks: the former aims to infer private information about the participants involved in training, while the latter aims to destroy the integrity of the trained model. To mitigate these two issues, a few recent works have explored unified solutions that combine generic secure computation techniques with common Byzantine-robust aggregation rules, but they face two major limitations: 1) they are impractical due to efficiency bottlenecks, and 2) they remain vulnerable to various types of attacks because their defense models are not comprehensive.
To address these problems, we present SecureFL, an efficient, private, and Byzantine-robust FL framework. SecureFL follows the state-of-the-art Byzantine-robust FL method FLTrust (NDSS '21), which mounts a comprehensive Byzantine defense by normalizing the magnitudes of clients' updates and measuring their directional similarity, and adapts it to the privacy-preserving setting. More importantly, we carefully customize a series of cryptographic components. First, we design a crypto-friendly validity-checking protocol that functionally replaces the normalization operation in FLTrust, and we devise tailored cryptographic protocols on top of it. These optimizations cut communication and computation costs by half without sacrificing robustness or privacy protection. Second, we develop a novel preprocessing technique for costly matrix multiplication, which allows the directional similarity measurement to be evaluated securely with negligible computation overhead and zero communication cost. Extensive evaluations on three real-world datasets and various neural network architectures demonstrate that SecureFL outperforms the prior art by up to two orders of magnitude in efficiency while achieving state-of-the-art Byzantine robustness.
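In plaintext, the FLTrust aggregation rule that SecureFL adapts (weight each client update by a ReLU-clipped cosine similarity to a server update computed on a clean root dataset, after rescaling the client update's magnitude to match the server update) can be sketched as follows. This is an illustrative NumPy sketch of the aggregation logic only; the function name is ours, and SecureFL's contribution, evaluating an equivalent computation under cryptographic protection, is entirely omitted here:

```python
import numpy as np

def fltrust_aggregate(client_updates, server_update):
    """FLTrust-style aggregation (Cao et al., NDSS '21): each client update is
    weighted by a ReLU-clipped cosine similarity to the server's clean update
    and rescaled to the server update's magnitude before averaging."""
    g0 = np.asarray(server_update, dtype=float)
    g0_norm = np.linalg.norm(g0)
    scores, normalized = [], []
    for g in client_updates:
        g = np.asarray(g, dtype=float)
        g_norm = np.linalg.norm(g)
        # Directional similarity; ReLU discards updates pointing away from g0.
        cos = float(g @ g0) / (g_norm * g0_norm + 1e-12)
        scores.append(max(cos, 0.0))
        # Magnitude normalization: rescale g to the server update's norm.
        normalized.append(g * (g0_norm / (g_norm + 1e-12)))
    total = sum(scores)
    if total == 0.0:
        return np.zeros_like(g0)  # no update earned any trust this round
    return sum(s * g for s, g in zip(scores, normalized)) / total
```

For example, with a server update of [1, 0], a client pushing [2, 0] receives full trust while a client pushing [-3, 0] is discarded entirely, so the aggregate stays aligned with the clean direction.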

References

[1]
Nitin Agrawal, Ali Shahin Shamsabadi, Matt J Kusner, and Adrià Gascón. 2019. QUOTIENT: two-party secure neural network training and prediction. In Proceedings of ACM CCS. 1231–1247.
[2]
Yoshinori Aono, Takuya Hayashi, Lihua Wang, Shiho Moriai, et al. 2017. Privacy-preserving deep learning via additively homomorphic encryption. IEEE Transactions on Information Forensics and Security 13, 5 (2017), 1333–1345.
[3]
Gilad Asharov, Yehuda Lindell, Thomas Schneider, and Michael Zohner. 2013. More efficient oblivious transfer and extensions for faster secure computation. In Proceedings of ACM CCS. 535–548.
[4]
Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, and Vitaly Shmatikov. 2020. How to backdoor federated learning. In Proceedings of AISTATS. 2938–2948.
[5]
Moran Baruch, Gilad Baruch, and Yoav Goldberg. 2019. A little is enough: Circumventing defenses for distributed learning. In Proceedings of NeurIPS.
[6]
Donald Beaver. 1991. Efficient multiparty protocols using circuit randomization. In Proceedings of CRYPTO. 420–432.
[7]
James Henry Bell, Kallista A Bonawitz, Adrià Gascón, Tancrède Lepoint, and Mariana Raykova. 2020. Secure single-server aggregation with (poly) logarithmic overhead. In Proceedings of ACM CCS. 1253–1269.
[8]
Arjun Nitin Bhagoji, Supriyo Chakraborty, Prateek Mittal, and Seraphin Calo. 2019. Analyzing federated learning through an adversarial lens. In Proceedings of ICML. 634–643.
[9]
Peva Blanchard, El Mahdi El Mhamdi, Rachid Guerraoui, and Julien Stainer. 2017. Machine learning with adversaries: Byzantine tolerant gradient descent. In Proceedings of NeurIPS. 118–128.
[10]
Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. 2017. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of ACM CCS. 1175–1191.
[11]
Xiaoyu Cao, Minghong Fang, Jia Liu, and Neil Zhenqiang Gong. 2021. FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping. In Proceedings of NDSS.
[12]
Hao Chen, Ilaria Chillotti, Yihe Dong, Oxana Poburinnaya, Ilya Razenshteyn, and M Sadegh Riazi. 2020. SANNS: Scaling up secure approximate k-nearest neighbors search. In Proceedings of USENIX Security. 2111–2128.
[13]
Henry Corrigan-Gibbs and Dan Boneh. 2017. Prio: Private, robust, and scalable computation of aggregate statistics. In Proceedings of USENIX NSDI. 259–282.
[14]
Daniel Demmler, Thomas Schneider, and Michael Zohner. 2015. ABY - A framework for efficient mixed-protocol secure two-party computation. In Proceedings of NDSS.
[15]
Whitfield Diffie and Martin Hellman. 1976. New directions in cryptography. IEEE Transactions on Information Theory 22, 6 (1976), 644–654.
[16]
El Mahdi El Mhamdi, Rachid Guerraoui, and Sébastien Louis Alexandre Rouault. 2018. The Hidden Vulnerability of Distributed Learning in Byzantium. In Proceedings of ICML.
[17]
Junfeng Fan and Frederik Vercauteren. 2012. Somewhat practical fully homomorphic encryption. IACR Cryptology ePrint Archive (2012).
[18]
Minghong Fang, Xiaoyu Cao, Jinyuan Jia, and Neil Gong. 2020. Local model poisoning attacks to Byzantine-robust federated learning. In Proceedings of USENIX Security. 1605–1622.
[19]
FeatureCloud. [n.d.]. Transforming health care and medical research with federated learning. https://featurecloud.eu/about/our-vision/.
[20]
Wei Gao, Shangwei Guo, Tianwei Zhang, Han Qiu, Yonggang Wen, and Yang Liu. 2021. Privacy-preserving collaborative learning with automatic transformation search. In Proceedings of CVPR. 114–123.
[21]
Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, and Michael Moeller. 2020. Inverting Gradients–How easy is it to break privacy in federated learning?. In Proceedings of NeurIPS.
[22]
Hanieh Hashemi, Yongqin Wang, Chuan Guo, and Murali Annavaram. 2021. Byzantine-Robust and Privacy-Preserving Framework for FedML. In ICLR Workshop on Security and Safety in Machine Learning Systems.
[23]
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of IEEE CVPR. 770–778.
[24]
Lie He, Sai Praneeth Karimireddy, and Martin Jaggi. 2020. Secure byzantine-robust machine learning. arXiv:2006.04747 (2020).
[25]
Briland Hitaj, Giuseppe Ateniese, and Fernando Perez-Cruz. 2017. Deep models under the GAN: information leakage from collaborative deep learning. In Proceedings of ACM CCS. 603–618.
[26]
Yuval Ishai, Joe Kilian, Kobbi Nissim, and Erez Petrank. 2003. Extending oblivious transfers efficiently. In Proceedings of CRYPTO. 145–161.
[27]
Chiraag Juvekar, Vinod Vaikuntanathan, and Anantha Chandrakasan. 2018. GAZELLE: A low latency framework for secure neural network inference. In Proceedings of USENIX Security. 1651–1669.
[28]
Peter Kairouz, H Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Keith Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. 2019. Advances and open problems in federated learning. arXiv:1912.04977 (2019).
[29]
Youssef Khazbak, Tianxiang Tan, and Guohong Cao. 2020. MLGuard: Mitigating Poisoning Attacks in Privacy Preserving Distributed Collaborative Learning. In Proceedings of IEEE ICCCN. 1–9.
[30]
Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. 1989. Backpropagation applied to handwritten zip code recognition. Neural computation 1, 4 (1989), 541–551.
[31]
Beibei Li, Yuhao Wu, Jiarui Song, Rongxing Lu, Tao Li, and Liang Zhao. 2021. DeepFed: Federated Deep Learning for Intrusion Detection in Industrial Cyber-Physical Systems. IEEE Transactions on Industrial Informatics 17, 8 (2021), 5615–5624.
[32]
Jian Liu, Mika Juuti, Yao Lu, and Nadarajah Asokan. 2017. Oblivious neural network predictions via minionn transformations. In Proceedings of ACM CCS. 619–631.
[33]
Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In Proceedings of AISTATS. 1273–1282.
[34]
Pratyush Mishra, Ryan Lehmkuhl, Akshayaram Srinivasan, Wenting Zheng, and Raluca Ada Popa. 2020. Delphi: A cryptographic inference service for neural networks. In Proceedings of USENIX Security. 2505–2522.
[35]
Payman Mohassel, Mike Rosulek, and Ni Trieu. 2020. Practical privacy-preserving k-means clustering. Proceedings on Privacy Enhancing Technologies 2020, 4 (2020), 414–433.
[36]
Payman Mohassel and Yupeng Zhang. 2017. SecureML: A system for scalable privacy-preserving machine learning. In Proceedings of IEEE S&P. 19–38.
[37]
Milad Nasr, Reza Shokri, and Amir Houmansadr. 2019. Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning. In Proceedings of S&P. 739–753.
[38]
Thien Duc Nguyen, Phillip Rieger, Hossein Yalame, Helen Möllering, Hossein Fereidooni, Samuel Marchal, Markus Miettinen, Azalia Mirhoseini, Ahmad-Reza Sadeghi, Thomas Schneider, et al. 2021. FLGUARD: Secure and Private Federated Learning. arXiv:2101.02281 (2021).
[39]
Sundar Pichai. 2019. Privacy should not be a luxury good. The New York Times (2019).
[40]
Deevashwer Rathee, Mayank Rathee, Nishant Kumar, Nishanth Chandran, Divya Gupta, Aseem Rastogi, and Rahul Sharma. 2020. CrypTFlow2: Practical 2-party secure inference. In Proceedings of ACM CCS. 325–342.
[41]
Virat Shejwalkar and Amir Houmansadr. 2021. Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning. In Proceedings of NDSS.
[42]
Nigel P Smart and Frederik Vercauteren. 2014. Fully homomorphic SIMD operations. Designs, codes and cryptography 71, 1 (2014), 57–81.
[43]
Jinhyun So, Başak Güler, and A Salman Avestimehr. 2020. Byzantine-resilient secure federated learning. IEEE Journal on Selected Areas in Communications (2020).
[44]
Jo Van Bulck, Marina Minkin, Ofir Weisse, Daniel Genkin, Baris Kasikci, Frank Piessens, Mark Silberstein, Thomas F Wenisch, Yuval Yarom, and Raoul Strackx. 2018. Foreshadow: Extracting the keys to the Intel SGX kingdom with transient out-of-order execution. In Proceedings of USENIX Security. 991–1008.
[45]
Cong Xie, Oluwasanmi Koyejo, and Indranil Gupta. 2020. Fall of empires: Breaking Byzantine-tolerant SGD by inner product manipulation. In Uncertainty in Artificial Intelligence. 261–270.
[46]
Guowen Xu, Hongwei Li, Sen Liu, Kan Yang, and Xiaodong Lin. 2019. VerifyNet: Secure and verifiable federated learning. IEEE Transactions on Information Forensics and Security 15 (2019), 911–926.
[47]
Timothy Yang, Galen Andrew, Hubert Eichner, Haicheng Sun, Wei Li, Nicholas Kong, Daniel Ramage, and Françoise Beaufays. 2018. Applied federated learning: Improving google keyboard query suggestions. arXiv:1812.02903 (2018).
[48]
Andrew C Yao. 1982. Theory and application of trapdoor functions. In Proceedings of FOCS. 80–91.
[49]
Dong Yin, Yudong Chen, Kannan Ramchandran, and Peter Bartlett. 2018. Byzantine-robust distributed learning: Towards optimal statistical rates. In Proceedings of ICML. 5650–5659.
[50]
Chengliang Zhang, Suyi Li, Junzhe Xia, Wei Wang, Feng Yan, and Yang Liu. 2020. Batchcrypt: Efficient homomorphic encryption for cross-silo federated learning. In Proceedings of USENIX ATC. 493–506.
[51]
Qiao Zhang, Chunsheng Xin, and Hongyi Wu. 2021. GALA: Greedy ComputAtion for Linear Algebra in Privacy-Preserved Neural Networks. In Proceedings of NDSS.
[52]
Ligeng Zhu, Zhijian Liu, and Song Han. 2019. Deep Leakage from Gradients. In Proceedings of NeurIPS.


    Published In

    ACSAC '21: Proceedings of the 37th Annual Computer Security Applications Conference
    December 2021
    1077 pages
ISBN: 9781450385794
DOI: 10.1145/3485832

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

1. Byzantine robustness
    2. Federated learning
    3. Privacy protection

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    ACSAC '21

    Acceptance Rates

    Overall Acceptance Rate 104 of 497 submissions, 21%


Cited By

    • (2024) Dual-Server-Based Lightweight Privacy-Preserving Federated Learning. IEEE Transactions on Network and Service Management 21, 4, 4787–4800. DOI: 10.1109/TNSM.2024.3399534
    • (2024) NSPFL: A Novel Secure and Privacy-Preserving Federated Learning With Data Integrity Auditing. IEEE Transactions on Information Forensics and Security 19, 4494–4506. DOI: 10.1109/TIFS.2024.3379852
    • (2024) SPEFL: Efficient Security and Privacy-Enhanced Federated Learning Against Poisoning Attacks. IEEE Internet of Things Journal 11, 8, 13437–13451. DOI: 10.1109/JIOT.2023.3339638
    • (2024) PBFL: Privacy-Preserving and Byzantine-Robust Federated-Learning-Empowered Industry 4.0. IEEE Internet of Things Journal 11, 4, 7128–7140. DOI: 10.1109/JIOT.2023.3315226
    • (2024) FedLRDP: Federated Learning Framework with Local Random Differential Privacy. 2024 International Joint Conference on Neural Networks (IJCNN), 1–8. DOI: 10.1109/IJCNN60899.2024.10650657
    • (2024) Enabling Privacy-Preserving and Publicly Auditable Federated Learning. ICC 2024 - IEEE International Conference on Communications, 5178–5183. DOI: 10.1109/ICC51166.2024.10622406
    • (2024) FedSafe-No KDC Needed: Decentralized Federated Learning with Enhanced Security and Efficiency. 2024 IEEE 21st Consumer Communications & Networking Conference (CCNC), 969–975. DOI: 10.1109/CCNC51664.2024.10454870
    • (2024) ESVFL. Information Fusion 109. DOI: 10.1016/j.inffus.2024.102420
    • (2024) Robust federated learning with voting and scaling. Future Generation Computer Systems 153, 113–124. DOI: 10.1016/j.future.2023.11.015
    • (2024) Towards robust and privacy-preserving federated learning in edge computing. Computer Networks 243. DOI: 10.1016/j.comnet.2024.110321
