DOI: 10.1145/3637494.3638729
Research article

Byzantine-Robust Federated Learning Based on Multi-center Secure Clustering

Published: 05 February 2024

Abstract

Against the backdrop of growing societal concern over data privacy and security, Federated Learning (FL) has become a cornerstone of modern machine learning. FL, however, must contend with data heterogeneity and potential security breaches. Techniques such as FedSEM improve multi-center aggregation by using Expectation Maximization (EM) to match clients with centers, yielding better performance in non-IID environments, but they leave a critical gap in privacy and security guarantees.
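To make the multi-center aggregation concrete, below is a minimal sketch of a FedSEM-style EM loop: the E-step assigns each client's model to its nearest cluster center, and the M-step re-averages each center over its assigned clients. This is an illustration under our own assumptions (flattened weight vectors, hard assignments, and hypothetical names such as em_multicenter), not the paper's implementation.

```python
# A minimal sketch (not the authors' code) of FedSEM-style multi-center
# aggregation: an EM loop that matches clients to cluster centers by model
# distance, then re-averages each center. All names are hypothetical.
import numpy as np

def em_multicenter(client_weights, num_centers, iters=10, seed=0):
    """client_weights: (n_clients, dim) array of flattened local models."""
    rng = np.random.default_rng(seed)
    n = client_weights.shape[0]
    # Initialize centers from randomly chosen client models.
    centers = client_weights[rng.choice(n, num_centers, replace=False)].copy()
    for _ in range(iters):
        # E-step: assign each client to the closest center (L2 distance).
        dists = np.linalg.norm(
            client_weights[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # M-step: each center becomes the mean of its assigned client models.
        for k in range(num_centers):
            members = client_weights[assign == k]
            if len(members) > 0:
                centers[k] = members.mean(axis=0)
    return centers, assign
```

Each resulting center plays the role of a per-cluster global model, so clients with similar data distributions share a model rather than all averaging into one.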
To close this gap, we introduce Byzantine-Robust Clustering Federated Expectation Maximization (BRCFEM), a federated learning approach that integrates data clustering, precise data-matching strategies, efficient gradient compression, and a robust mechanism for secure data partitioning and distribution. Our scheme adds noise to the data, converts it into binary format, and encodes it through binary secret sharing; this step provides the method's core privacy protection while also aiming to improve the model's accuracy and resilience. Extensive experimental evaluations demonstrate that BRCFEM defends against malicious system attacks and unauthorized data access without sacrificing model efficacy, making it well suited to FL scenarios that demand both security and performance in non-IID environments.
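The privacy pipeline described above (noise, then binary encoding, then binary secret sharing) can be sketched as follows. This is a simplified, assumption-laden illustration, not the paper's exact protocol: Gaussian noise stands in for the perturbation step, fixed-point quantization for the "binary format", and XOR-based sharing for the binary secret sharing; sigma, scale, and the function names are hypothetical.

```python
# A minimal sketch (our assumptions, not the paper's exact protocol) of the
# privacy step: perturb an update with Gaussian noise, quantize it to a
# binary fixed-point representation, then XOR-split it into secret shares
# that look uniformly random on their own.
import numpy as np

def share_update(update, sigma=0.1, scale=2**10, num_shares=2, seed=None):
    """update: 1-D float array (e.g. a flattened gradient)."""
    rng = np.random.default_rng(seed)
    # 1. Noise: Gaussian perturbation (sigma is a hypothetical DP parameter).
    noisy = update + rng.normal(0.0, sigma, size=update.shape)
    # 2. Binary format: fixed-point encode into unsigned 32-bit words
    #    (negatives wrap modulo 2**32, which XOR sharing tolerates).
    fixed = (np.round(noisy * scale).astype(np.int64) % (1 << 32)).astype(np.uint32)
    # 3. Binary secret sharing: num_shares - 1 random masks plus one
    #    correction share; the XOR of all shares reconstructs `fixed`.
    shares = [rng.integers(0, 1 << 32, size=fixed.shape, dtype=np.uint32)
              for _ in range(num_shares - 1)]
    last = fixed
    for s in shares:
        last = last ^ s
    shares.append(last)
    return shares

def reconstruct(shares, scale=2**10):
    """XOR the shares back together and decode the fixed-point values."""
    acc = shares[0]
    for s in shares[1:]:
        acc = acc ^ s
    return acc.view(np.int32).astype(np.float64) / scale
```

A party holding any single share sees only uniformly random words; reconstruction requires XORing all shares together, which is what makes the encoded updates safe to distribute across servers.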


Published In

CECCT '23: Proceedings of the 2023 International Conference on Electronics, Computers and Communication Technology
November 2023
266 pages
ISBN: 9798400716300
DOI: 10.1145/3637494

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. Clustering
  2. Differential privacy
  3. Federated Learning
  4. Secret sharing

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

CECCT 2023
