
Model Poisoning Defense on Federated Learning: A Validation Based Approach

Published: 25 November 2020

Abstract

Federated learning is a distributed machine learning approach designed to preserve privacy. Clients collaboratively train a model on their on-device data, and the central server only aggregates the clients' training results instead of collecting their raw data. However, federated learning has a serious shortcoming: because the central server cannot monitor clients' training processes, it cannot verify the validity of their training data or the correctness of their training results. This makes federated learning vulnerable to attacks in which an adversary maliciously manipulates training data or updates, such as model poisoning attacks. An attacker mounting a model poisoning attack can degrade the global model's performance on a targeted class by manipulating the labels of that class at one or more clients. Currently, there is a gap in the defense methods against model poisoning attacks in federated learning. To address this shortcoming, we propose an effective defense method against model poisoning attacks in federated learning. We validate each client's local model on a validation set, and the server accepts updates only from well-performing clients, thereby protecting against model poisoning attacks. We consider two cases: all clients have very similar distributions of training data, and all clients have very different distributions of training data, and we design our method and experiments for both. The experimental results show that our defense method significantly reduces the success rate of model poisoning attacks in both cases in a federated learning setting.
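
The abstract describes the defense only at a high level. Below is a minimal, self-contained sketch (plain NumPy, logistic regression) of the general idea: the server scores each client's local model on a held-out validation set and averages only the updates that pass an accuracy threshold. All helper names, the 0.7 threshold, and the synthetic label-flipping clients are illustrative assumptions, not the paper's actual implementation or parameters.

```python
# Minimal sketch of validation-based filtering in federated averaging.
# Assumed/illustrative: the threshold value, helper names, and the synthetic
# label-flipping simulation; none of these are taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def make_linear_data(n, w_true, poison=False):
    """Synthetic binary-classification data for a linear model."""
    X = rng.normal(size=(n, w_true.size))
    y = (X @ w_true > 0).astype(int)
    if poison:
        # Targeted label flipping: relabel every class-1 example as class 0.
        y = np.where(y == 1, 0, y)
    return X, y

def local_train(w_global, X, y, lr=0.1, epochs=20):
    """Plain logistic-regression gradient descent on a client's local data."""
    w = w_global.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return np.mean(((X @ w) > 0).astype(int) == y)

dim, n_clients = 10, 8
w_true = rng.normal(size=dim)
w_global = np.zeros(dim)

# Clients 0 and 1 are poisoned (label flipping on their local data).
clients = [make_linear_data(200, w_true, poison=(i < 2)) for i in range(n_clients)]
X_val, y_val = make_linear_data(500, w_true)   # server-side validation set

accuracy_threshold = 0.7                        # assumed cut-off, not from the paper
for rnd in range(5):
    accepted = []
    for X, y in clients:
        w_local = local_train(w_global, X, y)
        # Server validates each client's local model and keeps only the
        # well-performing ones before aggregation.
        if accuracy(w_local, X_val, y_val) >= accuracy_threshold:
            accepted.append(w_local)
    if accepted:                                # FedAvg over accepted updates only
        w_global = np.mean(accepted, axis=0)
    print(f"round {rnd}: accepted {len(accepted)}/{n_clients} clients, "
          f"val acc = {accuracy(w_global, X_val, y_val):.3f}")
```

In this sketch the poisoned clients' models score poorly on the server's validation set and are filtered out, so the aggregated global model is unaffected by the flipped labels; how the validation set is obtained in the similar- and different-distribution cases is a design question the paper addresses separately.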


Cited By

  • (2023) Going Haywire: False Friends in Federated Learning and How to Find Them. In: Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security, pp. 593–607. https://doi.org/10.1145/3579856.3595790 (online publication date: 10 July 2023)

Information

Published In

Network and System Security: 14th International Conference, NSS 2020, Melbourne, VIC, Australia, November 25–27, 2020, Proceedings
Nov 2020
457 pages
ISBN:978-3-030-65744-4
DOI:10.1007/978-3-030-65745-1
  • Editors:
  • Mirosław Kutyłowski,
  • Jun Zhang,
  • Chao Chen

Publisher

Springer-Verlag

Berlin, Heidelberg

Publication History

Published: 25 November 2020

Author Tags

  1. Federated learning
  2. Model poisoning attack
  3. Defense

Qualifiers

  • Article
