
SKYMASK: Attack-Agnostic Robust Federated Learning with Fine-Grained Learnable Masks

  • Conference paper

Computer Vision – ECCV 2024 (ECCV 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15077)

Abstract

Federated Learning (FL) is becoming a popular paradigm for leveraging distributed data while preserving data privacy. However, because of its distributed nature, FL systems are vulnerable to Byzantine attacks, in which compromised clients corrupt the global model by uploading malicious model updates. With the development of layer-level and parameter-level fine-grained attacks, the stealthiness and effectiveness of such attacks have improved significantly. Existing defense mechanisms analyze only the model-level statistics of the individual model updates uploaded by clients, which is ineffective against fine-grained attacks: the defenses either fail to detect the subtle manipulation or over-penalize benign updates. To address this problem, we propose SkyMask, a new attack-agnostic robust FL system that is the first to leverage fine-grained learnable masks to identify malicious model updates at the parameter level. Specifically, the FL server freezes the model updates uploaded by clients, multiplies them element-wise with parameter-level masks, and trains the masks over a small clean dataset (i.e., a root dataset) to learn the subtle differences between benign and malicious model updates in a high-dimensional space. Our extensive experiments involve different models on three public datasets under state-of-the-art (SOTA) attacks; the results show that SkyMask achieves up to 14% higher test accuracy than SOTA defense strategies under the same attacks, and that it successfully defends against attacks even when the fraction of malicious clients reaches 80%. Code is available at https://github.com/KoalaYan/SkyMask.
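The mask-training idea from the abstract can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the model is a toy linear regression, the dataset sizes, learning rate, client behavior, and the thresholding rule are all illustrative assumptions. It only shows the core mechanism: client updates are frozen, one learnable mask value is attached to each parameter of each update, and the masks are trained by gradient descent on a small clean root dataset, after which clients whose masks are driven toward zero stand out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression FL round: true weights and a small clean
# "root dataset" held by the server (all values here are illustrative).
d = 5
w_true = np.array([1.0, -1.5, 2.0, 0.8, -1.2])
X_root = rng.normal(size=(32, d))
y_root = X_root @ w_true

# Global model starts at zero. Four benign clients upload noisy updates
# toward w_true; one malicious client uploads a scaled, sign-flipped update.
w_global = np.zeros(d)
updates = np.stack(
    [w_true - w_global + 0.1 * rng.normal(size=d) for _ in range(4)]
    + [-5.0 * (w_true - w_global)]
)                                           # shape: (n_clients, d)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One mask logit per client per parameter; the client updates themselves
# stay frozen -- only the mask logits are trained.
logits = np.zeros_like(updates)
lr, n_clients = 0.5, len(updates)

for _ in range(200):
    m = sigmoid(logits)                     # masks in (0, 1)
    w = w_global + (m * updates).mean(axis=0)
    err = X_root @ w - y_root               # root-dataset residuals
    grad_w = X_root.T @ err / len(y_root)   # dLoss/dw
    # Chain rule: dLoss/dlogit = dLoss/dw * update / n_clients * m * (1 - m)
    grad_logits = grad_w[None, :] * updates / n_clients * m * (1.0 - m)
    logits -= lr * grad_logits              # update the masks only

# Clients whose masks were driven toward zero are flagged as suspicious.
scores = sigmoid(logits).mean(axis=1)
flagged = np.where(scores < scores.mean())[0]
print("mask scores:", scores.round(2))
print("flagged clients:", flagged)
```

In this sketch the malicious update raises the root loss, so gradient descent pushes its mask values toward zero while benign masks grow toward one; the paper operates on neural-network updates in the same spirit, separating benign from malicious clients by their learned parameter-level masks.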




Acknowledgements

This work was partially supported by the National Key R&D Program of China (2022YFB4402102), the Shanghai Key Laboratory of Scalable Computing and Systems, the HighTech Support Program from STCSM (No. 22511106200), and Intel Corporation (UFunding 12679). Tao Song is the corresponding author.

Author information

Correspondence to Tao Song.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 635 KB)


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Yan, P. et al. (2025). SKYMASK: Attack-Agnostic Robust Federated Learning with Fine-Grained Learnable Masks. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15077. Springer, Cham. https://doi.org/10.1007/978-3-031-72655-2_17


  • DOI: https://doi.org/10.1007/978-3-031-72655-2_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72654-5

  • Online ISBN: 978-3-031-72655-2

  • eBook Packages: Computer Science (R0)
