Abstract
Federated Learning (FL) is becoming a popular paradigm for leveraging distributed data while preserving data privacy. However, owing to its distributed nature, FL systems are vulnerable to Byzantine attacks, in which compromised clients corrupt the global model by uploading malicious model updates. With the development of layer-level and parameter-level fine-grained attacks, the stealthiness and effectiveness of such attacks have improved significantly. Existing defense mechanisms analyze only the model-level statistics of the individual model updates uploaded by clients, which makes them ineffective against fine-grained attacks: they either fail to notice the manipulation or overreact and discard benign updates. To address this problem, we propose SkyMask, a new attack-agnostic robust FL system that is the first to leverage fine-grained learnable masks to identify malicious model updates at the parameter level. Specifically, the FL server freezes the model updates uploaded by clients, multiplies them with parameter-level masks, and trains the masks over a small clean dataset (i.e., a root dataset) to learn the subtle difference between benign and malicious model updates in a high-dimensional space. Our extensive experiments with different models on three public datasets under state-of-the-art (SOTA) attacks show that SkyMask achieves up to 14% higher test accuracy than SOTA defense strategies under the same attacks, and successfully defends against attacks even when the fraction of malicious clients is as high as 80%. Code is available at https://github.com/KoalaYan/SkyMask.
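The mask-training step described above lends itself to a short illustration. The sketch below is a minimal, hypothetical PyTorch rendering of the idea, not the authors' implementation (see the linked repository for that): client updates are frozen, each client gets a learnable parameter-level mask, and the masks are optimized against the root dataset so that updates which hurt root-set loss are driven toward zero. All names here (`train_masks`, `root_loader`, the sigmoid relaxation of a binary mask, and the crude mean-mask score) are assumptions made for illustration.

```python
# Minimal, hypothetical sketch of the learnable-mask idea from the
# abstract (illustration only -- not the authors' implementation).
import torch
import torch.nn.functional as F


def train_masks(client_updates, global_params, model, root_loader,
                epochs=5, lr=0.1):
    """Learn one parameter-level mask per client over the root dataset.

    client_updates: list of dicts {name: update tensor}, kept frozen.
    global_params:  dict {name: tensor} of the current global model
                    (e.g., built from model.named_parameters()).
    model:          the shared architecture, called functionally below.
    root_loader:    DataLoader over the small clean root dataset.
    """
    n = len(client_updates)
    # One learnable mask logit per client, shaped like the model itself.
    masks = [{k: torch.zeros_like(v, requires_grad=True)
              for k, v in global_params.items()} for _ in range(n)]
    opt = torch.optim.SGD([t for mk in masks for t in mk.values()], lr=lr)

    for _ in range(epochs):
        for x, y in root_loader:
            # Candidate model: global weights plus the mask-weighted
            # average of the frozen client updates (sigmoid relaxation
            # of a binary mask is an assumption of this sketch).
            candidate = {}
            for k, g in global_params.items():
                agg = sum(torch.sigmoid(masks[i][k]) * client_updates[i][k]
                          for i in range(n)) / n
                candidate[k] = g + agg
            out = torch.func.functional_call(model, candidate, (x,))
            loss = F.cross_entropy(out, y)
            opt.zero_grad()
            loss.backward()  # gradients flow only into the masks
            opt.step()

    # Crude stand-in for the detection step: a client whose update hurts
    # root-set loss tends toward a near-zero learned mask.
    scores = [torch.cat([torch.sigmoid(t).flatten()
                         for t in m.values()]).mean().item() for m in masks]
    return masks, scores
```

A downstream detector could then threshold or cluster `scores` (for example, dropping clients whose scores fall far below the median) before aggregating the surviving updates; the paper identifies malicious clients from the learned masks in a high-dimensional space, for which the per-client mean above is only a stand-in.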
Acknowledgements
This work was partially supported by the National Key R&D Program of China (2022YFB4402102), the Shanghai Key Laboratory of Scalable Computing and Systems, the HighTech Support Program from STCSM (No. 22511106200), and Intel Corporation (UFunding 12679). Tao Song is the corresponding author.
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Yan, P. et al. (2025). SkyMask: Attack-Agnostic Robust Federated Learning with Fine-Grained Learnable Masks. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15077. Springer, Cham. https://doi.org/10.1007/978-3-031-72655-2_17
Print ISBN: 978-3-031-72654-5
Online ISBN: 978-3-031-72655-2