Backdoor attacks and defenses in federated learning: Survey, challenges and future research directions

Published: 01 February 2024

Abstract

Federated learning (FL) is a machine learning (ML) approach that enables model training on distributed data without compromising personal privacy. In FL, the training data held by participants are frequently heterogeneous in distribution, which makes it difficult for the orchestration server to assess the reliability of each local model update. This challenge leaves FL susceptible to a range of risks, among which the backdoor attack stands out as one of the most serious. In a backdoor attack, malicious clients submit poisoned updates that implant hidden functionality into the targeted model, causing the global model to misbehave on attacker-chosen inputs while appearing normal on all others. Although backdoor attacks have received significant attention for their potential impact on practical deep learning applications, their exploration in the FL setting remains limited. This survey seeks to close this gap by offering a comprehensive examination of prevailing backdoor attack tactics and defenses in the context of FL, together with a detailed analysis of the diverse approaches in this landscape. We also discuss the challenges and potential future research directions for attacks and defenses in FL.
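To make the attack-and-defense setting described above concrete, here is a minimal, illustrative sketch (not the paper's method) of a single FL round on a toy linear model in NumPy: honest clients run local SGD, one malicious client trains on trigger-stamped, relabeled data and boosts its update so it survives averaging (in the spirit of the model-replacement attack of Bagdasaryan et al., 2020), and the server aggregates either with plain FedAvg or with per-client norm clipping, a commonly studied defense (e.g., Sun et al., 2019). The trigger pattern and the `scale` and `clip` parameters are hypothetical choices for illustration only.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.01, epochs=1):
    """Honest client: a few steps of SGD on a toy linear regression model."""
    w = weights.copy()
    for _ in range(epochs):
        for x, y in zip(data, labels):
            grad = (w @ x - y) * x        # gradient of 0.5 * (w.x - y)^2
            w -= lr * grad
    return w

def backdoored_update(global_weights, data, labels, target=1.0, scale=10.0):
    """Malicious client: stamp a trigger onto its inputs, relabel them to the
    attacker's target, train on the poisoned set, then boost the resulting
    delta so it survives averaging (model-replacement-style scaling)."""
    poisoned = data.copy()
    poisoned[:, :3] = 1.0                 # hypothetical trigger: clamp first 3 features
    poisoned_labels = np.full(len(labels), target)
    w_poison = local_update(global_weights, poisoned, poisoned_labels)
    return global_weights + scale * (w_poison - global_weights)

def fedavg(updates):
    """Server: plain FedAvg, an unweighted mean over the submitted models."""
    return np.mean(updates, axis=0)

def clipped_fedavg(global_weights, updates, clip=1.0):
    """Server: a defense sketch that bounds each client's update norm before
    averaging, so a single boosted update cannot dominate the round."""
    deltas = [u - global_weights for u in updates]
    clipped = [d * min(1.0, clip / (np.linalg.norm(d) + 1e-12)) for d in deltas]
    return global_weights + np.mean(clipped, axis=0)

# One simulated round: nine honest clients and one backdoored client.
rng = np.random.default_rng(0)
dim, n = 10, 32
global_w = np.zeros(dim)
updates = [local_update(global_w, rng.normal(size=(n, dim)),
                        rng.normal(size=n)) for _ in range(9)]
updates.append(backdoored_update(global_w, rng.normal(size=(n, dim)),
                                 rng.normal(size=n)))
print("FedAvg norm:        ", np.linalg.norm(fedavg(updates)))
print("Clipped FedAvg norm:", np.linalg.norm(clipped_fedavg(global_w, updates)))
```

In the printout, the boosted update inflates the norm of the plain FedAvg model, while the clipped aggregator caps every client's per-round influence at `clip`, which is precisely the tension between model-replacement attacks and norm-bounding defenses that the survey examines.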

Cited By

  • GANcrop: A Contrastive Defense Against Backdoor Attacks in Federated Learning. In: Proceedings of the 2024 5th International Conference on Computing, Networks and Internet of Things, 2024, pp. 606–612. https://doi.org/10.1145/3670105.3670211
  • A bipolar neutrosophic combined compromise solution-based hybrid model for identifying blockchain application barriers and Benchmarking consensus algorithms. Engineering Applications of Artificial Intelligence 133 (PD), 2024. https://doi.org/10.1016/j.engappai.2024.108343
  • IBA. In: Proceedings of the 37th International Conference on Neural Information Processing Systems, 2023, pp. 66364–66376. https://doi.org/10.5555/3666122.3669018

Published In

Engineering Applications of Artificial Intelligence, Volume 127, Issue PA, January 2024, 1599 pages

Publisher

Pergamon Press, Inc., United States

      Author Tags

      1. Federated learning
      2. Decentralized learning
      3. Backdoor attacks
      4. Backdoor defenses
      5. Systematic literature review

      Qualifiers

      • Short-survey
