FedMVAE uses multiple variational autoencoder models to detect and exclude malicious model updates from a spatial perspective. Moreover, to handle poisoning ...
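The snippet above only names the idea, so here is a minimal sketch of filtering client updates by autoencoder reconstruction error, assuming flattened update vectors and substituting a plain autoencoder for FedMVAE's multiple VAEs; the network size, training loop, and median-plus-2-std threshold are illustrative assumptions, not the paper's design.

```python
# Illustrative sketch only: drop client updates whose autoencoder
# reconstruction error is anomalously high, standing in for the
# VAE-based spatial detection described for FedMVAE. The plain
# (non-variational) autoencoder and the threshold rule are assumptions.
import torch
import torch.nn as nn

def filter_updates(updates: torch.Tensor, epochs: int = 200) -> torch.Tensor:
    """updates: float tensor of shape (num_clients, dim), flattened model updates.
    Returns a boolean mask of updates kept for aggregation."""
    dim = updates.shape[1]
    ae = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, dim))
    opt = torch.optim.Adam(ae.parameters(), lr=1e-2)
    for _ in range(epochs):                      # fit the autoencoder on all updates
        opt.zero_grad()
        loss = ((ae(updates) - updates) ** 2).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        err = ((ae(updates) - updates) ** 2).mean(dim=1)
    # keep updates whose error is no more than 2 std above the median error
    thresh = err.median() + 2 * err.std()
    return err <= thresh

# usage: mask = filter_updates(torch.stack(client_update_vectors))
#        aggregate = torch.stack(client_update_vectors)[mask].mean(dim=0)
```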
Unlike existing works that struggle to defend against poisoning attacks from a spatial perspective, ... federated learning against model poisoning attacks via ...
Feb 13, 2024 · The authors of [19] discuss the vulnerability of FL to poisoning attacks, given its distributed nature. Byzantine and backdoor attacks ...
In poisoning attacks, malicious clients can poison local model updates by injecting poisoned instances into the training data (i.e., data poisoning attacks [14, ...
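To make the data-poisoning notion above concrete, below is a minimal label-flipping sketch in which a malicious client relabels a fraction of its local samples before training its local model. The 10-class setup, flip rule, and flip fraction are illustrative assumptions, not taken from the cited works.

```python
# Minimal label-flipping sketch of a data-poisoning attack: a malicious
# client corrupts a fraction of its local labels before local training.
# All parameters here are illustrative assumptions.
import numpy as np

def flip_labels(y: np.ndarray, flip_fraction: float = 0.3,
                num_classes: int = 10, seed: int = 0) -> np.ndarray:
    """Return a copy of integer labels y with a random fraction flipped
    to a different class."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(flip_fraction * len(y)), replace=False)
    # shift each selected label to another class, e.g. c -> (c + 1) mod C
    y_poisoned[idx] = (y_poisoned[idx] + 1) % num_classes
    return y_poisoned

# usage: y_local = flip_labels(y_local)  # then train the local model as usual
```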
Defending Against Poisoning Attacks in Federated Learning with Blockchain ... Choo, “Technical, temporal, and spatial research challenges and opportunities ...
In this paper, we propose a novel defense mechanism, MinVar, to counter the model poisoning attacks in FL from a drastically different perspective.
Jul 2, 2023 · Defending Against Poisoning Attacks in Federated Learning with Blockchain, by Nanqing Dong and 5 other authors.
This paper proposes a defense scheme named CONTRA to defend against poisoning attacks, e.g., label-flipping and backdoor attacks, in FL systems, ...
Jul 19, 2024 · Defending Against Poisoning Attacks in Federated Learning With Blockchain. July 2024; IEEE Transactions on Artificial Intelligence PP(99):1-13.
Poison frogs! Targeted clean-label poisoning attacks on neural networks. In Advances in NeurIPS, Vol. 31.