Aug 31, 2022 · We design the first latent backdoor attacks against incremental learning. We propose two novel techniques that can effectively and stealthily embed a ... Such a backdoor can only be activated when the pre-trained model is extended to a downstream model with incremental learning, and it achieves a very high attack success rate.
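The idea behind such latent backdoors is that the trigger is tied to an internal representation rather than to the final classifier, so it stays dormant until downstream training attaches new output classes on top of the poisoned features. A minimal sketch of latent-space trigger crafting, not the cited papers' actual method: it assumes a frozen PyTorch feature extractor and a precomputed target-class feature centroid, and the names `craft_latent_trigger`, `source_images`, and `target_features` are illustrative.

```python
import torch
import torch.nn as nn

def craft_latent_trigger(feature_extractor: nn.Module,
                         source_images: torch.Tensor,    # (N, C, H, W), values in [0, 1]
                         target_features: torch.Tensor,  # (D,) target-class feature centroid
                         steps: int = 200,
                         lr: float = 0.05) -> torch.Tensor:
    """Optimize an additive trigger so that triggered inputs land near the
    target class's centroid in the frozen extractor's latent space."""
    feature_extractor.eval()
    trigger = torch.zeros_like(source_images[:1], requires_grad=True)
    opt = torch.optim.Adam([trigger], lr=lr)
    for _ in range(steps):
        poisoned = (source_images + trigger).clamp(0.0, 1.0)
        feats = feature_extractor(poisoned).flatten(1)
        # Pull the triggered inputs toward the target-class centroid.
        loss = nn.functional.mse_loss(feats, target_features.expand_as(feats))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return trigger.detach()
```

Once a downstream learner adds new classes on top of these (poisoned) features via incremental learning, inputs stamped with the trigger map onto the attacker's target class, which is why the backdoor only activates after the model is extended.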
May 28, 2023 · In this paper, we empirically reveal the high vulnerability of 11 typical incremental learners to poisoning-based backdoor attacks across 3 learning scenarios.
Dec 17, 2024 · We highlight three critical challenges in executing backdoor attacks on incremental learners and propose corresponding solutions: (1) ...
Jul 1, 2024 · ... One potential threat is the backdoor attack, which manipulates neural networks to exhibit the attacker's desired behavior when the ...
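A minimal sketch of the standard poisoning-based formulation this snippet alludes to (a BadNets-style patch trigger), assuming image tensors in [0, 1] with shape (N, C, H, W); the helper name `poison_batch` is illustrative, not from the cited work.

```python
import torch

def poison_batch(images: torch.Tensor, labels: torch.Tensor,
                 target_class: int, poison_frac: float = 0.1,
                 patch_size: int = 4) -> tuple[torch.Tensor, torch.Tensor]:
    """Stamp a solid-white square trigger onto a random fraction of a batch
    and relabel those samples to the attacker's target class."""
    images, labels = images.clone(), labels.clone()
    n_poison = max(1, int(poison_frac * images.size(0)))
    idx = torch.randperm(images.size(0))[:n_poison]
    images[idx, :, -patch_size:, -patch_size:] = 1.0  # bottom-right corner patch
    labels[idx] = target_class
    return images, labels
```

Training on batches processed this way teaches the network the attacker's desired behavior: any input carrying the patch is classified as `target_class`, while clean inputs behave normally.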
We propose a task-targeted data poisoning attack that forces the neural network to forget previously learned knowledge while the attack samples remain ...
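One way to read a task-targeted forgetting attack: if the incremental learner relies on a replay/exemplar buffer, corrupting the buffered samples of one old class turns rehearsal against that class. The snippet does not specify the actual mechanism, so this sketch is an assumption; `poison_replay_buffer`, `victim_class`, and the label-flipping strategy are all illustrative.

```python
import torch

def poison_replay_buffer(exemplar_images: torch.Tensor,
                         exemplar_labels: torch.Tensor,
                         victim_class: int,
                         num_classes: int) -> tuple[torch.Tensor, torch.Tensor]:
    """Flip the labels of one previously learned class's replay exemplars
    so that rehearsal actively erases that class instead of preserving it."""
    labels = exemplar_labels.clone()
    mask = labels == victim_class
    # Reassign the victim class's exemplars to random wrong labels.
    wrong = torch.randint(0, num_classes, (int(mask.sum()),))
    wrong = torch.where(wrong == victim_class, (wrong + 1) % num_classes, wrong)
    labels[mask] = wrong
    return exemplar_images, labels
```

Because the images themselves are untouched, such poisoned exemplars look like ordinary replay data, which matches the snippet's point that the attack samples remain inconspicuous.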
Sep 26, 2024 · In this work, we provide a unifying framework to study the process of backdoor learning through the lens of incremental learning and influence functions.
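The influence-function view referenced here typically follows Koh and Liang's first-order approximation of how upweighting a single training point $z$ (such as a poisoned sample) changes the loss on a test point $z_{\text{test}}$. A standard statement of that approximation, which the snippet's exact formulation may refine:

```latex
\mathcal{I}(z, z_{\text{test}})
  = -\,\nabla_{\theta} L(z_{\text{test}}, \hat{\theta})^{\top}
      H_{\hat{\theta}}^{-1}\,
      \nabla_{\theta} L(z, \hat{\theta}),
\qquad
H_{\hat{\theta}} = \frac{1}{n}\sum_{i=1}^{n}
      \nabla_{\theta}^{2} L(z_i, \hat{\theta})
```

Under this lens, a poisoned sample is effective exactly when its influence on triggered test inputs is large, which is what connects backdoor learning to incremental updates of the model.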
May 26, 2024 · In Section 3, we introduce the continual backdoor threat model, discuss backdoor challenges, and propose our prompt-based continual backdoor AOP.