Adversarial training (AT) aims to improve models' robustness against adversarial attacks by mixing clean data and adversarial examples (AEs) into training. Most existing AT approaches can be grouped into restricted and unrestricted methods.
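The snippet below is a minimal, generic sketch of this mix-clean-and-adversarial training loop, not the method of any specific paper cited here: a toy logistic regression is trained on a mixture of clean points and FGSM-style AEs crafted at the current weights. The data, step count, and `eps` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs.
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

w, b = np.zeros(2), 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(X, y, w, b, eps=0.1):
    """FGSM-style AEs: step the input along the sign of the loss gradient.

    For logistic loss, the gradient w.r.t. the input is (p - y) * w.
    """
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad_x)

for step in range(200):
    X_adv = fgsm(X, y, w, b)              # craft AEs at the current weights
    X_mix = np.vstack([X, X_adv])         # mix clean data and AEs
    y_mix = np.concatenate([y, y])
    p = sigmoid(X_mix @ w + b)
    w -= 0.5 * X_mix.T @ (p - y_mix) / len(y_mix)
    b -= 0.5 * np.mean(p - y_mix)

acc_clean = np.mean((sigmoid(X @ w + b) > 0.5) == y)
acc_adv = np.mean((sigmoid(fgsm(X, y, w, b) @ w + b) > 0.5) == y)
print(acc_clean, acc_adv)
```

Because the AEs are regenerated at every step from the current weights, the model is always trained against attacks on its latest parameters, which is the point of mixing AEs into training rather than pre-generating them once.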
We analyze the existing pruning methods and find that the robustness of the pruned models varies drastically with different pruning processes, and the ...
Apr 10, 2020 · Abstract: Adversarial training (AT) aims to improve the robustness of deep learning models by mixing clean data and adversarial examples (AEs).
Madaan et al. [43] designed an unstructured Bayesian pruning framework based on vulnerability suppression. Xie et al. [44] used an unstructured pruning ...
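As a point of reference for the unstructured pruning these works build on, here is a minimal generic sketch of gradual magnitude pruning (an illustrative assumption, not the Bayesian or vulnerability-suppression method of [43] or [44]): the smallest-magnitude weights are zeroed at a schedule of increasing sparsity levels.

```python
import numpy as np

rng = np.random.default_rng(1)

def magnitude_prune(W, sparsity):
    """Unstructured pruning: zero out the smallest-magnitude weights.

    Returns the pruned weights and the boolean keep-mask.
    """
    k = int(sparsity * W.size)
    if k == 0:
        return W.copy(), np.ones_like(W, dtype=bool)
    thresh = np.sort(np.abs(W), axis=None)[k - 1]
    mask = np.abs(W) > thresh
    return W * mask, mask

W = rng.normal(size=(4, 4))
# Gradually raise sparsity instead of pruning in one shot; weights
# zeroed at an earlier step stay among the smallest and remain pruned.
for s in (0.25, 0.5, 0.75):
    W, mask = magnitude_prune(W, s)

print(np.count_nonzero(W))
```

In an actual robust-pruning pipeline the mask would be interleaved with (adversarial) fine-tuning between sparsity steps; the gradual schedule is what the "gradually pruning" in the title above refers to.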
Abstract: With the growth of interest in the attack and defense of deep neural networks, researchers have increasingly focused on the robustness of their ...
Blind Adversarial Pruning: Towards The Comprehensive Robust Models With Gradually Pruning Against Blind Adversarial Attacks · Haidong Xie, Lixin Qian, Xueshuang ...
(2018) shows that this adversarial training framework does not rely on obfuscated gradients and truly increases model robustness; gradient-based attacks with ...