In order to reduce the computational cost, we propose an annealing mechanism, Amata, that reduces the overhead associated with adversarial training.
This paper proposes a simple modification to adversarial training in order to improve the robustness of the trained models.
An Annealing Mechanism for Adversarial Training Acceleration
Such maliciously crafted inputs are known as adversarial attacks. To counter them, adversarial training, formulated as a form of robust optimization, has been demonstrated to be effective.
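The robust-optimization view of adversarial training mentioned above is commonly written as the following min–max problem (this is the standard formulation from the adversarial-training literature, not a quote from the paper): the inner maximization finds a worst-case perturbation within an ε-ball, and the outer minimization trains the network against it.

```latex
\min_{\theta} \; \mathbb{E}_{(x,y)\sim \mathcal{D}}
\left[ \max_{\|\delta\| \le \epsilon} \, \ell\big(f_{\theta}(x+\delta),\, y\big) \right]
```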
Algorithm 1 Amata: an annealing mechanism for adversarial training acceleration. Input: T: training epochs; Kmin: the minimum number of adversarial ...
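The annealing idea behind Amata — spend few inner adversarial steps early in training and more later — can be sketched as follows. Note that `k_max`, the linear ramp, and the function name are illustrative assumptions for this sketch; the paper's actual schedule and step-size adjustment may differ.

```python
# Hypothetical sketch of annealing the number of inner adversarial (PGD)
# steps over training: cheap, coarse attacks early; stronger attacks late.
# k_max and the linear schedule are assumptions, not the paper's exact rule.

def annealed_pgd_steps(epoch: int, total_epochs: int,
                       k_min: int = 1, k_max: int = 10) -> int:
    """Linearly anneal the PGD step count from k_min up to k_max."""
    frac = epoch / max(total_epochs - 1, 1)  # progress in [0, 1]
    return round(k_min + frac * (k_max - k_min))

# Example: a 30-epoch run starts at k_min steps and ends at k_max steps.
schedule = [annealed_pgd_steps(e, 30) for e in range(30)]
```

In a training loop, the returned step count would set the number of PGD iterations used to craft adversarial examples for that epoch, so early epochs cost far less than a fixed-K schedule.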
The proposed Amata is provably convergent, well-motivated from the lens of optimal control theory, and can be combined with existing acceleration methods.
Dec 15, 2020 · Despite their empirical success in various domains, deep neural networks have been shown to be vulnerable to maliciously perturbed inputs.
Amata: An Annealing Mechanism for Adversarial Training Acceleration. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10691–10699.