
HAD-GAN: A Human-perception Auxiliary Defense GAN to Defend Adversarial Examples

by Wanting Yu, Hongyi Yu, Lingyun Jiang, Mengli Zhang, Kai Qiao, Linyuan Wang, Bin Yan

Released as an article.

2021  

Abstract

Adversarial examples reveal the vulnerability and unexplained nature of neural networks. Studying the defense against adversarial examples is therefore of considerable practical importance. Most adversarial examples that cause networks to misclassify are undetectable by humans. In this paper, we propose a defense model that trains the classifier into a human-perception classification model with shape preference. The proposed model, comprising a texture transfer network (TTN) and an auxiliary defense generative adversarial network (GAN), is called Human-perception Auxiliary Defense GAN (HAD-GAN). The TTN extends the texture samples of a clean image and helps the classifier focus on its shape. The GAN forms a training framework for the model and generates the necessary images. A series of experiments conducted on MNIST, Fashion-MNIST and CIFAR10 shows that the proposed model outperforms state-of-the-art defense methods in terms of network robustness. The model also demonstrates a significant improvement in the capability to defend against adversarial examples.
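The abstract describes two learned components wired around an ordinary classifier: a texture transfer network that re-textures clean images while preserving their shape, and a GAN framework that ties generation and classification together during training. The sketch below shows one way such a training step could be assembled in PyTorch; every module, layer size, style-code dimension, and loss term here is an assumption made for illustration only, not the architecture defined in the paper (see arXiv:1909.07558 for the actual design).

# Minimal sketch of a HAD-GAN-style training step, under the assumptions above.
import torch
import torch.nn as nn

class TextureTransferNet(nn.Module):
    """Hypothetical TTN: re-renders a clean image with a new texture,
    conditioned on a random style code, while keeping its shape."""
    def __init__(self, channels=3, style_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + style_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, style):
        # Broadcast the style code over the spatial grid and concatenate.
        s = style[:, :, None, None].expand(-1, -1, x.size(2), x.size(3))
        return self.net(torch.cat([x, s], dim=1))

class Discriminator(nn.Module):
    """Hypothetical auxiliary discriminator: scores the realism of
    texture-transferred images so the classifier sees plausible inputs."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3))

def training_step(ttn, disc, classifier, x, y, opt_ttn, opt_disc, opt_cls):
    """One sketch step: the TTN generates texture variants of clean images,
    the discriminator scores them, and the classifier is trained on both the
    clean and texture-varied views so it must rely on shape, not texture."""
    style = torch.randn(x.size(0), 8)
    x_tex = ttn(x, style)
    bce = nn.BCEWithLogitsLoss()
    ce = nn.CrossEntropyLoss()

    # Discriminator: real images vs. texture-transferred images.
    opt_disc.zero_grad()
    d_loss = bce(disc(x), torch.ones(x.size(0))) + \
             bce(disc(x_tex.detach()), torch.zeros(x.size(0)))
    d_loss.backward()
    opt_disc.step()

    # TTN: fool the discriminator while keeping the label recognizable.
    opt_ttn.zero_grad()
    g_loss = bce(disc(x_tex), torch.ones(x.size(0))) + ce(classifier(x_tex), y)
    g_loss.backward()
    opt_ttn.step()

    # Classifier: learn a shape-preferring decision from both views.
    opt_cls.zero_grad()
    c_loss = ce(classifier(x), y) + ce(classifier(ttn(x, style).detach()), y)
    c_loss.backward()
    opt_cls.step()

The point of the sketch is only the data flow implied by the abstract: clean image, texture variant from the TTN, and a classifier trained on both views inside a GAN training loop; the paper's own loss formulation and network designs should be consulted for the real method.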

Archived Content

There are no accessible files associated with this release. You can check other releases of this work for an accessible version.

"Dark" Preservation Only

Type  article
Stage   submitted
Date   2021-05-09
Version   v4
Language   en
arXiv  1909.07558v4
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: 18fe7ca6-5924-47e4-bde3-688eaea55797