
HAD-GAN: A Human-perception Auxiliary Defense GAN to Defend Adversarial Examples

by Wanting Yu, Hongyi Yu, Lingyun Jiang, Mengli Zhang, Kai Qiao, Linyuan Wang, Bin Yan

Released as an article.

2020  

Abstract

Adversarial examples reveal the vulnerability and poorly understood nature of neural networks, so studying defenses against them is of considerable practical importance. Most adversarial examples that cause networks to misclassify are imperceptible to humans. In this paper, we propose a defense model that trains the classifier into a human-perception classification model with a shape preference. The proposed model, comprising a texture transfer network (TTN) and an auxiliary defense generative adversarial network (GAN), is called the Human-perception Auxiliary Defense GAN (HAD-GAN). The TTN extends the texture samples of a clean image and helps the classifier focus on its shape, while the GAN provides a training framework for the model and generates the necessary images. Experiments on MNIST, Fashion-MNIST and CIFAR10 show that the proposed model outperforms state-of-the-art defense methods in network robustness and demonstrates a significant improvement in its ability to defend against adversarial examples.
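The abstract outlines the overall architecture: a texture transfer network (TTN) that supplies texture-varied copies of clean images, and an auxiliary GAN that shapes the training of a shape-preferring classifier. The sketch below is only a conceptual illustration of that data flow in PyTorch; the module definitions, the noise-based stand-in for the TTN, and all hyperparameters are assumptions for illustration and do not reproduce the authors' implementation.

```python
# Conceptual sketch of the HAD-GAN training idea described in the abstract:
# a TTN supplies texture-varied copies of each clean image, and an auxiliary
# discriminator distinguishes clean from texture-transferred images, while the
# classifier must label both correctly so that texture stops being a reliable
# cue and shape dominates. All details here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallClassifier(nn.Module):
    """Toy CNN classifier for 32x32 RGB inputs (e.g. CIFAR10-sized images)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

class Discriminator(nn.Module):
    """Auxiliary discriminator: clean vs. texture-transferred images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

def fake_ttn(x):
    """Stand-in for the texture transfer network: perturb high-frequency texture
    while keeping coarse shape. The real TTN is a learned network; this noise
    proxy only illustrates where texture-extended samples enter training."""
    return (x + 0.2 * torch.randn_like(x)).clamp(0, 1)

def train_step(clf, disc, opt_clf, opt_disc, x, y):
    x_tex = fake_ttn(x)  # texture-extended copies of the clean batch
    # Discriminator step: real = clean images, fake = texture-transferred ones.
    d_loss = (F.binary_cross_entropy_with_logits(disc(x), torch.ones(x.size(0), 1))
              + F.binary_cross_entropy_with_logits(disc(x_tex.detach()),
                                                   torch.zeros(x.size(0), 1)))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()
    # Classifier step: correct labels on both versions push it toward shape cues.
    c_loss = F.cross_entropy(clf(x), y) + F.cross_entropy(clf(x_tex), y)
    opt_clf.zero_grad(); c_loss.backward(); opt_clf.step()
    return d_loss.item(), c_loss.item()

if __name__ == "__main__":
    clf, disc = SmallClassifier(), Discriminator()
    opt_clf = torch.optim.Adam(clf.parameters(), lr=1e-3)
    opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)
    x = torch.rand(8, 3, 32, 32)            # dummy batch standing in for CIFAR10
    y = torch.randint(0, 10, (8,))
    print(train_step(clf, disc, opt_clf, opt_disc, x, y))
```

The point the sketch tries to capture is that the classifier is penalized on both the clean and the texture-transferred version of every image, so texture ceases to be a discriminative feature and the network is nudged toward a shape preference.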

Archived Files and Locations

application/pdf  1.5 MB
file_utxq4aiozzdwzpcxlvg2dz7fl4
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2020-07-25
Version   v3
Language   en
arXiv  1909.07558v3
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 52fead21-3be8-4ea0-84ec-0844abc74605