Generative Transferable Adversarial Attack

Published: 25 February 2020

Abstract

Despite their superior performance on computer vision tasks, deep neural networks are vulnerable to adversarial examples: slightly perturbed inputs that can mislead trained models. Moreover, adversarial examples are often transferable, i.e., adversaries crafted for one model can attack another model. Most existing adversarial attack methods are iterative or optimization-based, and thus take a relatively long time to craft adversarial examples. Besides, the crafted examples usually underfit or overfit the source model, which reduces their transferability to other target models. In this paper, we introduce the Generative Transferable Adversarial Attack (GTAA), which generates highly transferable adversarial examples efficiently. GTAA leverages a generator network to produce adversarial examples in a single forward pass. To further enhance transferability, we train the generator with an objective of making the intermediate features of the generated examples diverge from those of their original versions. Extensive experiments on the challenging ILSVRC2012 dataset show that our method achieves impressive performance in both white-box and black-box attacks. In addition, we verify that our method is even faster than one-step gradient-based methods, and that the generator converges extremely rapidly during training.
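The core idea described above — a generator that produces a bounded perturbation in one forward pass, trained so that intermediate features of the adversarial input move away from those of the clean input — can be illustrated with a minimal sketch. This is an assumed toy formulation, not the paper's exact architecture or loss: `feature_layer`, `generator`, and `divergence_loss` are hypothetical stand-ins, and the tanh-scaling trick for enforcing the L-infinity budget is one common choice, not necessarily the authors'.

```python
import numpy as np

def feature_layer(x, W):
    """Toy stand-in for a fixed intermediate layer of the source model (linear + ReLU)."""
    return np.maximum(W @ x, 0.0)

def generator(x, V, eps):
    """Toy one-forward-pass generator: tanh keeps the perturbation within [-eps, eps]."""
    return x + eps * np.tanh(V @ x)

def divergence_loss(x, V, W, eps):
    """Negative distance between clean and adversarial features.

    Minimizing this loss (over the generator weights V) pushes the
    intermediate features of the generated example away from those of
    the original input -- the feature-divergence objective sketched
    in the abstract.
    """
    f_clean = feature_layer(x, W)
    f_adv = feature_layer(generator(x, V, eps), W)
    return -np.linalg.norm(f_adv - f_clean)

rng = np.random.default_rng(0)
x = rng.normal(size=8)          # toy "image" as a flat vector
W = rng.normal(size=(4, 8))     # frozen source-model layer
V = rng.normal(size=(8, 8))     # generator weights (trainable in practice)
eps = 8.0 / 255.0               # L-infinity perturbation budget

adv = generator(x, V, eps)
# The perturbation respects the budget by construction of the tanh scaling.
print(np.max(np.abs(adv - x)) <= eps + 1e-9)
```

In a real implementation the generator would be a convolutional encoder-decoder trained by gradient descent on this loss while the source model stays frozen, so that at test time crafting an adversarial example costs only a single forward pass.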


Published In

ICVIP '19: Proceedings of the 3rd International Conference on Video and Image Processing
December 2019
270 pages
ISBN:9781450376822
DOI:10.1145/3376067

In-Cooperation

  • Shanghai Jiao Tong University
  • Xidian University
  • Tianjin University

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Adversarial attack
  2. Feature representations
  3. Generative model
  4. Image-to-image translation
  5. Transferability

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

ICVIP 2019
