
Robust Single-Step Adversarial Training with Regularizer

Conference paper

Pattern Recognition and Computer Vision (PRCV 2021)

Part of the book series: Lecture Notes in Computer Science (LNIP), volume 13022


Abstract

The high cost in training time caused by multi-step adversarial example generation is a major challenge in adversarial training. Previous methods try to reduce the computational burden of adversarial training with single-step adversarial example generation schemes, which effectively improve efficiency but also introduce the problem of "catastrophic overfitting", where the robust accuracy against the Fast Gradient Sign Method (FGSM) reaches nearly 100% while the robust accuracy against Projected Gradient Descent (PGD) suddenly drops to 0% within a single epoch. To address this issue, we focus on the single-step adversarial training scheme and propose a novel Fast Gradient Sign Method with PGD Regularization (FGSMPR) that boosts the efficiency of adversarial training without catastrophic overfitting. Our core observation is that single-step adversarial training cannot simultaneously learn robust internal representations of FGSM and PGD adversarial examples. We therefore design a PGD regularization term that encourages the model to produce similar embeddings for FGSM and PGD adversarial examples. Experiments demonstrate that the proposed method can train a deep network robust to \(L_{\infty }\)-perturbations with FGSM adversarial training and narrows the gap to multi-step adversarial training.

The first author is a student.
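To make the idea concrete, below is a minimal PyTorch-style sketch of FGSM training with a logit-pairing PGD regularizer, in the spirit of the abstract. It is an illustration, not the authors' implementation: the function names, the squared-error pairing on logits (the abstract speaks of "embeddings", which could equally mean an internal feature layer), the default hyperparameters eps, alpha, steps, and lam, and the choice to treat the PGD logits as a fixed target are all assumptions.

import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps):
    # Single-step FGSM adversarial example (Goodfellow et al. [5]).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def pgd_example(model, x, y, eps, alpha, steps):
    # Multi-step PGD adversarial example (Madry et al. [12]), started
    # from a random point in the L-infinity ball around x.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1).detach()
    return x_adv

def fgsmpr_loss(model, x, y, eps=8 / 255, alpha=2 / 255, steps=2, lam=1.0):
    # FGSM adversarial training loss plus a PGD regularization term that
    # pulls the logits of FGSM and PGD examples together (hypothetical
    # defaults; the paper's hyperparameters may differ).
    x_fgsm = fgsm_example(model, x, y, eps)
    x_pgd = pgd_example(model, x, y, eps, alpha, steps)
    logits_fgsm = model(x_fgsm)
    logits_pgd = model(x_pgd).detach()  # assumption: PGD branch used as a fixed target
    return F.cross_entropy(logits_fgsm, y) + lam * F.mse_loss(logits_fgsm, logits_pgd)

In a training loop this would replace the standard loss computation, e.g. loss = fgsmpr_loss(model, x, y) followed by loss.backward(). Even a short PGD run per batch adds overhead, so steps is kept small in this sketch; how the paper balances the regularizer's cost against its benefit is not specified in the abstract.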


References

1. Andriushchenko, M., Flammarion, N.: Understanding and improving fast adversarial training. In: Advances in Neural Information Processing Systems (2020)

2. Athalye, A., Carlini, N., Wagner, D.: Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples. In: International Conference on Machine Learning, pp. 274–283. Stockholm, Sweden (2018)

3. Buckman, J., Roy, A., Raffel, C., Goodfellow, I.: Thermometer encoding: one hot way to resist adversarial examples. In: International Conference on Learning Representations. Vancouver, BC, Canada (2018)

4. Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57. IEEE (2017)

5. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: International Conference on Learning Representations. San Diego, CA, USA (2015)

6. Guo, C., Rana, M., Cisse, M., Van Der Maaten, L.: Countering adversarial images using input transformations. In: International Conference on Learning Representations. Vancouver, BC, Canada (2018)

7. He, K., Zhang, X., Ren, S., Sun, J.: Identity mappings in deep residual networks. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9908, pp. 630–645. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46493-0_38

8. Kannan, H., Kurakin, A., Goodfellow, I.: Adversarial logit pairing. arXiv preprint arXiv:1803.06373 (2018)

9. Krizhevsky, A., Hinton, G., et al.: Learning multiple layers of features from tiny images. Technical report, University of Toronto (2009). https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf

10. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)

11. Li, B., Wang, S., Jana, S., Carin, L.: Towards understanding fast adversarial training. arXiv preprint arXiv:2006.03089 (2020)

12. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. In: International Conference on Learning Representations. Vancouver, BC, Canada (2018)

13. Metzen, J.H., Genewein, T., Fischer, V., Bischoff, B.: On detecting adversarial perturbations. In: International Conference on Learning Representations. Toulon, France (2017)

14. Narang, S., et al.: Mixed precision training. In: International Conference on Learning Representations. Vancouver, BC, Canada (2018)

15. Shafahi, A., et al.: Adversarial training for free! In: Advances in Neural Information Processing Systems, pp. 3358–3369 (2019)

16. Smith, L.N.: Cyclical learning rates for training neural networks. In: 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 464–472. IEEE (2017)

17. Szegedy, C., et al.: Intriguing properties of neural networks. In: International Conference on Learning Representations. Banff, AB, Canada (2014)

18. Vivek, B., Babu, R.V.: Single-step adversarial training with dropout scheduling. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 947–956. IEEE (2020)

19. Wong, E., Rice, L., Kolter, J.Z.: Fast is better than free: revisiting adversarial training. In: International Conference on Learning Representations. Addis Ababa, Ethiopia (2020)

20. Zhang, D., Zhang, T., Lu, Y., Zhu, Z., Dong, B.: You only propagate once: accelerating adversarial training via maximal principle. In: Advances in Neural Information Processing Systems, pp. 227–238 (2019)



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Xie, L., Wang, Y., Yin, J.-L., Liu, X. (2021). Robust Single-Step Adversarial Training with Regularizer. In: Ma, H., et al. (eds.) Pattern Recognition and Computer Vision. PRCV 2021. Lecture Notes in Computer Science, vol. 13022. Springer, Cham. https://doi.org/10.1007/978-3-030-88013-2_3


  • DOI: https://doi.org/10.1007/978-3-030-88013-2_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88012-5

  • Online ISBN: 978-3-030-88013-2

  • eBook Packages: Computer Science, Computer Science (R0)
