Article

Improving Adversarial Robustness via Channel and Depth Compatibility

Published: 05 November 2023

Abstract

Deep neural networks are vulnerable to adversarial samples whose perturbations are imperceptible to humans. To address this challenge, a range of techniques has been proposed for designing more robust model architectures. However, previous research has focused primarily on identifying more resilient atomic structures, whereas our work adapts the model along two spatial dimensions: width and depth. In this paper, we present a multi-objective neural architecture search (NAS) method, referred to as DW-Net, that searches for the optimal width of each layer along these spatial dimensions. We also propose a novel adversarial sample generation technique for one-shot search that enhances search-space diversity and improves search efficiency. Our experimental results demonstrate that the resulting optimal neural architecture outperforms state-of-the-art NAS-based networks widely used in the literature in terms of adversarial accuracy, under different adversarial attacks and for tasks of different sizes.
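The abstract does not reproduce the paper's method, but the kind of adversarial sample it refers to can be illustrated with the classic fast gradient sign method (FGSM) on a toy logistic model. This is a minimal sketch for intuition only; the model, function name, and values below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast Gradient Sign Method on a logistic model p = sigmoid(w.x + b).

    Shifts x by eps in the sign of the loss gradient w.r.t. the input,
    producing an adversarial sample x + eps * sign(dL/dx) whose
    per-coordinate perturbation is bounded by eps (an L-infinity budget).
    """
    z = float(np.dot(w, x) + b)
    p = 1.0 / (1.0 + np.exp(-z))   # predicted probability of class 1
    grad_x = (p - y) * w           # d(cross-entropy)/dx for true label y
    return x + eps * np.sign(grad_x)

# Toy example: a point the model classifies as class 1 with some margin.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.3)
```

A small per-coordinate budget `eps` keeps the perturbation visually negligible while still reducing the model's margin on the true class, which is exactly the threat model against which robust architectures are evaluated.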



Published In

Advanced Data Mining and Applications: 19th International Conference, ADMA 2023, Shenyang, China, August 21–23, 2023, Proceedings, Part V
Aug 2023
669 pages
ISBN:978-3-031-46676-2
DOI:10.1007/978-3-031-46677-9
Editors: Xiaochun Yang, Heru Suhartanto, Guoren Wang, Bin Wang, Jing Jiang, Bing Li, Huaijie Zhu, Ningning Cui

Publisher

Springer-Verlag

Berlin, Heidelberg


Author Tags

  1. Robust architecture
  2. Neural architecture search
  3. Deep learning


