Misleading Image Classification with Multi-shearing and Random Padding

Published: 15 March 2023
DOI: 10.1145/3573428.3573661

Abstract

Neural networks are vulnerable to inputs carrying human-imperceptible perturbations, known as adversarial examples. Applied to image classification models, adversarial examples mislead neural networks into assigning wrong labels, posing a serious threat to network security. White-box attacks already achieve high success rates because the model structure is known to the attacker, but black-box attacks, and in particular the transferability of adversarial examples, still leave room for improvement. We borrow the data augmentation techniques used in network training and apply them to adversarial example generation to reduce overfitting to the source model. Building on fundamental adversarial attack methods, we propose a multi-shearing transformation with random padding that alleviates overfitting and enhances transferability. First, following data augmentation practice, we shear and pad the original image with random parameters and probabilities in every iteration of the generation process. Second, the gradient of the model's loss function is calculated on the transformed input, and the resulting perturbation is added to the original image. Finally, adversarial examples are produced by accumulating these perturbations over the iterations. We validate our method on single models and ensembles of models, and it achieves better transferability than other fundamental methods.
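The abstract describes a three-step loop: randomly transform the image at each iteration, compute the gradient of the loss on the transformed copy, and accumulate the resulting perturbation. The PyTorch-style code below is a minimal sketch of that loop, not the authors' implementation: the function names, the hyperparameters (max_shear, pad_max, p, and the number of transformed copies m), and the use of torchvision.transforms.functional are all illustrative assumptions.

import torch
import torchvision.transforms.functional as TF

def shear_pad(x, max_shear=10.0, pad_max=30, p=0.5):
    # Hypothetical transformation: with probability p, shear the image by a
    # random angle, zero-pad it by a random amount on every side, then
    # resize back to the original spatial size.
    if torch.rand(1).item() >= p:
        return x  # keep the untransformed input with probability 1 - p
    h, w = x.shape[-2:]
    shear = (torch.rand(1).item() * 2.0 - 1.0) * max_shear
    x = TF.affine(x, angle=0.0, translate=[0, 0], scale=1.0, shear=[shear])
    pad = int(torch.randint(0, pad_max + 1, (1,)).item())
    x = TF.pad(x, [pad, pad, pad, pad])          # random zero padding
    return TF.resize(x, [h, w], antialias=True)  # back to the model input size

def shear_pad_attack(model, x, y, eps=16 / 255, steps=10, m=5):
    # Iterative FGSM in which each step's gradient is averaged over m
    # independently sheared-and-padded copies of the current image.
    alpha = eps / steps
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = sum(loss_fn(model(shear_pad(x_adv)), y) for _ in range(m)) / m
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # sign step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # L-inf ball
            x_adv = x_adv.clamp(0.0, 1.0)                          # valid pixels
    return x_adv.detach()

Averaging the loss over several independently transformed copies per step is one plausible reading of "multi-shearing", and applying the transformation only with probability p preserves some gradient signal from the untransformed image, in the spirit of the input-diversity method of Xie et al.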



Published In

EITCE '22: Proceedings of the 2022 6th International Conference on Electronic Information Technology and Computer Engineering
October 2022
1999 pages
ISBN:9781450397148
DOI:10.1145/3573428

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. Adversarial examples
  2. Image padding
  3. Image shearing
  4. Neural networks
  5. Transferability

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

EITCE 2022

Acceptance Rates

Overall Acceptance Rate 508 of 972 submissions, 52%
