DOI: 10.1145/3560830.3563733

Assessing the Impact of Transformations on Physical Adversarial Attacks

Published: 07 November 2022

Abstract

The decisions of neural networks can be shifted at an attacker's will by so-called adversarial attacks. Initially successful only when applied directly to the digital input, recent advances allow attacks to move beyond the digital realm, leading to over-the-air physical adversarial attacks. During training, some physical phenomena are simulated through equivalent transformations to increase the attack's chance of success. In our work, we evaluate the impact of the selected transformations on the performance of physical adversarial attacks. We quantify their performance across diverse attack scenarios, e.g., multiple distances and angles. Our evaluation shows that some transformations are indeed essential for successful attacks, regardless of the target class. These also appear to be responsible for creating shapes within the attacks that are semantically related to the target class. However, they do not ensure physical robustness on their own. The choice of the remaining transformations appears to be context-dependent, e.g., some are more advantageous for long-range attacks but not for close-range ones. With our findings, we not only provide useful information on generating physical adversarial attacks, but also help research on defenses understand their weaknesses.
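To make the training procedure concrete, the sketch below shows how such physical phenomena are commonly simulated through randomly sampled transformations while optimizing an adversarial patch, in the spirit of Expectation over Transformation. It is a minimal illustration, not the paper's exact pipeline: the transformation ranges, the apply_patch placement helper, and the classifier/target setup are assumptions introduced for the example.

import torch
import torchvision.transforms.functional as TF

def random_transform(patch):
    # Randomly sampled stand-ins for physical phenomena:
    # rotation ~ viewing angle, scale ~ distance, brightness ~ lighting.
    angle = float(torch.empty(1).uniform_(-20.0, 20.0))
    scale = float(torch.empty(1).uniform_(0.5, 1.2))
    brightness = float(torch.empty(1).uniform_(0.7, 1.3))
    out = TF.rotate(patch, angle)
    out = TF.affine(out, angle=0.0, translate=[0, 0], scale=scale, shear=[0.0])
    return TF.adjust_brightness(out, brightness)

def attack_step(model, patch, images, apply_patch, target, optimizer):
    # One targeted optimization step: a fresh transformation is resampled
    # per image, so the patch must succeed in expectation over all of them.
    optimizer.zero_grad()
    patched = torch.stack([apply_patch(img, random_transform(patch))
                           for img in images])
    loss = torch.nn.functional.cross_entropy(model(patched), target)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0.0, 1.0)  # keep pixel values in a printable range
    return loss.item()

Here patch would be a requires_grad tensor optimized with, e.g., torch.optim.Adam([patch]); the question the paper studies is which of the sampled transformations actually matter for the attack's physical robustness.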




Information

Published In

AISec'22: Proceedings of the 15th ACM Workshop on Artificial Intelligence and Security
November 2022
168 pages
ISBN:9781450398800
DOI:10.1145/3560830


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. adversarial machine learning
  2. computer vision
  3. IT security
  4. object detection
  5. physical adversarial attacks

Qualifiers

  • Research-article

Funding Sources

  • Bavarian Ministry of Economic Affairs, Regional Development and Energy

Conference

CCS '22

Acceptance Rates

Overall Acceptance Rate 94 of 231 submissions, 41%



Bibliometrics & Citations


Article Metrics

  • Downloads (Last 12 months): 68
  • Downloads (Last 6 weeks): 4
Reflects downloads up to 09 Nov 2024


Citations

Cited By

  • (2024) A Comprehensive Study on the Robustness of Deep Learning-Based Image Classification and Object Detection in Remote Sensing: Surveying and Benchmarking. Journal of Remote Sensing, Vol. 4. https://doi.org/10.34133/remotesensing.0219. Online publication date: 3-Oct-2024.
  • (2024) Road Decals as Trojans: Disrupting Autonomous Vehicle Navigation with Adversarial Patterns. 2024 54th Annual IEEE/IFIP International Conference on Dependable Systems and Networks - Supplemental Volume (DSN-S), 133-140. https://doi.org/10.1109/DSN-S60304.2024.00039. Online publication date: 24-Jun-2024.
  • (2023) Adversarial Attacks on Traffic Sign Recognition: A Survey. 2023 3rd International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME), 1-6. https://doi.org/10.1109/ICECCME57830.2023.10252727. Online publication date: 19-Jul-2023.
  • (2023) Distracting Downpour: Adversarial Weather Attacks for Motion Estimation. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 10072-10082. https://doi.org/10.1109/ICCV51070.2023.00927. Online publication date: 1-Oct-2023.
