
Camdar‐adv: Generating adversarial patches on 3D object

Published: 28 January 2021

Abstract

Deep neural network (DNN) models are the core technology that the sensors of autonomous driving platforms rely on to perceive the external environment. Recent research has shown that these models are vulnerable: carefully crafted adversarial examples can cause a DNN model to output wrong results. Such adversarial examples exist not only in the digital world but also in the physical world. To date, research on attacking autonomous driving platforms has mainly focused on a single sensor. In this paper, we introduce Camdar‐adv, a method for generating image adversarial examples on three‐dimensional (3D) objects, which could potentially launch a multisensor attack against autonomous driving platforms. Specifically, starting from objects that can attack LiDAR sensors, a geometric transformation projects their shape onto the two‐dimensional (2D) image plane. Adversarial perturbations targeting the optical image sensor can then be added precisely to the surface of the adversarial 3D object without changing its geometry. Test results on the open‐source autonomous driving data set KITTI show that Camdar‐adv can generate adversarial examples against state‐of‐the‐art object detection models. From a fixed viewpoint, our method achieves an attack success rate of over 99%.
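As a rough illustration of the pipeline described above, the snippet below is a minimal sketch (not the authors' implementation) of the two steps the abstract mentions: projecting a 3D object's vertices onto the image plane with a KITTI‐style 3x4 camera projection matrix, and then optimizing a PGD‐style perturbation that is confined to the projected object region. The projection matrix `P`, the scalar loss `detector_loss`, and the use of a simple bounding‐box mask in place of the exact object silhouette are all assumptions made for illustration.

```python
# Hypothetical sketch: project a 3D object onto the image plane and restrict an
# adversarial perturbation to the projected region. `P` (3x4 camera projection
# matrix) and `detector_loss` are placeholders, not part of the paper's code.
import numpy as np
import torch


def project_to_image(vertices_cam: np.ndarray, P: np.ndarray) -> np.ndarray:
    """Project N x 3 object vertices (camera coordinates) to N x 2 pixel coordinates."""
    homo = np.hstack([vertices_cam, np.ones((vertices_cam.shape[0], 1))])  # N x 4
    uvw = homo @ P.T                                                       # N x 3
    return uvw[:, :2] / uvw[:, 2:3]                                        # perspective divide


def object_mask(pts_2d: np.ndarray, height: int, width: int) -> torch.Tensor:
    """Binary mask over the projected object (axis-aligned box used here for simplicity)."""
    u_min, v_min = np.clip(pts_2d.min(axis=0), 0, [width - 1, height - 1]).astype(int)
    u_max, v_max = np.clip(pts_2d.max(axis=0), 0, [width - 1, height - 1]).astype(int)
    mask = torch.zeros(1, 3, height, width)
    mask[..., v_min:v_max + 1, u_min:u_max + 1] = 1.0
    return mask


def masked_attack(image: torch.Tensor, mask: torch.Tensor, detector_loss,
                  eps: float = 8 / 255, alpha: float = 1 / 255, steps: int = 40) -> torch.Tensor:
    """PGD-style perturbation applied only where mask == 1 (image assumed in [0, 1])."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = detector_loss(image + delta * mask)  # attack objective, e.g. negated objectness
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()      # gradient ascent on the objective
            delta.clamp_(-eps, eps)                 # keep the perturbation bounded
            delta.grad.zero_()
    return (image + delta.detach() * mask).clamp(0.0, 1.0)
```

In the paper's setting, the masked region would correspond to the surface of the LiDAR‐adversarial 3D object as seen from the camera, and the objective would target the object detectors evaluated on KITTI; the bounding‐box mask and placeholder loss above merely stand in for those details.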



Published In

International Journal of Intelligent Systems  Volume 36, Issue 3
March 2021
355 pages
ISSN:0884-8173
DOI:10.1002/int.v36.3

Publisher

John Wiley and Sons Ltd.

United Kingdom

Publication History

Published: 28 January 2021

Author Tags

  1. Adversarial example
  2. autonomous driving
  3. geometric transformation

Qualifiers

  • Research-article


Cited By

  • (2024) CARLA-GeAR: A Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Deep Learning Vision Models. IEEE Transactions on Intelligent Transportation Systems 25(8):9840-9851. DOI: 10.1109/TITS.2024.3412432. Online publication date: 1-Aug-2024.
  • (2024) Evaluating robustness of support vector machines with the Lagrangian dual approach. Neural Computing and Applications 36(14):7991-8006. DOI: 10.1007/s00521-024-09490-8. Online publication date: 1-May-2024.
  • (2023) A Survey of Robustness and Safety of 2D and 3D Deep Learning Models against Adversarial Attacks. ACM Computing Surveys 56(6):1-37. DOI: 10.1145/3636551. Online publication date: 7-Dec-2023.
  • (2023) Securing Cross Reality: Unraveling the Risks of 3D Object Disguise on Head Mount Display. Proceedings of the 13th International Conference on the Internet of Things, 281-286. DOI: 10.1145/3627050.3631570. Online publication date: 7-Nov-2023.
  • (2023) A Survey on Automated Driving System Testing: Landscapes and Trends. ACM Transactions on Software Engineering and Methodology 32(5):1-62. DOI: 10.1145/3579642. Online publication date: 24-Jul-2023.
  • (2023) Beyond model splitting: Preventing label inference attacks in vertical federated learning with dispersed training. World Wide Web 26(5):2691-2707. DOI: 10.1007/s11280-023-01159-x. Online publication date: 1-Sep-2023.
  • (2023) Hiding from infrared detectors in real world with adversarial clothes. Applied Intelligence 53(23):29537-29555. DOI: 10.1007/s10489-023-05102-5. Online publication date: 1-Dec-2023.
  • (2023) Event Sparse Net: Sparse Dynamic Graph Multi-representation Learning with Temporal Attention for Event-Based Data. Pattern Recognition and Computer Vision, 208-219. DOI: 10.1007/978-981-99-8546-3_17. Online publication date: 13-Oct-2023.
  • (2022) Interpolation graph convolutional network for 3D point cloud analysis. International Journal of Intelligent Systems 37(12):12283-12304. DOI: 10.1002/int.23087. Online publication date: 22-Sep-2022.
  • (2022) Towards robust and stealthy communication for wireless intelligent terminals. International Journal of Intelligent Systems 37(12):11791-11814. DOI: 10.1002/int.23063. Online publication date: 5-Sep-2022.
