DOI: 10.1145/3319535.3345660
Research article

Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks

Published: 06 November 2019

Abstract

Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples---perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time. Existing input-agnostic adversarial perturbations exhibit interesting visual patterns that are currently unexplained. In this paper, we introduce a structured approach for generating Universal Adversarial Perturbations (UAPs) with procedural noise functions. Our approach unveils the systemic vulnerability of popular DCN models like Inception v3 and YOLO v3, with single noise patterns able to fool a model on up to 90% of the dataset. Procedural noise allows us to generate a distribution of UAPs with high universal evasion rates using only a few parameters. Additionally, we propose Bayesian optimization to efficiently learn procedural noise parameters to construct inexpensive untargeted black-box attacks. We demonstrate that it can achieve an average of less than 10 queries per successful attack, a 100-fold improvement over existing methods. We further motivate the use of input-agnostic defences to increase the stability of models to adversarial perturbations. The universality of our attacks suggests that DCN models may be sensitive to aggregations of low-level class-agnostic features. These findings give insight into the nature of some universal adversarial perturbations and how they could be generated in other applications.
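To make the approach concrete, below is a minimal NumPy sketch of the attack pipeline the abstract describes: generate a low-dimensional procedural noise pattern (Perlin noise here; the paper also uses Gabor noise), apply it as a bounded perturbation, and search the handful of noise parameters with black-box queries. All names (perlin_noise, perturb, black_box_attack), parameter ranges, and the sine color map are illustrative assumptions, and plain random search stands in for the paper's Bayesian optimization over the same parameter space.

```python
import numpy as np

def perlin_noise(size, period, seed=0):
    """2D Perlin noise on a size x size grid; `period` is the number of
    lattice cells per axis (a hypothetical parameterization)."""
    rng = np.random.default_rng(seed)
    lin = np.linspace(0, period, size, endpoint=False)
    x, y = np.meshgrid(lin, lin)
    xi, yi = x.astype(int), y.astype(int)        # lattice cell indices
    xf, yf = x - xi, y - yi                      # offsets within each cell
    # Random unit gradient vectors at every lattice corner.
    angles = 2 * np.pi * rng.random((period + 1, period + 1))
    grads = np.stack([np.cos(angles), np.sin(angles)], axis=-1)

    def dot_grad(ix, iy, dx, dy):
        g = grads[iy, ix]                        # gradient at each corner
        return g[..., 0] * dx + g[..., 1] * dy

    def fade(t):                                 # Perlin's smoothstep
        return 6 * t**5 - 15 * t**4 + 10 * t**3

    u, v = fade(xf), fade(yf)
    n00 = dot_grad(xi,     yi,     xf,     yf)
    n10 = dot_grad(xi + 1, yi,     xf - 1, yf)
    n01 = dot_grad(xi,     yi + 1, xf,     yf - 1)
    n11 = dot_grad(xi + 1, yi + 1, xf - 1, yf - 1)
    nx0 = n00 * (1 - u) + n10 * u
    nx1 = n01 * (1 - u) + n11 * u
    return nx0 * (1 - v) + nx1 * v               # roughly in [-0.7, 0.7]

def perturb(image, params, eps=16 / 255):
    """Add a sine-mapped Perlin pattern to a square H x H x 3 image with
    floats in [0, 1], under an L-infinity budget eps."""
    period, freq, seed = params
    noise = perlin_noise(image.shape[0], int(period), int(seed))
    pattern = np.sign(np.sin(noise * freq * np.pi))  # high-contrast stripes
    return np.clip(image + eps * pattern[..., None], 0.0, 1.0)

def black_box_attack(predict, image, true_label, budget=100, seed=0):
    """Untargeted black-box attack: search the 3-parameter noise space
    until the model's top-1 label flips. Random search is used here as
    a stand-in for the paper's Bayesian optimization."""
    rng = np.random.default_rng(seed)
    for queries in range(1, budget + 1):
        params = (rng.integers(2, 17),        # lattice period
                  rng.uniform(1.0, 8.0),      # sine-map frequency
                  rng.integers(10**6))        # noise seed
        adv = perturb(image, params)
        if predict(adv) != true_label:        # one model query per candidate
            return adv, queries
    return None, budget
```

With `predict` wrapping, say, an Inception v3 forward pass plus argmax, each candidate costs exactly one query; because the search space has only a few dimensions, a sample-efficient optimizer such as Bayesian optimization can succeed in the handful of queries the abstract reports, which is the design point the paper exploits.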

Supplementary Material

WEBM File (p275-co.webm)

Published In

CCS '19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security
November 2019
2755 pages
ISBN: 9781450367479
DOI: 10.1145/3319535

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. adversarial machine learning
  2. Bayesian optimization
  3. black-box attacks
  4. deep neural networks
  5. procedural noise
  6. universal adversarial perturbations

Qualifiers

  • Research-article

Funding Sources

  • Data Spartan

Conference

CCS '19

Acceptance Rates

CCS '19 paper acceptance rate: 149 of 934 submissions (16%)
Overall acceptance rate: 1,261 of 6,999 submissions (18%)

Cited By

  • (2024) Improving Transferability of Universal Adversarial Perturbation With Feature Disruption. IEEE Transactions on Image Processing 33, 722-737. https://doi.org/10.1109/TIP.2023.3345136
  • (2024) MalPatch: Evading DNN-Based Malware Detection With Adversarial Patches. IEEE Transactions on Information Forensics and Security 19, 1183-1198. https://doi.org/10.1109/TIFS.2023.3333567
  • (2024) QE-DBA: Query-Efficient Decision-Based Adversarial Attacks via Bayesian Optimization. 2024 International Conference on Computing, Networking and Communications (ICNC), 783-788. https://doi.org/10.1109/ICNC59896.2024.10555954
  • (2024) Implementing a Multitarget Backdoor Attack Algorithm Based on Procedural Noise Texture Features. IEEE Access 12, 69539-69550. https://doi.org/10.1109/ACCESS.2024.3401848
  • (2023) Secure Gait Recognition-Based Smart Surveillance Systems Against Universal Adversarial Attacks. Journal of Database Management 34(2), 1-25. https://doi.org/10.4018/JDM.318415
  • (2023) ELAA: An Ensemble-Learning-Based Adversarial Attack Targeting Image-Classification Model. Entropy 25(2), 215. https://doi.org/10.3390/e25020215
  • (2023) FreePart: Hardening Data Processing Software via Framework-based Partitioning and Isolation. Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 4, 169-188. https://doi.org/10.1145/3623278.3624760
  • (2023) Stealthy 3D Poisoning Attack on Video Recognition Models. IEEE Transactions on Dependable and Secure Computing 20(2), 1730-1743. https://doi.org/10.1109/TDSC.2022.3163397
  • (2023) Attacking Deep Reinforcement Learning With Decoupled Adversarial Policy. IEEE Transactions on Dependable and Secure Computing 20(1), 758-768. https://doi.org/10.1109/TDSC.2022.3143566
  • (2023) "Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice. 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 339-364. https://doi.org/10.1109/SaTML54575.2023.00031
