DOI: 10.1145/3134600.3134635
Research article

Objective Metrics and Gradient Descent Algorithms for Adversarial Examples in Machine Learning

Published: 04 December 2017

Abstract

Fueled by massive amounts of data, models produced by machine-learning (ML) algorithms are being used in diverse domains where security is a concern, such as automotive systems, finance, health care, computer vision, speech recognition, natural-language processing, and malware detection. Of particular concern is the use of ML in cyber-physical systems, such as driverless cars and aviation, where an adversary can cause serious consequences. In this paper we focus on attacks using adversarial samples: inputs crafted by adding small, often imperceptible, perturbations that force an ML model to misclassify. We present a simple gradient-descent-based algorithm for finding adversarial samples, which performs well in comparison to existing algorithms. The second issue this paper tackles is that of metrics. We present a novel metric, based on a few computer-vision algorithms, for measuring the quality of adversarial samples.
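The page does not reproduce the paper's algorithm or metric, so the sketch below is only a rough illustration of the general approach the abstract describes: crafting an adversarial sample by gradient descent on a model's loss with respect to the input, here done as signed-gradient ascent projected onto a small L-infinity ball. The toy logistic-regression "model", the step size, the iteration count, and the budget eps are illustrative assumptions, not the authors' method.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy differentiable classifier (assumption, not from the paper):
    # logistic regression, p(y = 1 | x) = sigmoid(w . x + b).
    w = rng.normal(size=20)
    b = 0.1

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def loss_and_grad(x, y):
        # Cross-entropy loss of the toy model and its gradient with respect to
        # the INPUT x (not the weights), which is what adversarial crafting needs.
        p = sigmoid(w @ x + b)
        loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        grad_x = (p - y) * w
        return loss, grad_x

    def adversarial_example(x, y, eps=0.1, step=0.02, iters=50):
        # Ascend the loss w.r.t. x in small signed-gradient steps and project the
        # perturbed point back into an L-infinity ball of radius eps around x.
        x_adv = x.copy()
        for _ in range(iters):
            _, g = loss_and_grad(x_adv, y)
            x_adv = x_adv + step * np.sign(g)
            x_adv = np.clip(x_adv, x - eps, x + eps)
        return x_adv

    x = rng.normal(size=20)
    y = 1
    x_adv = adversarial_example(x, y)
    print("score on original input:   ", sigmoid(w @ x + b))
    print("score on adversarial input:", sigmoid(w @ x_adv + b))
    print("max perturbation:          ", np.abs(x_adv - x).max())

The paper's second contribution, a quality metric for adversarial samples built from computer-vision algorithms, is not covered by this sketch.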



Published In

ACSAC '17: Proceedings of the 33rd Annual Computer Security Applications Conference
December 2017
618 pages
ISBN:9781450353458
DOI:10.1145/3134600

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 04 December 2017


Author Tags

  1. Adversarial Examples
  2. Machine Learning

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

ACSAC 2017

Acceptance Rates

Overall Acceptance Rate 104 of 497 submissions, 21%


Article Metrics

  • Downloads (Last 12 months)89
  • Downloads (Last 6 weeks)5
Reflects downloads up to 27 Jan 2025


Citations

Cited By

  • (2025) Adversarial machine learning threat analysis and remediation in Open Radio Access Network (O-RAN). Journal of Network and Computer Applications 236, 104090. DOI: 10.1016/j.jnca.2024.104090. Online publication date: Apr-2025.
  • (2024) AI Psychiatry. Proceedings of the 33rd USENIX Conference on Security Symposium, 1687-1704. DOI: 10.5555/3698900.3698995. Online publication date: 14-Aug-2024.
  • (2024) Analysis of neural network detectors for network attacks. Journal of Computer Security 32(3), 193-220. DOI: 10.3233/JCS-230031. Online publication date: 17-Jun-2024.
  • (2024) Generating Imperceptible and Cross-Resolution Remote Sensing Adversarial Examples Based on Implicit Neural Representations. IEEE Transactions on Geoscience and Remote Sensing 62, 1-15. DOI: 10.1109/TGRS.2023.3349373. Online publication date: 2024.
  • (2024) Deep Learning Models as Moving Targets to Counter Modulation Classification Attacks. IEEE INFOCOM 2024 - IEEE Conference on Computer Communications, 1601-1610. DOI: 10.1109/INFOCOM52122.2024.10621413. Online publication date: 20-May-2024.
  • (2024) A Comprehensive Study on Network Security in the Current Scenario. 2024 8th International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), 598-603. DOI: 10.1109/I-SMAC61858.2024.10714841. Online publication date: 3-Oct-2024.
  • (2024) Network characteristics adaption and hierarchical feature exploration for robust object recognition. Pattern Recognition 149, 110240. DOI: 10.1016/j.patcog.2023.110240. Online publication date: May-2024.
  • (2024) Spatial-frequency gradient fusion based model augmentation for high transferability adversarial attack. Knowledge-Based Systems 301, 112241. DOI: 10.1016/j.knosys.2024.112241. Online publication date: Oct-2024.
  • (2024) Enhancing Generalization in Few-Shot Learning for Detecting Unknown Adversarial Examples. Neural Processing Letters 56(2). DOI: 10.1007/s11063-024-11572-6. Online publication date: 5-Mar-2024.
  • (2024) Adversarial robustness improvement for deep neural networks. Machine Vision and Applications 35(3). DOI: 10.1007/s00138-024-01519-1. Online publication date: 14-Mar-2024.
