
DeepXplore: automated whitebox testing of deep learning systems

Published: 24 October 2019

Abstract

Deep learning (DL) systems are increasingly deployed in safety- and security-critical domains such as self-driving cars and malware detection, where the correctness and predictability of a system's behavior for corner case inputs are of great importance. Existing DL testing depends heavily on manually labeled data and therefore often fails to expose erroneous behaviors for rare inputs.
We design, implement, and evaluate DeepXplore, the first white-box framework for systematically testing real-world DL systems. First, we introduce neuron coverage for measuring the parts of a DL system exercised by test inputs. Next, we leverage multiple DL systems with similar functionality as cross-referencing oracles to avoid manual checking. Finally, we demonstrate how finding inputs for DL systems that both trigger many differential behaviors and achieve high neuron coverage can be represented as a joint optimization problem and solved efficiently using gradient-based search techniques.
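As a rough illustration of the joint optimization described above, the objective can be sketched with two toy "models" in NumPy. Everything below — the model definitions, the activation threshold, the weight `lam`, and the finite-difference gradient ascent — is an illustrative assumption, not DeepXplore's implementation, which computes analytic gradients through real DNNs.

```python
import numpy as np

# Toy stand-ins for two DNNs with similar functionality. Each maps an
# input vector to two class scores and exposes its hidden activations.
def model_a(x):
    h = np.tanh(x * np.array([1.0, -2.0]))   # hidden-layer activations
    return np.array([h.sum(), -h.sum()]), [h]

def model_b(x):
    h = np.tanh(x * np.array([1.5, -1.0]))
    return np.array([h.sum() + 0.1, -h.sum()]), [h]

def neuron_coverage(per_model_activations, threshold=0.0):
    """Fraction of neurons activated above `threshold`, over all models."""
    vals = np.concatenate([np.ravel(layer)
                           for acts in per_model_activations
                           for layer in acts])
    return float((vals > threshold).mean())

def joint_objective(x, lam=1.0):
    """Disagreement between the models plus a neuron-coverage reward."""
    scores_a, acts_a = model_a(x)
    scores_b, acts_b = model_b(x)
    divergence = np.abs(scores_a - scores_b).sum()
    return divergence + lam * neuron_coverage([acts_a, acts_b])

def gradient_ascent(x, steps=30, lr=0.05, eps=1e-4):
    """Maximize the joint objective by finite-difference gradient ascent.
    (The paper differentiates through the networks analytically; finite
    differences keep this sketch dependency-free.)"""
    x = x.astype(float).copy()
    for _ in range(steps):
        grad = np.zeros_like(x)
        for i in range(x.size):
            d = np.zeros_like(x)
            d[i] = eps
            grad[i] = (joint_objective(x + d)
                       - joint_objective(x - d)) / (2 * eps)
        x += lr * grad
    return x
```

In the real system the objective is differentiable with respect to the input, so efficient analytic gradient ascent replaces the finite differences used here, and domain constraints (e.g., valid pixel ranges) are enforced on each step.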
DeepXplore efficiently finds thousands of incorrect corner case behaviors (e.g., self-driving cars crashing into guard rails and malware masquerading as benign software) in state-of-the-art DL models with thousands of neurons trained on five popular datasets, including ImageNet and Udacity self-driving challenge data. For all tested DL models, on average, DeepXplore generated one test input demonstrating incorrect behavior within one second while running only on a commodity laptop. We further show that the test inputs generated by DeepXplore can also be used to retrain the corresponding DL model to improve the model's accuracy by up to 3%.



Reviews

Amos O Olagunju

Many of us depend on electronic systems being trustworthy, from self-driving car owners to online bankers and shoppers. How should real-life computer systems be methodically tested for nearly all potential faults and malware threats, to instill confidence in users? Pei et al. present DeepXplore, the first system of its kind for systematically testing deep learning (DL) software for defects and malware threats. The authors identify two major drawbacks of current deep neural network (DNN) testing strategies: (1) the exorbitant human effort required to create accurate labels and classifications for specific tasks, and (2) the marginal assessment of diverse behavioral rules. Consequently, they present DeepXplore, an automated whitebox framework for methodically exposing incorrect corner-case behaviors in DNNs, such as self-driving cars crashing into guard rails.

DeepXplore uses unlabeled seed inputs to activate numerous representative neurons and probe the multiplicity of behaviors in DNNs. Its algorithm simultaneously maximizes neuron coverage and the variety of behaviors across DNNs to uncover a range of system faults and failures, and the authors present efficient algorithms for solving this joint optimization problem.

Is DeepXplore effective in exposing threats and failures in emerging online computerized systems? Experiments were performed with different DNNs and with datasets drawn from public images, driving, and malware. The results reveal that neuron coverage is a reliable predictor for DNN testing. But what about issues related to testing simulation shadows, the "efficient search for error-inducing test cases for arbitrary transformations," and the error-free gradient-based local search used in DeepXplore? I invite colleagues from computational and applied mathematics to investigate these problems and solutions.
Clearly, the authors offer compelling, futuristic ideas about the nature of DL and DNN research challenges.

Published In

Communications of the ACM, Volume 62, Issue 11
November 2019
136 pages
ISSN:0001-0782
EISSN:1557-7317
DOI:10.1145/3368886
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States


Qualifiers

  • Research-article
  • Research
  • Refereed

Cited By

  • TraModeAVTest: Modeling Scenario and Violation Testing for Autonomous Driving Systems Based on Traffic Regulations. Electronics 13:7 (2024), 1197. DOI: 10.3390/electronics13071197
  • Machine learning with requirements: A manifesto. Neurosymbolic Artificial Intelligence (2024), 1-13. DOI: 10.3233/NAI-240767
  • Artificial Intelligence and the Production of Judicial Truth. Theory, Culture & Society (2024). DOI: 10.1177/02632764241268174
  • LLMEffiChecker: Understanding and Testing Efficiency Degradation of Large Language Models. ACM Transactions on Software Engineering and Methodology 33:7 (2024), 1-38. DOI: 10.1145/3664812
  • A Miss Is as Good as A Mile: Metamorphic Testing for Deep Learning Operators. Proceedings of the ACM on Software Engineering 1:FSE (2024), 2005-2027. DOI: 10.1145/3660796
  • Abstraction and Refinement: Towards Scalable and Exact Verification of Neural Networks. ACM Transactions on Software Engineering and Methodology 33:5 (2024), 1-35. DOI: 10.1145/3644387
  • COSTELLO: Contrastive Testing for Embedding-Based Large Language Model as a Service Embeddings. Proceedings of the ACM on Software Engineering 1:FSE (2024), 906-928. DOI: 10.1145/3643767
  • DeepHashDetection: Adversarial Example Detection Based on Similarity Image Retrieval. Proceedings of the 2024 8th International Conference on Control Engineering and Artificial Intelligence (2024), 220-224. DOI: 10.1145/3640824.3640859
  • Causality-driven Testing of Autonomous Driving Systems. ACM Transactions on Software Engineering and Methodology 33:3 (2024), 1-35. DOI: 10.1145/3635709
  • DeepSample: DNN sampling-based testing for operational accuracy assessment. Proceedings of the IEEE/ACM 46th International Conference on Software Engineering (2024), 1-12. DOI: 10.1145/3597503.3639584
