Abstract
Deep neural networks (DNNs) have become increasingly popular in recent years. However, despite their many successes, DNNs may also err and produce incorrect and potentially fatal outputs in safety-critical settings, such as autonomous driving, medical diagnosis, and airborne collision avoidance systems. Much work has been put into detecting such erroneous behavior in DNNs, e.g., via testing or verification, but removing these errors after their detection has received less attention. We present here a new tool, called 3M-DNN, for repairing a given DNN that is known to err on some set of inputs. The novel repair procedure implemented in 3M-DNN computes a modification to the network’s weights that corrects its behavior, and attempts to minimize this change via a sequence of calls to a backend, black-box DNN verification engine. To the best of our knowledge, our method is the first one that allows repairing the network by simultaneously modifying multiple layers. This is achieved by splitting the network into sub-networks, and applying a single-layer repairing technique to each component. We evaluated the 3M-DNN tool on an extensive set of benchmarks, obtaining promising results.
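The splitting step described in the abstract can be pictured with a short sketch. This is an illustrative toy, not the tool's actual API: a feed-forward network is viewed as a list of layers and cut at chosen separation indices into sub-networks, each of which can then be handed to a single-layer repair procedure. The function name `split_network` and the list-of-layers representation are assumptions made for illustration only.

```python
# Toy sketch of the multi-layer repair idea: cut a feed-forward network
# (represented here simply as a list of layer labels) at "separation"
# indices, yielding sub-networks that can each be repaired independently.
# All names and data structures are illustrative, not 3M-DNN's real API.

def split_network(layers, separation_indices):
    """Cut the layer list at each separation index; concatenating the
    resulting pieces recovers the original network."""
    cuts = [0] + sorted(separation_indices) + [len(layers)]
    return [layers[a:b] for a, b in zip(cuts, cuts[1:])]

layers = ["L0", "L1", "L2", "L3", "L4"]
print(split_network(layers, [2]))
# → [['L0', 'L1'], ['L2', 'L3', 'L4']]
```

With no separation indices the network is returned as a single component, which corresponds to the single-layer repair setting that prior work addressed.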
Notes
An invocation of SingleLayerModification() may fail because no change exists that achieves the desired results. Whenever this happens, 3M-DNN continues to the next iteration, exploring a different change to the separation layers’ values. This situation is theoretically possible, but did not occur in our experiments.
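The control flow described in this note can be sketched as follows. This is a self-contained toy, under stated assumptions: `single_layer_modification` here is a stand-in stub (it "fails" on negative targets, mimicking the case where no admissible weight change exists), and `repair` shows only the skip-and-retry behavior of the outer loop, not the tool's actual minimization via verification queries.

```python
# Toy sketch of the outer repair loop from the note: 3M-DNN explores
# candidate values for the separation layers; if a single-layer repair
# invocation fails for some candidate, the loop simply moves on to the
# next one. All names and behaviors below are illustrative stand-ins.

def single_layer_modification(weights, target):
    """Stand-in for the single-layer repair call: 'repair' a one-weight
    sub-network so that it outputs `target`; return None when no
    admissible change exists (here, arbitrarily, negative targets)."""
    if target < 0:
        return None
    return [target]  # the modified weights

def repair(candidate_separation_values):
    """Try each candidate assignment to the separation layer's value;
    skip candidates for which the single-layer repair fails."""
    for sep_value in candidate_separation_values:
        fixed = single_layer_modification([1.0], sep_value)
        if fixed is None:
            continue  # repair failed; explore a different assignment
        return fixed
    return None  # no candidate assignment admitted a repair

print(repair([-2.0, 3.0]))  # first candidate fails, second succeeds
# → [3.0]
```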
Acknowledgement
This work was partially supported by the Israel Science Foundation (grant number 683/18) and the HUJI Federmann Cyber Security Center.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Refaeli, I., Katz, G. (2022). Minimal Multi-Layer Modifications of Deep Neural Networks. In: Isac, O., Ivanov, R., Katz, G., Narodytska, N., Nenzi, L. (eds) Software Verification and Formal Methods for ML-Enabled Autonomous Systems. NSV FoMLAS 2022. Lecture Notes in Computer Science, vol 13466. Springer, Cham. https://doi.org/10.1007/978-3-031-21222-2_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-21221-5
Online ISBN: 978-3-031-21222-2
eBook Packages: Computer Science (R0)