
Verifying the Safety of Autonomous Systems with Neural Network Controllers

Published: 07 December 2020
Abstract

    This article addresses the problem of verifying the safety of autonomous systems with neural network (NN) controllers. We focus on NNs with sigmoid/tanh activations and use the fact that the sigmoid/tanh is the solution to a quadratic differential equation. This allows us to convert the NN into an equivalent hybrid system and cast the problem as a hybrid system verification problem, which can be solved by existing tools. Furthermore, we improve the scalability of the proposed method by approximating the sigmoid with a Taylor series with worst-case error bounds. Finally, we provide an evaluation over four benchmarks, including comparisons with alternative approaches based on mixed integer linear programming as well as on star sets.
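    The abstract rests on two facts worth seeing concretely: the sigmoid satisfies the quadratic ODE σ′ = σ(1 − σ) (and tanh satisfies tanh′ = 1 − tanh²), which is what lets the NN be recast as a hybrid system, and a low-order Taylor expansion of the sigmoid admits a worst-case remainder bound. The following sketch is not from the article; it is a minimal numerical illustration of both facts, with all function names my own.

    ```python
    import math

    def sigmoid(x):
        """Logistic sigmoid."""
        return 1.0 / (1.0 + math.exp(-x))

    def central_diff(f, x, h=1e-6):
        """Central finite-difference approximation of f'(x)."""
        return (f(x + h) - f(x - h)) / (2.0 * h)

    # 1) The quadratic ODEs: sigma' = sigma*(1 - sigma) and tanh' = 1 - tanh^2.
    for x in (-2.0, 0.0, 1.5):
        s = sigmoid(x)
        assert abs(central_diff(sigmoid, x) - s * (1.0 - s)) < 1e-6
        t = math.tanh(x)
        assert abs(central_diff(math.tanh, x) - (1.0 - t * t)) < 1e-6

    # 2) First-order Taylor expansion of sigma around x0 with a worst-case
    #    Lagrange remainder. Since sigma'' = s*(1-s)*(1-2s) with s in (0,1),
    #    |sigma''| <= 1/(6*sqrt(3)) everywhere, so
    #    |sigma(x) - T1(x)| <= (x - x0)^2 / (12*sqrt(3)).
    x0 = 0.5
    s0 = sigmoid(x0)
    def taylor1(x):
        return s0 + s0 * (1.0 - s0) * (x - x0)
    def remainder_bound(x):
        return (x - x0) ** 2 / (12.0 * math.sqrt(3.0))
    for x in (0.3, 0.5, 0.9):
        assert abs(sigmoid(x) - taylor1(x)) <= remainder_bound(x)
    ```

    The same remainder argument extends to higher-order expansions, since every derivative of the sigmoid is a polynomial in the sigmoid itself and is therefore bounded.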




    Published In

    ACM Transactions on Embedded Computing Systems, Volume 20, Issue 1
    January 2021, 193 pages
    ISSN: 1539-9087
    EISSN: 1558-3465
    DOI: 10.1145/3441649
    Editor: Tulika Mitra
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 07 December 2020
    Accepted: 01 August 2020
    Revised: 01 July 2020
    Received: 01 February 2020
    Published in TECS Volume 20, Issue 1


    Author Tags

    1. Neural network verification
    2. hybrid systems with neural network controllers
    3. safe autonomy

    Qualifiers

    • Research-article
    • Research
    • Refereed

    Funding Sources

    • Air Force Research Laboratory (AFRL)


    Article Metrics

    • Downloads (last 12 months): 108
    • Downloads (last 6 weeks): 7
    Reflects downloads up to 12 Aug 2024

    Cited By
    • (2024) Controller Synthesis for Autonomous Systems With Deep-Learning Perception Components. IEEE Transactions on Software Engineering 50, 6 (Jun 2024), 1374-1395. DOI: 10.1109/TSE.2024.3385378
    • (2024) POLAR-Express: Efficient and Precise Formal Reachability Analysis of Neural-Network Controlled Systems. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 43, 3 (Mar 2024), 994-1007. DOI: 10.1109/TCAD.2023.3331215
    • (2024) Verifying safety of neural networks from topological perspectives. Science of Computer Programming 236, C (Sep 2024). DOI: 10.1016/j.scico.2024.103121
    • (2024) Taming Reachability Analysis of DNN-Controlled Systems via Abstraction-Based Training. In Verification, Model Checking, and Abstract Interpretation (Jan 2024), 73-97. DOI: 10.1007/978-3-031-50521-8_4
    • (2023) Boosting verification of deep reinforcement learning via piece-wise linear decision neural networks. In Proceedings of the 37th International Conference on Neural Information Processing Systems (Dec 2023), 10022-10037. DOI: 10.5555/3666122.3666560
    • (2023) Neural policy safety verification via predicate abstraction. In Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence (Feb 2023), 15188-15196. DOI: 10.1609/aaai.v37i12.26772
    • (2023) Virtual Environment Model Generation for CPS Goal Verification using Imitation Learning. ACM Transactions on Embedded Computing Systems (Nov 2023). DOI: 10.1145/3633804
    • (2023) Reachability Analysis of Sigmoidal Neural Networks. ACM Transactions on Embedded Computing Systems (Oct 2023). DOI: 10.1145/3627991
    • (2023) A Review of Abstraction Methods Toward Verifying Neural Networks. ACM Transactions on Embedded Computing Systems 23, 4 (Aug 2023), 1-19. DOI: 10.1145/3617508
    • (2023) Safe Control With Learned Certificates: A Survey of Neural Lyapunov, Barrier, and Contraction Methods for Robotics and Control. IEEE Transactions on Robotics 39, 3 (Jun 2023), 1749-1767. DOI: 10.1109/TRO.2022.3232542
