
A Review of Abstraction Methods Toward Verifying Neural Networks

Published: 10 June 2024

Abstract

    Neural networks, as a machine learning technique, are increasingly deployed in various domains. Despite their performance and continuous improvement, the deployment of neural networks in safety-critical systems, in particular for autonomous mobility, remains restricted. This is mainly due to the lack of (formal) specifications and of verification methods and tools that provide sufficient confidence in the behavior of neural-network-based functions. Neural network verification has received growing attention in recent years, and many verification methods have been proposed; yet the practical applicability of these methods to real-world neural network models remains limited. The main challenge facing neural network verification methods stems from computational complexity and the large size of the neural networks used for complex functions. Consequently, applying abstraction methods for neural network verification is seen as a promising means to cope with these issues. The aim of abstraction is to build an abstract model by omitting details that are irrelevant or have little impact w.r.t. the features under consideration. The verification process thus becomes faster and easier while preserving, to some extent, the behavior of the original model that is relevant to the properties being examined. In this article, we review both abstraction techniques for activation functions and model size reduction approaches, with a particular focus on the latter. The review primarily discusses the application of abstraction techniques to feed-forward neural networks and explores the potential for applying abstraction to other types of neural networks. Throughout the article, we present the main idea of each approach and then discuss its respective advantages and limitations in detail. Finally, we provide some insights and guidelines to improve the discussed methods.
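    To make the idea of a sound over-approximation concrete, the following minimal sketch (illustrative only, not code from the article; the network weights and interval-propagation scheme are hand-picked assumptions) abstracts the inputs of a tiny ReLU network into a box of intervals and propagates bounds through it. The soundness property that verification methods rely on is that the abstract output bounds contain every concrete output reachable from the input box:

    ```python
    # Illustrative sketch: interval over-approximation of a small ReLU network.
    # Abstraction here = replacing concrete inputs by intervals [lo, hi]; the
    # computed output bounds must enclose every concrete output (soundness).

    def interval_affine(lo, hi, W, b):
        """Propagate input intervals through y = W x + b, using the sign of
        each weight to pick the bound-minimizing/maximizing endpoint."""
        out_lo, out_hi = [], []
        for row, bias in zip(W, b):
            l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
            h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
            out_lo.append(l)
            out_hi.append(h)
        return out_lo, out_hi

    def interval_relu(lo, hi):
        # ReLU is monotone, so applying it endpoint-wise is exact on intervals.
        return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

    def concrete(x, W1, b1, W2, b2):
        """Evaluate the original (concrete) 2-2-1 ReLU network."""
        h = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        return [sum(w * hi for w, hi in zip(row, h)) + b
                for row, b in zip(W2, b2)]

    # Tiny hand-picked 2-2-1 network (assumption for the example).
    W1, b1 = [[1.0, -1.0], [0.5, 2.0]], [0.0, -1.0]
    W2, b2 = [[1.0, 1.0]], [0.5]

    lo, hi = [0.0, 0.0], [1.0, 1.0]                      # input box [0,1]^2
    l1, h1 = interval_relu(*interval_affine(lo, hi, W1, b1))
    out_lo, out_hi = interval_affine(l1, h1, W2, b2)

    # Soundness check: any concrete input in the box lands inside the bounds.
    y = concrete([0.3, 0.7], W1, b1, W2, b2)
    assert out_lo[0] <= y[0] <= out_hi[0]
    ```

    A property such as "the output never exceeds a threshold" can then be checked once on the abstract bounds instead of on infinitely many concrete inputs; if the bounds are too loose to conclude, refinement (splitting the box) tightens them at extra cost.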


    Cited By

    • (2024) A Risk-Based Decision-Making Process for Autonomous Trains Using POMDP: Case of the Anti-Collision Function. IEEE Access 12, 5630–5647. DOI: 10.1109/ACCESS.2023.3347500
    • (2024) Dendritic SE-ResNet Learning for Bioinformatic Classification. Bioinformatics Research and Applications, 139–150. DOI: 10.1007/978-981-97-5128-0_12. Online publication date: 12-Jul-2024
    • (2023) A Sound Abstraction Method Towards Efficient Neural Networks Verification. Verification and Evaluation of Computer and Communication Systems, 76–89. DOI: 10.1007/978-3-031-49737-7_6. Online publication date: 18-Oct-2023


    Published In

    ACM Transactions on Embedded Computing Systems, Volume 23, Issue 4
    July 2024
    306 pages
    ISSN:1539-9087
    EISSN:1558-3465
    DOI:10.1145/3613607

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 10 June 2024
    Online AM: 28 August 2023
    Accepted: 07 August 2023
    Revised: 23 May 2023
    Received: 17 May 2022
    Published in TECS Volume 23, Issue 4

    Author Tags

    1. Formal verification
    2. neural network verification
    3. abstraction
    4. abstract interpretation

    Qualifiers

    • Research-article

    Funding Sources

    • French program “Investissements d’Avenir”
    • French collaborative project TASV (Train Autonome Service Voyageurs)
