Coupling algebraic topology theory, formal methods and safety requirements toward a new coverage metric for artificial intelligence models

  • Original Article
  • Published in Neural Computing and Applications

Abstract

Safety requirements are among the main barriers to the industrialization of machine learning based on deep learning architectures. In this work, a new data coverage metric is presented by exploiting algebraic topology theory and the abstract interpretation process. Algebraic topology connects the points of the dataset cloud, while abstract interpretation evaluates the robustness of the model. The coverage metric therefore evaluates the dataset and the model robustness simultaneously, and highlights safe and unsafe areas. We also propose the first complete process for evaluating machine learning models in terms of data completeness, providing a workflow based on the proposed metric and a set of safety requirements applied to autonomous driving. The obtained results provide an interpretable coverage evaluation and a promising line of research for the industrialization of artificial intelligence models. It is important to mention that the proposed metric does not depend on the specific data: it can be applied to data of any dimension, from 1 to n.
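To make the abstract's pipeline concrete, the sketch below (a minimal illustration, not the authors' implementation) combines the two ingredients named above: a Vietoris-Rips complex built with the GUDHI library (see note 9) connects the dataset points, while a per-point robustness oracle stands in for the abstract-interpretation step that the paper carries out with ERAN/AI\(^2\)-style tooling (notes 7 and 8). The function names, the edge-length threshold and the toy oracle are hypothetical choices made only for illustration.

    # Minimal sketch, assuming GUDHI is installed and using a hypothetical robustness oracle.
    import numpy as np
    import gudhi

    def coverage_score(X, is_robust, max_edge_length=0.5):
        """Toy coverage: fraction of Rips edges whose two endpoints are both
        certified robust, i.e. edges lying entirely in a 'safe' region."""
        rips = gudhi.RipsComplex(points=X, max_edge_length=max_edge_length)
        st = rips.create_simplex_tree(max_dimension=1)   # vertices and edges only
        robust = np.array([is_robust(x) for x in X], dtype=bool)
        edges = [s for s, _ in st.get_skeleton(1) if len(s) == 2]
        if not edges:
            return 0.0
        safe = sum(robust[i] and robust[j] for i, j in edges)
        return safe / len(edges)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.uniform(size=(200, 2))   # stand-in for an n-dimensional dataset

        def is_robust(x):
            # Placeholder oracle; the paper certifies robustness with abstract
            # interpretation rather than a hand-written rule.
            return x[0] + x[1] < 1.2

        print(f"toy coverage: {coverage_score(X, is_robust):.2f}")

Edges whose two endpoints are both certified would be reported as safe areas and the remaining edges as unsafe or uncovered areas; the paper's metric refines this idea with formal robustness certificates rather than a toy oracle.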


Notes

  1. See Sect. 4.3.

  2. See Sect. 6.1.

  3. Scenarios evaluated by the model and considered safe.

  4. Scenarios not evaluated by the model but certified as safe by the AI\(^2\) process.

  5. Scenarios not evaluated by the model and for which its decision could be unsafe.

  6. Scenarios evaluated by the model and considered unsafe (notes 3-6 define the four scenario categories illustrated in the sketch following these notes).

  7. https://github.com/eth-sri/eran.

  8. https://github.com/eth-sri/ELINA.

  9. https://gudhi.inria.fr/.
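The four footnoted scenario categories (notes 3-6) form a small taxonomy used to distinguish safe from unsafe areas. The illustration below is purely hypothetical: the enum values and the classify helper are names introduced here for readability, not code from the paper or its tooling.

    from enum import Enum

    class Scenario(Enum):
        EVALUATED_SAFE = "evaluated by the model and considered safe"          # note 3
        CERTIFIED_SAFE = "not evaluated, certified safe by the AI^2 process"   # note 4
        UNCERTAIN = "not evaluated, decision could be unsafe"                  # note 5
        EVALUATED_UNSAFE = "evaluated by the model and considered unsafe"      # note 6

    def classify(evaluated: bool, safe: bool, certified: bool) -> Scenario:
        # Maps a scenario's status flags onto the four categories of notes 3-6.
        if evaluated:
            return Scenario.EVALUATED_SAFE if safe else Scenario.EVALUATED_UNSAFE
        return Scenario.CERTIFIED_SAFE if certified else Scenario.UNCERTAIN

    print(classify(evaluated=False, safe=False, certified=True))  # Scenario.CERTIFIED_SAFE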


Acknowledgements

This research work has been carried out in the framework of IRT SystemX, Paris-Saclay, France, and was therefore granted public funds within the scope of the French Investments for the Future program (Programme Investissements d'Avenir, "PIA"). This work is part of the EPI project (EPI for Evaluation des Performances de l'Intelligence artificielle, i.e. AI-based performance evaluation of decision systems). The project is supervised by IRT SystemX and its partners Apsys, Expleo France, Naval Group and Stellantis. We especially wish to thank Gwenaëlle Berthier for her project management and team leadership, and Rosemary MacGillivray for her English proofreading.

Author information


Corresponding author

Correspondence to Faouzi Adjed.

Ethics declarations

Conflict of interest

The authors have no relevant competing interests to declare regarding the content of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Adjed, F., Mziou-Sallami, M., Pelliccia, F. et al. Coupling algebraic topology theory, formal methods and safety requirements toward a new coverage metric for artificial intelligence models. Neural Comput & Applic 34, 17129–17144 (2022). https://doi.org/10.1007/s00521-022-07363-6
