
Enhancing Trust in Machine Learning Systems by Formal Methods

With an Application to a Meteorological Problem

  • Conference paper
  • In: Machine Learning and Knowledge Extraction (CD-MAKE 2023)

Abstract

With the deployment of applications based on machine learning techniques, the need for understandable explanations of these systems’ results becomes evident. This paper clarifies the concept of an “explanation”: the main goal of an explanation is to build trust on the part of its recipient. This can only be achieved by making the results of the AI system understandable in terms of the users’ domain knowledge. In contrast to most approaches found in the literature, which base the explanation of an AI system’s results on the model produced by the machine learning algorithm, this paper seeks the explanation in the specific expert knowledge of the system’s users. The domain knowledge is defined as a formal model derived from a set of if-then rules provided by experts. The result of the AI system is represented as a proposition in a temporal logic, and we then attempt to formally prove this proposition within the domain model, using model checking algorithms and tools. If the proof succeeds, the result of the AI system is consistent with the model of the domain knowledge. Because the model encodes the rules it is based on, the path representing the proof can be translated back to those rules: this explains why the proposition is consistent with the domain knowledge. The paper describes the application of this approach to a real-world example from meteorology, the short-term forecasting of cloud coverage for particular locations.
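
To make the approach concrete, the following is a minimal sketch, in Python, of the underlying idea: expert if-then rules induce a transition system over weather states, the AI forecast is rephrased as a bounded “eventually” proposition, and a successful search for a witness path yields the rules that explain the forecast. The states, rule names, and property below are invented for illustration; the paper itself builds a stochastic model and checks temporal-logic properties with the PRISM model checker rather than with hand-written search code.

    # Illustrative sketch only: hypothetical states and rules, not the authors' model.
    from collections import deque

    # Expert if-then rules as labelled transitions:
    # (precondition state, rule id, successor state). All names are invented.
    RULES = [
        ("clear",     "R1: rising humidity -> scattered clouds",  "scattered"),
        ("scattered", "R2: stable pressure -> coverage persists", "scattered"),
        ("scattered", "R3: approaching front -> broken clouds",   "broken"),
        ("broken",    "R4: front passes -> overcast",             "overcast"),
        ("broken",    "R5: subsidence -> scattered clouds",       "scattered"),
    ]

    def check_eventually(init, goal, horizon):
        """Bounded reachability check, analogous to a temporal-logic property
        'eventually goal within horizon steps'. Returns a witness path as a
        list of rule ids, or None if the goal is not derivable from the rules."""
        queue = deque([(init, [])])
        visited = {(init, 0)}
        while queue:
            state, path = queue.popleft()
            if state == goal:
                return path          # the witness path doubles as the explanation
            if len(path) >= horizon:
                continue
            for pre, rule, post in RULES:
                if pre == state and (post, len(path) + 1) not in visited:
                    visited.add((post, len(path) + 1))
                    queue.append((post, path + [rule]))
        return None

    # The ML system forecasts overcast conditions; rephrased as a proposition:
    # "starting from 'clear', 'overcast' is reachable within 3 rule applications".
    witness = check_eventually("clear", "overcast", horizon=3)
    if witness:
        print("Forecast is consistent with the domain rules, via:")
        for rule in witness:
            print("  ", rule)
    else:
        print("Forecast cannot be derived from the domain rules.")

If a witness path is found, the rule ids along it can be presented to the user as the explanation; if none exists within the horizon, the forecast is not derivable from the domain rules as modelled.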



Notes

  1. In Chapter II, Article 5, Paragraph 71 of the introduction reads “… must guarantee … the right … to obtain an explanation of the decision reached after such assessment”. https://www.privacy-regulation.eu/en/22.htm, last accessed 2023/03/01.

  2. For more information see for example https://en.allmetsat.com.

  3. Source: https://en.allmetsat.com/

  4. http://www.prismmodelchecker.org/


Acknowledgement

This research was funded in whole, or in part, by the Austrian Science Fund (FWF) P 33656. For the purpose of open access, the author has applied a CC BY public copyright license to any Author Accepted Manuscript version arising from this submission.

Author information

Corresponding author

Correspondence to Paul Tavolato.


Copyright information

© 2023 IFIP International Federation for Information Processing

About this paper


Cite this paper

Tavolato-Wötzl, C., Tavolato, P. (2023). Enhancing Trust in Machine Learning Systems by Formal Methods. In: Holzinger, A., Kieseberg, P., Cabitza, F., Campagner, A., Tjoa, A.M., Weippl, E. (eds) Machine Learning and Knowledge Extraction. CD-MAKE 2023. Lecture Notes in Computer Science, vol 14065. Springer, Cham. https://doi.org/10.1007/978-3-031-40837-3_11

  • DOI: https://doi.org/10.1007/978-3-031-40837-3_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-40836-6

  • Online ISBN: 978-3-031-40837-3

  • eBook Packages: Computer Science, Computer Science (R0)
