DOI: 10.1007/978-3-031-60606-9_15
Article

Ontology-Based Explanations of Neural Networks: A User Perspective

Published: 29 June 2024

Abstract

A variety of methods exist for interpreting and explaining predictions obtained from neural networks; however, most of these methods are intended for experts in machine learning and artificial intelligence rather than for domain experts. Ontology-based explanation methods aim to address this issue, building on the rationale that presenting explanations in terms of the problem domain, accessible and understandable to the human expert, can improve their understandability. However, very few studies examine the real effects of ontology-based explanations and how humans perceive them. At the same time, it is widely recognized that experimental evaluation of explanation techniques is highly important and increasingly attracts the attention of both the AI and HCI communities. In this paper, we explore users’ interaction with ontology-based explanations of neural networks in order to a) check whether such explanations simplify the decision-maker’s task, and b) assess and compare various forms of ontology-based explanations. We collect both objective performance metrics (i.e., decision time and accuracy) and subjective ones (via a questionnaire). Our study shows that ontology-based explanations can improve decision-makers’ performance; however, complex logical explanations are not always better than a simple indication of the key concepts influencing the model output.
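The objective metrics named in the abstract lend themselves to a straightforward per-condition comparison. Below is a minimal, illustrative Python sketch of such an analysis; the trial log layout and the condition names ("no_explanation", "key_concepts", "logical_rules") are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch: comparing objective metrics (decision time, accuracy)
# across explanation conditions in a user study. The data layout and the
# condition names below are hypothetical, not taken from the paper.
from statistics import mean

# Each trial: (condition, decision_time_seconds, answer_was_correct)
trials = [
    ("no_explanation", 12.4, True),
    ("key_concepts",    8.1, True),
    ("key_concepts",    9.3, False),
    ("logical_rules",  11.7, True),
    ("logical_rules",  10.2, True),
]

def summarize(trials):
    """Group trials by condition; report mean decision time and accuracy."""
    by_condition = {}
    for condition, seconds, correct in trials:
        by_condition.setdefault(condition, []).append((seconds, correct))
    summary = {}
    for condition, rows in by_condition.items():
        times = [s for s, _ in rows]
        accuracy = sum(1 for _, c in rows if c) / len(rows)
        summary[condition] = (mean(times), accuracy)
    return summary

for condition, (t, acc) in summarize(trials).items():
    print(f"{condition}: mean decision time {t:.1f}s, accuracy {acc:.0%}")
```

In a real study one would additionally test whether the between-condition differences are statistically significant rather than comparing means alone.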



Published In

Artificial Intelligence in HCI: 5th International Conference, AI-HCI 2024, Held as Part of the 26th HCI International Conference, HCII 2024, Washington, DC, USA, June 29 – July 4, 2024, Proceedings, Part I
Jun 2024
490 pages
ISBN: 978-3-031-60605-2
DOI: 10.1007/978-3-031-60606-9
Editors: Helmut Degen, Stavroula Ntoa

Publisher

Springer-Verlag

Berlin, Heidelberg


Author Tags

  1. XAI
  2. Explainable AI
  3. Ontology
  4. Ontology-Based Explanations
  5. User Study
  6. Machine Learning
  7. Neural Networks

