Abstract
Explainable artificial intelligence (XAI) has gained increasing attention in the medical field, where understanding the reasons for predictions is crucial. In this paper we introduce an interactive and dynamic visual interface providing global, local and counterfactual explanations to end-users, with a use case in healthcare. The study uses a dataset for predicting an individual's risk of coronary heart disease (CHD) within 10 years, modelled with a decision tree classifier. We evaluated our XAI system with 200 participants. Our results show that participants reported an overall good assessment of the user interface, with non-expert users reporting higher satisfaction than users with some degree of knowledge of AI.
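The modelling approach the abstract describes can be sketched as follows. This is an illustrative example only: the feature names and synthetic data are assumptions standing in for the actual 10-year CHD dataset, not the authors' pipeline. It trains a shallow decision tree and prints its learned rules, which serve as a simple global explanation of the kind the interface exposes.

```python
# Hedged sketch of the abstract's method: a decision-tree classifier for
# 10-year CHD risk, trained on synthetic stand-in data (illustrative only).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 500
# Hypothetical risk factors: age, systolic blood pressure, cholesterol.
X = np.column_stack([
    rng.integers(30, 70, n),
    rng.normal(120, 15, n),
    rng.normal(200, 40, n),
])
# Synthetic label: higher age/BP/cholesterol raise the simulated CHD risk.
risk = 0.02 * X[:, 0] + 0.01 * X[:, 1] + 0.005 * X[:, 2]
y = (risk + rng.normal(0, 0.3, n) > risk.mean()).astype(int)

# A shallow tree keeps the learned rules small enough to show end-users.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Global explanation: the tree's decision rules as readable text.
print(export_text(clf, feature_names=["age", "sys_bp", "chol"]))
```

Capping the depth is a common trade-off in explanation-oriented systems: a deeper tree may fit better, but its rule set quickly becomes too large for a non-expert to read.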
Acknowledgements
This work is partially funded by the EPSRC CHAI project (EP/T026820/1).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Hu, J., Liang, Y., Zhao, W., McAreavey, K., Liu, W. (2023). An Interactive XAI Interface with Application in Healthcare for Non-experts. In: Longo, L. (ed.) Explainable Artificial Intelligence. xAI 2023. Communications in Computer and Information Science, vol. 1901. Springer, Cham. https://doi.org/10.1007/978-3-031-44064-9_35
DOI: https://doi.org/10.1007/978-3-031-44064-9_35
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-44063-2
Online ISBN: 978-3-031-44064-9
eBook Packages: Computer Science (R0)