
An Interactive XAI Interface with Application in Healthcare for Non-experts

Conference paper in: Explainable Artificial Intelligence (xAI 2023)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1901)


Abstract

Explainable artificial intelligence (XAI) has gained increasing attention in the medical field, where understanding the reasons behind predictions is crucial. In this paper, we introduce an interactive and dynamic visual interface that provides global, local and counterfactual explanations to end-users, with a use case in healthcare. The study uses a dataset for predicting an individual's risk of coronary heart disease (CHD) within 10 years, with a decision tree as the classification method. We evaluated our XAI system with 200 participants. The results show that participants gave an overall good assessment of the user interface, with non-expert users reporting higher satisfaction than users with some degree of knowledge of AI.
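The classification setup described in the abstract can be sketched as follows. This is not the authors' code: the feature names and the synthetic data below are illustrative stand-ins for the 10-year CHD dataset used in the paper, and only the method (a shallow decision tree, whose rule structure lends itself to global explanations) follows the abstract.

```python
# Minimal sketch of a decision-tree CHD classifier, assuming illustrative
# features (age, systolic BP, total cholesterol, cigarettes/day) and a
# synthetic label; the paper uses a real cardiovascular-study dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(30, 70, n),   # age (years)
    rng.normal(130, 20, n),    # systolic blood pressure
    rng.normal(240, 45, n),    # total cholesterol
    rng.integers(0, 40, n),    # cigarettes per day
])
# Synthetic 10-year CHD label, loosely tied to age and blood pressure
y = ((X[:, 0] > 55) & (X[:, 1] > 140)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

# A shallow tree can be rendered as if-then rules, one way an interface
# can surface a global explanation of the model to non-experts
print(export_text(clf, feature_names=["age", "sysBP", "totChol", "cigsPerDay"]))
print("test accuracy:", clf.score(X_te, y_te))
```

A local explanation for one patient then amounts to the single root-to-leaf path their record follows, and a counterfactual explanation to the smallest feature change that routes them to a leaf with the opposite prediction.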



Notes

  1. https://docs.aws.amazon.com/whitepapers/latest/model-explainability-aws-ai-ml/interpretability-versus-explainability.html.
  2. https://www.xoriant.com/blog/decision-trees-for-classification-a-machine-learning-algorithm.
  3. https://www.kaggle.com/datasets/christofel04/cardiovascular-study-dataset-predict-heart-disea.
  4. https://med.bristol-xai.uk.


Acknowledgements

This work is partially funded by the EPSRC CHAI project (EP/T026820/1).

Author information

Correspondence to Jingyu Hu, Kevin McAreavey or Weiru Liu.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Hu, J., Liang, Y., Zhao, W., McAreavey, K., Liu, W. (2023). An Interactive XAI Interface with Application in Healthcare for Non-experts. In: Longo, L. (ed.) Explainable Artificial Intelligence. xAI 2023. Communications in Computer and Information Science, vol 1901. Springer, Cham. https://doi.org/10.1007/978-3-031-44064-9_35


  • DOI: https://doi.org/10.1007/978-3-031-44064-9_35

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44063-2

  • Online ISBN: 978-3-031-44064-9

  • eBook Packages: Computer Science (R0)
