DOI: 10.1145/3581641.3584075
Research Article

Directive Explanations for Monitoring the Risk of Diabetes Onset: Introducing Directive Data-Centric Explanations and Combinations to Support What-If Explorations

Published: 27 March 2023

Abstract

Explainable artificial intelligence is increasingly used in machine learning (ML) based decision-making systems in healthcare. However, little research has compared the utility of different explanation methods in guiding healthcare experts for patient care. Moreover, it is unclear how useful, understandable, actionable, and trustworthy these methods are for healthcare experts, as they often require technical ML knowledge. This paper presents an explanation dashboard that predicts the risk of diabetes onset and explains those predictions with data-centric, feature-importance, and example-based explanations. We designed an interactive dashboard to assist healthcare experts, such as nurses and physicians, in monitoring the risk of diabetes onset and recommending measures to minimize risk. We conducted a qualitative study with 11 healthcare experts and a mixed-methods study with 45 healthcare experts and 51 diabetic patients to compare the explanation methods in our dashboard in terms of understandability, usefulness, actionability, and trust. Results indicate that participants preferred our representation of data-centric explanations, which pairs local explanations with a global overview, over the other methods. This paper therefore highlights the importance of visually directive data-centric explanations for helping healthcare experts gain actionable insights from patient health records. Furthermore, we share our design implications for tailoring the visual representation of different explanation methods for healthcare experts.
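
To ground the abstract's terminology, the sketch below illustrates two of the ingredients the dashboard combines: a local feature-importance explanation for a single patient and a what-if exploration that re-scores that patient after an actionable change. This is a minimal sketch under assumed conditions, not the paper's implementation; the feature names, the synthetic records, and the logistic-regression risk model are all illustrative stand-ins.

```python
# Minimal sketch (assumed setup, not the authors' code): a risk model,
# a local feature-importance explanation, and a what-if re-scoring.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["glucose", "bmi", "age", "blood_pressure"]  # illustrative names

# Synthetic records standing in for patient health data.
X = rng.normal(size=(500, 4))
y = (X @ np.array([1.5, 0.8, 0.4, 0.3]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def risk(x):
    """Predicted probability of diabetes onset for one patient."""
    return model.predict_proba(x.reshape(1, -1))[0, 1]

def local_contributions(x):
    """Per-feature contribution to the log-odds relative to the average
    patient. For a linear model, coefficient * (value - mean) gives this
    decomposition exactly -- the signal a feature-importance chart shows."""
    return dict(zip(features, model.coef_[0] * (x - X.mean(axis=0))))

patient = X[0]
print(f"current risk: {risk(patient):.2f}")
for name, c in sorted(local_contributions(patient).items(),
                      key=lambda kv: -abs(kv[1])):
    print(f"  {name:15s} {c:+.2f}")

# What-if exploration: lower glucose by one standard deviation, re-score.
what_if = patient.copy()
what_if[features.index("glucose")] -= X[:, 0].std()
print(f"risk after glucose reduction: {risk(what_if):.2f}")
```

A directive variant of this idea searches the what-if space for small, actionable changes that cross a risk threshold (counterfactual explanations), which is the kind of exploration the dashboard's combined view is meant to support.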





          Published In

          IUI '23: Proceedings of the 28th International Conference on Intelligent User Interfaces
          March 2023
          972 pages
          ISBN: 9798400701061
          DOI: 10.1145/3581641


          Publisher

          Association for Computing Machinery, New York, NY, United States



          Author Tags

          1. Explainable AI
          2. Human-centered AI
          3. Interpretable AI
          4. Responsible AI
          5. Visual Analytics
          6. XAI

          Qualifiers

          • Research-article
          • Research
          • Refereed limited

          Funding Sources

          • FWO
          • KU Leuven Internal Funds

          Conference

          IUI '23

          Acceptance Rates

          Overall Acceptance Rate 746 of 2,811 submissions, 27%



          Cited By

          • (2025) Human-AI collaboration is not very collaborative yet: a taxonomy of interaction patterns in AI-assisted decision making from a systematic review. Frontiers in Computer Science 6. https://doi.org/10.3389/fcomp.2024.1521066. Online publication date: 6-Jan-2025.
          • (2024) Human-centered evaluation of explainable AI applications: a systematic review. Frontiers in Artificial Intelligence 7. https://doi.org/10.3389/frai.2024.1456486. Online publication date: 17-Oct-2024.
          • (2024) Fairness and Transparency in Music Recommender Systems: Improvements for Artists. Proceedings of the 18th ACM Conference on Recommender Systems, 1368–1375. https://doi.org/10.1145/3640457.3688024. Online publication date: 8-Oct-2024.
          • (2024) Feedback, Control, or Explanations? Supporting Teachers With Steerable Distractor-Generating AI. Proceedings of the 14th Learning Analytics and Knowledge Conference, 690–700. https://doi.org/10.1145/3636555.3636933. Online publication date: 18-Mar-2024.
          • (2024) "How Good Is Your Explanation?": Towards a Standardised Evaluation Approach for Diverse XAI Methods on Multiple Dimensions of Explainability. Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 513–515. https://doi.org/10.1145/3631700.3664911. Online publication date: 27-Jun-2024.
          • (2024) Representation Debiasing of Generated Data Involving Domain Experts. Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 516–522. https://doi.org/10.1145/3631700.3664910. Online publication date: 27-Jun-2024.
          • (2024) An Explanatory Model Steering System for Collaboration between Domain Experts and AI. Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 75–79. https://doi.org/10.1145/3631700.3664886. Online publication date: 27-Jun-2024.
          • (2024) Towards Directive Explanations: Crafting Explainable AI Systems for Actionable Human-AI Interactions. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1–6. https://doi.org/10.1145/3613905.3638177. Online publication date: 11-May-2024.
          • (2024) Designing and Evaluating Explanations for a Predictive Health Dashboard: A User-Centred Case Study. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1–8. https://doi.org/10.1145/3613905.3637140. Online publication date: 11-May-2024.
          • (2024) EXMOS: Explanatory Model Steering through Multifaceted Explanations and Data Configurations. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–27. https://doi.org/10.1145/3613904.3642106. Online publication date: 11-May-2024.
