DOI: 10.1145/3544548.3581314 — CHI Conference Proceedings, research article

On Selective, Mutable and Dialogic XAI: a Review of What Users Say about Different Types of Interactive Explanations

Published: 19 April 2023

Abstract

Explainable AI (XAI) has matured in recent years to provide more human-centered explanations of AI-based decision systems. While static explanations remain predominant, interactive XAI has gathered momentum as a way to support the human cognitive process of explaining. However, the evidence regarding the benefits of interactive explanations remains unclear. In this paper, we map existing findings by conducting a detailed scoping review of 48 empirical studies in which interactive explanations are evaluated with human users. We also create a classification of interactive techniques specific to XAI and group the resulting categories according to their role in the cognitive process of explanation: "selective", "mutable", or "dialogic". We identify the effects of interactivity on several user-based metrics. We find that interactive explanations improve the perceived usefulness and performance of the human+AI team, but take longer. We highlight conflicting results regarding cognitive load and overconfidence. Lastly, we describe underexplored areas, including measuring curiosity or learning, and perturbing outcomes.

Supplementary Material

MP4 File (3544548.3581314-talk-video.mp4)
Pre-recorded Video Presentation
MP4 File (3544548.3581314-video-preview.mp4)
Video Preview

References

[1]
Ashraf Abdul, Jo Vermeulen, Danding Wang, Brian Y. Lim, and Mohan Kankanhalli. 2018. Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems(CHI ’18). Association for Computing Machinery, New York, NY, USA, 1–18. https://doi.org/10.1145/3173574.3174156
[2]
A. Adadi and M. Berrada. 2018. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access 6(2018), 52138–52160. https://doi.org/10.1109/ACCESS.2018.2870052 Conference Name: IEEE Access.
[3]
Sabbir Ahmad, Andy Bryant, Erica Kleinman, Zhaoqing Teng, Truong-Huy D. Nguyen, and Magy Seif El-Nasr. 2019. Modeling Individual and Team Behavior through Spatio-temporal Analysis. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play(CHI PLAY ’19). Association for Computing Machinery, New York, NY, USA, 601–612. https://doi.org/10.1145/3311350.3347188
[4]
Yongsu Ahn, Muheng Yan, Yu-Ru Lin, Wen-Ting Chung, and Rebecca Hwa. 2022. Tribe or Not? Critical Inspection of Group Differences Using TribalGram. ACM Transactions on Interactive Intelligent Systems 12, 1 (March 2022), 5:1–5:34. https://doi.org/10.1145/3484509
[5]
R. Amar, J. Eagan, and J. Stasko. 2005. Low-level components of analytic activity in information visualization. In IEEE Symposium on Information Visualization, 2005. INFOVIS 2005.IEEE, Minneapolis MN USA, 111–117. https://doi.org/10.1109/INFVIS.2005.1532136 ISSN: 1522-404X.
[6]
Geoffrey R. Amthor. 1992. Multimedia in education: an introduction. Int. Business Mag. (1992), 32–39.
[7]
Ariful Islam Anik and Andrea Bunt. 2021. Data-Centric Explanations: Explaining Training Data of Machine Learning Systems to Promote Transparency. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, Yokohama Japan, 1–13. https://doi.org/10.1145/3411764.3445736
[8]
Hilary Arksey and Lisa O’Malley. 2005. Scoping studies: towards a methodological framework. International Journal of Social Research Methodology 8, 1 (Feb. 2005), 19–32. https://doi.org/10.1080/1364557032000119616 Publisher: Routledge _eprint: https://doi.org/10.1080/1364557032000119616.
[9]
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, and Yunfeng Zhang. 2019. One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques. http://arxiv.org/abs/1909.03012 arXiv:1909.03012 [cs, stat].
[10]
S. Sandra Bae, Clement Zheng, Mary Etta West, Ellen Yi-Luen Do, Samuel Huron, and Danielle Albers Szafir. 2022. Making Data Tangible: A Cross-disciplinary Design Space for Data Physicalization. In CHI Conference on Human Factors in Computing Systems. ACM, New Orleans LA USA, 1–18. https://doi.org/10.1145/3491102.3501939
[11]
Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, and Daniel Weld. 2021. Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems(CHI ’21). Association for Computing Machinery, New York, NY, USA, 1–16. https://doi.org/10.1145/3411764.3445717
[12]
Philip Barker. 1994. Designing Interactive Learning. In Design and Production of Multimedia and Simulation-based Learning Material, Ton de Jong and Luigi Sarti (Eds.). Springer Netherlands, Dordrecht, 1–30. https://doi.org/10.1007/978-94-011-0942-0_1
[13]
Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador Garcia, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, and Francisco Herrera. 2020. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 58 (June 2020), 82–115. https://doi.org/10.1016/j.inffus.2019.12.012
[14]
R. K. E. Bellamy, K. Dey, M. Hind, S. C. Hoffman, S. Houde, K. Kannan, P. Lohia, J. Martino, S. Mehta, A. Mojsilović, S. Nagar, K. Natesan Ramamurthy, J. Richards, D. Saha, P. Sattigeri, M. Singh, K. R. Varshney, and Y. Zhang. 2019. AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias. IBM Journal of Research and Development 63, 4/5 (July 2019), 4:1–4:15. https://doi.org/10.1147/JRD.2019.2942287
[15]
Reuben Binns, Max Van Kleek, Michael Veale, Ulrik Lyngs, Jun Zhao, and Nigel Shadbolt. 2018. ’It’s Reducing a Human Being to a Percentage’: Perceptions of Justice in Algorithmic Decisions. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems(CHI ’18). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3173574.3173951
[16]
Clara Bove, Jonathan Aigrain, Marie-Jeanne Lesot, Charles Tijus, and Marcin Detyniecki. 2022. Contextualization and Exploration of Local Feature Importance Explanations to Improve Understanding and Satisfaction of Non-Expert Users. In 27th International Conference on Intelligent User Interfaces. ACM, Helsinki Finland, 807–819. https://doi.org/10.1145/3490099.3511139
[17]
Zana Buçinca, Phoebe Lin, Krzysztof Z. Gajos, and Elena L. Glassman. 2020. Proxy tasks and subjective measures can be misleading in evaluating explainable AI systems. In Proceedings of the 25th International Conference on Intelligent User Interfaces. ACM, Cagliari Italy, 454–464. https://doi.org/10.1145/3377325.3377498
[18]
Zana Buçinca, Maja Barbara Malaya, and Krzysztof Z. Gajos. 2021. To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making. Proceedings of the ACM on Human-Computer Interaction 5, CSCW1(2021), 188:1–188:21. https://doi.org/10.1145/3449287
[19]
Michael L. Callaham, Robert L. Wears, Ellen J. Weber, Christopher Barton, and Gary Young. 1998. Positive-Outcome Bias and Other Limitations in the Outcome of Research Abstracts Submitted to a Scientific Meeting. JAMA 280, 3 (July 1998), 254–257. https://doi.org/10.1001/jama.280.3.254
[20]
Furui Cheng, Dongyu Liu, Fan Du, Yanna Lin, Alexandra Zytek, Haomin Li, Huamin Qu, and Kalyan Veeramachaneni. 2022. VBridge: Connecting the Dots Between Features and Data to Explain Healthcare Models. IEEE Transactions on Visualization and Computer Graphics 28, 1 (Jan. 2022), 378–388. https://doi.org/10.1109/TVCG.2021.3114836
[21]
Furui Cheng, Yao Ming, and Huamin Qu. 2021. DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models. IEEE Transactions on Visualization and Computer Graphics 27, 2 (Feb. 2021), 1438–1447. https://doi.org/10.1109/TVCG.2020.3030342
[22]
Hao-Fei Cheng, Ruotong Wang, Zheng Zhang, Fiona O’Connell, Terrance Gray, F. Maxwell Harper, and Haiyi Zhu. 2019. Explaining Decision-Making Algorithms through UI: Strategies to Help Non-Expert Stakeholders. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems(CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3290605.3300789
[23]
Michael Chromik, Malin Eiband, Felicitas Buchner, Adrian Krüger, and Andreas Butz. 2021. I Think I Get Your Point, AI! The Illusion of Explanatory Depth in Explainable AI. In 26th International Conference on Intelligent User Interfaces(IUI ’21). Association for Computing Machinery, New York, NY, USA, 307–317. https://doi.org/10.1145/3397481.3450644
[24]
Dennis Collaris and Jarke J. van Wijk. 2020. ExplainExplore: Visual Exploration of Machine Learning Explanations. In 2020 IEEE Pacific Visualization Symposium (PacificVis). IEEE, Tianjin, China, 26–35. https://doi.org/10.1109/PacificVis48177.2020.7090 ISSN: 2165-8773.
[25]
Jason A. Colquitt and Jessica B. Rodell. 2015. Measuring justice and fairness. In The Oxford handbook of justice in the workplace. Oxford University Press, New York, NY, US, 187–202. https://doi.org/10.1093/oxfordhb/9780199981410.013.8
[26]
Richard Dazeley, Peter Vamplew, Cameron Foale, Charlotte Young, Sunil Aryal, and Francisco Cruz. 2021. Levels of explainable artificial intelligence for human-aligned conversational explanations. Artificial Intelligence 299 (Oct. 2021), 103525. https://doi.org/10.1016/j.artint.2021.103525
[27]
Maartje M. A. de Graaf and Bertram F. Malle. 2017. How People Explain Action (and Autonomous Intelligent Systems Should Too). In 2017 AAAI Fall Symposia, Arlington, Virginia, USA, November 9-11, 2017. AAAI Press, 19–26. https://aaai.org/ocs/index.php/FSS/FSS17/paper/view/16009
[28]
John Dewey. 1903. Democracy in Education. THE ELEMENTARY SCHOOL TEACHER(1903), 12.
[29]
Alan Dix and Geoffrey Ellis. 1998. Starting simple: adding value to static visualisation through simple interaction. In Proceedings of the working conference on Advanced visual interfaces(AVI ’98). Association for Computing Machinery, New York, NY, USA, 124–134. https://doi.org/10.1145/948496.948514
[30]
Jonathan Dodge, Andrew A. Anderson, Matthew Olson, Rupika Dikkala, and Margaret Burnett. 2022. How Do People Rank Multiple Mutant Agents?. In 27th International Conference on Intelligent User Interfaces(IUI ’22). Association for Computing Machinery, New York, NY, USA, 191–211. https://doi.org/10.1145/3490099.3511115
[31]
Vicente Dominguez, Pablo Messina, Ivania Donoso-Guzmán, and Denis Parra. 2019. The effect of explanations and algorithmic accuracy on visual recommender systems of artistic images. In Proceedings of the 24th International Conference on Intelligent User Interfaces. ACM, Marina del Ray California, 408–416. https://doi.org/10.1145/3301275.3302274
[32]
Finale Doshi-Velez and Been Kim. 2017. Towards A Rigorous Science of Interpretable Machine Learning. https://doi.org/10.48550/arXiv.1702.08608 arXiv:1702.08608 [cs, stat].
[33]
Filip Karlo Došilović, Mario Brčić, and Nikica Hlupić. 2018. Explainable artificial intelligence: A survey. In 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO). IEEE, Opatija, Croatia, 0210–0215. https://doi.org/10.23919/MIPRO.2018.8400040
[34]
Chris Evans and Nicola J. Gibbons. 2007. The interactivity effect in multimedia learning. Computers & Education 49, 4 (Dec. 2007), 1147–1160. https://doi.org/10.1016/j.compedu.2006.01.008
[35]
Shi Feng and Jordan Boyd-Graber. 2019. What can AI do for me? evaluating machine learning interpretations in cooperative play. In Proceedings of the 24th International Conference on Intelligent User Interfaces(IUI ’19). Association for Computing Machinery, New York, NY, USA, 229–239. https://doi.org/10.1145/3301275.3302265
[36]
Juliana J. Ferreira and Mateus S. Monteiro. 2020. What Are People Doing About XAI User Experience? A Survey on AI Explainability Research and Practice. In Design, User Experience, and Usability. Design for Contemporary Interactive Environments(Lecture Notes in Computer Science), Aaron Marcus and Elizabeth Rosenzweig (Eds.). Springer International Publishing, Cham, 56–73. https://doi.org/10.1007/978-3-030-49760-6_4
[37]
James D. Foley, Foley Dan Van, Andries Van Dam, Steven K. Feiner, and John F. Hughes. 1996. Computer Graphics: Principles and Practice. Addison-Wesley Professional, USA.
[38]
Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Rachel Bellamy, and Klaus Mueller. 2021. Explainable Active Learning (XAL): Toward AI Explanations as Interfaces for Machine Teachers. Proceedings of the ACM on Human-Computer Interaction 4, CSCW3 (Jan. 2021), 1–28. https://doi.org/10.1145/3432934
[39]
Azin Ghazimatin, Soumajit Pramanik, Rishiraj Saha Roy, and Gerhard Weikum. 2021. ELIXIR: Learning from User Feedback on Explanations to Improve Recommender Models. In Proceedings of the Web Conference 2021(WWW ’21). Association for Computing Machinery, New York, NY, USA, 3850–3860. https://doi.org/10.1145/3442381.3449848
[40]
H. P. Grice. 1975. Logic and Conversation. Brill. https://doi.org/10.1163/9789004368811_003 Pages: 41-58 Section: Speech Acts.
[41]
Ziwei Gu, Jing Nathan Yan, and Jeffrey M. Rzeszotarski. 2021. Understanding User Sensemaking in Machine Learning Fairness Assessment Systems. In Proceedings of the Web Conference 2021(WWW ’21). Association for Computing Machinery, New York, NY, USA, 658–668. https://doi.org/10.1145/3442381.3450092
[42]
Lijie Guo, Elizabeth M. Daly, Oznur Alkan, Massimiliano Mattetti, Owen Cornec, and Bart Knijnenburg. 2022. Building Trust in Interactive Machine Learning via User Contributed Interpretable Rules. In 27th International Conference on Intelligent User Interfaces(IUI ’22). Association for Computing Machinery, New York, NY, USA, 537–548. https://doi.org/10.1145/3490099.3511111
[43]
Sam Hepenstal, Leishi Zhang, Neesha Kodagoda, and B. l. william Wong. 2021. Developing Conversational Agents for Use in Criminal Investigations. ACM Transactions on Interactive Intelligent Systems 11, 3-4 (Dec. 2021), 1–35. https://doi.org/10.1145/3444369
[44]
Diana C. Hernandez-Bocanegra and Jürgen Ziegler. 2021. Conversational review-based explanations for recommender systems: Exploring users’ query behavior. In CUI 2021 - 3rd Conference on Conversational User Interfaces(CUI ’21). Association for Computing Machinery, New York, NY, USA, 1–11. https://doi.org/10.1145/3469595.3469596
[45]
Germund Hesslow. 1988. The Problem of Causal Selection. In Contemporary Science and Natural Explanation: Commonsense Conceptions of Causality, Denis J. Hilton (Ed.). New York University Press.
[46]
Robert R. Hoffman, Shane T. Mueller, Gary Klein, and Jordan Litman. 2019. Metrics for Explainable AI: Challenges and Prospects. http://arxiv.org/abs/1812.04608 arXiv:1812.04608 [cs].
[47]
Fred Hohman, Andrew Head, Rich Caruana, Robert DeLine, and Steven M. Drucker. 2019. Gamut: A Design Probe to Understand How Data Scientists Understand Machine Learning Models. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, Glasgow Scotland Uk, 1–13. https://doi.org/10.1145/3290605.3300809
[48]
Maia Jacobs, Jeffrey He, Melanie F. Pradier, Barbara Lam, Andrew C. Ahn, Thomas H. McCoy, Roy H. Perlis, Finale Doshi-Velez, and Krzysztof Z. Gajos. 2021. Designing AI for Trust and Collaboration in Time-Constrained Medical Decisions: A Sociotechnical Lens. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, Yokohama Japan, 1–14. https://doi.org/10.1145/3411764.3445385
[49]
Lars-Erik Janlert and Erik Stolterman. 2017. The Meaning of Interactivity—Some Proposals for Definitions and Measures. Human–Computer Interaction 32, 3 (May 2017), 103–138. https://doi.org/10.1080/07370024.2016.1226139 Publisher: Taylor & Francis _eprint: https://doi.org/10.1080/07370024.2016.1226139.
[50]
Shichao Jia, Zeyu Li, Nuo Chen, and Jiawan Zhang. 2022. Towards Visual Explainable Active Learning for Zero-Shot Classification. IEEE Transactions on Visualization and Computer Graphics 28, 1 (Jan. 2022), 791–801. https://doi.org/10.1109/TVCG.2021.3114793
[51]
Zhuochen Jin, Shuyuan Cui, Shunan Guo, David Gotz, Jimeng Sun, and Nan Cao. 2020. CarePre: An Intelligent Clinical Decision Assistance System. ACM Transactions on Computing for Healthcare 1, 1 (March 2020), 6:1–6:20. https://doi.org/10.1145/3344258
[52]
Daniel Kahneman and Amos Tversky. 1979. Prospect Theory: An Analysis of Decision under Risk. Econometrica 47, 2 (1979), 263–291. https://doi.org/10.2307/1914185 Publisher: [Wiley, Econometric Society].
[53]
D.A. Keim. 2002. Information visualization and visual data mining. IEEE Transactions on Visualization and Computer Graphics 8, 1 (Jan. 2002), 1–8. https://doi.org/10.1109/2945.981847
[54]
Carmel Kent, Esther Laslo, and Sheizaf Rafaeli. 2016. Interactivity in online discussions and learning outcomes. Computers & Education 97 (June 2016), 116–128. https://doi.org/10.1016/j.compedu.2016.03.002
[55]
Anjali Khurana, Parsa Alamzadeh, and Parmit K. Chilana. 2021. ChatrEx: Designing Explainable Chatbot Interfaces for Enhancing Usefulness, Transparency, and Trust. In 2021 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC). IEEE, St Louis, MO, USA, 1–11. https://doi.org/10.1109/VL/HCC51201.2021.9576440 ISSN: 1943-6106.
[56]
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, and Rory Sayres. 2018. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV). https://doi.org/10.48550/arXiv.1711.11279 arXiv:1711.11279 [stat].
[57]
Chris Kim, Xiao Lin, Christopher Collins, Graham W. Taylor, and Mohamed R. Amer. 2021. Learn, Generate, Rank, Explain: A Case Study of Visual Explanation by Generative Machine Learning. ACM Transactions on Interactive Intelligent Systems 11, 3-4 (Sept. 2021), 23:1–23:34. https://doi.org/10.1145/3465407
[58]
Bart P. Knijnenburg, Martijn C. Willemsen, Zeno Gantner, Hakan Soncu, and Chris Newell. 2012. Explaining the user experience of recommender systems. User Modeling and User-Adapted Interaction 22, 4 (Oct. 2012), 441–504. https://doi.org/10.1007/s11257-011-9118-4
[59]
Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, and Percy Liang. 2020. Concept Bottleneck Models. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020(Proceedings of Machine Learning Research, Vol. 119). PMLR, Virtual Event, 5338–5348. http://proceedings.mlr.press/v119/koh20a.html
[60]
Yubo Kou and Xinning Gui. 2020. Mediating Community-AI Interaction through Situated Explanation: The Case of AI-Led Moderation. Proceedings of the ACM on Human-Computer Interaction 4, CSCW2 (Oct. 2020), 102:1–102:27. https://doi.org/10.1145/3415173
[61]
Pigi Kouki, James Schaffer, Jay Pujara, John O’Donovan, and Lise Getoor. 2019. Personalized explanations for hybrid recommender systems. In Proceedings of the 24th International Conference on Intelligent User Interfaces(IUI ’19). Association for Computing Machinery, New York, NY, USA, 379–390. https://doi.org/10.1145/3301275.3302306
[62]
Maria Kouvela, Ilias Dimitriadis, and Athena Vakali. 2020. Bot-Detective: An explainable Twitter bot detection service with crowdsourcing functionalities. In Proceedings of the 12th International Conference on Management of Digital EcoSystems(MEDES ’20). Association for Computing Machinery, New York, NY, USA, 55–63. https://doi.org/10.1145/3415958.3433075
[63]
Josua Krause, Adam Perer, and Kenney Ng. 2016. Interacting with Predictions: Visual Inspection of Black-box Machine Learning Models. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems(CHI ’16). Association for Computing Machinery, New York, NY, USA, 5686–5697. https://doi.org/10.1145/2858036.2858529
[64]
Todd Kulesza, Margaret Burnett, Weng-Keen Wong, and Simone Stumpf. 2015. Principles of Explanatory Debugging to Personalize Interactive Machine Learning. In Proceedings of the 20th International Conference on Intelligent User Interfaces(IUI ’15). Association for Computing Machinery, New York, NY, USA, 126–137. https://doi.org/10.1145/2678025.2701399
[65]
Bum Chul Kwon, Min-Je Choi, Joanne Taery Kim, Edward Choi, Young Bin Kim, Soonwook Kwon, Jimeng Sun, and Jaegul Choo. 2019. RetainVis: Visual Analytics with Interpretable and Interactive Recurrent Neural Networks on Electronic Medical Records. IEEE Transactions on Visualization and Computer Graphics 25, 1 (Jan. 2019), 299–309. https://doi.org/10.1109/TVCG.2018.2865027
[66]
Vivian Lai, Chacha Chen, Q. Vera Liao, Alison Smith-Renner, and Chenhao Tan. 2021. Towards a Science of Human-AI Decision Making: A Survey of Empirical Studies. https://doi.org/10.48550/arXiv.2112.11471 arXiv:2112.11471 [cs].
[67]
Markus Langer, Daniel Oster, Timo Speith, Holger Hermanns, Lena Kästner, Eva Schmidt, Andreas Sesing, and Kevin Baum. 2021. What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artificial Intelligence 296 (July 2021), 103473. https://doi.org/10.1016/j.artint.2021.103473
[68]
Min Kyung Lee, Anuraag Jain, Hea Jin Cha, Shashank Ojha, and Daniel Kusbit. 2019. Procedural Justice in Algorithmic Fairness: Leveraging Transparency and Outcome Control for Fair Algorithmic Mediation. Proceedings of the ACM on Human-Computer Interaction 3, CSCW (Nov. 2019), 182:1–182:26. https://doi.org/10.1145/3359284
[69]
James R. Lewis. 1991. Psychometric evaluation of an after-scenario questionnaire for computer usability studies: the ASQ. ACM SIGCHI Bulletin 23, 1 (Jan. 1991), 78–81. https://doi.org/10.1145/122672.122692
[70]
Q. Vera Liao, Daniel Gruen, and Sarah Miller. 2020. Questioning the AI: Informing Design Practices for Explainable AI User Experiences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems(CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–15. https://doi.org/10.1145/3313831.3376590
[71]
Brian Y. Lim and Anind K. Dey. 2009. Assessing demand for intelligibility in context-aware applications. In Proceedings of the 11th international conference on Ubiquitous computing(UbiComp ’09). Association for Computing Machinery, New York, NY, USA, 195–204. https://doi.org/10.1145/1620545.1620576
[72]
Peter Lipton. 1990. Contrastive Explanation*. Royal Institute of Philosophy Supplements 27 (March 1990), 247–266. https://doi.org/10.1017/S1358246100005130 Publisher: Cambridge University Press.
[73]
Han Liu, Vivian Lai, and Chenhao Tan. 2021. Understanding the Effect of Out-of-distribution Examples and Interactive Explanations on Human-AI Decision Making. Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (Oct. 2021), 408:1–408:45. https://doi.org/10.1145/3479552
[74]
D. Mcknight, Michelle Carter, Jason Thatcher, and Paul Clay. 2011. Trust in a specific technology: An Investigation of its Components and Measures. ACM Transactions on Management Information Systems 2 (June 2011), 12–32. https://doi.org/10.1145/1985347.1985353
[75]
Gaspar Isaac Melsión, Ilaria Torre, Eva Vidal, and Iolanda Leite. 2021. Using Explainability to Help Children UnderstandGender Bias in AI. In Interaction Design and Children. ACM, Athens Greece, 87–99. https://doi.org/10.1145/3459990.3460719
[76]
Martijn Millecamp, Nyi Nyi Htun, Cristina Conati, and Katrien Verbert. 2019. To explain or not to explain: the effects of personal characteristics when explaining music recommendations. In Proceedings of the 24th International Conference on Intelligent User Interfaces(IUI ’19). Association for Computing Machinery, New York, NY, USA, 397–407. https://doi.org/10.1145/3301275.3302313
[77]
Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence 267 (Feb. 2019), 1–38. https://doi.org/10.1016/j.artint.2018.07.007
[78]
Yao Ming, Huamin Qu, and Enrico Bertini. 2019. RuleMatrix: Visualizing and Understanding Classifiers with Rules. IEEE Transactions on Visualization and Computer Graphics 25, 1 (Jan. 2019), 342–352. https://doi.org/10.1109/TVCG.2018.2864812
[79]
Akira Miyake and Priti Shah (Eds.). 1999. Models of working memory: Mechanisms of active maintenance and executive control. Cambridge University Press, New York, NY, US. https://doi.org/10.1017/CBO9781139174909 Pages: xx, 506.
[80]
David Moher, Alessandro Liberati, Jennifer Tetzlaff, Douglas G. Altman, and The PRISMA Group. 2009. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Medicine 6, 7 (July 2009), e1000097. https://doi.org/10.1371/journal.pmed.1000097
[81]
Cecily Morrison, Kit Huckvale, Bob Corish, Richard Banks, Martin Grayson, Jonas Dorn, Abigail Sellen, and Sân Lindley. 2018. Visualizing Ubiquitously Sensed Measures of Motor Ability in Multiple Sclerosis: Reflections on Communicating Machine Learning in Practice. ACM Transactions on Interactive Intelligent Systems 8, 2 (July 2018), 1–28. https://doi.org/10.1145/3181670
[82]
C. D. Mulrow. 1994. Systematic Reviews: Rationale for systematic reviews. BMJ 309, 6954 (Sept. 1994), 597–599. https://doi.org/10.1136/bmj.309.6954.597 Publisher: British Medical Journal Publishing Group Section: Education and debate.
[83]
Zachary Munn, Micah D. J. Peters, Cindy Stern, Catalin Tufanaru, Alexa McArthur, and Edoardo Aromataris. 2018. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Medical Research Methodology 18, 1 (Nov. 2018), 143. https://doi.org/10.1186/s12874-018-0611-x
[84]
Mohammad Naiseh, Reem S. Al-Mansoori, Dena Al-Thani, Nan Jiang, and Raian Ali. 2021. Nudging through Friction: An Approach for Calibrating Trust in Explainable AI. In 2021 8th International Conference on Behavioral and Social Computing (BESC). IEEE, Doha, Qatar, 1–5. https://doi.org/10.1109/BESC53957.2021.9635271
[85]
Mahsan Nourani, Chiradeep Roy, Jeremy E Block, Donald R Honeycutt, Tahrima Rahman, Eric Ragan, and Vibhav Gogate. 2021. Anchoring Bias Affects Mental Model Formation and User Reliance in Explainable AI Systems. In 26th International Conference on Intelligent User Interfaces. Association for Computing Machinery, New York, NY, USA, 340–350. https://doi.org/10.1145/3397481.3450639
[86]
Heather L. O’Brien, Paul Cairns, and Mark Hall. 2018. A practical approach to measuring user engagement with the refined user engagement scale (UES) and new UES short form. International Journal of Human-Computer Studies 112 (April 2018), 28–39. https://doi.org/10.1016/j.ijhcs.2018.01.004
[87]
Matthew J Page, Joanne E McKenzie, Patrick M Bossuyt, Isabelle Boutron, Tammy C Hoffmann, Cynthia D Mulrow, Larissa Shamseer, Jennifer M Tetzlaff, Elie A Akl, Sue E Brennan, 2021. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Systematic reviews 10, 1 (2021), 1–11.
[88]
Sean Penney, Jonathan Dodge, Claudia Hilderbrand, Andrew Anderson, Logan Simpson, and Margaret Burnett. 2018. Toward Foraging for Understanding of StarCraft Agents: An Empirical Study. In 23rd International Conference on Intelligent User Interfaces(IUI ’18). Association for Computing Machinery, New York, NY, USA, 225–237. https://doi.org/10.1145/3172944.3172946
[89]
Sayantan Polley, Suhita Ghosh, Marcus Thiel, Michael Kotzyba, and Andreas Nürnberger. 2020. SIMFIC: An Explainable Book Search Companion. In 2020 IEEE International Conference on Human-Machine Systems (ICHMS). IEEE, Rome, Italy, 1–6. https://doi.org/10.1109/ICHMS49158.2020.9209581
[90]
Gabriëlle Ras, Marcel van Gerven, and Pim Haselager. 2018. Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges. In Explainable and Interpretable Models in Computer Vision and Machine Learning, Hugo Jair Escalante, Sergio Escalera, Isabelle Guyon, Xavier Baró, Yağmur Güçlütürk, Umut Güçlü, and Marcel van Gerven (Eds.). Springer International Publishing, Cham, 19–36. https://doi.org/10.1007/978-3-319-98131-4_2
[91]
Juan Rebanal, Jordan Combitsis, Yuqi Tang, and Xiang ’Anthony’ Chen. 2021. XAlgo: a Design Probe of Explaining Algorithms’ Internal States via Question-Answering. In 26th International Conference on Intelligent User Interfaces(IUI ’21). Association for Computing Machinery, New York, NY, USA, 329–339. https://doi.org/10.1145/3397481.3450676
[92]
Dent M. Rhodes and Janet White Azbell. 1985. Designing Interactive Video Instruction Professionally. Training and Development Journal 39, 12 (1985), 31–33.
[93]
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining(KDD ’16). Association for Computing Machinery, New York, NY, USA, 1135–1144. https://doi.org/10.1145/2939672.2939778
[94]
Andrew Ross, Nina Chen, Elisa Zhao Hang, Elena L. Glassman, and Finale Doshi-Velez. 2021. Evaluating the Interpretability of Generative Models by Interactive Reconstruction. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, Yokohama Japan, 1–15. https://doi.org/10.1145/3411764.3445296
[95]
Steven F. Roth and Joe Mattis. 1990. Data characterization for intelligent graphics presentation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems(CHI ’90). Association for Computing Machinery, New York, NY, USA, 193–200. https://doi.org/10.1145/97243.97273
[96]
Denise M. Rousseau, Sim B. Sitkin, Ronald S. Burt, and Colin Camerer. 1998. Introduction to Special Topic Forum: Not so Different after All: A Cross-Discipline View of Trust. The Academy of Management Review 23, 3 (1998), 393–404. http://www.jstor.org/stable/259285
[97]
Maria Roussou. 2004. Learning by doing and learning through play: an exploration of interactivity in virtual environments for children. Computers in Entertainment 2, 1 (Jan. 2004), 10. https://doi.org/10.1145/973801.973818
[98]
James Schaffer, Prasanna Giridhar, Debra Jones, Tobias Höllerer, Tarek Abdelzaher, and John O’Donovan. 2015. Getting the Message? A Study of Explanation Interfaces for Microblog Data Analysis. In Proceedings of the 20th International Conference on Intelligent User Interfaces(IUI ’15). Association for Computing Machinery, New York, NY, USA, 345–356. https://doi.org/10.1145/2678025.2701406
[99]
Kara Schick-Makaroff, Marjorie MacDonald, Marilyn Plummer, Judy Burgess, and Wendy Neander. 2016. What Synthesis Methodology Should I Use? A Review and Analysis of Approaches to Research Synthesis. AIMS public health 3, 1 (March 2016), 172–215. https://doi.org/10.3934/publichealth.2016.1.172
[100]
Johanes Schneider and Joshua Handali. 2019. Personalized explanation in machine learning: A conceptualization. In Proceedings of the European Conference on Information Systems(ECIS 2019). arXiv, Stockholm-Uppsala, Sweden. https://doi.org/10.48550/arXiv.1901.00770 arXiv:1901.00770 [cs, stat].
[101]
Richard Schwier and Earl R. Misanchuk. 1993. Interactive Multimedia Instruction. Educational Technology.
[102]
Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. 2019. What makes a good conversation? How controllable attributes affect human judgments. https://doi.org/10.48550/arXiv.1902.08654 arXiv:1902.08654 [cs].
[103]
Rita Sevastjanova, Wolfgang Jentner, Fabian Sperrle, Rebecca Kehlbeck, Jürgen Bernard, and Mennatallah El-Assady. 2021. QuestionComb: A Gamification Approach for the Visual Explanation of Linguistic Phenomena through Interactive Labeling. ACM Transactions on Interactive Intelligent Systems 11, 3-4 (Dec. 2021), 1–38. https://doi.org/10.1145/3429448
[104]
Lei Shi, Zhiyang Teng, Le Wang, Yue Zhang, and Alexander Binder. 2019. DeepClue: Visual Interpretation of Text-Based Deep Stock Prediction. IEEE Transactions on Knowledge and Data Engineering 31, 6 (June 2019), 1094–1108. https://doi.org/10.1109/TKDE.2018.2854193
[105]
Rod Sims. 1997. Interactivity: A forgotten art? Computers in Human Behavior 13, 2 (May 1997), 157–180. https://doi.org/10.1016/S0747-5632(97)00004-6
[106]
Dylan Slack, Satyapriya Krishna, Himabindu Lakkaraju, and Sameer Singh. 2022. TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations. https://doi.org/10.48550/arXiv.2207.04154 arXiv:2207.04154 [cs].
[107]
Kacper Sokol and Peter Flach. 2020. Explainability fact sheets: a framework for systematic assessment of explainable approaches. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. ACM, Barcelona, Spain, 56–67. https://doi.org/10.1145/3351095.3372870
[108]
Kacper Sokol and Peter Flach. 2020. One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency. KI - Künstliche Intelligenz 34, 2 (June 2020), 235–250. https://doi.org/10.1007/s13218-020-00637-y arXiv:2001.09734 [cs, stat].
[109]
Francesco Sovrano and Fabio Vitali. 2021. From Philosophy to Interfaces: an Explanatory Method and a Tool Inspired by Achinstein’s Theory of Explanation. In 26th International Conference on Intelligent User Interfaces. ACM, College Station, TX, USA, 81–91. https://doi.org/10.1145/3397481.3450655
[110]
Thilo Spinner, Udo Schlegel, Hanna Schäfer, and Mennatallah El-Assady. 2020. explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning. IEEE Transactions on Visualization and Computer Graphics 26, 1 (Jan. 2020), 1064–1074. https://doi.org/10.1109/TVCG.2019.2934629
[111]
Aaron Springer and Steve Whittaker. 2019. Progressive disclosure: empirically motivated approaches to designing effective transparency. In Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI ’19). Association for Computing Machinery, New York, NY, USA, 107–120. https://doi.org/10.1145/3301275.3302322
[112]
Jonathan Steuer. 1992. Defining Virtual Reality: Dimensions Determining Telepresence. Journal of Communication (1992), 73–93.
[113]
Jiao Sun, Q. Vera Liao, Michael Muller, Mayank Agarwal, Stephanie Houde, Kartik Talamadupula, and Justin D. Weisz. 2022. Investigating Explainability of Generative AI for Code through Scenario-based Design. In 27th International Conference on Intelligent User Interfaces (IUI ’22). Association for Computing Machinery, New York, NY, USA, 212–228. https://doi.org/10.1145/3490099.3511119
[114]
S. Shyam Sundar, Qian Xu, and Saraswathi Bellur. 2010. Designing interactivity in media interfaces: a communications perspective. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’10). Association for Computing Machinery, New York, NY, USA, 2247–2256. https://doi.org/10.1145/1753326.1753666
[115]
Harini Suresh, Kathleen M Lewis, John Guttag, and Arvind Satyanarayan. 2022. Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs. In 27th International Conference on Intelligent User Interfaces (IUI ’22). Association for Computing Machinery, New York, NY, USA, 767–781. https://doi.org/10.1145/3490099.3511160
[116]
Maxwell Szymanski, Martijn Millecamp, and Katrien Verbert. 2021. Visual, textual or hybrid: the effect of user expertise on different explanations. In 26th International Conference on Intelligent User Interfaces. ACM, College Station, TX, USA, 109–119. https://doi.org/10.1145/3397481.3450662
[117]
Nava Tintarev. 2007. Explanations of recommendations. In Proceedings of the 2007 ACM conference on Recommender systems (RecSys ’07). Association for Computing Machinery, New York, NY, USA, 203–206. https://doi.org/10.1145/1297231.1297275
[118]
Andrea C. Tricco, Erin Lillie, Wasifa Zarin, Kelly K. O’Brien, Heather Colquhoun, Danielle Levac, David Moher, Micah D.J. Peters, Tanya Horsley, Laura Weeks, Susanne Hempel, Elie A. Akl, Christine Chang, Jessie McGowan, Lesley Stewart, Lisa Hartling, Adrian Aldcroft, Michael G. Wilson, Chantelle Garritty, Simon Lewin, Christina M. Godfrey, Marilyn T. Macdonald, Etienne V. Langlois, Karla Soares-Weiser, Jo Moriarty, Tammy Clifford, Özge Tunçalp, and Sharon E. Straus. 2018. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Annals of Internal Medicine 169, 7 (Oct. 2018), 467–473. https://doi.org/10.7326/M18-0850
[119]
Chun-Hua Tsai, Yue You, Xinning Gui, Yubo Kou, and John M. Carroll. 2021. Exploring and Promoting Diagnostic Transparency and Explainability in Online Symptom Checkers. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21). Association for Computing Machinery, New York, NY, USA, 1–17. https://doi.org/10.1145/3411764.3445101
[120]
Betty Vandenbosch and Michael J. Ginzberg. 1996. Lotus Notes® and Collaboration: Plus ça change... Journal of Management Information Systems 13, 3 (Dec. 1996), 65–81. https://doi.org/10.1080/07421222.1996.11518134
[121]
Marco Virgolin, Andrea De Lorenzo, Francesca Randone, Eric Medvet, and Mattias Wahde. 2021. Model learning with personalized interpretability estimation (ML-PIE). In Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO ’21). Association for Computing Machinery, New York, NY, USA, 1355–1364. https://doi.org/10.1145/3449726.3463166
[122]
Danding Wang, Qian Yang, Ashraf Abdul, and Brian Y. Lim. 2019. Designing Theory-Driven User-Centric Explainable AI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–15. https://doi.org/10.1145/3290605.3300831
[123]
Jane Webster and Richard T. Watson. 2002. Analyzing the Past to Prepare for the Future: Writing a Literature Review. MIS Quarterly 26, 2 (2002), xiii–xxiii. https://www.jstor.org/stable/4132319
[124]
Daricia Wilkinson, Öznur Alkan, Q. Vera Liao, Massimiliano Mattetti, Inge Vejsbjerg, Bart P. Knijnenburg, and Elizabeth Daly. 2021. Why or Why Not? The Effect of Justification Styles on Chatbot Recommendations. ACM Transactions on Information Systems 39, 4 (Oct. 2021), 1–21. https://doi.org/10.1145/3441715
[125]
Leland Wilkinson. 2005. Introduction. In The Grammar of Graphics. Springer, New York, NY, 1–19. https://doi.org/10.1007/0-387-28695-0_1
[126]
Tongshuang Wu, Marco Túlio Ribeiro, Jeffrey Heer, and Daniel S. Weld. 2021. Polyjuice: Generating Counterfactuals for Explaining, Evaluating, and Improving Models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL/IJCNLP), Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (Eds.). Association for Computational Linguistics, Virtual Event, 6707–6723. https://aclanthology.org/2021.acl-long.523
[127]
Jing Nathan Yan, Ziwei Gu, Hubert Lin, and Jeffrey M. Rzeszotarski. 2020. Silva: Interactively Assessing Machine Learning Fairness Using Causality. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3313831.3376447
[128]
Ji Soo Yi, Youn ah Kang, John Stasko, and J.A. Jacko. 2007. Toward a Deeper Understanding of the Role of Interaction in Information Visualization. IEEE Transactions on Visualization and Computer Graphics 13, 6 (Nov. 2007), 1224–1231. https://doi.org/10.1109/TVCG.2007.70515

      Published In

      CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
      April 2023
      14911 pages
      ISBN:9781450394215
      DOI:10.1145/3544548
      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Badges

      • Honorable Mention

      Author Tags

      1. artificial intelligence
      2. explainability
      3. human-grounded evaluations
      4. interactivity
      5. interpretability

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Conference

      CHI '23

      Acceptance Rates

      Overall Acceptance Rate 6,199 of 26,314 submissions, 24%

      Bibliometrics & Citations

      Bibliometrics

      Article Metrics

      • Downloads (Last 12 months): 1,111
      • Downloads (Last 6 weeks): 81
      Reflects downloads up to 05 Jan 2025

      Citations

      Cited By

      • (2025) Human-AI collaboration is not very collaborative yet: a taxonomy of interaction patterns in AI-assisted decision making from a systematic review. Frontiers in Computer Science 6. https://doi.org/10.3389/fcomp.2024.1521066. Online publication date: 6-Jan-2025
      • (2025) XEdgeAI: A human-centered industrial inspection framework with data-centric Explainable Edge AI approach. Information Fusion 116 (102782). https://doi.org/10.1016/j.inffus.2024.102782. Online publication date: Apr-2025
      • (2025) ContractMind: Trust-calibration interaction design for AI contract review tools. International Journal of Human-Computer Studies 196 (103411). https://doi.org/10.1016/j.ijhcs.2024.103411. Online publication date: Feb-2025
      • (2024) Integration of User-Centered Design in the Development of Big Data and Machine Learning-Based Applications. Proceedings of the XXIII Brazilian Symposium on Human Factors in Computing Systems, 1–12. https://doi.org/10.1145/3702038.3702097. Online publication date: 7-Oct-2024
      • (2024) The X Factor: On the Relationship between User eXperience and eXplainability. Proceedings of the 13th Nordic Conference on Human-Computer Interaction, 1–12. https://doi.org/10.1145/3679318.3685352. Online publication date: 13-Oct-2024
      • (2024) Envisioning Possibilities and Challenges of AI for Personalized Cancer Care. Companion Publication of the 2024 Conference on Computer-Supported Cooperative Work and Social Computing, 415–421. https://doi.org/10.1145/3678884.3681885. Online publication date: 11-Nov-2024
      • (2024) Outcome First or Overview First? Optimizing Patient-Oriented Framework for Evidence-Based Healthcare Treatment Selections with XAI Tools. Companion Publication of the 2024 Conference on Computer-Supported Cooperative Work and Social Computing, 248–254. https://doi.org/10.1145/3678884.3681859. Online publication date: 11-Nov-2024
      • (2024) VIME: Visual Interactive Model Explorer for Identifying Capabilities and Limitations of Machine Learning Models for Sequential Decision-Making. Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1–21. https://doi.org/10.1145/3654777.3676323. Online publication date: 13-Oct-2024
      • (2024) The AI-DEC: A Card-based Design Method for User-centered AI Explanations. Proceedings of the 2024 ACM Designing Interactive Systems Conference, 1010–1028. https://doi.org/10.1145/3643834.3661576. Online publication date: 1-Jul-2024
      • (2024) An Explanatory Model Steering System for Collaboration between Domain Experts and AI. Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 75–79. https://doi.org/10.1145/3631700.3664886. Online publication date: 27-Jun-2024
