Research Article · DOI: 10.1145/3581641.3584082

Investigating the Intelligibility of Plural Counterfactual Examples for Non-Expert Users: an Explanation User Interface Proposition and User Study

Published: 27 March 2023

Abstract

Plural counterfactual examples have been proposed to explain a classifier's prediction by offering the user several minimal modifications that could be performed to change that prediction. Yet such explanations may provide too much information, creating potential confusion for end-users who have no specific knowledge of either machine learning or the application domain. In this paper, we investigate the design of explanation user interfaces for plural counterfactual examples, offering comparative analysis features to mitigate this potential confusion and improve the intelligibility of such explanations for non-expert users. We propose an implementation of such an enhanced explanation user interface, illustrating it in a financial scenario of a loan application. We then present the results of a lab user study conducted with 112 participants to evaluate the effectiveness of providing plural examples and of offering comparative analysis features, measured through both objective understanding and satisfaction. The results demonstrate the effectiveness of the plural condition, on both objective understanding and satisfaction scores, compared to providing a single counterfactual example. Besides the statistical analysis, we perform a thematic analysis of the participants' responses to the open-ended questions, which also shows encouraging results for the comparative analysis features with regard to objective understanding.
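To make the kind of explanation under study concrete, the sketch below illustrates what "plural counterfactual examples" means in code: given a trained classifier and a loan applicant, it searches for several modified versions of the input whose predicted class flips. This is a minimal sketch under assumed conditions, not the paper's method or interface: the toy features, the random-walk search, and the plural_counterfactuals helper are all illustrative stand-ins. Dedicated generators such as DiCE additionally optimize the returned examples for proximity to the original input and for diversity among each other.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical loan data: [income (k€), loan amount (k€), years employed].
X = rng.normal(loc=[50.0, 120.0, 5.0], scale=[15.0, 40.0, 3.0], size=(500, 3))
# Synthetic approval label favouring income and employment over loan size.
y = (2 * X[:, 0] - 0.5 * X[:, 1] + 4 * X[:, 2] + rng.normal(0, 5, 500) > 60).astype(int)
clf = LogisticRegression().fit(X, y)

def plural_counterfactuals(x, model, n_examples=3, step=0.05, max_steps=200, max_tries=50):
    """Collect several inputs near x whose predicted class differs from x's.
    A naive random-walk search; one simple strategy among many."""
    original = model.predict([x])[0]
    found = []
    for _ in range(max_tries):
        if len(found) >= n_examples:
            break
        candidate = x.copy()
        for _ in range(max_steps):
            # Drift away from x in small feature-scaled steps until the class flips.
            candidate = candidate + rng.normal(0.0, step * (np.abs(x) + 1.0))
            if model.predict([candidate])[0] != original:
                found.append(candidate)
                break
    return found

applicant = X[0]
for i, cf in enumerate(plural_counterfactuals(applicant, clf), 1):
    print(f"counterfactual {i}: feature changes = {np.round(cf - applicant, 1)}")

Each printed line corresponds to one counterfactual example an interface like the one studied here could present (e.g. "increase income by 8 k€ and reduce the loan amount by 15 k€"); the plural condition in the study shows several such alternatives at once rather than a single one.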




    Published In

    IUI '23: Proceedings of the 28th International Conference on Intelligent User Interfaces
    March 2023
    972 pages
ISBN: 9798400701061
DOI: 10.1145/3581641

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. XAI
    2. XUI
    3. explainable AI
    4. human-centered AI methods
    5. interface design
    6. user studies

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    IUI '23

    Acceptance Rates

    Overall Acceptance Rate 746 of 2,811 submissions, 27%



Article Metrics

• Downloads (last 12 months): 208
• Downloads (last 6 weeks): 13
    Reflects downloads up to 08 Feb 2025


    Cited By

• (2025) Comprehension is a double-edged sword: Over-interpreting unspecified information in intelligible machine learning explanations. International Journal of Human-Computer Studies 193, 103376. https://doi.org/10.1016/j.ijhcs.2024.103376 (Jan 2025)
• (2025) Manipulation Risks in Explainable AI: The Implications of the Disagreement Problem. Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 185–200. https://doi.org/10.1007/978-3-031-74633-8_12 (1 Jan 2025)
• (2024) Après-propos. L'intelligence artificielle : une autre intelligence ? Enfance N° 1:1, 51–60. https://doi.org/10.3917/enf2.241.0051 (28 Mar 2024)
• (2024) Actionable Recourse for Automated Decisions: Examining the Effects of Counterfactual Explanation Type and Presentation on Lay User Understanding. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1682–1700. https://doi.org/10.1145/3630106.3658997 (3 Jun 2024)
• (2024) Beyond the Seeds: Fairness Testing via Counterfactual Analysis of Non-Seed Instances. IEEE Access 12, 172879–172891. https://doi.org/10.1109/ACCESS.2024.3502164 (2024)
• (2024) Unbiasing on the Fly: Explanation-Guided Human Oversight of Machine Learning Decisions. Cybernetics and Control Theory in Systems, 300–311. https://doi.org/10.1007/978-3-031-70300-3_20 (17 Oct 2024)
• (2023) Achieving Diversity in Counterfactual Explanations: a Review and Discussion. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1859–1869. https://doi.org/10.1145/3593013.3594122 (12 Jun 2023)
