DOI: 10.1145/3397481.3450644
IUI Conference Proceedings (research article)

I Think I Get Your Point, AI! The Illusion of Explanatory Depth in Explainable AI

Published: 14 April 2021

Abstract

Unintended consequences of deployed AI systems have fueled the call for more interpretability in AI. Explainable AI (XAI) systems often provide users with simplified local explanations of individual predictions but leave it to the users to construct a global understanding of the model's behavior. In this work, we examine whether non-technical users of XAI fall for an illusion of explanatory depth when interpreting additive local explanations. We applied a mixed-methods approach consisting of a moderated study with 40 participants and an unmoderated study with 107 crowd workers, both using a spreadsheet-like explanation interface based on the SHAP framework. We observed how non-technical users form mental models of global AI model behavior from local explanations, and how their perceived understanding decreases once it is put to the test.
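
The explanations studied here are additive: each feature receives a signed SHAP contribution, and for any single prediction the contributions plus a base value sum exactly to the model's output. The following minimal Python sketch (not taken from the paper; the dataset, model, and variable names are illustrative assumptions) demonstrates this additivity with the shap package, which is the property that makes one local explanation look deceptively complete even though it says little about the model's global behavior.

```python
# Minimal sketch, assuming the `shap` and `scikit-learn` packages.
# The diabetes dataset and random-forest model are illustrative choices,
# not the setup used in the study.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)        # Shapley values for tree ensembles
phi = explainer.shap_values(X[:1])           # local explanation for one instance
base = float(np.ravel(explainer.expected_value)[0])  # model's average prediction

# Additivity: base value + sum of per-feature attributions == the prediction.
print(base + phi[0].sum())        # approximately equal to the line below
print(model.predict(X[:1])[0])
```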

Published In

IUI '21: Proceedings of the 26th International Conference on Intelligent User Interfaces
April 2021, 618 pages
ISBN: 9781450380171
DOI: 10.1145/3397481

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

1. Shapley explanation
2. cognitive bias
3. explainable AI
4. understanding

Qualifiers

• Research-article
• Research
• Refereed limited

Conference

IUI '21

Acceptance Rates

Overall Acceptance Rate: 746 of 2,811 submissions, 27%
