DOI: 10.1145/3301275.3302289
Short paper · Open access

The effects of example-based explanations in a machine learning interface

Published: 17 March 2019

Abstract

The black-box nature of machine learning algorithms can make their predictions difficult to understand and explain to end-users. In this paper, we propose and evaluate two kinds of example-based explanations in the visual domain, normative explanations and comparative explanations (Figure 1), which automatically surface examples from the training set of a deep neural net sketch-recognition algorithm. To investigate their effects, we deployed these explanations to 1150 users on QuickDraw, an online platform where users draw images and see whether a recognizer has correctly guessed the intended drawing. When the algorithm failed to recognize the drawing, those who received normative explanations felt they had a better understanding of the system, and perceived the system to have higher capability. However, comparative explanations did not always improve perceptions of the algorithm, possibly because they sometimes exposed limitations of the algorithm and may have led to surprise. These findings suggest that examples can serve as a vehicle for explaining algorithmic behavior, but point to relative advantages and disadvantages of using different kinds of examples, depending on the goal.
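To make the two explanation styles concrete, here is a minimal sketch of how each might surface training examples, assuming normative explanations show typical training examples of the recognizer's guessed class and comparative explanations show the training examples nearest to the user's drawing in the model's embedding space. This is an illustration of the general idea, not the paper's implementation; the function names and the variables `train_embeddings`, `train_labels`, and `input_embedding` are all hypothetical.

```python
import numpy as np

# Hypothetical retrieval of the two explanation types described in the
# abstract. Assumes the sketch recognizer exposes a feature embedding,
# precomputed here for the training set as `train_embeddings` (N x D),
# with integer class labels `train_labels` (N,).

def normative_examples(target_label, train_embeddings, train_labels, k=5):
    """Indices of k 'typical' training examples of the guessed class:
    here approximated as the examples closest to the class centroid."""
    idx = np.flatnonzero(train_labels == target_label)
    class_emb = train_embeddings[idx]
    centroid = class_emb.mean(axis=0)
    dists = np.linalg.norm(class_emb - centroid, axis=1)
    return idx[np.argsort(dists)[:k]]

def comparative_examples(input_embedding, train_embeddings, k=5):
    """Indices of the k training examples nearest to the user's drawing
    in embedding space, regardless of class."""
    dists = np.linalg.norm(train_embeddings - input_embedding, axis=1)
    return np.argsort(dists)[:k]
```

The class-centroid heuristic stands in for whatever notion of "typical" example the deployed system used, and nearest-neighbor retrieval is likewise only one plausible way to pick comparative examples.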





    Published In

    IUI '19: Proceedings of the 24th International Conference on Intelligent User Interfaces
    March 2019
    713 pages
ISBN: 9781450362726
DOI: 10.1145/3301275
    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.


    Publisher

Association for Computing Machinery, New York, NY, United States


    Badges

    • Best Short Paper

    Author Tags

    1. example-based explanations
    2. explainable AI
    3. human-AI interaction
    4. machine learning

    Qualifiers

• Short paper

    Conference

    IUI '19

    Acceptance Rates

IUI '19 Paper Acceptance Rate: 71 of 282 submissions, 25%
Overall Acceptance Rate: 746 of 2,811 submissions, 27%


