DOI: 10.5555/3378680.3378831
Research article

Towards understanding user preferences for explanation types in model reconciliation

Published: 10 January 2020

Abstract

Recent work has formalized the explanation process in the context of automated planning as one of model reconciliation, i.e. a process by which the planning agent brings the explainee's (possibly faulty) model of a planning problem closer to its own understanding of the ground truth until both agree that its plan is the best possible. The content of an explanation can thus address misunderstandings about the agent's beliefs (state), desires (goals), and capabilities (action model). Although existing work has treated these different kinds of model differences as equivalent, the literature on explanations in the social sciences suggests that explanations with similar logical properties may be perceived quite differently by humans. In this brief report, we explore to what extent humans attribute importance to the different kinds of model differences that have traditionally been considered equivalent in the model reconciliation setting. Our results suggest that people prefer explanations that relate to the effects of actions.
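The categories of model differences mentioned above can be made concrete with a small illustrative sketch. The Python snippet below is not from the paper; the Model class, its fields, and the example facts are assumptions, used only to show how discrepancies between the agent's model and the explainee's model could be enumerated and labeled by type (state, goal, precondition, or effect), i.e. as candidate contents of a model reconciliation explanation.

# A minimal, hypothetical sketch of enumerating model differences between
# the agent's planning model and the human's (possibly faulty) model.
# The representation is a toy stand-in for a PDDL-style model.
from dataclasses import dataclass, field

@dataclass
class Model:
    init: set = field(default_factory=set)             # facts in the initial state
    goal: set = field(default_factory=set)             # goal facts
    preconditions: dict = field(default_factory=dict)  # action name -> set of facts
    effects: dict = field(default_factory=dict)        # action name -> set of facts

def model_differences(agent: Model, human: Model):
    """Return (type, item) pairs present in the agent's model but missing from
    the human's model; each pair is a candidate piece of explanation content."""
    diffs = [("state", f) for f in agent.init - human.init]
    diffs += [("goal", f) for f in agent.goal - human.goal]
    for act, pre in agent.preconditions.items():
        diffs += [("precondition", (act, f))
                  for f in pre - human.preconditions.get(act, set())]
    for act, eff in agent.effects.items():
        diffs += [("effect", (act, f))
                  for f in eff - human.effects.get(act, set())]
    return diffs

if __name__ == "__main__":
    agent = Model(init={"at(robot, room1)"},
                  goal={"delivered(package)"},
                  preconditions={"pickup": {"handempty"}},
                  effects={"pickup": {"holding(package)"}})
    human = Model(init={"at(robot, room1)"},
                  goal={"delivered(package)"},
                  preconditions={"pickup": {"handempty"}},
                  effects={"pickup": set()})  # the human is unaware of this effect
    print(model_differences(agent, human))    # -> [('effect', ('pickup', 'holding(package)'))]

In this toy example the only discrepancy is an action effect the human is unaware of; the study described above asks which of these difference types people actually find most important when they receive such explanations.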

Published In

HRI '19: Proceedings of the 14th ACM/IEEE International Conference on Human-Robot Interaction
March 2019, 812 pages
ISBN: 9781538685556
Publisher: IEEE Press
In cooperation with IEEE-RAS: Robotics and Automation

Acceptance Rates

Overall acceptance rate: 268 of 1,124 submissions, 24%
