Open access

Explainable Embodied Agents Through Social Cues: A Review

Published: 11 July 2021

Abstract

The question of how to make embodied agents explainable has seen a surge of interest over the past three years, and many terms refer to this concept, such as transparency and legibility. One reason for this variance in terminology is the unique array of social cues that embodied agents can access, in contrast to non-embodied agents. Another is that different authors use these terms in different ways. Hence, we review the existing literature on explainability and organize it by (1) providing an overview of existing definitions, (2) showing how explainability is implemented and how it exploits different social cues, and (3) showing how its impact is measured. Additionally, we present a list of open questions and challenges that highlight areas requiring further investigation by the community. This gives the interested reader an overview of the current state of the art.




Published In

ACM Transactions on Human-Robot Interaction, Volume 10, Issue 3
Special Issue on Explainable Robot Behavior
September 2021, 248 pages
EISSN: 2573-9522
DOI: 10.1145/3474835

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 11 July 2021
Accepted: 01 February 2021
Revised: 01 February 2021
Received: 01 December 2019
Published in THRI Volume 10, Issue 3


Author Tags

  1. transparency
  2. accountability
  3. embodied social agents
  4. explainability
  5. explainable agency
  6. expressive behavior
  7. intelligibility
  8. interpretability
  9. legibility
  10. predictability
  11. robots

Qualifiers

  • Research-article
  • Research
  • Refereed

Funding Sources

  • European Union's Horizon 2020 research and innovation programme
  • Fundação para a Ciência e a Tecnologia

Article Metrics

  • Downloads (last 12 months): 961
  • Downloads (last 6 weeks): 121
Reflects downloads up to 17 Oct 2024


Cited By

  • (2024) A Systematic Review on Extended Reality-Mediated Multi-User Social Engagement. Systems 12, 10 (396). DOI: 10.3390/systems12100396. Online publication date: 26-Sep-2024.
  • (2024) Findings from Studies on English-Based Conversational AI Agents (including ChatGPT) Are Not Universal. Proceedings of the 6th ACM Conference on Conversational User Interfaces, 1–5. DOI: 10.1145/3640794.3665879. Online publication date: 8-Jul-2024.
  • (2024) Speaking Transparently: Social Robots in Educational Settings. Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 356–358. DOI: 10.1145/3610978.3640717. Online publication date: 11-Mar-2024.
  • (2024) Interactively Explaining Robot Policies to Humans in Integrated Virtual and Physical Training Environments. Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 847–851. DOI: 10.1145/3610978.3640656. Online publication date: 11-Mar-2024.
  • (2024) Explainability for Human-Robot Collaboration. Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 1364–1366. DOI: 10.1145/3610978.3638154. Online publication date: 11-Mar-2024.
  • (2024) What a Thing to Say! Which Linguistic Politeness Strategies Should Robots Use in Noncompliance Interactions? Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 501–510. DOI: 10.1145/3610977.3634943. Online publication date: 11-Mar-2024.
  • (2024) Explainability of Brain Tumor Classification Based on Region. 2024 International Conference on Emerging Technologies in Computer Science for Interdisciplinary Applications (ICETCS), 1–6. DOI: 10.1109/ICETCS61022.2024.10544289. Online publication date: 22-Apr-2024.
  • (2024) EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric View of Procedural Activities in Real World. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 22072–22086. DOI: 10.1109/CVPR52733.2024.02084. Online publication date: 16-Jun-2024.
  • (2024) A Taxonomy of Explanation Types and Need Indicators in Human–Agent Collaborations. International Journal of Social Robotics 16, 7, 1681–1692. DOI: 10.1007/s12369-024-01148-8. Online publication date: 5-Jun-2024.
  • (2024) Explainable Human-Robot Interaction for Imitation Learning in Augmented Reality. Human-Friendly Robotics 2023, 94–109. DOI: 10.1007/978-3-031-55000-3_7. Online publication date: 10-Mar-2024.
