DOI: 10.1109/RO-MAN53752.2022.9900589
Research article

Analysing Eye Gaze Patterns during Confusion and Errors in Human–Agent Collaborations

Published: 29 August 2022

Abstract

As human–agent collaborations become more prevalent, it is increasingly important for an agent to be able to adapt to its collaborator and to explain its own behavior. To do so, the agent needs to identify critical states during the interaction that call for proactive clarifications or behavioral adaptations. In this paper, we explore whether the agent could infer such states from the human’s eye gaze, comparing gaze patterns across different situations in a collaborative task. Our findings show that the human’s gaze patterns differ significantly between times at which the user is confused about the task, times at which the agent makes an error, and times of normal workflow. During errors, the amount of gaze directed towards the agent increases, while during confusion the amount directed towards the environment increases. We conclude that these signals could tell the agent what to explain and when.
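As a purely illustrative aside (not part of the paper), the finding suggests a simple way an agent could turn gaze-target proportions into candidate moments for explanation. The sketch below assumes gaze has already been mapped to areas of interest per time window; the class labels, field names, and thresholds are invented for illustration and are not the authors' method.

from dataclasses import dataclass

@dataclass
class GazeWindow:
    """Fractions of gaze samples per assumed area of interest in one time window."""
    on_agent: float        # gaze directed at the collaborating agent
    on_environment: float  # gaze directed at the shared task environment
    on_own_task: float     # gaze directed at the user's own workspace

def classify_state(win: GazeWindow,
                   agent_thresh: float = 0.4,      # hypothetical threshold
                   env_thresh: float = 0.5) -> str:  # hypothetical threshold
    """Heuristic label following the paper's qualitative finding: more gaze
    towards the agent around agent errors, more gaze towards the environment
    during user confusion."""
    if win.on_agent > agent_thresh:
        return "agent_error"      # candidate moment for the agent to explain its behavior
    if win.on_environment > env_thresh:
        return "user_confusion"   # candidate moment for a proactive clarification
    return "normal"

# Example usage with made-up gaze distributions:
print(classify_state(GazeWindow(on_agent=0.55, on_environment=0.25, on_own_task=0.20)))  # agent_error
print(classify_state(GazeWindow(on_agent=0.10, on_environment=0.65, on_own_task=0.25)))  # user_confusion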


Cited By

  • (2024) Learning User Embeddings from Human Gaze for Personalised Saliency Prediction. Proceedings of the ACM on Human-Computer Interaction, vol. 8, no. ETRA, pp. 1-16. https://doi.org/10.1145/3655603 (online 28 May 2024)
  • (2024) When Do People Want an Explanation from a Robot? Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, pp. 752-761. https://doi.org/10.1145/3610977.3634990 (online 11 Mar 2024)
  • (2023) Cooking Up Trust: Eye Gaze and Posture for Trust-Aware Action Selection in Human-Robot Collaboration. Proceedings of the First International Symposium on Trustworthy Autonomous Systems, pp. 1-5. https://doi.org/10.1145/3597512.3597518 (online 11 Jul 2023)
  • (2023) Feasibility Study on Eye Gazing in Socially Assistive Robotics: An Intensive Care Unit Scenario. Social Robotics, pp. 43-52. https://doi.org/10.1007/978-981-99-8715-3_5 (online 3 Dec 2023)
  • (2022) Effective Human-Robot Collaboration via Generalized Robot Error Management Using Natural Human Responses. Proceedings of the 2022 International Conference on Multimodal Interaction, pp. 673-678. https://doi.org/10.1145/3536221.3557028 (online 7 Nov 2022)


Published In

2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
Aug 2022, 1654 pages
Publisher: IEEE Press

