Eye-Hand Behavior in Human-Robot Shared Manipulation

Published: 26 February 2018

Abstract

Shared autonomy systems enhance people's abilities to perform activities of daily living using robotic manipulators. Recent systems succeed by first identifying their operators' intentions, typically by analyzing the user's joystick input. To enhance this recognition, it is useful to characterize people's behavior while performing such a task. Furthermore, eye gaze is a rich source of information for understanding operator intention. The goal of this paper is to provide novel insights into the dynamics of control behavior and eye gaze in human-robot shared manipulation tasks. To achieve this goal, we conduct a data collection study that uses an eye tracker to record eye gaze during a human-robot shared manipulation activity, both with and without shared autonomy assistance. We process the gaze signals from the study to extract gaze features like saccades, fixations, smooth pursuits, and scan paths. We analyze those features to identify novel patterns of gaze behaviors and highlight where these patterns are similar to and different from previous findings about eye gaze in human-only manipulation tasks. The work described in this paper lays a foundation for a model of natural human eye gaze in human-robot shared manipulation.
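
The paper's processing pipeline is not reproduced on this page, but the kind of gaze-feature extraction described in the abstract can be illustrated with a simple velocity-threshold (I-VT) classifier, which separates fixations from saccades by thresholding the angular velocity of the gaze signal. The sketch below is a minimal Python illustration under assumed inputs (gaze angles in degrees, timestamps in seconds, a nominal 30 deg/s threshold); the function names, threshold, and data layout are hypothetical and are not taken from the authors' implementation.

    import numpy as np

    def classify_ivt(x_deg, y_deg, t_s, velocity_threshold=30.0):
        """Label each gaze sample as 'fixation' or 'saccade' (I-VT).

        x_deg, y_deg: gaze angles in degrees; t_s: timestamps in seconds.
        velocity_threshold: angular speed (deg/s) separating the two classes.
        """
        x, y, t = (np.asarray(a, dtype=float) for a in (x_deg, y_deg, t_s))
        # Point-to-point angular speed between consecutive samples.
        speed = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)
        labels = np.where(speed < velocity_threshold, "fixation", "saccade")
        # The first sample has no preceding velocity; reuse the first label.
        return np.concatenate(([labels[0]], labels))

    def fixation_durations(labels, t_s):
        """Collapse runs of consecutive 'fixation' samples into durations (s)."""
        durations, start = [], None
        for label, t in zip(labels, t_s):
            if label == "fixation" and start is None:
                start = t                      # a fixation run begins
            elif label != "fixation" and start is not None:
                durations.append(t - start)    # the run just ended
                start = None
        if start is not None:                  # run extends to the last sample
            durations.append(t_s[-1] - start)
        return durations

Classifiers used in practice (dispersion-based, Bayesian, and online variants that also detect smooth pursuits) are more robust to the noise and head motion of wearable eye trackers; this velocity-only version is meant only to make the feature-extraction step concrete.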




Published In

HRI '18: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction
February 2018
468 pages
ISBN: 9781450349536
DOI: 10.1145/3171221
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. eye gaze
  2. eye tracking
  3. human-robot interaction
  4. nonverbal communication
  5. shared autonomy

Qualifiers

  • Research-article

Conference

HRI '18

Acceptance Rates

HRI '18 Paper Acceptance Rate: 49 of 206 submissions, 24%
Overall Acceptance Rate: 268 of 1,124 submissions, 24%


Article Metrics

  • Downloads (Last 12 months): 168
  • Downloads (Last 6 weeks): 21
Reflects downloads up to 12 Feb 2025


Cited By

  • (2024) Bootstrapping Linear Models for Fast Online Adaptation in Human-Agent Collaboration. Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems, 1463-1472. DOI: 10.5555/3635637.3663006. Online publication date: 6-May-2024.
  • (2024) Gaze-Based Intention Estimation: Principles, Methodologies, and Applications in HRI. ACM Transactions on Human-Robot Interaction 13(3), 1-30. DOI: 10.1145/3656376. Online publication date: 26-Sep-2024.
  • (2024) SARI: Shared Autonomy across Repeated Interaction. ACM Transactions on Human-Robot Interaction 13(2), 1-36. DOI: 10.1145/3651994. Online publication date: 14-Jun-2024.
  • (2024) Gaze-Guided Graph Neural Network for Action Anticipation Conditioned on Intention. Proceedings of the 2024 Symposium on Eye Tracking Research and Applications, 1-9. DOI: 10.1145/3649902.3653340. Online publication date: 4-Jun-2024.
  • (2024) Towards an Adaptive System for Investigating Human-Agent Teaming: Let's get Cooking! 2024 Systems and Information Engineering Design Symposium (SIEDS), 1-6. DOI: 10.1109/SIEDS61124.2024.10534646. Online publication date: 3-May-2024.
  • (2023) Recent Advancements in Augmented Reality for Robotic Applications: A Survey. Actuators 12(8), 323. DOI: 10.3390/act12080323. Online publication date: 13-Aug-2023.
  • (2023) Eye-Tracking in Physical Human–Robot Interaction: Mental Workload and Performance Prediction. Human Factors: The Journal of the Human Factors and Ergonomics Society 66(8), 2104-2119. DOI: 10.1177/00187208231204704. Online publication date: 4-Oct-2023.
  • (2023) Assistance in Teleoperation of Redundant Robots through Predictive Joint Maneuvering. ACM Transactions on Human-Robot Interaction. DOI: 10.1145/3630265. Online publication date: 3-Nov-2023.
  • (2023) Illustrating Robot Movements. Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, 231-242. DOI: 10.1145/3568162.3576956. Online publication date: 13-Mar-2023.
  • (2023) Improving Autonomous Vehicle Visual Perception by Fusing Human Gaze and Machine Vision. IEEE Transactions on Intelligent Transportation Systems 24(11), 12716-12725. DOI: 10.1109/TITS.2023.3290016. Online publication date: Nov-2023.
