DOI: 10.5555/3523760.3523818

Understanding Control Frames in Multi-Camera Robot Telemanipulation

Published: 07 March 2022

Abstract

In telemanipulation, showing the user multiple views of the remote environment can offer many benefits, although different views also create a problem for control. Systems must either choose a single fixed control frame, aligned with at most one of the views, or switch between view-aligned control frames, enabling view-aligned control at the expense of switching costs. In this paper, we explore the trade-off between these options. We study the feasibility, benefits, and drawbacks of switching the user's control frame to align with the actively used view during telemanipulation. We additionally explore the effectiveness of explicit and implicit methods for switching control frames. Our results show that switching between multiple view-specific control frames offers significant performance gains over a fixed control frame. We also find that participants' preferences for explicit or implicit switching depended on how they planned their movements. Our findings offer concrete design guidelines for future multi-camera interfaces.
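
To make the trade-off concrete: a command such as "move right" is only well defined relative to a control frame, and each camera view induces its own. The following minimal sketch (not the authors' implementation; the per-view rotation matrices and function names are hypothetical) shows how the same operator command maps to different robot base-frame motions depending on which view-aligned control frame is active, and where explicit (button) versus implicit (gaze) switching would plug in:

```python
import numpy as np

def view_to_base(cmd_view: np.ndarray, R_base_view: np.ndarray) -> np.ndarray:
    """Rotate a translational command from the active view's frame into the robot base frame."""
    return R_base_view @ cmd_view

# Hypothetical camera extrinsics: rotation of each view frame w.r.t. the robot base.
VIEW_FRAMES = {
    "front": np.eye(3),                       # view axes aligned with the base frame
    "side": np.array([[0., 0., 1.],
                      [0., 1., 0.],
                      [-1., 0., 0.]]),        # 90-degree rotation about the base y-axis
}

def resolve_command(cmd_view: np.ndarray, active_view: str) -> np.ndarray:
    """Map the operator's screen-relative command through the active control frame.

    Explicit switching would set `active_view` from a button press; implicit
    switching would set it from gaze, i.e., whichever camera feed the operator
    is currently looking at. A fixed-frame system always uses one entry.
    """
    return view_to_base(cmd_view, VIEW_FRAMES[active_view])

# "Push right on the input device" produces different base-frame motions per view:
cmd = np.array([1.0, 0.0, 0.0])
print(resolve_command(cmd, "front"))  # -> [1, 0, 0]
print(resolve_command(cmd, "side"))   # -> [0, 0, -1]
```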

    Published In

    HRI '22: Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction
    March 2022, 1353 pages

    Publisher

    IEEE Press

    Publication History

    Published: 07 March 2022

    Author Tags

    1. awareness
    2. camera
    3. control frame
    4. multiple views
    5. operator interfaces
    6. telemanipulation

    Qualifiers

    • Research-article

    Funding Sources

    • NASA University Led Initiative
    • National Science Foundation

    Conference

    HRI '22

    Acceptance Rates

    Overall Acceptance Rate 268 of 1,124 submissions, 24%

    Cited By

    • (2024) Learning Autonomous Viewpoint Adjustment from Human Demonstrations for Telemanipulation. ACM Transactions on Human-Robot Interaction, 13(3), 1–23. DOI: 10.1145/3660348
    • (2023) Perception-Motion Coupling in Active Telepresence: Human Behavior and Teleoperation Interface Design. ACM Transactions on Human-Robot Interaction, 12(3), 1–24. DOI: 10.1145/3571599
    • (2023) Designing Robotic Camera Systems to Enable Synchronous Remote Collaboration. Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, 751–753. DOI: 10.1145/3568294.3579974
    • (2022) Mimic: In-Situ Recording and Re-Use of Demonstrations to Support Robot Teleoperation. Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, 1–13. DOI: 10.1145/3526113.3545639