DOI: 10.1145/3607822.3614513

Advantage of Gaze-Only Content Browsing in VR using Cumulative Dwell Time Compared to Hand Controller

Published: 13 October 2023
Abstract

Head-mounted displays (HMDs) are expected to become everyday devices, and developing interfaces to control the content displayed on an HMD is key to driving their adoption. Gaze interfaces have been researched as one such new interface, and a main challenge has been detecting the user's control intention from gaze movements. Because gaze naturally wanders away from a target for short periods, cumulative gaze dwell time is considered suitable for predicting operations from natural gaze movements during content browsing. In this study, we evaluated a gaze-only content-browsing method based on cumulative dwell time by comparing it with a hand-controller method. The results showed that the proposed gaze method achieves the same level of completion time and usability as the controller method with less physical workload. More participants also preferred the proposed gaze-based method (16 in total; gaze: 3, rather gaze: 6, neutral: 1, rather controller: 6). These results indicate the applicability of implicit gaze interaction for browsing content on HMDs.
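    The core idea named in the abstract, cumulative dwell time, accumulates gaze dwell per target across brief gaze-aways instead of resetting the timer the moment gaze leaves a target. The sketch below illustrates that idea in Python; the 1.0 s threshold, the per-target accumulator, and the reset-on-selection behavior are illustrative assumptions, not the authors' implementation.

    ```python
    # Minimal sketch of cumulative dwell-time selection. Illustrative only:
    # the 1.0 s threshold and reset-on-selection policy are assumptions,
    # not parameters from the paper.

    from collections import defaultdict
    from typing import Optional

    class CumulativeDwellSelector:
        """Accumulates gaze dwell time per target across brief gaze-aways,
        rather than resetting the dwell timer when gaze leaves a target."""

        def __init__(self, threshold_s: float = 1.0):
            self.threshold_s = threshold_s
            self.dwell = defaultdict(float)  # target id -> accumulated seconds

        def update(self, gazed_target: Optional[str], dt: float) -> Optional[str]:
            """Call once per frame with the currently gazed target (or None)
            and the frame duration in seconds. Returns a target id when its
            cumulative dwell crosses the threshold."""
            if gazed_target is not None:
                self.dwell[gazed_target] += dt
                if self.dwell[gazed_target] >= self.threshold_s:
                    self.dwell.clear()  # reset all accumulators after a selection
                    return gazed_target
            return None

    # Usage: feed per-frame gaze hits (e.g., from an HMD eye-tracker raycast).
    selector = CumulativeDwellSelector(threshold_s=1.0)
    for target, dt in [("next_page", 0.4), (None, 0.2), ("next_page", 0.7)]:
        fired = selector.update(target, dt)
        if fired:
            print(f"selected: {fired}")  # fires despite the brief gaze-away
    ```

    Unlike a plain dwell timer, this selector tolerates the short glances away that occur during natural browsing, which is what makes it suitable for implicit, gaze-only interaction.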



        Published In

        SUI '23: Proceedings of the 2023 ACM Symposium on Spatial User Interaction
        October 2023
        505 pages
        ISBN:9798400702815
        DOI:10.1145/3607822


        Publisher

        Association for Computing Machinery

        New York, NY, United States



        Author Tags

        1. augmented reality
        2. dwell time
        3. gaze input
        4. gaze interaction
        5. hands-free interaction
        6. head-mounted display
        7. virtual reality

        Qualifiers

        • Research-article
        • Research
        • Refereed limited

        Funding Sources

        • JSPS KAKENHI
        • JST Moonshot R&D
        • NICT Japan

        Conference

        SUI '23
        SUI '23: ACM Symposium on Spatial User Interaction
        October 13 - 15, 2023
        Sydney, NSW, Australia

        Acceptance Rates

        Overall Acceptance Rate 86 of 279 submissions, 31%

