DOI: 10.1145/3581641.3584072
Research Article | Open Access

Gaze Speedup: Eye Gaze Assisted Gesture Typing in Virtual Reality

Published: 27 March 2023

Abstract

Mid-air text input in augmented and virtual reality (AR/VR) is an open problem. One proposed solution is gesture typing, where the user traces a gesture over the keyboard. However, this requires the user to move their hands precisely and continuously, potentially causing arm fatigue. With eye tracking available on AR/VR devices, multiple works have proposed gaze-driven gesture typing techniques. However, such techniques require the explicit use of gaze, which is prone to the Midas touch problem and conflicts with other concurrent gaze activities. In this work, the user is not made aware that their gaze is being used to improve the interaction, making the use of gaze completely implicit. We observed that a user’s implicit gaze fixation location during gesture typing is usually the gesture cursor’s target location if the gesture cursor is moving toward it. Based on this observation, we propose the Speedup method, which speeds up the gesture cursor toward the user’s gaze fixation location; the speedup rate depends on how well the gesture cursor’s moving direction aligns with the gaze fixation. To reduce overshooting near the target in the Speedup method, we further propose the Gaussian Speedup method, in which the speedup rate is dynamically reduced with a Gaussian function as the gesture cursor nears the gaze fixation. Using a wrist IMU as input, a 12-person study demonstrated that the Speedup and Gaussian Speedup methods reduced users’ hand movement without any loss of typing speed or accuracy.
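
The two methods described in the abstract can be illustrated with a minimal sketch (an interpretation of the description, not the authors' implementation): the cursor velocity is amplified by a gain proportional to the cosine alignment between the cursor's motion and the direction toward the gaze fixation, and the Gaussian variant fades that gain out near the fixation. The gain constant k and width sigma are hypothetical tuning parameters, and 2D keyboard-plane coordinates are assumed.

    import numpy as np

    def gaze_speedup(cursor_pos, cursor_vel, gaze_fix, k=1.0, sigma=None):
        # Vector from the gesture cursor to the current gaze fixation.
        to_gaze = np.asarray(gaze_fix, float) - np.asarray(cursor_pos, float)
        dist = np.linalg.norm(to_gaze)
        speed = np.linalg.norm(cursor_vel)
        if dist < 1e-6 or speed < 1e-6:
            return np.asarray(cursor_vel, float)  # nothing to align with

        # Alignment in [0, 1]: cosine between the cursor's moving direction
        # and the direction toward the fixation; motion away from the
        # fixation receives no speedup.
        align = max(0.0, float(np.dot(cursor_vel, to_gaze)) / (speed * dist))

        gain = k * align  # Speedup method
        if sigma is not None:
            # Gaussian Speedup: attenuate the gain as the cursor nears the
            # fixation, reducing overshoot at the target.
            gain *= 1.0 - np.exp(-dist**2 / (2.0 * sigma**2))

        return np.asarray(cursor_vel, float) * (1.0 + gain)

With sigma=None this reduces to the plain Speedup behavior; a small sigma confines the attenuation to the immediate neighborhood of the fixation.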

Supplementary Material

MP4 File (GS_TAPS_video_comp.mp4)
Video figure



Information

Published In

IUI '23: Proceedings of the 28th International Conference on Intelligent User Interfaces
March 2023
972 pages
ISBN:9798400701061
DOI:10.1145/3581641
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. gesture input
  2. implicit eye gaze
  3. virtual reality

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

IUI '23

Acceptance Rates

Overall Acceptance Rate 746 of 2,811 submissions, 27%




Article Metrics

  • Downloads (Last 12 months)913
  • Downloads (Last 6 weeks)78
Reflects downloads up to 16 Jan 2025



Cited By

  • (2024) OnArmQWERTY: An Empirical Evaluation of On-Arm Tap Typing for AR HMDs. Proceedings of the 2024 ACM Symposium on Spatial User Interaction, 1-12. https://doi.org/10.1145/3677386.3682084. Online publication date: 7 Oct 2024.
  • (2024) GAZEploit: Remote Keystroke Inference Attack by Gaze Estimation from Avatar Views in VR/MR Devices. Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security, 1731-1745. https://doi.org/10.1145/3658644.3690285. Online publication date: 2 Dec 2024.
  • (2024) GestureMark: Shortcut Input Technique using Smartwatch Touch Gestures for XR Glasses. Proceedings of the Augmented Humans International Conference 2024, 63-71. https://doi.org/10.1145/3652920.3652941. Online publication date: 4 Apr 2024.
  • (2024) TouchEditor. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7(4), 1-29. https://doi.org/10.1145/3631454. Online publication date: 12 Jan 2024.
  • (2024) GEARS: Generalizable Multi-Purpose Embeddings for Gaze and Hand Data in VR Interactions. Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 279-289. https://doi.org/10.1145/3627043.3659551. Online publication date: 22 Jun 2024.
  • (2024) Gesture2Text: A Generalizable Decoder for Word-Gesture Keyboards in XR Through Trajectory Coarse Discretization and Pre-Training. IEEE Transactions on Visualization and Computer Graphics 30(11), 7118-7128. https://doi.org/10.1109/TVCG.2024.3456198. Online publication date: 1 Nov 2024.
  • (2024) RingGesture: A Ring-Based Mid-Air Gesture Typing System Powered by a Deep-Learning Word Prediction Framework. IEEE Transactions on Visualization and Computer Graphics 30(11), 7441-7451. https://doi.org/10.1109/TVCG.2024.3456179. Online publication date: 1 Nov 2024.
  • (2024) The Guided Evaluation Method. International Journal of Human-Computer Studies 190(C). https://doi.org/10.1016/j.ijhcs.2024.103317. Online publication date: 1 Oct 2024.
  • (2024) Bending the keyboard can improve bare-hand typing in virtual reality. Multimedia Tools and Applications. https://doi.org/10.1007/s11042-024-19903-4. Online publication date: 27 Jul 2024.
  • (2023) Does One Keyboard Fit All? Comparison and Evaluation of Device-Free Augmented Reality Keyboard Designs. Proceedings of the 29th ACM Symposium on Virtual Reality Software and Technology, 1-11. https://doi.org/10.1145/3611659.3615692. Online publication date: 9 Oct 2023.
  • Show More Cited By
