
HGaze Typing: Head-Gesture Assisted Gaze Typing

Published: 25 May 2021
DOI: 10.1145/3448017.3457379

Abstract

This paper introduces a bi-modal typing interface, HGaze Typing, which combines the simplicity of head gestures with the speed of gaze inputs to provide efficient and comfortable dwell-free text entry. HGaze Typing uses gaze path information to compute candidate words and allows explicit activation of common text entry commands, such as selection, deletion, and revision, by using head gestures (nodding, shaking, and tilting). By adding a head-based input channel, HGaze Typing reduces the size of the screen regions for cancel/deletion buttons and the word candidate list, which are required by most eye-typing interfaces. A user study finds HGaze Typing outperforms a dwell-time-based keyboard in efficacy and user satisfaction. The results demonstrate that the proposed method of integrating gaze and head-movement inputs can serve as an effective interface for text entry and is robust to unintended selections.
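
To make the pipeline the abstract describes concrete, here is a minimal sketch. It is illustrative only, not the authors' implementation: it assumes a hypothetical set of key-center coordinates, ranks lexicon words by how closely their ideal key-center paths match the recorded gaze path (using a discrete Fréchet distance as one plausible path-similarity measure), and maps coarse head-pose changes to the select/delete/revise commands. All names, coordinates, thresholds, and the winner-take-all gesture rule are assumptions.

```python
import math

# Hypothetical normalized key-center coordinates for a QWERTY layout
# (only a few keys shown; a real layout would cover the full alphabet).
KEY_CENTERS = {
    'h': (5.5, 1.0), 'e': (2.5, 0.0), 'l': (8.5, 1.0), 'o': (8.5, 0.0),
    't': (4.5, 0.0), 'a': (0.5, 1.0), 's': (1.5, 1.0), 'n': (5.5, 2.0),
}

def discrete_frechet(p, q):
    """Discrete Frechet distance between two 2-D point sequences (iterative DP)."""
    n, m = len(p), len(q)
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = math.dist(p[i], q[j])
            if i == 0 and j == 0:
                ca[i][j] = d
            elif i == 0:
                ca[i][j] = max(ca[0][j - 1], d)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][0], d)
            else:
                ca[i][j] = max(min(ca[i - 1][j], ca[i - 1][j - 1], ca[i][j - 1]), d)
    return ca[-1][-1]

def rank_candidates(gaze_path, lexicon, top_k=5):
    """Rank words by how closely their ideal key-center path matches the gaze path."""
    scored = []
    for word in lexicon:
        ideal = [KEY_CENTERS[ch] for ch in word if ch in KEY_CENTERS]
        if ideal:
            scored.append((discrete_frechet(gaze_path, ideal), word))
    return [word for _, word in sorted(scored)[:top_k]]

def classify_head_gesture(pitch_delta, yaw_delta, roll_delta, threshold=10.0):
    """Map coarse head-pose changes (in degrees) to HGaze-style commands.

    Nod (pitch) -> select, shake (yaw) -> delete, tilt (roll) -> revise.
    The threshold and winner-take-all rule are illustrative assumptions.
    """
    magnitudes = {
        'select': abs(pitch_delta),  # nod
        'delete': abs(yaw_delta),    # shake
        'revise': abs(roll_delta),   # tilt
    }
    command, magnitude = max(magnitudes.items(), key=lambda kv: kv[1])
    return command if magnitude >= threshold else None

# Example: a gaze path sweeping h -> e -> l -> o, then a nod to select the top word.
path = [(5.5, 1.0), (4.0, 0.5), (2.5, 0.0), (5.5, 0.5), (8.5, 1.0), (8.5, 0.0)]
print(rank_candidates(path, ['hello', 'hole', 'heal', 'note', 'sonata']))
print(classify_head_gesture(pitch_delta=14.0, yaw_delta=2.0, roll_delta=1.0))  # 'select'
```

The Fréchet distance is only one plausible path matcher, and threshold-based gesture classification is deliberately simplistic; the paper's actual candidate computation and head-gesture recognizers are described in the full text.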

Published In

ETRA '21 Full Papers: ACM Symposium on Eye Tracking Research and Applications
May 2021, 122 pages
ISBN: 9781450383448
DOI: 10.1145/3448017

This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States

Badges

  • Best Paper

Author Tags

  1. Text entry
  2. dwell-free typing
  3. eye tracking
  4. head gestures
  5. multi-modal text entry

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

ETRA '21

Acceptance Rates

Overall Acceptance Rate 69 of 137 submissions, 50%

