
Vision-Based Interfaces for Character-Based Text Entry: Comparison of Errors and Error Correction Properties of Eye Typing and Head Typing

Published: 01 January 2023

Abstract

We examined two vision-based interfaces (VBIs) for performance and user experience during character-based text entry with an on-screen virtual keyboard. The head-based VBI uses head motion to steer the computer pointer and a mouth-opening gesture to select keyboard keys. The gaze-based VBI uses gaze for pointing at the keys and an adjustable dwell time for key selection. The results showed that after three sessions (45 min of typing in total), able-bodied novice participants (N = 34) typed significantly more slowly yet produced significantly more accurate text with the head-based VBI than with the gaze-based VBI. An analysis of errors and corrective actions relative to the spatial layout of the keyboard revealed differences in participants' error correction behavior between the two interfaces. We estimated the error correction cost for both interfaces and suggested implications for the future use and improvement of VBIs for hands-free text entry.
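To make the selection mechanism concrete, the sketch below illustrates dwell-based key selection of the kind the gaze-based VBI relies on. It is a minimal illustration under assumed conditions (a stream of timestamped pointer samples and rectangular keys), not the implementation evaluated in this article; all names and parameters are hypothetical.

```python
# Hypothetical sketch of dwell-based key selection; not the study's implementation.
# A key fires once the gaze point has stayed inside its bounds for `dwell_s` seconds.

from dataclasses import dataclass

@dataclass
class Key:
    label: str
    x: float   # left edge, pixels
    y: float   # top edge, pixels
    w: float   # width, pixels
    h: float   # height, pixels

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h


class DwellSelector:
    def __init__(self, keys: list[Key], dwell_s: float = 0.6):
        self.keys = keys
        self.dwell_s = dwell_s            # adjustable dwell time
        self.current: Key | None = None   # key currently under the pointer
        self.enter_t = 0.0                # time the pointer entered that key

    def update(self, px: float, py: float, t: float) -> str | None:
        """Feed one pointer sample (pixel coordinates, timestamp in seconds).
        Returns the selected key's label when the dwell completes, else None."""
        hit = next((k for k in self.keys if k.contains(px, py)), None)
        if hit is not self.current:       # pointer moved to a different key (or off the keys)
            self.current, self.enter_t = hit, t
            return None
        if hit is not None and t - self.enter_t >= self.dwell_s:
            self.enter_t = t              # restart the dwell so the key does not auto-repeat at once
            return hit.label
        return None
```

A head-based VBI would feed head-steered pointer coordinates into the same hit-testing loop but replace the dwell check with a discrete selection event, such as the mouth-opening gesture described above.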

Published In

Advances in Human-Computer Interaction, Volume 2023
162 pages
ISSN: 1687-5893
EISSN: 1687-5907

This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Publisher

Hindawi Limited, London, United Kingdom
