DOI: 10.1145/2465958.2465966

Freehand gestural text entry for interactive TV

Published: 24 June 2013

Abstract

Users increasingly expect more interactive experiences with TV. Combined with the recent development of freehand gestural interaction enabled by inexpensive sensors, interactive television has the potential to offer a highly usable and engaging experience. However, common interaction tasks such as text input remain challenging with such systems. In this paper, we investigate text entry using freehand gestures captured with a low-cost sensor system. Two virtual keyboard layouts and three selection techniques were designed and evaluated. Results show that a text entry method with a dual-circle layout and an expanding-target selection technique offers ease of use and error tolerance, key features if we are to increase the use and enhance the experience of interactive TV in the living room.
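
The abstract names two concrete mechanisms: a dual-circle key layout and expanding-target selection. The Python sketch below is a hypothetical illustration of how those two pieces could fit together; the function names, radii, and sizing parameters are assumptions for illustration and are not taken from the paper.

    import math

    def dual_circle_layout(keys, inner_radius=0.25, outer_radius=0.45):
        # Hypothetical layout: the first half of the keys on an inner circle,
        # the rest on an outer circle, in normalized screen coordinates.
        half = (len(keys) + 1) // 2
        rings = [(keys[:half], inner_radius), (keys[half:], outer_radius)]
        positions = {}
        for ring_keys, radius in rings:
            for i, key in enumerate(ring_keys):
                angle = 2 * math.pi * i / len(ring_keys)
                positions[key] = (0.5 + radius * math.cos(angle),
                                  0.5 + radius * math.sin(angle))
        return positions

    def expanded_sizes(positions, cursor, base_size=0.04, max_scale=2.0):
        # Expanding-target idea: the key nearest the hand cursor grows,
        # making it easier to acquire; all other keys keep their base size.
        nearest = min(positions, key=lambda k: math.dist(cursor, positions[k]))
        return {k: base_size * (max_scale if k == nearest else 1.0)
                for k in positions}

    # Example: hand cursor hovering near the top of the inner ring.
    layout = dual_circle_layout(list("abcdefghijklmnopqrstuvwxyz"))
    sizes = expanded_sizes(layout, cursor=(0.5, 0.75))

In the study itself, selection is triggered by the evaluated selection techniques rather than by simple proximity; the sketch only shows the geometric idea of expanding the likely target.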



Information

Published In

EuroITV '13: Proceedings of the 11th European Conference on Interactive TV and Video
June 2013
188 pages
ISBN:9781450319515
DOI:10.1145/2465958
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Sponsors

  • Politecnico di Milano

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 24 June 2013

Author Tags

  1. expanding target
  2. freehand gesture
  3. text entry

Qualifiers

  • Research-article

Conference

EuroITV '13
Sponsor:
  • Politecnico di Milano

Acceptance Rates

  • EuroITV '13 Paper Acceptance Rate: 21 of 58 submissions, 36%
  • Overall Acceptance Rate: 52 of 149 submissions, 35%

Article Metrics

  • Downloads (last 12 months): 11
  • Downloads (last 6 weeks): 1
Reflects downloads up to 10 Feb 2025

Cited By

  • (2023) A Comparative Study of the Typing Performance of Two Mid-Air Text Input Methods in Virtual Environments. Sensors 23(15):6988. https://doi.org/10.3390/s23156988. Online publication date: 6-Aug-2023
  • (2023) Embodied Interaction on Constrained Interfaces for Augmented Reality. Springer Handbook of Augmented Reality, 239-271. https://doi.org/10.1007/978-3-030-67822-7_10. Online publication date: 1-Jan-2023
  • (2021) Investigating the Performance of Gesture-Based Input for Mid-Air Text Entry in a Virtual Environment: A Comparison of Hand-Up versus Hand-Down Postures. Sensors 21(5):1582. https://doi.org/10.3390/s21051582. Online publication date: 24-Feb-2021
  • (2021) Coping, Hacking, and DIY: Reframing the Accessibility of Interactions with Television for People with Motor Impairments. Proceedings of the 2021 ACM International Conference on Interactive Media Experiences, 37-49. https://doi.org/10.1145/3452918.3458802. Online publication date: 21-Jun-2021
  • (2021) Designing a Sensor Glove Using Deep Learning. Proceedings of the 26th International Conference on Intelligent User Interfaces, 150-160. https://doi.org/10.1145/3397481.3450665. Online publication date: 14-Apr-2021
  • (2019) Dual-Cursor: Improving User Performance and Perceived Usability for Cursor-Based Text Entry on TV Using Remote Control. Interacting with Computers 31(3):263-281. https://doi.org/10.1093/iwc/iwz017. Online publication date: 3-Sep-2019
  • (2018) Selection-based Text Entry in Virtual Reality. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1-13. https://doi.org/10.1145/3173574.3174221. Online publication date: 21-Apr-2018
  • (2018) Interaction Methods for Smart Glasses: A Survey. IEEE Access 6:28712-28732. https://doi.org/10.1109/ACCESS.2018.2831081. Online publication date: 2018
  • (2017) A User-Defined Gesture Set for Music Interaction in Immersive Virtual Environment. Proceedings of the 3rd International Conference on Human-Computer Interaction and User Experience in Indonesia, 44-51. https://doi.org/10.1145/3077343.3077348. Online publication date: 18-Apr-2017
  • (2017) TiTAN. Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, 3041-3049. https://doi.org/10.1145/3027063.3053228. Online publication date: 6-May-2017
  • Show More Cited By
