DOI: 10.1145/1518701.1519050 · CHI Conference Proceedings
Research article

GestureBar: improving the approachability of gesture-based interfaces

Published: 04 April 2009

Abstract

GestureBar is a novel, approachable UI for learning gestural interactions that enables a walk-up-and-use experience which is in the same class as standard menu and toolbar interfaces. GestureBar leverages the familiar, clean look of a common toolbar, but in place of executing commands, richly discloses how to execute commands with gestures, through animated images, detail tips and an out-of-document practice area. GestureBar's simple design is also general enough for use with any recognition technique and for integration with standard, non-gestural UI components. We evaluate GestureBar in a formal experiment showing that users can perform complex, ecologically valid tasks in a purely gestural system without training, introduction, or prior gesture experience when using GestureBar, discovering and learning a high percentage of the gestures needed to perform the tasks optimally, and significantly outperforming a state of the art crib sheet. The relative contribution of the major design elements of GestureBar is also explored. A second experiment shows that GestureBar is preferred to a basic crib sheet and two enhanced crib sheet variations.
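The abstract's core mechanism (toolbar items that disclose how to perform a command's gesture rather than executing the command) can be sketched as a minimal data model. This is a hypothetical illustration, not the paper's implementation; the `GestureItem` and `GestureBar` names and the animation filename are invented:

```python
# Hypothetical sketch of the GestureBar idea described in the abstract:
# clicking a toolbar item does not execute the command; it discloses the
# command's gesture (animation + detail tip) so the user can try it,
# e.g. in an out-of-document practice area.

from dataclasses import dataclass

@dataclass(frozen=True)
class GestureItem:
    command: str      # command the gesture invokes, e.g. "Delete"
    animation: str    # reference to an animated demonstration asset
    detail_tip: str   # textual hint shown alongside the animation

class GestureBar:
    """Toolbar that discloses gestures instead of executing commands."""

    def __init__(self, items):
        self._items = {item.command: item for item in items}

    def click(self, command):
        # Return the disclosure for this command rather than running it.
        item = self._items[command]
        return {"animation": item.animation, "tip": item.detail_tip}

bar = GestureBar([
    GestureItem("Delete", "delete_scribble.gif",
                "Scribble back and forth over the ink you want to delete."),
])
disclosure = bar.click("Delete")
```

The key design point, per the abstract, is that the toolbar keeps its familiar look while the click action is repurposed from execution to disclosure.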

Supplementary Material

JPG File (20.jpg)
JPG File (p2269.jpg)
FLV File (20.flv)
MOV File (p2269.mov)




      Published In

      CHI '09: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
      April 2009, 2426 pages
      ISBN: 9781605582467
      DOI: 10.1145/1518701

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. approachability
      2. disclosure
      3. gestures
      4. learning
      5. pen


      Conference

      CHI '09

      Acceptance Rates

      CHI '09 paper acceptance rate: 277 of 1,130 submissions (25%)
      Overall acceptance rate: 6,199 of 26,314 submissions (24%)



      Cited By

      • (2024) Do I Just Tap My Headset? Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 7(4), 1-28. DOI: 10.1145/3631451
      • (2022) Iteratively Designing Gesture Vocabularies: A Survey and Analysis of Best Practices in the HCI Literature. ACM Transactions on Computer-Human Interaction, 29(4), 1-54. DOI: 10.1145/3503537
      • (2022) Command Selection. Handbook of Human Computer Interaction, 1-35. DOI: 10.1007/978-3-319-27648-9_19-1
      • (2021) EdgeMark Menu: A 1D Menu for Curved Edge Display Smartphones. Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction, 1-9. DOI: 10.1145/3447526.3472017
      • (2021) OctoPocus in VR: Using a Dynamic Guide for 3D Mid-Air Gestures in Virtual Reality. IEEE Transactions on Visualization and Computer Graphics, 27(12), 4425-4438. DOI: 10.1109/TVCG.2021.3101854
      • (2020) Introducing the NEMO-Lowlands iconic gesture dataset, collected through a gameful human-robot interaction. Behavior Research Methods. DOI: 10.3758/s13428-020-01487-0
      • (2020) Efficient human-machine control with asymmetric marginal reliability input devices. PLOS ONE, 15(6), e0233603. DOI: 10.1371/journal.pone.0233603
      • (2019) Exploring Cross-Modal Training via Touch to Learn a Mid-Air Marking Menu Gesture Set. Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services, 1-9. DOI: 10.1145/3338286.3340119
      • (2018) !FTL, an Articulation-Invariant Stroke Gesture Recognizer with Controllable Position, Scale, and Rotation Invariances. Proceedings of the 20th ACM International Conference on Multimodal Interaction, 125-134. DOI: 10.1145/3242969.3243032
      • (2018) User-Driven Design Principles for Gesture Representations. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1-13. DOI: 10.1145/3173574.3174121
