GestureWiz: A Human-Powered Gesture Design Environment for User Interface Prototypes

Published: 19 April 2018

Abstract

Designers and researchers often rely on simple gesture recognizers like Wobbrock et al.'s $1 for rapid user interface prototypes. However, most existing recognizers are limited to a particular input modality and/or pre-trained set of gestures, and cannot be easily combined with other recognizers. In particular, creating prototypes that employ advanced touch and mid-air gestures still requires significant technical experience and programming skill. Inspired by $1's easy, cheap, and flexible design, we present the GestureWiz prototyping environment, which provides designers with an integrated solution for gesture definition, conflict checking, and real-time recognition by employing human recognizers in a Wizard of Oz manner. We present a series of experiments with designers and crowds to show that GestureWiz can perform with reasonable accuracy and latency. We demonstrate the advantages of GestureWiz by recreating gesture-based interfaces from the literature and through a study with 12 interaction designers who prototyped a multimodal interface with support for a wide range of novel gestures in about 45 minutes.
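The abstract's central mechanism, recognition performed by humans rather than an algorithm, can be made concrete with a small sketch. The following is a minimal, hypothetical illustration (not the authors' implementation) of a crowd-powered recognition loop: an incoming gesture clip is broadcast to a handful of human recognizers, each picks a label from the designer-defined gesture set, and the first label to reach a simple majority within a latency budget wins. All names and parameters below (recognize_with_crowd, num_recognizers, timeout_s) are assumptions for illustration.

```python
import queue
import threading
import time
from collections import Counter
from typing import Optional

def recognize_with_crowd(clip_url: str,
                         gesture_set: list[str],
                         answers: "queue.Queue[str]",
                         num_recognizers: int = 5,
                         timeout_s: float = 3.0) -> Optional[str]:
    """Return the majority label from human recognizers, or None.

    In a real system, clip_url would be pushed to each recognizer's UI;
    in this sketch, their answers simply arrive on a queue.
    """
    votes: Counter = Counter()
    majority = num_recognizers // 2 + 1
    deadline = time.monotonic() + timeout_s
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:                # latency budget exhausted
            break
        try:
            label = answers.get(timeout=remaining)
        except queue.Empty:               # no more answers in time
            break
        if label not in gesture_set:      # discard answers outside the set
            continue
        votes[label] += 1
        if votes[label] >= majority:      # early exit once a majority agrees
            return label
    # Fall back to the most common answer seen so far, if any.
    return votes.most_common(1)[0][0] if votes else None

if __name__ == "__main__":
    gestures = ["swipe-left", "swipe-right", "pinch", "circle"]
    q: "queue.Queue[str]" = queue.Queue()
    # Simulate four human recognizers answering after short delays.
    for delay, label in [(0.2, "pinch"), (0.4, "pinch"),
                         (0.5, "circle"), (0.7, "pinch")]:
        threading.Timer(delay, q.put, args=[label]).start()
    print(recognize_with_crowd("https://example.org/clip.mp4", gestures, q))  # -> pinch
```

Returning on the first answer minimizes latency at the cost of accuracy; requiring a larger majority does the opposite. The aggregation scheme GestureWiz actually uses, and its measured accuracy/latency trade-offs, are described in the full paper rather than in this sketch.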

Supplementary Material

suppl.mov (pn1721-file5.mp4)
Supplemental video

References

[1] Amini, S., and Li, Y. CrowdLearner: Rapidly Creating Mobile Recognizers using Crowdsourcing. In Proc. UIST (2013).
[2] Ashbrook, D., and Starner, T. MAGIC: A Motion Gesture Design Tool. In Proc. CHI (2010).
[3] Bernstein, M. S., Brandt, J., Miller, R. C., and Karger, D. R. Crowds in Two Seconds: Enabling Realtime Crowd-Powered Interfaces. In Proc. UIST (2011).
[4] Bigham, J. P., Jayant, C., Ji, H., Little, G., Miller, A., Miller, R. C., Miller, R., Tatarowicz, A., White, B., White, S., and Yeh, T. VizWiz: Nearly Real-time Answers to Visual Questions. In Proc. UIST (2010).
[5] Chen, X. A., Grossman, T., Wigdor, D. J., and Fitzmaurice, G. W. Duet: Exploring Joint Interactions on a Smart Phone and a Smart Watch. In Proc. CHI (2014).
[6] Dow, S., Lee, J., Oezbek, C., MacIntyre, B., Bolter, J. D., and Gandy, M. Wizard of Oz Interfaces for Mixed Reality Applications. In Proc. CHI Extended Abstracts (2005), 1339--1342.
[7] Dow, S., MacIntyre, B., Lee, J., Oezbek, C., Bolter, J. D., and Gandy, M. Wizard of Oz Support Throughout an Iterative Design Process. IEEE Pervasive Computing 4, 4 (2005), 18--26.
[8] Jang, S., Elmqvist, N., and Ramani, K. GestureAnalyzer: Visual Analytics for Pattern Analysis of Mid-Air Hand Gestures. In Proc. SUI (2014).
[9] Kato, J., McDirmid, S., and Cao, X. DejaVu: Integrated Support for Developing Interactive Camera-Based Programs. In Proc. UIST (2012).
[10] Kittur, A., Smus, B., Khamkar, S., and Kraut, R. E. CrowdForge: Crowdsourcing Complex Work. In Proc. UIST (2011), 43--52.
[11] Kulkarni, A., Can, M., and Hartmann, B. Collaboratively Crowdsourcing Workflows with Turkomatic. In Proc. CSCW (2012).
[12] Laput, G., Lasecki, W. S., Wiese, J., Xiao, R., Bigham, J. P., and Harrison, C. Zensors: Adaptive, Rapidly Deployable, Human-Intelligent Sensor Feeds. In Proc. CHI (2015).
[13] Lasecki, W. S., Gordon, M., Koutra, D., Jung, M. F., Dow, S. P., and Bigham, J. P. Glance: Rapidly Coding Behavioral Video with the Crowd. In Proc. UIST (2014).
[14] Lasecki, W. S., Kim, J., Rafter, N., Sen, O., Bigham, J. P., and Bernstein, M. S. Apparition: Crowdsourced User Interfaces that Come to Life as You Sketch Them. In Proc. CHI (2015).
[15] Lasecki, W. S., Murray, K. I., White, S., Miller, R. C., and Bigham, J. P. Real-Time Crowd Control of Existing Interfaces. In Proc. UIST (2011), 23--32.
[16] Lee, S.-S., Chae, J., Kim, H., Lim, Y.-K., and Lee, K.-P. Towards more Natural Digital Content Manipulation via User Freehand Gestural Interaction in a Living Room. In Proc. UbiComp (2013).
[17] Lü, H., and Li, Y. Gesture Coder: A Tool for Programming Multi-touch Gestures by Demonstration. In Proc. CHI (2012).
[18] Lü, H., and Li, Y. Gesture Studio: Authoring Multi-touch Interactions through Demonstration and Declaration. In Proc. CHI (2013).
[19] MacIntyre, B., Gandy, M., Dow, S., and Bolter, J. D. DART: A Toolkit for Rapid Design Exploration of Augmented Reality Experiences. In Proc. UIST (2004).
[20] Morris, M. R. Web on the Wall: Insights from a Multimodal Interaction Elicitation Study. In Proc. ITS (2012).
[21] Morris, M. R., Danielescu, A., Drucker, S. M., Fisher, D., Lee, B., m. c. schraefel, and Wobbrock, J. O. Reducing Legacy Bias in Gesture Elicitation Studies. Interactions 21, 3 (2014).
[22] Morris, M. R., Wobbrock, J. O., and Wilson, A. D. Understanding Users' Preferences for Surface Gestures. In Proc. GI (2010).
[23] Nacenta, M. A., Kamber, Y., Qiang, Y., and Kristensson, P. O. Memorability of Pre-designed and User-defined Gesture Sets. In Proc. CHI (2013).
[24] Nebeling, M., and Dey, A. K. XDBrowser: User-Defined Cross-Device Web Page Designs. In Proc. CHI (2016).
[25] Nebeling, M., Huber, A., Ott, D., and Norrie, M. C. Web on the Wall Reloaded: Implementation, Replication and Refinement of User-Defined Interaction Sets. In Proc. ITS (2014).
[26] Oh, U., and Findlater, L. The Challenges and Potential of End-User Gesture Customization. In Proc. CHI (2013).
[27] Ouyang, T., and Li, Y. Bootstrapping Personal Gesture Shortcuts with the Wisdom of the Crowd and Handwriting Recognition. In Proc. CHI (2012).
[28] Piumsomboon, T., Clark, A. J., Billinghurst, M., and Cockburn, A. User-Defined Gestures for Augmented Reality. In Proc. INTERACT (2013).
[29] Song, J., Sörös, G., Pece, F., Fanello, S. R., Izadi, S., Keskin, C., and Hilliges, O. In-Air Gestures Around Unmodified Mobile Devices. In Proc. UIST (2014).
[30] Taranta II, E. M., Samiei, A., Maghoumi, M., Khaloo, P., Pittman, C. R., and LaViola Jr., J. J. Jackknife: A Reliable Recognizer with Few Samples and Many Modalities. In Proc. CHI (2017).
[31] Vatavu, R. User-Defined Gestures for Free-Hand TV Control. In Proc. EuroITV (2012).
[32] Wobbrock, J. O., Morris, M. R., and Wilson, A. D. User-Defined Gestures for Surface Computing. In Proc. CHI (2009).
[33] Wobbrock, J. O., Wilson, A. D., and Li, Y. Gestures without Libraries, Toolkits or Training: A $1 Recognizer for User Interface Prototypes. In Proc. UIST (2007).



    Published In

    CHI '18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
    April 2018
    8489 pages
    ISBN:9781450356206
    DOI:10.1145/3173574

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. crowdsourcing
    2. gesture-based interfaces
    3. rapid prototyping
    4. wizard of oz

    Qualifiers

    • Research-article

    Conference

    CHI '18

    Acceptance Rates

    CHI '18 Paper Acceptance Rate: 666 of 2,590 submissions, 26%
    Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%


    Article Metrics

    • Downloads (Last 12 months)60
    • Downloads (Last 6 weeks)8
    Reflects downloads up to 01 Nov 2024

    Cited By
    • (2024) How New Developers Approach Augmented Reality Development Using Simplified Creation Tools: An Observational Study. Multimodal Technologies and Interaction 8, 4 (35). DOI: 10.3390/mti8040035. Online publication date: 22-Apr-2024.
    • (2024) GestureGPT: Toward Zero-Shot Free-Form Hand Gesture Understanding with Large Language Model Agents. Proceedings of the ACM on Human-Computer Interaction 8, ISS (462-499). DOI: 10.1145/3698145. Online publication date: 24-Oct-2024.
    • (2024) Do I Just Tap My Headset? Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, 4 (1-28). DOI: 10.1145/3631451. Online publication date: 12-Jan-2024.
    • (2023) Brave New GES World: A Systematic Literature Review of Gestures and Referents in Gesture Elicitation Studies. ACM Computing Surveys 56, 5 (1-55). DOI: 10.1145/3636458. Online publication date: 7-Dec-2023.
    • (2023) Reframe: An Augmented Reality Storyboarding Tool for Character-Driven Analysis of Security & Privacy Concerns. Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (1-15). DOI: 10.1145/3586183.3606750. Online publication date: 29-Oct-2023.
    • (2023) GestureCanvas: A Programming by Demonstration System for Prototyping Compound Freehand Interaction in VR. Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (1-17). DOI: 10.1145/3586183.3606736. Online publication date: 29-Oct-2023.
    • (2023) Eliciting Security & Privacy-Informed Sharing Techniques for Multi-User Augmented Reality. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (1-17). DOI: 10.1145/3544548.3581089. Online publication date: 19-Apr-2023.
    • (2022) AnyGesture: Arbitrary One-Handed Gestures for Augmented, Virtual, and Mixed Reality Applications. Applied Sciences 12, 4 (1888). DOI: 10.3390/app12041888. Online publication date: 11-Feb-2022.
    • (2022) PONI: A Personalized Onboarding Interface for Getting Inspiration and Learning About AR/VR Creation. Nordic Human-Computer Interaction Conference (1-14). DOI: 10.1145/3546155.3546642. Online publication date: 8-Oct-2022.
    • (2022) The Gesture Authoring Space: Authoring Customised Hand Gestures for Grasping Virtual Objects in Immersive Virtual Environments. Proceedings of Mensch und Computer 2022 (85-95). DOI: 10.1145/3543758.3543766. Online publication date: 4-Sep-2022.
