DOI: 10.1145/2858036.2858436
Research Article

Designing a Willing-to-Use-in-Public Hand Gestural Interaction Technique for Smart Glasses

Published: 07 May 2016

Abstract

Smart glasses suffer from obtrusive or cumbersome interaction techniques. Studies show that people are not willing to publicly use, for example, voice control or mid-air gestures in front of the face. Some techniques also hamper the high degree of freedom of the glasses. In this paper, we derive design principles for socially acceptable, yet versatile, interaction techniques for smart glasses based on a survey of related work. We propose an exemplary design, based on a haptic glove integrated with smart glasses, as an embodiment of the design principles. The design is further refined into three interaction scenarios: text entry, scrolling, and point-and-select. Through a user study conducted in a public space we show that the interaction technique is considered unobtrusive and socially acceptable. Furthermore, the performance of the technique in text entry is comparable to state-of-the-art techniques. We conclude by reflecting on the advantages of the proposed design.

Supplementary Material

  • Supplemental video: suppl.mov (pn2001-file3.mp4)
  • MP4 file: p4203-hsieh.mp4



    Published In

    CHI '16: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems
    May 2016
    6108 pages
    ISBN:9781450333627
    DOI:10.1145/2858036

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. head-mounted displays
    2. multimodal interaction
    3. social acceptability
    4. tactile feedback
    5. wearable computing

    Qualifiers

    • Research-article

    Conference

    CHI '16: CHI Conference on Human Factors in Computing Systems
    May 7-12, 2016
    San Jose, California, USA

    Acceptance Rates

    CHI '16 Paper Acceptance Rate: 565 of 2,435 submissions, 23%
    Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%


    Article Metrics

    • Downloads (last 12 months): 112
    • Downloads (last 6 weeks): 17
    Reflects downloads up to 06 Oct 2024

    Cited By

    • (2024) GestureMark: Shortcut Input Technique using Smartwatch Touch Gestures for XR Glasses. In Proceedings of the Augmented Humans International Conference 2024, 63-71. DOI: 10.1145/3652920.3652941. Online publication date: 4-Apr-2024.
    • (2024) Understanding Gesture and Microgesture Inputs for Augmented Reality Maps. In Proceedings of the 2024 ACM Designing Interactive Systems Conference, 409-423. DOI: 10.1145/3643834.3661630. Online publication date: 1-Jul-2024.
    • (2024) Exploring the Design Space of Input Modalities for Working in Mixed Reality on Long-haul Flights. In Proceedings of the 2024 ACM Designing Interactive Systems Conference, 2267-2285. DOI: 10.1145/3643834.3661560. Online publication date: 1-Jul-2024.
    • (2024) TouchEditor. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7(4), 1-29. DOI: 10.1145/3631454. Online publication date: 12-Jan-2024.
    • (2024) Memoro: Using Large Language Models to Realize a Concise Interface for Real-Time Memory Augmentation. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-18. DOI: 10.1145/3613904.3642450. Online publication date: 11-May-2024.
    • (2024) TriPad: Touch Input in AR on Ordinary Surfaces with Hand Tracking Only. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-18. DOI: 10.1145/3613904.3642323. Online publication date: 11-May-2024.
    • (2024) Make Interaction Situated: Designing User Acceptable Interaction for Situated Visualization in Public Environments. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-21. DOI: 10.1145/3613904.3642049. Online publication date: 11-May-2024.
    • (2024) Beyond Acceptance Models: The Role of Social Perceptions in Autonomous Public Transportation Acceptance. In HCI in Mobility, Transport, and Automotive Systems, 26-39. DOI: 10.1007/978-3-031-60480-5_2. Online publication date: 29-Jun-2024.
    • (2023) The Social Perception of Autonomous Delivery Vehicles Based on the Stereotype Content Model. Sustainability 15(6), 5194. DOI: 10.3390/su15065194. Online publication date: 15-Mar-2023.
    • (2023) Simulating Wearable Urban Augmented Reality Experiences in VR: Lessons Learnt from Designing Two Future Urban Interfaces. Multimodal Technologies and Interaction 7(2), 21. DOI: 10.3390/mti7020021. Online publication date: 16-Feb-2023.
