DOI: 10.1145/3242587.3242642

Designing Socially Acceptable Hand-to-Face Input

Published: 11 October 2018

Abstract

Wearable head-mounted displays combine rich graphical output with an impoverished input space. Hand-to-face gestures have been proposed as a way to add input expressivity while keeping control movements unobtrusive. To better understand how to design such techniques, we describe an elicitation study, conducted in a busy public space, in which pairs of users were asked to generate unobtrusive, socially acceptable hand-to-face input actions. Based on the results, we describe five design strategies: miniaturizing, obfuscating, screening, camouflaging, and re-purposing. We instantiate these strategies in two hand-to-face input prototypes, one based on touches to the ear and the other on touches of the thumbnail to the chin or cheek. Performance assessments characterize time and error rates with these devices. The paper closes with a validation study in which pairs of users experience the prototypes in a public setting; we gather data on the social acceptability of the designs and reflect on the effectiveness of the different strategies.

Supplementary Material

Supplemental video: suppl.mov (ufp1385p.mp4)
MP4 File: p711-lee.mp4




Published In

UIST '18: Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology
October 2018
1016 pages
ISBN: 9781450359481
DOI: 10.1145/3242587

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. augmented reality
  2. hand-to-face input
  3. head mounted display
  4. social acceptability
  5. user elicitation

Qualifiers

  • Research-article

Conference

UIST '18

Acceptance Rates

UIST '18 Paper Acceptance Rate: 80 of 375 submissions (21%)
Overall Acceptance Rate: 561 of 2,567 submissions (22%)

Article Metrics

  • Downloads (last 12 months): 74
  • Downloads (last 6 weeks): 6
Reflects downloads up to 16 Oct 2024

Cited By

  • (2024) Exploring User-Defined Gestures as Input for Hearables and Recognizing Ear-Level Gestures with IMUs. Proceedings of the ACM on Human-Computer Interaction 8, MHCI, 1-23. DOI: 10.1145/3676503. Online publication date: 24-Sep-2024.
  • (2024) Designing More Private and Socially Acceptable Hand-to-Face Gestures for Heads-Up Computing. Companion of the 2024 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 637-639. DOI: 10.1145/3675094.3678994. Online publication date: 5-Oct-2024.
  • (2024) Understanding Novice Users' Mental Models of Gesture Discoverability and Designing Effective Onboarding. Companion of the 2024 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 290-295. DOI: 10.1145/3675094.3678370. Online publication date: 5-Oct-2024.
  • (2024) Exploring the Design Space of Input Modalities for Working in Mixed Reality on Long-haul Flights. Proceedings of the 2024 ACM Designing Interactive Systems Conference, 2267-2285. DOI: 10.1145/3643834.3661560. Online publication date: 1-Jul-2024.
  • (2024) Do I Just Tap My Headset? Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, 4, 1-28. DOI: 10.1145/3631451. Online publication date: 12-Jan-2024.
  • (2024) MAF: Exploring Mobile Acoustic Field for Hand-to-Face Gesture Interactions. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-20. DOI: 10.1145/3613904.3642437. Online publication date: 11-May-2024.
  • (2024) Make Interaction Situated: Designing User Acceptable Interaction for Situated Visualization in Public Environments. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-21. DOI: 10.1145/3613904.3642049. Online publication date: 11-May-2024.
  • (2024) GazePuffer: Hands-Free Input Method Leveraging Puff Cheeks for VR. 2024 IEEE Conference Virtual Reality and 3D User Interfaces (VR), 331-341. DOI: 10.1109/VR58804.2024.00055. Online publication date: 16-Mar-2024.
  • (2023) Brave New GES World: A Systematic Literature Review of Gestures and Referents in Gesture Elicitation Studies. ACM Computing Surveys 56, 5, 1-55. DOI: 10.1145/3636458. Online publication date: 7-Dec-2023.
  • (2023) Surveying the Social Comfort of Body, Device, and Environment-Based Augmented Reality Interactions in Confined Passenger Spaces Using Mixed Reality Composite Videos. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, 3, 1-25. DOI: 10.1145/3610923. Online publication date: 27-Sep-2023.
