DOI: 10.1145/3613904.3642935

Exploring Visualizations for Precisely Guiding Bare Hand Gestures in Virtual Reality

Published: 11 May 2024

Abstract

Bare-hand interaction in augmented and virtual reality (AR/VR) systems, while intuitive, often results in errors and frustration. Existing guidance methods, such as static icons or dynamic tutorials, can only convey simple, coarse hand gestures and offer no corrective feedback. This paper explores visualizations for guiding precise hand interaction in VR. Through a two-part formative study with 11 participants, we identified four types of information essential for visual guidance and designed visualizations that manifest these information types. We then distilled four visual designs and conducted a controlled lab study with 15 participants to assess their effectiveness for various single- and double-handed gestures. Our results show that visual guidance significantly improved users’ gesture performance, reducing completion time and workload while increasing confidence. Moreover, the visualizations did not disrupt most users’ sense of immersion or their perceptions of hand-tracking and gesture-recognition reliability.
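
As a minimal sketch of the kind of corrective feedback the abstract describes, the Python example below compares a tracked hand pose against a target gesture joint by joint and reports which joints fall outside a tolerance, so a guidance renderer could highlight only those joints. The joint set, the 2 cm tolerance, and the `joints_to_highlight` helper are illustrative assumptions, not the paper’s implementation.

```python
# Illustrative sketch only; not from the paper. Hypothetical joint names and
# tolerance stand in for whatever a real hand-tracking SDK would provide.
import math

JOINTS = ["thumb_tip", "index_tip", "middle_tip", "ring_tip", "pinky_tip"]

def joint_errors(tracked, target):
    """Euclidean distance (metres) between tracked and target joint positions."""
    return {name: math.dist(tracked[name], target[name]) for name in JOINTS}

def joints_to_highlight(tracked, target, tolerance=0.02):
    """Joints deviating more than `tolerance` from the target pose; a guidance
    renderer could colour these on the user's avatar hand."""
    return [name for name, err in joint_errors(tracked, target).items()
            if err > tolerance]

if __name__ == "__main__":
    # Toy poses: 3D positions per joint, in metres.
    target = {name: (i * 0.03, 0.0, 0.0) for i, name in enumerate(JOINTS)}
    tracked = dict(target)
    tracked["index_tip"] = (0.03, 0.05, 0.0)  # index fingertip 5 cm off target
    print(joints_to_highlight(tracked, target))  # -> ['index_tip']
```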

Supplemental Material

  • Video Preview (MP4)
  • Video Presentation (MP4), with transcript
  • Video Figure (MP4, ~3 minutes, with captions), with transcript


Cited By

  • (2024) Next-Gen Dynamic Hand Gesture Recognition: MediaPipe, Inception-v3 and LSTM-Based Enhanced Deep Learning Model. Electronics 13(16), Article 3233. https://doi.org/10.3390/electronics13163233. Online publication date: 15-Aug-2024.


Information & Contributors

Published In
CHI '24: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems
May 2024
18961 pages
ISBN:9798400703300
DOI:10.1145/3613904

Publisher

Association for Computing Machinery, New York, NY, United States



Author Tags

  1. Virtual reality
  2. error visualization
  3. hand gesture recognition
  4. visual guidance

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Meta
  • Natural Sciences and Engineering Research Council of Canada

Conference

CHI '24

Acceptance Rates

Overall Acceptance Rate 6,199 of 26,314 submissions, 24%



Article Metrics

  • Downloads (last 12 months): 1,259
  • Downloads (last 6 weeks): 114

Reflects downloads up to 01 Jan 2025.

