DOI: 10.1145/3643834.3660691

Body Language for VUIs: Exploring Gestures to Enhance Interactions with Voice User Interfaces

Published: 01 July 2024

Abstract

With progress in Large Language Models (LLMs) and the rapid development of wearable smart devices such as smart glasses, users increasingly have the opportunity to interact with on-device virtual assistants through both voice and gesture. Although voice user interfaces (VUIs) have been widely studied, the potential of full-body gestures for VUIs that can perceive both users' surroundings and their movements remains relatively unexplored. In this two-phase study using a Wizard-of-Oz approach, we investigated the role of gestures in VUI interactions and explored their design space. In an initial exploratory study with six participants, we identified factors that influence VUI gestures and established an initial design space. In the second phase, we conducted a study with 12 participants to validate and refine those initial findings. Our results show that users are ready to adopt gestures when interacting with multi-modal VUIs, especially in scenarios with poor voice-capture quality. The study also highlights three key categories of gesture functions for enhancing multi-modal VUI interaction: context reference, alternative input, and flow control. Finally, we present a design space for multi-modal VUI gestures, along with demonstrations, to inform future designs that couple multi-modal VUIs with gestures.
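To make the three gesture-function categories concrete, the sketch below shows one way a multi-modal VUI might route a detected gesture alongside a (possibly missing) voice utterance. This is a minimal, hypothetical Python sketch: the event names, the gesture-to-function mapping, and the `handle_gesture` helper are illustrative assumptions, not code or APIs from the paper.

```python
# Hypothetical sketch (not from the paper): routing gesture events in a
# multi-modal VUI according to the three gesture-function categories the
# study identifies. All names and mappings below are illustrative.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class GestureFunction(Enum):
    CONTEXT_REFERENCE = auto()  # e.g., pointing at an object while asking "what's this?"
    ALTERNATIVE_INPUT = auto()  # e.g., a nod standing in for "yes" when audio is noisy
    FLOW_CONTROL = auto()       # e.g., a raised palm to interrupt the assistant


@dataclass
class GestureEvent:
    name: str                     # detected gesture, e.g., "point", "nod", "palm_up"
    target: Optional[str] = None  # object the gesture refers to, if any


# Illustrative mapping from detected gestures to gesture functions.
GESTURE_FUNCTIONS = {
    "point": GestureFunction.CONTEXT_REFERENCE,
    "nod": GestureFunction.ALTERNATIVE_INPUT,
    "head_shake": GestureFunction.ALTERNATIVE_INPUT,
    "palm_up": GestureFunction.FLOW_CONTROL,
}


def handle_gesture(event: GestureEvent, utterance: Optional[str]) -> str:
    """Fuse a gesture with a (possibly missing) voice utterance."""
    function = GESTURE_FUNCTIONS.get(event.name)
    if function is GestureFunction.CONTEXT_REFERENCE and utterance:
        # Context reference: ground deictic words like "this" in the gesture target.
        return utterance.replace("this", event.target or "this")
    if function is GestureFunction.ALTERNATIVE_INPUT and not utterance:
        # Alternative input: when voice capture fails, the gesture carries the answer.
        return "yes" if event.name == "nod" else "no"
    if function is GestureFunction.FLOW_CONTROL:
        # Flow control: interrupt or steer the assistant's ongoing response.
        return "<interrupt>"
    return utterance or ""


# Pointing at a lamp while saying "turn this on" resolves the reference:
print(handle_gesture(GestureEvent("point", target="the lamp"), "turn this on"))
# -> "turn the lamp on"
```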


Published In

DIS '24: Proceedings of the 2024 ACM Designing Interactive Systems Conference
July 2024
3616 pages
ISBN: 9798400705830
DOI: 10.1145/3643834
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Wizard-of-Oz study
  2. design space
  3. gesture interaction
  4. voice user interface

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

DIS '24: Designing Interactive Systems Conference
July 1-5, 2024
Copenhagen, Denmark

Acceptance Rates

Overall Acceptance Rate 1,158 of 4,684 submissions, 25%
