Research Article · DOI: 10.1145/3663548.3675663

Accessible Nonverbal Cues to Support Conversations in VR for Blind and Low Vision People

Published: 27 October 2024

Abstract

Social VR has grown in popularity due to its affordances for rich, embodied, nonverbal communication. However, nonverbal communication remains inaccessible to blind and low vision people in social VR. We designed accessible audio and haptic cues to represent three nonverbal behaviors: eye contact, head shaking, and head nodding. We evaluated these cues in real-time conversation tasks in which 16 blind and low vision participants conversed with two other users in VR. The cues were effective in supporting conversations: participants scored significantly higher on accuracy and confidence in detecting attention during conversations with the cues than without. Participants also had a range of preferences and uses for the cues, such as learning social norms. We present design implications for handling additional cues in the future, including the challenges of incorporating AI. Through this work, we take a step toward making interpersonal embodied interactions in VR fully accessible to blind and low vision people.
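The abstract describes mapping nonverbal behaviors (eye contact, head nodding, head shaking) onto audio and haptic cues delivered in real time. The page does not include the authors' implementation, so the sketch below is only a minimal, hypothetical Python illustration of what such a gesture-to-cue pipeline could look like: every name (HeadPose, classify_gesture, render_cue) and every threshold is an assumption made here for illustration, not the paper's system or any VR platform API.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional, Sequence

class Cue(Enum):
    EYE_CONTACT = auto()  # would come from gaze direction; not classified below
    HEAD_NOD = auto()
    HEAD_SHAKE = auto()

@dataclass
class HeadPose:
    pitch: float  # degrees; nodding oscillates pitch
    yaw: float    # degrees; shaking oscillates yaw

def _oscillation(samples: Sequence[float]) -> tuple[int, float]:
    """Count direction reversals and peak-to-peak amplitude of a 1-D trace."""
    deltas = [b - a for a, b in zip(samples, samples[1:]) if abs(b - a) > 0.5]
    reversals = sum(1 for a, b in zip(deltas, deltas[1:]) if a * b < 0)
    amplitude = (max(samples) - min(samples)) if samples else 0.0
    return reversals, amplitude

def classify_gesture(window: Sequence[HeadPose],
                     threshold_deg: float = 10.0,
                     min_reversals: int = 2) -> Optional[Cue]:
    """Label a short window of head poses as a nod, a shake, or neither.

    A gesture counts when the relevant axis reverses direction at least
    `min_reversals` times with amplitude >= `threshold_deg` (both values
    are illustrative guesses, not tuned parameters from the paper).
    """
    pitch_rev, pitch_amp = _oscillation([p.pitch for p in window])
    yaw_rev, yaw_amp = _oscillation([p.yaw for p in window])
    if pitch_rev >= min_reversals and pitch_amp >= threshold_deg:
        return Cue.HEAD_NOD
    if yaw_rev >= min_reversals and yaw_amp >= threshold_deg:
        return Cue.HEAD_SHAKE
    return None

def render_cue(cue: Cue) -> None:
    """Stand-in for the VR platform's spatial-audio and vibration calls."""
    print(f"[audio]   earcon for {cue.name}")
    print(f"[haptics] vibration pattern for {cue.name}")

if __name__ == "__main__":
    # A conversation partner nodding: pitch oscillates, yaw stays still.
    nod = [HeadPose(pitch=p, yaw=0.0) for p in (0, 12, 0, 12, 0)]
    detected = classify_gesture(nod)
    if detected:
        render_cue(detected)  # -> HEAD_NOD
```

In a real system the pose window would stream from the partner avatar's head transform, render_cue would invoke the headset's audio engine and controller vibration, and eye contact would be tested from gaze direction rather than head oscillation.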

    Published In

    ASSETS '24: Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility
    October 2024
    1475 pages
    ISBN: 9798400706776
    DOI: 10.1145/3663548

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. VR
    2. accessibility
    3. blind
    4. low vision

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    ASSETS '24

    Acceptance Rates

    Overall Acceptance Rate 436 of 1,556 submissions, 28%
