DOI: 10.1145/3613905.3651109
Work in Progress

SocialCueSwitch: Towards Customizable Accessibility by Representing Social Cues in Multiple Senses

Published: 11 May 2024
  Abstract

    In virtual environments, many social cues (e.g., gestures, eye contact, and proximity) are currently conveyed visually or auditorily. Representing social cues in additional modalities, such as haptic cues that complement visual or audio signals, will help increase VR’s accessibility and take advantage of the platform’s inherent flexibility. However, accessibility implementations in social VR are often siloed within a single sensory modality. To broaden the accessibility of social virtual reality beyond replacing one sensory modality with another, we identified a subset of social cues and built tools to enhance them, allowing users to switch between modalities and choose how these cues are represented. Because consumer VR relies primarily on visual and auditory stimuli, we started with social cues that were not accessible to blind and low vision (BLV) and d/Deaf and hard of hearing (DHH) people, and expanded how those cues could be represented to accommodate a range of needs. We describe how these tools were designed around the principle of social cue switching, and how a standard distribution method can amplify their reach.
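
    As a concrete reading aid, a minimal sketch of the cue-switching principle follows, in Unity C#. All names here (CueSwitchboard, SocialCue, Modality, Dispatch) are illustrative assumptions for this sketch, not the published SocialCueSwitch API:

        using System.Collections.Generic;
        using UnityEngine;

        // Illustrative sketch only: these enums and the dispatcher below are
        // assumptions, not the published SocialCueSwitch API.
        public enum SocialCue { Proximity, EyeContact, Gesture }
        public enum Modality  { Visual, Audio, Haptic }

        public class CueSwitchboard : MonoBehaviour
        {
            // Each user maps every cue type to their preferred modality,
            // e.g. a BLV user routes visual cues to audio or haptics.
            private readonly Dictionary<SocialCue, Modality> preferences =
                new Dictionary<SocialCue, Modality>
                {
                    { SocialCue.Proximity,  Modality.Haptic },
                    { SocialCue.EyeContact, Modality.Audio  },
                    { SocialCue.Gesture,    Modality.Visual },
                };

            // Detectors (gaze, distance, animation events) call this when a
            // cue fires; the preference table decides how it is rendered.
            public void Dispatch(SocialCue cue, Transform source)
            {
                switch (preferences[cue])
                {
                    case Modality.Visual:
                        Debug.Log($"Visual indicator for {cue} at {source.name}");
                        break;
                    case Modality.Audio:
                        // A spatialized AudioSource on `source` would play here.
                        Debug.Log($"Spatialized audio for {cue}");
                        break;
                    case Modality.Haptic:
                        // XR controller rumble would be triggered here.
                        Debug.Log($"Haptic pulse for {cue}");
                        break;
                }
            }
        }

    The point of the indirection is that detectors never decide how a cue is rendered; the user's preference table does, which is what makes the representation switchable per user.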

    Supplemental Material

    MP4 File - Video Preview
    MP4 File - Talk Video
    MP4 File - Video Figure
    ZIP File - SocialCueSwitch Package for Unity
    The SocialCueSwitch Unity package is designed to enhance virtual interactions within Unity-powered environments, focusing on improving communication and engagement between avatars through a sophisticated system of audio and visual cues based on proximity and gestures. This package includes a range of pre-configured assets, scripts, and demo scenes to facilitate rapid integration and customization tailored to specific project requirements.
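    To make the proximity behavior above concrete, the following hedged sketch shows one way such a trigger could feed the dispatcher from the earlier sketch. The class and field names are invented for illustration and are not taken from the package's actual scripts:

        using UnityEngine;

        // Hypothetical proximity-driven cue, in the spirit of the package
        // description; not taken from the actual SocialCueSwitch sources.
        public class ProximityCueTrigger : MonoBehaviour
        {
            [SerializeField] private Transform otherAvatar;      // avatar to monitor
            [SerializeField] private float triggerDistance = 2f; // meters
            [SerializeField] private CueSwitchboard switchboard; // dispatcher from the sketch above

            private bool inRange;

            private void Update()
            {
                bool nowInRange =
                    Vector3.Distance(transform.position, otherAvatar.position)
                    <= triggerDistance;

                // Fire once on entry so the cue does not repeat every frame.
                if (nowInRange && !inRange)
                    switchboard.Dispatch(SocialCue.Proximity, otherAvatar);

                inRange = nowInRange;
            }
        }

    Attaching one such component per nearby avatar, with the distance threshold exposed in the Inspector, would mirror the kind of per-project customization the package description emphasizes.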



    Published In

    CHI EA '24: Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems
    May 2024
    4761 pages
    ISBN:9798400703317
    DOI:10.1145/3613905
    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

    Association for Computing Machinery, New York, NY, United States

    Publication History

    Published: 11 May 2024


    Author Tags

    1. accessibility
    2. code sharing
    3. collaborative development
    4. sensory substitution

    Qualifiers

    • Work in progress
    • Research
    • Refereed limited

    Data Availability

    SocialCueSwitch Package for Unity (described under Supplemental Material above): https://dl.acm.org/doi/10.1145/3613905.3651109#3613905.3651109-supplement-1.zip

    Conference

    CHI '24

    Acceptance Rates

    Overall Acceptance Rate 6,164 of 23,696 submissions, 26%

    Article Metrics

    • Total Citations: 0
    • Total Downloads: 199
    • Downloads (last 12 months): 199
    • Downloads (last 6 weeks): 48

    Reflects downloads up to 11 Aug 2024.

