DOI: 10.1145/2858036.2858456
Research article · Open access

Tap the ShapeTones: Exploring the Effects of Crossmodal Congruence in an Audio-Visual Interface

Published: 07 May 2016

Abstract

There is growing interest in the application of crossmodal perception to interface design. However, most research has focused on task performance measures and often ignored user experience and engagement. We present an examination of crossmodal congruence in terms of performance and engagement in the context of a memory task involving audio, visual, and audio-visual stimuli. In a first study, participants showed improved performance with a congruent visual mapping, a benefit that was cancelled by the addition of audio to the baseline conditions, as well as a subjective preference for the audio-visual stimulus that was not reflected in the objective data. Based on these findings, we designed an audio-visual memory game to examine the effects of crossmodal congruence on user experience and engagement. Results showed higher engagement levels with congruent displays, with some reported preference for the potential challenge and enjoyment that an incongruent display may support, particularly as task complexity increased.

Supplementary Material

ZIP file (pn2098-file4.zip)
Supplemental video (pn2098.mp4)




      Information & Contributors

      Information

      Published In

      cover image ACM Conferences
      CHI '16: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems
      May 2016
      6108 pages
      ISBN:9781450333627
      DOI:10.1145/2858036
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery, New York, NY, United States

      Publication History

      Published: 07 May 2016


      Author Tags

      1. audio-visual display
      2. crossmodal congruence
      3. games
      4. spatial mappings
      5. user engagement
      6. user experience

      Qualifiers

      • Research-article

      Funding Sources

      • EPSRC
      • EU Marie Curie fellowship FP7 REA

      Conference

      CHI '16: CHI Conference on Human Factors in Computing Systems
      May 7-12, 2016
      San Jose, California, USA

      Acceptance Rates

      CHI '16 Paper Acceptance Rate: 565 of 2,435 submissions, 23%
      Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%



      Cited By

      • (2024) SonoHaptics: An Audio-Haptic Cursor for Gaze-Based Object Selection in XR. Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1-19. DOI: 10.1145/3654777.3676384. Online publication date: 13-Oct-2024.
      • (2024) "I Don't Really Get Involved In That Way": Investigating Blind and Visually Impaired Individuals' Experiences of Joint Attention with Sighted People. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-16. DOI: 10.1145/3613904.3642940. Online publication date: 11-May-2024.
      • (2023) Exploring Effective Relationships Between Visual-Audio Channels in Data Visualization. Journal of Visualization 26(4), 937-956. DOI: 10.1007/s12650-023-00909-3. Online publication date: 10-Apr-2023.
      • (2023) Using Crossmodal Correspondence Between Colors and Music to Enhance Online Art Exhibition Visitors' Experience. Information for a Better World: Normality, Virtuality, Physicality, Inclusivity, 144-159. DOI: 10.1007/978-3-031-28035-1_12. Online publication date: 10-Mar-2023.
      • (2022) Birdbox: Exploring the User Experience of Crossmodal, Multisensory Data Representations. Proceedings of the 21st International Conference on Mobile and Ubiquitous Multimedia, 12-21. DOI: 10.1145/3568444.3568455. Online publication date: 27-Nov-2022.
      • (2022) Exploring visual stimuli as a support for novices' creative engagement with digital musical interfaces. Journal on Multimodal User Interfaces 16(3), 343-356. DOI: 10.1007/s12193-022-00393-3. Online publication date: 1-Aug-2022.
      • (2021) Feeling Colours: Crossmodal Correspondences Between Tangible 3D Objects, Colours and Emotions. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1-12. DOI: 10.1145/3411764.3445373. Online publication date: 6-May-2021.
      • (2020) How Do We Experience Crossmodal Correspondent Mulsemedia Content? IEEE Transactions on Multimedia 22(5), 1249-1258. DOI: 10.1109/TMM.2019.2941274. Online publication date: May-2020.
      • (2020) Exploring crossmodal perceptual enhancement and integration in a sequence-reproducing task with cognitive priming. Journal on Multimodal User Interfaces 15(1), 45-59. DOI: 10.1007/s12193-020-00326-y. Online publication date: 13-Jul-2020.
      • (2019) Negative Emotions, Positive Experience. Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, 1-6. DOI: 10.1145/3290607.3313000. Online publication date: 2-May-2019.
      • Show More Cited By
