DOI: 10.1145/3616195.3616209
Research article · Open access

Stringesthesia: Dynamically Shifting Musical Agency Between Audience and Performer Based on Trust in an Interactive and Improvised Performance

Published: 11 October 2023

Abstract

This paper introduces Stringesthesia, an interactive and improvised performance paradigm. Stringesthesia uses real-time neuroimaging to connect performers and audiences, enabling direct access to the performer’s mental state and determining audience participation during the performance. Functional near-infrared spectroscopy (fNIRS), a noninvasive neuroimaging tool, was used to assess metabolic activity of brain areas collectively associated with a metric we call “trust”. A visualization representing the real-time measurement of the performer’s level of trust was projected behind the performer and used to dynamically restrict or promote audience participation: e.g., as the performer’s trust in the audience grew, more participatory stations for playing drums and selecting the performer’s chords were activated. Throughout the paper we discuss prior work that heavily influenced our design, conceptual and methodological issues with using fNIRS technology, and our system architecture. We then describe feedback from the audience and performer in a performance setting with a solo guitar player.



Published In

AM '23: Proceedings of the 18th International Audio Mostly Conference
August 2023, 204 pages
ISBN: 9798400708183
DOI: 10.1145/3616195

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Publisher: Association for Computing Machinery, New York, NY, United States

    Author Tags

    1. Neuroimaging
    2. Performance paradigms
    3. fNIRS
    4. improvisation
    5. musical agency
    6. trust


Conference

AM '23: Audio Mostly 2023
August 30 – September 1, 2023
Edinburgh, United Kingdom

Overall acceptance rate: 177 of 275 submissions, 64%
