This study examines the role of source identification in the emotional response to everyday sounds. Although it is widely acknowledged that sound identification modulates the unpleasantness of sounds, this assumption is based on sparse evidence on a select few sounds. We gathered more robust evidence by having listeners judge the causal properties of sounds, such as actions, materials, and causal agents. Participants also identified and rated the pleasantness of the sounds. We included sounds from a variety of emotional categories, such as Neutral, Misophonic, Unpleasant, and Pleasant. The Misophonic category consists of everyday sounds that are uniquely distressing to a subset of listeners who suffer from Misophonia. Sounds from different emotional categories were paired together based on similar causal properties. This enabled us to test the prediction that a sound’s pleasantness should increase or decrease if it is misheard as being in a more or less pleasant emotional category, ...
Journal of the Acoustical Society of America, Mar 1, 2023
Performance outcomes for cochlear implant (CI) users traditionally focus on measures of speech perception. However, existing research indicates that environmental sound identification tasks also remain challenging for adult CI users compared to normal-hearing (NH) or hearing-impaired (HI) peers. In contrast, anecdotal reports indicate that environmental sound perception improves post-implantation. Methodological choices may contribute to this discrepancy; CI users may be benefiting from more integrative higher-level processes that are not adequately measured by source identification tasks. This study employs two alternative tasks that are designed to assess perception of the semantic properties of environmental sounds. The first is a comprehension task, which requires listeners to make inferences about naturalistic sound scene recordings (e.g., which other activities might you expect to take place, or at which time of day does this scene most likely occur). The second task presents a triplet of isolated environmental sound recordings and requires participants to select the sound that does not belong. Preliminary data indicate that CI users are able to perform the tasks with varying levels of proficiency relative to NH and HI listeners. Comprehension tasks focused on context-dependent semantic processing may, thus, complement findings from more traditional single-sound identification tasks.
One way to study human sound recognition is to investigate the reasons why sounds are sometimes misheard as coming from the wrong source. Understanding this cognitive process can not only help prevent undesirable sound confusions (e.g., auditory display design) but can also promote useful confusions (e.g., Foley effects, cognitive reappraisal for misophonia). We tested the hypothesis that sounds are more confusable if their source events share causal properties. In Exp. 1, listeners assessed causal properties of everyday sounds (ESC-50 Dataset) by judging their actions (e.g., tapping), materials (e.g., metal), and causal agents (e.g., machines). Causal similarity between sounds was measured by the distance between their causal properties. In Exp. 2, new listeners identified these sounds with 90% accuracy. Using the distances obtained in Exp. 1, misidentifications were predicted with 91% sensitivity and 89% specificity. The causal properties that had the largest effect on recognition...
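The prediction scheme described above can be sketched as follows: represent each sound by a profile of causal-property ratings and treat two sounds as confusable when their profiles lie close together. This is a minimal illustration, not the study's actual pipeline; the sound names, rating values, distance metric, and threshold are all hypothetical.

```python
import numpy as np

# Hypothetical causal-property profiles: mean listener ratings on shared
# dimensions (e.g., tapping-like action, liquid material, machine agent).
# The names and numbers are illustrative, not taken from the experiments.
profiles = {
    "knocking":  np.array([0.9, 0.1, 0.8, 0.2]),
    "hammering": np.array([0.8, 0.2, 0.9, 0.1]),
    "pouring":   np.array([0.1, 0.9, 0.0, 0.7]),
}

def causal_distance(a, b):
    """Euclidean distance between two causal-property profiles."""
    return float(np.linalg.norm(profiles[a] - profiles[b]))

def likely_confused(a, b, threshold=0.5):
    """Predict a confusion when causal profiles are sufficiently close."""
    return causal_distance(a, b) < threshold

print(likely_confused("knocking", "hammering"))  # similar causal events -> True
print(likely_confused("knocking", "pouring"))    # dissimilar causal events -> False
```

In this toy setup, knocking and hammering share an impact action and a rigid material, so their profiles sit close together and they are flagged as confusable, while knocking and pouring are not.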
Although the sound level reaching a listener’s ear depends upon the sound source level and the environment, a stable source level can be perceived (McDermott et al., 2021). Nonetheless, variation in sound level can disrupt recognition in a short-term old/new task (Susini et al., 2019). We asked whether there is evidence of long-term memory of the typical level of everyday sounds. First, we found that listeners can report the level at which they typically hear a sound. Next, we compared sound judgements over headphones (ESC-50 dataset) across two conditions: (1) “typical”: levels set to produce the loudness experienced as “typical” for each sound (as determined by pilot studies); and (2) “equal”: levels at 70 dB SPL. Recognition, familiarity, and pleasantness were judged. There was no significant difference in recognition accuracy between level conditions and no interaction with whether sounds were louder or softer than their typical levels. In addition, recognition increased as soun...
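The “equal” versus “typical” level manipulation above amounts to rescaling each waveform to a target presentation level. A minimal sketch, assuming (hypothetically) that unit-RMS playback corresponds to 70 dB SPL on a calibrated system; this is not the study's actual calibration procedure.

```python
import numpy as np

def scale_to_level(x, target_db, ref_db=70.0):
    """Scale waveform x so its presentation level is target_db, under the
    assumption that unit RMS plays back at ref_db dB SPL on a calibrated
    system. Amplitude gain follows the 20*log10 rule."""
    rms = np.sqrt(np.mean(x ** 2))
    gain = 10.0 ** ((target_db - ref_db) / 20.0)
    return x * (gain / rms)

# A sinusoidal test signal normalized to the "equal" 70 dB SPL condition:
tone = np.sin(np.linspace(0.0, 2.0 * np.pi * 100.0, 4410))
equal = scale_to_level(tone, 70.0)   # unit RMS under the assumption above
softer = scale_to_level(tone, 64.0)  # 6 dB below: roughly half the amplitude
```

A per-sound “typical” condition would simply pass each sound's pilot-derived level as `target_db` instead of a constant.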
We report a series of experiments about a little-studied type of compatibility effect between a stimulus and a response: the priming of manual gestures via sounds associated with these gestures. The goal was to investigate the plasticity of the gesture-sound associations mediating this type of priming. Five experiments used a primed choice-reaction task. Participants were cued by a stimulus to perform response gestures that produced response sounds; those sounds were also used as primes before the response cues. We compared arbitrary associations between gestures and sounds (key lifts and pure tones) created during the experiment (i.e. no pre-existing knowledge) with ecological associations corresponding to the structure of the world (tapping gestures and sounds, scraping gestures and sounds) learned through the entire life of the participant (thus existing prior to the experiment). Two results were found. First, the priming effect exists for ecological as well as arbitrary ass...
When hearing knocking on a door, a listener typically identifies both the action (forceful and repeated impacts) and the object (a thick wooden board) causing the sound. The current work studied the neural bases of sound source identification by switching listeners' attention toward these different aspects of a set of simple sounds during functional magnetic resonance imaging scanning: participants either discriminated the action or the material that caused the sounds, or they simply discriminated meaningless scrambled versions of them. Overall, discriminating action and material elicited neural activity in a left-lateralized frontoparietal network found in other studies of sound identification, wherein the inferior frontal sulcus and the ventral premotor cortex were under the control of selective attention and sensitive to task demand. More strikingly, discriminating materials elicited increased activity in cortical regions connecting auditory inputs to semantic, motor, and even...
Proceedings of the 23rd International Conference on Auditory Display - ICAD 2017, 2017
Echolocation - the ability to detect objects in space through the perception of echoes from these objects - has been identified as a promising avenue to help visually impaired individuals navigate within their environments. The interest is in part because a proof-of-concept exists: certain visually impaired individuals are able to navigate using active echolocation. Why, then, is echolocation not in more widespread use among visually impaired individuals? It is possible that a lack of systematic echolocation training platforms has impeded individuals in picking up this skill. We designed a game-application that serves as a training platform for individuals, sighted or not, to train themselves to echolocate. Preliminary testing from both sighted and visually impaired individuals showed that users uniformly understood the game, although their enjoyment of the game was mixed. Although a number of game features could be improved, it is a promising training tool prototype for individual...
In Psychology, actions are paramount for humans to perceive and separate sound events. In Machine Learning (ML), action recognition achieves high accuracy; however, it has not been asked if identifying actions can benefit Sound Event Classification (SEC), as opposed to mapping the audio directly to a sound event. Therefore, we propose a new Psychology-inspired approach for SEC that includes identification of actions via human listeners. To achieve this goal, we used crowdsourcing to have listeners identify 20 actions that in isolation or in combination may have produced any of the 50 sound events in the well-studied dataset ESC-50. The resulting annotations for each audio recording relate actions to a database of sound events for the first time. The annotations were used to create semantic representations called Action Vectors (AVs). We evaluated SEC by comparing the AVs with two types of audio features - log-mel spectrograms and state-of-the-art audio embeddings. Because audio fea...
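An Action Vector in the sense described above can be sketched as a multi-hot encoding over the action vocabulary, with recordings compared to class-level vectors by cosine similarity. This is an illustrative reconstruction under stated assumptions; the action names, class names, and nearest-neighbor classification rule are hypothetical, not the paper's actual evaluation.

```python
import numpy as np

# Hypothetical action vocabulary; the study crowdsourced 20 actions
# over the 50 ESC-50 sound-event classes.
ACTIONS = ["tapping", "scraping", "pouring", "vibrating"]

def action_vector(annotated_actions):
    """Multi-hot semantic representation over the action vocabulary."""
    return np.array([1.0 if a in annotated_actions else 0.0 for a in ACTIONS])

def cosine(u, v):
    """Cosine similarity between two non-zero action vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy class-level AVs aggregated from listener annotations:
class_avs = {
    "door_knock":  action_vector({"tapping"}),
    "water_drops": action_vector({"pouring", "tapping"}),
}

# Classify a new recording's AV by its nearest class-level AV:
recording_av = action_vector({"tapping", "pouring"})
pred = max(class_avs, key=lambda c: cosine(recording_av, class_avs[c]))
print(pred)  # -> water_drops
```

The same comparison could be run against AVs predicted from audio features (log-mel spectrograms or embeddings) rather than human annotations, which is the bridge between the listener study and the ML evaluation.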
Papers by Laurie Heller