Forty-four participants were asked to sing moderate, high, and low pitches while their faces were photographed. In a two-alternative forced choice task, independent judges selected the high-pitch faces as more friendly than the low-pitch faces. When photographs were cropped to show only the eye region, judges still rated the high-pitch faces friendlier than the low-pitch faces. These results are consistent with prior research showing that vocal pitch height is used to signal aggression (low pitch) or appeasement (high pitch). An analysis of the facial features shows a strong correlation between eyebrow position and sung pitch—consistent with the role of eyebrows in signaling aggression and appeasement. Overall, the results are consistent with an inter-modal linkage between vocal and facial expressions.
The interactive game environment Ghost in the Cave, presented in this short paper, is a work still in progress. The game involves participants in an activity using non-verbal emotional expressions. Two teams compete using expressive gestures in either voice or body movements. Each team has an avatar controlled either by singing into a microphone or by moving in front of a video camera; players control their avatars by using acoustical or motion cues. The avatar is navigated in a 3D distributed virtual environment using the Octagon server and player system. The voice input is processed using a musical cue analysis module yielding performance variables such as tempo, sound level, and articulation, as well as an emotional prediction. Similarly, movements captured from a video camera are analyzed in terms of different movement cues. The target group is young teenagers, and the main purpose is to encourage creative expression through new forms of collaboration.
When we observe someone perform a familiar action, we can usually predict what kind of sound that action will produce. Musical actions are over-experienced by musicians and not by non-musicians, and thus offer a unique way to examine how action expertise affects brain processes when the predictability of the produced sound is manipulated. We used functional magnetic resonance imaging to scan 11 drummers and 11 age- and gender-matched novices who made judgments on point-light drumming movements presented with sound. In Experiment 1, sound was synchronized or desynchronized with drumming strikes, while in Experiment 2 sound was always synchronized, but the natural covariation between sound intensity and velocity of the drumming strike was maintained or eliminated. Prior to MRI scanning, each participant completed psychophysical testing to identify personal levels of synchronous and asynchronous timing to be used in the two fMRI activation tasks. In both experiments, the drummers' brain activation was reduced in motor and action representation brain regions when sound matched the observed movements, and was similar to that of novices when sound was mismatched. This reduction in neural activity occurred bilaterally in the cerebellum and left parahippocampal gyrus in Experiment 1, and in the right inferior parietal lobule, inferior temporal gyrus, middle frontal gyrus and precentral gyrus in Experiment 2. Our results indicate that brain functions in action-sound representation areas are modulated by multimodal action expertise.
► Action expertise alters audiovisual brain mechanisms of biological motion.
► Cerebellum activity is reduced for over-learned audiovisual synchrony actions.
► Natural audiovisual covariation reduces fronto-temporo-parietal activity for experts.
We investigated the effect of musical expertise on sensitivity to asynchrony for drumming point-light displays, which varied in their physical characteristics (Experiment 1) or in their degree of audiovisual congruency (Experiment 2). In Experiment 1, 21 repetitions of three tempos × three accents × nine audiovisual delays were presented to four jazz drummers and four novices. In Experiment 2, ten repetitions of two audiovisual incongruency conditions × nine audiovisual delays were presented to 13 drummers and 13 novices. Participants gave forced-choice judgments of audiovisual synchrony. The results of Experiment 1 show an enhancement in experts’ ability to detect asynchrony, especially for slower drumming tempos. In Experiment 2 an increase in sensitivity to asynchrony was found for incongruent stimuli; this increase, however, is attributable only to the novice group. Altogether the results indicated that through musical practice we learn to ignore variations in stimulus characteristics that otherwise would affect our multisensory integration processes.
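As a quick check of the factorial designs described above, the per-participant trial counts can be worked out directly. The sketch below enumerates the conditions; the specific tempo, accent, and delay values are illustrative placeholders, not values taken from the study:

```python
from itertools import product

# Placeholder condition labels; the actual tempo, accent, and delay
# values are not stated in the abstract.
tempos = ["slow", "medium", "fast"]        # three tempos
accents = ["a1", "a2", "a3"]               # three accent patterns
delays_ms = list(range(-200, 250, 50))     # nine audiovisual delays

# Experiment 1: 21 repetitions of every tempo x accent x delay combination.
exp1_conditions = list(product(tempos, accents, delays_ms))
exp1_trials = 21 * len(exp1_conditions)    # 21 * 3 * 3 * 9 = 1701 trials

# Experiment 2: 10 repetitions of two congruency conditions x nine delays.
exp2_trials = 10 * 2 * len(delays_ms)      # 10 * 2 * 9 = 180 trials

print(exp1_trials, exp2_trials)            # prints: 1701 180
```

This makes the asymmetry between the two experiments concrete: Experiment 1 demanded roughly an order of magnitude more judgments per participant than Experiment 2, which fits its smaller group sizes (four per group versus thirteen).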
Single cell data from macaque suggest special processing of the sights and sounds of biological actions (Kohler, Keysers, et al., Science, 2002). Recently, Arrighi, Alais and Burr (JOV, 2006) examined this hypothesis using judgments of perceptual synchrony of audio and visual streams of conga drumming as well as with synthetic audio and visual streams. The perception of audiovisual temporal synchrony provides a window on how these two different sensory modalities are integrated. To further investigate the perception ...