Frank Russo
  • Toronto, Ontario, Canada

  • Frank Russo is a professor of Psychology at Ryerson University and an affiliate scientist at the Toronto Rehabilitation Institute.
Tapping along with a metronome or the beat of music is a relatively easy task. Certain acoustic features of music have been found to support this behavioural synchronization. For example, lower frequency content has been found to be related to higher tapping velocity and lower tapping variability (Stupacher, Hove, & Janata, 2016). Neurons will also entrain their firing to the beat of music, but it is unknown whether those same acoustic features that support behavioural synchronization will also support neural entrainment. The current study seeks to investigate which acoustic features of music support the entrainment of neurons that are related to behavioural synchronization, such as those in premotor areas of the brain. In a previous study, participants listened to music while EEG was measured from the surface of the scalp. Independent components analysis was used to identify sources of activity in auditory and premotor areas of the brain. In a post-hoc analysis, certain acoustic features of the music were found to correlate with neural entrainment. Specifically, tempo and RMS were found to correlate with entrainment of premotor areas, whereas low energy rate (the proportion of the signal below the average energy) and spectral centroid were found to correlate with beta-band phase coherence of auditory and premotor areas. In a second [pilot] study, a stimulus set was created to specifically investigate these features and their ability to entrain neurons in premotor areas of the brain.
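As a concrete illustration of the acoustic features named above (RMS, low energy rate, and spectral centroid), the following Python sketch computes them from a mono audio array with NumPy; the frame length, hop size, and function names are illustrative choices rather than the analysis settings used in the study, and tempo estimation (which would normally rely on a beat tracker) is omitted.

    import numpy as np

    def frame_signal(x, frame_len=2048, hop=512):
        # Slice a mono signal into overlapping analysis frames.
        n_frames = 1 + (len(x) - frame_len) // hop
        return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

    def acoustic_features(x, sr):
        frames = frame_signal(np.asarray(x, dtype=float))
        # Frame-wise RMS energy.
        rms = np.sqrt(np.mean(frames ** 2, axis=1))
        # Low energy rate: proportion of frames whose energy falls below the mean.
        low_energy_rate = float(np.mean(rms < rms.mean()))
        # Spectral centroid of each frame, then averaged across frames.
        spectrum = np.abs(np.fft.rfft(frames, axis=1))
        freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / sr)
        centroid = np.sum(spectrum * freqs, axis=1) / (np.sum(spectrum, axis=1) + 1e-12)
        return {"rms": float(rms.mean()),
                "low_energy_rate": low_energy_rate,
                "spectral_centroid": float(centroid.mean())}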
Hearing loss, which most adults will experience to some degree as they age, has been associated with decreased emotional wellbeing and reduced quality of life in aging adults. Although assistive technologies (e.g., hearing aids) can target aspects of peripheral hearing loss, persistent perceptual deficits are widely reported. One prevalent example is the loss of the ability to perceive speech in a noisy environment, which severely impacts quality of life and goes relatively unremediated by hearing aids. Musicianship has been shown to improve aspects of auditory processing, but has not been studied as a short-term intervention for improving these abilities in older adults. The current study investigates whether short-term choir participation can improve three aspects of auditory processing: perception of speech in noise, pitch discrimination, and the neural response to brief auditory stimuli (frequency following response; FFR). Forty-six older adults (aged 50+) participated in a choir for 10 weeks, during which they took part in group singing (2 hours/week) supported by individual online musical training (1 hour/week). Choir participants (n=46) underwent pre- and post-training assessments, conducted during the first week of the choir and again after the last week. Two control groups were assessed, including a group of older adults (aged 50+) involved in 10 weeks of music appreciation classes (music perception group; n=17), and an age- and audiometry-matched do-nothing control group (aged 50+; n=25). Control participants underwent the same battery of assessments, measured twice over the same time frame as the choir participants. Auditory assessments were administered electronically, and the FFR was obtained using electroencephalography (EEG). Preliminary statistical analyses showed that choir participants improved across all auditory measures, while both control groups showed no differences. These findings support our hypothesis that short-term choir participation is an effective intervention for neural and perceptual aspects of age-related hearing loss.
Comfort is an important characteristic of hearing protectors, as important as the sound attenuation. A bibliographical study was performed by Behar and Segu (submitted) examining research on comfort from hearing protectors published during the last 25 years. The study was a background document for developing a procedure for ranking comfort of hearing protectors on the basis of physical characteristics. The paper’s main recommendation was to work on only one type of protector. (Prof. S. Gerges, Universidade Federal de Santa Catarina, Brazil, is studying comfort from ear muffs exclusively.) The current project will focus on comfort from foam earplugs exclusively. Twenty participants will evaluate twelve types of foam earplugs. Each participant will be asked to assess comfort using a visual analog scale. In addition, wax molds will be obtained from the ears of all participants. These molds will be digitized to obtain the shape and size of the ear canal. The physical characteristics of the earplugs will also be measured (density, stiffness, diameter, etc.). Finally, a correlation will be sought between the physical characteristics of the plugs and the comfort experienced by the participants. A multiple regression approach will be used to assess the influence of physical measures including stiffness, size, density, shape, and material on comfort ratings obtained across individuals with ear canals of varying diameter. The models will be realized as formulae specifying the proportion of variance accounted for by each factor, the weight of the factor, and its directionality. These formulae could then be used to classify any foam plug that may be developed after completion of this study.
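The multiple-regression step described above can be sketched in a few lines of Python; the data here are random placeholders, and the predictor set (stiffness, density, plug diameter, ear-canal diameter) merely stands in for the physical measures that will actually be collected.

    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder data: 20 participants x 12 plugs = 240 comfort ratings.
    X = rng.normal(size=(240, 4))      # columns: stiffness, density, plug diameter, canal diameter
    comfort = rng.normal(size=240)     # visual analog scale ratings

    X1 = np.column_stack([np.ones(len(X)), X])            # add an intercept term
    beta, *_ = np.linalg.lstsq(X1, comfort, rcond=None)   # regression weights

    pred = X1 @ beta
    r_squared = 1 - np.sum((comfort - pred) ** 2) / np.sum((comfort - comfort.mean()) ** 2)
    # r_squared is the proportion of variance accounted for; the sign of each
    # element of beta[1:] gives the directionality of the corresponding factor.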
Musical rhythms elicit the perception of a beat (or pulse), which in turn elicits spontaneous motor synchronization (Repp & Su, 2013). Electroencephalography (EEG) research has shown that endogenous neural oscillations dynamically entrain to the beat frequencies of musical rhythms, providing a neurological marker for beat perception (Nozaradan, Peretz, Missal, & Mouraux, 2011). Rhythms, however, vary in complexity, which modulates the ability to synchronize motor movement. Although musical rhythms are typically assumed to come from auditory sources, recent research suggests that rhythms presented through vibro-tactile stimulation of the spine support motor synchronization comparably to auditory presentation for simpler rhythms, although this equivalence diminishes as complexity increases (Ammirante, Patel, & Russo, 2016). The current research proposes to explore the neural correlates of vibro-tactile beat perception with the aim of providing further evidence for rhythm perception through the vibro-tactile modality. Participants will be passively exposed to simple and complex rhythms from auditory, vibro-tactile, and multi-modal sources. Synchronization ability as well as EEG recordings will be obtained in order to provide behavioural and neurological indexes of beat perception. Results from this research will provide evidence for non-auditory, vibro-tactile capabilities of music perception.
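One common way to quantify neural entrainment at the beat frequency, in the spirit of the frequency-tagging approach of Nozaradan et al. (2011), is sketched below; the window, bin ranges, and function name are assumptions for illustration, not the planned analysis pipeline.

    import numpy as np

    def beat_amplitude(eeg_epoch, sr, beat_hz):
        # Amplitude spectrum of one EEG epoch (windowed to reduce spectral leakage).
        epoch = np.asarray(eeg_epoch, dtype=float)
        spectrum = np.abs(np.fft.rfft(epoch * np.hanning(len(epoch)))) / len(epoch)
        freqs = np.fft.rfftfreq(len(epoch), d=1.0 / sr)
        k = int(np.argmin(np.abs(freqs - beat_hz)))
        # Subtract the mean of surrounding bins so broadband activity does not
        # inflate the estimate; only a narrow peak at the beat frequency survives.
        neighbours = np.r_[spectrum[k - 5:k - 2], spectrum[k + 3:k + 6]]
        return spectrum[k] - neighbours.mean()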
Introduction: This study is a follow-up to prior research from our group that attempts to relate noise exposure and hearing thresholds in active performing musicians of the National Ballet of Canada Orchestra. Materials and methods: Exposures obtained in early 2010 were compared to exposures obtained in early 2017 (the present study). In addition, audiometric thresholds obtained in early 2012 were compared to thresholds obtained in early 2017 (the present study). This collection of measurements presents an opportunity to observe the regularities in the patterns of exposure as well as threshold changes that may be expected in active orchestra musicians over a 5-year span. Results: The pattern of noise exposure across instrument groups, which was consistent over the two time points, reveals highest exposures among brass, percussion/basses, and woodwinds. However, the average noise exposure across groups and time was consistently below 85 dBA, which suggests no occupational hazard. These observations were corroborated by audiometric thresholds, which were generally (a) in the normal range and (b) unchanged in the 5-year period between measurements. Conclusion: Because exposure levels were consistently below 85 dBA and changes in audiometric thresholds were minimal, we conclude that musicians experienced little-to-no risk of noise-induced hearing loss.
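Note that exposure levels expressed in dBA are averaged on an energy basis rather than arithmetically; a minimal sketch of that calculation, with made-up per-group levels, is shown below.

    import numpy as np

    def energy_average_dba(levels_dba):
        # Convert to linear energy, average, and convert back to decibels.
        levels = np.asarray(levels_dba, dtype=float)
        return 10 * np.log10(np.mean(10 ** (levels / 10)))

    # Hypothetical per-group exposures (dBA), compared against the 85 dBA criterion.
    print(energy_average_dba([82.0, 79.5, 84.0]))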
Where does a listener's anticipation of the next note in an unfamiliar melody come from? One view is that expectancies reflect innate grouping biases; another is that expectancies reflect statistical learning through previous musical exposure. Listening experiments support both views but in limited contexts, e.g., using only instrumental renditions of melodies. Here we report replications of two previous experiments, but with additional manipulations of timbre (instrumental vs. sung renditions) and register (modal vs. upper). Following a proposal that melodic expectancy is vocally constrained, we predicted that sung renditions would encourage an expectation that the next tone will be a “singable” one, operationalized here as one having an absolute pitch height that falls within the modal register. Listeners heard melodic fragments and gave goodness-of-fit ratings on the final tone (Experiment 1) or rated how certain they were about what the next note would be (Experiment 2). Ratings in the instrumental conditions were consistent with the original findings, but differed significantly from ratings in the sung conditions, which were more consistent with the vocal constraints model. We discuss how a vocal constraints model could be extended to include expectations about duration and tonality.
Skips are relatively infrequent in diatonic melodies and are compositionally treated in systematic ways. This treatment has been attributed to deliberate compositional strategies that are also subject to certain constraints. Study 1 showed that ease of vocal production may be accommodated compositionally. Number of skips and their distribution within a melody’s pitch range were compared between diverse statistical samples of vocal and instrumental melodies. Skips occurred less frequently in vocal melodies. Skips occurred more frequently in melodies’ lower and upper ranges, but there were more low skips than high (“low-skip bias”), especially in vocal melodies. Study 2 replicated these findings in the vocal and instrumental melodies of a single composition (Bach’s Mass in B minor). Study 3 showed that among the instrumental melodies of classical composers, low-skip bias was correlated with the proportion of vocal music within composers’ total output. We propose that, to varying degrees, composers apply a vocal template to instrumental melodies.
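For readers who want a concrete picture of the skip counts and the low-skip bias, the sketch below computes them from a melody coded as MIDI note numbers; treating any interval larger than two semitones as a skip and splitting skips at the midpoint of the pitch range are illustrative operationalizations, not necessarily the criteria used in the studies above.

    import numpy as np

    def skip_stats(midi_pitches):
        p = np.asarray(midi_pitches)
        intervals = np.diff(p)
        is_skip = np.abs(intervals) > 2           # larger than a major second
        midpoint = (p.min() + p.max()) / 2.0      # split of the melody's pitch range
        low_skips = int(np.sum(is_skip & (p[:-1] < midpoint)))
        high_skips = int(np.sum(is_skip & (p[:-1] >= midpoint)))
        return {"skip_rate": float(is_skip.mean()),
                "low_skips": low_skips,
                "high_skips": high_skips,
                "low_skip_bias": low_skips - high_skips}

    # e.g., a short hypothetical melody:
    print(skip_stats([60, 62, 64, 60, 67, 65, 64, 62, 60]))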
Music software applications often require similarity-finding measures. In this study, we describe an empirically derived measure for determining similarity between two melodies with multiple-note changes. The derivation of our final model involved three stages. In Stage 1, eight standard melodies were systematically varied with respect to pitch distance, pitch direction, tonal stability, metric salience and melodic contour. Comparison melodies with a one-note change were presented in transposed and nontransposed conditions. For the nontransposed condition, predictors of explained variance in similarity ratings were pitch distance, pitch direction and melodic contour. For the transposed condition, predictors were tonal stability and melodic contour. In Stage 2, we added the effects of primacy and recency. In Stage 3, comparison melodies with two-note changes were introduced, which allowed us to derive a more generalizable model capable of accommodating multiple-note changes. In a follow-up experiment, we show that our empirically derived measure of melodic similarity yielded superior performance to the Mongeau and Sankoff similarity measure. An empirically derived measure, such as the one described here, has the potential to extend the domain of similarity-finding methods in music information retrieval, on the basis of psychological predictors.
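The overall structure of such an empirically derived measure, a weighted combination of per-note-change predictors, can be sketched as follows; the predictor names, weights, and intercept are placeholders standing in for the coefficients estimated in Stages 1-3, not the published model.

    def melodic_similarity(note_changes, weights, intercept=10.0):
        # note_changes: one dict per altered note, giving the values of the
        # predictors named in `weights` (e.g., pitch_distance, contour_change,
        # tonal_stability_change, serial_position for primacy/recency effects).
        score = intercept
        for change in note_changes:
            for name, w in weights.items():
                score += w * change.get(name, 0.0)
        return score

    # Hypothetical use: a comparison melody with two changed notes.
    w = {"pitch_distance": -0.5, "contour_change": -2.0, "tonal_stability_change": -1.0}
    print(melodic_similarity([{"pitch_distance": 2, "contour_change": 1},
                              {"pitch_distance": 1, "tonal_stability_change": 1}], w))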
Contact Information: If you would like further information about the RAVDESS Facial Landmark Tracking data set, or if you experience any issues downloading files, please contact us at ravdess@gmail.com.

Tracking Examples: Watch a sample of the facial tracking results.

Description: This data set contains tracked facial landmark movements from the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) [RAVDESS Zenodo page]. Motion tracking of actors' faces was produced by OpenFace 2.1.0 (Baltrusaitis, Zadeh, Lim, & Morency, 2018). Tracked information includes: facial landmark detection, head pose estimation, facial action unit recognition, and eye-gaze estimation. This data set contains tracking for all 2452 RAVDESS trials. All tracking movement data are contained in "FacialTracking_Actors_01-24.zip", which contains 2452 .CSV files. Each actor has 104 tracked trials (60 speech, 44 song). Note that there are no song files for Actor 18. Total tracked files = (24 actors x 60 speech trials) + (23 actors x 44 song trials) = 2452 files. Tracking results for each trial are provided as individual comma-separated value files (CSV format). The file naming convention of tracked files is identical to that of the RAVDESS. For example, tracked file "01-01-01-01-01-01-01.csv" corresponds to RAVDESS audio-video file "01-01-01-01-01-01-01.mp4". For a complete description of the RAVDESS file naming convention and experimental manipulations, please see the RAVDESS Zenodo page. Tracking overlay videos for all trials are also provided (720p Xvid, .avi), one zip file per actor. As the RAVDESS does not contain "ground truth" facial landmark locations, the overlay videos provide a visual 'sanity check' for researchers to confirm the general accuracy of the tracking results. The file naming convention of tracking overlay videos also matches that of the RAVDESS. For example, tracking video "01-01-01-01-01-01-01.avi" corresponds to RAVDESS audio-video file "01-01-01-01-01-01-01.mp4".
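A minimal Python sketch for loading the tracked trials is shown below; it assumes the zip archive has been extracted into a folder named FacialTracking_Actors_01-24 in the working directory, and it does not interpret the seven filename fields (see the RAVDESS Zenodo page for their meaning).

    import glob
    import os
    import pandas as pd

    records = []
    for path in sorted(glob.glob("FacialTracking_Actors_01-24/*.csv")):
        name = os.path.splitext(os.path.basename(path))[0]
        identifiers = name.split("-")   # seven hyphen-separated fields (see Zenodo page)
        frames = pd.read_csv(path)      # one row per video frame, one column per OpenFace output
        records.append((identifiers, frames))

    print(len(records))                 # 2452 if every tracked trial is present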
Comfort is an important property of hearing protectors, perhaps as important as the sound attenuation. If a protector is deemed to be uncomfortable, it will not be worn, or it will be modified in some manner by the user, often to the detriment of the attenuation. Although doubtless important, this characteristic is not studied as often as it should be. A search conducted on the Web of Science database shows that between the years 1970 and 2014, 208 papers were published dealing with attenuation in hearing protectors, while there were only 22 papers related to comfort. One reason for the scarcity of research on comfort could be the lack of a consistent definition, due to its inherently subjective nature. Another is the dependency of comfort on factors other than the protector itself, such as the temperature and humidity of the workplace and the need for intelligibility. Finally, there are the anatomical differences among wearers that cause differences in comfort. This paper analyzes comfort stud...
This study examined how contextual relationships in time can affect perception, specifically the influence of a regularly occurring (isochronous) rhythm on judgements of simultaneity in both the auditory and vibrotactile modalities. Using the method of constant stimuli and a two-alternative forced choice, participants were presented with pairs of pure tones played either simultaneously or with various levels of stimulus onset asynchrony (SOA), and thresholds of detection (TOD) were defined as the SOA value at which participants achieved 75% accuracy. Stimuli in both modalities were nested within either: (i) a regularly occurring, predictable (isochronous) rhythm, (ii) an irregular, unpredictable (non-isochronous) rhythm, or (iii) no rhythm at all. TODs were significantly reduced by the regular rhythm as compared to no rhythm, but only in the auditory modality. Vibrotactile conditions also showed far greater variability overall, suggesting these tasks were more difficult.
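The threshold-of-detection calculation described above (the SOA yielding 75% correct) amounts to fitting a psychometric function to the constant-stimuli data; a minimal sketch with made-up response proportions is shown below.

    import numpy as np
    from scipy.optimize import curve_fit

    def psychometric(soa, alpha, beta):
        # 2AFC logistic function rising from chance (0.5) to 1.0; with this
        # parameterization, 75% correct is reached exactly at soa = alpha.
        return 0.5 + 0.5 / (1.0 + np.exp(-(soa - alpha) / beta))

    # Hypothetical data: SOAs (ms) and proportion of correct asynchrony judgements.
    soa = np.array([10, 20, 40, 60, 90, 120], dtype=float)
    p_correct = np.array([0.52, 0.58, 0.70, 0.82, 0.93, 0.97])

    (alpha, beta), _ = curve_fit(psychometric, soa, p_correct, p0=[50.0, 15.0])
    tod = alpha   # threshold of detection: SOA at 75% correct
    print(tod)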
Music informatics is an interdisciplinary research area that encompasses data-driven approaches to the analysis, generation, and retrieval of music. In the era of big data, two goals weigh heavily on many research agendas in this area: (a) the identification of better features and (b) the acquisition of better training data. To this end, researchers have started to incorporate findings and methods from music cognition, a related but historically distinct research area that is concerned with elucidating the underlying mental processes involved in music-related behavior.
The perception of an event is strongly influenced by the context in which it occurs. Here, we examined the effect of a rhythmic context on detection of asynchrony in both the auditory and vibrotactile modalities. Using the method of constant stimuli and a two-alternative forced choice (2AFC), participants were presented with pairs of pure tones played either simultaneously or with various levels of stimulus onset asynchrony (SOA). Target stimuli in both modalities were nested within either: (i) a regularly occurring, predictable rhythm (ii) an irregular, unpredictable rhythm, or (iii) no rhythm at all. Vibrotactile asynchrony detection had higher thresholds and showed greater variability than auditory asynchrony detection in general. Asynchrony detection thresholds for auditory targets but not vibrotactile targets were significantly reduced when the target stimulus was embedded in a regular rhythm as compared to no rhythm. Embedding within an irregular rhythm produced no such improv...
Objectives: Children who use cochlear implants (CIs) have characteristic pitch processing deficits leading to impairments in music perception and in understanding emotional intention in spoken language. Music training for normal-hearing children has previously been shown to benefit perception of emotional prosody. The purpose of the present study was to assess whether deaf children who use CIs obtain similar benefits from music training. We hypothesized that music training would lead to gains in auditory processing and that these gains would transfer to emotional speech prosody perception. Design: Study participants were 18 child CI users (ages 6 to 15). Participants received either 6 months of music training (i.e., individualized piano lessons) or 6 months of visual art training (i.e., individualized painting lessons). Measures of music perception and emotional speech prosody perception were obtained pre-, mid-, and post-training. The Montreal Battery for Evaluation of Musical Abil...
Publisher: MARCS Auditory Laboratories, University of Western Sydney. Date: 2007. Authors: Quinto, Lena; Thompson, William F. Twenty-nine participants heard sequences of syllables (la-la-la-ba, la-la-la-ga) that were spoken or sung to a steady ...
A novel algorithm for frequency lowering in music was developed and experimentally tested in hearing-impaired listeners. Harmonic frequency lowering (HFL) combines frequency transposition and frequency compression to preserve the harmonic content of music stimuli. Listeners were asked to make judgments regarding detail and sound quality in music stimuli. Stimuli were presented under different signal processing conditions: original, low-pass filtered, HFL, and nonlinear frequency compressed. Results showed that participants reported perceiving the most detail in the HFL condition. In addition, there was no difference in sound quality across conditions.
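The sketch below is not the HFL algorithm itself, only a toy illustration of the underlying idea: partials above a cutoff are lowered by whole octaves so they remain octave-equivalents of their original pitch classes, in contrast to nonlinear frequency compression, which maps them to arbitrary (inharmonic) frequencies. The fundamental, cutoff, and harmonic rolloff are all assumed values.

    import numpy as np

    def harmonic_lowering_demo(f0=220.0, n_harmonics=20, cutoff_hz=2000.0,
                               sr=16000, dur=1.0):
        t = np.arange(int(sr * dur)) / sr
        out = np.zeros_like(t)
        for k in range(1, n_harmonics + 1):
            f = k * f0
            while f > cutoff_hz:
                f /= 2.0                           # octave lowering preserves pitch class
            out += np.sin(2 * np.pi * f * t) / k   # simple 1/k amplitude rolloff
        return out / np.max(np.abs(out))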
A talker’s emotional state is one important type of information carried by the speech signal. Past studies have shown that listeners with hearing loss have difficulties identifying vocal emotion. However, there is little research on how much hearing aids may ameliorate these difficulties. The amplitude compression performed by hearing aids makes words easier to recognize, but little is known about how such processing affects the emotional cues carried in the speech signal. The speech materials used in this study were sentences spoken by a young female actor portraying different vocal emotions. These sentences were processed using different hearing aid simulations: a flat 10 dB gain across frequencies; linear gain according to NAL-NL2 targets; fast amplitude compression; and slow amplitude compression. Acoustic analyses of the hearing aid-processed speech showed that the amplitude envelope was flattened by fast amplitude compression, more so for sentences spoken in Angry and Happy co...
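The contrast between fast and slow amplitude compression can be illustrated with a basic feed-forward compressor in which only the attack and release time constants differ; the threshold, ratio, and time constants below are illustrative defaults, not the NAL-NL2 or study settings.

    import numpy as np

    def compress(x, sr, threshold_db=-30.0, ratio=3.0, attack_ms=5.0, release_ms=50.0):
        # One-pole attack/release smoothing of the level estimate. Short time
        # constants ("fast" compression) track and therefore flatten the amplitude
        # envelope; long ones ("slow" compression) leave it largely intact.
        x = np.asarray(x, dtype=float)
        att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
        rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
        level_db = -120.0
        out = np.empty_like(x)
        for i, sample in enumerate(x):
            inst_db = 20.0 * np.log10(abs(sample) + 1e-9)
            coeff = att if inst_db > level_db else rel
            level_db = coeff * level_db + (1.0 - coeff) * inst_db
            over = max(level_db - threshold_db, 0.0)
            gain_db = -over * (1.0 - 1.0 / ratio)
            out[i] = sample * 10.0 ** (gain_db / 20.0)
        return out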
A talker’s emotional state is one important type of information carried by the speech signal. While the frequency and amplitude compression performed by hearing aids may make speech easier to understand, little is known about how such processing affects users’ perception of emotion in speech. This study investigated how hearing aid use affected the perception of emotion in speech and the recognition of speech spoken with emotion. Listeners were hearing aid users who were tested with and without their aids in separate sessions. They heard sentences spoken by a young female actor portraying different vocal emotions, and were asked to report the keyword and identify the portrayed emotion. The use of hearing aids improved listeners’ word recognition performance from 43% correct (unaided) to 68% correct (aided). In contrast, hearing aids did not improve listeners’ emotion identification (38% unaided, compared to 40% aided). Emotions that were more easily identified were not necessarily t...
Noise exposure and hearing loss were assessed in different instrument groups of a professional ballet orchestra. Those group members experiencing the highest levels of exposure also had the highest pure tone thresholds. We found that thresholds were not uniform across instrument groups. The greatest difference in thresholds was observed at test frequencies above 2000 Hz, peaking at 4000 Hz, where the average difference between groups was as high as 15 dB. Five years have elapsed since these initial measurements were taken. In this follow-up, we reassess differences across the instrument groups in pure tone thresholds and noise exposure. We also include a measure of functional hearing. This study provides information that extends current understanding of the occupational risks faced by professional musicians playing in orchestras.
The influences of inharmonicity and bandwidth on sensitivity to tonality in the low-frequency range (A0 to G#1) were tested in a listening experiment. Participants were presented a key-defining context (do-mi-do-so) and were asked to rate the goodness of fit of probe tones to the context. Probe tones were the 12 tones of the chromatic scale beginning on do. The set of 12 ratings, called the probe-tone profile, was compared to an established standardized profile for the Western tonal hierarchy. Prior research employing this method with real (sampled) piano tones has suggested that sensitivity to tonality is influenced by inharmonicity, particularly in the lowest octaves of the piano where inharmonicity levels are substantially above the detection threshold. In the present experiment, sensitivity to tonality was tested using synthesized piano-like tones that were either harmonic or inharmonic. Participants were tested in either a broadband (no filtering) or low-pass (low-pass filtered...
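The comparison with the established standardized profile is essentially a correlation between the 12 probe-tone ratings and the standard tonal-hierarchy values; a minimal sketch is shown below, using the major-key values as commonly reported from Krumhansl & Kessler (1982).

    import numpy as np

    # Major-key tonal hierarchy, indexed in semitones upward from "do".
    STANDARD_PROFILE = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                                 2.52, 5.19, 2.39, 3.66, 2.29, 2.88])

    def tonality_sensitivity(probe_tone_ratings):
        # Higher correlation with the standard profile = greater sensitivity to tonality.
        ratings = np.asarray(probe_tone_ratings, dtype=float)
        return float(np.corrcoef(ratings, STANDARD_PROFILE)[0, 1])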
Sensorimotor brain areas have been implicated in the recognition of emotion expressed on the face and through non-verbal vocalizations. However, no previous study has assessed whether sensorimotor cortices are recruited during the perception of emotion in speech, a signal that includes both audio (speech sounds) and visual (facial speech movements) components. To address this gap in the literature, we recruited 24 participants to listen to speech clips expressed in a happy, sad, or neutral manner. These stimuli were also presented in one of three modalities: audio-only (hearing the voice but not seeing the face), video-only (seeing the face but not hearing the voice), or audiovisual. Brain activity was recorded using electroencephalography, subjected to independent component analysis, and source-localized. We found that the left pre-supplementary motor area was more active in response to happy and sad stimuli than neutral stimuli, as indexed by greater mu event-relate...
Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods were used to model the emotion judgments inclusive of neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings. Input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches that were used for modeling, we conclude that neural networks were optimal, yielding models that were flexible as well as interpretable. Inspection of a committee machine, encompassing an ensemble of netwo...
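A cross-validated comparison of the three model families named above can be sketched with scikit-learn as follows; the feature matrix and targets here are random placeholders standing in for the audio or physiological features and the emotion judgments.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 20))   # placeholder features (audio or physiological)
    y = rng.normal(size=60)         # placeholder emotion judgments

    models = {
        "linear regression": LinearRegression(),
        "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
        "neural network": MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                       random_state=0),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5, scoring="r2")
        print(f"{name}: mean cross-validated R^2 = {scores.mean():.2f}")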
Modelling Perceptual Elements of Music in a Vibrotactile Display for Deaf Users: A Field Study. ... We have found little work in the area of music enjoyment, towards making the emotional experiences of music listening more accessible to deaf and hard of hearing people. ...
Striking changes in sensitivity to tonality across the pitch range are reported. Participants were presented a key-defining context (do-mi-do-sol) followed by one of the 12 chromatic tones of the octave, and rated the goodness of fit of the probe tone to the context. The set of ratings, called the probe-tone profile, was compared to an established standardised profile for the Western tonal hierarchy. The presentation of context and probe tones at low and high pitch registers resulted in significantly reduced sensitivity to tonality. Sensitivity was especially poor for presentations in the lowest octaves where inharmonicity levels were substantially above the threshold for detection. We propose that sensitivity to tonality may be influenced by pitch salience (or a co-varying factor such as exposure to pitch distributional information) as well as suprathreshold inharmonicity.

And 107 more

The present study examined the influence of infant visual cues on maternal vocal and facial expressiveness while speaking or singing and the influence of maternal visual cues on infant attention. Experiment 1 asked whether mothers exhibit more vocal emotion when speaking and singing to infants in or out of view. Adults judged which of each pair of audio excerpts (in view, out of view) sounded more emotional. Face-to-face vocalizations were judged more emotional than vocalizations to infants out of view. Moreover, mothers smiled considerably more while singing than while speaking to infants. Experiment 2 examined the influence of video feedback from infants on maternal speech and singing. Maternal vocalizations in the context of video feedback were judged to be less emotional than those in face-to-face contexts but more emotional than those in out-of-view contexts. Experiment 3 compared six-month-old infants’ attention to maternal speech and singing with audio-only versions or with silent video-only versions. Infants exhibited comparable attention to audio-only versions of speech and singing but greater attention to video-only versions of singing. The present investigation is unique in documenting the contribution of infant visual feedback to maternal vocal emotion in contexts that control for infants’ presence, visibility, and proximity.