New Haven, Connecticut, United States

Julia Irwin

Using eye-tracking methodology, gaze to a speaking face was compared in a group of children with autism spectrum disorders (ASD) and a group with typical development (TD). Patterns of gaze were observed under three conditions: audiovisual (AV) speech in auditory noise, visual only speech and an AV non-face, non-speech control. Children with ASD looked less to the face of the speaker and fixated less on the speakers' mouth than TD controls. No differences in gaze were reported for the non-face, non-speech control task. Since the mouth holds much of the articulatory information available on the face, these findings suggest that children with ASD may have reduced access to critical linguistic information. This reduced access to visible articulatory information could be a contributor to the communication and language problems exhibited by children with ASD.
Two competing theories have been proposed to explain the fact that vision can dominate over audition in syllables that have been spliced so that the two modalities specify different phonemes [McGurk and MacDonald, Nature 263, 746–748 (1976)]. The first theory states that ...
Visual speech information influences what listeners hear. When the places of articulation of visual and auditory speech tokens are incongruent, perceivers often report hearing a visually influenced response (the "McGurk effect"; McGurk and MacDonald, 1976). However, individual differences in this visual influence are poorly understood. Extending work by Grant and Seitz (1998) and Conrey and Pisoni (2006), we examined correlations between susceptibility to the "McGurk effect" and performance on three related audiovisual tasks. (1) AV speech in noise: we assessed visual gain by comparing word identification in audio-only and AV conditions. (2) AV asynchrony detection: participants made asynchrony judgments of speech and nonspeech stimuli with asynchronies ranging from 250 ms visual lead to 250 ms auditory lead. The speech stimuli were CV syllables, and the nonspeech stimuli consisted of Lissajous circles paired with sine waves. In one set of nonspeech stimuli, the Lissajous figure was modeled on the lip aperture of the CV, and the sine wave's amplitude and frequency were derived from the CV. For the other set, the Lissajous figure and sine wave were derived from clapping hands. (3) Speechreading: participants identified isolated words presented visually. Factors associated with a strong McGurk effect will be discussed. [Work supported by NIH.]
To examine the reliability and validity of the 42-item Brief Infant-Toddler Social and Emotional Assessment (BITSEA), a screener for social-emotional/behavioral problems and delays in competence. Parents in a representative healthy birth cohort of 1,237 infants aged 12 to 36 months completed the Infant-Toddler Social and Emotional Assessment (ITSEA)/BITSEA, the Child Behavior Checklist (CBCL)/1.5-5, the MacArthur Communication Developmental Inventory vocabulary checklist, and worry questions. In a subsample, independent evaluators rated infant-toddler behavior. Test-retest reliability was excellent and interrater agreement (mother/father and parent/child-care provider) was good. Supporting validity, BITSEA problems correlated with concurrent evaluator problem ratings and CBCL/1.5-5 scores and also predicted CBCL/1.5-5 and ITSEA problem scores one year later. BITSEA measures of competence correlated with concurrent observed competence and predicted later ITSEA competence measures. Supporting discriminant validity, only 23% of high BITSEA problem scorers had delayed vocabulary. Moreover, the combined BITSEA problem/competence cutpoints identified 85% of subclinical/clinical CBCL/1.5-5 scores, while maintaining acceptable specificity (75%). Findings support the BITSEA as a screener for social-emotional/behavioral problems and delays in social-emotional competence.
Fifteen children with autism spectrum disorders (ASD) and twenty-one children without ASD completed six perceptual tasks designed to characterize the nature of the audiovisual processing difficulties experienced by children with ASD. Children with ASD scored significantly lower than children without ASD on audiovisual tasks involving human faces and voices, but scored similarly to children without ASD on audiovisual tasks involving nonhuman stimuli (bouncing balls). Results suggest that children with ASD may use visual information for speech differently from children without ASD. Exploratory results support an inverse association between audiovisual speech processing capacities and social impairment in children with ASD.
ACKNOWLEDGMENTS This work was supported by NIH grants DC-007339 (Julia R. Irwin, PI) and DC-00403 (Catherine T. Best, PI) to Haskins Laboratories. Thanks to Cathi Best for her continued support, to Larry Brancazio for technical assistance and for graciously serving as the speaker for stimuli and demonstration, and to Jessica Grittner and Tiffany Gooding for assistance with data collection.
Parents, librarians and educators alike are invested in children learning to read. The library storytime provides a unique opportunity to introduce skills essential to pre-literacy development. This article reviews the literature on school-aged children and applies these findings as a basis for activities appropriate for pre-readers. Important areas for the development of pre-literacy are identified and explained, including alphabet knowledge, concepts about print, book handling skills, phonological awareness and expressive vocabulary. Specific activities using children's literature for each of these areas are provided.
The lexical decision (LD) and naming (NAM) tasks are ubiquitous paradigms that employ printed word identification. They are major tools for investigating how factors such as morphology, semantic information, and lexical neighborhood affect identification. Although use of the tasks is widespread, there has been little research into how performance in LD or NAM relates to reading ability, a deficiency that limits the translation of research with these tasks to the understanding of individual differences in reading. The present research was designed to provide a link from LD and NAM to the specific variables that characterize reading ability (e.g., decoding, sight word recognition, fluency, vocabulary, and comprehension) as well as to important reading-related abilities (phonological awareness and rapid naming). We studied 99 adults with a wide range of reading abilities. LD and NAM strongly predicted individual differences in word identification, less strongly predicted vocabulary...
Perception of spoken language requires attention to acoustic as well as visible phonetic information. This article reviews the known differences in audiovisual speech perception in children with autism spectrum disorders (ASD) and specifies the need for interventions that address this construct. Elements of an audiovisual training program are described. This researcher-developed program, delivered via an iPad app, presents natural speech in the context of increasing noise, supported by a speaking face. Children are cued to attend to visible articulatory information to assist in perception of the spoken words. Data from four children with ASD aged 8 to 10 are presented, showing that the children improved their performance on an untrained auditory speech-in-noise task.
Objective: To examine the social-emotional problems and competencies of toddlers who evidenced lags in expressive language without concomitant receptive language delays.