Fourteen single-sided deaf listeners fit with an MED-EL cochlear implant (CI) judged the similarity of clean signals presented to their CI and modified signals presented to their normal-hearing ear. The signals to the normal-hearing ear were created by (a) filtering, (b) spectral smearing, (c) changing overall fundamental frequency (F0), (d) F0 contour flattening, (e) changing formant frequencies, (f) altering resonances and ring times to create a metallic sound quality, (g) using a noise vocoder, or (h) using a sine vocoder. The operations could be used singly or in any combination. On a scale of 1 to 10 where 10 was a complete match to the sound of the CI, the mean match score was 8.8. Over half of the matches were 9.0 or higher. The most common alterations to a clean signal were band-pass or low-pass filtering, spectral peak smearing, and F0 contour flattening. On average, 3.4 operations were used to create a match. Upshifts in formant frequencies were implemented most often for ...
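The noise-vocoder operation mentioned above can be sketched in a few lines: split the signal into bands, extract each band's temporal envelope, and use the envelopes to modulate band-limited noise carriers. This is a minimal illustrative sketch; the channel count, filter order, and band edges below are assumptions, not the parameters used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(signal, fs, n_channels=8, lo=100.0, hi=7000.0):
    """Crude noise vocoder: log-spaced analysis bands, Hilbert envelopes,
    envelope-modulated band-limited noise carriers (illustrative values)."""
    edges = np.logspace(np.log10(lo), np.log10(hi), n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="band", fs=fs, output="sos")
        band = sosfilt(sos, signal)            # analysis band
        env = np.abs(hilbert(band))            # temporal envelope
        carrier = sosfilt(sos, rng.standard_normal(len(signal)))
        out += env * carrier                   # modulated noise carrier
    return out
```

A sine vocoder follows the same structure, with a sinusoid at each band's center frequency substituted for the noise carrier.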
Purpose: There is a growing body of literature that suggests a linkage between impaired auditory function, increased listening effort, and fatigue in children and adults with hearing loss. Research suggests this linkage may be associated with hearing loss–related variations in diurnal cortisol levels. Here, we examine variations in cortisol profiles between young adults with and without severe sensorineural hearing loss and examine associations between cortisol and subjective measures of listening effort and fatigue. Method: This study used a repeated-measures, matched-pair design. Two groups (n = 8 per group) of adults enrolled in audiology programs participated, 1 group of adults with hearing loss (AHL) and 1 matched control group without hearing loss. Salivary cortisol samples were collected at 7 time points over a 2-week period and used to quantify physiological stress. Subjective measures of listening effort, stress, and fatigue were also collected to investigate relationships b...
Journal of Neurological Surgery Part B: Skull Base
Unilateral severe-to-profound sensorineural hearing loss (SNHL), also known as single-sided deafness (SSD), is a problem that affects both children and adults and can have severe and detrimental effects on multiple aspects of life, including music appreciation, speech understanding in noise, speech and language acquisition, performance in the classroom and/or the workplace, and quality of life. Additionally, the loss of binaural hearing in SSD patients affects those processes that rely on two functional ears, including sound localization, binaural squelch and summation, and the head shadow effect. Over the last decade, there has been increasing interest in cochlear implantation for SSD to restore binaural hearing. Early data are promising that cochlear implantation for SSD can help to restore binaural functionality, improve quality of life, and may facilitate reversal of neuroplasticity related to auditory deprivation in the pediatric population. Additionally, this new patient populat...
Journal of Speech, Language, and Hearing Research (JSLHR), Jan 17, 2017
The aim of this article is to summarize recent published and unpublished research from our 2 laboratories on improving speech understanding in complex listening environments by listeners fit with cochlear implants (CIs). CI listeners were tested in 2 listening environments. One was a simulation of a restaurant with multiple, diffuse noise sources, and the other was a cocktail party with 2 spatially separated point sources of competing speech. At issue was the value of the following sources of information, or interventions, on speech understanding: (a) visual information, (b) adaptive beamformer microphones and remote microphones, (c) bimodal fittings, that is, a CI and contralateral low-frequency acoustic hearing, (d) hearing preservation fittings, that is, a CI with preserved low-frequency acoustic hearing in the same ear plus low-frequency acoustic hearing in the contralateral ear, and (e) bilateral CIs. A remote microphone provided the largest improvement in speech understanding. Visual ...
Journal of Speech, Language, and Hearing Research (JSLHR), Dec 1, 2016
Five experiments probed auditory-visual (AV) understanding of sentences by users of cochlear implants (CIs). Sentence material was presented in auditory (A), visual (V), and AV test conditions to listeners with normal hearing and CI users. (a) Most CI users report that most of the time, they have access to both A and V information when listening to speech. (b) CI users did not achieve better scores on a task of speechreading than did listeners with normal hearing. (c) Sentences that are easy to speechread provided 12 percentage points more gain to speech understanding than did sentences that were difficult. (d) Ease of speechreading for sentences is related to phrase familiarity. (e) Users of bimodal CIs benefit from low-frequency acoustic hearing even when V cues are available, and a second CI adds to the benefit of a single CI when V cues are available. (f) V information facilitates lexical segmentation by improving the recognition of the number of syllables produced and the relat...
Consonant recognition was measured as a function of the number of stimulation channels for Hybrid short-electrode cochlear implant (CI) users, long-electrode CI users, and normal-hearing (NH) listeners in quiet and background noise. Short-electrode CI subjects were tested with 1-6 channels allocated to a frequency range of 1063-7938 Hz. Long-electrode CI subjects were tested with 1-6, 8, or 22 channels allocated to 188-7938 Hz, or 1-6 or 15 channels from the basal 15 electrodes allocated to 1063-7938 Hz. NH listeners were tested with simulations of each CI group/condition. Despite differences in intracochlear electrode spacing for equivalent channel conditions, all CI subject groups performed similarly at each channel condition and improved up to at least four channels in quiet and noise. All CI subject groups underperformed relative to NH subjects. These preliminary findings suggest that the limited channel benefit seen for CI users may not be due solely to increases in channel interactions as a function of electrode density. Other factors such as pre-operative patient history, location of stimulation in the base versus apex, or a limit on the number of electric channels that can be processed cognitively, may also interact with the effects of electrode contact spacing along the cochlea.
Both bilateral cochlear implants (CIs) and bimodal (electric plus contralateral acoustic) stimulation can provide better speech intelligibility than a single CI. In both cases patients need to combine information from two ears into a single percept. In this paper we ask whether the physiological and psychological processes associated with aging alter the ability of bilateral and bimodal CI patients to combine information across two ears in the service of speech understanding. The subjects were 60 adult bilateral CI patients and 91 adult bimodal patients. The test battery was composed of monosyllabic words presented in quiet and the AzBio sentences presented in quiet, at +10 and at +5 dB signal-to-noise ratio (SNR). The subjects were tested in standard audiometric sound booths. Speech and noise were always presented from a single speaker directly in front of the listener. Age and bilateral or bimodal benefit were not significantly correlated for any test measure. Other factors equal, both bilateral CIs and bimodal CIs can be recommended for elderly patients.
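Presenting sentences at a fixed signal-to-noise ratio, as in the +10 and +5 dB SNR conditions above, amounts to scaling the noise so that the speech-to-noise RMS ratio hits the target before mixing. A minimal sketch (the function name and interface are illustrative, not from the study):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale noise so 20*log10(rms(speech)/rms(noise)) == snr_db, then mix."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    scaled = noise * (rms(speech) / rms(noise)) / (10 ** (snr_db / 20))
    return speech + scaled
```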
Journal of Speech, Language, and Hearing Research (JSLHR), Feb 1, 1999
Several authors have evaluated consonant-to-vowel ratio (CVR) enhancement as a means to improve speech recognition in listeners with hearing impairment, with the intention of incorporating this approach into emerging amplification technology. Unfortunately, most previous studies have enhanced CVRs by increasing consonant energy, thus possibly confounding CVR effects with consonant audibility. In this study, we held consonant audibility constant by reducing vowel transition and steady-state energy rather than increasing consonant energy. Performance-by-intensity (PI) functions were obtained for recognition of voiceless stop consonants (/p/, /t/, /k/) presented in isolation (burst and aspiration digitally separated from the vowel) and for consonant-vowel syllables, with re-addition of the vowel /a/. There were three CVR conditions: normal CVR, vowel reduction by 6 dB, and vowel reduction by 12 dB. Testing was conducted in broadband noise fixed at 70 dB SPL and at 85 dB SPL. Six adults with sensorineural hearing impairment and 2 adults with normal hearing served as listeners. Results indicated that CVR enhancement did not improve identification performance when consonant audibility was held constant, except at the higher noise level for one listener with hearing impairment. The re-addition of the vowel energy to the isolated consonant did, however, produce large and significant improvements in phoneme identification.
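Raising the CVR by attenuating the vowel rather than amplifying the consonant reduces to a single gain applied to the vowel samples. As a hedged sketch (the helper name is hypothetical; only the 6 and 12 dB reduction values come from the abstract):

```python
def reduce_vowel(vowel_samples, reduction_db):
    """Attenuate the vowel segment by reduction_db, raising the
    consonant-to-vowel ratio without touching consonant energy."""
    return vowel_samples * 10 ** (-reduction_db / 20)
```

A 6 dB reduction roughly halves the vowel's amplitude; 12 dB quarters it, which is why consonant audibility stays fixed while the CVR grows.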
The aims of this study were 1) to determine the number of channels of stimulation needed by normal-hearing adults and children to achieve a high level of word recognition and 2) to compare the performance of normal-hearing children and adults listening to speech processed into 6 to 20 channels of stimulation with the performance of children who use the Nucleus 22 cochlear implant. In Experiment 1, the words from the Multisyllabic Lexical Neighborhood Test (MLNT) were processed into 6 to 20 channels and output as the sum of sine waves at the center frequency of the analysis bands. The signals were presented to normal-hearing adults and children for identification. In Experiment 2, the wideband recordings of the MLNT words were presented to early-implanted and late-implanted children who used the Nucleus 22 cochlear implant. Experiment 1: Normal-hearing children needed more channels of stimulation than adults to recognize words. Ten channels allowed 99% correct word recognition for adults; 12 channels allowed 92% correct word recognition for children. Experiment 2: The average level of intelligibility for both early- and late-implanted children was equivalent to that found for normal-hearing adults listening to four to six channels of stimulation. The best intelligibility for implanted children was equivalent to that found for normal-hearing adults listening to six channels of stimulation. The distribution of scores for early- and late-implanted children differed. Nineteen percent of the late-implanted children achieved scores below that allowed by a 6-channel processor. None of the early-implanted children fell into this category. The average implanted child must deal with a signal that is significantly degraded. This is likely to prolong the period of language acquisition.
The period could be significantly shortened if implants were able to deliver at least eight functional channels of stimulation. Twelve functional channels of stimulation would provide signals near the intelligibility of wideband signals in quiet.
Sentence intelligibility in quiet and in noise was assessed for two types of signal processing algorithms commonly implemented for cochlear implants. Experiment 1 determined the number of channels, m, in an m-of-18 processor that are necessary for asymptotic performance. Experiment 2 determined the number of fixed channels necessary to equal the performance of the best spectral-maxima processor. A 4-of-18
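The m-of-n (spectral-maxima) strategy described above can be reduced to a per-frame selection rule: of the n analysis channels, keep only the m with the highest energy in each frame and zero the rest. A minimal sketch, with an assumed frames-by-channels array layout:

```python
import numpy as np

def spectral_maxima_select(channel_energies, m):
    """m-of-n channel selection: per analysis frame (row), keep the m
    highest-energy channels and zero the remainder."""
    out = np.zeros_like(channel_energies)
    for t, frame in enumerate(channel_energies):
        top = np.argsort(frame)[-m:]   # indices of the m largest energies
        out[t, top] = frame[top]
    return out
```

For the 4-of-18 case in Experiment 1, only 4 of the 18 channel outputs are stimulated in any given frame.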
Our primary aim was to determine whether listeners in the following patient groups achieve localization accuracy within the 95th percentile of accuracy shown by younger or older normal-hearing (NH) listeners: (1) hearing impaired with bilateral hearing aids, (2) bimodal cochlear implant (CI), (3) bilateral CI, (4) hearing preservation CI, (5) single-sided deaf CI and (6) combined bilateral CI and bilateral hearing preservation. The listeners included 57 young NH listeners, 12 older NH listeners, 17 listeners fit with hearing aids, 8 bimodal CI listeners, 32 bilateral CI listeners, 8 hearing preservation CI listeners, 13 single-sided deaf CI listeners and 3 listeners with bilateral CIs and bilateral hearing preservation. Sound source localization was assessed in a sound-deadened room with 13 loudspeakers arrayed in a 180-degree arc. The root mean square (rms) error for the NH listeners was 6 degrees. The 95th percentile was 11 degrees. Nine of 16 listeners with bilateral hearing aids...
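The rms error metric used above is the root of the mean squared deviation between response and target azimuths across trials. A minimal sketch (function name and interface are illustrative):

```python
import numpy as np

def rms_localization_error(responses_deg, targets_deg):
    """RMS error, in degrees, between response and target azimuths."""
    d = np.asarray(responses_deg, float) - np.asarray(targets_deg, float)
    return float(np.sqrt(np.mean(d ** 2)))
```

For example, responses of +10 and -10 degrees to two sources at 0 degrees give an rms error of 10 degrees.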
IEEE Transactions on Biomedical Engineering, Jun 1, 2007
The speech reception performance of a recipient of the Clarion CII implant was evaluated with a comprehensive set of tests. The same tests were administered for a group of six subjects with normal hearing. Scores for the implant subject were not different from the scores for the normal-hearing subjects, for seven of the nine tests, including the most difficult test used in standard clinical practice. These results are both surprising and encouraging, in that the implant provides only a very crude mimicking of only some aspects of the normal physiology.
To determine durational differences between vowel and nasal segments preceding word-final /t/ and /d/, spectrograms were made of adult speakers' productions of minimal pairs of the type /pent/-/pend/. Vowel, nasal, and vowel plus nasal (vocalic nucleus) durations were greater before /d/ than before /t/. Assuming the voiceless context as a base, the increase in nasal duration in the voiced case was proportionately greater than the increase in vowel duration. This outcome suggests that nasal duration is a more powerful cue to the voicing characteristic of the following consonant than is vowel duration. To test this, adult listeners were presented synthetic CVNC utterances in which the nasal and vowel segments were independently varied in duration over a range of 40 msec to 200 msec and were instructed to label the final stop consonant as either voiced /d/ or voiceless /t/. Although changes in both vowel and nasal duration were sufficient to cue both voiced and voiceless judgements...
Papers by Michael Dorman