
    Lutz Wiegrebe

    Temporal resolution is often measured using the detection of temporal gaps or signals in temporal gaps embedded in long-duration stimuli. In this study, psychoacoustical paradigms are developed for measuring the temporal encoding of transient stimuli. The stimuli consisted of very short pips which, in two experiments, contained a steady-state portion. The carrier was high-pass filtered, dynamically compressed noise, refreshed for every stimulus presentation. The first experiment shows that, with these very short stimuli, gap detection thresholds are about the same as obtained in previous investigations. Experiments II and III show that, using the same stimuli, temporal-separation thresholds and duration-discrimination thresholds are better than gap-detection thresholds. Experiment IV investigates the significance of residual spectral cues for the listeners' performance. In experiment V, temporal separation thresholds were measured as a function of the signal-pip sensation level (SL) in both forward- and backward-masking conditions. The separation thresholds show a strong temporal asymmetry with good separation thresholds independent of signal-pip SL in backward-masking conditions and increasing separation thresholds with decreasing signal-pip SL in forward-masking conditions. A model of the auditory periphery is used to simulate the gap-detection and temporal-separation thresholds quantitatively. By varying parameters like auditory-filter width and transduction time constants, the model provides some insight into how the peripheral auditory system may cope with temporal processing tasks and thus represents a more physiology-related complement to current models of temporal processing.
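A sketch of the transient pip-pair stimuli used for the gap-detection measurements (the 5-ms pip duration, 6-kHz cutoff, and naive FFT high-pass below are assumed illustration values; the study used dynamically compressed noise carriers):

```python
import numpy as np

def gap_stimulus(gap_ms, pip_ms=5.0, fs=48000, hp_cut=6000, seed=0):
    """Two short high-pass noise pips separated by a silent gap.

    Illustrative sketch only: durations, cutoff and the crude FFT-domain
    high-pass are assumptions; the study used dynamically compressed
    noise carriers refreshed on every presentation.
    """
    rng = np.random.default_rng(seed)
    n_pip = int(fs * pip_ms / 1000)
    n_gap = int(fs * gap_ms / 1000)

    def pip():
        spec = np.fft.rfft(rng.standard_normal(n_pip))
        spec[np.fft.rfftfreq(n_pip, 1 / fs) < hp_cut] = 0   # crude high-pass
        return np.fft.irfft(spec, n_pip)

    return np.concatenate([pip(), np.zeros(n_gap), pip()])

stim = gap_stimulus(gap_ms=2.0)   # 2-ms gap between two 5-ms pips
```

Refreshing the carrier per presentation corresponds to changing the `seed` on every trial.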
    A water surface acts not only as an optic mirror but also as an acoustic mirror. Echolocation calls emitted by bats at low heights above water are reflected away from the bat, and hence the background clutter is reduced. Moreover, targets on the surface create an enhanced echo. Here, we formally quantified the effects of surface type and target height on both target detection and discrimination in a combined laboratory and field approach with Myotis daubentonii. In a two-alternative, forced-choice paradigm, the bats had to detect a mealworm and discriminate it from an inedible dummy (20 mm PVC disc). Psychophysical performance was measured as a function of height above either smooth surfaces (water or PVC) or above a clutter surface (artificial grass). At low heights above the clutter surface (10, 20, or 35 cm), the bats' detection performance was worse than above a smooth surface. At a height of 50 cm, the surface structure had no influence on target detection. Above the clutter...
    To localize low-frequency sound sources in azimuth, the binaural system compares the timing of sound waves at the two ears with microsecond precision. A similarly high precision is also seen in the binaural processing of the envelopes of high-frequency complex sounds. Both for low- and high-frequency sounds, interaural time difference (ITD) acuity is to a large extent independent of sound
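The microsecond-precision ITD comparison can be illustrated with a cross-correlation sketch, a common model of binaural coincidence detection (the sample rate, tone frequency, and 500-µs ITD below are illustrative choices, not the study's stimuli):

```python
import numpy as np

fs = 100_000                        # sample rate in Hz; assumed value for illustration
t = np.arange(5000) / fs            # 50 ms of signal
signal = np.sin(2 * np.pi * 500 * t)   # 500-Hz tone, 25 full cycles

# simulate a 500-us interaural time difference: the right ear lags the left
itd_samples = 50                    # 50 samples at 100 kHz = 500 us
left = signal
right = np.roll(signal, itd_samples)

# estimate the ITD as the lag that maximises the interaural cross-correlation
lags = np.arange(-100, 101)
xcorr = [np.dot(left, np.roll(right, -lag)) for lag in lags]
est_us = lags[int(np.argmax(xcorr))] * 1e6 / fs
print(est_us)                       # recovers the 500-us ITD
```

For high-frequency complex sounds, the same comparison would be applied to the extracted envelopes rather than the fine structure.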
    For a gleaning bat hunting prey from the ground, rustling sounds generated by prey movements are essential to invoke a hunting behaviour. The detection of prey-generated rustling sounds may depend heavily on the time structure of the prey-generated and the masking sounds due to their spectral similarity. Here, we systematically investigate the effect of the temporal structure on psychophysical rustling-sound detection in the gleaning bat, Megaderma lyra. A recorded rustling sound serves as the signal; the maskers are either Gaussian noise or broadband noise with various degrees of envelope fluctuations. Exploratory experiments indicate that the selective manipulation of the temporal structure of the rustling sound does not influence its detection in a Gaussian-noise masker. The results of the main experiment show, however, that the temporal structure of the masker has a strong and systematic effect on rustling-sound detection: When the width of irregularly spaced gaps in the masker ...
    The pitch strength of rippled noise and iterated rippled noise has recently been fitted by an exponential function of the height of the first peak in the normalized autocorrelation function [Yost, J. Acoust. Soc. Am. 100, 3329-3335 (1996)]. The current study compares the pitch strengths and autocorrelation functions of rippled noise (RN) and another regular-interval noise, "AABB." RN is generated by delaying a copy of a noise sample and adding it to the undelayed version. AABB with the same pitch is generated by taking a sample of noise (A) with the same duration as the RN delay and repeating it to produce AA, and then concatenating many of these once-repeated sequences to produce AABBCCDD.... The height of the first peak (h1) in the normalized autocorrelation function of AABB is 0.5, identical to that of RN. The current experiments show the following: (1) AABB and RN can be discriminated when the pitch is less than about 250 Hz. (2) For these low pitches, the pitch strength of AABB is greater than that for RN whereas it is about the same for pitches above 250 Hz. (3) When RN is replaced by iterated rippled noise (IRN) adjusted to match the pitch strength of AABB, the two are no longer discriminable. The pitch-strength difference between AABB and RN below 250 Hz is explained in terms of a three-stage, running-autocorrelation model. It is suggested that temporal integration of pitch information is achieved in two stages separated by a nonlinearity. The first integration stage is implemented as running autocorrelation with a time constant of 1.5 ms. The second model stage is a nonlinear transformation. In the third model stage, the output of the nonlinear transformation is long-term averaged (second integration stage) to provide a measure of pitch strength. The model provides an excellent fit to the pitch-strength matching data over a wide range of pitches.
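The delay-and-add construction of RN, the AABB construction, and the h1 = 0.5 prediction can be checked directly; the 48-kHz rate and 2-ms delay (a ~500-Hz pitch) below are assumed example values, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48000
delay = 96            # 2-ms delay -> ~500-Hz pitch (assumed example values)
n = fs                # 1 s of signal

# Rippled noise: delay a copy of the noise and add it to the undelayed version
noise = rng.standard_normal(n + delay)
rn = noise[delay:] + noise[:n]

# AABB: repeat each delay-long noise segment once, then concatenate (AABBCCDD...)
segs = [rng.standard_normal(delay) for _ in range(n // (2 * delay))]
aabb = np.concatenate([np.concatenate([s, s]) for s in segs])

def h1(x, lag):
    """Height of the normalized autocorrelation at the given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

print(round(h1(rn, delay), 2), round(h1(aabb, delay), 2))   # both near 0.5
```

Both stimuli share h1 = 0.5 despite being discriminable below about 250 Hz, which is why the paper needs the running-autocorrelation stage rather than the long-term autocorrelation alone.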
    Echolocating bats can not only extract spatial information from the auditory analysis of their ultrasonic emissions, they can also discriminate, classify and identify the three-dimensional shape of objects reflecting their emissions. Effective object recognition requires the segregation of size and shape information. Previous studies have shown that, as in visual object recognition, bats can transfer an echo-acoustic object discrimination task to objects of different size and that they spontaneously classify scaled versions of virtual echo-acoustic objects according to trained virtual-object standards. The current study aims to bridge the gap between these previous findings using a different class of real objects and a classification paradigm instead of a discrimination paradigm. Echolocating bats (Phyllostomus discolor) were trained to classify an object as either a sphere or an hourglass-shaped object. The bats spontaneously generalised this classification to objects of the same shape. The generalisation cannot be explained based on similarities of the power spectra or temporal structures of the echo-acoustic object images and thus requires dedicated neural mechanisms dealing with size-invariant echo-acoustic object analysis. Control experiments with human listeners classifying the echo-acoustic images of the objects confirm the universal validity of auditory size invariance. The current data thus corroborate and extend previous psychophysical evidence for sonar auditory-object normalisation and suggest that the underlying auditory mechanisms, following the initial neural extraction of the echo-acoustic images, may be very similar in bats and humans.
    Bats use natural landmarks such as trees for orientation. Echoes reflected by a tree are stochastic and complex. The degree of irregular loudness fluctuations of perceived echoes, i.e. the echo roughness, may be used to classify natural objects reliably. Bats are able to discriminate and classify echoes of different roughness. A neural correlate of the psychophysical roughness sensitivity has been described in the auditory cortex of the bat Phyllostomus discolor. Here, the role of the inferior colliculus of P. discolor in the neural representation of echo roughness is explored. Using extracellular recording techniques, responses were obtained to simulated stochastic echoes of different roughness. The representation of these irregular loudness fluctuations in echoes is compared to the representation of periodic loudness fluctuations elicited by sinusoidal amplitude modulation (SAM) and to the shape of the peri-stimulus time histogram in response to pure tones. About half the recorded units responded significantly differently to echoes with different roughness. Roughness sensitivity was related to the units' sensitivity to the depth of an SAM: units that responded best to strong SAMs also responded best to echoes of high roughness. In response to pure tones, these units were typically characterized as Onset units. In contrast to the auditory cortex experiments, the responses of many units in the inferior colliculus decreased with increasing echo roughness. These units typically preferred weak SAMs and showed a sustained response to pure tones. The data show that auditory midbrain sensitivity to SAM is an important prerequisite for the neural representation of echo roughness as an ecologically important echo-acoustic parameter.
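The periodic loudness fluctuations used as the comparison condition can be generated as a standard SAM stimulus; the carrier, modulation rate, and depth below are placeholder values, not the recording parameters:

```python
import numpy as np

def sam_tone(fc=20_000, fm=100.0, depth=0.8, dur=0.5, fs=192_000):
    """Sinusoidally amplitude-modulated tone.

    `depth` is the modulation depth the units were sensitive to; all
    parameter values here are placeholders, not the study's stimuli.
    """
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * fm * t)
    return envelope * np.sin(2 * np.pi * fc * t)

s = sam_tone()   # peak amplitude approaches 1 + depth
```

Sweeping `depth` toward 0 yields the "weak SAM" stimuli preferred by the sustained units, while `depth` near 1 corresponds to the strong SAMs preferred by Onset-type units.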
    Fast movement in complex environments requires the controlled evasion of obstacles. Sonar-based obstacle evasion involves analysing the acoustic features of object-echoes (e.g., echo amplitude) that correlate with this object's physical features (e.g., object size). Here, we investigated sonar-based obstacle evasion in bats emerging in groups from their day roost. Using video-recordings, we first show that the bats evaded a small real object (ultrasonic loudspeaker) despite the familiar flight situation. Secondly, we studied the sonar coding of object size by adding a larger virtual object. The virtual object echo was generated by real-time convolution of the bats' calls with the acoustic impulse response of a large spherical disc and played from the loudspeaker. Contrary to the real object, the virtual object did not elicit evasive flight, despite the spectro-temporal similarity of real and virtual object echoes. Yet, their spatial echo features differ: virtual object echoes lack the spread of angles of incidence from which the echoes of large objects arrive at a bat's ears (sonar aperture). We hypothesise that this mismatch of spectro-temporal and spatial echo features caused the lack of virtual object evasion and suggest that the sonar aperture of object echoscapes contributes to the sonar coding of object size.
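The virtual-object playback rests on convolving each call with the object's acoustic impulse response. An offline sketch with a hypothetical two-glint impulse response (the sweep shape and all tap values are invented for illustration, not the disc's measured response):

```python
import numpy as np

fs = 192_000
t = np.arange(384) / fs                      # 2-ms call window

# downward FM sweep (80 kHz -> 20 kHz) as a stand-in for an echolocation call
call = np.sin(2 * np.pi * (80_000 - 15_000_000 * t) * t)

# hypothetical impulse response of a reflector: a direct reflection plus a
# weaker glint 120 samples (~0.6 ms) later; values are illustrative only
ir = np.zeros(200)
ir[0], ir[120] = 1.0, 0.4

echo = np.convolve(call, ir)                 # the echo played back to the bat
```

In the experiment this convolution ran in real time on the bats' own calls; the point of the sketch is that such an echo reproduces spectro-temporal object features but, coming from a single loudspeaker, not the spatial spread (sonar aperture) of a real large object.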
    Humans often have to focus on a single target sound while ignoring competing maskers in everyday situations. In such conditions, speech intelligibility (SI) is improved when a target speaker is spatially separated from a masker (spatial release from masking, SRM) compared to situations where both are co-located. Such asymmetric spatial configurations lead to a 'better-ear effect' with improved signal-to-noise ratio (SNR) at one ear. However, maskers often surround the listener leading to more symmetric configurations where better-ear effects are absent in a long-term, wideband sense. Nevertheless, better-ear glimpses distributed across time and frequency persist and were suggested to account for SRM (Brungart and Iyer 2012). Here, speech reception was assessed using symmetric masker configurations while varying the spatio-temporal distribution of potential better-ear glimpses. Listeners were presented with a frontal target and eight single-talker maskers in four different sym...
    Humans localize sounds by comparing inputs across the two ears, resulting in a head-centered representation of sound-source position. When the head moves, information about head movement must be combined with the head-centered estimate to correctly update the world-centered sound-source position. Spatial updating has been extensively studied in the visual system, but less is known about how head movement signals interact with binaural information during auditory spatial updating. In the current experiments, listeners compared the world-centered azimuthal position of two sound sources presented before and after a head rotation that depended on condition. In the Active condition, subjects rotated their head by ~35° to the left or right, following a pre-trained trajectory. In the Passive condition, subjects were rotated along the same trajectory in a rotating chair. In the Cancellation condition, subjects rotated their head as in the Active condition, but the chair was counter-rotated ...
    Many animal species adjust the spectral composition of their acoustic signals to variable environments. However, the physiological foundation of such spectral plasticity is often unclear. The source-filter theory of sound production, initially established for human speech, applies to vocalizations in birds and mammals. According to this theory, adjusting the spectral structure of vocalizations could be achieved by modifying either the laryngeal/syringeal source signal or the vocal tract, which filters the source signal. Here, we show that in pale spear-nosed bats, spectral plasticity induced by moderate level background noise is dominated by the vocal tract rather than the laryngeal source signal. Specifically, we found that with increasing background noise levels, bats consistently decreased the spectral centroid of their echolocation calls up to 3.2 kHz, together with other spectral parameters. In contrast, noise-induced changes in fundamental frequency were small (maximally 0.1 k...
    The perceptual insensitivity to low frequency (LF) sound in humans has led to an underestimation of the physiological impact of LF exposure on the inner ear. It is known, however, that intense, LF sound causes cyclic changes of indicators of inner ear function after LF stimulus offset, for which the term "Bounce" phenomenon has been coined. Here, we show that the mechanical amplification of outer hair cells (OHCs) is significantly affected after the presentation of LF sound. First, we show the Bounce phenomenon in slow level changes of quadratic, but not cubic, distortion product otoacoustic emissions (DPOAEs). Second, Bouncing in response to LF sound is seen in slow, oscillating frequency and correlated level changes of spontaneous otoacoustic emissions (SOAEs). Surprisingly, LF sound can induce new SOAEs which can persist for tens of seconds. Further, we show that the Bounce persists under free-field conditions, i.e. without an in-ear probe occluding the auditory meatus. Finally, we show that the Bounce is affected by contralateral acoustic stimulation synchronised to the ipsilateral LF sound. These findings clearly demonstrate that the origin of the Bounce lies in the modulation of cochlear amplifier gain. We conclude that activity changes of OHCs are the source of the Bounce, most likely caused by a temporary disturbance of OHC calcium homeostasis. In the light of these findings, the effects of long-duration, anthropogenic LF sound on the human inner ear require further research.
    Short-term adjustments of signal characteristics allow animals to maintain reliable communication in noise. Noise-dependent vocal plasticity often involves simultaneous changes in multiple parameters. Here, we quantified for the first time the relative contributions of signal amplitude, duration, and redundancy for improving signal detectability in noise. To this end, we used a combination of behavioural experiments on pale spear-nosed bats (Phyllostomus discolor) and signal detection models. In response to increasing noise levels, all bats raised the amplitude of their echolocation calls by 1.8-7.9 dB (the Lombard effect). Bats also increased signal duration by 13%-85%, corresponding to an increase in detectability of 1.0-5.3 dB. Finally, in some noise conditions, bats increased signal redundancy by producing more call groups. Assuming optimal cognitive integration, this could result in a further detectability improvement by up to 4 dB. Our data show that while the main improvement in signal detectability was due to the Lombard effect, increasing signal duration and redundancy can also contribute markedly to improving signal detectability. Overall, our findings demonstrate that the observed adjustments of signal parameters in noise are matched to how these parameters are processed in the receiver's sensory system, thereby facilitating signal transmission in fluctuating environments.
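The contribution of longer call duration can be put into numbers under a common energy-integration assumption, where detectability grows as 10·log10 of the duration ratio. This is a textbook approximation only; the 1.0-5.3 dB values reported above come from the study's own signal detection models, which need not coincide with it:

```python
import math

# Detectability gain from lengthening a signal under the simple (assumed)
# energy-integration rule: gain_dB = 10 * log10(T_new / T_old).
for pct_longer in (13, 85):           # the reported 13%-85% duration increases
    ratio = 1 + pct_longer / 100
    print(pct_longer, round(10 * math.log10(ratio), 2))
```

Under the same kind of assumption, optimally integrating n redundant call groups would add roughly 10·log10(n) dB, which is the sense in which redundancy "could result in a further detectability improvement".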
    Intense, low-frequency sound presented to the mammalian cochlea induces temporary changes of cochlear sensitivity, for which the term 'Bounce' phenomenon has been coined. Typical manifestations are slow oscillations of hearing thresholds or the level of otoacoustic emissions. It has been suggested that these alterations are caused by changes of the mechano-electrical transducer transfer function of outer hair cells (OHCs). Shape estimates of this transfer function can be derived from low-frequency-biased distortion product otoacoustic emissions (DPOAE). Here, we tracked the transfer function estimates before and after triggering a cochlear Bounce. Specifically, cubic DPOAEs, modulated by a low-frequency biasing tone, were followed over time before and after induction of the cochlear Bounce. Most subjects showed slow, biphasic changes of the transfer function estimates after low-frequency sound exposure relative to the preceding control period. Our data show that the operating point changes biphasically on the transfer function with an initial shift away from the inflection point followed by a shift towards the inflection point before returning to baseline values. Changes in transfer function and operating point lasted for about 180 s. Our results are consistent with the hypothesis that intense, low-frequency sound disturbs regulatory mechanisms in OHCs. The homeostatic readjustment of these mechanisms after low-frequency offset is reflected in slow oscillations of the estimated transfer functions.
    Temporal integration is a crucial feature of auditory temporal processing. We measured the psychophysical temporal integration of acoustic intensity in the echolocating bat Megaderma lyra using a two-alternative forced-choice procedure. A measuring paradigm was chosen in which the absolute threshold for pairs of short tone pips was determined as a function of the temporal separation between the pips. The time constants determined with this paradigm are a crucial characteristic of the sonar system of M. lyra, a species orientating in its environment by very short broadband sonar calls emitted at high rates. Two different carrier frequencies for the tone pips were used to obtain data from the lower and the higher half of the hearing range of M. lyra. Both in the lower and in the higher frequency range, M. lyra showed very short time constants of about 220 microseconds. Our results are comparable to data from the echolocating dolphin, Tursiops truncatus, showing click integration times of about 260 microseconds and to estimates of auditory temporal integration in the context of echo clutter interference in the big brown bat.
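A first-order leaky integrator with the ~220-µs time constant reproduces the qualitative effect: pip pairs closer than the time constant summate to a larger internal peak, so their detection threshold is lower. A minimal sketch under assumed pip parameters, not the study's analysis:

```python
import numpy as np

def peak_after_integration(sep_us, tau_us=220.0):
    """Peak output of a leaky integrator driven by two 50-us tone pips.

    Minimal sketch of exponential temporal integration (1 sample = 1 us);
    the pip shape and integrator form are assumptions, not the study's model.
    """
    x = np.concatenate([np.ones(50), np.zeros(int(sep_us)),
                        np.ones(50), np.zeros(2000)])
    y, peak = 0.0, 0.0
    for v in x:
        y += (v - y) / tau_us        # first-order leaky integration
        peak = max(peak, y)
    return peak

# pips separated by less than the ~220-us time constant integrate to a
# higher internal peak than widely separated pips
print(peak_after_integration(50) > peak_after_integration(1000))
```

Fitting `tau_us` so that the modelled threshold-versus-separation curve matches the behavioural data is, in spirit, how such time constants are estimated.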
    Human hearing is rather insensitive to very low frequencies (i.e. below 100 Hz). Despite this insensitivity, low-frequency sound can cause oscillating changes of cochlear gain even in inner ear regions processing much higher frequencies. These alterations outlast the duration of the low-frequency stimulation by several minutes, for which the term 'bounce phenomenon' has been coined. Previously, we have shown that the bounce can be traced by monitoring frequency and level changes of spontaneous otoacoustic emissions (SOAEs) over time. It has been suggested elsewhere that large receptor potentials elicited by low-frequency stimulation produce a net Ca(2+) influx and associated gain decrease in outer hair cells. The bounce presumably reflects an underdamped, homeostatic readjustment of increased Ca(2+) concentrations and related gain changes after low-frequency sound offset. Here, we test this hypothesis by activating the medial olivocochlear efferent system during presentation of the bounce-evoking low-frequency (LF) sound. The efferent system is known to modulate outer hair cell Ca(2+) concentrations and receptor potentials, and therefore, it should modulate the characteristics of the bounce phenomenon. We show that simultaneous presentation of contralateral broadband noise (100 Hz-8 kHz, 65 and 70 dB SPL, 90 s, activating the efferent system) and ipsilateral low-frequency sound (30 Hz, 120 dB SPL, 90 s, inducing the bounce) affects the characteristics of bouncing SOAEs recorded after low-frequency sound offset. Specifically, the decay time constant of the SOAE level changes is shorter, and the transient SOAE suppression is less pronounced. Moreover, the number of new, transient SOAEs, as seen during the bounce, is reduced. Taken together, activation of the medial olivocochlear system during induction of the bounce phenomenon with low-frequency sound results in changed characteristics of the bounce phenomenon.
Thus, our data provide experimental support for the hypothesis that outer hair cell calcium homeostasis is the source of the bounce phenomenon.
    The ability of blind humans to navigate complex environments through echolocation has received rapidly increasing scientific interest. However, technical limitations have precluded a formal quantification of the interplay between echolocation and self-motion. Here, we use a novel virtual echo-acoustic space technique to formally quantify the influence of self-motion on echo-acoustic orientation. We show that both the vestibular and proprioceptive components of self-motion contribute significantly to successful echo-acoustic orientation in humans: specifically, our results show that vestibular input induced by whole-body self-motion resolves orientation-dependent biases in echo-acoustic cues. Fast head motions, relative to the body, provide additional proprioceptive cues which allow subjects to effectively assess echo-acoustic space referenced against the body orientation. These psychophysical findings clearly demonstrate that human echolocation is well suited to drive precise locomotor adjustments. Our data shed new light on the sensory-motor interactions, and on possible optimization strategies underlying echolocation in humans.
    Echolocation is an active sense enabling bats and toothed whales to orient in darkness through echo returns from their ultrasonic signals. Immediately before prey capture, both bats and whales emit a buzz with such high emission rates (≥180 Hz) and overall duration so short that its functional significance remains an enigma. To investigate sensory-motor control during the buzz of the insectivorous bat Myotis daubentonii, we removed prey, suspended in air or on water, before expected capture. The bats responded by shortening their echolocation buzz gradually the earlier the prey was removed, down to approximately 100 ms (30 cm) before expected capture, after which the full buzz sequence was emitted both in air and over water. Bats trawling over water also performed the full capture behavior, but in-air capture motions were aborted, even at very late prey removals (<20 ms = 6 cm before expected contact). Thus, neither the buzz nor capture movements are stereotypical, but dynamically ad...
    Some blind humans have developed the remarkable ability to detect and localize objects through the auditory analysis of self-generated tongue clicks. These echolocation experts show a corresponding increase in 'visual' cortex activity when listening to echo-acoustic sounds. Echolocation in real-life settings involves multiple reflections as well as active sound production, neither of which has been systematically addressed. We developed a virtualization technique that allows participants to actively perform such biosonar tasks in virtual echo-acoustic space during magnetic resonance imaging (MRI). Tongue clicks, emitted in the MRI scanner, are picked up by a microphone, convolved in real time with the binaural impulse responses of a virtual space, and presented via headphones as virtual echoes. In this manner, we investigated the brain activity during active echo-acoustic localization tasks. Our data show that, in blind echolocation experts, activations in the calcarine cort...
    Locomotion and foraging on the wing require precise navigation in more than just the horizontal plane. Navigation in three dimensions and, specifically, precise adjustment of flight height are essential for flying animals. Echolocating bats drink from water surfaces in flight, which requires exceptionally precise vertical navigation. Here, we exploit this behavior in the bat, Phyllostomus discolor, to understand the biophysical and neural mechanisms that allow for sonar-guided navigation in the vertical plane. In a set of behavioral experiments, we show that for echolocating bats, adjustment of flight height depends on the tragus in their outer ears. Specifically, the tragus imposes elevation-specific spectral interference patterns on the echoes of the bats' sonar emissions. Head-related transfer functions of our bats show that these interference patterns are most conspicuous around 55 kHz. This conspicuousness is faithfully preserved in the frequency tuning ...
    Several studies have shown that blind humans can gather spatial information through echolocation. However, when localizing sound sources, the precedence effect suppresses spatial information of echoes, and thereby conflicts with effective echolocation. This study investigates the interaction of echolocation and echo suppression in terms of discrimination suppression in virtual acoustic space. In the 'Listening' experiment, sighted subjects discriminated between positions of a single sound source, the leading or the lagging of two sources, respectively. In the 'Echolocation' experiment, the sources were replaced by reflectors. Here, the same subjects evaluated echoes generated in real time from self-produced vocalizations and thereby discriminated between positions of a single reflector, the leading or the lagging of two reflectors, respectively. Two key results were observed. First, sighted subjects can learn to discriminate positions of reflective surfaces echo-acou...
    To localize low-frequency sound sources in azimuth, the binaural system compares the timing of sound waves at the two ears with microsecond precision. A similarly high precision is also seen in the binaural processing of the envelopes of high-frequency complex sounds. Both for low- and high-frequency sounds, interaural time difference (ITD) acuity is to a large extent independent of sound
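The binaural timing comparison described above can be illustrated by estimating the ITD as the lag of the cross-correlation peak between the two ear signals. A toy sketch (NumPy; the noise burst, sampling rate, and sign convention are chosen for illustration):

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference in seconds from the lag of
    the cross-correlation peak; positive means the left-ear signal leads."""
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)
    return -lag / fs

fs = 96000
rng = np.random.default_rng(0)
burst = rng.standard_normal(1000)
delay = 48                                    # 0.5 ms: right ear lags
left = np.concatenate([burst, np.zeros(delay)])
right = np.concatenate([np.zeros(delay), burst])
itd = estimate_itd(left, right, fs)           # ≈ +0.0005 s
```

A broadband burst is used deliberately: for a periodic tone the cross-correlation peak is ambiguous modulo the period, which is why low-frequency fine structure and high-frequency envelopes are the usable ITD carriers.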
    The skills of some blind humans orienting in their environment through the auditory analysis of reflections from self-generated sounds have received little scientific attention to date. Here we present data from a series of formal psychophysical experiments with sighted subjects trained to evaluate features of a virtual echo-acoustic space, allowing for rigorous and fine-grained control of the stimulus parameters. The data show how subjects shape both their vocalisations and auditory analysis of the echoes to serve specific echo-acoustic tasks. First, we show that humans can echo-acoustically discriminate target distances with a resolution of less than 1 m for reference distances above 3.4 m. For a reference distance of 1.7 m, corresponding to an echo delay of only 10 ms, distance JNDs were typically around 0.5 m. Second, we explore the interplay between the precedence effect and echolocation. We show that the strong perceptual asymmetry between lead and lag is weakened during echolocation. Finally, we show that through the auditory analysis of self-generated sounds, subjects discriminate room-size changes as small as 10%. In summary, the current data confirm the practical efficacy of human echolocation, and they provide a rigorous psychophysical basis for addressing its neural foundations.
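The correspondence between reference distance and echo delay quoted above follows from the two-way travel time t = 2d/c. A quick check of the figures (assuming c ≈ 343 m/s in air):

```python
def echo_delay(distance_m, c=343.0):
    """Two-way travel time (s) of an echo from a reflector at distance_m."""
    return 2.0 * distance_m / c

# Reference distance from the study: 1.7 m -> roughly 10 ms echo delay.
print(round(echo_delay(1.7) * 1000, 1))   # -> 9.9 (ms)
```

The same relation converts the ~0.5 m distance JND at 1.7 m into a delay JND of roughly 3 ms.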
    Recent temporal models of pitch and amplitude modulation perception converge on a relatively realistic implementation of cochlear processing followed by a temporal analysis of periodicity. However, for modulation perception, a modulation filterbank is applied, whereas for pitch perception, autocorrelation is applied. Considering the large overlap between pitch and modulation perception, this is not parsimonious. Two experiments are presented to investigate the interaction between carrier periodicity, which produces strong pitch sensations, and envelope periodicity using broadband stimuli. Results show that in the presence of carrier periodicity, detection of amplitude modulation is impaired throughout the tested range (8-1000 Hz). By contrast, detection of carrier periodicity in the presence of an additional amplitude modulation is impaired only for very low frequencies below the pitch range (<33 Hz). Predictions of a generic implementation of a modulation-filterbank model and an autocorrelation model are compared to the data. Both models were too insensitive to high-frequency envelope or carrier periodicity and to infra-pitch carrier periodicity. Additionally, both models simulated modulation detection quite well but underestimated the detrimental effect of carrier periodicity on modulation detection. It is suggested that a hybrid model consisting of bandpass envelope filters with a ripple in their passband may provide a functionally successful and physiologically plausible basis for a unified model of auditory periodicity extraction.
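The autocorrelation stage of such models can be sketched as a normalized autocorrelation whose highest peak within the candidate period range gives the dominant periodicity and its strength. A minimal illustration (NumPy; the search range and test signal are illustrative, not the generic model implementation compared in the study):

```python
import numpy as np

def periodicity_strength(x, fs, min_f=30.0, max_f=1000.0):
    """Return (best frequency in Hz, normalized autocorrelation peak 0..1)
    over lags corresponding to the range [min_f, max_f]."""
    x = x - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # non-negative lags
    ac = ac / ac[0]                                     # normalize by power
    lo = int(fs / max_f)                                # shortest lag
    hi = int(fs / min_f)                                # longest lag
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag, ac[lag]

fs = 16000
t = np.arange(fs) / fs                       # 1 s of signal
x = np.sign(np.sin(2 * np.pi * 100 * t))     # pulse-like 100 Hz carrier
f_est, strength = periodicity_strength(x, fs)
```

A modulation filterbank replaces this single lag search with a bank of bandpass envelope filters; the hybrid model proposed above keeps the filterbank but adds passband ripple so that it inherits some of autocorrelation's sensitivity to fine periodicity.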
    Navigating on the wing in complete darkness is a challenging task for echolocating bats. It requires the detailed analysis of spatial and temporal information gained through echolocation. Thus neural encoding of spatiotemporal echo information is a major function in the bat auditory system. In this study we presented echoes in virtual acoustic space and used a reverse-correlation technique to investigate the spatiotemporal response characteristics of units in the inferior colliculus (IC) and the auditory cortex (AC) of the bat Phyllostomus discolor. Spatiotemporal response maps (STRMs) of IC units revealed an organization of suppressive and excitatory regions that provided pronounced contrast enhancement along both the time and azimuth axes. Most IC units showed either spatially centralized short-latency excitation spatiotemporally embedded in strong suppression, or the opposite, i.e., central short-latency suppression embedded in excitation. This complementary arrangement of excitation and suppression was very rarely seen in AC units. In contrast, STRMs in the AC revealed much less suppression, sharper spatiotemporal tuning, and often a special spatiotemporal arrangement of two excitatory regions. Temporal separation of excitatory regions ranged up to 25 ms and was thus in the range of the temporal delays that occur during natural target ranging in bats. Our data indicate that spatiotemporal processing of echo information in the bat auditory midbrain and cortex serves very different purposes: Whereas the spatiotemporal contrast enhancement provided by the IC contributes to echo-feature extraction, the AC reflects the result of this processing in terms of a high selectivity and task-oriented recombination of the extracted features.
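In its simplest one-dimensional form, the reverse-correlation technique mentioned above is a spike-triggered average: the stimulus segments preceding each spike are averaged, revealing the stimulus feature that drives the unit (the STRMs in the study extend this to an azimuth-by-time map). A toy sketch (NumPy; the simulated fixed-latency unit is hypothetical):

```python
import numpy as np

def spike_triggered_average(stimulus, spike_times, window):
    """Average the `window` stimulus samples preceding each spike."""
    segs = [stimulus[t - window:t] for t in spike_times if t >= window]
    return np.mean(segs, axis=0)

# Hypothetical unit: it fires whenever the stimulus exceeded a threshold
# exactly 5 samples earlier (a fixed response latency).
rng = np.random.default_rng(1)
stim = rng.standard_normal(5000)
latency = 5
spikes = [i + latency for i in range(len(stim) - latency) if stim[i] > 2.0]
sta = spike_triggered_average(stim, spikes, window=20)
# The STA peaks at the segment position 5 samples before the spike.
```

With a spatial stimulus dimension added (echoes from different azimuths), the same averaging yields the spatiotemporal response maps described above.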
