1.  The effect of different cochlear implant microphones on acoustic hearing individuals’ binaural benefits for speech perception in noise 
Ear and hearing  2011;32(4):468-484.
Cochlear implant microphones differ in placement, frequency response, and other characteristics such as whether they are directional. Although normal hearing individuals are often used as controls in studies examining cochlear implant users’ binaural benefits, the considerable differences across cochlear implant microphones make such comparisons potentially misleading. The goal of this study was to examine binaural benefits for speech perception in noise for normal hearing individuals using stimuli processed by head-related transfer functions (HRTFs) based on the different cochlear implant microphones.
HRTFs were created for different cochlear implant microphones and used to test participants on the Hearing in Noise Test. Experiment 1 tested cochlear implant users and normal hearing individuals with HRTF-processed stimuli and with sound field testing to determine whether the HRTFs adequately simulated sound field testing. Experiment 2 determined the measurement error and performance-intensity function for the Hearing in Noise Test with normal hearing individuals listening to stimuli processed with the various HRTFs. Experiment 3 compared normal hearing listeners’ performance across HRTFs to determine how the HRTFs affected performance. Experiment 4 evaluated binaural benefits for normal hearing listeners using the various HRTFs, including ones that were modified to investigate the contributions of interaural time and level cues.
The results indicated that the HRTFs adequately simulated sound field testing for the Hearing in Noise Test. They also demonstrated that the test-retest reliability and performance-intensity function were consistent across HRTFs, and that the measurement error for the test was 1.3 dB, with a change in signal-to-noise ratio of 1 dB reflecting a 10% change in intelligibility. There were significant differences in performance when using the various HRTFs, with particularly good thresholds for the HRTF based on the directional microphone when the speech and masker were spatially separated, emphasizing the importance of measuring binaural benefits separately for each HRTF. Evaluation of binaural benefits indicated that binaural squelch and spatial release from masking were found for all HRTFs and binaural summation was found for all but one HRTF, although binaural summation was less robust than the other types of binaural benefits. Additionally, the results indicated that neither interaural time nor level cues dominated binaural benefits for the normal hearing participants.
This study provides a means to measure the degree to which cochlear implant microphones affect acoustic hearing with respect to speech perception in noise. It also provides measures that can be used to evaluate the independent contributions of interaural time and level cues. These measures provide tools that can aid researchers in understanding and improving binaural benefits in acoustic hearing individuals listening via cochlear implant microphones.
PMCID: PMC3120920  PMID: 21412155
Hearing in Noise Test; cochlear implant; binaural benefits
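The performance-intensity relation reported above (a 1 dB change in signal-to-noise ratio corresponding to roughly a 10% change in intelligibility) can be sketched with a logistic model whose midpoint slope matches that figure. This is a generic sketch, not the study's fitting procedure, and the 50%-point speech reception threshold (SRT) below is an arbitrary illustrative value.

```python
import math

def intelligibility(snr_db, srt_db=-2.0, slope_per_db=0.10):
    """Proportion of speech understood at a given SNR (logistic model).

    slope_per_db is the slope at the 50% point; for a logistic
    p = 1 / (1 + exp(-k * (x - srt))), the midpoint slope is k / 4.
    The default SRT of -2 dB is an assumed illustrative value.
    """
    k = 4.0 * slope_per_db
    return 1.0 / (1.0 + math.exp(-k * (snr_db - srt_db)))
```

Near the SRT, moving the SNR by 1 dB shifts predicted intelligibility by close to 10 percentage points, consistent with the relation the abstract reports.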
2.  Sound Localization Cues in the Marmoset Monkey 
Hearing research  2009;260(1-2):96.
The most important acoustic cues available to the brain for sound localization are produced by the interaction of sound with the animal's head and external ears. As a first step in understanding the relation between these cues and their neural representation in a vocal New World primate, we measured head-related transfer functions (HRTFs) across frequency for a wide range of sound locations in three anesthetized marmoset monkeys. The HRTF magnitude spectrum has a broad resonance peak at 6-12 kHz that coincides with the frequency range of the major call types of this species. A prominent first spectral notch (FN) in the HRTF magnitude above this resonance was observed at most source locations. The center frequency of the FN increased monotonically from ∼12-26 kHz with increases in elevation in the lateral field. In the frontal field, FN frequency changed in a less orderly fashion with source position. From the HRTFs we derived interaural time differences (ITDs) and interaural level differences (ILDs). ITDs and ILDs (below 12 kHz) varied as a function of azimuth between ±250 μs and ±20 dB, respectively. A reflexive orienting behavioral paradigm was used to confirm that marmosets can orient to sound sources.
PMCID: PMC2819082  PMID: 19963054
ITD; ILD; spectral notch; sound localization; HRTF; marmoset behavior
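The derivation of ITDs and ILDs from measured HRTFs described above can be illustrated with a minimal sketch: the ITD as the lag of the maximum interaural cross-correlation of the head-related impulse responses (HRIRs), and the ILD as the RMS level ratio in dB. The paper's exact estimation procedure is not stated here, and the HRIRs in the example are synthetic toys.

```python
import math

def itd_from_hrirs(left, right, fs):
    """ITD in seconds from the lag of maximum cross-correlation
    (positive: sound reaches the left ear first)."""
    n = len(left)
    def corr(lag):
        return sum(left[i] * right[i + lag]
                   for i in range(max(0, -lag), min(n, n - lag)))
    best = max(range(-n + 1, n), key=corr)
    return best / fs

def ild_from_hrirs(left, right):
    """ILD in dB from broadband RMS levels (positive: louder at the left ear)."""
    def rms(x):
        return math.sqrt(sum(v * v for v in x) / len(x))
    return 20.0 * math.log10(rms(left) / rms(right))
```

For a toy pair in which the right HRIR is a delayed, attenuated copy of the left, the functions recover the imposed delay and level difference.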
3.  Acoustic Cues for Sound Source Distance and Azimuth in Rabbits, a Racquetball and a Rigid Spherical Model 
There are numerous studies measuring the transfer functions representing signal transformation between a source and each ear canal, i.e., the head-related transfer functions (HRTFs), for various species. However, only a handful of these address the effects of sound source distance on HRTFs. This is the first study of HRTFs in the rabbit where the emphasis is on the effects of sound source distance and azimuth on HRTFs. With the rabbit placed in an anechoic chamber, we made acoustic measurements with miniature microphones placed deep in each ear canal to a sound source at different positions (10–160 cm distance, ±150° azimuth). The sound was a logarithmically swept broadband chirp. For comparisons, we also obtained the HRTFs from a racquetball and a computational model for a rigid sphere. We found that (1) the spectral shape of the HRTF in each ear changed with sound source location; (2) interaural level difference (ILD) increased with decreasing distance and with increasing frequency. Furthermore, ILDs can be substantial even at low frequencies when distance is close; and (3) interaural time difference (ITD) decreased with decreasing distance and generally increased with decreasing frequency. The observations in the rabbit were reproduced, in general, by those in the racquetball, albeit greater in magnitude in the rabbit. In the sphere model, the results were partly similar and partly different than those in the racquetball and the rabbit. These findings refute the common notions that ILD is negligible at low frequencies and that ITD is constant across frequency. These misconceptions became evident when distance-dependent changes were examined.
PMCID: PMC2975892  PMID: 20526728
head-related transfer function (HRTF); sound localization; acoustics; auditory distance; interaural time difference (ITD) cues; interaural level difference (ILD) cues; spectral cues
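A minimal geometric account of the distance effect reported above (ILD growing as the source approaches, even at low frequencies) is spherical spreading alone: with the two ears idealized as points on the interaural axis and head diffraction ignored, the 1/r level difference becomes substantial at close range and vanishes at far range. The 3 cm ear half-spacing below is an assumed illustrative value, not a figure from the paper.

```python
import math

def ild_spreading_db(distance_m, azimuth_deg, ear_half_spacing_m=0.03):
    """Level difference (dB) from 1/r spreading alone for a point source.

    Ears are idealized as two points separated by 2 * ear_half_spacing_m
    on the interaural axis; diffraction by the head is ignored, so this
    captures only the distance-dependent part of the ILD.
    """
    az = math.radians(azimuth_deg)
    # source position in the horizontal plane (x: interaural axis)
    sx, sy = distance_m * math.sin(az), distance_m * math.cos(az)
    r_near = math.hypot(sx - ear_half_spacing_m, sy)
    r_far = math.hypot(sx + ear_half_spacing_m, sy)
    return 20.0 * math.log10(r_far / r_near)
```

At 10 cm the spreading-only ILD for a lateral source already exceeds 5 dB, while at 160 cm it falls below 1 dB, mirroring the distance dependence the study emphasizes.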
5.  Spatial cue reliability drives frequency tuning in the barn Owl's midbrain 
eLife  2014;3:e04854.
The robust representation of the environment from unreliable sensory cues is vital for the efficient function of the brain. However, how the neural processing captures the most reliable cues is unknown. The interaural time difference (ITD) is the primary cue to localize sound in horizontal space. ITD is encoded in the firing rate of neurons that detect interaural phase difference (IPD). Due to the filtering effect of the head, IPD for a given location varies depending on the environmental context. We found that, in barn owls, at each location there is a frequency range where the head filtering yields the most reliable IPDs across contexts. Remarkably, the frequency tuning of space-specific neurons in the owl's midbrain varies with their preferred sound location, matching the range that carries the most reliable IPD. Thus, frequency tuning in the owl's space-specific neurons reflects a higher-order feature of the code that captures cue reliability.
eLife digest
The ability to locate where a sound is coming from is an essential survival skill for both prey and predator species. A major cue used by the brain to infer the sound's location is the difference in arrival time of the sound at the left and right ears; for example, a sound coming from the left side will reach the left ear before the right ear.
We are exposed to a variety of sounds of different intensities (loud or soft), and pitch (high or low) emitted from many different directions. The cacophony that surrounds us makes it a challenge to detect where individual sounds come from because other sounds from different directions corrupt the signals coming from the target. This background noise can profoundly affect the reliability of the sensory cue.
When sounds reach the ears, the head and external ears transform the sound in a direction-dependent manner so that some pitches are amplified more than other pitches for specific directions. However, the consequence of this filtering is that the directional information about a sound may be altered. For example, if two sounds of a similar pitch but from different locations are heard at the same time, they will add up at the ears and change the directional information. The group of neurons that respond to that range of pitches will be activated by both sounds so they cannot provide reliable information about the direction of the individual sounds. The degree to which the directional information is altered depends on the pitch that is being detected by the neurons; therefore detection of a different pitch within the sound may be a more reliable cue.
Cazettes et al. used the known filtering properties of the owl's head to predict the reliability of the timing cue for sounds coming from different directions in a noisy environment. This analysis showed that for each direction, there was a range of pitches that carried the most reliable cues. The study then focused on whether the neurons that represent hearing space in the owl's brain were sensitive to this range.
The experiments found a remarkable correlation between the pitch preferred by each neuron and the range that carried the most reliable cue for each direction. This finding challenges the common view of sensory neurons as simple processors by showing that they are also selective to high-order properties relating to the reliability of the cue.
Besides selecting the cues that are likely to be the most reliable, the brain must capture changes in the reliability of the sensory cues. In addition, this reliability must be incorporated into the information carried by neurons and used when deciding how best to act in uncertain situations. Future research will be required to unravel how the brain does this.
PMCID: PMC4291741  PMID: 25531067
barn owl; neural coding; cue reliability; sound localization
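The ITD-to-IPD relation underlying the abstract, and the frequency-dependent ambiguity it creates, can be sketched as a phase-wrapping computation: the same ITD maps to a different interaural phase difference at each frequency, and once the phase wraps the cue becomes ambiguous. This is a generic illustration, not the paper's reliability analysis.

```python
import math

def ipd_from_itd(itd_s, freq_hz):
    """Interaural phase difference (radians) for a pure tone,
    wrapped to (-pi, pi] as a phase-sensitive detector would see it."""
    ipd = 2.0 * math.pi * freq_hz * itd_s
    return math.atan2(math.sin(ipd), math.cos(ipd))
```

For a 250 μs ITD, the IPD is pi/2 at 1 kHz but wraps to -pi/2 at 3 kHz, so an IPD-coding neuron cannot distinguish that ITD from one of the opposite sign at the higher frequency.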
6.  Do you hear where I hear?: isolating the individualized sound localization cues 
It is widely acknowledged that individualized head-related transfer function (HRTF) measurements are needed to adequately capture all of the 3D spatial hearing cues. However, many perceptual studies have shown that localization accuracy in the lateral dimension is only minimally decreased by the use of non-individualized head-related transfer functions. This evidence supports the idea that the individualized components of an HRTF could be isolated from those that are more general in nature. In the present study we decomposed the HRTF at each location into average, lateral and intraconic spectral components, along with an ITD in an effort to isolate the sound localization cues that are responsible for the inter-individual differences in localization performance. HRTFs for a given listener were then reconstructed systematically with components that were both individualized and non-individualized in nature, and the effect of each modification was analyzed via a virtual localization test where brief 250 ms noise bursts were rendered with the modified HRTFs. Results indicate that the cues important for individualization of HRTFs are contained almost exclusively in the intraconic portion of the HRTF spectra and localization is only minimally affected by introducing non-individualized cues into the other HRTF components. These results provide new insights into what specific inter-individual differences in head-related acoustical features are most relevant to sound localization, and provide a framework for how future human-machine interfaces might be more effectively generalized and/or individualized.
PMCID: PMC4249451  PMID: 25520607
head-related transfer function; spatial hearing; individual differences; auditory display
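One hedged way to realize the average/lateral/intraconic decomposition described above, working on log-magnitude spectra indexed by (lateral, intraconic) angle pairs: the average component is the mean over all directions, the lateral component is the per-lateral-angle mean minus that average, and the intraconic component is the residual. The paper's exact decomposition may differ; this sketch is illustrative only, and the ITD is handled separately.

```python
def decompose(logmag):
    """logmag: dict mapping (lateral, intraconic) -> list of dB values per bin.

    Returns (average, lateral, intraconic) components whose sum
    reconstructs each input spectrum exactly.
    """
    dirs = list(logmag)
    nbins = len(logmag[dirs[0]])
    avg = [sum(logmag[d][k] for d in dirs) / len(dirs) for k in range(nbins)]
    lateral = {}
    for lat in sorted({d[0] for d in dirs}):
        group = [d for d in dirs if d[0] == lat]
        lateral[lat] = [sum(logmag[d][k] for d in group) / len(group) - avg[k]
                        for k in range(nbins)]
    intraconic = {d: [logmag[d][k] - avg[k] - lateral[d[0]][k]
                      for k in range(nbins)] for d in dirs}
    return avg, lateral, intraconic
```

Because the three parts sum back to the original spectra, each part can be swapped for a non-individualized counterpart, which is the manipulation the study uses to localize the individualized cues to the intraconic component.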
7.  Sound pressure transformations by the head and pinnae of the adult Chinchilla (Chinchilla lanigera) 
Hearing research  2010;272(1-2):135-147.
There are three main cues to sound location: the interaural differences in time (ITD) and level (ILD) as well as the monaural spectral shape cues. These cues are generated by the spatial- and frequency-dependent filtering of propagating sound waves by the head and external ears. Although the chinchilla has been used for decades to study the anatomy, physiology, and psychophysics of audition, including binaural and spatial hearing, little is actually known about the sound pressure transformations by the head and pinnae and the resulting sound localization cues available to them. Here, we measured the directional transfer functions (DTFs), the directional components of the head-related transfer functions, for 9 adult chinchillas. The resulting localization cues were computed from the DTFs. In the frontal hemisphere, spectral notch cues were present for frequencies from ~6–18 kHz. In general, the frequency corresponding to the notch increased with increases in source elevation as well as in azimuth towards the ipsilateral ear. The ILDs demonstrated a strong correlation with source azimuth and frequency. The maximum ILDs were < 10 dB for frequencies < 5 kHz, and ranged from 10–30 dB for the frequencies > 5 kHz. The maximum ITDs were dependent on frequency, yielding 236 μs at 4 kHz and 336 μs at 250 Hz. Removal of the pinnae eliminated the spectral notch cues, reduced the acoustic gain and the ILDs, altered the acoustic axis, and reduced the ITDs.
PMCID: PMC3039070  PMID: 20971180
sound localization; interaural time difference; interaural level difference; head related transfer function; directional transfer functions
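DTFs of the kind measured above are commonly obtained by dividing each HRTF by its direction-independent component, estimated as the mean log-magnitude across all measured directions. Whether this study used exactly that estimator (mean vs. RMS, phase handling) is not stated in the abstract, so the sketch below is magnitude-only and illustrative.

```python
def dtfs_from_hrtfs(hrtf_db):
    """hrtf_db: dict direction -> list of magnitudes (dB) per frequency bin.

    Returns (dtfs, common): the common transfer function is the mean
    log-magnitude across directions; each DTF is the HRTF with the
    common part subtracted (i.e., divided out in linear terms).
    """
    dirs = list(hrtf_db)
    nbins = len(hrtf_db[dirs[0]])
    common = [sum(hrtf_db[d][k] for d in dirs) / len(dirs) for k in range(nbins)]
    dtfs = {d: [hrtf_db[d][k] - common[k] for k in range(nbins)] for d in dirs}
    return dtfs, common
```

By construction the DTFs average to 0 dB across directions in every bin, so features that survive (such as the elevation-dependent spectral notches) are genuinely directional rather than shared resonances of the ear canal or measurement chain.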
8.  Emergence of Multiplicative Auditory Responses in the Midbrain of the Barn Owl 
Journal of neurophysiology  2007;98(3):1181-1193.
Space-specific neurons in the barn owl’s auditory space map gain spatial selectivity through tuning to combinations of the interaural time difference (ITD) and interaural level difference (ILD). The combination of ITD and ILD in the subthreshold responses of space-specific neurons in the external nucleus of the inferior colliculus (ICx) is well described by a multiplication of ITD- and ILD-dependent components. It is unknown, however, how ITD and ILD are combined at the site of ITD and ILD convergence in the lateral shell of the central nucleus of the inferior colliculus (ICcl) and therefore whether ICx is the first site in the auditory pathway where multiplicative tuning to ITD- and ILD-dependent signals occurs. We used extracellular recording of single neurons to determine how ITD and ILD are combined in ICcl of the anesthetized barn owl (Tyto alba). A comparison of additive, multiplicative, and linear-threshold models of neural responses shows that ITD and ILD are combined nonlinearly in ICcl, but the interaction of ITD and ILD is not uniformly multiplicative over the sample. A subset (61%) of the neural responses is well described by the multiplicative model, indicating that ICcl is the first site where multiplicative tuning to ITD- and ILD-dependent signals occurs. ICx, however, is the first site where multiplicative tuning is observed consistently. A network model shows that a linear combination of ICcl responses to ITD–ILD pairs is sufficient to produce the multiplicative subthreshold responses to ITD and ILD seen in ICx.
PMCID: PMC2532518  PMID: 17615132
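The model comparison described above can be illustrated with a simple separability check: if a response matrix over ITD x ILD pairs is exactly multiplicative, R[i][j] = f(ITD_i) * g(ILD_j), then R[i][j] * R[0][0] = R[i][0] * R[0][j] for all i, j. The paper fits additive, multiplicative, and linear-threshold models to spike counts; the index below only illustrates the multiplicative case and assumes positive responses.

```python
def separability_error(R):
    """Mean absolute deviation from multiplicative separability.

    R: rectangular list-of-lists of (positive) responses, rows indexed by
    ITD, columns by ILD. Returns 0 for an exactly separable matrix.
    """
    errs = [abs(R[i][j] * R[0][0] - R[i][0] * R[0][j])
            for i in range(len(R)) for j in range(len(R[0]))]
    return sum(errs) / len(errs)
```

A rank-1 (multiplicative) matrix scores exactly zero, while an additive combination of ITD- and ILD-dependent terms does not, which is the distinction the study's model comparison exploits.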
9.  Perceptual factors contribute more than acoustical factors to sound localization abilities with virtual sources 
Human sound localization abilities rely on binaural and spectral cues. Spectral cues arise from interactions between the sound wave and the listener's body (head-related transfer function, HRTF). Large individual differences have been reported in localization abilities, even in young normal-hearing adults. Several studies have attempted to determine whether localization abilities depend mostly on acoustical cues or on perceptual processes involved in the analysis of these cues. These studies have yielded inconsistent findings, which could result from methodological issues. In this study, we measured sound localization performance with normal and modified acoustical cues (i.e., with individual and non-individual HRTFs, respectively) in 20 naïve listeners. Test conditions were chosen to address most methodological issues from past studies. Procedural training was provided prior to sound localization tests. The results showed no direct relationship between behavioral results and an acoustical metric (the spectral-shape prominence of individual HRTFs). Despite uncertainties due to technical issues with the normalization of the HRTFs, large acoustical differences between individual and non-individual HRTFs appeared to be needed to produce behavioral effects. A subset of 15 listeners then trained in the sound localization task with individual HRTFs. Training included either visual correct-answer feedback (for the test group) or no feedback (for the control group), and was assumed to elicit perceptual learning for the test group only. Few listeners from the control group, but most listeners from the test group, showed significant training-induced learning. For the test group, learning was related to pre-training performance (i.e., the poorer the pre-training performance, the greater the learning amount) and was retained after 1 month.
The results are interpreted as favoring a larger contribution of perceptual factors than of acoustical factors to sound localization abilities with virtual sources.
PMCID: PMC4310278  PMID: 25688182
sound localization; perceptual learning; procedural learning; head-related transfer function; individual differences
10.  Acoustic Basis of Directional Acuity in Laboratory Mice 
The acoustic basis of auditory spatial acuity was investigated in CBA/129 mice by relating patterns of behavioral errors to directional features of the head-related transfer function (HRTF). Behavioral performance was assessed by training the mice to lick a water spout during sound presentations from a “safe” location and to suppress the response during presentations from “warning” locations. Minimum audible angles (MAAs) were determined by delivering the safe and warning sounds from different locations in the interaural horizontal and median vertical planes. HRTFs were measured at the same locations by implanting a miniature microphone and recording the gain of sound energy near the ear drum relative to free field. Mice produced an average MAA of 31° when sound sources were located in the horizontal plane. Acoustic measures indicated that interaural level differences (ILDs) and monaural spectral features of the HRTF change systematically with horizontal location and therefore may have contributed to the accuracy of behavioral performance. Subsequent manipulations of the auditory stimuli and the directional properties of the ear produced errors that suggest the mice primarily relied on ILD cues when discriminating changes in azimuth. The MAA increased beyond 80° when the importance of ILD cues was minimized by testing in the median vertical plane. Although acoustic measures demonstrated a less robust effect of vertical location on spectral features of the HRTF, this poor performance provides further evidence for the insensitivity to spectral cues that was noted during behavioral testing in the horizontal plane.
PMCID: PMC3173556  PMID: 21717290
spatial acuity; minimum audible angle; head-related transfer function; inter-aural level differences; spectral cues
11.  Single-sided deafness and directional hearing: contribution of spectral cues and high-frequency hearing loss in the hearing ear 
Direction-specific interactions of sound waves with the head, torso, and pinna provide unique spectral-shape cues that are used for the localization of sounds in the vertical plane, whereas horizontal sound localization is based primarily on the processing of binaural acoustic differences in arrival time (interaural time differences, or ITDs) and sound level (interaural level differences, or ILDs). Because the binaural sound-localization cues are absent in listeners with total single-sided deafness (SSD), their ability to localize sound is heavily impaired. However, some studies have reported that SSD listeners are able, to some extent, to localize sound sources in azimuth, although the underlying mechanisms used for localization are unclear. To investigate whether SSD listeners rely on monaural pinna-induced spectral-shape cues of their hearing ear for directional hearing, we investigated localization performance for low-pass filtered (LP, <1.5 kHz), high-pass filtered (HP, >3 kHz), and broadband (BB, 0.5–20 kHz) noises in the two-dimensional frontal hemifield. We tested whether localization performance of SSD listeners further deteriorated when the pinna cavities of their hearing ear were filled with a mold that disrupted their spectral-shape cues. To remove the potential use of perceived sound level as an invalid azimuth cue, we randomly varied stimulus presentation levels over a broad range (45–65 dB SPL). Several listeners with SSD could localize HP and BB sound sources in the horizontal plane, but inter-subject variability was considerable. Localization performance of these listeners was strongly reduced when their spectral pinna cues were disrupted. We further show that inter-subject variability of SSD can be explained to a large extent by the severity of high-frequency hearing loss in their hearing ear.
PMCID: PMC4082092  PMID: 25071433
azimuth; head-shadow effect; mold; single-sided deaf(ness); spectral pinna-cues
12.  Modeling sound-source localization in sagittal planes for human listeners 
Monaural spectral features are important for human sound-source localization in sagittal planes, including front-back discrimination and elevation perception. These directional features result from the acoustic filtering of incoming sounds by the listener’s morphology and are described by listener-specific head-related transfer functions (HRTFs). This article proposes a probabilistic, functional model of sagittal-plane localization that is based on human listeners’ HRTFs. The model approximates spectral auditory processing, accounts for acoustic and non-acoustic listener specificity, allows for predictions beyond the median plane, and directly predicts psychoacoustic measures of localization performance. The predictive power of the listener-specific modeling approach was verified under various experimental conditions: The model predicted effects on localization performance of band limitation, spectral warping, non-individualized HRTFs, spectral resolution, spectral ripples, and high-frequency attenuation in speech. The functionalities of vital model components were evaluated and discussed in detail. Positive spectral gradient extraction, sensorimotor mapping, and binaural weighting of monaural spatial information were addressed in particular. Potential applications of the model include predictions of psychophysical effects, for instance, in the context of virtual acoustics or hearing assistive devices.
PMCID: PMC4582445  PMID: 25096113
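Among the model components named above, positive spectral gradient extraction is the simplest to sketch: only rising slopes of the log-magnitude spectrum are retained, which makes the representation robust to broadband level changes and smooth spectral tilts. This is a generic illustration of the operation, not the exact operator of the published model.

```python
def positive_spectral_gradient(logmag_db):
    """Keep only the rising bin-to-bin slopes of a log-magnitude spectrum;
    falling slopes are set to zero (half-wave rectified gradient)."""
    return [max(0.0, logmag_db[k + 1] - logmag_db[k])
            for k in range(len(logmag_db) - 1)]
```

Because a constant gain added to every bin leaves all gradients unchanged, the extracted features depend on spectral shape rather than overall level.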
13.  Effects of Signal Level and Background Noise on Spectral Representations in the Auditory Nerve of the Domestic Cat 
Background noise poses a significant obstacle for auditory perception, especially among individuals with hearing loss. To better understand the physiological basis of this perceptual impediment, the present study evaluated the effects of background noise on the auditory nerve representation of head-related transfer functions (HRTFs). These complex spectral shapes describe the directional filtering effects of the head and torso. When a broadband sound passes through the outer ear en route to the tympanic membrane, the HRTF alters its spectrum in a manner that establishes the perceived location of the sound source. HRTF-shaped noise shares many of the acoustic features of human speech, while communicating biologically relevant localization cues that are generalized across mammalian species. Previous studies have used parametric manipulations of random spectral shapes to elucidate HRTF coding principles at various stages of the cat’s auditory system. This study extended that body of work by examining the effects of sound level and background noise on the quality of spectral coding in the auditory nerve. When fibers were classified by their spontaneous rates, the coding properties of the more numerous low-threshold, high-spontaneous rate fibers were found to degrade at high presentation levels and in low signal-to-noise ratios. Because cats are known to maintain accurate directional hearing under these challenging listening conditions, behavioral performance may be disproportionally based on the enhanced dynamic range of the less common high-threshold, low-spontaneous rate fibers.
PMCID: PMC3015029  PMID: 20824483
spectral integration; auditory nerve; rate representation; sound localization; background noise
14.  Modeling the direction-continuous time-of-arrival in head-related transfer functions 
Head-related transfer functions (HRTFs) describe the filtering of the incoming sound by the torso, head, and pinna. As a consequence of the propagation path from the source to the ear, each HRTF contains a direction-dependent, broadband time-of-arrival (TOA). TOAs are usually estimated independently for each direction from HRTFs, a method prone to artifacts and limited by the spatial sampling. In this study, a continuous-direction TOA model combined with an outlier-removal algorithm is proposed. The model is based on a simplified geometric representation of the listener and his/her arbitrary position within the HRTF measurement setup. The outlier-removal procedure uses the extreme studentized deviation test to remove implausible TOAs. The model was evaluated for numerically calculated HRTFs of sphere, torso, and pinna under various conditions. The accuracy of estimated parameters was within the resolution given by the sampling rate. Applied to acoustically measured HRTFs of 172 listeners, the estimated parameters were consistent with realistic listener geometry. The outlier removal further improved the goodness-of-fit, particularly for some problematic fits. The comparison with a simpler model that fixed the listener position to the center of the measurement geometry showed a clear advantage of listener position as an additional free model parameter.
PMCID: PMC4582460  PMID: 24606268
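The per-direction TOA estimation that the continuous-direction model improves on can be sketched as a naive onset detector: the first sample of the head-related impulse response exceeding a fraction of its peak. The 10% threshold is an assumption for illustration; the paper's point is precisely that such independent per-direction estimates are artifact-prone, motivating the geometric model with outlier removal.

```python
def toa_onset(hrir, fs, threshold=0.1):
    """Naive per-direction TOA estimate (seconds): time of the first
    sample whose magnitude exceeds threshold * peak magnitude."""
    peak = max(abs(v) for v in hrir)
    for i, v in enumerate(hrir):
        if abs(v) >= threshold * peak:
            return i / fs
    return None
```

On a clean impulse this recovers the true delay, but on measured HRIRs with pre-ringing or noise the detected onset can jump between directions, which is the kind of outlier the extreme studentized deviation test is there to remove.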
15.  The acoustical cues to sound location in the rat: measurements of Directional Transfer Functions 
The acoustical cues for sound location are generated by spatial- and frequency-dependent filtering of propagating sound waves by the head and external ears. Although rats have been a common model system for anatomy, physiology, and psychophysics of localization, there have been few studies of the acoustical cues available to rats. Here, directional transfer functions (DTFs), the directional components of the head-related transfer functions, were measured in 6 adult rats. The cues to location were computed from the DTFs. In the frontal hemisphere, spectral notches were present for frequencies from ∼16-30 kHz; in general, the frequency corresponding to the notch increased with increases in source elevation and in azimuth towards the ipsilateral ear. The maximum high-frequency envelope-based interaural time differences (ITDs) were 130 μs whereas low-frequency (< 3.5 kHz) fine-structure ITDs were 160 μs; both types of ITDs were larger than predicted from spherical head models. Interaural level differences (ILDs) depended strongly on location and frequency. Maximum ILDs were < 10 dB for frequencies < 8 kHz, and were as large as 20-40 dB for frequencies > 20 kHz. Removal of the pinna eliminated the spectral notches, reduced the acoustic gain and ILDs, altered the acoustical axis, and reduced the ITDs.
PMCID: PMC2579256  PMID: 18537381
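The spherical-head predictions that the measured rat ITDs exceeded can be sketched with the standard limits for a rigid sphere: a low-frequency (diffraction) limit of 3(a/c)sin(azimuth) and a high-frequency (Woodworth) limit of (a/c)(azimuth + sin(azimuth)). The 1 cm head radius in the usage example is an assumed illustrative value for a rat-sized head, not a figure from the paper.

```python
import math

def itd_sphere_lf(radius_m, az_deg, c=343.0):
    """Low-frequency (diffraction) limit for a rigid sphere:
    ITD = 3 * (a / c) * sin(azimuth)."""
    return 3.0 * radius_m / c * math.sin(math.radians(az_deg))

def itd_sphere_hf(radius_m, az_deg, c=343.0):
    """High-frequency (Woodworth) limit for a rigid sphere:
    ITD = (a / c) * (azimuth + sin(azimuth)), azimuth in radians."""
    az = math.radians(az_deg)
    return radius_m / c * (az + math.sin(az))
```

With a 1 cm radius, both limits for a fully lateral source fall below 90 μs, well under the 130-160 μs the study measured, illustrating the abstract's point that the pinnae and head shape add delay beyond the spherical model.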
16.  Change in the coding of interaural time difference along the tonotopic axis of the chicken nucleus laminaris 
Interaural time differences (ITDs) are an important cue for the localization of sounds in azimuthal space. Both birds and mammals have specialized, tonotopically organized nuclei in the brain stem for the processing of ITD: the medial superior olive in mammals and the nucleus laminaris (NL) in birds. The specific way in which ITDs are derived was long assumed to conform to a delay-line model in which arrays of systematically arranged cells create a representation of auditory space, with different cells responding maximally to specific ITDs. This model was supported by data from barn owl NL taken from regions above 3 kHz and from chicken NL above 1 kHz. However, data from mammals often do not show defining features of the Jeffress model, such as a systematic topographic representation of best ITDs or the presence of axonal delay lines, and an alternative has been proposed in which neurons are not topographically arranged with respect to ITD and coding occurs through the assessment of the overall response of two large neuron populations, one in each hemisphere. Modeling studies have suggested that the presence of different coding systems could be related to the animal’s head size and frequency range rather than to its phylogenetic group. Testing this hypothesis requires data from across the tonotopic range of both birds and mammals. The aim of this study was to obtain in vivo recordings from neurons in the low-frequency range (<1000 Hz) of chicken NL. Our data argue for the presence of a modified Jeffress system that uses the slopes of ITD-selective response functions instead of their peaks to topographically represent ITD at mid- to high frequencies. At low frequencies, below a few hundred hertz, the data did not support any current model of ITD coding.
This differs from what was previously shown in the barn owl and suggests that constraints in optimal ITD processing may be associated with the particular demands on sound localization determined by the animal’s ecological niche, in the same way as other perceptual systems such as the field of best vision.
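The contrast between peak-based (Jeffress-style) and slope-based readouts of an ITD tuning curve can be illustrated with a minimal sketch. The function and the toy tuning curve below are hypothetical, chosen only to show the two readouts:

```python
import numpy as np

def peak_vs_slope_itd(itds_us, rates):
    """Compare two readouts of an ITD tuning curve: the 'peak'
    (best ITD, as in a classic Jeffress delay-line code) versus the
    ITD at the steepest point of the curve, which a slope-based code
    would place inside the physiologically relevant ITD range."""
    rates = np.asarray(rates, dtype=float)
    peak_itd = itds_us[int(np.argmax(rates))]
    slope = np.gradient(rates, itds_us)          # local slope of the tuning curve
    steepest_itd = itds_us[int(np.argmax(np.abs(slope)))]
    return peak_itd, steepest_itd

# Toy symmetric tuning curve peaking at 0 us: the peak readout gives
# 0 us, while the steepest slope lies on the flank of the curve.
print(peak_vs_slope_itd([-100, -50, 0, 50, 100], [1, 2, 4, 2, 1]))
```

In a slope code, small ITD changes move the response along the steep flank, so discrimination is best where firing-rate change per microsecond is largest rather than at the peak itself.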
PMCID: PMC4542463  PMID: 26347616
interaural time differences; chickens; auditory brainstem; nucleus laminaris; in vivo electrophysiology
17.  Acoustic and non-acoustic factors in modeling listener-specific performance of sagittal-plane sound localization 
The ability of sound-source localization in sagittal planes (along the top-down and front-back dimension) varies considerably across listeners. The directional acoustic spectral features, described by head-related transfer functions (HRTFs), also vary considerably across listeners, a consequence of the listener-specific shape of the ears. It is not clear whether the differences in localization ability result from differences in the encoding of directional information provided by the HRTFs, i.e., an acoustic factor, or from differences in auditory processing of those cues (e.g., spectral-shape sensitivity), i.e., non-acoustic factors. We addressed this issue by analyzing the listener-specific localization ability in terms of localization performance. Directional responses to spatially distributed broadband stimuli from 18 listeners were used. A model of sagittal-plane localization was fit individually for each listener by considering the actual localization performance, the listener-specific HRTFs representing the acoustic factor, and an uncertainty parameter representing the non-acoustic factors. The model was configured to simulate the condition of complete calibration of the listener to the tested HRTFs. Listener-specifically calibrated model predictions yielded correlations of, on average, 0.93 with the actual localization performance. Then, the model parameters representing the acoustic and non-acoustic factors were systematically permuted across the listener group. While the permutation of HRTFs affected the localization performance, the permutation of listener-specific uncertainty had a substantially larger impact. Our findings suggest that across-listener variability in sagittal-plane localization ability is only marginally determined by the acoustic factor, i.e., the quality of directional cues found in typical human HRTFs. 
Rather, the non-acoustic factors, presumed to represent the listeners' efficiency in processing directional cues, appear to be important.
PMCID: PMC4006033  PMID: 24795672
sound localization; localization model; sagittal plane; listener-specific factors; head-related transfer functions
18.  The representation of sound localization cues in the barn owl's inferior colliculus 
The barn owl is a well-known model system for studying auditory processing and sound localization. This article reviews the morphological and functional organization, as well as the role of the underlying microcircuits, of the barn owl's inferior colliculus (IC). We focus on the processing of frequency and interaural time (ITD) and level differences (ILD). We first summarize the morphology of the sub-nuclei belonging to the IC and their differentiation by antero- and retrograde labeling and by staining with various antibodies. We then focus on the response properties of neurons in the three major sub-nuclei of IC [core of the central nucleus of the IC (ICCc), lateral shell of the central nucleus of the IC (ICCls), and the external nucleus of the IC (ICX)]. ICCc projects to ICCls, which in turn sends its information to ICX. The responses of neurons in ICCc are sensitive to changes in ITD but not to changes in ILD. The distribution of ITD sensitivity with frequency in ICCc can only partly be explained by optimal coding. We continue with the tuning properties of ICCls neurons, the first station in the midbrain where the ITD and ILD pathways merge after they have split at the level of the cochlear nucleus. The ICCc and ICCls share similar ITD and frequency tuning. By contrast, ICCls shows sigmoidal ILD tuning which is absent in ICCc. Both ICCc and ICCls project to the forebrain, and ICCls also projects to ICX, where space-specific neurons are found. Space-specific neurons exhibit side peak suppression in ITD tuning, bell-shaped ILD tuning, and are broadly tuned to frequency. These neurons respond only to restricted positions of auditory space and form a map of two-dimensional auditory space. Finally, we briefly review major IC features, including multiplication-like computations, correlates of echo suppression, plasticity, and adaptation.
PMCID: PMC3394089  PMID: 22798945
sound localization; central nucleus of the inferior colliculus; auditory; plasticity; adaptation; interaural time difference; interaural level difference; frequency tuning
19.  Multiplicative Auditory Spatial Receptive Fields Created by a Hierarchy of Population Codes 
PLoS ONE  2009;4(11):e8015.
A multiplicative combination of tuning to interaural time difference (ITD) and interaural level difference (ILD) contributes to the generation of spatially selective auditory neurons in the owl's midbrain. Previous analyses of multiplicative responses in the owl have not taken into consideration the frequency-dependence of ITD and ILD cues that occur under natural listening conditions. Here, we present a model for the responses of ITD- and ILD-sensitive neurons in the barn owl's inferior colliculus which satisfies constraints raised by experimental data on frequency convergence, multiplicative interaction of ITD and ILD, and response properties of afferent neurons. We propose that multiplication between ITD- and ILD-dependent signals occurs only within frequency channels and that frequency integration occurs using a linear-threshold mechanism. The model reproduces the experimentally observed nonlinear responses to ITD and ILD in the inferior colliculus, with greater accuracy than previous models. We show that linear-threshold frequency integration allows the system to represent multiple sound sources with natural sound localization cues, whereas multiplicative frequency integration does not. Nonlinear responses in the owl's inferior colliculus can thus be generated using a combination of cellular and network mechanisms, showing that multiple elements of previous theories can be combined in a single system.
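The proposed two-stage combination, multiplication of ITD- and ILD-dependent signals within each frequency channel followed by linear-threshold integration across channels, can be sketched as follows. The function name, weights, and threshold are illustrative assumptions, not quantities from the paper:

```python
import numpy as np

def icx_response(itd_signals, ild_signals, weights, threshold):
    """Sketch of the model structure: ITD- and ILD-dependent inputs
    are multiplied within each frequency channel, the products are
    summed linearly across channels, and the sum passes through a
    threshold-linear (rectifying) output nonlinearity."""
    per_channel = np.asarray(itd_signals) * np.asarray(ild_signals)  # within-channel multiplication
    summed = float(np.dot(weights, per_channel))                     # linear frequency integration
    return max(summed - threshold, 0.0)                              # linear-threshold output

# Two-channel toy example: products 1*3 + 2*4 = 11, minus threshold 5.
print(icx_response([1.0, 2.0], [3.0, 4.0], [1.0, 1.0], 5.0))
```

Because the multiplication happens per channel before the linear sum, a second sound source with its own frequency-specific ITD/ILD pattern adds its contribution separately, which is the property the authors argue a global multiplicative integration would destroy.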
PMCID: PMC2776990  PMID: 19956693
20.  The Conductive Hearing Loss Due to an Experimentally Induced Middle Ear Effusion Alters the Interaural Level and Time Difference Cues to Sound Location 
Otitis media with effusion (OME) is a pathologic condition of the middle ear that leads to a mild to moderate conductive hearing loss as a result of fluid in the middle ear. Recurring OME in children during the first few years of life has been shown to be associated with poor detection and recognition of sounds in noisy environments, hypothesized to result from altered sound localization cues. To explore this hypothesis, we simulated a middle ear effusion by filling the middle ear space of chinchillas with different viscosities and volumes of silicone oil to simulate varying degrees of OME. While the effects of middle ear effusions on the interaural level difference (ILD) cue to location are known, little is known about whether and how middle ear effusions affect interaural time differences (ITDs). Cochlear microphonic amplitudes and phases were measured in response to sounds delivered from several locations in azimuth before and after filling the middle ear with fluid. Significant attenuations (20–40 dB) of sound were observed when the middle ear was filled with at least 1.0 ml of fluid with a viscosity of 3.5 Poise (P) or greater. As expected, ILDs were altered by ~30 dB. Additionally, ITDs were shifted by ~600 μs for low frequency stimuli (<4 kHz) due to a delay in the transmission of sound to the inner ear. The data show that in an experimental model of OME, ILDs and ITDs are shifted in the spatial direction of the ear without the experimental effusion.
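Because the ITD shift is inferred from cochlear-microphonic phase, the conversion from a measured phase lag to an equivalent time delay is just Δt = Δφ / (360·f). The specific phase and frequency values below are illustrative, not measurements from the paper:

```python
def phase_to_delay_us(phase_shift_deg, frequency_hz):
    """Convert a phase shift (degrees) at a given frequency (Hz)
    into the equivalent time delay in microseconds:
    delta_t = delta_phi / (360 * f)."""
    return (phase_shift_deg / 360.0) / frequency_hz * 1e6

# Illustrative numbers: a ~216 degree cochlear-microphonic phase lag
# at 1 kHz corresponds to ~600 us, the order of the ITD shift
# reported after filling the middle ear with fluid.
print(phase_to_delay_us(216.0, 1000.0))
```

The same phase lag at a higher frequency corresponds to a smaller time delay, which is why a frequency-independent middle-ear transmission delay produces the largest cue distortion for low-frequency stimuli.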
PMCID: PMC3441957  PMID: 22648382
otitis media with effusion; conductive hearing loss; sound localization; cochlear microphonic
21.  Behavioral Sensitivity to Broadband Binaural Localization Cues in the Ferret 
Although the ferret has become an important model species for studying both fundamental and clinical aspects of spatial hearing, previous behavioral work has focused on studies of sound localization and spatial release from masking in the free field. This makes it difficult to tease apart the role played by different spatial cues. In humans and other species, interaural time differences (ITDs) and interaural level differences (ILDs) play a critical role in sound localization in the azimuthal plane and also facilitate sound source separation in noisy environments. In this study, we used a range of broadband noise stimuli presented via customized earphones to measure ITD and ILD sensitivity in the ferret. Our behavioral data show that ferrets are extremely sensitive to changes in either binaural cue, with levels of performance approximating that found in humans. The measured thresholds were relatively stable despite extensive and prolonged (>16 weeks) testing on ITD and ILD tasks with broadband stimuli. For both cues, sensitivity was reduced at shorter durations. In addition, subtle effects of changing the stimulus envelope were observed on ITD, but not ILD, thresholds. Sensitivity to these cues also differed in other ways. Whereas ILD sensitivity was unaffected by changes in average binaural level or interaural correlation, the same manipulations produced much larger effects on ITD sensitivity, with thresholds declining when either of these parameters was reduced. The binaural sensitivity measured in this study can largely account for the ability of ferrets to localize broadband stimuli in the azimuthal plane. Our results are also broadly consistent with data from humans and confirm the ferret as an excellent experimental model for studying spatial hearing.
PMCID: PMC3705081  PMID: 23615803
sound localization; spatial hearing; psychometric function; interaural time difference; interaural level difference; azimuth
22.  Discrimination and identification of azimuth using spectral shape
Monaural measurements of minimum audible angle (MAA) (discrimination between two locations) and absolute identification (AI) of azimuthal locations in the frontal horizontal plane are reported. All experiments used roving-level fixed-spectral-shape stimuli processed with nonindividualized head-related transfer functions (HRTFs) to simulate the source locations. Listeners were instructed to maximize percent correct, and correct-answer feedback was provided after every trial. Measurements are reported for normal-hearing subjects, who listened with only one ear, and effectively monaural subjects, who had substantial unilateral hearing impairments (i.e., hearing losses greater than 60 dB) and listened with their normal ears. Both populations behaved similarly; the monaural experience of the unilaterally impaired listeners was not beneficial for these monaural localization tasks. Performance in the AI experiments was similar with both 7 and 13 source locations. The average root-mean-squared deviation between the virtual source location and the reported location was 35°, the average slope of the best-fitting line was 0.82, and the average bias was 2°. The best monaural MAAs were less than 5°. The MAAs were consistent with a theoretical analysis of the HRTFs, which suggests that monaural azimuthal discrimination is related to spectral-shape discrimination.
PMCID: PMC2597187  PMID: 19045798
23.  Developmental Changes Underlying the Formation of the Specialized Time Coding Circuits in Barn Owls (Tyto alba) 
The Journal of Neuroscience  2002;22(17):7671-7679.
Barn owls are capable of great accuracy in detecting the interaural time differences (ITDs) that underlie azimuthal sound localization. They compute ITDs in a circuit in nucleus laminaris (NL) that is reorganized with respect to birds like the chicken. The events that lead to the reorganization of the barn owl NL take place during embryonic development, shortly after the cochlear and laminaris nuclei have differentiated morphologically. At first the developing owl’s auditory brainstem exhibits morphology reminiscent of that of the developing chicken. Later, the two systems diverge, and the owl’s brainstem auditory nuclei undergo a secondary morphogenetic phase during which NL dendrites retract, the laminar organization is lost, and synapses are redistributed. These events lead to the restructuring of the ITD coding circuit and the consequent reorganization of the hindbrain map of ITDs and azimuthal space.
PMCID: PMC3260528  PMID: 12196590
avian development; morphogenesis; auditory; laminaris; evolution; interaural time difference
24.  Fast multipole boundary element method to calculate head-related transfer functions for a wide frequency range 
Head-related transfer functions (HRTFs) play an important role in spatial sound localization. The boundary element method (BEM) can be applied to calculate HRTFs from non-contact visual scans. Because of high computational complexity, HRTF simulations with BEM for the whole head and pinnae have only been performed for frequencies below 10 kHz. In this study, the fast multipole method (FMM) is coupled with BEM to simulate HRTFs for a wide frequency range. The basic approach of the FMM and its implementation are described. A mesh with over 70 000 elements was used to calculate HRTFs for one subject. With this mesh, the method allowed HRTFs to be calculated for frequencies up to 35 kHz. Comparison to acoustically measured HRTFs was performed for frequencies up to 16 kHz, showing good congruence below 7 kHz. Simulations with an additional shoulder mesh improved the congruence in the vertical direction. Reducing the mesh size by 5% resulted in a substantially worse representation of spectral cues. The effects of temperature and mesh perturbation were negligible. The FMM appears to be a promising approach for HRTF simulations. Further limitations and potential advantages of the FMM-coupled BEM are discussed.
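The link between mesh density and the highest usable simulation frequency follows the common BEM rule of thumb of roughly six elements per acoustic wavelength. The sketch below is a back-of-the-envelope estimate only; the ~0.13 m² head surface area is an assumption for illustration, not a value from the paper:

```python
import math

def bem_max_frequency(num_elements, surface_area_m2,
                      elems_per_wavelength=6.0, c=343.0):
    """Rough upper usable frequency (Hz) for a BEM surface mesh,
    assuming ~6 elements per acoustic wavelength. The average
    element edge length is estimated from the total surface area."""
    avg_edge = math.sqrt(surface_area_m2 / num_elements)   # approximate element size
    wavelength_min = elems_per_wavelength * avg_edge       # shortest resolvable wavelength
    return c / wavelength_min

# With ~70 000 elements on an assumed ~0.13 m^2 head surface, the
# rule of thumb lands in the tens of kilohertz, the same regime as
# the 35 kHz upper limit reported above.
print(f"{bem_max_frequency(70000, 0.13) / 1000:.1f} kHz")
```

The quadratic relation between element count and frequency (halving the element edge quadruples the element count but only doubles the usable frequency) is what makes wide-band BEM expensive and motivates the FMM acceleration.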
PMCID: PMC3061451  PMID: 19739742
25.  The Avian Head Induces Cues for Sound Localization in Elevation 
PLoS ONE  2014;9(11):e112178.
Accurate sound source localization in three-dimensional space is essential for an animal’s orientation and survival. While the horizontal position can be determined by interaural time and intensity differences, localization in elevation was thought to require external structures that modify sound before it reaches the tympanum. Here we show that in birds even without external structures like pinnae or feather ruffs, the simple shape of their head induces sound modifications that depend on the elevation of the source. Based on a model of localization errors, we show that these cues are sufficient to locate sounds in the vertical plane. These results suggest that the head of all birds induces acoustic cues for sound localization in the vertical plane, even in the absence of external ears.
PMCID: PMC4229125  PMID: 25390036
