Results 1-25 (993002)

1.  Asymmetric Excitatory Synaptic Dynamics Underlie Interaural Time Difference Processing in the Auditory System 
PLoS Biology  2010;8(6):e1000406.
In order to localize sounds in the environment, the auditory system detects and encodes differences between the signals arriving at the two ears. The exquisite sensitivity of auditory brain stem neurons to differences in the rise time of the excitation signals from the two ears allows for neuronal encoding of microsecond interaural time differences.
Low-frequency sound localization depends on the neural computation of interaural time differences (ITD) and relies on neurons in the auditory brain stem that integrate synaptic inputs delivered by the ipsi- and contralateral auditory pathways that start at the two ears. The first auditory neurons that respond selectively to ITD are found in the medial superior olivary nucleus (MSO). We identified a new mechanism for ITD coding using a brain slice preparation that preserves the binaural inputs to the MSO. There was an internal latency difference for the two excitatory pathways that would, if left uncompensated, position the ITD response function too far outside the physiological range to be useful for estimating ITD. We demonstrate, and support using a biophysically based computational model, that a bilateral asymmetry in excitatory post-synaptic potential (EPSP) slopes provides a robust compensatory delay mechanism, due to differential activation of a low-threshold potassium conductance by these inputs, and permits MSO neurons to encode physiological ITDs. We suggest, more generally, that the dependence of spike probability on the rate of depolarization, as in these auditory neurons, provides a mechanism for temporal order discrimination between EPSPs.
Author Summary
Animals can locate the source of a sound by detecting microsecond differences in the arrival time of sound at the two ears. Neurons encoding these interaural time differences (ITDs) receive an excitatory synaptic input from each ear. They can perform a microsecond computation with excitatory synapses that operate on a millisecond time scale because they are extremely sensitive to the input's “rise time,” the time taken to reach the peak of the synaptic input. Current theories assume that the biophysical properties of the two inputs are identical. We challenge this assumption by showing that the rise times of excitatory synaptic potentials driven by the ipsilateral ear are faster than those driven by the contralateral ear. Further, we present a computational model demonstrating that this disparity in rise times, together with the neurons' sensitivity to the excitation's rise time, can endow ITD encoding with microsecond resolution in the biologically relevant range. Our analysis also resolves a timing mismatch: the difference between contralateral and ipsilateral latencies is substantially larger than the relevant ITD range. We show how the rise time disparity compensates for this mismatch. Generalizing, we suggest that phasic-firing neurons—those that respond to rapidly, but not to slowly, changing stimuli—are selective to the temporal ordering of brief inputs. In a coincidence-detection computation, the neuron will respond more robustly when a faster input leads a slower one, even if the inputs are brief and have similar amplitudes.
doi:10.1371/journal.pbio.1000406
PMCID: PMC2893945  PMID: 20613857
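To make the rise-time/slope idea in the entry above concrete, here is a minimal illustrative sketch in Python (not the authors' biophysical model; the alpha-function EPSP shape, the 0.2/0.5 ms time constants, the amplitudes, and the 1.2 threshold are arbitrary assumptions). It sums a fast "ipsilateral" and a slower "contralateral" EPSP at candidate ITDs and compares a peak-amplitude coincidence criterion with a slope-at-threshold criterion of the kind a phasic, rate-of-depolarization-sensitive neuron would apply.

```python
import numpy as np

DT = 0.001                                   # time step (ms)
T = np.arange(0.0, 12.0, DT)                 # time axis (ms)

def alpha_epsp(t, onset, tau, amp=1.0):
    """Alpha-function EPSP starting at `onset` and peaking `tau` ms later."""
    s = np.clip(t - onset, 0.0, None)
    return amp * (s / tau) * np.exp(1.0 - s / tau)

def summed_epsp(itd_ms, tau_ipsi=0.2, tau_contra=0.5):
    """Fast 'ipsi' EPSP at t = 5 ms plus a slower 'contra' EPSP shifted by the ITD."""
    return alpha_epsp(T, 5.0, tau_ipsi) + alpha_epsp(T, 5.0 + itd_ms, tau_contra)

def peak_amplitude(v):
    return v.max()

def slope_at_threshold(v, theta=1.2):
    """dV/dt at the first threshold crossing; 0 if the threshold is never reached."""
    above = np.flatnonzero(v >= theta)
    return np.gradient(v, DT)[above[0]] if above.size else 0.0

for itd in np.linspace(-0.6, 0.6, 7):
    v = summed_epsp(itd)
    print(f"ITD {itd:+.1f} ms: peak {peak_amplitude(v):.2f}, "
          f"dV/dt at threshold {slope_at_threshold(v):.2f} (arb. units/ms)")
```

Plotting both criteria against ITD is a quick way to see how a slope-sensitive readout reshapes the ITD tuning curve relative to a purely amplitude-based one.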
2.  Glycinergic inhibition tunes coincidence detection in the auditory brainstem 
Nature Communications  2014;5:3790.
Neurons in the medial superior olive (MSO) detect microsecond differences in the arrival time of sounds between the ears (interaural time differences or ITDs), a crucial binaural cue for sound localization. Synaptic inhibition has been implicated in tuning ITD sensitivity, but the cellular mechanisms underlying its influence on coincidence detection are debated. Here we determine the impact of inhibition on coincidence detection in adult Mongolian gerbil MSO brain slices by testing precise temporal integration of measured synaptic responses using conductance-clamp. We find that inhibition dynamically shifts the peak timing of excitation, depending on its relative arrival time, which in turn modulates the timing of best coincidence detection. Inhibitory control of coincidence detection timing is consistent with the diversity of ITD functions observed in vivo and is robust under physiologically relevant conditions. Our results provide strong evidence that temporal interactions between excitation and inhibition on microsecond timescales are critical for binaural processing.
Coincidence detector neurons in the mammalian brainstem encode interaural time differences (ITDs), a crucial binaural cue for sound localization. Myoga et al. study a previously developed neuronal model and find that inhibition is crucial for sound localization, and that it acts more dynamically than previously thought.
doi:10.1038/ncomms4790
PMCID: PMC4024823  PMID: 24804642
3.  Spatial cue reliability drives frequency tuning in the barn owl's midbrain 
eLife  2014;3:e04854.
The robust representation of the environment from unreliable sensory cues is vital for the efficient function of the brain. However, how the neural processing captures the most reliable cues is unknown. The interaural time difference (ITD) is the primary cue to localize sound in horizontal space. ITD is encoded in the firing rate of neurons that detect interaural phase difference (IPD). Due to the filtering effect of the head, IPD for a given location varies depending on the environmental context. We found that, in barn owls, at each location there is a frequency range where the head filtering yields the most reliable IPDs across contexts. Remarkably, the frequency tuning of space-specific neurons in the owl's midbrain varies with their preferred sound location, matching the range that carries the most reliable IPD. Thus, frequency tuning in the owl's space-specific neurons reflects a higher-order feature of the code that captures cue reliability.
DOI: http://dx.doi.org/10.7554/eLife.04854.001
eLife digest
The ability to locate where a sound is coming from is an essential survival skill for both prey and predator species. A major cue used by the brain to infer the sound's location is the difference in arrival time of the sound at the left and right ears; for example, a sound coming from the left side will reach the left ear before the right ear.
We are exposed to a variety of sounds of different intensities (loud or soft), and pitch (high or low) emitted from many different directions. The cacophony that surrounds us makes it a challenge to detect where individual sounds come from because other sounds from different directions corrupt the signals coming from the target. This background noise can profoundly affect the reliability of the sensory cue.
When sounds reach the ears, the head and external ears transform the sound in a direction-dependent manner so that some pitches are amplified more than other pitches for specific directions. However, the consequence of this filtering is that the directional information about a sound may be altered. For example, if two sounds of a similar pitch but from different locations are heard at the same time, they will add up at the ears and change the directional information. The group of neurons that respond to that range of pitches will be activated by both sounds so they cannot provide reliable information about the direction of the individual sounds. The degree to which the directional information is altered depends on the pitch that is being detected by the neurons; therefore detection of a different pitch within the sound may be a more reliable cue.
Cazettes et al. used the known filtering properties of the owl's head to predict the reliability of the timing cue for sounds coming from different directions in a noisy environment. This analysis showed that for each direction, there was a range of pitches that carried the most reliable cues. The study then focused on whether the neurons that represent hearing space in the owl's brain were sensitive to this range.
The experiments found a remarkable correlation between the pitch preferred by each neuron and the range that carried the most reliable cue for each direction. This finding challenges the common view of sensory neurons as simple processors by showing that they are also selective to high-order properties relating to the reliability of the cue.
Besides selecting the cues that are likely to be the most reliable, the brain must capture changes in the reliability of the sensory cues. In addition, this reliability must be incorporated into the information carried by neurons and used when deciding how best to act in uncertain situations. Future research will be required to unravel how the brain does this.
DOI: http://dx.doi.org/10.7554/eLife.04854.002
doi:10.7554/eLife.04854
PMCID: PMC4291741  PMID: 25531067
barn owl; neural coding; cue reliability; sound localization
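Since the abstract above treats ITD as being read out through interaural phase difference (IPD), it may help to recall the relation IPD = 2πf·ITD (wrapped to ±π), which is what makes high-frequency IPD cues ambiguous. A tiny sketch with illustrative values (the 100 μs ITD and the example frequencies are arbitrary choices, not taken from the study):

```python
import numpy as np

def itd_to_ipd(itd_s, freq_hz):
    """Interaural phase difference (radians, wrapped to [-pi, pi)) for a pure tone."""
    ipd = 2.0 * np.pi * freq_hz * itd_s
    return (ipd + np.pi) % (2.0 * np.pi) - np.pi

ITD = 100e-6   # a 100-microsecond ITD (illustrative value)
for f in (1000.0, 4000.0, 8000.0):
    print(f"{f:5.0f} Hz: IPD = {np.degrees(itd_to_ipd(ITD, f)):+7.1f} deg")
```

At 8 kHz the same ITD wraps to a negative phase, illustrating why the frequency channel chosen to read the cue matters.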
4.  From microseconds to seconds and minutes—time computation in insect hearing 
The computation of time in the auditory system of insects is relevant at rather different time scales, covering a large range from microseconds to several minutes. At one end of this range, only a few microseconds of interaural time difference are available for directional hearing due to the small distance between the ears, a difference usually considered too small to be processed reliably by simple nervous systems. Synapses of interneurons in the afferent auditory pathway are, however, very sensitive to a time difference of only 1–2 ms provided by the latency shift of afferent activity with changing sound direction. At a much larger time scale of several tens of milliseconds to seconds, time processing is important in the context of species recognition, but also for those insects whose males produce acoustic signals within choruses, where the temporal relationship between song elements strongly deviates from a random distribution. In these situations, some species exhibit a more or less strict phase relationship between song elements, based on the phase response properties of their song oscillator. Here we review evidence on how this may influence mate choice decisions. In the same range of some tens of milliseconds we find species of katydids with a duetting communication scheme, in which one sex only performs phonotaxis to the other sex if the acoustic response falls within a very short time window after its own call. Such time windows show some features unique to insects, and although their neuronal implementation is so far unknown, the similarity with time processing for target range detection in bat echolocation will be discussed. Finally, the time scale being processed must be extended into the range of many minutes, since some acoustic insects produce singing bouts that last a long time, and female preferences may be based on total signaling time.
doi:10.3389/fphys.2014.00138
PMCID: PMC3990047  PMID: 24782783
interaural time difference; directional hearing; signal timing; chorus synchrony; mate choice; precedence effect; time window
5.  Resolution of interaural time differences in the avian sound localization circuit—a modeling study 
Interaural time differences (ITDs) are a main cue for sound localization and sound segregation. A dominant model for studying ITD detection is the sound localization circuitry in the avian auditory brainstem. Neurons in nucleus laminaris (NL) receive auditory information from both ears via the avian cochlear nucleus magnocellularis (NM) and compare the relative timing of these inputs. Timing of these inputs is crucial, as ITDs in the microsecond range must be discriminated and encoded. We modeled ITD sensitivity of single NL neurons based on previously published data and determined the minimum resolvable ITD for neurons in NL. The minimum resolvable ITD is too large to allow single NL neurons to discriminate naturally occurring ITDs at very low frequencies. For high-frequency NL neurons (>1 kHz) our calculated ITD resolutions fall well within the natural range of ITDs and approach values below 10 μs. We show that different parts of the ITD tuning function offer different resolution in ITD coding, suggesting that information derived from both parts may be used for downstream processing. A place code may be used for sound location at frequencies above 500 Hz, but our data suggest that the slope of the ITD tuning curve ought to be used for ITD discrimination by single NL neurons at the lowest frequencies. Our results provide an important measure of the necessary temporal window of binaural inputs for future studies on the mechanisms and development of neuronal computation of temporally precise information in this important system. In particular, our data establish the temporal precision needed for conduction time regulation along NM axons.
doi:10.3389/fncom.2014.00099
PMCID: PMC4143899  PMID: 25206329
sound localization; interaural time differences; avian brainstem; nucleus laminaris; ITD resolution
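One common way to turn a rate-based ITD tuning curve into a "minimum resolvable ITD", in the spirit of the entry above (though not necessarily the authors' exact method), is the signal-detection approximation threshold ≈ spike-count SD / |tuning-curve slope| at a d′ of 1. The sketch below uses an invented Gaussian tuning curve with Poisson count noise; every parameter is a placeholder.

```python
import numpy as np

# Invented ITD tuning curve and noise model; all numbers are placeholders.
BEST_ITD_US = 50.0       # best ITD (microseconds)
WIDTH_US = 100.0         # tuning width (microseconds)
PEAK_RATE = 200.0        # peak firing rate (spikes/s)
WINDOW_S = 0.1           # spike-count window (s)

def rate(itd_us):
    return PEAK_RATE * np.exp(-0.5 * ((itd_us - BEST_ITD_US) / WIDTH_US) ** 2)

def min_resolvable_itd(itd_us, d_itd=1.0):
    """d' = 1 threshold: spike-count SD divided by |d(count)/d(ITD)|, Poisson noise."""
    count = rate(itd_us) * WINDOW_S
    slope = (rate(itd_us + d_itd) - rate(itd_us - d_itd)) / (2 * d_itd) * WINDOW_S
    if slope == 0:
        return np.inf    # at the tuning-curve peak the slope carries no information
    return np.sqrt(count) / abs(slope)

for ref in (-100.0, -50.0, 0.0, 50.0, 100.0, 150.0):
    print(f"reference ITD {ref:+6.1f} us -> minimum resolvable ITD "
          f"≈ {min_resolvable_itd(ref):.1f} us")
```

As the formula implies, resolution in this sketch is best on the steep flanks of the tuning curve and worst at its peak, which is one way to read the abstract's point about using the slope at low frequencies.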
6.  Directional hearing: from biophysical binaural cues to directional hearing outdoors 
When insects communicate by sound, or use acoustic cues to escape predators or to detect prey or hosts, they usually have to localize the sound in order to perform adaptive behavioral responses. In the case of particle velocity receivers such as the antennae of mosquitoes, directionality is no problem because such receivers are inherently directional. Insects equipped with bilateral pairs of tympanate ears could in principle make use of binaural cues for sound localization, like all other animals with two ears. However, their small size makes it difficult to create sufficiently large binaural cues, both with respect to interaural time differences (ITDs), because interaural distances are so small, and with respect to interaural intensity differences (IIDs), since the ratio of body size to the wavelength of sound is rather unfavorable for diffractive effects. In my review, I will only briefly cover these biophysical aspects of directional hearing. Instead, I will focus on aspects of directional hearing that have previously received relatively little attention: the evolution of a pressure difference receiver, 3D hearing, directional hearing outdoors, and directional hearing for auditory scene analysis.
doi:10.1007/s00359-014-0939-6
PMCID: PMC4282874  PMID: 25231204
Directional hearing; Pressure difference receiver; Binaural cues; Interaural time difference; Interaural intensity difference
7.  Differential Conduction Velocity Regulation in Ipsilateral and Contralateral Collaterals Innervating Brainstem Coincidence Detector Neurons 
The Journal of Neuroscience  2014;34(14):4914-4919.
Information processing in the brain relies on precise timing of signal propagation. The highly conserved neuronal network for computing spatial representations of acoustic signals resolves microsecond timing of sounds processed by the two ears. As such, it provides an excellent model for understanding how precise temporal regulation of neuronal signals is achieved and maintained. The well described avian and mammalian brainstem circuit for computation of interaural time differences is composed of monaural cells in the cochlear nucleus (CN; nucleus magnocellularis in birds) projecting to binaurally innervated coincidence detection neurons in the medial superior olivary nucleus (MSO) in mammals or nucleus laminaris (NL) in birds. Individual CN neurons each issue a single axon that bifurcates into an ipsilateral branch and a contralateral branch that innervate segregated dendritic regions of the MSO/NL coincidence detector neurons. We measured conduction velocities of the ipsilateral and contralateral branches of these bifurcating axon collaterals in the chicken by antidromic stimulation of two sites along each branch and whole-cell recordings in the parent neurons. At the end of each experiment, the individual CN neuron and its axon collaterals were filled with dye. We show that the two collaterals of a single axon adjust their conduction velocities individually to achieve the specific conduction velocities essential for precise temporal integration of information from the two ears, as required for sound localization. More generally, these results suggest that individual axonal segments in the CNS interact locally with surrounding neural structures to determine conduction velocity.
doi:10.1523/JNEUROSCI.5460-13.2014
PMCID: PMC3972718  PMID: 24695710
conduction velocity regulation; myelin plasticity; sound localization
8.  Neuronal specializations for the processing of interaural difference cues in the chick 
Sound information is encoded as a series of spikes in the auditory nerve fibers (ANFs) and then transmitted to the brainstem auditory nuclei. Features such as timing and level are extracted from ANF activity and further processed as the interaural time difference (ITD) and the interaural level difference (ILD), respectively. These two interaural difference cues are used for sound source localization by behaving animals. Both cues depend on the animal's head size and are extremely small, requiring specialized neural properties in order to be processed with precision. Moreover, the sound level and timing cues are not processed independently of one another. Neurons in the nucleus angularis (NA) are specialized for coding sound level information in birds, and the ILD is processed in the posterior part of the dorsal lateral lemniscus nucleus (LLDp). Processing of ILD is affected by the phase difference of binaural sound. Temporal features of sound are encoded in the pathway starting in nucleus magnocellularis (NM), and ITD is processed in the nucleus laminaris (NL). In this pathway a variety of specializations are found in synapse morphology, neuronal excitability, and the distribution of ion channels and receptors along the tonotopic axis, which reduce spike timing fluctuation at the ANF-NM synapse and impart precise and stable ITD processing to the NL. Moreover, the contrast of ITD processing in NL is enhanced over a wide range of sound levels through the activity of GABAergic inhibitory systems from both the superior olivary nucleus (SON) and local inhibitory neurons that follow NM activity monosynaptically.
doi:10.3389/fncir.2014.00047
PMCID: PMC4023016  PMID: 24847212
brainstem auditory nucleus; interaural difference cues; SON; tonic inhibition; phasic inhibition
9.  Adaptive Reweighting of Auditory Localization Cues in Response to Chronic Unilateral Ear-plugging in Humans 
Localizing a sound source involves the detection and integration of various spatial cues present in the sound waves at each ear. Previous studies indicate that the brain circuits underlying sound localization are calibrated by experience of the cues available to each individual. Plasticity in spatial hearing is most pronounced during development, but can also be demonstrated during adulthood under certain circumstances. Investigations into whether adult humans can adjust to reduced input in one ear and learn a new correspondence between interaural difference cues and directions in space have produced conflicting results. Here we show that humans of both sexes can relearn to localize broadband sounds with a “flat” spectrum in the horizontal plane after altering the spatial cues available by plugging one ear. In subjects who received daily training, localization accuracy progressively shifted back toward their pre-plug performance after one week of ear-plugging, whereas no improvement was seen if all trials were carried out on the same day. However, localization performance did not improve on a task that employed stimuli in which the source spectrum was randomized from trial to trial, indicating that monaural spectral cues are needed for plasticity. We also characterized the effects of the earplug on sensitivity to interaural time and level differences, and found no clear evidence for adaptation to these cues as the free-field localization performance improved. These findings suggest that the mature auditory system can accommodate abnormal inputs and maintain a stable spatial percept by reweighting different cues according to how informative they are.
doi:10.1523/JNEUROSCI.5488-09.2010
PMCID: PMC4225134  PMID: 20371808
Auditory; Binaural; Plasticity; Human; Spatial perception; Hearing; Training
10.  An Overview of the Major Phenomena of the Localization of Sound Sources by Normal-Hearing, Hearing-Impaired, and Aided Listeners 
Trends in Hearing  2014;18:2331216514560442.
Localizing a sound source requires the auditory system to determine its direction and its distance. In general, hearing-impaired listeners do less well in experiments measuring localization performance than normal-hearing listeners, and hearing aids often exacerbate matters. This article summarizes the major experimental effects in direction (and its underlying cues of interaural time differences and interaural level differences) and distance for normal-hearing, hearing-impaired, and aided listeners. Front/back errors and the importance of self-motion are noted. The influence of vision on the localization of real-world sounds is emphasized, such as through the ventriloquist effect or the intriguing link between spatial hearing and visual attention.
doi:10.1177/2331216514560442
PMCID: PMC4271773  PMID: 25492094
spatial hearing; hearing impairment; hearing aids; vision; evolution
11.  Single-sided deafness and directional hearing: contribution of spectral cues and high-frequency hearing loss in the hearing ear 
Direction-specific interactions of sound waves with the head, torso, and pinna provide unique spectral-shape cues that are used for the localization of sounds in the vertical plane, whereas horizontal sound localization is based primarily on the processing of binaural acoustic differences in arrival time (interaural time differences, or ITDs) and sound level (interaural level differences, or ILDs). Because the binaural sound-localization cues are absent in listeners with total single-sided deafness (SSD), their ability to localize sound is heavily impaired. However, some studies have reported that SSD listeners are able, to some extent, to localize sound sources in azimuth, although the underlying mechanisms used for localization are unclear. To investigate whether SSD listeners rely on monaural pinna-induced spectral-shape cues of their hearing ear for directional hearing, we investigated localization performance for low-pass filtered (LP, <1.5 kHz), high-pass filtered (HP, >3 kHz), and broadband (BB, 0.5–20 kHz) noises in the two-dimensional frontal hemifield. We tested whether localization performance of SSD listeners further deteriorated when the pinna cavities of their hearing ear were filled with a mold that disrupted their spectral-shape cues. To remove the potential use of perceived sound level as an invalid azimuth cue, we randomly varied stimulus presentation levels over a broad range (45–65 dB SPL). Several listeners with SSD could localize HP and BB sound sources in the horizontal plane, but inter-subject variability was considerable. Localization performance of these listeners was strongly reduced when their spectral pinna cues were diminished by the mold. We further show that the inter-subject variability among SSD listeners can be explained to a large extent by the severity of high-frequency hearing loss in their hearing ear.
doi:10.3389/fnins.2014.00188
PMCID: PMC4082092  PMID: 25071433
azimuth; head-shadow effect; mold; single-sided deaf(ness); spectral pinna-cues
12.  Sound lateralisation in patients with left or right cerebral hemispheric lesions: relation with unilateral visuospatial neglect 
OBJECTIVES—To localise the brain lesion that causes disturbances of sound lateralisation and to examine the correlation between such deficit and unilateral visuospatial neglect.
METHOD—There were 29 patients with right brain damage, 15 patients with left brain damage, and 22 healthy controls, all of whom had normal auditory and binaural thresholds. A headphone device delivered sound to the left and right ears with an interaural time difference. The amplitude (an index of the ability to detect sound image shifts from the centre) and midpoint (an index of deviation of the interaural time difference range perceived as the centre) parameters of interaural time difference were analysed in each subject using 10 consecutive stable sawtooth waves.
RESULTS—The amplitude of the interaural time difference was significantly higher in patients with right brain damage than in controls. The midpoint of the interaural time difference was significantly more deviated in patients with right brain damage than in those with left brain damage and in controls (p<0.05). Patients with right brain damage whose lesions affected both the parietal lobe and the auditory pathway showed a significantly higher amplitude and a more deviated midpoint than the controls, whereas patients with right brain damage involving only the parietal lobe showed a midpoint significantly deviated from that of the controls (p<0.05). Abnormal sound lateralisation correlated with unilateral visuospatial neglect (p<0.05).
CONCLUSIONS—The right parietal lobe plays an important part in sound lateralisation. Sound lateralisation is also influenced by lesions of the right auditory pathway, although the effect of such lesions is less than that of the right parietal lobe. Disturbances of sound lateralisation correlate with unilateral visuospatial neglect.


PMCID: PMC1736561  PMID: 10486395
13.  The contribution of high frequencies to human brain activity underlying horizontal localization of natural spatial sounds 
BMC Neuroscience  2007;8:78.
Background
In the field of auditory neuroscience, much research has focused on the neural processes underlying human sound localization. A recent magnetoencephalography (MEG) study investigated localization-related brain activity by measuring the N1m event-related response originating in the auditory cortex. It was found that the dynamic range of the right-hemispheric N1m response, defined as the mean difference in response magnitude between contralateral and ipsilateral stimulation, reflects cortical activity related to the discrimination of horizontal sound direction. Interestingly, the results also suggested that the presence of realistic spectral information within horizontally located spatial sounds resulted in a larger right-hemispheric N1m dynamic range. Spectral cues being predominant at high frequencies, the present study further investigated the issue by removing frequencies from the spatial stimuli with low-pass filtering. This resulted in a stepwise elimination of direction-specific spectral information. Interaural time and level differences were kept constant. The original, unfiltered stimuli were broadband noise signals presented from five frontal horizontal directions and binaurally recorded for eight human subjects with miniature microphones placed in each subject's ear canals. Stimuli were presented to the subjects during MEG registration and in a behavioral listening experiment.
Results
The dynamic range of the right-hemispheric N1m amplitude was not significantly affected even when all frequencies above 600 Hz were removed. The dynamic range of the left-hemispheric N1m response was significantly diminished by the removal of frequencies over 7.5 kHz. The subjects' behavioral sound direction discrimination was only affected by the removal of frequencies over 600 Hz.
Conclusion
In accord with previous psychophysical findings, the current results indicate that frontal horizontal sound localization and related right-hemispheric cortical processes are insensitive to the presence of high-frequency spectral information. The previously described changes in localization-related brain activity, reflected in the enlarged N1m dynamic range elicited by natural spatial stimuli, can most likely be attributed to the processing of individualized spatial cues present already at relatively low frequencies. The left-hemispheric effect could be an indication of left-hemispheric processing of high-frequency sound information unrelated to sound localization. Taken together, these results provide converging evidence for a hemispheric asymmetry in sound localization.
doi:10.1186/1471-2202-8-78
PMCID: PMC2045670  PMID: 17897443
14.  Are Interaural Time and Level Differences Represented by Independent or Integrated Codes in the Human Auditory Cortex? 
Sound localization is important for orienting and focusing attention and for segregating sounds from different sources in the environment. In humans, horizontal sound localization mainly relies on interaural differences in sound arrival time and sound level. Despite their perceptual importance, the neural processing of interaural time and level differences (ITDs and ILDs) remains poorly understood. Animal studies suggest that, in the brainstem, ITDs and ILDs are processed independently by different specialized circuits. The aim of the current study was to investigate whether, at higher processing levels, they remain independent or are integrated into a common code of sound laterality. For that, we measured late auditory cortical potentials in response to changes in sound lateralization elicited by perceptually matched changes in ITD and/or ILD. The responses to the ITD and ILD changes exhibited significant morphological differences. At the same time, however, they originated from overlapping areas of the cortex and showed clear evidence for functional coupling. These results suggest that the auditory cortex contains an integrated code of sound laterality, but also retains independent information about ITD and ILD cues. This cue-related information might be used to assess how consistent the cues are, and thus, how likely they would have arisen from the same source.
doi:10.1007/s10162-013-0421-0
PMCID: PMC3901864  PMID: 24218332
electroencephalography (EEG); adaptation; horizontal sound localization; spatial hearing
15.  Behavioral Sensitivity to Broadband Binaural Localization Cues in the Ferret 
Although the ferret has become an important model species for studying both fundamental and clinical aspects of spatial hearing, previous behavioral work has focused on studies of sound localization and spatial release from masking in the free field. This makes it difficult to tease apart the role played by different spatial cues. In humans and other species, interaural time differences (ITDs) and interaural level differences (ILDs) play a critical role in sound localization in the azimuthal plane and also facilitate sound source separation in noisy environments. In this study, we used a range of broadband noise stimuli presented via customized earphones to measure ITD and ILD sensitivity in the ferret. Our behavioral data show that ferrets are extremely sensitive to changes in either binaural cue, with levels of performance approximating that found in humans. The measured thresholds were relatively stable despite extensive and prolonged (>16 weeks) testing on ITD and ILD tasks with broadband stimuli. For both cues, sensitivity was reduced at shorter durations. In addition, subtle effects of changing the stimulus envelope were observed on ITD, but not ILD, thresholds. Sensitivity to these cues also differed in other ways. Whereas ILD sensitivity was unaffected by changes in average binaural level or interaural correlation, the same manipulations produced much larger effects on ITD sensitivity, with thresholds declining when either of these parameters was reduced. The binaural sensitivity measured in this study can largely account for the ability of ferrets to localize broadband stimuli in the azimuthal plane. Our results are also broadly consistent with data from humans and confirm the ferret as an excellent experimental model for studying spatial hearing.
doi:10.1007/s10162-013-0390-3
PMCID: PMC3705081  PMID: 23615803
sound localization; spatial hearing; psychometric function; interaural time difference; interaural level difference; azimuth
16.  Perception of Binaural Cues Develops in Children Who Are Deaf through Bilateral Cochlear Implantation 
PLoS ONE  2014;9(12):e114841.
There are significant challenges to restoring binaural hearing to children who have been deaf from an early age. The uncoordinated and poor temporal information available from cochlear implants distorts perception of interaural timing differences normally important for sound localization and listening in noise. Moreover, binaural development can be compromised by bilateral and unilateral auditory deprivation. Here, we studied perception of both interaural level and timing differences in 79 children/adolescents using bilateral cochlear implants and 16 peers with normal hearing. They were asked on which side of their head they heard unilaterally or bilaterally presented click or electrical pulse trains. Interaural level cues were identified by most participants, including adolescents with long periods of unilateral cochlear implant use and little bilateral implant experience. Interaural timing cues were not detected by new bilateral adolescent users, consistent with previous evidence. Evidence of binaural timing detection was, for the first time, found in children who had much longer implant experience, but it was marked by poorer than normal sensitivity and abnormally strong dependence on current level differences between implants. In addition, children with prior unilateral implant use showed a higher proportion of responses to their first implanted sides than children implanted simultaneously. These data indicate that there are functional repercussions of developing binaural hearing through bilateral cochlear implants, particularly when provided sequentially; nonetheless, children have an opportunity to use these devices to hear better in noise and gain spatial hearing.
doi:10.1371/journal.pone.0114841
PMCID: PMC4273969  PMID: 25531107
17.  Behavioral Sensitivity to Interaural Time Differences in the Rabbit 
Hearing research  2007;235(1-2):134-142.
An important cue for sound localization and for the separation of signals from noise is the interaural time difference (ITD). Humans are able to localize sounds within 1–2° and can detect very small changes in the ITD (10–20 μs). In contrast, many animals localize sounds with less precision than humans. Rabbits, for example, have sound localization thresholds of ~22°. There is only limited information about behavioral ITD discrimination in the animals with poor sound localization acuity that are typically used for neural recordings. For this study, we measured behavioral discrimination of ITDs in the rabbit for a range of reference ITDs from 0 to ±300 μs. The behavioral task was conditioned avoidance and the stimulus was band-limited noise (500–1500 Hz). Across animals, the average discrimination threshold was 50–60 μs for reference ITDs of 0 to ±200 μs. There was no trend in the thresholds across this range of reference ITDs. For a reference ITD of ±300 μs, which is near the limit of the physiological window defined by the head width in this species, the discrimination threshold increased to ~100 μs. ITD discrimination in rabbits is less acute than in cats, which have a similar head size. This result supports the suggestion that ITD discrimination, like sound localization (see Heffner, 1997, Acta Otolaryngol Suppl 532:46–53), is determined by factors other than head size.
doi:10.1016/j.heares.2007.11.003
PMCID: PMC2692955  PMID: 18093767
Sound localization; animal psychoacoustics; neural discrimination
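For a rough sense of how head size bounds the physiological ITD window mentioned above, the classic Woodworth spherical-head approximation ITD = (a/c)(θ + sin θ) can be evaluated for different head radii. The radii below are illustrative guesses rather than measured anatomy; the "rabbit" value is simply chosen so the maximum lands near the ~±300 μs window quoted in the abstract.

```python
import numpy as np

SPEED_OF_SOUND = 343.0            # m/s, approximate

def woodworth_itd(azimuth_deg, head_radius_m):
    """Spherical-head (Woodworth) approximation: ITD = (a / c) * (theta + sin theta)."""
    theta = np.radians(azimuth_deg)
    return head_radius_m / SPEED_OF_SOUND * (theta + np.sin(theta))

# Head radii are rough illustrative guesses, not measured values.
for species, radius_m in [("human", 0.0875), ("rabbit", 0.040)]:
    print(f"{species:6s}: max ITD ≈ {woodworth_itd(90.0, radius_m) * 1e6:4.0f} us "
          f"at 90 deg azimuth")
```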
18.  Low-Frequency Envelope Sensitivity Produces Asymmetric Binaural Tuning Curves 
Journal of Neurophysiology  2008;100(4):2381-2396.
Neurons in the auditory midbrain are sensitive to differences in the timing of sounds at the two ears—an important sound localization cue. We used broadband noise stimuli to investigate the interaural-delay sensitivity of low-frequency neurons in two midbrain nuclei: the inferior colliculus (IC) and the dorsal nucleus of the lateral lemniscus. Noise-delay functions showed asymmetries not predicted from a linear dependence on interaural correlation: a stretching along the firing-rate dimension (rate asymmetry), and a skewing along the interaural-delay dimension (delay asymmetry). These asymmetries were produced by an envelope-sensitive component to the response that could not entirely be accounted for by monaural or binaural nonlinearities, instead indicating an enhancement of envelope sensitivity at or after the level of the superior olivary complex. In IC, the skew-like asymmetry was consistent with intermediate-type responses produced by the convergence of ipsilateral peak-type inputs and contralateral trough-type inputs. This suggests a stereotyped pattern of input to the IC. In the course of this analysis, we were also able to determine the contribution of time and phase components to neurons' internal delays. These findings have important consequences for the neural representation of interaural timing differences and interaural correlation—cues critical to the perception of acoustic space.
doi:10.1152/jn.90393.2008
PMCID: PMC2576218  PMID: 18753329
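As a baseline for the "linear dependence on interaural correlation" from which the asymmetries described above deviate, a purely correlation-based noise-delay function can be sketched as follows (an illustration only: the moving-average low-pass, the sample rate, and the 200 μs stimulus ITD are arbitrary choices, not the study's stimuli or analysis).

```python
import numpy as np

FS = 100_000                                        # sample rate (Hz)
rng = np.random.default_rng(2)
broadband = rng.standard_normal(FS // 2)            # 0.5 s of broadband noise
# Crude low-pass (1 ms moving average) to mimic a low-frequency channel.
noise = np.convolve(broadband, np.ones(100) / 100, mode="same")

def delayed(x, delay_samples):
    return np.roll(x, delay_samples)                # circular shift keeps lengths equal

stim_itd = 20                                       # stimulus ITD in samples (200 us)
left, right = noise, delayed(noise, stim_itd)

# Noise-delay function of a purely correlation-based (linear) model neuron:
# its response simply tracks interaural correlation after each internal delay.
for internal in range(0, 41, 5):
    r = np.corrcoef(left, delayed(right, -internal))[0, 1]
    print(f"internal delay {internal * 1e6 / FS:5.0f} us: correlation {r:+.2f}")
```

The correlation peaks at the internal delay matching the stimulus ITD; the asymmetries reported in the entry above are departures from this linear picture.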
19.  Interaural Correlation Fails to Account for Detection in a Classic Binaural Task: Dynamic ITDs Dominate N0Sπ Detection 
Binaural signal detection in an N0Sπ task relies on interaural disparities introduced by adding an antiphasic signal to diotic noise. What metric of interaural disparity best predicts performance? Some models use interaural correlation; others differentiate between dynamic interaural time differences (ITDs) and interaural level differences (ILDs) of the effective stimulus. To examine the relative contributions of ITDs and ILDs in binaural detection, we developed a novel signal processing technique that selectively degrades different aspects (potential cues) of binaural stimuli (e.g., only ITDs are scrambled). Degrading a particular cue will affect performance only if that cue is relevant to the binaural processing underlying detection. This selective scrambling technique was applied to the stimuli of a classic N0Sπ task in which the listener had to detect an antiphasic 500-Hz signal in the presence of a diotic wideband noise masker. Data obtained from five listeners showed that (1) selective scrambling of ILDs had little effect on binaural detection, (2) selective scrambling of ITDs significantly degraded detection, and (3) combined scrambling of ILDs and ITDs had the same effect as exclusive scrambling of ITDs. Regarding the question of which stimulus properties determine detection, we conclude that for this binaural task (1) dynamic ITDs dominate detection performance, (2) ILDs are largely irrelevant, and (3) the interaural correlation of the stimulus is a poor predictor of detection. Two simple stimulus-based models that each reproduce all binaural aspects of the data quite well are described: (1) a single-parameter detection model using ITD variance as the detection criterion and (2) a compressive transformation followed by a cross-correlation analysis. The success of both of these contrasting models shows that our data alone cannot reveal the mechanisms underlying the dominance of ITD cues. The physiological implications of our findings are discussed.
doi:10.1007/s10162-009-0185-8
PMCID: PMC2820206  PMID: 19760461
binaural detection; masking; ITD; ILD; MLD; binaural modulation
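The "ITD variance" detection statistic described above can be illustrated with a toy N0Sπ simulation (a sketch, not the authors' analysis; the 400–600 Hz filter, the signal level, and the duration are arbitrary choices). The instantaneous interaural phase difference around 500 Hz is converted to an ITD track; its standard deviation is zero for the diotic masker alone and becomes nonzero once the antiphasic signal is added.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 48_000                    # sample rate (Hz)
FC = 500.0                     # signal frequency (Hz)
DUR = 0.5                      # stimulus duration (s)
rng = np.random.default_rng(0)

# Narrow band around the signal frequency (roughly auditory-filter-like).
SOS = butter(2, [400.0, 600.0], btype="band", fs=FS, output="sos")

def itd_std(left, right):
    """SD of the instantaneous ITD track in the band around FC (seconds)."""
    l, r = sosfiltfilt(SOS, left), sosfiltfilt(SOS, right)
    dphi = np.angle(hilbert(l) * np.conj(hilbert(r)))   # wrapped phase difference
    return np.std(dphi / (2 * np.pi * FC))

t = np.arange(int(FS * DUR)) / FS
noise = rng.standard_normal(t.size)                     # diotic masker (N0)
tone = 0.05 * np.sin(2 * np.pi * FC * t)                # weak antiphasic signal (S_pi)

print(f"N0 alone : ITD SD = {itd_std(noise, noise) * 1e6:6.2f} us")
print(f"N0 + Spi : ITD SD = {itd_std(noise + tone, noise - tone) * 1e6:6.2f} us")
```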
21.  Mutation in the Kv3.3 Voltage-Gated Potassium Channel Causing Spinocerebellar Ataxia 13 Disrupts Sound-Localization Mechanisms 
PLoS ONE  2013;8(10):e76749.
Normal sound localization requires precise comparisons of sound timing and pressure levels between the two ears. The primary localization cues are interaural time differences (ITDs) and interaural level differences (ILDs). Voltage-gated potassium channels, including Kv3.3, are highly expressed in the auditory brainstem and are thought to underlie the exquisite temporal precision and rapid spike rates that characterize brainstem binaural pathways. An autosomal dominant mutation in the gene encoding Kv3.3 has been demonstrated in a large Filipino kindred, in which it manifests as spinocerebellar ataxia type 13 (SCA13). This kindred provides a rare opportunity to test in vivo the importance of a specific channel subunit for human hearing. Here, we demonstrate psychophysically that individuals with the mutant allele exhibit profound deficits in both ITD and ILD sensitivity, despite showing no obvious impairment in pure-tone sensitivity with either ear. Surprisingly, several individuals exhibited the auditory deficits even though they were pre-symptomatic for SCA13. We would expect that impairments of binaural processing as great as those observed in this family would result in prominent deficits in the localization of sound sources and in loss of the "spatial release from masking" that aids in understanding speech in the presence of competing sounds.
doi:10.1371/journal.pone.0076749
PMCID: PMC3792041  PMID: 24116147
22.  Decoding neural responses to temporal cues for sound localization 
eLife  2013;2:e01312.
The activity of sensory neural populations carries information about the environment. This may be extracted from neural activity using different strategies. In the auditory brainstem, a recent theory proposes that sound location in the horizontal plane is decoded from the relative summed activity of two populations in each hemisphere, whereas earlier theories hypothesized that the location was decoded from the identity of the most active cells. We tested the performance of various decoders of neural responses in increasingly complex acoustical situations, including spectrum variations, noise, and sound diffraction. We demonstrate that there is insufficient information in the pooled activity of each hemisphere to estimate sound direction in a reliable way consistent with behavior, whereas robust estimates can be obtained from neural activity by taking into account the heterogeneous tuning of cells. These estimates can still be obtained when only contralateral neural responses are used, consistent with unilateral lesion studies.
DOI: http://dx.doi.org/10.7554/eLife.01312.001
eLife digest
Having two ears allows animals to localize the source of a sound. For example, barn owls can snatch their prey in complete darkness by relying on sound alone. It has been known for a long time that this ability depends on tiny differences in the sounds that arrive at each ear, including differences in the time of arrival: in humans, for example, sound will arrive at the ear closer to the source up to half a millisecond earlier than it arrives at the other ear. These differences are called interaural time differences. However, the way that the brain processes this information to figure out where the sound came from has been the source of much debate.
Several theories have been proposed for how the brain calculates position from interaural time differences. According to the hemispheric theory, the activities of particular binaurally sensitive neurons on each side of the brain are added together: adding signals in this way has been shown to maximize sensitivity to time differences under simple, controlled circumstances. The peak decoding theory proposes that the brain can work out the location of a sound on the basis of which neurons responded most strongly to the sound.
Both theories have their potential advantages, and there is evidence in support of each. Now, Goodman et al. have used computational simulations to compare the models under ecologically relevant circumstances. The simulations show that the results predicted by both models are inconsistent with those observed in real animals, and they propose that the brain must use the full pattern of neural responses to calculate the location of a sound.
One of the parts of the brain that is responsible for locating sounds is the inferior colliculus. Studies in cats and humans have shown that damage to the inferior colliculus on one side of the brain prevents accurate localization of sounds on the opposite side of the body, but the animals are still able to locate sounds on the same side. This finding is difficult to explain using the hemispheric model, but Goodman et al. show that it can be explained with pattern-based models.
DOI: http://dx.doi.org/10.7554/eLife.01312.002
doi:10.7554/eLife.01312
PMCID: PMC3844708  PMID: 24302571
sound localization; neural coding; audition
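A toy version of the decoder comparison in the entry above can be sketched as follows (illustrative only: invented Gaussian tuning curves, Poisson spiking, and a simple lookup readout; it does not include the spectrum variation, background noise, or diffraction effects the study examined).

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented population: Gaussian ITD tuning, heterogeneous best ITDs per hemisphere.
best_itds = np.concatenate([rng.uniform(50, 400, 40),     # "left" pool (us)
                            rng.uniform(-400, -50, 40)])  # "right" pool (us)
widths = rng.uniform(80, 200, best_itds.size)             # tuning widths (us)
peak_counts = rng.uniform(5, 20, best_itds.size)          # mean counts at best ITD

def mean_counts(itd_us):
    return peak_counts * np.exp(-0.5 * ((itd_us - best_itds) / widths) ** 2)

candidates = np.linspace(-350, 350, 141)                  # candidate ITDs (us)
templates = np.stack([mean_counts(c) for c in candidates])

# Hemispheric readout: map the summed left-minus-right rate onto ITD via a lookup.
hemi_sign = np.where(best_itds > 0, 1.0, -1.0)
hemi_curve = templates @ hemi_sign

def decode(observed):
    pattern = candidates[np.argmin(((templates - observed) ** 2).sum(axis=1))]
    hemispheric = candidates[np.argmin(np.abs(hemi_curve - observed @ hemi_sign))]
    return pattern, hemispheric

true_itds = rng.uniform(-300, 300, 200)
errs = np.array([np.abs(np.array(decode(rng.poisson(mean_counts(itd)))) - itd)
                 for itd in true_itds])
print(f"median |error|: pattern {np.median(errs[:, 0]):.0f} us, "
      f"hemispheric {np.median(errs[:, 1]):.0f} us")
```

The same scaffold can be extended with level roving or spectral variation to probe how quickly a one-dimensional summed-rate readout degrades relative to a full-pattern readout.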
23.  The Conductive Hearing Loss Due to an Experimentally Induced Middle Ear Effusion Alters the Interaural Level and Time Difference Cues to Sound Location 
Otitis media with effusion (OME) is a pathologic condition of the middle ear that leads to a mild to moderate conductive hearing loss as a result of fluid in the middle ear. Recurring OME in children during the first few years of life has been shown to be associated with poor detection and recognition of sounds in noisy environments, hypothesized to result from altered sound localization cues. To explore this hypothesis, we simulated a middle ear effusion by filling the middle ear space of chinchillas with different viscosities and volumes of silicone oil to simulate varying degrees of OME. While the effects of middle ear effusions on the interaural level difference (ILD) cue to location are known, little is known about whether and how middle ear effusions affect interaural time differences (ITDs). Cochlear microphonic amplitudes and phases were measured in response to sounds delivered from several locations in azimuth before and after filling the middle ear with fluid. Significant attenuations (20–40 dB) of sound were observed when the middle ear was filled with at least 1.0 ml of fluid with a viscosity of 3.5 Poise (P) or greater. As expected, ILDs were altered by ~30 dB. Additionally, ITDs were shifted by ~600 μs for low frequency stimuli (<4 kHz) due to a delay in the transmission of sound to the inner ear. The data show that in an experimental model of OME, ILDs and ITDs are shifted in the spatial direction of the ear without the experimental effusion.
doi:10.1007/s10162-012-0335-2
PMCID: PMC3441957  PMID: 22648382
otitis media with effusion; conductive hearing loss; sound localization; cochlear microphonic
24.  Behavioural sensitivity to binaural spatial cues in ferrets: evidence for plasticity in the duplex theory of sound localization 
For over a century, the duplex theory has guided our understanding of human sound localization in the horizontal plane. According to this theory, the auditory system uses interaural time differences (ITDs) and interaural level differences (ILDs) to localize low-frequency and high-frequency sounds, respectively. Whilst this theory successfully accounts for the localization of tones by humans, some species show very different behaviour. Ferrets are widely used for studying both clinical and fundamental aspects of spatial hearing, but it is not known whether the duplex theory applies to this species or, if so, to what extent the frequency range over which each binaural cue is used depends on acoustical or neurophysiological factors. To address these issues, we trained ferrets to lateralize tones presented over earphones and found that the frequency dependence of ITD and ILD sensitivity broadly paralleled that observed in humans. Compared with humans, however, the transition between ITD and ILD sensitivity was shifted toward higher frequencies. We found that the frequency dependence of ITD sensitivity in ferrets can partially be accounted for by acoustical factors, although neurophysiological mechanisms are also likely to be involved. Moreover, we show that binaural cue sensitivity can be shaped by experience, as training ferrets on a 1-kHz ILD task resulted in significant improvements in thresholds that were specific to the trained cue and frequency. Our results provide new insights into the factors limiting the use of different sound localization cues and highlight the importance of sensory experience in shaping the underlying neural mechanisms.
doi:10.1111/ejn.12402
PMCID: PMC4063341  PMID: 24256073
auditory localization; interaural level difference; interaural time difference; phase ambiguity; spatial hearing; training
25.  Dichotic sound localization properties of duration-tuned neurons in the inferior colliculus of the big brown bat 
Electrophysiological studies on duration-tuned neurons (DTNs) from the mammalian auditory midbrain have typically evoked spiking responses from these cells using monaural or free-field acoustic stimulation focused on the contralateral ear, with fewer studies devoted to examining the electrophysiological properties of duration tuning using binaural stimulation. Because the inferior colliculus (IC) receives convergent inputs from lower brainstem auditory nuclei that process sounds from each ear, many midbrain neurons have responses shaped by binaural interactions and are selective to binaural cues important for sound localization. In this study, we used dichotic stimulation to vary interaural level difference (ILD) and interaural time difference (ITD) acoustic cues and explore the binaural interactions and response properties of DTNs and non-DTNs from the IC of the big brown bat (Eptesicus fuscus). Our results reveal that both DTNs and non-DTNs can have responses selective to binaural stimulation, with a majority of IC neurons showing some type of ILD selectivity, fewer cells showing ITD selectivity, and a number of neurons showing both ILD and ITD selectivity. This study provides the first demonstration that the temporally selective responses of DTNs from the vertebrate auditory midbrain can be selective to binaural cues used for sound localization in addition to having spiking responses that are selective for stimulus frequency, amplitude, and duration.
doi:10.3389/fphys.2014.00215
PMCID: PMC4050336  PMID: 24959149
auditory neurophysiology; binaural hearing; dichotic stimulation; Eptesicus fuscus; sound localization
