1.  Using Evoked Potentials to Match Interaural Electrode Pairs with Bilateral Cochlear Implants 
Bilateral cochlear implantation seeks to restore the advantages of binaural hearing to the profoundly deaf by providing binaural cues normally important for accurate sound localization and speech reception in noise. Psychophysical observations suggest that a key issue for the implementation of a successful binaural prosthesis is the ability to match the cochlear positions of stimulation channels in each ear. We used a cat model of bilateral cochlear implants with eight-electrode arrays implanted in each cochlea to develop and test a noninvasive method based on evoked potentials for matching interaural electrodes. The arrays allowed the cochlear location of stimulation to be independently varied in each ear. The binaural interaction component (BIC) of the electrically evoked auditory brainstem response (EABR) was used as an assay of binaural processing. BIC amplitude peaked for interaural electrode pairs at the same relative cochlear position and dropped with increasing cochlear separation in either direction. To test the hypothesis that BIC amplitude peaks when electrodes from the two sides activate maximally overlapping neural populations, we measured multiunit neural activity along the tonotopic gradient of the inferior colliculus (IC) with 16-channel recording probes and determined the spatial pattern of IC activation for each stimulating electrode. We found that the interaural electrode pairings that produced the best-aligned IC activation patterns were also those that yielded maximum BIC amplitude. These results suggest that EABR measurements may provide a method for assigning frequency–channel mappings in bilateral implant recipients, such as pediatric patients, for whom psychophysical measures of pitch ranking or binaural fusion are unavailable.
doi:10.1007/s10162-006-0069-0
PMCID: PMC1907379  PMID: 17225976
binaural hearing; electric stimulation; neural prosthesis; cochlear implant; inferior colliculus
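
The binaural interaction component used in the study above is conventionally obtained by subtracting the sum of the two monaurally evoked responses from the binaurally evoked response. A minimal sketch of that derivation (hypothetical NumPy arrays holding averaged EABR waveforms on a common time base; the peak-to-trough amplitude measure is an illustrative assumption, not necessarily the authors' exact metric):

    import numpy as np

    def binaural_interaction_component(eabr_binaural, eabr_left, eabr_right):
        # Conventional BIC derivation: binaural response minus the sum of the
        # two monaural responses (all inputs are averaged waveforms, same time base).
        return eabr_binaural - (eabr_left + eabr_right)

    def bic_amplitude(bic_waveform):
        # Illustrative amplitude measure: peak-to-trough magnitude of the
        # difference waveform within the analysis window.
        return np.max(bic_waveform) - np.min(bic_waveform)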
3.  Studies on Bilateral Cochlear Implants at the University of Wisconsin’s Binaural Hearing and Speech Lab 
This report highlights research projects relevant to binaural and spatial hearing in adults and children. In the past decade we have made progress in understanding the impact of bilateral cochlear implants (BiCIs) on performance in adults and children. However, BiCI users typically do not perform as well as normal-hearing (NH) listeners. In this paper we describe the benefits from BiCIs compared with a single CI, focusing on measures of spatial hearing and speech understanding in noise. We highlight the fact that in BiCI listening the devices in the two ears are not coordinated; thus, binaural spatial cues that are available to NH listeners are not available to BiCI users. Through the use of research processors that carefully control the stimulus delivered to each electrode in each ear, we are able to preserve binaural cues and deliver them with fidelity to BiCI users. Results from those studies are discussed as well, with a focus on the effect of age at onset of deafness and plasticity of binaural sensitivity. Our work with children has expanded in both the number of subjects tested and the age range included. We have now tested dozens of children ranging in age from 2-14 years. Our findings suggest that spatial hearing abilities emerge with bilateral experience. While we originally focused on studying performance in the free field, where real-world listening experiments are conducted, more recently we have begun to conduct studies under carefully controlled binaural stimulation conditions with children as well. We have also studied language acquisition and speech perception and production in young CI users. Finally, a running theme of this research program is the systematic investigation of the numerous factors that contribute to spatial and binaural hearing in BiCI users. By using CI simulations (with vocoders) and studying NH listeners under degraded listening conditions, we are able to tease apart limitations due to the hardware/software of the CI systems from limitations due to neural pathology.
doi:10.3766/jaaa.23.6.9
PMCID: PMC3517294  PMID: 22668767
4.  Dichotic sound localization properties of duration-tuned neurons in the inferior colliculus of the big brown bat 
Electrophysiological studies on duration-tuned neurons (DTNs) from the mammalian auditory midbrain have typically evoked spiking responses from these cells using monaural or free-field acoustic stimulation focused on the contralateral ear, with fewer studies devoted to examining the electrophysiological properties of duration tuning using binaural stimulation. Because the inferior colliculus (IC) receives convergent inputs from lower brainstem auditory nuclei that process sounds from each ear, many midbrain neurons have responses shaped by binaural interactions and are selective to binaural cues important for sound localization. In this study, we used dichotic stimulation to vary interaural level difference (ILD) and interaural time difference (ITD) acoustic cues and explore the binaural interactions and response properties of DTNs and non-DTNs from the IC of the big brown bat (Eptesicus fuscus). Our results reveal that both DTNs and non-DTNs can have responses selective to binaural stimulation, with a majority of IC neurons showing some type of ILD selectivity, fewer cells showing ITD selectivity, and a number of neurons showing both ILD and ITD selectivity. This study provides the first demonstration that the temporally selective responses of DTNs from the vertebrate auditory midbrain can be selective to binaural cues used for sound localization in addition to having spiking responses that are selective for stimulus frequency, amplitude, and duration.
doi:10.3389/fphys.2014.00215
PMCID: PMC4050336  PMID: 24959149
auditory neurophysiology; binaural hearing; dichotic stimulation; Eptesicus fuscus; sound localization
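
Dichotic stimulation of the kind described above amounts to presenting the same tone to the two ears with a controlled level and timing difference. A minimal sketch (all parameter values are hypothetical, not taken from the study):

    import numpy as np

    def dichotic_tone(freq_hz, dur_s, fs, ild_db, itd_us):
        # Left/right tone pair with a given interaural level difference
        # (positive = right louder) and interaural time difference
        # (positive = right delayed).
        t = np.arange(int(dur_s * fs)) / fs
        left = np.sin(2 * np.pi * freq_hz * t)
        right = np.sin(2 * np.pi * freq_hz * t) * 10 ** (ild_db / 20)
        delay = int(round(abs(itd_us) * 1e-6 * fs))   # ITD in samples
        if itd_us > 0:
            right = np.r_[np.zeros(delay), right[:len(right) - delay]]
        elif itd_us < 0:
            left = np.r_[np.zeros(delay), left[:len(left) - delay]]
        return left, right

    # Hypothetical usage: a 30-kHz, 5-ms tone with +10 dB ILD and +100 us ITD.
    left, right = dichotic_tone(30_000, 0.005, 250_000, ild_db=10, itd_us=100)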
5.  Monaural Deprivation Disrupts Development of Binaural Selectivity in Auditory Midbrain and Cortex 
Neuron  2010;65(5):718-731.
SUMMARY
Degraded sensory experience during critical periods of development can have adverse effects on brain function. In the auditory system, conductive hearing loss associated with childhood ear infections can produce long-lasting deficits in auditory perceptual acuity, much like amblyopia in the visual system. Here we explore the neural mechanisms that may underlie “amblyaudio” by inducing reversible monaural deprivation (MD) in infant, juvenile and adult rats. MD distorted tonotopic maps, weakened the deprived ear’s representation, strengthened the open ear’s representation and disrupted binaural integration of interaural level differences (ILD). Bidirectional plasticity effects were strictly governed by critical periods, were more strongly expressed in primary auditory cortex than inferior colliculus, and directly impacted neural coding accuracy. These findings highlight a remarkable degree of competitive plasticity between aural representations and suggest that the enduring perceptual sequelae of childhood hearing loss might be traced to maladaptive plasticity during critical periods of auditory cortex development.
doi:10.1016/j.neuron.2010.02.019
PMCID: PMC2849994  PMID: 20223206
6.  Binaural Unmasking with Bilateral Cochlear Implants 
Nearly 100,000 deaf patients worldwide have had their hearing restored by a cochlear implant (CI) fitted to one ear. However, although many patients understand speech well in quiet, even the most successful experience difficulty in noisy situations. In contrast, normal-hearing (NH) listeners achieve improved speech understanding in noise by processing the differences between the waveforms reaching the two ears. Here we show that a form of binaural processing can be achieved by patients fitted with an implant in each ear, leading to substantial improvements in signal detection in the presence of competing sounds. The stimulus in each ear consisted of a narrowband noise masker, to which a tonal signal was sometimes added; this mixture was half-wave rectified, lowpass-filtered, and then used to modulate a 1000-pps biphasic pulse train. All four CI users tested showed significantly better signal detection when the signal was presented out of phase at the two ears than when it was in phase. This advantage occurred even though subjects received information only about the slowly varying envelope of the presented sounds, contrary to previous reports that waveform fine structure dominates binaural processing. If this advantage generalizes to multichannel situations, it would demonstrate that envelope-based CI speech-processing strategies may allow patients to exploit binaural unmasking in order to improve speech understanding in noise. Furthermore, because the tested patients had been deprived of binaural hearing for eight or more years, our results show that some sensitivity to time-varying interaural cues can persist over extended periods of binaural deprivation.
doi:10.1007/s10162-006-0049-4
PMCID: PMC2504627  PMID: 16941078
cochlear implants; binaural hearing; masking level difference; signal detection
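
The stimulus construction described above (narrowband noise masker plus an optional tone, half-wave rectified, lowpass filtered, and used to modulate a 1000-pps biphasic pulse train) can be sketched as follows; the filter order, cutoff, and the signal-inversion step for the out-of-phase condition are illustrative assumptions rather than the authors' exact settings:

    import numpy as np
    from scipy.signal import butter, lfilter

    def envelope_modulated_pulse_train(noise, tone, fs, pulse_rate=1000,
                                       lowpass_hz=50, invert_signal=False):
        # Mix masker and (optionally inverted) signal, half-wave rectify,
        # lowpass filter, and use the result to modulate a biphasic pulse train.
        mix = noise + (-tone if invert_signal else tone)
        rectified = np.maximum(mix, 0.0)                # half-wave rectification
        b, a = butter(4, lowpass_hz / (fs / 2))         # lowpass keeps the envelope
        envelope = lfilter(b, a, rectified)
        pulses = np.zeros_like(envelope)                # biphasic train, one-sample phases
        period = int(fs / pulse_rate)
        pulses[::period] = 1.0
        pulses[1::period] = -1.0
        return envelope * pulses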
7.  Contralateral cochlear effects of ipsilateral damage: no evidence for interaural coupling 
Hearing Research  2009;260(1-2):70.
Lesion studies of the olivocochlear efferents have suggested that feedback via this neuronal pathway normally maintains an appropriate binaural balance in excitability of the two cochlear nerves (Darrow et al., 2006). If true, a decrease in cochlear nerve output from one ear, due to conductive or sensorineural hearing loss, should change cochlear nerve response in the opposite ear via modulation in olivocochlear feedback. To investigate this putative efferent-mediated interaural coupling, we measured cochlear responses repeatedly from both ears in groups of mice for several weeks before, and for up to 5 weeks after, a unilateral manipulation causing either conductive or sensorineural hearing loss. Response measures included amplitude-vs.-level functions for distortion product otoacoustic emissions (DPOAEs) and auditory brainstem responses (ABRs), evoked at 7 log-spaced frequencies. Ipsilateral manipulations included either tympanic membrane removal or an acoustic overstimulation designed to produce a reversible or irreversible threshold shift over a restricted frequency range. None of these ipsilateral manipulations produced systematic changes in contralateral cochlear responses, either at threshold or suprathreshold levels, either in ABRs or DPOAEs. Thus, we find no evidence for compensatory contralateral changes following ipsilateral hearing loss. We did, however, find evidence for age-related increases in DPOAE amplitudes as animals mature from 6 to 12 weeks and evidence for a slow apical spread of noise-induced threshold shifts, which continues for several days post-exposure.
doi:10.1016/j.heares.2009.11.011
PMCID: PMC2815182  PMID: 19944141
hearing loss; feedback; acoustic injury
8.  Developmental plasticity of spatial hearing following asymmetric hearing loss: context-dependent cue integration and its clinical implications 
Under normal hearing conditions, comparisons of the sounds reaching each ear are critical for accurate sound localization. Asymmetric hearing loss should therefore degrade spatial hearing and has become an important experimental tool for probing the plasticity of the auditory system, both during development and adulthood. In clinical populations, hearing loss affecting one ear more than the other is commonly associated with otitis media with effusion, a disorder experienced by approximately 80% of children before the age of two. Asymmetric hearing may also arise in other clinical situations, such as after unilateral cochlear implantation. Here, we consider the role played by spatial cue integration in sound localization under normal acoustical conditions. We then review evidence for adaptive changes in spatial hearing following a developmental hearing loss in one ear, and show that adaptation may be achieved either by learning a new relationship between the altered cues and directions in space or by changing the way different cues are integrated in the brain. We next consider developmental plasticity as a source of vulnerability, describing maladaptive effects of asymmetric hearing loss that persist even when normal hearing is provided. We also examine the extent to which the consequences of asymmetric hearing loss depend upon its timing and duration. Although much of the experimental literature has focused on the effects of a stable unilateral hearing loss, some of the most common hearing impairments experienced by children tend to fluctuate over time. We therefore propose that there is a need to bridge this gap by investigating the effects of recurring hearing loss during development, and outline recent steps in this direction. We conclude by arguing that this work points toward a more nuanced view of developmental plasticity, in which plasticity may be selectively expressed in response to specific sensory contexts, and consider the clinical implications of this.
doi:10.3389/fnsys.2013.00123
PMCID: PMC3873525  PMID: 24409125
auditory localization; binaural; monaural; conductive hearing loss; adaptation; learning; cortex; midbrain
9.  Responses of Auditory Nerve and Anteroventral Cochlear Nucleus Fibers to Broadband and Narrowband Noise: Implications for the Sensitivity to Interaural Delays 
The quality of temporal coding of sound waveforms in the monaural afferents that converge on binaural neurons in the brainstem limits the sensitivity to temporal differences at the two ears. The anteroventral cochlear nucleus (AVCN) houses the cells that project to the binaural nuclei, which are known to have enhanced temporal coding of low-frequency sounds relative to auditory nerve (AN) fibers. We applied a coincidence analysis within the framework of detection theory to investigate the extent to which AVCN processing affects interaural time delay (ITD) sensitivity. Using monaural spike trains to a 1-s broadband or narrowband noise token, we emulated the binaural task of ITD discrimination and calculated just noticeable differences (jnds). The ITD jnds derived from AVCN neurons were lower than those derived from AN fibers, showing that the enhanced temporal coding in the AVCN improves binaural sensitivity to ITDs. AVCN processing also increased the dynamic range of ITD sensitivity and changed the shape of the frequency dependence of ITD sensitivity. Bandwidth dependence of ITD jnds from AN as well as AVCN fibers agreed with psychophysical data. These findings demonstrate that monaural preprocessing in the AVCN improves the temporal code in a way that is beneficial for binaural processing and may be crucial in achieving the exquisite sensitivity to ITDs observed in binaural pathways.
doi:10.1007/s10162-011-0268-1
PMCID: PMC3123442  PMID: 21567250
coincidence detection; interaural time difference; discrimination; binaural; sound localization
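
The coincidence analysis mentioned above pairs spike trains driven by the two (emulated) ears and asks how the coincidence count changes as an internal delay is imposed; the jnd then follows from detection theory applied to that delay curve. The sketch below shows only the coincidence-counting step, with a hypothetical coincidence window and toy spike times, not the authors' full analysis:

    import numpy as np

    def coincidence_count(spikes_left, spikes_right, delay_s, window_s=50e-6):
        # Count spike pairs (times in seconds) falling within +/- window_s of
        # each other after shifting the right-ear train by delay_s.
        shifted = np.asarray(spikes_right) + delay_s
        return sum(int(np.sum(np.abs(shifted - t) <= window_s)) for t in spikes_left)

    # Toy spike trains and a sweep of internal delays; the sharpness of the
    # resulting delay curve limits the just-noticeable ITD.
    sl = np.array([0.0102, 0.0131, 0.0204])
    sr = np.array([0.0101, 0.0133, 0.0202])
    delays = np.arange(-500e-6, 501e-6, 50e-6)
    curve = [coincidence_count(sl, sr, d) for d in delays]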
11.  Neural and Behavioral Sensitivity to Interaural Time Differences Using Amplitude Modulated Tones with Mismatched Carrier Frequencies 
Bilateral cochlear implantation is intended to provide the advantages of binaural hearing, including sound localization and better speech recognition in noise. In most modern implants, temporal information is carried by the envelope of pulsatile stimulation, and thresholds to interaural time differences (ITDs) are generally high compared to those obtained in normal hearing observers. One factor thought to influence ITD sensitivity is the overlap of neural populations stimulated on each side. The present study investigated the effects of acoustically stimulating bilaterally mismatched neural populations in two related paradigms: rabbit neural recordings and human psychophysical testing. The neural coding of interaural envelope timing information was measured in recordings from neurons in the inferior colliculus of the unanesthetized rabbit. Binaural beat stimuli with a 1-Hz difference in modulation frequency were presented at the best modulation frequency and intensity as the carrier frequencies at each ear were varied. Some neurons encoded envelope ITDs with carrier frequency mismatches as great as several octaves. The synchronization strength was typically nonmonotonically related to intensity. Psychophysical data showed that human listeners could also make use of binaural envelope cues for carrier mismatches of up to 2–3 octaves. Thus, the physiological and psychophysical data were broadly consistent, and suggest that bilateral cochlear implants should provide information sufficient to detect envelope ITDs even in the face of bilateral mismatch in the neural populations responding to stimulation. However, the strongly nonmonotonic synchronization to envelope ITDs suggests that the limited dynamic range with electrical stimulation may be an important consideration for ITD encoding.
doi:10.1007/s10162-007-0088-5
PMCID: PMC2538436  PMID: 17657543
sound localization; binaural; inferior colliculus; psychophysics
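
A binaural envelope beat of the kind described above can be produced by amplitude-modulating a different carrier at each ear with modulation frequencies that differ by 1 Hz, so the interaural envelope delay cycles once per second. A minimal sketch with hypothetical carrier and modulation values:

    import numpy as np

    def am_tone(fc_hz, fm_hz, dur_s, fs, depth=1.0):
        # Sinusoidally amplitude-modulated tone with the given modulation depth.
        t = np.arange(int(dur_s * fs)) / fs
        return (1.0 + depth * np.sin(2 * np.pi * fm_hz * t)) * np.sin(2 * np.pi * fc_hz * t)

    fs = 100_000
    # Hypothetical two-octave carrier mismatch; modulation frequencies differ by 1 Hz.
    left = am_tone(fc_hz=1_000, fm_hz=40.0, dur_s=2.0, fs=fs)
    right = am_tone(fc_hz=4_000, fm_hz=41.0, dur_s=2.0, fs=fs)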
12.  Hair cell heterogeneity and ultrasonic hearing: recent advances in understanding fish hearing
The past decade has seen a wealth of new data on the auditory capabilities and mechanisms of fishes. We now have a significantly better appreciation of the structure and function of the auditory system in fishes with regard to their peripheral and central anatomy, physiology, behaviour, sound source localization and hearing capabilities. This paper deals with two of the newest of these findings, hair cell heterogeneity and the detection of ultrasound. As a result of this recent work, we now know that fishes have several different types of sensory hair cells in both the ear and lateral line, and there is a growing body of evidence to suggest that these hair cell types arose very early in the evolution of the octavolateralis system. There is also some evidence to suggest that the differences in the hair cell types have functional implications for the way the ear and lateral line of fishes detect and process stimuli. Behavioural studies have shown that, whereas most fishes can detect sounds only up to 1-3 kHz, several species of the genus Alosa (Clupeiformes, i.e. herrings and their relatives) can detect sounds up to 180 kHz (or even higher). It is suggested that this capability evolved so that these fishes can detect one of their major predators, echolocating dolphins. The mechanism for ultrasound detection remains obscure, though it is hypothesized that the highly derived utricle of the inner ear in these species is involved.
PMCID: PMC1692857  PMID: 11079414
13.  Improved Horizontal Directional Hearing in Bone Conduction Device Users with Acquired Unilateral Conductive Hearing Loss 
We examined horizontal directional hearing in patients with acquired severe unilateral conductive hearing loss (UCHL). All patients (n = 12) had been fitted with a bone conduction device (BCD) to restore bilateral hearing. The patients were tested in the unaided (monaural) and aided (binaural) hearing condition. Five listeners without hearing loss were tested as a control group while listening with a monaural plug and earmuff, or with both ears (binaural). We randomly varied stimulus presentation levels to assess whether listeners relied on the acoustic head-shadow effect (HSE) for horizontal (azimuth) localization. Moreover, to prevent sound localization on the basis of monaural spectral shape cues from head and pinna, subjects were exposed to narrow band (1/3 octave) noises. We demonstrate that the BCD significantly improved sound localization in 8/12 of the UCHL patients. Interestingly, under monaural hearing (BCD off), we observed fairly good unaided azimuth localization performance in 4/12 of the patients. Our multiple regression analysis shows that all patients relied on the ambiguous HSE for localization. In contrast, acutely plugged control listeners did not employ the HSE. Our data confirm and further extend results of recent studies on the use of sound localization cues in chronic and acute monaural listening.
doi:10.1007/s10162-010-0235-2
PMCID: PMC3015026  PMID: 20838845
bone conduction; head movements; head-shadow effect; perceptual learning; sound localization
15.  Corticofugal Modulation of Initial Neural Processing of Sound Information from the Ipsilateral Ear in the Mouse 
PLoS ONE  2010;5(11):e14038.
Background
Cortical neurons implement highly frequency-specific modulation of subcortical nuclei, including the cochlear nucleus. Anatomical studies show that corticofugal fibers terminating in the auditory thalamus and midbrain are mostly ipsilateral. In contrast, corticofugal fibers terminating in the cochlear nucleus are bilateral, which fits the requirements of binaural hearing. This leads to our hypothesis that corticofugal modulation of the initial neural processing of sound information from the contralateral and ipsilateral ears could be equivalent, or coordinated, at this first level of sound processing.
Methodology/Principal Findings
Using focal electrical stimulation of the auditory cortex and single-unit recording, this study examined corticofugal modulation of the ipsilateral cochlear nucleus. The same methods and procedures as in our previous study of corticofugal modulation of the contralateral cochlear nucleus were employed to allow direct comparison. We found that focal electrical stimulation of cortical neurons induced substantial changes in the response magnitude, response latency and receptive field of ipsilateral cochlear nucleus neurons. Cortical stimulation facilitated auditory responses and shortened the response latencies of physiologically matched neurons, whereas it inhibited auditory responses and lengthened the response latencies of unmatched neurons. Finally, cortical stimulation shifted the best frequencies of cochlear nucleus neurons towards those of the stimulated cortical neurons.
Conclusion
Our data suggest that cortical neurons enable highly frequency-specific remodelling of sound information processing in the ipsilateral cochlear nucleus, in the same manner as in the contralateral cochlear nucleus.
doi:10.1371/journal.pone.0014038
PMCID: PMC2987806  PMID: 21124980
16.  Control of responding by the location of sound: role of binaural cues. 
In auditory localization experiments, where the subject observes from a fixed position, both relative sound intensity and arrival time at the two ears determine the extent of localization performance. The present experiment investigated the role of binaural cues in a different context, the sound-position discrimination task, where the subject is free to move and interact with the sound source. The role of binaural cues was investigated in rats by producing an interaural imbalance through unilateral removal of the middle auditory ossicle (incus) prior to discrimination training. Discrete trial go-right/go-left sound-position discrimination of unilaterally incudectomised rats was then compared with that of normal rats and of rats with the incus of both sides removed. While bilateral incus removal affected binaural intensity and arrival times, the symmetry of sound input between the two ears was preserved. Percentage of correct responses and videotaped observations of sound approach and exploration showed that the unilateral rats failed to localize the sounding speaker. Rats with symmetrical binaural input (normal and bilaterally incudectomised rats) accurately discriminated sound position for the duration of the experiment. Previously reported monaural localization based upon following the intensity gradient to the sound source was not observed in the unilaterally incudectomised rats of the present experiment. It is concluded that sound-position discrimination depends upon the use of binaural cues.
doi:10.1901/jeab.1985.43-315
PMCID: PMC1348144  PMID: 4020321
17.  Interaural time discrimination of envelopes carried on high-frequency tones as a function of level and interaural carrier mismatch 
Ear and Hearing  2008;29(5):674-683.
Objectives
The present study investigated interaural time discrimination for binaurally mismatched carrier frequencies in listeners with normal hearing. One goal of the investigation was to gain insights into binaural hearing in patients with bilateral cochlear implants, where the coding of interaural time differences may be limited by mismatches in the neural populations receiving stimulation on each side.
Design
Temporal envelopes were manipulated to present low frequency timing cues to high frequency auditory channels. Carrier frequencies near 4 kHz were amplitude modulated at 128 Hz via multiplication with a half-wave rectified sinusoid, and that modulation was either in-phase across ears or delayed to one ear. Detection thresholds for non-zero interaural time differences were measured for a range of stimulus levels and a range of carrier frequency mismatches. Data were also collected under conditions designed to limit cues based on stimulus spectral spread, including masking and truncation of sidebands associated with modulation.
Results
Listeners with normal hearing can detect interaural time differences in the face of substantial mismatches in carrier frequency across ears.
Conclusions
The processing of interaural time differences in listeners with normal hearing is likely based on spread of excitation into binaurally matched auditory channels. Sensitivity to interaural time differences in listeners with cochlear implants may depend upon spread of current that results in the stimulation of neural populations that share common tonotopic space bilaterally.
doi:10.1097/AUD.0b013e3181775e03
PMCID: PMC2648125  PMID: 18596646
binaural hearing; cochlear implant; localization; ITD
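
The envelope manipulation described in the Design section above (a carrier near 4 kHz multiplied by a half-wave rectified 128-Hz sinusoid, with the modulation delayed at one ear and the carriers optionally mismatched across ears) can be sketched as below; the sample rate, delay value, and carrier mismatch are illustrative assumptions:

    import numpy as np

    def modulated_carrier(fc_hz, fm_hz, dur_s, fs, env_delay_s=0.0):
        # Carrier multiplied by a half-wave rectified sinusoidal modulator;
        # a nonzero env_delay_s delays the envelope (not the carrier) at this ear.
        t = np.arange(int(dur_s * fs)) / fs
        modulator = np.maximum(np.sin(2 * np.pi * fm_hz * (t - env_delay_s)), 0.0)
        return modulator * np.sin(2 * np.pi * fc_hz * t)

    fs = 48_000
    # Hypothetical 200-us interaural envelope delay; right carrier mismatched
    # from the 4-kHz left carrier by half an octave.
    left = modulated_carrier(4_000, 128, 0.5, fs, env_delay_s=0.0)
    right = modulated_carrier(4_000 * 2 ** 0.5, 128, 0.5, fs, env_delay_s=200e-6)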
18.  Systematic representation of sound locations in the primary auditory cortex 
The primary auditory cortex (A1) is involved in sound localization. A consistent observation in A1 is a clustered representation of binaural properties, but how spatial tuning varies within binaural clusters is unknown. Here, this issue was addressed in A1 of the pallid bat, a species that relies on passive hearing (as opposed to echolocation) to localize prey. Evidence is presented for systematic representations of sound azimuth within two binaural clusters in the pallid bat A1: the binaural inhibition (EI) and peaked (P) binaural interaction clusters. The representation is not a ‘point-to-point’ space map as seen in the superior colliculus, but in the form of a systematic increase in the area of activated cortex as azimuth changes from ipsilateral to contralateral locations. The underlying substrate in the EI cluster is a systematic representation of the medial boundary of azimuth receptive fields. The P cluster is activated mostly for sounds near the midline, providing a spatial acoustic fovea. Activity in the P cluster falls off systematically as the sound is moved to more lateral locations. Sensitivity to interaural intensity differences (IID) predicts azimuth tuning in the vast majority of neurons. Azimuth receptive field properties are relatively stable across intensity over a moderate range (20–40 dB above threshold) of intensities. This suggests the maps will be similar across the intensities tested. These results challenge the current view that no systematic representation of azimuth is present in A1 and show that such representations are present locally within individual binaural clusters.
doi:10.1523/JNEUROSCI.1937-11.2011
PMCID: PMC3219787  PMID: 21957247
19.  Human Neuromagnetic Steady-State Responses to Amplitude-Modulated Tones, Speech, and Music 
Ear and Hearing  2014;35(4):461-467.
Objectives:
Auditory steady-state responses that can be elicited by various periodic sounds inform about subcortical and early cortical auditory processing. Steady-state responses to amplitude-modulated pure tones have been used to scrutinize binaural interaction by frequency-tagging the two ears’ inputs at different frequencies. Unlike pure tones, speech and music are physically very complex, as they include many frequency components, pauses, and large temporal variations. To examine the utility of magnetoencephalographic (MEG) steady-state fields (SSFs) in the study of early cortical processing of complex natural sounds, the authors tested the extent to which amplitude-modulated speech and music can elicit reliable SSFs.
Design:
MEG responses were recorded to 90-s-long binaural tones, speech, and music, amplitude-modulated at 41.1 Hz at four different depths (25, 50, 75, and 100%). The subjects were 11 healthy, normal-hearing adults. MEG signals were averaged in phase with the modulation frequency, and the sources of the resulting SSFs were modeled by current dipoles. After the MEG recording, intelligibility of the speech, musical quality of the music stimuli, naturalness of music and speech stimuli, and the perceived deterioration caused by the modulation were evaluated on visual analog scales.
Results:
The perceived quality of the stimuli decreased as a function of increasing modulation depth, more strongly for music than speech; yet, all subjects considered the speech intelligible even at the 100% modulation. SSFs were the strongest to tones and the weakest to speech stimuli; the amplitudes increased with increasing modulation depth for all stimuli. SSFs to tones were reliably detectable at all modulation depths (in all subjects in the right hemisphere, in 9 subjects in the left hemisphere) and to music stimuli at 50 to 100% depths, whereas speech usually elicited clear SSFs only at 100% depth.
The hemispheric balance of SSFs was toward the right hemisphere for tones and speech, whereas SSFs to music showed no lateralization. In addition, the right lateralization of SSFs to the speech stimuli decreased with decreasing modulation depth.
Conclusions:
The results showed that SSFs can be reliably measured to amplitude-modulated natural sounds, with slightly different hemispheric lateralization for different carrier sounds. With speech stimuli, modulation at 100% depth is required, whereas for music the 75% or even 50% modulation depths provide a reasonable compromise between the signal-to-noise ratio of SSFs and sound quality or perceptual requirements. SSF recordings thus seem feasible for assessing the early cortical processing of natural sounds.
Auditory steady-state responses to pure tones have been used to study subcortical and cortical processing, to scrutinize binaural interaction, and to evaluate hearing in an objective way. In daily life, however, we encounter sounds that are physically much more complex, such as music and speech. This study demonstrates that not only pure tones but also amplitude-modulated speech and music, both perceived to have tolerable sound quality, can elicit reliable magnetoencephalographic steady-state fields. The strengths and hemispheric lateralization of the responses differed between the carrier sounds. The results indicate that steady-state responses could be used to study the early cortical processing of natural sounds.
doi:10.1097/AUD.0000000000000033
PMCID: PMC4072443  PMID: 24603544
Amplitude modulation; Auditory; Frequency tagging; Magnetoencephalography; Natural stimuli
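
Amplitude modulation at a given depth and averaging in phase with the modulation frequency, as used above, could be sketched as follows; the depth convention (Amax - Amin)/(Amax + Amin) is standard, but the epoching scheme is an illustrative assumption rather than the authors' analysis pipeline:

    import numpy as np

    def amplitude_modulate(x, fs, fm_hz=41.1, depth=0.5):
        # Sinusoidal AM at modulation depth `depth` (0..1), where
        # depth = (Amax - Amin) / (Amax + Amin).
        t = np.arange(len(x)) / fs
        return (1.0 + depth * np.cos(2 * np.pi * fm_hz * t)) * x

    def phase_locked_average(channel, fs, fm_hz=41.1, n_cycles_per_epoch=10):
        # Average a recorded channel in epochs spanning a whole number of
        # modulation cycles, so the steady-state response adds in phase.
        samples = int(round(n_cycles_per_epoch * fs / fm_hz))
        n_epochs = len(channel) // samples
        return channel[:n_epochs * samples].reshape(n_epochs, samples).mean(axis=0)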
20.  Sound pressure transformations by the head and pinnae of the adult Chinchilla (Chinchilla lanigera) 
Hearing Research  2010;272(1-2):135-147.
There are three main cues to sound location: the interaural differences in time (ITD) and level (ILD) as well as the monaural spectral shape cues. These cues are generated by the spatial- and frequency-dependent filtering of propagating sound waves by the head and external ears. Although the chinchilla has been used for decades to study the anatomy, physiology, and psychophysics of audition, including binaural and spatial hearing, little is actually known about the sound pressure transformations by the head and pinnae and the resulting sound localization cues available to them. Here, we measured the directional transfer functions (DTFs), the directional components of the head-related transfer functions, for 9 adult chinchillas. The resulting localization cues were computed from the DTFs. In the frontal hemisphere, spectral notch cues were present for frequencies from ~6–18 kHz. In general, the frequency corresponding to the notch increased with increases in source elevation as well as in azimuth towards the ipsilateral ear. The ILDs demonstrated a strong correlation with source azimuth and frequency. The maximum ILDs were < 10 dB for frequencies < 5 kHz, and ranged from 10–30 dB for the frequencies > 5 kHz. The maximum ITDs were dependent on frequency, yielding 236 μs at 4 kHz and 336 μs at 250 Hz. Removal of the pinnae eliminated the spectral notch cues, reduced the acoustic gain and the ILDs, altered the acoustic axis, and reduced the ITDs.
doi:10.1016/j.heares.2010.10.007
PMCID: PMC3039070  PMID: 20971180
sound localization; interaural time difference; interaural level difference; head related transfer function; directional transfer functions
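
The binaural cues reported above follow from the left- and right-ear directional transfer functions. A minimal sketch of how ILD and ITD might be computed from a pair of measured impulse responses (the band-averaging and cross-correlation choices are assumptions, not the authors' exact procedure):

    import numpy as np

    def ild_db(h_left, h_right, fs, band_hz):
        # Interaural level difference in a frequency band (positive = left louder),
        # from the magnitude spectra of the two ears' impulse responses.
        freqs = np.fft.rfftfreq(len(h_left), 1 / fs)
        hl = np.abs(np.fft.rfft(h_left))
        hr = np.abs(np.fft.rfft(h_right))
        band = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
        return 20 * np.log10(hl[band].mean() / hr[band].mean())

    def itd_us(h_left, h_right, fs):
        # Interaural time difference from the lag of the peak of the
        # cross-correlation (positive = left-ear response lags the right).
        xc = np.correlate(h_left, h_right, mode="full")
        lag = int(np.argmax(np.abs(xc))) - (len(h_right) - 1)
        return 1e6 * lag / fs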
21.  Combination of Spectral and Binaurally Created Harmonics in a Common Central Pitch Processor 
A fundamental attribute of human hearing is the ability to extract a residue pitch from harmonic complex sounds such as those produced by musical instruments and the human voice. However, the neural mechanisms that underlie this processing are unclear, as are the locations of these mechanisms in the auditory pathway. The ability to extract a residue pitch corresponding to the fundamental frequency from individual harmonics, even when the fundamental component is absent, has been demonstrated separately for conventional pitches and for Huggins pitch (HP), a stimulus without monaural pitch information. HP is created by presenting the same wideband noise to both ears, except for a narrowband frequency region where the noise is decorrelated across the two ears. The present study investigated whether residue pitch can be derived by combining a component derived solely from binaural interaction (HP) with a spectral component for which no binaural processing is required. Fifteen listeners indicated which of two sequentially presented sounds was higher in pitch. Each sound consisted of two “harmonics,” which independently could be either a spectral or a HP component. Component frequencies were chosen such that the relative pitch judgement revealed whether a residue pitch was heard or not. The results showed that listeners were equally likely to perceive a residue pitch when one component was dichotic and the other was spectral as when the components were both spectral or both dichotic. This suggests that there exists a single mechanism for the derivation of residue pitch from binaurally created components and from spectral components, and that this mechanism operates at or after the level of the dorsal nucleus of the lateral lemniscus (brainstem) or the inferior colliculus (midbrain), which receive inputs from the medial superior olive where temporal information from the two ears is first combined.
doi:10.1007/s10162-010-0250-3
PMCID: PMC3046332  PMID: 21086147
residue pitch; dichotic pitch; diotic pitch; Huggins pitch
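
A Huggins-pitch stimulus as described above presents identical wideband noise to the two ears except within a narrow band that is decorrelated across ears; in one common variant this is done by inverting the phase of that band at one ear. The FFT-based construction below is an illustrative sketch with hypothetical parameter values:

    import numpy as np

    def huggins_pitch_noise(fs, dur_s, band_hz, rng=None):
        # Same wideband noise at both ears, except that components inside
        # band_hz are phase-inverted at one ear, creating an interaurally
        # decorrelated band that is heard as a faint pitch.
        rng = np.random.default_rng() if rng is None else rng
        n = int(dur_s * fs)
        noise = rng.standard_normal(n)
        spec_right = np.fft.rfft(noise)
        freqs = np.fft.rfftfreq(n, 1 / fs)
        band = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
        spec_right[band] *= -1              # 180-degree phase shift inside the band
        return noise, np.fft.irfft(spec_right, n)

    # Hypothetical usage: a decorrelated band around 600 Hz evokes a pitch near 600 Hz.
    left, right = huggins_pitch_noise(fs=44_100, dur_s=1.0, band_hz=(580, 620))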
23.  Mutation in the Kv3.3 Voltage-Gated Potassium Channel Causing Spinocerebellar Ataxia 13 Disrupts Sound-Localization Mechanisms 
PLoS ONE  2013;8(10):e76749.
Normal sound localization requires precise comparisons of sound timing and pressure levels between the two ears. The primary localization cues are interaural time differences, ITD, and interaural level differences, ILD. Voltage-gated potassium channels, including Kv3.3, are highly expressed in the auditory brainstem and are thought to underlie the exquisite temporal precision and rapid spike rates that characterize brainstem binaural pathways. An autosomal dominant mutation in the gene encoding Kv3.3 has been demonstrated in a large Filipino kindred manifesting as spinocerebellar ataxia type 13 (SCA13). This kindred provides a rare opportunity to test in vivo the importance of a specific channel subunit for human hearing. Here, we demonstrate psychophysically that individuals with the mutant allele exhibit profound deficits in both ITD and ILD sensitivity, despite showing no obvious impairment in pure-tone sensitivity with either ear. Surprisingly, several individuals exhibited the auditory deficits even though they were pre-symptomatic for SCA13. We would expect that impairments of binaural processing as great as those observed in this family would result in prominent deficits in localization of sound sources and in loss of the "spatial release from masking" that aids in understanding speech in the presence of competing sounds.
doi:10.1371/journal.pone.0076749
PMCID: PMC3792041  PMID: 24116147
24.  Testing a Method for Quantifying the Output of Implantable Middle Ear Hearing Devices 
Audiology & Neuro-Otology  2007;12(4):265-276.
This report describes tests of a standard practice for quantifying the performance of implantable middle ear hearing devices (also known as implantable hearing aids). The standard and these tests were initiated by the Food and Drug Administration of the United States Government. The tests involved measurements on two hearing devices, one commercially available and the other home built, that were implanted into ears removed from human cadavers. The tests were conducted to investigate the utility of the practice and its outcome measures: the equivalent ear canal sound pressure transfer function that relates electrically driven middle ear velocities to the equivalent sound pressure needed to produce those velocities, and the maximum effective ear canal sound pressure. The practice calls for measurements in cadaveric ears in order to account for the varied anatomy and function of different human middle ears.
doi:10.1159/000101474
PMCID: PMC2596735  PMID: 17406105
Implantable hearing aids; Middle ear transfer function; Sound-induced stapes velocity
25.  Phoenix Is Required for Mechanosensory Hair Cell Regeneration in the Zebrafish Lateral Line 
PLoS Genetics  2009;5(4):e1000455.
In humans, the absence or irreversible loss of hair cells, the sensory mechanoreceptors in the cochlea, accounts for a large majority of acquired and congenital hearing disorders. In the auditory and vestibular neuroepithelia of the inner ear, hair cells are accompanied by another cell type called supporting cells. This second cell population has been described as having stem cell-like properties, allowing efficient hair cell replacement during embryonic and larval/fetal development of all vertebrates. However, mammals lose their regenerative capacity in most inner ear neuroepithelia in postnatal life. Remarkably, reptiles, birds, amphibians, and fish are different in that they can regenerate hair cells throughout their lifespan. The lateral line in amphibians and in fish is an additional sensory organ, which is used to detect water movements and is comprised of neuroepithelial patches, called neuromasts. These are similar in ultra-structure to the inner ear's neuroepithelia and they share the expression of various molecular markers. We examined the regeneration process in hair cells of the lateral line of zebrafish larvae carrying a retroviral integration in a previously uncharacterized gene, phoenix (pho). Phoenix mutant larvae develop normally and display a morphologically intact lateral line. However, after ablation of hair cells with copper or neomycin, their regeneration in pho mutants is severely impaired. We show that proliferation in the supporting cells is strongly decreased after damage to hair cells and correlates with the reduction of newly formed hair cells in the regenerating phoenix mutant neuromasts. The retroviral integration linked to the phenotype is in a novel gene with no known homologs showing high expression in neuromast supporting cells. Whereas its role during early development of the lateral line remains to be addressed, in later larval stages phoenix defines a new class of proteins implicated in hair cell regeneration.
Author Summary
By screening for regeneration deficient zebrafish mutations, we identified a zebrafish mutant line deficient in a highly specific regeneration process, the renewal of hair cells in the lateral line. Although this organ is specific to fish and amphibians, it contains essentially the same mechanosensory cells (the hair cells) that function in the ear for sound and balance detection in all vertebrates. Mammals are unusual vertebrates in that they have lost the ability to regenerate functional hair cells after damage by sound or chemical exposure. All other vertebrates retain their ability to regenerate their hair cells after damage, but this process is not well understood at the molecular level. The retroviral insertion linked to the phoenix mutation is in a new gene family class that is specifically required for the supporting cells to enter into mitosis after hair cell damage. What is particularly unusual about this mutation is that it appears not to affect the normal development and differentiation pathways, but only seems to affect the cells' post-differentiation regeneration.
doi:10.1371/journal.pgen.1000455
PMCID: PMC2662414  PMID: 19381250
