The blood vessels that supply the inner ear form a barrier between the blood and the inner ear fluids to control the exchange of solutes, protein, and water. This barrier, called the blood-labyrinth barrier (BLB), is analogous to the blood-brain barrier (BBB), which plays a critical role in limiting the entry of inflammatory and infectious agents into the central nervous system. We have developed an in vivo method to assess the functional integrity of the BLB by injecting sodium fluorescein into the systemic circulation of mice and measuring the amount of fluorescein that enters perilymph in live animals. In these experiments, perilymph was collected from control and experimental mice in sequential samples taken from the posterior semicircular canal approximately 30 min after systemic fluorescein administration. Perilymph fluorescein concentrations in control mice were compared with perilymph fluorescein concentrations after lipopolysaccharide (LPS) treatment (1 mg/kg IP daily for 2 days). The concentration of perilymphatic fluorescein, normalized to serum fluorescein, was significantly higher in LPS-treated mice than in controls. In order to assess the contributions of perilymph and endolymph to our inner ear fluid samples, the sodium ion concentration of the sampled fluid was measured using ion-selective electrodes. The fluid sampled from the posterior semicircular canal had an average sodium concentration of 145 mM, consistent with perilymph. These experiments establish a novel, quantitative technique for assessing the functional integrity of the BLB and enable comparison of the BLB to the BBB.
LPS; cochlear fluid; blood-perilymph barrier; vascular permeability; inflammation
The mouse has become an important animal model for understanding cochlear function. Structures such as the tectorial membrane (TM) or hair cells have been altered by gene manipulation, and the resulting effects on cochlear function have been studied. To interpret those findings, the physical properties of the basilar membrane (BM) and TM in mice without gene mutation are of great importance. Using the hemicochlea of CBA/CaJ mice, we have demonstrated that both the TM and the BM exhibit a stiffness gradient along the cochlea. While a simple spring-mass resonator predicts the change in the characteristic frequency of the BM, it does not predict the frequency change along the TM. Plateau stiffness values of the TM were 0.6 ± 0.5, 0.2 ± 0.1, and 0.09 ± 0.09 N/m for the basal, middle, and upper turns, respectively. The BM plateau stiffness values were 3.7 ± 2.2, 1.2 ± 1.2, and 0.5 ± 0.5 N/m for the basal, middle, and upper turns, respectively. Estimates of the TM Young’s modulus (in kPa) were 24.3 ± 25.2 for the basal turns, 5.1 ± 4.5 for the middle turns, and 1.9 ± 1.6 for the apical turns. Young’s modulus determined at the BM pectinate zone was 76.8 ± 72, 23.9 ± 30.6, and 9.4 ± 6.2 kPa for the basal, middle, and apical turns, respectively. The reported stiffness values of the CBA/CaJ mouse TM and BM provide basic data on the physical properties of its organ of Corti.
cochlea; stiffness; basilar membrane; tectorial membrane; best frequency; mice
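The spring-mass prediction mentioned in the abstract above follows the standard resonance formula f = (1/2π)√(k/m). The sketch below uses the reported BM plateau stiffness values; the effective moving mass is a hypothetical placeholder, since the abstract does not report mass estimates.

```python
import math

def resonance_frequency(stiffness_n_per_m, mass_kg):
    """Characteristic frequency of a simple spring-mass resonator: f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2 * math.pi)

# Reported BM plateau stiffness (N/m) for the basal, middle, and upper turns.
bm_stiffness = {"basal": 3.7, "middle": 1.2, "upper": 0.5}

# Hypothetical effective moving mass (kg) -- NOT reported in the abstract.
effective_mass = 1e-9

for turn, k in bm_stiffness.items():
    f_khz = resonance_frequency(k, effective_mass) / 1000
    print(f"{turn}: {f_khz:.1f} kHz")
```

Because f scales with √k, the roughly sevenfold base-to-apex stiffness drop reported for the BM would by itself predict only about a 2.7-fold change in characteristic frequency under a constant-mass assumption.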
Ketamine is a dissociative anaesthetic and analgesic as well as an N-methyl-D-aspartate receptor antagonist, and it has been reported to influence otoacoustic emission amplitudes. In the present study, we assess the effect of ketamine–xylazine anaesthesia on high-frequency distortion-product otoacoustic emissions (DPOAE) in the bat species Carollia perspicillata, which serves as a model for sensitive high-frequency hearing. Cubic DPOAE provide information about the nonlinear gain of the cochlear amplifier, whereas quadratic DPOAE are used to assess the symmetry of cochlear amplification and potential efferent influence on the operating state of the cochlear amplifier. During anaesthesia, maximum cubic DPOAE levels can increase by up to 35 dB within a medium stimulus level range from 35 to 60 dB SPL. Close to the -10 dB SPL threshold, at stimulus levels below about 20-30 dB SPL, anaesthesia reduces cubic DPOAE amplitudes and raises cubic DPOAE thresholds. This makes DPOAE growth functions steeper. Additionally, ketamine increases the optimum stimulus frequency ratio, which is indicative of a reduction in cochlear tuning sharpness. The effect of ketamine on cubic DPOAE thresholds becomes stronger at higher stimulus frequencies and is highly significant for f2 frequencies above 40 kHz. Quadratic DPOAE levels are increased by up to 25 dB by ketamine at medium stimulus levels. In contrast to cubic DPOAEs, quadratic DPOAE threshold changes are variable, and there is no significant loss of sensitivity during anaesthesia. We discuss that the ketamine effects could be caused by modulation of middle ear function or by a release from ipsilateral efferent modulation that mainly affects the gain of cochlear amplification.
DPOAE; otoacoustic emissions; bat; anaesthesia; cochlear amplifier; high-frequency hearing
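For readers unfamiliar with the terminology above: the cubic and quadratic distortion products are conventionally measured at 2f1 − f2 and f2 − f1, respectively, for a two-tone stimulus with f1 < f2. A minimal helper (the example frequencies are arbitrary illustrations, not the study's stimulus set) shows where these components fall:

```python
def dpoae_frequencies(f1, f2):
    """Return frequencies (Hz) of the cubic (2*f1 - f2) and quadratic (f2 - f1)
    distortion products for a two-tone stimulus with 0 < f1 < f2."""
    if not 0 < f1 < f2:
        raise ValueError("expected 0 < f1 < f2")
    return {"cubic": 2 * f1 - f2, "quadratic": f2 - f1}

# Example: an f2 of 48 kHz (above the 40 kHz region highlighted in the
# abstract) with a hypothetical frequency ratio f2/f1 of 1.2.
print(dpoae_frequencies(40_000, 48_000))  # cubic at 32 kHz, quadratic at 8 kHz
```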
In natural environments, many sounds are amplitude-modulated. Amplitude modulation is thought to be a signal that aids auditory object formation. A previous study of the detection of signals in noise found that when tones or noise were amplitude-modulated, the noise was a less effective masker and detection thresholds for tones in noise were lowered. These results suggest that the detection of modulated signals in modulated noise would be enhanced. This paper describes the results of experiments investigating how detection is modified when both signal and noise are amplitude-modulated. Two monkeys (Macaca mulatta) were trained to detect amplitude-modulated tones in continuous, amplitude-modulated broadband noise. When the phase difference between otherwise similarly amplitude-modulated tones and noise was varied, detection thresholds were highest when the modulations were in phase and lowest when the modulations were anti-phase. When the depth of the modulation of tones or noise was varied, detection thresholds decreased if the modulations were anti-phase. When the modulations were in phase, increasing the depth of tone modulation increased tone detection thresholds, but increasing the depth of noise modulation did not affect tone detection thresholds. Changing the modulation frequency of the tone or noise caused changes in threshold that saturated at modulation frequencies above 20 Hz; thresholds increased when the tone and noise modulations were in phase and decreased when they were anti-phase. The relationship between reaction times and tone level was not modified by manipulations of the temporal variations in the signal or noise. The changes in behavioral threshold were consistent with a model in which the brain subtracts noise from signal. These results suggest that the parameters of the modulation of signals and maskers strongly influence detection in very predictable ways. These results are consistent with some results in humans and birds and form the baseline for neurophysiological studies of the mechanisms of detection in noise.
amplitude modulation; detection; behavior; comodulation
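The stimulus manipulations described above (modulation phase, depth, and frequency of tone and noise) follow the standard amplitude-modulation form s(t) = c(t)·(1 + m·sin(2πfm·t + φ)). The sketch below is an illustrative reconstruction of such stimuli, not the authors' stimulus-generation code:

```python
import math

def am_signal(carrier, t, mod_freq, mod_depth, mod_phase):
    """One sample of an amplitude-modulated signal:
    s(t) = carrier(t) * (1 + m * sin(2*pi*fm*t + phase))."""
    return carrier(t) * (1.0 + mod_depth * math.sin(2 * math.pi * mod_freq * t + mod_phase))

# In-phase vs. anti-phase modulation of a 1 kHz tone: 20 Hz modulation
# frequency (around the saturation point noted above), full depth (m = 1).
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
fs = 48_000  # sampling rate (Hz), an arbitrary choice for illustration
in_phase = [am_signal(tone, n / fs, 20, 1.0, 0.0) for n in range(fs)]
anti_phase = [am_signal(tone, n / fs, 20, 1.0, math.pi) for n in range(fs)]
```

Setting the noise modulator's phase to 0 or π relative to the tone modulator reproduces the in-phase and anti-phase conditions; varying `mod_depth` between 0 and 1 reproduces the depth manipulation.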
Noise reduction (NR) systems are commonplace in modern digital hearing aids. Although NR does not improve speech intelligibility, it helps the hearing-aid user by lowering noise annoyance, reducing cognitive load, and improving ease of listening. Previous psychophysical work has shown that NR does in fact improve the ability of normal-hearing (NH) listeners to discriminate the slow amplitude-modulation (AM) cues representative of those found in speech. The goal of this study was to assess whether this improvement of AM discrimination with NR can also be observed for hearing-impaired (HI) listeners. AM discrimination was measured at two audio frequencies, 500 Hz and 2 kHz, in a background noise with a signal-to-noise ratio of 12 dB. Discrimination was measured for ten HI and ten NH listeners with and without NR processing. The HI listeners had a moderate sensorineural hearing loss of about 50 dB HL at 2 kHz and normal hearing (≤20 dB HL) at 500 Hz. The results showed that most of the HI listeners tended to benefit from NR at 500 Hz but not at 2 kHz. However, statistical analyses showed that HI listeners did not benefit significantly from NR in either frequency region. In comparison, the NH listeners showed a significant benefit from NR at both frequencies. For each condition, the fidelity of AM transmission was quantified by a computational model of early auditory processing. The parameters of the model were adjusted separately for the two groups (NH and HI) of listeners. The AM discrimination performance of the HI group (with and without NR) was best captured by a model simulating the loss of the fast-acting amplitude compression applied by the normal cochlea. This suggests that the lack of benefit from NR for HI listeners results from loudness recruitment.
noise reduction; spectral subtraction; amplitude modulation; pattern discrimination; hearing impairment
Although differences in fundamental frequencies (F0s) between vowels are beneficial for their segregation and identification, listeners can still segregate and identify simultaneous vowels that have identical F0s, suggesting that additional cues are contributing, including formant frequency differences. The current perception and computational modeling study was designed to assess the contribution of F0 and formant difference cues for concurrent vowel identification. Younger adults with normal hearing listened to concurrent vowels over a wide range of levels (25–85 dB SPL) for conditions in which F0 was the same or different between vowel pairs. Vowel identification scores were poorer at the lowest and highest levels for each F0 condition, and F0 benefit was reduced at the lowest level as compared to higher levels. To understand the neural correlates underlying level-dependent changes in vowel identification, a computational auditory-nerve model was used to estimate formant and F0 difference cues under the same listening conditions. Template contrast and average localized synchronized rate predicted level-dependent changes in the strength of phase locking to F0s and formants of concurrent vowels, respectively. At lower levels, poorer F0 benefit may be attributed to poorer phase locking to both F0s, which resulted from lower firing rates of auditory-nerve fibers. At higher levels, poorer identification scores may relate to poorer phase locking to the second formant, due to synchrony capture by lower formants. These findings suggest that concurrent vowel identification may be partly influenced by level-dependent changes in phase locking of auditory-nerve fibers to F0s and formants of both vowels.
concurrent vowels; vowel identification; fundamental frequency cue; formant frequency cue; auditory-nerve fibers; phase locking
Noise-induced hearing loss (NIHL) is a prevalent health risk. Inbred mouse strains 129S6/SvEvTac (129S6) and MOLF/EiJ (MOLF) show strong NIHL resistance (NR) relative to CBA/CaJ (CBACa). In this study, we developed quantitative trait locus (QTL) maps for NR. We generated F1 animals by intercrossing (129S6 × CBACa) and (MOLF × CBACa). In each intercross, NR was recessive. N2 animals were produced by backcrossing F1s to their respective parental strains. The 232 N2-129S6 and 225 N2-MOLF progeny were evaluated for NR using the auditory brainstem response. In 129S6, five QTL were identified on chromosomes (Chr) 17, 18, 14, 11, and 4, referred to as loci nr1, nr2, nr3, nr4, and nr5, respectively. In MOLF, four QTL were found on Chr 4, 17, 6, and 12, referred to as nr7, nr8, nr9, and nr10, respectively. Given that NR QTL were discovered on Chr 4 and 17 in both the N2-129S6 and N2-MOLF crosses, we generated two consomic strains by separately transferring 129S6-derived Chr 4 and 17 into an otherwise CBACa background and a double-consomic strain by crossing the two strains. Phenotypic analysis of the consomic strains indicated that whole 129S6 Chr 4 contributes strongly to mid-frequency NR, while whole 129S6 Chr 17 contributes markedly to high-frequency NR. Therefore, we anticipated that the double-consomic strain containing Chr 4 and 17 would demonstrate NR across the mid- and high-frequency range. However, whole 129S6 Chr 17 masks the expression of mid-frequency NR from whole 129S6 Chr 4. To further dissect NR on 129S6 Chr 4 and 17, CBACa.129S6 congenic strains were generated for each chromosome. Phenotypic analysis of the Chr 17 CBACa.129S6 congenic strains further defined the NR region on proximal Chr 17, uncovered another NR locus (nr6) on distal Chr 17, and revealed an epistatic interaction between proximal and distal 129S6 Chr 17.
QTL; 129S6; MOLF; noise-induced hearing loss
The literature often refers to a 300 pulses per second (pps) limit for cochlear implant (CI) electrical stimulation, above which pulse rate discrimination deteriorates or above which rate pitch is not perceived to increase. The present study investigated the effect on pulse rate difference limens (PRDLs) of using compound stimuli in which identical pulse trains were applied to multiple electrodes across the length of the electrode array and compared the results to those of single-electrode stimuli. PRDLs of seven CI users were determined in two stimulus pulse phase conditions, one in which the phase delays between pulses on different electrodes were minimised (burst mode) and a second in which they were maximised (spread mode). PRDLs were measured at base rates of 100 to 600 pps in 100 pps intervals, using compound stimuli on one, two, five, nine and 18 electrodes. As smaller PRDLs were expected to reflect improved rate pitch perception, 18-electrode spread mode stimuli were also included in a pitch ranking task. PRDLs improved markedly when multi-electrode compound stimuli were used, with average spread mode PRDLs across listeners between 6 and 8 % of the base rate in the whole range tested (i.e. up to 600 pps). PRDLs continued to improve as more electrodes were included, up to at least nine electrodes in the compound stimulus. Stimulus pulse phase had a significant influence on the results, with PRDLs being smaller in spread mode. Results indicate that pulse rate discrimination may be manipulated with stimulus parameter choice, so that the previously observed deterioration of PRDLs at 300 pps probably does not reflect a fundamental limitation on rate discrimination. However, rate pitch perception did not improve in the conditions that resulted in smaller PRDLs. This may indicate that listeners used cues other than pitch to perform the rate discrimination task, or it may reflect limitations in the electrically evoked neural excitation patterns presented to a rate pitch extraction mechanism.
cochlear implants; rate pitch; rate discrimination thresholds; multi-electrode stimuli; across-channel integration
Although localization of sound in elevation is believed to depend on spectral cues, it has been shown with human listeners that the temporal features of sound can also greatly affect localization performance. Of particular interest is a phenomenon known as the negative level effect, which describes the deterioration of localization ability in elevation with increasing sound level and is observed only with impulsive or short-duration sound. The present study uses the gaze positions of domestic cats as measures of perceived locations of sound targets varying in azimuth and elevation. The effects of sound level on localization in terms of accuracy, precision, and response latency were tested for sound with different temporal features, such as a click train, a single click, a continuous sound that had the same frequency spectrum of the click train, and speech segments. In agreement with previous human studies, negative level effects were only observed with click-like stimuli and only in elevation. In fact, localization of speech sounds in elevation benefited significantly when the sound level increased. Our findings indicate that the temporal continuity of a sound can affect the frequency analysis performed by the auditory system, and the variation in the frequency spectrum contained in speech sound does not interfere much with the spectral coding for its location in elevation.
localization; spectral cues; clicks; speech; level
The afferent encoding of vestibular stimuli depends on molecular mechanisms that regulate membrane potential, concentration gradients, and ion and neurotransmitter clearance at both afferent and efferent relays. In many cell types, the Na,K-ATPase (NKA) is essential for establishing hyperpolarized membrane potentials and mediating both primary and secondary active transport required for ion and neurotransmitter clearance. In vestibular sensory epithelia, a calyx nerve ending envelopes each type I hair cell, isolating it over most of its surface from support cells and posing special challenges for ion and neurotransmitter clearance. We used immunofluorescence and high-resolution confocal microscopy to examine the cellular and subcellular patterns of NKAα subunit expression within the sensory epithelia of semicircular canals as well as an otolith organ (the utricle). Results were similar for both kinds of vestibular organ. The neuronal NKAα3 subunit was detected in all afferent endings—both the calyx afferent endings on type I hair cells and bouton afferent endings on type II hair cells—but was not detected in efferent terminals. In contrast to previous results in the cochlea, the NKAα1 subunit was detected in hair cells (both type I and type II) but not in supporting cells. The expression of distinct NKAα subunits by vestibular hair cells and their afferent endings may be needed to support and shape the high rates of glutamatergic neurotransmission and spike initiation at the unusual type I-calyx synapse.
vestibular hair cells; calyx terminals; Na,K-ATPase
The frequency extent over which fine structure is coded in the auditory nerve has been physiologically characterized in laboratory animals but is unknown in humans. Knowledge of the upper frequency limit in humans would inform the debate regarding the role of fine structure in human hearing. Of the presently available techniques, only the recording of mass neural potentials offers the promise to provide a physiological estimate of neural phase locking in humans. A challenge is to disambiguate neural phase locking from the receptor potentials. We studied mass potentials recorded on the cochlea and auditory nerve of cat and used several experimental manipulations to isolate the neural contribution to these potentials. We find a surprisingly large neural contribution in the signal recorded on the cochlear round window, and this contribution is in many aspects similar to the potential measured on the auditory nerve. The results suggest that recording of mass potentials through the middle ear is a promising approach to examine neural phase locking in humans.
phase locking; neurophonic; microphonic; auditory nerve; round window; temporal fine structure
Niemann–Pick disease, type C1 (NPC1) is a rare lysosomal lipidosis that is most often the result of biallelic mutations in NPC1, and is characterized by a fatal neurological degeneration. The pathophysiology is complex, and the natural history of the disease is poorly understood. Recent findings from patients with NPC1 and hearing loss suggest that multiple steps along the auditory pathway are affected. The current study was undertaken to determine the auditory phenotype in the Npc1nih mutant mouse model, to extend analyses to histologic evaluation of the inner ear, and to compare our findings to those reported from human patients. Auditory testing revealed a progressive high-frequency hearing loss in Npc1−/− mice that is present as early as postnatal day 20 (P20), well before the onset of overt neurological symptoms, with evidence of abnormalities involving the cochlea, auditory nerve, and brainstem auditory centers. Distortion product otoacoustic emission amplitude and auditory brainstem response latency data provided evidence for a disruption in maturational development of the auditory system in Npc1−/− mice. Anatomical study demonstrated accumulation of lysosomes in neurons, hair cells, and supporting cells of the inner ear in P30 Npc1−/− mice, as well as increased numbers of inclusion bodies, myelin figures, and swollen nerve endings in older (P50–P70) mutant animals. These findings add unique perspective to the pathophysiology of NPC disease and suggest that hearing loss is an early and sensitive marker of disease progression.
NPC; hearing; auditory maturation; auditory brainstem response (ABR)
The objective of this large population-based cross-sectional study was to evaluate the association between smoking, passive smoking, alcohol consumption, and hearing loss. The study sample was a subset of the UK Biobank Resource: 164,770 adults aged between 40 and 69 years who completed a speech-in-noise hearing test (the Digit Triplet Test). Hearing loss was defined as speech recognition in noise in the better ear poorer than 2 standard deviations below the mean with reference to young normally hearing listeners. In multiple logistic regression controlling for potential confounders, current smokers were more likely to have a hearing loss than non-smokers (odds ratio (OR) 1.15, 95 % confidence interval (CI) 1.09–1.21). Among non-smokers, those who reported passive exposure to tobacco smoke were more likely to have a hearing loss (OR 1.28, 95 % CI 1.21–1.35). For both smoking and passive smoking, there was evidence of a dose-response effect. Those who consumed alcohol were less likely to have a hearing loss than lifetime teetotalers. The association was similar across three levels of consumption by volume of alcohol (lightest 25 %, OR 0.61, 95 % CI 0.57–0.65; middle 50 %, OR 0.62, 95 % CI 0.58–0.66; heaviest 25 %, OR 0.65, 95 % CI 0.61–0.70). The results suggest that lifestyle factors may moderate the risk of hearing loss. Alcohol consumption was associated with a protective effect. Quitting or reducing smoking and avoiding passive exposure to tobacco smoke may also help prevent or moderate age-related hearing loss.
age-related hearing loss; presbycusis; smoking; passive smoking; alcohol
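The odds ratios above came from multiple logistic regression with confounder adjustment. As a simpler illustration of what an OR and its Wald 95 % confidence interval represent, the sketch below computes both from a raw 2 × 2 table; the counts are made up for illustration and are not the study's data:

```python
import math

def odds_ratio_ci(exp_cases, exp_noncases, unexp_cases, unexp_noncases, z=1.96):
    """Odds ratio and Wald 95 % CI from a 2x2 exposure-by-outcome table."""
    or_ = (exp_cases * unexp_noncases) / (exp_noncases * unexp_cases)
    # Standard error of log(OR): sqrt of summed reciprocal cell counts.
    se = math.sqrt(1 / exp_cases + 1 / exp_noncases + 1 / unexp_cases + 1 / unexp_noncases)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts: smokers (cases, non-cases) vs. non-smokers.
or_, (lo, hi) = odds_ratio_ci(1150, 8850, 10000, 90000)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

An OR below 1 (as reported for alcohol consumption) indicates lower odds of hearing loss in the exposed group; a CI excluding 1 indicates statistical significance at the 5 % level.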
In previous studies, 3D motion of the middle-ear ossicles in cat and human was explored, but models for hearing research have shifted in the last few decades to smaller mammals, and gerbil, in particular, has become a popular hearing model. In the present study, we have measured with an optical interferometer the 3D motion of the malleus and incus in anesthetized gerbil for sound of moderate intensity (90-dB sound pressure level) over a broad frequency range. To access the ossicles, the pars flaccida was removed, exposing the neck and head of the malleus and the incus from the malleus-incus joint to the plate of the lenticular process. Vibration measurements were done at six to eight points per ossicle while the angle of observation was varied over approximately 30° to enable calculation of the 3D rigid-body velocity components. These components were expressed in an intrinsic reference frame, with one axis along the anatomical suspension axis of the malleus-incus block and a second axis along the stapes piston direction. Another way of describing the motion that does not assume an a priori rotation axis is to calculate the instantaneous rotation axis (screw axis) of the malleus/incus motion. Only at frequencies below a few kilohertz did the screw axis have a maximum rotation in a direction close to that of the ligament axis. A slight slippage in the malleus-incus joint developed with increasing frequency. Our findings are useful in determining the sound transfer characteristics through the middle ear and serve as a reference for validation of mathematical middle-ear models. Finally, comparing our present results in gerbil with those of previously measured species (human and cat) exposes similarities and dissimilarities among them.
middle ear; ossicular chain; motion; measurement
Aminoglycoside antibiotics are highly effective agents against gram-negative bacterial infections, but they cause hearing loss and balance dysfunction as a result of their toxicity to hair cells of the cochlea and vestibular organs. While ototoxicity itself has been comprehensively studied, the contribution of the immune system, which controls the host response to infection, to antibiotic ototoxicity has not been examined. Recently, it has been shown that an inflammatory response is induced by hair cell injury. In this study, we found that lipopolysaccharide (LPS), an important component of bacterial endotoxin, when given in combination with kanamycin and furosemide, augmented the inflammatory response to hair cell injury and exacerbated hearing loss and hair cell injury. LPS injected into the peritoneum of experimental mice induced a brisk cochlear inflammatory response with recruitment of mononuclear phagocytes into the spiral ligament, even in the absence of ototoxic agents. While LPS alone did not affect hearing, animals that received LPS prior to ototoxic agents had worse hearing loss than those that did not receive LPS pretreatment. The poorer hearing outcome in LPS-treated mice did not correlate with changes in endocochlear potential. However, LPS-treated mice demonstrated an increased number of CCR2+ inflammatory monocytes in the inner ear when compared with mice treated with ototoxic agents alone. We conclude that LPS and its associated inflammatory response are harmful to the inner ear when coupled with ototoxic medications and that the immune system may contribute to the final hearing outcome in subjects treated with ototoxic agents.
LPS; monocyte; macrophage; cochlea; inflammation; ototoxicity
Interaural timing cues are important for sound source localization and for binaural unmasking of speech that is spatially separated from interfering sounds. Users of a cochlear implant (CI) with residual hearing in the non-implanted ear (bimodal listeners) can only make very limited use of interaural timing cues with their clinical devices. Previous studies showed that bimodal listeners can be sensitive to interaural time differences (ITDs) for simple single- and three-channel stimuli. The modulation enhancement strategy (MEnS) was developed to improve the ITD perception of bimodal listeners. It enhances temporal modulations on all stimulated electrodes, synchronously with modulations in the acoustic signal presented to the non-implanted ear, based on measurement of the amplitude peaks occurring at the rate of the fundamental frequency in voiced phonemes. In the first experiment, ITD detection thresholds were measured using the method of constant stimuli for five bimodal listeners for an artificial vowel, processed with either the advanced combination encoder (ACE) strategy or with MEnS. With MEnS, detection thresholds were significantly lower, and for four subjects well within the physically relevant range. In the second experiment, the extent of lateralization was measured in three subjects with both strategies, and ITD sensitivity was determined using an adaptive procedure. All subjects could lateralize sounds based on ITD and sensitivity was significantly better with MEnS than with ACE. The current results indicate that ITD cues can be provided to bimodal listeners with modified sound processing.
cochlear implant; hearing aid; localization; bimodal stimulation; electric acoustic stimulation; interaural time difference; MEnS; modulation enhancement strategy
Hearing thresholds and wave amplitudes measured using auditory brainstem responses (ABRs) to brief sounds are the predominantly used clinical measures to objectively assess auditory function. However, frequency-following responses (FFRs) to tonal carriers and to the modulation envelope (envelope-following responses or EFRs) to longer and spectro-temporally modulated stimuli are rapidly gaining prominence as a measure of complex sound processing in the brainstem and midbrain. In spite of numerous studies reporting changes in hearing thresholds, ABR wave amplitudes, and the FFRs and EFRs under neurodegenerative conditions, including aging, the relationships between these metrics are not clearly understood. In this study, the relationships between ABR thresholds, ABR wave amplitudes, and EFRs are explored in a rodent model of aging. ABRs to broadband click stimuli and EFRs to sinusoidally amplitude-modulated noise carriers were measured in young (3–6 months) and aged (22–25 months) Fischer-344 rats. ABR thresholds and amplitudes of the different waves as well as phase-locking amplitudes of EFRs were calculated. Age-related differences were observed in all these measures, primarily as increases in ABR thresholds and decreases in ABR wave amplitudes and EFR phase-locking capacity. There were no observed correlations between the ABR thresholds and the ABR wave amplitudes. Significant correlations between the EFR amplitudes and ABR wave amplitudes were observed across a range of modulation frequencies in the young. However, no such significant correlations were found in the aged. The aged click ABR amplitudes were found to be lower than would be predicted using a linear regression model of the young, suggesting altered gain mechanisms in the relationship between ABRs and FFRs with age. 
These results suggest that ABR thresholds, ABR wave amplitudes, and EFRs measure complementary aspects of overlapping neurophysiological processes and that the relationships between these measurements change asymmetrically with age. Hence, measuring all three metrics provides a more complete assessment of auditory function, especially under pathological conditions such as aging.
frequency-following response; colliculus; aging; brainstem; auditory; ASSR; AMFR; inhibition; central gain; synaptopathy; neuropathy; evoked potential
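The EFR phase-locking amplitude referred to above is, in essence, the magnitude of the recorded response at the stimulus modulation frequency. A single-bin discrete Fourier transform sketch (an assumed analysis form, not the authors' pipeline) illustrates the computation:

```python
import cmath
import math

def component_amplitude(signal, fs, freq):
    """Amplitude of the sinusoidal component at `freq` (Hz) in `signal`,
    sampled at `fs` Hz, via a single-frequency DFT projection."""
    n = len(signal)
    s = sum(x * cmath.exp(-2j * math.pi * freq * k / fs) for k, x in enumerate(signal))
    return 2 * abs(s) / n

# Synthetic "EFR": 1 s of a 100 Hz envelope-following component of amplitude 0.5.
fs = 8000
response = [0.5 * math.sin(2 * math.pi * 100 * k / fs) for k in range(fs)]
print(component_amplitude(response, fs, 100))  # recovers ~0.5
```

With an integer number of modulation cycles in the analysis window, the projection recovers the component amplitude exactly; in practice, averaging across epochs is used to suppress background EEG noise.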
Morphological studies of inner hair cell (IHC) synapses with cochlear nerve terminals have suggested that high- and low-threshold fibers differ in the sizes of their pre- and postsynaptic elements as well as the position of their synapses around the hair cell circumference. Here, using high-power confocal microscopy, we measured sizes and spatial positions of presynaptic ribbons, postsynaptic glutamate receptor (GluR) patches, and olivocochlear efferent terminals at eight locations along the cochlear spiral in normal and surgically de-efferented mice. Results confirm a prior report suggesting a modiolar > pillar gradient in ribbon size and a complementary pillar > modiolar gradient in GluR-patch size. We document a novel habenular < cuticular gradient in GluR patch size and a complementary cuticular < habenular gradient in olivocochlear innervation density. All spatial gradients in synaptic elements collapse after cochlear de-efferentation, suggesting a major role of olivocochlear efferents in maintaining functional heterogeneity among cochlear nerve fibers. Our spatial analysis also suggests that adjacent IHCs may contain a different synaptic mix, depending on whether their tilt in the radial plane places their synaptic pole closer to the pillar cells or to the modiolus.
auditory nerve; inner ear; synaptic ribbon; glutamate receptor
One of the major contributors to the response profile of neurons in the auditory pathways is the Ih current. Its properties, such as magnitude, activation, and kinetics, not only vary among different types of neurons (Banks et al., J Neurophysiol 70:1420–1432, 1993; Fu et al., J Neurophysiol 78:2235–2245, 1997; Bal and Oertel, J Neurophysiol 84:806–817, 2000; Cao and Oertel, J Neurophysiol 94:821–832, 2005; Rodrigues and Oertel, J Neurophysiol 95:76–87, 2006; Yi et al., J Neurophysiol 103:2532–2543, 2010), but they also display notable diversity within a single population of spiral ganglion neurons (Mo and Davis, J Neurophysiol 78:3019–3027, 1997), the first neural element in the auditory periphery. In this study, we found from somatic recordings that part of this heterogeneity can be attributed to variation along the tonotopic axis, because Ih in apical neurons has more positive half-activation voltages than Ih in basal neurons. Even within a single cochlear region, however, Ih current properties are not uniform. To account for this heterogeneity, we provide immunocytochemical evidence for variance in the intracellular density of the hyperpolarization-activated cyclic nucleotide-gated channel α-subunit 1 (HCN1), which mediates the Ih current. We also observed different combinations of HCN1 and HCN4 α-subunits from cell to cell. Lastly, based on the physiological data, we performed kinetic analysis of the Ih current and generated a mathematical model to better understand the effects of varied Ih on spiral ganglion function. Regardless of whether Ih currents are recorded at the nerve terminals (Yi et al., J Neurophysiol 103:2532–2543, 2010) or at the somata of spiral ganglion neurons, they have comparable mean half-activation voltages and induce similar resting membrane potential changes; thus, our model may also provide insights into the impact of Ih on synaptic physiology.
Ih; HCN; spiral ganglion neuron; tonotopic; heterogeneity
The plasma membrane Ca2+ ATPase 2 (PMCA2) is necessary for auditory transduction and serves as the primary Ca2+ extrusion mechanism in auditory stereocilia bundles. To date, studies examining PMCA2 in auditory function using mutant mice have focused on the phenotype of late adolescent and adult mice. Here, we focus on changes in PMCA2 during the maturation of auditory sensitivity by comparing auditory responses with RNA and protein expression levels in haploinsufficient PMCA2 and wild-type mice from P16 into adulthood. Auditory sensitivity in wild-type mice improves between P16 and 3 weeks of age, after which it remains stable through adolescence. In haploinsufficient mice, there is a frequency-dependent loss of sensitivity followed by a recovery of thresholds between P16 and adulthood. RNA analysis demonstrates that α-Atp2b2 transcript levels increase in both wild-type and heterozygous cochleae between P16 and 5 weeks. The increases reported for the α-Atp2b2 transcript type during this stage in development support the requisite usage of this transcript for mature auditory transduction. PMCA2 expression also increases in wild-type cochleae between P16 and 5 weeks, suggesting that this critical auditory protein may be involved in normal maturation of auditory sensitivity after the onset of hearing. We also characterize expression levels of two long noncoding RNA genes, Gm15082 (lnc82) and Gm15083 (lnc83), which are transcribed on the opposite strand in the 5′ region of Atp2b2, and propose that the lnc83 transcript may be involved in regulating α-Atp2b2 expression.
Atp2b2; hearing; development; cochlea; noncoding RNA
Multiple calcium-binding proteins (CaBPs) are expressed at high levels and in complementary patterns in the auditory pathways of birds, mammals, and other vertebrates, but whether specific members of the CaBP family can be used to identify neuronal subpopulations is unclear. We used double immunofluorescence labeling of calretinin (CR) in combination with neuronal markers to investigate the distribution of CR-expressing neurons in brainstem sections of the cochlear nucleus in the chicken (Gallus gallus domesticus). While CR was homogeneously expressed in cochlear nucleus magnocellularis, CR expression was highly heterogeneous in cochlear nucleus angularis (NA), a nucleus with diverse cell types analogous in function to neurons in the mammalian ventral cochlear nucleus. To quantify the distribution of CR in the total NA cell population, we used antibodies against neuronal nuclear protein (NeuN), a postmitotic neuron-specific nuclear marker. In NA neurons, NeuN label was variably localized to the cell nucleus and the cytoplasm, and the intensity of NeuN immunoreactivity was inversely correlated with the intensity of CR immunoreactivity. The percentage of CR+ neurons in NA increased from 31 % in embryonic (E)17/18 chicks, to 44 % around hatching (E21), to 51 % in postnatal day (P) 8 chicks. By P8, the distribution of CR+ neurons was uniform along both the rostrocaudal and tonotopic (dorsoventral) axes. Immunoreactivity for the voltage-gated potassium ion channel Kv1.1, used as a marker for physiological type, showed broad and heterogeneous postsynaptic expression in NA but did not correlate with CR expression. These results suggest that CR may define a subpopulation of neurons within nucleus angularis.
Electronic supplementary material
The online version of this article (doi:10.1007/s10162-014-0453-0) contains supplementary material, which is available to authorized users.
calretinin; NeuN; cochlear nucleus; avian; calcium binding protein; Kv1.1; potassium channel
The perceptual salience of a target tone presented in a multitone background is increased by the presentation of a precursor sound consisting of the multitone background alone. It has been proposed that this “enhancement” phenomenon results from an effective amplification of the neural response to the target tone. In this study, we tested this hypothesis in humans by comparing the auditory steady-state response (ASSR) to a target tone that was enhanced by a precursor sound with the ASSR to a target tone that was not enhanced. In order to record neural responses originating in the brainstem, the ASSR was elicited by amplitude modulating the target tone at a frequency close to 80 Hz. The results did not show evidence of an amplified neural response to enhanced tones. In a control condition, we measured the ASSR to a target tone that, instead of being perceptually enhanced by a precursor sound, was acoustically increased in level. This level increase matched the magnitude of enhancement estimated psychophysically with a forward-masking paradigm in a previous experimental phase. We found that the ASSR to the tone acoustically increased in level was significantly greater than the ASSR to the tone enhanced by the precursor sound. Overall, our results suggest that the enhancement effect cannot be explained by an amplified neural response at the level of the brainstem. However, an alternative possibility is that brainstem neurons with enhanced responses do not contribute to the scalp-recorded ASSR.
auditory enhancement; perceptual pop-out; ASSR; intensity coding