Previous cochlear implant studies using isolated electrical stimulus pulses in animal models have reported that intracochlear monopolar stimulus configurations elicit broad extents of neuronal activation within the central auditory system—much broader than the activation patterns produced by bipolar electrode pairs or acoustic tones. However, psychophysical and speech reception studies that use sustained pulse trains do not show clear performance differences for monopolar versus bipolar configurations. To test whether monopolar intracochlear stimulation can produce selective activation of the inferior colliculus, we measured activation widths along the tonotopic axis of the inferior colliculus for acoustic tones and 1,000-pulse/s electrical pulse trains in guinea pigs and cats. Electrical pulse trains were presented using an array of 6–12 stimulating electrodes distributed longitudinally on a space-filling silicone carrier positioned in the scala tympani of the cochlea. We found that for monopolar, bipolar, and acoustic stimuli, activation widths were significantly narrower for sustained responses than for the transient response to the stimulus onset. Furthermore, monopolar and bipolar stimuli elicited similar activation widths when compared at stimulus levels that produced similar peak spike rates. Surprisingly, we found that in guinea pigs, monopolar and bipolar stimuli produced narrower sustained activation than 60 dB sound pressure level acoustic tones when compared at stimulus levels that produced similar peak spike rates. Therefore, we conclude that intracochlear electrical stimulation using monopolar pulse trains can produce activation patterns that are at least as selective as bipolar or acoustic stimulation.
cochlear implant; auditory midbrain; neurophysiology; electrical stimulation
A component of a test sound consisting of simultaneous pure tones perceptually “pops out” if the test sound is preceded by a copy of itself with that component attenuated. Although this “enhancement” effect was initially thought to be purely monaural, it is also observable when the test sound and the precursor sound are presented contralaterally (i.e., to opposite ears). In experiment 1, we assessed the magnitude of ipsilateral and contralateral enhancement as a function of the time interval between the precursor and test sounds (10, 100, or 600 ms). The test sound, randomly transposed in frequency from trial to trial, was followed by a probe tone, either matched or mismatched in frequency to the test sound component which was the target of enhancement. Listeners' ability to discriminate matched probes from mismatched probes was taken as an index of enhancement magnitude. The results showed that enhancement decays more rapidly for ipsilateral than for contralateral precursors, suggesting that ipsilateral enhancement and contralateral enhancement stem from at least partly different sources. It could be hypothesized that, in experiment 1, contralateral precursors were effective only because they provided attentional cues about the target tone frequency. In experiment 2, this hypothesis was tested by presenting the probe tone before the precursor sound rather than after the test sound. Although the probe tone was then serving as a frequency cue, contralateral precursors were again found to produce enhancement. This indicates that contralateral enhancement cannot be explained by cuing alone and is a genuine sensory phenomenon.
auditory enhancement; perceptual pop-out; neural adaptation; spectral processing; intensity discrimination
Many non-mammalian vertebrates produce hair cells throughout life and recover from hearing and balance deficits through regeneration. In contrast, embryonic production of hair cells declines sharply in mammals where deficits from hair cell losses are typically permanent. Hair cell density estimates recently suggested that the vestibular organs of mice continue to add hair cells after birth, so we undertook comprehensive counting in murine utricles at different ages. The counts show that 51 % of the hair cells in adults arise during the 2 weeks after birth. Immature hair cells are most common near the neonatal macula’s peripheral edge and striola, where anti-Ki-67 labels cycling nuclei in zones that appear to contain niches for supporting-cell-like stem cells. In vivo lineage tracing in a novel reporter mouse where tamoxifen-inducible supporting cell-specific Cre expression switched tdTomato fluorescence to eGFP fluorescence showed that proteolipid-protein-1-expressing supporting cells are an important source of the new hair cells. To assess the contributions of postnatal cell divisions, we gave mice an injection of BrdU or EdU on the day of birth. The labels were restricted to supporting cells 1 day later, but by 12 days, 31 % of the labeled nuclei were in myosin-VIIA-positive hair cells. Thus, hair cell populations in neonatal mouse utricles grow appreciably through two processes: the progressive differentiation of cells generated before birth and the differentiation of new cells arising from divisions of progenitors that progress through S phase soon after birth. Subsequent declines in these processes coincide with maturational changes that appear unique to mammalian supporting cells.
ear; hair cell; regeneration; proliferation; vestibular; sensory
Echolocation is typically associated with bats and toothed whales. To date, only a few studies have investigated echolocation in humans. Moreover, these experiments were conducted with real objects in real rooms, a configuration in which features of both vocal emissions and perceptual cues are difficult to analyse and control. We investigated human sonar target-ranging in virtual echo-acoustic space, using a short-latency, real-time convolution engine. Subjects produced tongue clicks, which were picked up by a headset microphone, digitally delayed, convolved with individual head-related transfer functions and played back through earphones, thus simulating a reflecting surface at a specific range in front of the subject. In an adaptive 2-AFC paradigm, we measured the perceptual sensitivity to changes of the range for reference ranges of 1.7, 3.4 or 6.8 m. In a follow-up experiment, a second simulated surface at a lateral position and a fixed range was added, expected to act either as an interfering masker or a useful reference. The psychophysical data show that the subjects were well able to discriminate differences in the range of a frontal reflector. The range-discrimination thresholds were typically below 1 m and, for a reference range of 1.7 m, they were typically below 0.5 m. Performance improved when a second reflector was introduced at a lateral angle of 45°. A detailed analysis of the tongue clicks showed that the subjects typically produced short, broadband palatal clicks with durations between 3 and 15 ms, and sound levels between 60 and 108 dB. Typically, the tongue clicks had relatively high peak frequencies around 6 to 8 kHz.
Through the combination of highly controlled psychophysical experiments in virtual space and a detailed analysis of both the subjects’ performance and their emitted tongue clicks, the current experiments provide insights into both vocal motor and sensory processes recruited by humans that aim to explore their environment by echolocation.
echolocation; pitch; temporal processing; vocalizations; binaural hearing
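The core of such a virtual echo-acoustic simulation is the mapping from reflector range to echo delay: a surface at range r returns an echo after the round-trip travel time 2r/c. The sketch below is a simplified illustration under stated assumptions (fixed reflection loss, 1/r spreading, no head-related filtering), not the authors' convolution engine:

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def reflector_ir(range_m, fs, ref_gain_db=-6.0):
    """Impulse response of a single frontal reflector at range_m metres:
    a delayed, attenuated impulse. ref_gain_db is an assumed, fixed
    reflection loss; head-related filtering is omitted."""
    delay_s = 2.0 * range_m / C               # round-trip travel time
    n = int(round(delay_s * fs))              # delay in samples
    ir = np.zeros(n + 1)
    # assumed 1/r spreading loss plus the fixed reflection loss
    ir[n] = 10.0 ** (ref_gain_db / 20.0) / (2.0 * range_m)
    return ir

def simulate_echo(click, range_m, fs=48000):
    """Convolve an emitted click with the reflector IR to get the echo."""
    return np.convolve(click, reflector_ir(range_m, fs))
```

At 48 kHz, the 1.7 m reference range used in the study corresponds to a round-trip delay of roughly 9.9 ms, i.e., about 476 samples; range discrimination thus amounts to detecting small changes in this delay.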
The sensory systems of the New Zealand kiwi appear to be uniquely adapted to occupy a nocturnal ground-dwelling niche. In addition to well-developed tactile and olfactory systems, the auditory system shows specializations of the ear, which are maintained along the central nervous system. Here, we provide a detailed description of the auditory nerve, hair cells, and stereovillar bundle orientation of the hair cells in the North Island brown kiwi. The auditory nerve of the kiwi contained about 8,000 fibers. Using the number of hair cells and innervating nerve fibers to calculate an average innervation ratio showed that the afferent innervation in kiwi was denser than in most other birds examined. The average diameters of cochlear afferent axons in kiwi showed the typical gradient across the tonotopic axis. The kiwi basilar papilla showed a clear differentiation of tall and short hair cells. The proportion of short hair cells was higher than in the emu and likely reflects a bias towards higher frequencies represented on the kiwi basilar papilla. The orientation of the stereovillar bundles in the kiwi basilar papilla showed a pattern similar to that in most other birds but was most similar to that of the emu. Overall, many features of the auditory nerve, hair cells, and stereovillar bundle orientation in the kiwi are typical of most birds examined. Some features of the kiwi auditory system do, however, support a high-frequency specialization, specifically the innervation density and generally small size of hair-cell somata, whereas others showed the presumed ancestral condition similar to that found in the emu.
hair cell; basilar papilla; auditory nerve; Paleognathae
Age-related hearing loss, or presbyacusis, is a major public health problem that causes communication difficulties and is associated with diminished quality of life. Limited satisfaction with hearing aids, particularly in noisy listening conditions, suggests that central nervous system declines occur with presbyacusis and may limit the efficacy of interventions focused solely on improving audibility. This study of 49 older adults (M = 69.58, SD = 8.22 years; 29 female) was designed to examine the extent to which low and/or high frequency hearing loss was related to auditory cortex morphology. Low and high frequency hearing constructs were obtained from a factor analysis of audiograms from these older adults and 1,704 audiograms from an independent sample of older adults. Significant region of interest and voxel-wise gray matter volume associations were observed for the high frequency hearing construct. These effects occurred most robustly in a primary auditory cortex region (Te1.0) where there was also elevated cerebrospinal fluid with high frequency hearing loss, suggesting that auditory cortex atrophies with high frequency hearing loss. These results indicate that Te1.0 is particularly affected by high frequency hearing loss and may be a target for evaluating the efficacy of interventions for hearing loss.
presbyacusis; age-related hearing loss; auditory cortex
Otitis media with effusion (OME) is a pathologic condition of the middle ear that leads to a mild to moderate conductive hearing loss as a result of fluid in the middle ear. Recurring OME in children during the first few years of life has been shown to be associated with poor detection and recognition of sounds in noisy environments, hypothesized to result from altered sound localization cues. To explore this hypothesis, we simulated a middle ear effusion by filling the middle ear space of chinchillas with silicone oil of different viscosities and volumes to produce varying degrees of OME. While the effects of middle ear effusions on the interaural level difference (ILD) cue to location are known, little is known about whether and how middle ear effusions affect interaural time differences (ITDs). Cochlear microphonic amplitudes and phases were measured in response to sounds delivered from several locations in azimuth before and after filling the middle ear with fluid. Significant attenuations (20–40 dB) of sound were observed when the middle ear was filled with at least 1.0 ml of fluid with a viscosity of 3.5 Poise (P) or greater. As expected, ILDs were altered by ~30 dB. Additionally, ITDs were shifted by ~600 μs for low frequency stimuli (<4 kHz) due to a delay in the transmission of sound to the inner ear. The data show that in an experimental model of OME, ILDs and ITDs are shifted in the spatial direction of the ear without the experimental effusion.
otitis media with effusion; conductive hearing loss; sound localization; cochlear microphonic
Differentiating the relative importance of the various contributors to the audiometric loss (HLTOTAL) for a given hearing-impaired listener and frequency region is becoming critical as more specific treatments are developed. The aim of the present study was to assess the relative contributions of inner (IHC) and outer hair cell (OHC) dysfunction (HLIHC and HLOHC, respectively) to the audiometric loss of patients with mild to moderate cochlear hearing loss. It was assumed that HLTOTAL = HLOHC + HLIHC (all in decibels) and that HLOHC may be estimated as the reduction in maximum cochlear gain. It is argued that the latter may be safely estimated from compression threshold shifts of cochlear input/output (I/O) curves relative to normal-hearing references. I/O curves were inferred behaviorally using forward masking for 26 test frequencies in 18 hearing-impaired listeners. The data suggested that the audiometric loss for six of these 26 test frequencies was consistent with pure OHC dysfunction, one was probably consistent with pure IHC dysfunction, 13 were indicative of mixed IHC and OHC dysfunction, and five were uncertain (one more was excluded from the analysis). HLOHC and HLIHC contributed on average 60 and 40 %, respectively, to the audiometric loss, but variability was large across cases. Indeed, in some cases, HLIHC was up to 63 % of HLTOTAL, even for moderate losses. The repeatability of the results is assessed using Monte Carlo simulations and potential sources of bias are discussed.
hearing loss; cochlear nonlinearity; temporal masking curves; forward masking; basilar membrane; hearing aid; Monte Carlo simulations
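The assumed additive decomposition, with HLOHC taken as the reduction in maximum cochlear gain, can be expressed directly. In the sketch below, the clamp that keeps the OHC component from exceeding the total loss is an illustrative assumption, not a detail stated in the study:

```python
def hl_components(hl_total_db, gain_loss_db):
    """Split an audiometric loss into OHC and IHC components (in dB),
    assuming HLTOTAL = HLOHC + HLIHC and that HLOHC equals the reduction
    in maximum cochlear gain estimated from I/O curves.
    The clamp below is an assumption for illustration: the OHC
    component cannot exceed the total measured loss."""
    hl_ohc = min(gain_loss_db, hl_total_db)
    hl_ihc = hl_total_db - hl_ohc
    return hl_ohc, hl_ihc

# e.g., a 50 dB loss with a 30 dB gain reduction implies a 20 dB IHC
# contribution: hl_components(50.0, 30.0) -> (30.0, 20.0)
```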
The cochlear spiral ligament is a connective tissue that plays diverse roles in normal hearing. Spiral ligament fibrocytes are classified into functional sub-types that are proposed to carry out specialized roles in fluid homeostasis, the mediation of inflammatory responses to trauma, and the fine tuning of cochlear mechanics. We derived a secondary sub-culture from guinea pig spiral ligament, in which the cells expressed protein markers of type III or “tension” fibrocytes, including non-muscle myosin II (nmII), α-smooth muscle actin (αsma), vimentin, connexin43 (cx43), and aquaporin-1. The cells formed extensive stress fibers containing αsma, which were also associated intimately with nmII expression, and the cells displayed the mechanically contractile phenotype predicted by earlier modeling studies. cx43 immunofluorescence was evident within intercellular plaques, and the cells were coupled via dye-permeable gap junctions. Coupling was blocked by meclofenamic acid (MFA), an inhibitor of cx43-containing channels. The contraction of collagen lattice gels mediated by the cells could be prevented reversibly by blebbistatin, an inhibitor of nmII function. MFA also reduced the gel contraction, suggesting that intercellular coupling modulates contractility. The results demonstrate that these cells can impart nmII-dependent contractile force on a collagenous substrate, and support the hypothesis that type III fibrocytes regulate tension in the spiral ligament-basilar membrane complex, thereby determining auditory sensitivity.
actin; aquaporin; basilar membrane; connexin; myosin; stria vascularis
In a healthy cochlea stimulated with two tones f1 and f2, combination tones are generated by the cochlea's active process and its associated nonlinearity. These distortion tones travel “in reverse” through the middle ear. They can be detected with a sensitive microphone in the ear canal (EC) and are known as distortion product otoacoustic emissions. Comparisons of ossicular velocity and EC pressure responses at distortion product frequencies allowed us to evaluate the middle ear transmission in the reverse direction along the ossicular chain. In the current study, the gerbil ear was stimulated with two equal-intensity tones with fixed f2/f1 ratio of 1.05 or 1.25. The middle ear ossicles were accessed through an opening of the pars flaccida, and their motion was measured in the direction in line with the stapes piston-like motion using a laser interferometer. When referencing the ossicular motion to EC pressure, an additional amplitude loss was found in reverse transmission compared to the gain in forward transmission, similar to previous findings relating intracochlear and EC pressure. In contrast, sound transmission along the ossicular chain was quite similar in forward and reverse directions. The difference in middle ear transmission in forward and reverse directions is most likely due to the different load impedances—the cochlea in forward transmission and the EC in reverse transmission.
middle ear; ossicles; middle ear gain; otoacoustic emissions
Temporary hearing threshold shift (TTS) resulting from a “benign” noise exposure can cause irreversible auditory nerve afferent terminal damage and retraction. While hearing thresholds and acute tissue injury recover within 1–2 weeks after a noise overexposure, it is not clear if multiple TTS noise exposures would result in cumulative damage even though sufficient TTS recovery time is provided. Here, we tested whether repeated TTS noise exposures affected permanent hearing thresholds and examined how that related to inner ear histopathology. Despite a peak 35–40 dB TTS 24 hours after each noise exposure, a double dose (2 weeks apart) of 100 dB noise (8–16 kHz) exposures to young (4-week-old) CBA mice resulted in no permanent threshold shifts (PTS) or abnormal distortion product otoacoustic emissions (DPOAEs). However, although auditory brainstem response (ABR) thresholds recovered fully in once- and twice-exposed animals, the growth function of the ABR wave 1 peak-to-peak amplitude (synchronized spiral ganglion cell activity) was significantly reduced to a similar extent in both groups, suggesting that damage resulting from a second dose of the exposure was not proportional to that observed after the initial exposure. Estimates of surviving inner hair cell afferent terminals using immunostaining of presynaptic ribbons revealed ribbon loss of ∼40 % at the ∼23 kHz region after the first round of noise exposure, but no additional loss of ribbons after the second exposure. In contrast, a third dose of the same noise exposure resulted in not only TTS, but also PTS, even in regions where DPOAEs were not affected. The pattern of PTS seen was not entirely tonotopically related to the noise band used. Instead, it more closely resembled that of age-related hearing loss, i.e., high frequency hearing impairment towards the base of the cochlea.
Interestingly, after a third dose of the noise exposure, additional loss of ribbons (another ∼25 %) was observed, suggesting a cumulative detrimental effect from individual “benign” noise exposures, which should result in a significant deficit in central temporal processing.
auditory; noise-induced hearing loss; temporary threshold shift; CBA mice
In the mature mammalian auditory system, inner hair cells are responsible for converting sound-evoked vibrations into graded electrical responses, resulting in release of neurotransmitter and neuronal transmission via the VIIIth cranial nerve to auditory centres in the central nervous system. Before the cochlea can reliably respond to sound, inner hair cells are not merely immature quiescent pre-hearing cells, but instead are capable of generating ‘spontaneous’ calcium-based action potentials. The resulting calcium signal promotes transmitter release that drives action potential firing in developing spiral ganglion neurones. These early signalling events that occur before sound-evoked activity are thought to be important in guiding and refining the initial phases of development of the auditory circuits. This review will summarise our current knowledge of the mechanisms that underlie spontaneous action potentials in developing inner hair cells and how these events are triggered and regulated.
action potentials; calcium; inner hair cells; cochlea; development; auditory; spiral ganglion neuron
Electrically evoked auditory steady-state responses (EASSRs) are EEG potentials in response to periodic electrical stimuli presented through a cochlear implant. For low-rate pulse trains in the 40-Hz range, electrophysiological thresholds derived from response amplitude growth functions correlate well with behavioral T levels at these rates. The aims of this study were: (1) to improve the correlation between electrophysiological thresholds and behavioral T levels at 900 pps by using amplitude-modulated (AM) and pulse-width-modulated (PWM) high-rate pulse trains, (2) to develop and evaluate the performance of a new statistical method for response detection which is robust in the presence of stimulus artifacts, and (3) to assess the ability of this statistical method to determine reliable electrophysiological thresholds without any stimulus artifact removal. For six users of a Nucleus cochlear implant and a total of 12 stimulation electrode pairs, EASSRs to symmetric biphasic bipolar pulse trains were recorded with seven scalp electrodes. Responses to six different stimuli were analyzed: two low-rate pulse trains with pulse rates in the 40-Hz range as well as two AM and two PWM high-rate pulse trains with a carrier rate of 900 pps and modulation frequencies in the 40-Hz range. Responses were measured at eight different stimulus intensities for each stimulus and stimulation electrode pair. Artifacts due to the electrical stimulation were removed from the recordings. To determine the presence of a neural response, a new statistical method based on a two-sample Hotelling T² test was used. Measurements from different recording electrodes and adjacent stimulus intensities were combined to increase statistical power. The results show that EASSRs to modulated high-rate pulse trains account for some of the temporal effects at 900 pps and result in improved electrophysiological thresholds that correlate very well with behavioral T levels at 900 pps.
The proposed statistical method for response detection based on a two-sample Hotelling T² test has comparable performance to previously used one-sample tests and does not require stimulus artifacts to be removed from the EEG signal for the determination of reliable electrophysiological thresholds.
electrophysiology; objective measures; behavioral measures; electrical stimulus; apparent latency; EASSR
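A two-sample Hotelling T² test of the kind described compares the means of two groups of multivariate observations (e.g., the real and imaginary parts of the EEG spectrum at the response frequency, one observation per epoch) without assuming a known noise mean. A generic textbook implementation, not the authors' code:

```python
import numpy as np
from scipy.stats import f

def hotelling_t2_two_sample(x, y):
    """Two-sample Hotelling T^2 test.
    x: (n1, p) and y: (n2, p) arrays of p-dimensional observations.
    Returns the T^2 statistic and the p-value from its F conversion."""
    n1, p = x.shape
    n2, _ = y.shape
    d = x.mean(axis=0) - y.mean(axis=0)
    # pooled sample covariance of the two groups
    s = ((n1 - 1) * np.cov(x, rowvar=False) +
         (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(s, d)
    # exact F conversion: F ~ F(p, n1 + n2 - p - 1) under H0
    fstat = (n1 + n2 - p - 1) / ((n1 + n2 - 2) * p) * t2
    pval = f.sf(fstat, p, n1 + n2 - p - 1)
    return t2, pval
```

In an EASSR setting, one group could hold per-epoch spectral estimates at the modulation frequency and the other a noise reference (e.g., neighboring frequency bins), with a small p-value indicating a detected response.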
Despite the high prevalence of tinnitus and its impact on quality of life, there is no cure for tinnitus at present. Here, we report an effective means to temporarily suppress tinnitus by amplitude- and frequency-modulated tones. We systematically explored the interaction between subjective tinnitus and 17 external sounds in 20 chronic tinnitus sufferers. The external sounds included traditionally used unmodulated stimuli such as pure tones and white noise and dynamically modulated stimuli known to produce sustained neural synchrony in the central auditory pathway. All external sounds were presented in a random order to all subjects and at a loudness level that was just below tinnitus loudness. We found some tinnitus suppression in terms of reduced loudness by at least one of the 17 stimuli in 90% of the subjects, with the greatest suppression by amplitude-modulated tones with carrier frequencies near the tinnitus pitch for tinnitus sufferers with relatively normal loudness growth. Our results suggest that, in addition to a traditional masking approach using unmodulated pure tones and white noise, modulated sounds should be used for tinnitus suppression because they may be more effective in reducing hyperactive neural activities associated with tinnitus. The long-term effects of the modulated sounds on tinnitus and the underlying mechanisms remain to be investigated.
subjective tinnitus; hyperacusis; sound therapy; loudness; loudness recruitment; amplitude modulation; frequency modulation; pure tones; white noise
When driven at sound pressure levels greater than ~110 dB SPL, the mammalian middle ear is known to produce subharmonic distortion. In this study, we simultaneously measured subharmonics in the ear canal pressure, intracochlear pressure, and basilar membrane or round window membrane velocity in gerbil. Our primary objective was to quantify the relationship between the subharmonics measured in the ear canal and their intracochlear counterparts. We had two primary findings: (1) the subharmonics emerged suddenly, with a substantial amplitude in the ear canal and the cochlea; (2) at the stimulus level at which subharmonics emerged, the scala vestibuli pressure/ear canal pressure amplitude relationship was similar for the subharmonic and fundamental components. These findings are important for experiments and clinical conditions in which high sound pressure level stimuli are used and could lead to confounding subharmonic stimulation.
tympanic membrane; intracochlear pressure; subharmonics; nonlinearity; hearing aid
Cochlear implants provide good speech discrimination ability despite the highly limited amount of information they transmit compared with the normal cochlea. Studies of noise-vocoded speech, which simulates cochlear implant processing in normal-hearing listeners, have demonstrated that spectrally and temporally degraded speech contains sufficient cues to support accurate speech discrimination. We hypothesized that the neural activity patterns generated in the primary auditory cortex by spectrally and temporally degraded speech sounds would account for the robust behavioral discrimination of speech. We examined the behavioral discrimination of noise-vocoded consonants and vowels by rats and recorded neural activity patterns from rat primary auditory cortex (A1) for the same sounds. We report the first evidence of behavioral discrimination of degraded speech sounds by an animal model. Our results show that rats are able to accurately discriminate both consonant and vowel sounds even after significant spectral and temporal degradation. The degree of degradation that rats can tolerate is comparable to that tolerated by human listeners. We observed that neural discrimination based on spatiotemporal patterns (spike timing) of A1 neurons is highly correlated with behavioral discrimination of consonants and that neural discrimination based on spatial activity patterns (spike count) of A1 neurons is highly correlated with behavioral discrimination of vowels. The results of the current study indicate that speech discrimination is resistant to degradation as long as the degraded sounds generate distinct patterns of neural activity.
Electronic supplementary material
The online version of this article (doi:10.1007/s10162-012-0328-1) contains supplementary material, which is available to authorized users.
speech processing; neural code; noise vocoded speech; primary auditory cortex; cochlear implants; spatiotemporal patterns; spatial patterns
Rotations of the head evoke compensatory reflexive eye rotations in the orbit to stabilize images onto the fovea. In normal humans, the angular vestibulo-ocular reflex (aVOR) gain (eye/head velocity) changes depending on the head rotation plane. For pitch and yaw head rotations, the gain is near unity, but during roll head rotations, the aVOR gain is ∼0.7. The purpose of this study was to determine whether this physiological discrepancy affects dynamic visual acuity (DVA)—a functional measure of the aVOR that requires subjects to identify letters of varying acuities during head rotation. We used the scleral search coil technique to measure eye and head velocity during passive DVA testing in yaw, roll, and pitch head impulses in healthy controls and patients with unilateral vestibular hypofunction (UVH). For control subjects, the mean aVOR gain during roll impulses was significantly lower than the mean aVOR gain during yaw and pitch impulses; however, there was no difference in DVA between yaw, roll, or pitch. For subjects with UVH, aVOR gains were asymmetric only during yaw head rotations (ipsilesional, 0.32 ± 0.17, vs. contralesional, 0.95 ± 0.05), with no asymmetry during roll or pitch. Similarly, there was a large asymmetry for DVA only during yaw head rotations, with no asymmetry in roll or pitch. Interestingly, DVA during roll toward the affected ear was better than DVA during yaw toward the affected ear—even though the ipsilesional roll aVOR gain was 60 % lower. During roll, the axis of eye rotation remains nearly perpendicular to the fovea, resulting in minimal displacement between the fovea and the fixation target image projected onto the back of the eye. For subjects with UVH, the DVA score during passive horizontal impulses is a better indicator of poor gaze stability than during passive roll or pitch.
vestibulo-ocular reflex; head impulses; dynamic visual acuity; gain
The precedence effect (PE) refers to the dominance of directional information carried by a direct sound (lead) over the spatial information contained in its multiple reflections (lags) in sound localization. Although the processes underlying the PE have been extensively investigated, the extent to which peripheral versus central auditory processes contribute to this perceptual phenomenon has remained unclear. The present study investigated the contribution of peripheral processing to the PE through a comparison of physiological and psychoacoustical data in the same human listeners. The psychoacoustical experiments, comprising a fusion task, an interaural time difference detection task and a lateralization task, demonstrated a time range from 1 to 4.6–5 ms in which the PE operated (precedence window). Click-evoked otoacoustic emissions (CEOAEs) were recorded in both ears to investigate the lead–lag interactions at the level of the basilar membrane (BM) in the cochlea. The CEOAE-derived peripheral and monaural lag suppression was largest for inter-click intervals (ICIs) of 1–4 ms. Auditory brainstem responses (ABRs) were used to investigate monaural and binaural lag suppression at the brainstem level. The responses to monaural stimulation reflected the peripheral lag suppression observed in the CEOAE results, while the binaural brainstem responses did not show any substantial contribution of binaural processes to monaural lag suppression. The results demonstrated that the lag suppression occurring at the BM in a time range from 1 to 4 ms, as indicated by the suppression of the lag-CEOAE, was the source of the reduction in the lag-ABRs and a possible peripheral contributor to the PE for click stimuli.
precedence effect; lag suppression; periphery; click-evoked otoacoustic emission; auditory brainstem response
A large number of perivascular cells expressing both macrophage and melanocyte characteristics (named perivascular-resident macrophage-like melanocytes, PVM/Ms), previously found in the intra-strial fluid–blood barrier, are also found in the blood–labyrinth barrier area of the vestibular system of the normal adult mouse, including in the three ampullae of the semicircular canals (posterior, superior, and horizontal), the utricle, and the saccule. The cells were identified as PVM/Ms, positive for the macrophage and melanocyte marker proteins F4/80 and GSTα4. Similar to PVM/Ms present in the stria vascularis, the PVM/Ms in the vestibular system are closely associated with microvessels and structurally intertwined with endothelial cells and pericytes, with a density in the normal (unstimulated) utricle of 225 ± 43/mm²; saccule, 191 ± 25/mm²; horizontal ampullae, 212 ± 36/mm²; anterior ampullae, 238 ± 36/mm²; and posterior ampullae, 223 ± 64/mm². Injection of bacterial lipopolysaccharide into the middle ear through the tympanic membrane causes the PVM/Ms to activate and arrange in an irregular pattern along capillary walls in all regions within a 48-h period. The inflammatory response significantly increases vascular permeability and leakage. The results underscore the morphological complexity of the blood barrier in the vestibular system, with its surrounding basal lamina and pericytes as well as a second line of defense in the PVM/Ms. PVM/Ms may be important for maintaining blood barrier integrity and initiating local inflammatory responses in the vestibular system.
mouse vestibular system; perivascular-resident macrophage-like melanocyte; inflammation; vascular permeability
The neural mechanisms of pitch coding have been debated for more than a century. The two main candidate mechanisms are coding based on the profiles of neural firing rates across auditory nerve fibers with different characteristic frequencies (place-rate coding) and coding based on the phase-locked temporal pattern of neural firing (temporal coding). Phase-locking precision can be partly assessed by recording the frequency-following response (FFR), a scalp-recorded electrophysiological response that reflects synchronous activity in subcortical neurons. Although features of the FFR have been widely used as indices of pitch coding acuity, only a handful of studies have directly investigated the relation between the FFR and behavioral pitch judgments. Furthermore, the contribution of degraded neural synchrony (as indexed by the FFR) to the pitch perception impairments of older listeners and those with hearing loss is not well understood. Here, the relation between the FFR and pure-tone frequency discrimination was investigated in listeners with a wide range of ages and absolute thresholds, to assess the respective contributions of subcortical neural synchrony and of other age-related and hearing-loss-related mechanisms to frequency discrimination performance. FFR measures of neural synchrony and absolute thresholds contributed independently to frequency discrimination performance. Age alone, i.e., once the effect of subcortical neural synchrony measures or absolute thresholds had been partialed out, did not contribute to frequency discrimination. Overall, the results suggest that frequency discrimination of pure tones may depend both on phase-locking precision and on separate mechanisms affected by hearing loss.
FFR; sensorineural hearing loss; pitch perception; neural phase locking; age
Various studies point to a crucial role for the high-affinity sodium-coupled glutamate aspartate transporter GLAST-1 in the modulation of excitatory transmission, as shown in the retina and the CNS. While 2- to 4-month-old GLAST-1 null mice did not show any functional vestibular abnormality, we observed profound circling behavior in older (7-month-old) animals lacking GLAST-1. An unchanged total number of otoferlin-positive vestibular hair cells (VHCs), similar ribbon numbers in VHCs, and unchanged VGLUT3 expression in type II VHCs were detected in GLAST-1 null mice compared to wild-type mice. A partial loss of supporting cells and an apparent decline of a voltage-gated potassium channel subunit (KCNQ4) were observed in postsynaptic calyceal afferents contacting type I VHCs, together with a reduction of neurofilament- (NF200-) and vesicular glutamate transporter 1- (VGLUT1-) positive calyces in GLAST-1 null mice. Taken together, GLAST-1 deletion appeared to preferentially affect the maintenance of a normal postsynaptic/neuronal phenotype, evident only with increasing age.
GLAST-1; vestibular hair cells; supporting cells; otoferlin; KCNQ4; VGLUT
Directional asymmetries in vestibular reflexes have aided the diagnosis of vestibular lesions; however, potential asymmetries in vestibular perception have not been well defined. This investigation sought to measure such asymmetries in human vestibular perception. Vestibular perception thresholds were measured in 24 healthy human subjects between the ages of 21 and 68 years. Stimuli consisted of a single cycle of sinusoidal acceleration in a single direction lasting 1 or 2 s (1 or 0.5 Hz), delivered in sway (left–right), surge (forward–backward), heave (up–down), or yaw rotation. Subject-identified self-motion directions were analyzed using a forced-choice technique, which permitted thresholds to be determined independently for each direction. Null (non-motion) stimuli were presented to measure possible response bias. A significant directional asymmetry in the dynamic response occurred in 27% of conditions tested within subjects, and in at least one type of motion in 92% of subjects. Directional asymmetries were usually consistent when retested in the same subject but, with the exception of heave at 0.5 Hz, did not occur consistently in one direction across the population. Responses during null-stimulus presentations suggested that the asymmetries were not due to biased guessing. Multiple models were fitted and compared to determine whether sensitivities were direction specific. Using the Akaike information criterion, the model with direction-specific sensitivities described the data better in 86% of runs than a model that used the same sensitivity for both directions. Mean thresholds for yaw were 1.3 ± 0.9°/s at 0.5 Hz and 0.9 ± 0.7°/s at 1 Hz and were independent of age. Thresholds for surge and sway were 1.7 ± 0.8 cm/s at 0.5 Hz and 0.7 ± 0.3 cm/s at 1 Hz for subjects <50 years old and were significantly higher in subjects >50 years old. Heave thresholds were higher overall and were independent of age.
vestibular; semi-circular canals; otoliths
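The model comparison described in the abstract above — fitting a direction-specific-sensitivity model against a shared-sensitivity model and choosing between them with the Akaike information criterion (AIC) — can be illustrated with a minimal sketch. Everything below (the cumulative-Gaussian psychometric model, the stimulus velocities, trial counts, and grid-search fit) is invented for illustration under stated assumptions; it is not the study's data or analysis code:

```python
import math

def phi(x):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def neg_log_lik(data, sigma_pos, sigma_neg):
    # data: list of (signed stimulus velocity, n correct, n trials)
    # P(correct direction judgment) modeled as Phi(|v| / sigma_direction)
    nll = 0.0
    for v, k, n in data:
        sigma = sigma_pos if v > 0 else sigma_neg
        p = min(max(phi(abs(v) / sigma), 1e-9), 1 - 1e-9)
        nll -= k * math.log(p) + (n - k) * math.log(1 - p)
    return nll

def fit(data, asymmetric):
    # crude grid-search maximum-likelihood fit over candidate sigmas
    grid = [0.2 + 0.05 * i for i in range(80)]
    best_nll, best_params = float("inf"), None
    if asymmetric:
        candidates = ((sp, sn) for sp in grid for sn in grid)
    else:
        candidates = ((s, s) for s in grid)
    for sp, sn in candidates:
        nll = neg_log_lik(data, sp, sn)
        if nll < best_nll:
            best_nll, best_params = nll, (sp, sn)
    return best_nll, best_params

def aic(nll, n_params):
    # AIC = 2k + 2 * (negative log-likelihood)
    return 2 * n_params + 2 * nll

# invented example data with a strong directional asymmetry:
# (signed velocity, n correct, n trials)
data = [(1.0, 98, 100), (0.5, 84, 100), (-1.0, 69, 100), (-0.5, 60, 100)]
nll_sym, sig_sym = fit(data, asymmetric=False)    # 1 free parameter
nll_asym, sig_asym = fit(data, asymmetric=True)   # 2 free parameters
prefer_asymmetric = aic(nll_asym, 2) < aic(nll_sym, 1)
```

Because the shared-sensitivity model is nested within the direction-specific one, the asymmetric fit always achieves at least as high a likelihood; the AIC penalty of 2 per extra parameter means the asymmetric model is preferred only when it improves the fit by more than that penalty.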
Visual cues are known to aid auditory processing when they provide direct information about signal content, as in lip reading. However, some studies hint that visual cues also aid auditory perception by guiding attention to the target in a mixture of similar sounds. The current study directly tests this idea for complex, nonspeech auditory signals, using a visual cue providing only timing information about the target. Listeners were asked to identify a target zebra finch bird song played at a random time within a longer, competing masker. Two different maskers were used: noise and a chorus of competing bird songs. On half of all trials, a visual cue indicated the timing of the target within the masker. For the noise masker, the visual cue did not affect performance when target and masker were from the same location, but improved performance when target and masker were in different locations. In contrast, for the chorus masker, visual cues improved performance only when target and masker were perceived as coming from the same direction. These results suggest that simple visual cues for when to listen improve target identification by enhancing sounds near the threshold of audibility when the target is energetically masked and by enhancing segregation when it is difficult to direct selective attention to the target. Visual cues help little when target and masker already differ in attributes that enable listeners to engage selective auditory attention effectively, including differences in spectrotemporal structure and in perceived location.
attention; cross-modal enhancement; energetic masking; informational masking; spatial separation; visual cues
Atoh1 (also known as Math1, Hath1, and Cath1 in mouse, human, and chicken, respectively) is a proneural basic helix–loop–helix (bHLH) transcription factor that is required in a variety of developmental contexts. Atoh1 is involved in the differentiation of neurons, secretory cells in the gut, and mechanoreceptors, including auditory hair cells. Together with the two closely related bHLH genes, Neurog1 and NeuroD1, Atoh1 regulates neurosensory development in the ear as well as neurogenesis in the cerebellum. Atoh1 activity in the cochlea is both necessary and sufficient to drive auditory hair cell differentiation, in keeping with its known role as a regulator of various genes that are markers of terminal differentiation. In other fields, Atoh1 is known as both an oncogene and a tumor suppressor involved in the regulation of cell cycle control and apoptosis. Aberrant Atoh1 activity in adult tissue is implicated in cancer progression, specifically in medulloblastoma and adenomatous polyposis carcinoma. We demonstrate through protein sequence comparison that Atoh1 contains conserved phosphorylation sites outside the bHLH domain, which may allow regulation through post-translational modification. Given such diverse roles, tight regulation of Atoh1 at both the transcriptional and protein levels is essential.
Math1; bHLH; transcription factor; cochlea; organ of Corti; hair cells