This study investigates the effect of spectral degradation on cortical speech encoding in complex auditory scenes. Young normal-hearing listeners were simultaneously presented with two speech streams and were instructed to attend to only one of them. The speech mixtures were subjected to noise-channel vocoding to preserve the temporal envelope and degrade the spectral information of speech. Each subject was tested with five spectral resolution conditions (unprocessed speech, 64-, 32-, 16-, and 8-channel vocoder conditions) and two target-to-masker ratio (TMR) conditions (3 and 0 dB). Ongoing electroencephalographic (EEG) responses and speech comprehension were measured in each spectral and TMR condition for each subject. Neural tracking of each speech stream was characterized by cross-correlating the EEG responses with the envelope of each of the simultaneous speech streams at different time lags. Results showed that spectral degradation and TMR both significantly influenced how top-down attention modulated the EEG responses to the attended and unattended speech. That is, the EEG responses to the attended and unattended speech streams differed more for the higher (unprocessed, 64 ch, and 32 ch) than the lower (16 and 8 ch) spectral resolution conditions, as well as for the higher (3 dB) than the lower TMR (0 dB) condition. The magnitude of differential neural modulation responses to the attended and unattended speech streams significantly correlated with speech comprehension scores. These results suggest that severe spectral degradation and low TMR hinder speech stream segregation, making it difficult to employ top-down attention to differentially process different speech streams.
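The envelope-tracking analysis described above (cross-correlating the EEG with each stream's envelope at a range of time lags) can be sketched as follows. This is a minimal illustration: the sampling rate, lag grid, and toy signals are assumptions for demonstration, not the study's actual parameters.

```python
import numpy as np

def lagged_xcorr(eeg, envelope, fs, max_lag_ms=500, step_ms=10):
    """Normalized cross-correlation between one EEG channel and a speech
    envelope, with the EEG lagging the stimulus by 0..max_lag_ms."""
    eeg = (eeg - eeg.mean()) / eeg.std()
    envelope = (envelope - envelope.mean()) / envelope.std()
    lags_ms = np.arange(0, max_lag_ms + 1, step_ms)
    r = np.empty(lags_ms.size)
    for i, lag in enumerate(lags_ms):
        shift = int(fs * lag / 1000)
        x = envelope[: len(envelope) - shift]
        y = eeg[shift: shift + len(x)]
        r[i] = np.dot(x, y) / len(x)
    return lags_ms, r

# Toy demonstration: an "EEG" trace that tracks the attended envelope
# with a 100-ms lag (all numbers here are illustrative, not study data).
fs = 100                                      # Hz, envelope-rate sampling
rng = np.random.default_rng(0)
attended = rng.standard_normal(fs * 60)       # 60-s attended envelope
unattended = rng.standard_normal(fs * 60)     # independent unattended envelope
eeg = np.roll(attended, int(0.1 * fs)) + 0.5 * rng.standard_normal(fs * 60)
lags, r_att = lagged_xcorr(eeg, attended, fs)
_, r_unatt = lagged_xcorr(eeg, unattended, fs)
```

In this toy case the attended-stream correlation peaks at the built-in 100-ms lag while the unattended stream shows no reliable tracking, mirroring the attentional contrast the study quantifies.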
cortical entrainment; cochlear implant; spectral degradation; sound segregation; event-related potentials; auditory scene analysis
This study examined correlations between pitch and phoneme perception for nine cochlear implant users and nine normal hearing listeners. Pure tone frequency discrimination thresholds were measured for frequencies of 500, 1000, and 2000 Hz. Complex tone fundamental frequency (F0) discrimination thresholds were measured for F0s of 110, 220, and 440 Hz. The effects of amplitude and frequency roving were measured under the rationale that individuals who are robust to such perturbations would perform better on phoneme perception measures. Phoneme identification was measured using consonant and vowel materials in quiet, in stationary speech-shaped noise (SSN), in spectrally notched SSN, and in temporally gated SSN. Cochlear implant pure tone frequency discrimination thresholds ranged between 1.5 and 9.9 %, while cochlear implant complex tone F0 discrimination thresholds ranged between 2.6 and 28.5 %. On average, cochlear implant users had 5.3 dB of masking release for consonants and 8.4 dB of masking release for vowels when measured in temporally gated SSN compared to stationary SSN. Correlations with phoneme identification measures were generally higher for complex tone discrimination measures than for pure tone discrimination measures. Correlations with phoneme identification measures were also generally higher for pitch perception measures that included amplitude and frequency roving. The strongest correlations were observed for measures of complex tone F0 discrimination with phoneme identification in temporally gated SSN. The results of this study suggest that musical training or signal processing strategies that improve F0 discrimination should improve consonant identification in fluctuating noise.
cochlear implants; pitch perception; speech reception; psychophysics
Human hearing is rather insensitive to very low frequencies (i.e. below 100 Hz). Despite this insensitivity, low-frequency sound can cause oscillating changes of cochlear gain in inner ear regions processing even much higher frequencies. These alterations outlast the duration of the low-frequency stimulation by several minutes, for which the term ‘bounce phenomenon’ has been coined. Previously, we have shown that the bounce can be traced by monitoring frequency and level changes of spontaneous otoacoustic emissions (SOAEs) over time. It has been suggested elsewhere that large receptor potentials elicited by low-frequency stimulation produce a net Ca2+ influx and associated gain decrease in outer hair cells. The bounce presumably reflects an underdamped, homeostatic readjustment of increased Ca2+ concentrations and related gain changes after low-frequency sound offset. Here, we test this hypothesis by activating the medial olivocochlear efferent system during presentation of the bounce-evoking low-frequency (LF) sound. The efferent system is known to modulate outer hair cell Ca2+ concentrations and receptor potentials, and therefore, it should modulate the characteristics of the bounce phenomenon. We show that simultaneous presentation of contralateral broadband noise (100 Hz–8 kHz, 65 and 70 dB SPL, 90 s, activating the efferent system) and ipsilateral low-frequency sound (30 Hz, 120 dB SPL, 90 s, inducing the bounce) affects the characteristics of bouncing SOAEs recorded after low-frequency sound offset. Specifically, the decay time constant of the SOAE level changes is shorter, and the transient SOAE suppression is less pronounced. Moreover, the number of new, transient SOAEs, as seen during the bounce, is reduced. Taken together, activation of the medial olivocochlear system during induction of the bounce phenomenon with low-frequency sound results in changed characteristics of the bounce phenomenon.
Thus, our data provide experimental support for the hypothesis that outer hair cell calcium homeostasis is the source of the bounce phenomenon.
cochlea; spontaneous otoacoustic emissions; low-frequency sound; medial olivocochlear system
Listeners with normal audiometric thresholds can still have suprathreshold deficits, for example, in the ability to discriminate sounds in complex acoustic scenes. One likely source of these deficits is cochlear neuropathy, a loss of auditory nerve (AN) fibers without hair cell damage, which can occur due to both aging and moderate acoustic overexposure. Since neuropathy can affect up to 50 % of AN fibers, its impact on suprathreshold hearing is likely profound, but progress is hindered by lack of a robust non-invasive test of neuropathy in humans. Reduction of suprathreshold auditory brainstem responses (ABRs) can be used to quantify neuropathy in inbred mice. However, ABR amplitudes are highly variable in humans, and thus more challenging to use. Since noise-induced neuropathy is selective for AN fibers with high thresholds, and because phase locking to temporal envelopes is particularly strong in these fibers, the envelope following response (EFR) might be a more robust measure. We compared EFRs to sinusoidally amplitude-modulated tones and ABRs to tone-pips in mice following a neuropathic noise exposure. EFR amplitude, EFR phase-locking value, and ABR amplitude were all reduced in noise-exposed mice. However, the changes in EFRs were more robust: the variance was smaller, thus inter-group differences were clearer. Optimum detection of neuropathy was achieved with high modulation frequencies and moderate levels. Analysis of group delays was used to confirm that the AN population was dominating the responses at these high modulation frequencies. Application of these principles in clinical testing can improve the differential diagnosis of sensorineural hearing loss.
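The two EFR metrics compared above, response amplitude and phase-locking value (PLV) at the modulation frequency, can be computed from epoched recordings roughly as follows. The trial count, sampling rate, and noise level in the demonstration are illustrative assumptions, not values from the study.

```python
import numpy as np

def efr_metrics(trials, fs, fmod):
    """Amplitude and phase-locking value (PLV) at the modulation frequency.

    trials : (n_trials, n_samples) array of epoched responses.
    PLV = |mean over trials of the unit phase vector| at the bin
    nearest fmod; 0 = random phase, 1 = perfect phase locking."""
    n = trials.shape[1]
    freqs = np.fft.rfftfreq(n, 1 / fs)
    k = int(np.argmin(np.abs(freqs - fmod)))
    spectra = np.fft.rfft(trials, axis=1)[:, k]
    amplitude = float(np.abs(spectra.mean()))             # coherent average
    plv = float(np.abs(np.mean(spectra / np.abs(spectra))))
    return amplitude, plv

# Toy data: 50 trials of a 1-kHz amplitude-following response in noise,
# versus 50 noise-only trials (all parameters illustrative).
fs, fmod, n = 10000, 1000, 1000
t = np.arange(n) / fs
rng = np.random.default_rng(1)
sig_trials = np.sin(2 * np.pi * fmod * t) + 0.5 * rng.standard_normal((50, n))
noise_trials = rng.standard_normal((50, n))
amp_sig, plv_sig = efr_metrics(sig_trials, fs, fmod)
amp_noise, plv_noise = efr_metrics(noise_trials, fs, fmod)
```

Phase-locked trials yield a PLV near 1 while noise-only trials hover near the chance floor of roughly 1/√(n_trials), which is why PLV can give a tighter group separation than raw amplitude.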
hidden hearing loss; auditory neuropathy; acoustic overexposure; envelope following response; auditory steady-state response; auditory brainstem response; auditory nerve
Unilateral hearing loss (UHL) leads to an imbalanced input to the brain and results in cortical reorganization. In listeners with unilateral impairments, while the perceptual deficits associated with the impaired ear are well documented, less is known regarding the auditory processing in the unimpaired, clinically normal ear. It is commonly accepted that perceptual consequences are unlikely to occur in the normal ear for listeners with UHL. This study investigated whether the temporal resolution in the normal-hearing (NH) ear of listeners with long-standing UHL is similar to that in listeners with NH. Temporal resolution was assayed by measuring gap detection thresholds (GDTs) in within- and between-channel paradigms. GDTs were assessed in the normal ear of adults with long-standing, severe-to-profound UHL (N = 13) and age-matched, NH listeners (N = 22) at two presentation levels (30 and 55 dB sensation level). Analysis indicated that within-channel GDTs for listeners with UHL were not significantly different than those for the NH subject group, but the between-channel GDTs for listeners with UHL were poorer (by greater than a factor of 2) than those for the listeners with NH. The hearing thresholds in the normal or impaired ears were not associated with the elevated between-channel GDTs for listeners with UHL. Contrary to the common assumption that auditory processing capabilities are preserved for the normal ear in listeners with UHL, the current study demonstrated that a long-standing unilateral hearing impairment may adversely affect auditory perception (temporal resolution) in the clinically normal ear. From a translational perspective, these findings imply that the temporal processing deficits in the unimpaired ear of listeners with unilateral hearing impairments may contribute to their overall auditory perceptual difficulties.
temporal resolution; unilateral hearing loss/deafness; within- and between-channel gap detection
According to coherent reflection theory (CRT), stimulus frequency otoacoustic emissions (SFOAEs) arise from cochlear irregularities coherently reflecting energy from basilar membrane motion within the traveling-wave peak. This reflected energy arrives in the ear canal predominantly with a single delay at each frequency. However, data from humans and animals indicate that (1) SFOAEs can have multiple delay components, (2) low-frequency SFOAE delays are too short to be accounted for by CRT, and (3) “SFOAEs” obtained with a 2nd (“suppressor”) tone ≥2 octaves above the probe tone have been interpreted as arising from the area basal to the region of cochlear amplification. To explore these issues, we collected SFOAEs by the suppression method in guinea pigs and time-frequency analyzed these data, along with simulated SFOAEs and published chinchilla SFOAEs. Time-frequency analysis revealed that most frequencies showed only one SFOAE delay component while other frequencies had multiple components including some with short delays. We found no systematic patterns in the occurrence of multiple delay components. Using a cochlear model that had significant basilar membrane motion only in the peak region of the traveling wave, simulated SFOAEs had single and multiple delay components similar to the animal SFOAEs. This result indicates that multiple components (including ones with short delays) can originate from cochlear mechanical irregularities in the SFOAE peak region and are not necessarily indicative of SFOAE sources in regions ≥2 octaves basal of the SFOAE peak region. We conclude that SFOAEs obtained with suppressors close to the probe frequency provide information primarily about the mechanical response in the region that receives amplification, and we attribute the too-short SFOAE delays at low frequencies to distortion-source SFOAEs and coherent reflection from multiple cochlear motions.
Our findings suggest that CRT needs revision to include reflections from multiple motions in the cochlear apex.
coherent reflection; SFOAE; TEOAE; cochlear mechanics
We present a finite-element model of the gerbil middle ear that, using a set of baseline parameters based primarily on a priori estimates from the literature, generates responses that are comparable with responses we measured in vivo using multi-point vibrometry and with those measured by other groups. We investigated the similarity of numerous features (umbo, pars-flaccida and pars-tensa displacement magnitudes, the resonance frequency and break-up frequency, etc.) in the experimental responses with corresponding ones in the model responses, as opposed to simply computing frequency-by-frequency differences between experimental and model responses. The umbo response of the model is within the range of variability seen in the experimental data in terms of the low-frequency (i.e., well below the middle-ear resonance) magnitude and phase, the main resonance frequency and magnitude, and the roll-off slope and irregularities in the response above the resonance frequency, but is somewhat high for frequencies above the resonance frequency. At low frequencies, the ossicular axis of rotation of the model appears to correspond to the anatomical axis but the behaviour is more complex at high frequencies (i.e., above the pars-tensa break-up). The behaviour of the pars tensa in the model is similar to what is observed experimentally in terms of magnitudes, phases, the break-up frequency of the spatial vibration pattern, and the bandwidths of the high-frequency response features. A sensitivity analysis showed that the parameters that have the strongest effects on the model results are the Young’s modulus, thickness and density of the pars tensa; the Young’s modulus of the stapedial annular ligament; and the Young’s modulus and density of the malleus. 
Displacements of the tympanic membrane and manubrium and the low-frequency displacement of the stapes did not show large changes when the material properties of the incus, stapes, incudomallear joint, incudostapedial joint, and posterior incudal ligament were changed by ±10 % from their values in the baseline parameter set.
tympanic membrane; pars tensa; pars flaccida; vibration; ossicles; sound stimulus; dynamic model; frequency response; sensitivity analysis
Using laser vibrometry and a stimulation and signal analysis method based on multisines, we have measured the response and the nonlinearities in the vibration of the rabbit middle ear at the level of the umbo and the stapes. With our method, we were able to detect and quantify nonlinearities starting at sound pressure levels of 93 dB SPL. The current results show that no significant additional nonlinearity is generated as the vibration signal is passed through the middle ear chain. Nonlinearities are most prominent in the lower frequencies (125 Hz to 1 kHz), where their level is about 40 dB below the vibration response. The level of nonlinearities rises by a factor of nearly 2 as a function of sound pressure level, indicating that they may become important at very high sound pressure levels such as those used in high-power hearing aids.
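The principle behind multisine-based distortion measurement, exciting only selected frequency bins and reading distortion power from bins that were never excited, can be illustrated with a toy simulation. The bin spacing, the polynomial nonlinearity, and the levels below are arbitrary choices for demonstration, not the rabbit middle-ear data.

```python
import numpy as np

fs = n = 4096                             # 1-s record -> 1-Hz bin spacing
rng = np.random.default_rng(2)
excited = np.arange(101, 1001, 20)        # excited bins (Hz); spacing chosen so
                                          # difference products land between them
t = np.arange(n) / fs
phases = rng.uniform(0, 2 * np.pi, excited.size)
x = np.sum(np.cos(2 * np.pi * excited[:, None] * t + phases[:, None]), axis=0)
x /= np.abs(x).max()                      # normalize peak to 1

y = x + 0.01 * x**2 + 0.005 * x**3        # weak memoryless nonlinearity (toy)

Y = np.abs(np.fft.rfft(y)) / n
sig_level_db = 20 * np.log10(Y[excited].mean())
detect = np.arange(20, 1001, 20)          # bins never excited: they collect the
                                          # even-order difference products
dist_level_db = 20 * np.log10(Y[detect].mean())
```

Because the excitation is confined to known bins, any energy appearing in the unexcited detection bins is unambiguously distortion, which is what lets the method quantify nonlinearities tens of dB below the vibration response.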
middle ear; nonlinear distortions; laser vibrometry; multisine
Cyclodextrins are simple yet powerful molecules widely used in medicinal formulations and industry for their ability to stabilize and solubilize guest compounds. However, recent evidence shows that 2-hydroxypropyl-β-cyclodextrin (HPβCD) causes severe hearing loss in mice, selectively killing outer hair cells (OHC) within 1 week of subcutaneous drug treatment. In the current study, the impact of HPβCD on auditory physiology and pathology was explored further as a function of time and route of administration. When administered subcutaneously or directly into cerebrospinal fluid, single injections of HPβCD caused up to 60 dB threshold shifts and widespread OHC loss in a dose-dependent manner. Combined dosing caused no greater deficit, suggesting a common mode of action. After drug treatment, OHC loss progressed over time, beginning in the base and extending toward the apex, creating a sharp transition between normal and damaged regions of the cochlea. Administration into cerebrospinal fluid caused rapid ototoxicity when compared to subcutaneous delivery. Despite the devastating effect on the cochlea, HPβCD was relatively safe to other peripheral and central organ systems; specifically, it had no notable nephrotoxicity in contrast to other ototoxic compounds like aminoglycosides and platinum-based drugs. As cyclodextrins find expanding medicinal applications, caution should be exercised as these drugs possess a unique, poorly understood, ototoxic mechanism.
cochlea; hearing loss; ototoxicity; cyclodextrin; drug delivery
Auditory enhancement refers to the perceptual phenomenon that a target sound is heard out more readily from a background sound if the background is presented alone first. Here we used stimulus-frequency otoacoustic emissions (SFOAEs) to test the hypothesis that activation of the medial olivocochlear efferent system contributes to auditory enhancement effects. The SFOAEs were used as a tool to measure changes in cochlear responses to a target component and the neighboring components of a multitone background between conditions producing enhancement and conditions producing no enhancement. In the “enhancement” condition, the target and multitone background were preceded by a precursor stimulus with a spectral notch around the signal frequency; in the control (no-enhancement) condition, the target and multitone background were presented without the precursor. In an experiment using a wideband multitone stimulus known to produce significant psychophysical enhancement effects, SFOAEs showed no changes consistent with enhancement, but some aspects of the results indicated possible contamination of the SFOAE magnitudes by the activation of the middle-ear-muscle reflex. The same SFOAE measurements performed using narrower-band stimuli at lower sound levels also showed no SFOAE changes consistent with either absolute or relative enhancement despite robust psychophysical enhancement effects observed in the same listeners with the same stimuli. The results suggest that cochlear efferent control does not play a significant role in auditory enhancement effects.
medial olivocochlear reflex; cochlear gain; auditory enhancement
Individuals with sudden unilateral deafness offer a unique opportunity to study plasticity of the binaural auditory system in adult humans. Stimulation of the intact ear results in increased activity in the auditory cortex. However, there are no reports of changes at sub-cortical levels in humans. Therefore, the aim of the present study was to investigate changes in sub-cortical activity immediately before and after the onset of surgically induced unilateral deafness in adult humans. Click-evoked auditory brainstem responses (ABRs) to stimulation of the healthy ear were recorded from ten adults during the course of translabyrinthine surgery for the removal of a unilateral acoustic neuroma. This surgical technique always results in abrupt deafferentation of the affected ear. The results revealed a rapid (within minutes) reduction in latency of wave V (mean pre = 6.55 ms; mean post = 6.15 ms; p < 0.001). A latency reduction was also observed for wave III (mean pre = 4.40 ms; mean post = 4.13 ms; p < 0.001). These reductions in response latency are consistent with functional changes including disinhibition and/or more rapid intra-cellular signalling affecting binaurally sensitive neurons in the central auditory system. The results are highly relevant for improved understanding of putative physiological mechanisms underlying perceptual disorders such as tinnitus and hyperacusis.
disinhibition; deafferentation; auditory brainstem response; unilateral deafness; acoustic neuroma; neural plasticity
Humans, and many other species, exploit small differences in the timing of sounds at the two ears (interaural time difference, ITD) to locate their source and to enhance their detection in background noise. Despite their importance in everyday listening tasks, however, the neural representation of ITDs in human listeners remains poorly understood, and few studies have assessed ITD sensitivity to a similar resolution to that reported perceptually. Here, we report an objective measure of ITD sensitivity in electroencephalography (EEG) signals to abrupt modulations in the interaural phase of amplitude-modulated low-frequency tones. Specifically, we measured following responses to amplitude-modulated sinusoidal signals (520-Hz carrier) in which the stimulus phase at each ear was manipulated to produce discrete interaural phase modulations at minima in the modulation cycle—interaural phase modulation following responses (IPM-FRs). The depth of the interaural phase modulation (IPM) was defined by the sign and the magnitude of the interaural phase difference (IPD) transition, which was symmetric around zero. Seven IPM depths were assessed over the range of ±22° to ±157°, corresponding to ITDs largely within the range experienced by human listeners under natural listening conditions (120 to 841 μs). The magnitude of the IPM-FR was maximal for IPM depths in the range of ±67.6° to ±112.6° and correlated well with performance in a behavioural experiment in which listeners were required to discriminate sounds containing IPMs from those with only static IPDs. The IPM-FR provides a sensitive measure of binaural processing in the human brain and has the potential to assess temporal binaural processing.
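The mapping from IPM depth to equivalent ITD is the standard phase-to-time conversion, ITD = IPD/(360·f). As a consistency check on the quoted numbers: exact steps of 22.5° (which would round to the abstract's ±22° and ±157° endpoints) reproduce the quoted 120–841 μs range at the 520-Hz carrier. The exact step size is our assumption, inferred from that agreement.

```python
import numpy as np

def ipd_to_itd_us(ipd_deg, carrier_hz):
    """Interaural time difference (us) equivalent to an interaural
    phase difference (degrees) at the given carrier frequency."""
    return ipd_deg / 360.0 / carrier_hz * 1e6

carrier = 520.0                    # Hz, carrier used in the study
depths = np.arange(1, 8) * 22.5    # seven IPM depths, assumed exact 22.5-deg steps
itds = ipd_to_itd_us(depths, carrier)
```

Under this assumption the seven depths map to ITDs of roughly 120, 240, 361, 481, 601, 721, and 841 μs, matching the ethological range cited in the abstract.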
objective measures; behavioural measures; interaural time difference; ethological range; interaural time sensitivity
The active cochlear mechanism amplifies responses to low-intensity sounds, compresses the range of input sound intensities to a smaller output range, and increases cochlear frequency selectivity. The gain of the active mechanism can be modulated by the medial olivocochlear (MOC) efferent system, creating the possibility of top-down control at the earliest level of auditory processing. In humans, MOC function has mostly been measured by the suppression of otoacoustic emissions (OAEs), typically as a result of MOC activation by a contralateral elicitor sound. The exact relationship between OAE suppression and cochlear gain reduction, however, remains unclear. Here, we measured the effect of a contralateral MOC elicitor on perceptual estimates of cochlear gain and compression, obtained using the established temporal masking curve (TMC) method. The measurements were taken at a signal frequency of 2 kHz and compared with measurements of click-evoked OAE suppression. The elicitor was a broadband noise, set to a sound pressure level of 54 dB to avoid triggering the middle ear muscle reflex. Despite its low level, the elicitor had a significant effect on the TMCs, consistent with a reduction in cochlear gain. The amount of gain reduction was estimated as 4.4 dB on average, corresponding to around 18 % of the without-elicitor gain. As a result, the compression exponent increased from 0.18 to 0.27.
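The quoted percentages admit a quick back-of-the-envelope consistency check: a 4.4-dB gain reduction that amounts to ~18 % of the without-elicitor gain implies a baseline cochlear gain near 24 dB at 2 kHz. That baseline figure is an inference from the quoted numbers, not a value stated in the abstract.

```python
# Consistency check on the quoted values (inferred, not stated in the abstract):
# gain_reduction = fraction * baseline_gain  =>  baseline_gain ~ 24 dB
gain_reduction_db = 4.4      # elicitor-induced gain reduction (dB)
fraction_of_gain = 0.18      # reported as ~18 % of without-elicitor gain
baseline_gain_db = gain_reduction_db / fraction_of_gain
```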
medial olivocochlear reflex (MOCR); temporal masking curve (TMC); click-evoked otoacoustic emissions (CEOAEs); contralateral acoustic stimulation; cochlear amplification
Acoustic trauma damages the cochlea but secondarily modifies circuits of the central auditory system. Changes include decreases in inhibitory neurotransmitter systems, degeneration and rewiring of synaptic circuits, and changes in neural activity. Little is known about the consequences of these changes for the representation of complex sounds. Here, we show data from the dorsal cochlear nucleus (DCN) of rats with a moderate high-frequency hearing loss following acoustic trauma. Single-neuron recording was used to estimate the organization of neurons’ receptive fields, the balance of inhibition and excitation, and the representation of the spectra of complex broadband stimuli. The complex stimuli had random spectral shapes (RSSs), and the responses were fit with a model that allows the quality of the representation and its degree of linearity to be estimated. Tone response maps of DCN neurons in rat are like those in other species investigated previously, suggesting the same general organization of this nucleus. Following acoustic trauma, abnormal response types appeared. These can be interpreted as reflecting degraded tuning in auditory nerve fibers plus loss of inhibitory inputs in DCN. Abnormal types are somewhat more prevalent at later times (103–376 days) following the exposure, but not significantly so. Inhibition became weaker in post-trauma neurons that retained inhibitory responses but also disappeared in many neurons. The quality of the representation of spectral shape, measured by sensitivity to the spectral shapes of RSS stimuli, was decreased following trauma; in fact, neurons with abnormal response types responded mainly to overall stimulus level, and not spectral shape.
rat; dorsal cochlear nucleus; response map; acoustic trauma; inhibition; random spectral shape; model; neuropathy
Elastic properties of the human stapes annular ligament were determined in the physiological range of the ligament deflection using atomic force microscopy and temporal bone specimens. The annular ligament stiffness was determined based on the experimental load-deflection curves. The elastic modulus (Young’s modulus) for a simplified geometry was calculated using the Kirchhoff–Love theory for thin plates. The results obtained in this study showed that the annular ligament is a linear elastic material up to deflections of about 100 nm, with a stiffness of about 120 N/m and a calculated elastic modulus of about 1.1 MPa. These parameters can be used in numerical and physical models of the middle and/or inner ear.
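The Kirchhoff–Love step above converts a measured point stiffness into a Young's modulus via the plate's flexural rigidity. As a hedged sketch we use the textbook clamped-circular-plate relation k = 16πD/a² with D = Et³/(12(1 − ν²)); this stands in for the authors' simplified annular geometry, and the radius, thickness, and Poisson ratio below are illustrative assumptions, not measured values.

```python
import math

def youngs_modulus_clamped_plate(k, radius, thickness, nu=0.3):
    """Young's modulus (Pa) from the point stiffness k (N/m) of a thin
    clamped circular plate under a central point load (Kirchhoff-Love):
        k = 16*pi*D / a**2,   D = E*t**3 / (12*(1 - nu**2))."""
    D = k * radius**2 / (16 * math.pi)          # flexural rigidity (N*m)
    return 12 * (1 - nu**2) * D / thickness**3

k = 120.0            # N/m, stiffness reported in the study
radius = 0.3e-3      # m, illustrative plate radius (assumption)
thickness = 0.1e-3   # m, illustrative plate thickness (assumption)
E_pa = youngs_modulus_clamped_plate(k, radius, thickness)
```

With dimensions of this order the recovered modulus lands in the low-MPa range, consistent in magnitude with the ~1.1 MPa reported, though the exact value depends entirely on the assumed geometry.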
annular ligament; stapes; elastic modulus; atomic force microscopy
Context effects in loudness have been observed in normal auditory perception and may reflect a general gain control of the auditory system. However, little is known about such effects in cochlear-implant (CI) users. Discovering whether and how CI users experience loudness context effects should help us better understand the underlying mechanisms. In the present study, we examined the effects of a long-duration (1-s) intense precursor on the loudness relations between shorter-duration (200-ms) target and comparison stimuli. The precursor and target were separated by a silent gap of 50 ms, and the target and comparison were separated by a silent gap of 2 s. For normal-hearing listeners, the stimuli were narrowband noises; for CI users, all stimuli were delivered as pulse trains directly to the implant. Significant changes in loudness were observed in normal-hearing listeners, in line with earlier studies. The CI users also experienced some loudness changes but, in contrast to the results from normal-hearing listeners, the effect did not increase with increasing level difference between precursor and target. A “dual-process” hypothesis, used to explain earlier data from normal-hearing listeners, may provide an account of the present data by assuming that one of the two mechanisms, involving “induced loudness reduction,” was absent or reduced in CI users.
auditory context effects; loudness recalibration; cochlear implants; loudness enhancement
The vibratory responses to tones of the stapes and incus were measured in the middle ears of deeply anesthetized chinchillas using a wide-band acoustic-stimulus system and a laser velocimeter coupled to a microscope. With the laser beam at an angle of about 40° relative to the axis of stapes piston-like motion, the sensitivity-vs.-frequency curves of vibrations at the head of the stapes and the incus lenticular process were very similar to each other but larger, in the range 15–30 kHz, than the vibrations of the incus just peripheral to the pedicle. With the laser beam aligned with the axis of piston-like stapes motion, vibrations of the incus just peripheral to its pedicle were very similar to the vibrations of the lenticular process or the stapes head measured at the 40° angle. Thus, the pedicle prevents transmission to the stapes of components of incus vibration not aligned with the axis of stapes piston-like motion. The mean magnitude curve of stapes velocities is fairly flat over a wide frequency range, with a mean value of about 0.19 mm/(s·Pa), has a high-frequency cutoff of 25 kHz (measured at −3 dB re the mean value), and decreases with a slope of about −60 dB/octave at higher frequencies. According to our measurements, the chinchilla middle ear transmits acoustic signals into the cochlea at frequencies exceeding both the bandwidth of responses of auditory-nerve fibers and the upper cutoff of hearing. The phase lags of stapes velocity relative to ear-canal pressure increase approximately linearly, with slopes equivalent to pure delays of about 57–76 μs.
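The equivalent pure delay in the last sentence follows from the slope of the phase-vs-frequency curve: a fixed delay τ produces a phase lag of −f·τ cycles, so τ is recovered as the negative of the fitted slope. The sketch below uses a made-up 65-μs delay (within the reported 57–76 μs range) purely as a round-trip check of the method.

```python
import numpy as np

def pure_delay_us(freqs_hz, phase_cycles):
    """Equivalent pure delay (us) from a phase-vs-frequency curve.

    A pure delay tau gives phase = -freq * tau (in cycles), so the
    fitted linear slope (cycles/Hz) equals -tau."""
    slope = np.polyfit(freqs_hz, phase_cycles, 1)[0]
    return -slope * 1e6

# Round-trip check with an assumed 65-us delay (illustrative value).
f = np.linspace(100, 20000, 50)        # Hz
phase = -f * 65e-6                     # cycles of phase lag
delay = pure_delay_us(f, phase)
```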
middle ear; stapes; incus; chinchilla
The contribution of human ear canal orientation to tympanic membrane (TM) surface motion and sound pressure distribution near the TM surface is investigated by using an artificial ear canal (aEC) similar in dimensions to the natural human ear canal. The aEC replaced the bony ear canal of cadaveric human temporal bones. The radial orientation of the aEC relative to the manubrium of the TM was varied. Tones of 0.2 to 18.4 kHz delivered through the aEC induced surface motions of the TM that were quantified using stroboscopic holography; the distribution of sound in the plane of the tympanic ring (PTR) was measured with a probe tube microphone. The results suggest that the ear canal orientation has no substantial effect on TM surface motions, but PTR at frequencies above 10 kHz is influenced by the ear canal orientation. The complex TM surface motion patterns observed at frequencies above a few kilohertz are not correlated with simpler variations in PTR distribution at the same frequencies, suggesting that the complex sound-induced TM motions are more related to the TM mechanical properties, shape, and boundary conditions rather than to spatial variations in the acoustic stimulus.
ear canal orientation; tympanic membrane motion; umbo displacement; sound pressure distribution; middle ear; stroboscopic holography
Temporal integration (TI; threshold versus stimulus duration) functions and multipulse integration (MPI; threshold versus pulse rate) functions were measured behaviorally in guinea pigs and humans with cochlear implants. Thresholds decreased with stimulus duration at a fixed pulse rate and with pulse rate at a fixed stimulus duration. The rates of threshold decrease (slopes) of the TI and MPI functions were not statistically different between the guinea pig and human subject groups. A characteristic of the integration functions that the two groups shared was that the slopes of the TI functions were similar in magnitude to slopes of the MPI function only at low pulse rates (< approximately 300 pulses per second). This is consistent with the notion that the TI functions and the MPI functions at the low rates are mediated by a mechanism of long-term integration described in the statistical “multiple looks” model. Histological analysis of the guinea pig cochleae suggested that the slopes of both the MPI and the TI functions were dependent on sensory and neural health near the stimulated regions. The strongest predictor for spiral ganglion cell densities measured near the stimulation sites was the slope of the MPI functions below 1,000 pps. Several mechanisms may be considered to account for the association of shallow integration functions with poor sensory and neural status. These mechanisms are related to abnormal across-fiber synchronization, increased refractoriness and adaptation with impaired neural function, and steep growth of neural excitation with current level associated with neural pathology. The slope of the integration functions can potentially be used as a non-invasive measure for identifying stimulation sites with poor neural health and selecting those sites for removal or rehabilitation, but these applications remain to be tested.
cochlear implants; cochlear health; temporal integration; multipulse integration
The dorsal cochlear nucleus (DCN) is a major subdivision of the mammalian cochlear nucleus (CN) that is thought to be involved in sound localization in the vertical plane and in feature extraction of sound stimuli. The main principal cell type (pyramidal cells) integrates auditory and non-auditory inputs, which are considered to be important in performing sound localization tasks. This study aimed to investigate the histological development of the CD-1 mouse DCN, focussing on the postnatal period spanning the onset of hearing (P12). Fluorescent Nissl staining revealed that the three layers of the DCN were identifiable as early as P6 with subsequent expansion of all layers with age. Significant increases in the size of pyramidal and cartwheel cells were observed between birth and P12. Immunohistochemistry showed substantial changes in synaptic distribution during the first two postnatal weeks with subsequent maturation of the presumed mossy fibre terminals. In addition, GFAP immunolabelling identified several glial cell types in the DCN including the observation of putative tanycytes for the first time. Each glial cell type had specific spatial and temporal patterns of maturation with apparent rapid development during the first two postnatal weeks but little change thereafter. The rapid maturation of the structural organization and DCN components prior to the onset of hearing possibly reflects an influence from spontaneous activity originating in the cochlea/auditory nerve. Further refinement of these connections and development of the non-auditory connections may result from the arrival of acoustic input and experience dependent mechanisms.
cochlear nucleus; dorsal cochlear nucleus; auditory brainstem; postnatal development; mouse
Hearing in noise is a challenge for all listeners, especially for those with hearing loss. This study compares cues used for detection of a low-frequency tone in noise by older listeners with and without hearing loss to those of younger listeners with normal hearing. Performance varies significantly across different reproducible, or “frozen,” masker waveforms. Analysis of these waveforms allows identification of the cues that are used for detection. This study included diotic (N0S0) and dichotic (N0Sπ) detection of a 500-Hz tone, with either narrowband or wideband masker waveforms. Both diotic and dichotic detection patterns (hit and false alarm rates) across the ensembles of noise maskers were predicted by envelope-slope cues, and diotic results were also predicted by energy cues. The relative importance of energy and envelope cues for diotic detection was explored with a roving-level paradigm that made energy cues unreliable. Most older listeners with normal hearing or mild hearing loss depended on envelope-related temporal cues, even for this low-frequency target. As hearing threshold at 500 Hz increased, the cues for diotic detection transitioned from envelope to energy cues. Diotic detection patterns for young listeners with normal hearing are best predicted by a model that combines temporal- and energy-related cues; in contrast, combining cues did not improve predictions for older listeners with or without hearing loss. Dichotic detection results for all groups of listeners were best predicted by interaural envelope cues, which significantly outperformed the classic cues based on interaural time and level differences or their optimal combination.
masked detection; sensorineural hearing loss
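The envelope-slope cue mentioned above can be sketched as a simple statistic on the stimulus envelope. The exact normalization differs across studies, so treat this form (mean absolute envelope slope divided by mean envelope level) and the FFT-based envelope extraction as illustrative, not the study's analysis:

```python
import numpy as np

def hilbert_envelope(x):
    """Envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))

def envelope_slope(envelope, fs):
    """Mean absolute slope of the envelope (per second), normalized by
    the mean envelope level -- one illustrative form of the statistic."""
    slopes = np.abs(np.diff(envelope)) * fs
    return slopes.mean() / envelope.mean()

# Adding a tone to a narrowband noise tends to flatten the mixture's
# envelope, lowering this statistic relative to the noise alone -- the
# kind of change a slope-based detector could exploit.
```

A pure tone has an essentially flat envelope, so its envelope-slope statistic is near zero, whereas a noise masker yields a much larger value; listeners relying on this cue respond to the reduction caused by adding the tonal signal.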
The 129S6/SvEvTac (129S6) inbred mouse is known for its resistance to noise-induced hearing loss (NIHL). However, less is understood of its unique age-related hearing loss (AHL) phenotype and its potential relationship with the resistance to NIHL. Here, we studied the physiological characteristics of hearing loss in 129S6 and asked if noise resistance (NR) and AHL are genetically linked to the same chromosomal region. We used auditory brainstem response (ABR) and distortion product otoacoustic emissions (DPOAE) to examine hearing sensitivity between 1 and 13 months of age in recombinant-inbred (congenic) mice with an NR phenotype. We identified a region of proximal chromosome (Chr) 17 (D17Mit143-D17Mit100) that contributes to a sensory, non-progressive hearing loss (NPHL) affecting exclusively the high frequencies (>24 kHz) and maps to the nr1 locus on Chr 17. ABR experiments showed that 129S6 and CBA/CaJ F1 (CBACa) hybrid mice exhibit normal hearing, indicating that the hearing loss in 129S6 mice is inherited recessively. An allelic complementation test between the 129S6 and 101/H (101H) strains did not rescue hearing loss, suggesting genetic allelism between the nphl and phl1 loci of these strains, respectively. The hybrids had a milder hearing loss than either parental strain, which indicates a possible interaction with other genes in the mouse background or a digenic interaction between different genes that reside in the same genomic region. Our study defines a locus for nphl on Chr 17 affecting frequencies greater than 24 kHz.
auditory brainstem response (ABR); distortion product otoacoustic emission (DPOAE); age-related hearing loss (AHL); non-progressive hearing loss (NPHL); 129S6/SvEvTac (129S6); CBA/CaJ (CBACa); 101/H (101H)
Older adults, even those without hearing impairment, often experience increased difficulties understanding speech in the presence of background noise. This study examined the role of age-related declines in subcortical auditory processing in the perception of speech in different types of background noise. Participants included normal-hearing young (19–29 years) and older (60–72 years) adults. Normal hearing was defined as pure-tone thresholds of 25 dB HL or better at octave frequencies from 0.25 to 4 kHz in both ears and at 6 kHz in at least one ear. Speech reception thresholds (SRTs) to sentences were measured in steady-state (SS) and 10-Hz amplitude-modulated (AM) speech-shaped noise, as well as two-talker babble. In addition, click-evoked auditory brainstem responses (ABRs) and envelope following responses (EFRs) in response to the vowel /ɑ/ in quiet, SS, and AM noise were measured. Of primary interest was the relationship between the SRTs and EFRs. SRTs were significantly higher (i.e., worse) by about 1.5 dB for older adults in two-talker babble but not in AM and SS noise. In addition, the EFRs of the older adults were less robust compared to the younger participants in quiet, AM, and SS noise. Both young and older adults showed a “neural masking release,” indicated by a more robust EFR at the trough compared to the peak of the AM masker. The amount of neural masking release did not differ between the two age groups. Variability in SRTs was best accounted for by audiometric thresholds (pure-tone average across 0.5–4 kHz) and not by the EFR in quiet or noise. Aging is thus associated with a degradation of the EFR, both in quiet and noise. However, these declines in subcortical neural speech encoding are not necessarily associated with impaired perception of speech in noise, as measured by the SRT, in normal-hearing older adults.
auditory brainstem response; speech perception; aging; envelope following response; frequency following response
Studies with humans and other mammals have provided support for a two-channel representation of horizontal (“azimuthal”) space in the auditory system. In this representation, location-sensitive neurons contribute activity to one of two broadly tuned channels whose responses are compared to derive an estimate of sound-source location. One channel is maximally responsive to sounds towards the left and the other to sounds towards the right. However, recent psychophysical studies of humans, and physiological studies of other mammals, point to the presence of an additional channel, maximally responsive to the midline. In this study, we used electroencephalography to seek physiological evidence for such a midline channel in humans. We measured neural responses to probe stimuli presented from straight ahead (0 °) or towards the right (+30 ° or +90 °). Probes were preceded by adapter stimuli to temporarily suppress channel activity. Adapters came from 0 ° or alternated between left and right (−30 ° and +30 ° or −90 ° and +90 °). For the +90 ° probe, to which the right-tuned channel would respond most strongly, both accounts predict greatest adaptation when the adapters are at ±90 °. For the 0 ° probe, the two-channel account predicts greatest adaptation from the ±90 ° adapters, while the three-channel account predicts greatest adaptation when the adapters are at 0 ° because these adapters stimulate the midline-tuned channel which responds most strongly to the 0 ° probe. The results were consistent with the three-channel account. In addition, a computational implementation of the three-channel account fitted the probe response sizes well, explaining 93 % of the variance about the mean, whereas a two-channel implementation produced a poor fit and explained only 61 % of the variance.
sound localization; spatial tuning; opponent process; electroencephalography; EEG; auditory system
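A minimal sketch of the channel-plus-adaptation logic behind these predictions, assuming Gaussian tuning curves and a simple suppression rule. The tuning width, adaptation strength, and max-over-adapters rule are illustrative choices, not the fitted computational model from the study:

```python
import numpy as np

def channel_responses(azimuth_deg, centers, width=60.0):
    """Activations of broadly tuned spatial channels for one source
    azimuth (Gaussian tuning; centers and width are illustrative)."""
    az = float(azimuth_deg)
    return np.exp(-0.5 * ((az - np.asarray(centers)) / width) ** 2)

def adapted_response(probe_az, adapter_azs, centers, strength=0.5):
    """Summed probe response after adaptation: each channel is suppressed
    in proportion to the strongest drive it received from any adapter
    (a crude stand-in for adaptation accumulating within a channel)."""
    probe = channel_responses(probe_az, centers)
    adapt = np.maximum.reduce([channel_responses(a, centers) for a in adapter_azs])
    return float((probe * (1.0 - strength * adapt)).sum())

three_ch = [-90.0, 0.0, 90.0]  # left-, midline-, and right-tuned channels

# Three-channel prediction for the 0-degree probe: midline adapters hit
# the midline-tuned channel (which dominates the probe response), so they
# suppress the probe more than adapters alternating between -90 and +90.
r_midline_adapt = adapted_response(0.0, [0.0], three_ch)
r_lateral_adapt = adapted_response(0.0, [-90.0, 90.0], three_ch)
```

Dropping the midline channel (`centers = [-90.0, 90.0]`) reverses this prediction for the 0° probe, which is the contrast the adaptation experiment exploits; for the +90° probe, both channel sets predict stronger adaptation from the ±90° adapters.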
Cochlear implant (CI) users have poor temporal pitch perception, as revealed by two key outcomes of rate discrimination tests: (i) rate discrimination thresholds (RDTs) are typically larger than the corresponding frequency difference limen for pure tones in normal hearing listeners, and (ii) above a few hundred pulses per second (i.e. the “upper limit” of pitch), CI users cannot discriminate further increases in pulse rate. Both RDTs at low rates and the upper limit of pitch vary across listeners and across electrodes in a given listener. Here, we compare across-electrode and across-subject variation in these two measures with the variation in performance on another temporal processing task, gap detection, in order to explore the limitations of temporal processing in CI users. RDTs were obtained for 4–5 electrodes in each of 10 Advanced Bionics CI users using two interleaved adaptive tracks, corresponding to standard rates of 100 and 400 pps. Gap detection was measured using the adaptive procedure and stimuli described by Bierer et al. (JARO 16:273-284, 2015), and for the same electrodes and listeners as for the rate discrimination measures. Pitch ranking was also performed using a mid-point comparison technique. There was a marginal across-electrode correlation between gap detection and rate discrimination at 400 pps, but neither measure correlated with rate discrimination at 100 pps. Similarly, there was a highly significant across-subject correlation between gap detection and rate discrimination at 400, but not 100 pps, and these two correlations differed significantly from each other. Estimates of low-rate sensitivity and of the upper limit of pitch, obtained from the pitch ranking experiment, correlated well with rate discrimination for the 100- and 400-pps standards, respectively. The results are consistent with the upper limit of rate discrimination sharing a common basis with gap detection. There was no evidence that this limitation also applied to rate discrimination at lower rates.
cochlear implant; rate discrimination; pitch; interleaved procedure; gap detection