As the site of intersecting afferent and efferent pathways, the inferior colliculus plays a key role in auditory learning. Indeed, animal models have demonstrated that the corticocollicular pathway is essential for auditory learning [63]. Therefore, it is reasonable to expect that the cABR reflects evidence of auditory training; in fact, the cABR shows influences of both life-long and short-term training. For example, native speakers of tonal languages have better brainstem pitch tracking of changing vowel contours than speakers of nontonal languages [24]. Bilingualism provides another example of the auditory advantages conferred by language expertise. Bilingualism is associated with enhanced cognitive skills, such as language processing and executive function, and it also promotes experience-dependent plasticity in subcortical processing [65]. Bilingual adolescents, who reported high English and Spanish proficiency, had more robust subcortical encoding of the F0 in response to a target sound presented in a noisy background than their age-, sex-, and IQ-matched monolingual peers. Within the bilingual group, a measure of sustained attention was related to the strength of the F0; this relation between attention and the F0 was not seen in the monolingual group. Krizman et al. [65] proposed that diverse language experience heightens directed attention toward linguistic inputs; in turn, this attention becomes increasingly focused on features important for speaker identification and stream segregation in noise, such as the F0.
Musicianship, another form of auditory expertise, also confers benefits for speech processing; musicians who are nontonal language speakers have enhanced pitch tracking of linguistically relevant vowel contours, similar to that of tonal language speakers [31]. Ample evidence now exists for the effects of musical training on the cABR [28]. The OPERA (Overlap, Precision, Emotion, Repetition, and Attention) hypothesis has been proposed as the mechanism by which music engenders auditory system plasticity [74]. For example, there is overlap in the auditory pathways for speech and music, explaining in part musicians' superior neural speech-in-noise processing. The focused attention required for musical practice and performance strengthens sound-to-meaning connections, enhancing top-down cognitive influences (e.g., auditory attention and memory) on subcortical processing [75].
Musicians' cABRs are more resistant to the degradative effects of noise than those of nonmusicians [68]. Background noise delays and reduces the amplitude of the cABR [76]; however, musicianship mitigates the effects of six-talker babble noise on cABR responses in young adults, with earlier peak timing of the onset and the transition in musicians compared to nonmusicians. Bidelman and Krishnan [73] evaluated the effects of reverberation on the FFR and found that reverberation had no effect on the neural encoding of pitch but significantly degraded the representation of the harmonics. In addition, they found that young musicians had more robust responses in quiet and in most reverberation conditions. Benefits of musicianship have also been seen in older adults; when comparing effects of aging in musicians and nonmusicians, the musicians did not show the expected age-related neural timing delays in the CV transition, indicating that musical experience offsets the effects of aging [60]. These neural benefits in older musicians are accompanied by better SIN perception, temporal resolution, and auditory memory [77].
But what about the rest of us who are not able to devote ourselves full time to music practice: can musical training improve our auditory processing as well? Years of musical training in childhood are associated with more robust responses in adulthood [67]: young adults with no musical training had responses closer to the noise floor, whereas groups with one to five or six to eleven years of training had progressively larger response signal-to-noise ratios. In a structural equation model of the factors predicting speech-in-noise perception in older adults, two subsets were compared: a group with no history of musical training and a group with at least one year of musical training (range 1 to 45 years). Cognitive factors (memory and attention) played a bigger role in speech-in-noise perception in the group with musical training, whereas life experience factors (physical activity and socioeconomic status) played a bigger role in the group with no training. Subcortical processing (pitch encoding, harmonic encoding, and cross-correlations between responses in quiet and in noise) accounted for a substantial amount of the variance in both groups [78].
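The notion of a neural response "signal-to-noise ratio" can be made concrete with a small sketch: the amplitude of the spectral peak at the stimulus F0 is compared against the surrounding spectral noise floor. This is a hypothetical illustration with a simulated response; all signals and parameter values are fabricated for demonstration, not taken from the studies above.

```python
import numpy as np

# Simulated FFR-like response: a component at the stimulus F0 plus noise.
fs = 8000          # sampling rate (Hz), assumed for illustration
f0 = 100           # stimulus fundamental frequency (Hz)
t = np.arange(0, 0.2, 1 / fs)   # 200 ms analysis window

rng = np.random.default_rng(0)
response = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(t.size)

# Amplitude spectrum of the response
spectrum = np.abs(np.fft.rfft(response)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

f0_bin = np.argmin(np.abs(freqs - f0))
f0_amp = spectrum[f0_bin]

# Noise floor: mean amplitude in flanking bins around (but excluding) F0
flank = np.r_[f0_bin - 10:f0_bin - 2, f0_bin + 3:f0_bin + 11]
noise_floor = spectrum[flank].mean()

snr_db = 20 * np.log10(f0_amp / noise_floor)
print(f"F0 amplitude: {f0_amp:.3f}, SNR: {snr_db:.1f} dB")
```

A weak or absent F0 component would drive `snr_db` toward 0 dB, the "closer to the noise floor" pattern described for adults with no musical training.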
Short-term training can also engender subcortical plasticity. Carcagno and Plack [79] found changes in the FFR after ten sessions of pitch discrimination training conducted over approximately four weeks. Four groups participated in the experiment: three experimental groups (static tone, rising tone, and falling tone) and one control group. Perceptual learning occurred for the three experimental groups, with effects somewhat specific to the stimulus used in training. These behavioral improvements were accompanied by changes in the FFR, with stronger phase locking to the F0 of the stimulus, and changes in phase locking were related to changes in behavioral thresholds.
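Phase locking of this kind is often quantified as a phase-locking value (PLV): the consistency of the response phase at the F0 across trials, approaching 1 for a perfectly phase-locked response and 0 for random phases. The sketch below uses simulated epochs in place of single-trial FFR recordings; it illustrates the measure, not the analysis of [79].

```python
import numpy as np

fs = 8000   # sampling rate (Hz), assumed
f0 = 110    # stimulus F0 (Hz), assumed
t = np.arange(0, 0.1, 1 / fs)
rng = np.random.default_rng(1)

def plv_at_f0(epochs, fs, f0):
    """Phase-locking value at the FFT bin nearest f0 (epochs: trials x samples)."""
    n = epochs.shape[1]
    bin_idx = np.argmin(np.abs(np.fft.rfftfreq(n, 1 / fs) - f0))
    phases = np.angle(np.fft.rfft(epochs, axis=1)[:, bin_idx])
    return np.abs(np.mean(np.exp(1j * phases)))

# 100 trials with a phase-locked F0 component plus noise
locked = np.sin(2 * np.pi * f0 * t) + rng.standard_normal((100, t.size))
# 100 trials of noise alone (no phase locking)
unlocked = rng.standard_normal((100, t.size))

plv_locked = plv_at_f0(locked, fs, f0)
plv_unlocked = plv_at_f0(unlocked, fs, f0)
print(f"PLV locked: {plv_locked:.2f}, unlocked: {plv_unlocked:.2f}")
```

Training-related "stronger phase locking to the F0" would appear as an increase in such a measure from pretest to posttest.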
Just as long-term exposure to a tonal language leads to better pitch tracking of changing vowel contours, a mere eight days of vocabulary training on words with linguistically relevant pitch contours resulted in stronger encoding of the F0 and a decrease in the number of pitch-tracking errors [29]. The participants in this study were young adults with no prior exposure to a tonal language. Although English uses rising and falling pitch to signal intonation, the dipping tone would be unfamiliar to a native English speaker; interestingly, the cABR to the dipping tone showed the greatest reduction in pitch-tracking errors.
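A pitch-tracking error score of this kind compares the stimulus F0 contour against the F0 contour extracted from the response, frame by frame, counting frames that deviate by more than a tolerance. The contours below are fabricated for illustration; real analyses derive the response contour from the FFR itself (e.g., via short-time autocorrelation).

```python
import numpy as np

frames = np.linspace(0, 1, 50)                  # normalized time
stim_f0 = 120 - 30 * frames + 60 * frames**2    # a "dipping" contour (Hz), assumed

rng = np.random.default_rng(2)
resp_f0 = stim_f0 + rng.normal(0, 3, frames.size)  # small tracking jitter
resp_f0[40:45] += 25                                # a run of tracking failures

tolerance = 10  # Hz deviation counted as a tracking error (assumed threshold)
errors = np.abs(resp_f0 - stim_f0) > tolerance
print(f"pitch-tracking errors: {errors.sum()} of {frames.size} frames")
```

Training that sharpens encoding of an unfamiliar contour would reduce the number of frames flagged as errors.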
Training that targets speech-in-noise perception has also shown benefits at the level of the brainstem [80]. Young adults were trained to discriminate between CV syllables embedded in continuous broadband noise at a +10 dB signal-to-noise ratio. Activation of the medial olivocochlear bundle (MOCB) was monitored during the five days of training through contralateral suppression of evoked otoacoustic emissions. Training improved performance on the CV discrimination task, with the greatest improvement occurring over the first three training days. A significant increase in MOCB activation was found, but only in the participants who showed robust improvement (the learners). The learners showed much weaker suppression than the nonlearners on the first day; in fact, the level of MOCB activation was predictive of learning. This last finding is particularly important for clinical purposes: a measure that predicts benefit would be useful for determining treatment candidacy.
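Presenting a target at a fixed signal-to-noise ratio, such as the +10 dB used above, amounts to scaling the noise so that the RMS ratio of signal to noise matches the requested value. The sketch below uses a placeholder tone burst in place of a CV syllable; it shows the standard RMS-based mixing arithmetic, not the stimuli of [80].

```python
import numpy as np

fs = 16000  # sampling rate (Hz), assumed
t = np.arange(0, 0.17, 1 / fs)
syllable = np.sin(2 * np.pi * 220 * t)   # stand-in for a CV syllable
rng = np.random.default_rng(3)
noise = rng.standard_normal(t.size)      # stand-in for broadband noise

def mix_at_snr(signal, noise, snr_db):
    """Scale noise so RMS(signal)/RMS(scaled noise) matches snr_db, then mix."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    scale = rms(signal) / (rms(noise) * 10 ** (snr_db / 20))
    return signal + scale * noise, scale * noise

mixed, scaled_noise = mix_at_snr(syllable, noise, snr_db=10)
achieved = 20 * np.log10(np.sqrt(np.mean(syllable**2)) /
                         np.sqrt(np.mean(scaled_noise**2)))
print(f"achieved SNR: {achieved:.2f} dB")
```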
There is renewed clinical interest in auditory training for the management of adults with hearing loss. Historically, attempts at auditory training had somewhat limited success, partly because of constraints on the clinician's ability to produce perceptually salient training stimuli. With the advent of computer technology and consumer-friendly software, auditory training has been revisited: computer technology permits adaptive expansion and contraction of difficult-to-perceive contrasts and/or unfavorable signal-to-noise ratios. The Listening and Communication Enhancement program (LACE, Neurotone, Inc., Redwood City, CA) is an example of an adaptive auditory training program that employs top-down and bottom-up strategies to improve hearing in noise. Older adults with hearing loss who underwent LACE training scored better on the Quick Speech in Noise test (QuickSIN) [81] and the Hearing in Noise Test (HINT) [82]; they also reported better hearing on self-assessment measures, the Hearing Handicap Inventory for the Elderly/Adults [83] and the Client Oriented Scale of Improvement [84]. The control group did not show improvement on these measures.
The benefits on the HINT and QuickSIN were replicated in young adults by Song et al. [66]. After completing 20 hours of LACE training over a period of four weeks, the participants not only improved on speech-in-noise performance but also had more robust speech-in-noise representation in the cABR (Figure 5). They showed training-related increases in the subcortical representation of the F0 in response to speech sounds presented in noise but not in quiet. Importantly, the amplitude of the F0 at pretest predicted training-induced change in speech-in-noise perception. The advantages of computer-based auditory training for speech-in-noise perception and neural processing have also been observed in older adults [86]. Based on this evidence, the cABR may be useful for documenting treatment outcomes, an important component of evidence-based service.
Figure 5. Young adults with normal hearing have greater representation of the F0 in subcortical responses to /da/ presented in noise after undergoing LACE auditory training. The F0 and the second harmonic have greater amplitudes in the post-training condition.