Individual acoustic stimulus waveforms in the environment sum together to form a complex composite waveform that arrives at the ear as a single, time-varying pressure amplitude wave. The fundamental problem of hearing lies in how the brain decomposes that waveform into information useful for performing auditory tasks necessary for the animal's survival, such as sound localization and vocal communication. Many of the computational principles for sound localization have emerged from the study of avian brains, especially that of the barn owl, a specialized auditory predator (Konishi et al., 1988). Additional data from less specialized birds, such as the chicken, and from mammals have revealed a common suite of cellular and synaptic specializations for the temporal coding of sound, necessary for the detection of one important localization cue, interaural time difference (ITD): fast glutamatergic neurotransmission, calyceal synaptic morphology, low-threshold voltage-gated potassium conductances, bipolar dendritic structures, and axonal delay lines (Carr, 1993; Oertel, 1999; Trussell, 1999). Such commonalities of form arising in widely disparate animal clades suggest that there is a computational advantage to that form, whether it is dendritic structure, expression of a suite of ion channels, or the temporal patterning of activity. Brains in both birds and mammals experience similar constraints in detecting sound, and because hearing of airborne sound arose separately in these two groups, similarities in structure, function, and coding between them suggest common coding principles at work, and common solutions arrived at through parallel evolution (Carr and Soares, 2002).
A similarly comprehensive answer to the question of coding non-ITD aspects of sound has lagged behind. Early work in the barn owl recognized a division of labor for the coding of interaural timing differences versus interaural intensity differences (also known as interaural level differences, or ILDs), beginning with the two divisions of the avian cochlear nucleus: nucleus magnocellularis (NM) as the origin of the ‘timing pathway’ and nucleus angularis (NA) as the origin of the ‘intensity pathway’ (Figure 1A) (Sullivan and Konishi, 1984; Takahashi et al., 1984). Both cochlear nuclei are monaural, and project to binaural targets that compare inputs from the two ears (Manley et al., 1988; Takahashi and Konishi, 1988). While NM neurons are similar to the mammalian cochlear nucleus bushy cells, NA is a heterogeneous nucleus with many properties similar to non-bushy cell components of the mammalian cochlear nucleus (CN) (Carr and Soares, 2002; Köppl and Carr, 2003; Oertel, 1999; Soares and Carr, 2001; Soares et al., 2002). This heterogeneity (described below), coupled with the extreme specialization of the timing pathway beginning with NM, suggests that NA is largely responsible for encoding non-localization aspects of sound in addition to its role in the ILD pathway. A major challenge will be to determine how each component cell type contributes to different aspects of sound recognition.
In this chapter, we discuss the morphological and physiological cell types found in NA, their responses to auditory stimuli, and the commonalities and differences of these properties as compared to the mammalian cochlear nucleus. We review the synaptic properties of the neurons in NA and suggest that short-term synaptic plasticity may play a role in the division of intensity and timing information into parallel pathways. While many questions remain unanswered, further comparative studies of auditory coding in the cochlear nuclei will increase our understanding of common principles of auditory coding. In the avian system especially, a better understanding of general sound coding at the brainstem level will inform the study of complex auditory function, such as birdsong recognition and learning.