In the CDaCI study, we have observed the effects of an apparent sensitive period: greater benefit for spoken language acquisition after a CI is significantly associated with earlier implantation. Based on this prospective dataset, significantly steeper trajectories of spoken language learning occur in children implanted in the infant and early toddler stages relative to implantation in later toddler stages. Outcomes, however, are significantly modified by a range of factors rooted in a child’s pre- and post-implant experience. Our observations are consistent with a growing body of evidence that epigenetic modification of the CNS subserves periods for learning of complex tasks, such as those related to learning the subsystems of spoken language, which ultimately are important in, if not definitive of, effective language comprehension and expression.
Sensitive periods in the development of auditory cortex terminate with reductions in overall synaptic activity and are associated with an inability to completely restore hearing function (Kral 2007
). Changes in synaptic plasticity are likely due to genetic timing of brain sensitivity to language combined with epigenetic features that are guided by the availability of adequate sensory input (Kral 2007
; Panksepp 2008
). Though the closure of a sensitive period without development of auditory circuits is evident in cat models of congenital deafness, exposure to auditory stimuli by means of cochlear implantation appears capable of producing evoked potentials over more cortical areas, at higher amplitudes, and with longer-latency responses that resemble those of normal-hearing cats (Klinke et al. 1999
). This suggests the ability of cochlear implantation to restore, or potentially preserve, normal auditory input to cortical areas. EEG studies of auditory-evoked potentials have also demonstrated normal latencies of cortical responses in implanted children, but only if they received an implant before 3.5 years of age, suggesting a watershed age of implantation that affects the capacity for cortical processing (Sharma et al. 2007).
Two key observations are of interest to the development of an epigenetic model of spoken language development when hearing restoration with a CI is pursued: (1) elements of the language system (e.g., phonetics, vocabulary, grammar, and pragmatics) appear to be differentially affected by delayed exposure to spoken language and by modifying factors, and (2) delayed exposure can cause disruptions in the social/affective process of parentally guided language learning.
The significance of sensitive periods within this model comes from the potential of a CI to restore normal auditory learning capacity in the context of cortical plasticity. We observed trends of dissociation between the domain of vocabulary and the domains of receptive and expressive syntax, with implantation prior to 18 months having a larger positive impact on the development of receptive and expressive syntax than on vocabulary acquisition. Importantly, children who received a CI prior to 18 months of age also demonstrated relatively strong development of expressive syntax and pragmatic use of spoken language; implantation at later stages of toddler development was associated with vulnerabilities in pragmatics and expressive syntax. Though grammar and vocabulary are generally highly associated in their developmental patterns (Bates and Dick 2002
), grammar acquisition has previously been noted to lag behind vocabulary in deaf children, suggesting the potential for differential development across the subdomains of spoken language (Tomblin et al. 2007).
Children with hearing loss possess specific deficits in grammar development that are similar to those of children with specific language impairment, demonstrating that such grammar-specific deficits can also be observed in children whose cortical neurosystems developed in the presence of normal auditory inputs (Norbury et al. 2001
; Briscoe et al. 2001
; Watkins and Rice 1994
). These results suggest that, in children with hearing loss, normally linked dimensions of language can become dissociated from one another. The aspects of learning specifically associated with grammar must be analyzed to understand the basis for this dissociation.
As events in the real world generally result in the stimulation of multiple sensory modalities (e.g., auditory and visual), it is important to consider that developmental outcomes may reflect interactions between the auditory system and other sensory modalities (Kral et al. 2000
). Multisensory integration can be thought of in terms of salience—the ability of a stimulus to capture attention. Multisensory inputs may enhance the salience of a particular stimulus that would have otherwise evaded detection and subsequent response. These interactions are therefore most relevant when a stimulus has low salience (Calvert et al. 2001
). Detection and subsequent learning of the rules of grammar rely on attention to the more subtle “little words” and (morphosyntactic) endings of words and phrases (Bates and Dick 2002
). Thus, reduced access to acoustic–phonetic cues may inhibit the natural attentional enhancement of grammatical cues (Dick et al. 2001
; Singer Harris et al. 1997).
Having considered the importance of multisensory integration of auditory and visual cues, we can consider its relationship to sensitive periods. Though auditory perception is restored with cochlear implantation, audiovisual processing in deaf children demonstrates a bias toward visual rather than auditory stimuli (Bergeson et al. 2005
). The persistence of a visual bias suggests that multisensory integration may not develop normally when a single sense dominates in early development. Auditory stimulation in an early sensitive period may, therefore, be necessary to ensure adequate influence on central circuits that enable multisensory integration. Cochlear implantation within the first year of life may rescue these circuits and enable matching of auditory and visual cues (Bergeson et al. 2010
). Evidence for this comes from an examination of implanted congenitally deaf children, who were more likely to fuse auditory and visual information if they received their cochlear implant before 2.5 years of age (Schorr et al. 2005
). This observation reinforces the idea that early, effective auditory stimulation is necessary to establish multisensory connections and preserve the attentional resources necessary for learning in the subdomains of spoken language.
Our observations suggest that the auditory system communicates with the visual system in circuits that are established early on and affect learning within language subdomains. Detection and learning of grammar require multisensory interactions because of the low perceptual salience of grammatical cues. We suggest that, unlike vocabulary, grammar substantially improved for the group of children who received implants prior to 18 months because early activation of auditory cortex was able to rescue the development of multisensory integration circuits that ultimately amplified the salience of grammatical cues.
Observations gained from the CDaCI study can also be viewed from an epigenetic perspective by considering the multiple ways a child interacts with her environment, specifically the impact of limited verbal language on parent–child interactions. The development of language necessitates, and derives from, encounters with the world throughout childhood (Panksepp 2008
). The affective components of these experiences have a measurable impact on the trajectory that language development follows. Joy from play, nurturance from care, and panic from separation distress are just a few of the many emotional aspects of the relationship between the mother and child that shape language development (Schore 2003
; Trevarthen 2001
). Such experiences associate with a child’s desire to engage with the world in an exploratory fashion, which is inevitably accompanied by exposure to a diverse range of sounds, including utterance material (Panksepp 2008
). “Motherese,” the high-pitched, melodic, and repetitive form of speech with exaggerated intonation, appears well-suited for the acquisition of language (Fernald 1989
; Trevarthen and Aitken 2001
). While this form of speech has long been known to engage infants, it also appears to typify the affective bond shared between mother and child and plausibly promotes profound neurobiological changes that support the development of language.
Aspects of motivation are critical to an understanding of a model by which epigenetic changes are associated with parental nurturing to promote the development of spoken language (MacLean 1990
). Self-motivation is highly associated with activity of the anterior cingulate regions that appear to enact social–emotional responses. Activity within these regions associates both with the experience of separation distress and with the formation of social bonds (Panksepp 2003
). Interestingly, bilateral damage to the same regions results in akinetic mutism, a deficit of language despite adequate motor function (Devinsky et al. 1995
). This suggests the potential of these regions to “gate” the influence of affective interactions during childhood on the development of lifelong language skills. Though neocortical regions ultimately process linguistic information, it is important to note that non-linguistic areas can provide the attention and motivation that promote, or inhibit, language-associated activities (Panksepp 2008).
Recent discoveries in molecular genetics have begun to elucidate the patterns of genetic expression that underlie the emergent CNS circuitry supporting language learning. For example, one gene that has been implicated in language (FOXP2) is concentrated in the basal ganglia. Evidence from songbirds suggests that this gene’s product may be necessary for trial-and-error vocal learning (Scharff and Haesler 2005
; Ölveczky et al. 2005
). Motivation to pursue such trial-and-error exploration is essential for acquiring language and is likely dependent on encouragement derived from supportive, affective social interactions. Preliminary evidence suggests that FOXP2 may impact neuronal plasticity in an epigenetic fashion: mRNAs regulated by FOXP2 support neurite outgrowth and synapse formation in circuits involved in motor learning in rodents and song learning in birds (Fisher and Scharff 2009
; Vernes et al. 2011
). Furthermore, FOXP2 expression is associated with auditory inputs. Mutations in FOXP2 in rodents appear to specifically affect either the synchrony of synaptic transmission from the cochlea to the auditory brainstem or the activation of auditory nerve fibers that carry auditory information to the brainstem (Kurt et al. 2009).
Epigenetic modifications in these same subcortical regions demonstrate a possible mechanism for the control of specific cortical functions. Selective lesions to the cholinergic system in the basal forebrain of rats result in a shift from long-term potentiation to long-term depression, a transition accompanied by a loss of synaptic plasticity in the visual cortex (Kuczweski et al. 2005
). Such observations suggest mechanisms by which epigenetic modifications may influence the duration of sensitive periods (Hanganu-Opatz 2010
; van Ooyen 2011).
We can hypothesize a basic mechanism by which experience acts through epigenetic means to promote cortical differentiation and regulate sensitive periods. The results of the CDaCI study fit well into the proposed model. The maximal effect of implantation is seen in the group implanted earliest, suggesting a sensitive period that begins to close for the other groups that experienced constrained access to the key acoustic–phonetic perceptions that normally initiate spoken language learning early in life. Selective effects on the domain of grammar highlight the role that attention likely plays in the acquisition of grammar skills. The necessity for the environment to provide sensory information and for this information to be recognized by the nervous system appears to be absolute, though a critical time frame exists during which intervention allows at least partial recovery of function. Ongoing and emerging factors contribute to early development of behaviors of interest in the CDaCI study, with the primary outcome variable being the development of spoken language. There are important contributions to language development from multiple sources (family, social interactions) as well as synergistic effects of one developing system on another (e.g., low language level affecting behavioral organization).
Our observations of the key role played by parent–child interactions in shaping outcomes after a CI provide the most powerful example of how epigenetic changes could be regulated by the environment. A bidirectional relationship can be hypothesized between language development and parent–child interactions. In a child with SNHL, innate language systems may be intact, with the sole deficit located in the perception of sound; even so, the child may have either an inadequate store of utterance material or inadequate meaning-interpretation experience to fully engage with language tasks. A child’s cognitive skills, parent–child interactions, social adjustment, behavioral skills, parental well-being, and social skills interact within the home milieu early on and, over time, with information in the outside environment; all are also nested within a framework of environmental experience shaped by socioeconomic and societal influences.
The appropriation and command of spoken language directly help children regulate their attention and communicate in ways that affect emotion and behavior, and they facilitate caregiver and, later, peer communications that enable further refinement and nuanced use of spoken language. When a child’s command of language is lacking, the result is inevitably impaired communication with parents and a heightened risk of greater parental stress. Parental perception of their child’s language skills therefore predictably changes the way parents interact. How parents interpret their child’s abilities, and how this in turn affects the development of further verbal (and written) interactions, are key questions that can be answered with longitudinal follow-up.
In the same way that a rodent raised without licking and grooming undergoes epigenetic changes that ultimately affect behavior, one can hypothesize that a young child developing without sufficient affective and social interaction may experience epigenetic modification that closes optimal periods by inhibiting synaptic plasticity. Consider, for example, a mother who is frustrated by a perceived lack of language development in her child with a recent CI. Her interpretation may prevent her from using “motherese” and from communicating with her child through speech as actively as she otherwise would have done. Data from the CDaCI study, as well as those from field studies of hearing children, indicate that the lack of such affective stimulation can stifle a child’s motivation to speak and to explore novel applications of spoken language. Furthermore, as attention directed at language decreases, neurobiological observations suggest a likely associated diminution in synaptic plasticity that will ultimately inhibit future progress in language acquisition. In this model, the result can be a harmful cycle of poor language skills causing parental stress and disappointment, with resultant negative and multidimensional influences on the development of spoken language skills in a child with a CI.
This model demonstrates the clinical importance of promoting parental support and of intervening when a communicatively inactive home environment and parental stress are detected. Parents of children with hearing loss love their children and, though they seek to nurture them in different ways, it is essential that they be encouraged to emphasize the same language-based affection provided to children with normal hearing. Additionally, this model provides a concrete example of hypothesized environmental impact on cortical function and plasticity. We envision that multiple epigenetic changes, such as one regulating attention to language based on affective social interactions, combine to impact the development of the higher-order cognitive functions of spoken language after surgical intervention in deafness.
A convergence of the biological, cognitive, and communication sciences potentially unifies our approach to the complexities of developmental learning. Within a multidimensional, epigenetic framework, this report addresses childhood acquisition of spoken language after cochlear implantation—a process that represents an interplay between general learning mechanisms, auditory perception, and ongoing environmental experience with the statistical regularities of auditory input. From such an interplay, a child gains operational insight into the meaning and communicative intent conveyed by the sounds of speech of others.
The CDaCI study captures variance in naturally occurring circumstances that affect language learning, reflected in inhomogeneities in participants’ baseline biological factors and environments. In such variability, however, lie opportunities to identify dependent variables of clinical importance in addressing how children challenged by SNHL can learn to receive and produce more adequate speech and language. CDaCI data indicate that a range of factors associate with the pattern of acquisition of spoken language skills after cochlear implantation. Earlier exposure to sound via the CI was associated with a faster rate of spoken language growth. Phonological, semantic, grammatical, and pragmatic development differed with age of implantation. Such results support models of language learning that predict that with earlier onset of access to acoustic–phonemic inputs, growth rates in spoken language can approach those of normal-hearing children, whereas delayed access is associated with slower growth rates, particularly within the language domains of syntax and pragmatics. Multivariable analyses suggest that language learning involves complex interactions in which modifying factors vary in their impact on language learning with age of onset of effective hearing, and the impact of biological and experiential factors varies with the age at which perceptual capabilities are introduced via cochlear implantation. A wealth of data indicates that neurodevelopmental phenomena related to language learning are driven by time-sensitive, bidirectional events. If environmental cues and interaction are not provided in a timely manner, developmental potential narrows. Conversely, Bates et al. (2003
) have observed that brain maturation affects experience, and experience returns the “favor” by altering brain structure. In periods of exponential bursts that are characteristic of early language learning, there are compelling data that underscore the role of mutually beneficial, bidirectional interactions between brain and behavior.
Key advances will come from a fuller understanding of the specific neural events that drive language acquisition and of the genetic control that promotes learning from experience. For example, if we can make deductions about epigenetic controls of brain development through an understanding of how synaptogenesis and regression, synaptic refinement, and cortical connectivity are influenced by the transmission, reception, and production of speech, we can inform approaches to rehabilitation of the child with early-onset SNHL that promote the remarkable achievement represented by spoken language development in the typical child.