To examine the effects of observed maternal sensitivity (MS), cognitive stimulation (CS), and linguistic stimulation on the 4-year growth of oral language in young deaf children receiving a cochlear implant. Previous studies of cochlear implants have not considered the effects of parental behaviors on language outcomes.
In this prospective, multisite study, we evaluated parent–child interactions during structured and unstructured play tasks and their effects on oral language development in 188 deaf children receiving a cochlear implant and 97 normal-hearing children as controls. Parent–child interactions were rated on a 7-point scale using the National Institute of Child Health and Human Development's Early Childcare Study codes, which have well-established psychometric properties. Language was assessed using the MacArthur-Bates Communicative Development Inventories, the Reynell Developmental Language Scales, and the Comprehensive Assessment of Spoken Language.
We used mixed longitudinal modeling to test our hypotheses. After accounting for early hearing experience and child and family demographics, MS and CS predicted significant increases in the growth of oral language. Linguistic stimulation was related to language growth only in the context of high MS.
The magnitude of effects of MS and CS on the growth of language was similar to that found for age at cochlear implantation, suggesting that addressing parenting behaviors is a critical target for early language learning after implantation.
Children with sensorineural hearing loss demonstrate poorer verbal skills, poorer academic achievement, and delayed behavioral and social development compared with normal-hearing peers.1-3 Several previous studies have indicated that the use of cochlear implants in these children facilitates more age-appropriate oral language skills compared with conventional hearing aids.1,2,4,5 However, there is substantial variability in oral language outcomes, even after accounting for age of implantation and duration of implant use.1,2,6,7
Missing from previous research are quantified measures of the influence of parenting behaviors on cochlear implant outcomes. Both maternal sensitivity (MS) and parental stimulation may account for the variability in these outcomes. Models of early language learning have long emphasized the key role played by early caregiver interactions in facilitating communicative intent and attentional frames, setting the stage for imitative learning of linguistic expression.8-13 Observational studies have shown that hearing mothers of young deaf children engage in more controlling, directive, and intrusive interactions with their children and display less positive affect compared with mothers of hearing children.14-16 The consequences of these dyadic interactions include less secure attachment, difficulty sustaining attention, and slower development of communicative competence.3,17
In the present study, we tested the following hypotheses: (1) children receiving a cochlear implant before 2 years of age would demonstrate more rapid language growth than those doing so after 2 years of age; (2) MS would be significantly related to oral language development over the first 4 years after implantation, after accounting for hearing loss, age at implantation, and demographic variables; (3) cognitive stimulation (CS) and linguistic stimulation (LS) provided by parents would be unique predictors of language growth after accounting for the variables (including MS) listed above; and (4) MS would interact with both CS and LS, combining to enhance the effects of parental behavior on child oral language outcomes.
The data for this study came from the Childhood Development after Cochlear Implantation (CDaCI) cohort. The CDaCI cohort comprises 188 children with severe to profound sensorineural hearing loss (≥70 dB loss) recruited from 6 implant centers across the US (Table I). Oral language was assessed before implantation and at 6, 12, 24, 36, and 48 months after implantation. The CDaCI normal-hearing cohort comprises 97 children recruited from 2 preschools affiliated with 2 of the implant centers.
Eligibility criteria for the study included age <5 years and a screening score of ≥70 on the Bayley Scales of Infant Development, Second Edition Mental Scale or Motor Scale18 or ≥66 on the Leiter International Performance Scale-Revised.19 Eligible families had to be committed to training their child in spoken English. Detailed descriptions of the study methods and design have been published previously.20 The study was approved by each center's Institutional Review Board, and written informed consent was obtained from parents.
At enrollment (before implantation), the youngest participant was aged 5 months and the oldest was aged 5 years. The children were followed for 4 years after implantation. Three measures of oral language were used to assess growth in language across this 8.5-year span. The MacArthur-Bates Communicative Development Inventories (CDI)21 assesses parental reports of receptive and expressive language for children aged 8-30 months (4 children were too young to complete the CDI at baseline; imputation was used to replace these values). The CDI Words and Gestures scale was used for children aged 8-16 months, and the CDI Words and Sentences scale was used for children aged 16-30 months. The Reynell Developmental Language Scales22 Verbal Comprehension and Expressive Language scales were used for children aged 24-55 months. Finally, the Comprehensive Assessment of Spoken Language,23 comprising 5 subscales (Antonyms, Syntax, Paragraph Comprehension, Nonliteral Language, and Pragmatic Judgment) and spanning age 25 months to 21 years, 9 months, was used at the 48-month postimplantation assessment. All 3 measures provided age-equivalent scores based on their respective normative samples.
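As a minimal sketch of how a single language age can be pooled from whichever measures a child completed, the averaging of age-equivalent scores might look as follows (the function name and score values are hypothetical, for illustration only):

```python
def language_age(age_equivalents_months):
    """Average the age-equivalent scores (in months) from the language
    measures a child completed at a given assessment. Scores outside a
    measure's valid range are assumed to have been excluded already
    (represented here as None)."""
    valid = [a for a in age_equivalents_months if a is not None]
    if not valid:
        raise ValueError("no valid age-equivalent scores at this assessment")
    return sum(valid) / len(valid)

# Hypothetical example: CDI and Reynell age equivalents of 30 and 34 months,
# with a third measure out of its valid range
print(language_age([30, 34, None]))  # → 32.0
```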
Parent–child interactions were observed and videotaped during several structured and unstructured tasks. MS, CS, and LS were coded during 3 standardized, well-validated videotaped parent–child interaction tasks lasting 20 minutes: Free Play, Problem-Solving, and Art Gallery. During the unstructured Free Play task, the parent and child were directed to “play as you would at home”. In the Problem-Solving task, the parent and child spent 5 minutes completing 2 puzzles, one that was easy for the child and one that was difficult. During the Art Gallery task, the parent and child looked at a series of 5 art posters mounted on the walls of the playroom at different heights and discussed the posters for 5 minutes.
Interactions were coded using the National Institute of Child Health and Human Development's Early Childcare Study codes,11,12 including the MS composite, which consists of 4 subscales: Sensitivity/Responsivity, Respect for Child's Autonomy, Positive Regard, and Hostility. Trained observers coded the videotaped interactions using a 7-point rating scale (1 = very low; 7 = very high). The Sensitivity/Responsivity scale reflects the degree to which the mother expresses positive regard and emotional support to the child. Respect for Child's Autonomy assesses whether the mother recognizes and respects the child's individuality, motives, and perspectives during the session. Positive Regard rates the amount of positive feelings directed to the child (eg, parent watches attentively, praises child). Hostility is reverse-coded and reflects the parent's expression of anger toward or rejection of the child.
CS was also rated on a 7-point scale using the National Institute of Child Health and Human Development's Early Childcare Study codes. This scale measures the degree to which the parent fosters the child's cognitive development through instruction or engagement in activities designed to facilitate learning and cognitive development.
LS was coded using a 7-point scale developed by a research team of psychologists and speech/language pathologists at the University of Miami (Appendices 2 and 3; available at www.jpeds.com). This scale measures the amount and quality of stimulation that facilitates functional auditory and linguistic skill development. For example, parents scoring high on this scale would use a variety of techniques to move the child along the listening hierarchy, such as exposing the child to sound, asking for behavioral or verbal responses, and linking sounds and objects in the environment.
All coders completed an extensive training process, including weekly group meetings in which difficult tapes were coded and reviewed. Coders (graduate students and postbaccalaureate research assistants; n = 18) began independent coding only after receiving feedback from I.C. and A.Q. and after coding several previously coded tapes to a criterion of 80% reliability. Periodic checks of coders' scores were performed every 3 months to ensure consistency and good reliability. One-fifth of all tapes (n = 1700) were randomly selected at each assessment point and coded in full (ie, the complete 20 minutes of video). Reliability of the MS, CS, and LS scales ranged from 0.78 to 0.84.11,12 Two trained coders independently rated 20% of the videotapes, yielding good interrater reliability (intraclass correlation coefficients: for MS, 0.79-0.93 [mean, 0.86]; for CS, 0.72-0.91 [mean, 0.80]; for LS, 0.73-0.89 [mean, 0.80]).
The age-equivalent scores for each of the 3 language measures provided a common metric that facilitated combining information across these measures. Only scores within the valid range for each measure were used in these analyses. Scores at the floor or ceiling of each measure, as documented in each language manual, were excluded. Language age was calculated by averaging across the age-equivalent scores from the language measures completed by each participant at a given assessment. Mixed models were used to test our hypotheses. Mixed models were chosen because they allow for nesting of multiple assessments within individuals, thereby accounting for children's language development over the 48 months of the study. Children's chronological age at each assessment was used to index time. Chronological age was centered at 8 months, the lowest language age provided by the language measures. A linear growth trend was fit to each individual, and these trends were included as a random component of the model. Predictors of the linear growth trends were entered in 3 blocks: early hearing experience, child and family demographics, and parenting behaviors (ie, MS, CS, and LS). Restricted maximum likelihood was used to estimate the models.
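The per-child linear growth component of these models can be illustrated with an ordinary least-squares slope on centered age. This is a deliberate simplification: the mixed models estimate random slopes and intercepts jointly across children under restricted maximum likelihood (in SAS), whereas this sketch fits one child in isolation. All names and values are illustrative:

```python
def growth_trend(ages_months, language_ages_months, center=8):
    """Fit a simple linear trend of language age on chronological age,
    with age centered at 8 months (the lowest language age the measures
    provide). A stand-in for the per-child random slope and intercept
    the mixed models estimate."""
    x = [a - center for a in ages_months]
    y = language_ages_months
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx            # months of language gained per month of age
    intercept = my - slope * mx  # estimated language age at 8 months of age
    return slope, intercept

# Illustrative child assessed at ages 20-68 months, gaining about
# 0.7 months of language per month of chronological age
slope, intercept = growth_trend([20, 32, 44, 56, 68],
                                [13.4, 21.8, 30.2, 38.6, 47.0])
print(round(slope, 2), round(intercept, 2))  # → 0.7 5.0
```

A slope of 1.0 would indicate growth keeping pace with chronological age; slopes below 1.0 describe a widening language delay.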
Along with the mixed models, children's language abilities at the 48-month assessment were tested for performance differences in children of parents who varied on parenting behaviors. The means of these variables were used to divide parents into high versus low MS, CS, and LS groups. The outcome for the 48-month analyses was language delay, defined as the difference between chronological age and language age. Locally weighted scatterplot smoothing was used to graphically depict growth trajectories for specific groups according to age and parenting behaviors. All analyses were performed using SAS 9.2 (SAS Institute, Cary, North Carolina).
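A minimal sketch of the 48-month outcome and grouping logic follows (the function names are invented for illustration; the actual analyses were run in SAS 9.2):

```python
def language_delay(chronological_age_months, language_age_months):
    """Language delay at an assessment: chronological age minus language
    age, in months. Positive values mean language lags behind age."""
    return chronological_age_months - language_age_months

def split_at_mean(scale_scores):
    """Classify parents as high (above the sample mean) vs low on a
    parenting-behavior scale (MS, CS, or LS), mirroring the 48-month
    group comparisons."""
    mean = sum(scale_scores) / len(scale_scores)
    return ["high" if s > mean else "low" for s in scale_scores]

# Hypothetical 7-point MS composite ratings for 4 parents (mean = 4.625)
print(split_at_mean([5.5, 3.0, 6.0, 4.0]))  # → ['high', 'low', 'high', 'low']
# Hypothetical child aged 56 months with a language age of 42 months
print(language_delay(56, 42))                # → 14
```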
The first mixed model included the unadjusted growth trajectories. As expected, there was significant variability in these trajectories [Wald (z) = 8.95; P < .01] (Table II). The second model included variables describing a child's early hearing experience. This block of predictors significantly improved model fit [likelihood ratio χ2(4) = 126.5; P < .01] and accounted for 27% of the variability in growth trajectories (Table II). Several early hearing variables, including time with amplification (ie, hearing aids) before implantation, age of onset of hearing loss, and pure tone average in the better ear (a proxy for amount of residual hearing), were significantly associated with language learning.
In addition, we expected that children who received a cochlear implant before age 2 years would exhibit more rapid language development than those who did so after age 2 years (Table II and Figure). Examination of trajectories in the children with cochlear implants and those with normal hearing showed a much steeper trajectory of growth in the younger children with cochlear implants compared with older children with cochlear implants. At 48 months postimplantation, children who received an implant before age 2 years had a mean of 1.37 years (95% CI, 0.80-1.94 years) less language delay compared with those who received an implant at 2 years and older (Table III and Figure). Although both groups with cochlear implants demonstrated progress in oral language, neither group exhibited “catch-up” growth compared with normal-hearing peers (unadjusted linear growth: 0.98 [95% CI, 0.88-1.07] in controls, 0.71 [95% CI, 0.63-0.79] in those receiving an implant at age <2 years, and 0.53 [95% CI, 0.43-0.62] in those receiving an implant at age ≥2 years).
The third model added child and family demographics, improving the model fit [χ2(4) = 170.2; P < .01] and accounting for an additional 15% of the variance in growth trajectories. In this model, family income and ethnicity were significant predictors of language development for the children with cochlear implants (Table II). IQ was the only significant predictor of language growth in the normal-hearing children, but was not related to language growth in the children receiving a cochlear implant.
The fourth model included parenting behaviors (ie, MS, CS, and LS), which significantly improved the model fit [χ2(5) = 15.1; P < .01] and accounted for an additional 11% of the variance in growth trajectories. After adjusting for early hearing experience and child and family demographics, MS and CS predicted increases in language growth (Table I and Figure). In contrast, LS was related to language growth only in the context of high MS, as indicated by the significant interaction between MS and LS (Figure). At 48 months postimplantation, children of parents with higher (ie, above the mean) MS and LS showed 1.52 years (95% CI, 0.94-2.10 years) less delay compared with those with either lower MS or lower LS (Table III).
Parents are an important influence on young children's cognitive, linguistic, social, and behavioral development. The major aim of this study was to examine the effects of parental behaviors in the context of dyadic interactions on deaf children's language growth over the first 4 years postimplantation. The effects of cochlear implants were evaluated in a nationally representative sample. As predicted, deaf children who underwent implantation before age 2 years had a steeper trajectory of language growth compared with children who did so after age 2 years. At 48 months postimplantation, children receiving an implant before age 2 years had only a 1.2-year language delay, compared with 2.6 years in those who did so at age 2 years or later.
Our data provide strong support for our hypothesis that parenting behaviors, including MS, CS, and LS, would significantly affect the growth of oral language after accounting for a child's early hearing experience and child and family demographics. At 48 months postimplantation, children of parents with higher MS had only a 1.3-year language delay, compared with the 2.7-year delay in children of parents with low MS. CS was also a significant and unique predictor of oral language growth over the 4-year period. Children of parents who engaged in more CS had a 1.4-year language delay, compared with 2.6 years in children of parents who used less CS. Finally, LS was also related to improved language development, but only in the context of high MS. Children of parents with both high MS and high LS had only a 1-year delay in language, compared with 2.5 years in the other groups (ie, low MS, high LS; high MS, low LS; and low MS, low LS).
These findings are consistent with the literature on parenting behaviors and linguistic development. Previous studies have documented the positive effects of MS (also termed emotional availability and maternal responsiveness) on language learning and representational play in children with hearing loss13,24 and normal-hearing children.11,12,25,26 Our results are strengthened by the use of observational, standardized, and validated rating scales that have been linked to better developmental outcomes in a national longitudinal study of the effects of early child care.11,12
The magnitude of the effects of MS on the growth of language was similar to that found for age at implantation, suggesting that parenting behaviors are a critical target for intervention to attain optimal language outcomes. The effects were particularly strong for LS in the context of high MS. It appears that both MS and LS are needed to maximize the benefits of cochlear implants. This finding has important clinical implications. Cochlear implant programs, which typically provide rehabilitation focused on speech and language training, would likely see improved outcomes by incorporating MS training into their programs. This could be accomplished using a coaching model in which parents receive hands-on training to support their child's communication skills.10
This study had several limitations. Although we included a normal-hearing control group, these children had higher parental education and family income than the cochlear implant group, which was controlled for in the mixed models. Data on children's educational/rehabilitation services were incomplete and thus were not included in our analyses. In addition, the LS was developed as part of the CDaCI study, and although the interrater reliability of this scale and its predictive validity are strong, they have not been validated by other research groups. Additional longitudinal data on this cohort are currently being collected, providing an opportunity to examine the effects of parenting variables on language development through adolescence.
Funded by the National Institute on Deafness and Other Communication Disorders (R01 DC04797).
Members of the CDaCI Investigative Team include: House Research Institute, Los Angeles: Laurie S. Eisenberg, PhD, CCC-A (PI); Karen Johnson, PhD, CCCA (coordinator); William Luxford, MD (surgeon); Leslie Visser-Dumont, MA, CCC-A (data collection); Amy Martinez, MA, CCC-A (data collection); Dianne Hammes Ganguly, MA (data collection); Jennifer Still, MHS (data collection); Carren J. Stika, PhD (data collection)
Johns Hopkins University, Listening Center, Baltimore: John K. Niparko, MD (PI); Steve Bowditch, MS, CCC-A (data collection); Jill Chinnici, MA, CCC-A (data collection); James Clark, MD (data assembly); Howard Francis, MD (surgeon); Jennifer Mertes, AuD, CCC-A (coordinator); Rick Ostrander, EDD (data collection); Jennifer Yeagle, MEd, CCC-A (data collection)
Johns Hopkins University, The River School, Washington, DC: Nancy Mellon (administration); Meredith Dougherty (data collection); Mary O'Leary Kane, MA, CCC-SLP (former coordinator, data assembly); Meredith Ouellette (coordinator); Julie Verhoff, AuD, CCC-A (data collection); Dawn Marsiglia, MA, CCC-A/SLP (data collection)
University of Miami, Miami: Annelle Hodges, PhD (PI); Thomas Balkany, MD (surgeon); Alina Lopez, MA, CCC-SLP/A (coordinator); Leslie Goodwin, MSN, CCRC (data collection)
University of Michigan, Ann Arbor, MI: Teresa Zwolan, PhD, CCC-A (PI); Caroline Arnedt, MA, CCC-A (clinic coordinator); Hussam El-Kashlam, MD (surgeon); Kelly Starr, MA, CCC-SLP (data collection); Ellen Thomas, MA, CCC-SLP, Cert AVT
University of North Carolina, Carolina Children's Communicative Disorders Program, Chapel Hill: Holly F. B. Teagle, AuD (PI); Craig A. Buchman, MD (surgeon); Carlton Zdanski, MD (surgeon); Hannah Eskridge, MSP (data collection); Harold C. Pillsbury, MD (surgeon); Jennifer Woodard (coordinator)
University of Texas at Dallas, Dallas Cochlear Implant Program, Callier Advanced Hearing Research Center, Dallas: Emily A. Tobey, PhD, CCC-SLP (PI); Lana Britt, AuD (co-coordinator); Janet Lane, MS, CCC-SLP (data collection); Peter Roland, MD (surgeon); Sujin Shin, MA (data collection); Madhu Sundarrajan, MS, CCC-SLP (data collection); Andrea Warner-Czyz, PhD, CCC-AUD (co-coordinator)
Data Coordinating Center, Johns Hopkins University, Welch Center for Prevention, Epidemiology and Clinical Research, Baltimore: Nae-Yuh Wang, PhD (PI, biostatistician); Patricia Bayton (data assembly); Enrico Belarmino (data assembly); Christine Carson, ScM (study manager, data analysis); Nancy E. Fink, MPH (Former PI); Thelma Grace (data assembly)
Psychometrics Center, University of Miami, Department of Psychology, Coral Gables: Alexandra Quittner, PhD (PI); David Barker, PhD (data analysis); Ivette Cruz, PhD (data analysis, data assembly)
This scale measures the amount and quality of stimulation specifically directed to facilitate functional auditory and linguistic skill development. The focus of this scale is on the mother's ability to stimulate the child's auditory and linguistic skill development during everyday activities/routines.
Auditory stimulation consists of focusing the child's attention to sound in all its varieties for the purpose of promoting the development of spoken communication in a natural/sequential manner. Functional auditory skills refer to the child's ability to draw meaning from sound. Functional auditory skill development takes place in a continuum of 4 levels of sequential and overlapping skills, as observed in the auditory hierarchy (Appendix 3). LS refers to the mother's linguistic input to facilitate the child's ability to develop and enhance the use of spoken communication. The objective of maternal auditory and LS is to ensure that the child uses audition to develop spoken language.
The skilled mother promotes auditory learning during everyday activities/routines by pairing sound with meaningful experiences, so that the child will seek and become dependent on auditory input. In an early listener, this can be done by alerting the child to listen and interacting with the child as if he or she can learn through listening. Later this is seen by encouraging the child to monitor his or her own environment through hearing.
The skilled mother will work on developing a listening attitude from the child as it is fundamental to have the child's attention during auditory teaching. Nonauditory learners often become focused on a particular task (eg, toy), ignoring the mother's communicative intent. Thus, the mother should gain the disinterested child's attention to and interest in the auditory realm. In this case, the skilled mother will approach the child in an auditory manner by calling his name, making novel/interesting sounds (eg, whistling, tongue clicking), temporarily removing a toy, and/or redirecting his attention to her vocalizations and the task at hand. Nonauditory cues (eg, tapping) should be used only as a last resort and should always be paired with auditory stimuli. The mother will expect a response from the child that indicates that he has heard the stimuli presented by the mother (eg, head turn, imitating vocalizations, responding to a question, following directives).
The skilled mother uses a variety of techniques to develop/enhance the child's functional auditory abilities and gives the child reasons to listen and to communicate. The skilled mother works on an auditory level where the child can succeed most of the time. She works from the known to the unknown, from the audible to the less audible. For the early listener, auditory stimulation/LS may seem overly simplistic or repetitive; however, this input may be appropriate given the child's listening experience and linguistic development. The mother may give life to a sound by talking about/around a sound that occurred, for instance, saying “uh-oh” when something noisily falls. The mother may use “learning to listen” sounds to link objects in the environment with sound and language (eg, “Hmm … cookies”; “The train goes choo-choo”). The mother may use shorter phrases that are grammatically correct, and may use acoustic highlighting (overexaggeration of sounds/words embedded in phrases used to make the message more salient) designed to maintain the child's interest and to attune him to various types of activities. The mother is aware of the critical listening distance and moves closer to her child to provide auditory stimulation.
The skilled mother uses indirect language stimulation techniques, including self-talk, parallel talk, descriptions, repetitions, and expansions. As a child develops more language, the mother will concentrate on fostering more sophisticated language and on expanding the child's linguistic repertoire (eg, figurative language, abstract knowledge, synonyms for known vocabulary). In addition, the mother will assist the child in improving the quality/clarity of his speech so as to facilitate mastery of the language.
If the child is being taught to both speak and sign, the mothers with the highest scores will consistently speak while signing to the child.
The auditory hierarchy—the continuum of listening skills through which the child passes as he or she learns to use his or her functional auditory abilities to become a competent communicator, together with examples or characteristics of the auditory stimulation appropriate for each stage—is presented in Appendix 3.
A parent scoring high on this scale uses a variety of techniques to move the child along the listening hierarchy in order to facilitate language/cognitive development. Examples include creating opportunities to expose the child to sounds, linking sounds/objects/meaning in the environment or activities, using pitch as a language cue, singing, asking precise language questions, asking for behavioral or verbal responses, and showing high enthusiasm and high positive reinforcement. The skillful mother stimulates auditory/language skill development in ordinary daily routines, in songs, and, most importantly, in play. She uses minimal visual cues and positions herself to maximize the auditory signal. She limits yes/no questions and uses commenting, rephrasing, wait time, and expansion to stimulate conversation rather than relying on questions alone. Overall, the parent expects the child to listen and talk.
A parent scoring low on this scale misses obvious opportunities for auditory exposure, uses visual cues (eg, calling the child's attention by just tapping), and fails to verbalize. In addition, such a parent uses a noninflected tone of voice, uses sound without elaboration, stimulates well below the child's capabilities by just labeling objects or asking questions about simple concepts (eg, “What's this?”, “What color is this?”), and provides repetitive, unchallenging stimulation.
Detection (low end of the hierarchy): Ability to respond to the presence or absence of sound. Examples: “I hear that!”; “What's that?” (focusing attention through a sound stimulus); “Listen, I hear a _ ” (focusing attention on a sound in the environment and finding its source); creating/talking about sounds; shaking toys; attempting to regain the child's attention using auditory stimuli.
Discrimination: Ability to perceive similarities and differences between 2 or more sounds (probably will not be seen in the tapes). Examples: “Tell me if I am singing the same songs: ‘twinkle, twinkle little star…,’ ‘itsy bitsy spider…’”; “Tell me if these words are the same or different: ‘banana,’ ‘apple.’”
Identification: Ability to label by repeating, pointing to, or writing the speech stimulus heard. Examples: moving a toy elevator upward while saying “up, up, up” and encouraging the child to repeat; presenting several animals and encouraging the child to select the cow by saying, “Get the cow that says moo.”
Comprehension (high end of the hierarchy): Ability to understand the meaning of speech by answering questions, following an instruction, paraphrasing, or participating in a conversation. The child's response must be qualitatively different from the stimuli presented (eg, accurately responding through language, vocalization, or action). Examples: asking a precise language question (not just “What's this?”), such as “Who came to dinner yesterday?” or “Which toy would you like to play with?”, and encouraging the child to respond verbally; limiting yes/no questions and using commenting, rephrasing, wait time, and expansion to stimulate conversation rather than simply questions.