Brain Lang. Author manuscript; available in PMC 2010 September 1.
PMCID: PMC2731814
NIHMSID: NIHMS113627

Spectral vs. temporal auditory processing in Specific Language Impairment: A developmental ERP study

Abstract

Pre-linguistic sensory deficits, especially in “temporal” processing, have been implicated in developmental Language Impairment (LI). However, recent evidence has been equivocal, with data suggesting problems in the spectral domain. The present study examined event-related potential (ERP) measures of auditory sensory temporal and spectral processing, and their interaction, in typical children and those with LI (7–17 years; n=25 per group). The stimuli were 3 CV syllables and 3 consonant-to-vowel transitions (spectral sweeps) isolated from the syllables. Each of these 6 stimuli appeared in 3 durations (transitions: 20, 50, and 80 ms; syllables: 120, 150, and 180 ms). Behaviorally, the group with LI showed inferior syllable discrimination with both long and short stimuli. In ERPs, trends were observed in the group with LI for diminished long-latency negativities (the N2-N4 peaks) and a developmentally transient enhancement of the P2 peak. Some, but not all, ERP indices of spectral processing also showed trends to be diminished in the group with LI, specifically in responses to syllables. Importantly, measures of the transition N2-N4 peaks correlated with expressive language abilities in the children with LI. None of the group differences depended on stimulus duration. Therefore, sound brevity did not account for the diminished spectral resolution in these children with LI. Rather, the results suggest a deficit in acoustic feature integration at higher levels of auditory sensory processing. The observed maturational trajectory suggests a non-linear developmental deviance rather than a simple delay.

Keywords: auditory, children, ERP, Language Impairment, SLI, temporal processing

Introduction

Language Impairment (LI), a.k.a. Specific Language Impairment (SLI), is a neuro-developmental disorder characterized by language deficits with relative sparing of other cognitive domains (Bishop, 1997; Leonard, 1997). In English-speaking children, language difficulties include delayed onset and slower acquisition of lexical and grammatical forms, smaller vocabularies, and difficulty acquiring and using inflectional morphology and complex syntax. By definition, LI is not a consequence of hearing loss, articulatory problems, neurological disease, or pervasive developmental disorders. LI has a genetic component and is associated with ADHD and dyslexia later in life (Catts, 1993). Due to high prevalence (Leonard, 1997) and maximal disability during the age of intensive learning, LI poses a significant personal and societal burden (The Agency for Healthcare Research and Quality, 2002).

Three broad theoretical accounts have been offered to explain LI. The “higher-order” account implicates representational or procedural problems in language-specific capacities such as access to innate features of grammar, computation of implicit grammatical rules, or verbal or phonological memory (Montgomery, 2003; Rice, Wexler, & Redmond, 1999; Ullman & Pierpont, 2005). The “lower-level” account implicates language non-specific deficits, including problems with temporal and spectral encoding of sensory information as well as with information processing speed (Lowe & Campbell, 1965; Tallal & Piercy, 1973; 1975). The third account suggests maturational delay (Bishop & McArthur, 2004; 2004b; Bishop & McArthur, 2005; Wright & Zecker, 2004). This study addressed the latter two accounts, which are briefly reviewed below.

Temporal processing in LI

The first models concerning sensory origins of LI suggested a temporal processing deficit, predominant in the auditory modality (Lowe & Campbell, 1965; Tallal & Piercy, 1973; 1975; Tallal, Stark, Kallman, & Mellits, 1981). Studies that gave rise to this model used auditory repetition and temporal judgment tasks and found that children with developmental dysphasia (LI) were poor at discriminating or making sequence judgments about non-verbal stimuli, vowels, or consonants when the stimuli were presented at a fast pace or when they were brief in duration, but performed within the typical range when stimuli were presented at a slower pace or were longer in duration (Tallal & Piercy, 1973; 1974; 1975). Initially, it was concluded that it was “the brevity, not transitional character” of auditory stimuli that challenged individuals with LI. Later, on the basis of findings that performance of children with LI was poor with consonant-vowel (CV) syllables but not with vowels of corresponding duration, it was proposed that the problem was caused by the transitional character of CV acoustics, containing rapidly changing frequencies, as well as by a possibility of backward masking (Tallal, Stark, & Mellits, 1985a; 1985b; Tallal, Merzenich, Miller, & Jenkins, 1998). The backward masking hypothesis received support from several studies reporting that children with LI showed a disadvantage, compared with typically developing children, in detecting test tones specifically when they preceded a masking tone (Wright, Lombardino, King, Puranik, Leonard, & Merzenich, 1997; Marler, Champlin, & Gillam, 2002).

Subsequent studies attempted to further characterize temporal processing as well as clarify its role in language abilities in LI. Bishop et al. (Bishop, Bishop, Bright, James, Delaney, & Tallal, 1999b) studied heritability of auditory deficits, as assessed by an auditory repetition task (Tallal & Piercy, 1973; 1974), in 37 twin pairs that included 55 7–13-year-old children with LI. This study found that performance on auditory repetition was poorer in the group with LI; however, this deficit was not influenced by stimulus presentation rate (inter-stimulus intervals of 10–70 ms vs. 500 ms). Further, Bishop et al. (Bishop, Carlyon, Deeks, & Bishop, 1999a) administered three tests of temporal processing to 8–10-year-old children with LI, their typical peers, and 6 co-twins of children with LI (total n=28). Backward masking (also in McArthur & Bishop, 2004b) and temporal frequency modulation thresholds showed reliable correlations with auditory repetition task scores administered 2 years previously; however, these thresholds showed no relationship with language abilities. Moreover, auditory repetition scores themselves correlated with non-verbal, but not with verbal, abilities. Therefore, while auditory repetition differentiated typically developing children from those with LI, it seemed to reflect abilities other than those directly related to language skills. Several other studies found no differences between children with LI and typical children in tasks requiring fast auditory processing, including tone detection during brief (40–64 ms) gaps in masking noise (Helzer, Champlin, & Gillam, 1996), discrimination of brief (20-ms) tones presented at very short ISIs (16-, 32-, 64-ms, and longer) (Fernell, Norrelgen, Bozkurt, Hellberg, & Lowing, 2002; Norrelgen, Lacerda, & Forssberg, 2002), or discrimination of brief frequency glides (Bishop, Adams, Nation, & Rosen, 2005a). Furthermore, while the three latter studies found no problems with rapid auditory processing in children with LI, they found impaired discrimination of syllable pairs. Therefore, while the early results suggested temporal processing deficits in LI, further research has failed to demonstrate a direct relationship between temporal processing problems and language abilities.

The three aspects of the temporal domain that have been suggested to pose a challenge for individuals with LI are stimulus brevity, transitional character, and fast presentation rate. While difficulty with brief stimuli could be caused by a slowed sampling rate, a time-domain property, problems with the transitional character of stimuli suggest a problem in the spectral domain1. Spectral sweeps are distinct acoustic features that are encoded by sweep-sensitive neurons in the auditory cortex (lateral belt). These neurons are tuned to specific “instantaneous” frequency and slope combinations, computed over brief time windows (Tian & Rauschecker, 2004). Therefore, the temporal sequence of instantaneous frequencies constituting a frequency sweep is encoded by multiple neuronal populations. The discrimination between two frequency sweeps, then, involves discrimination between two spectral representations encoded by two pools of neurons. Therefore, spectral processing appears to play an important role in the processing of spectral sweeps, an essential feature of human speech and many environmental sounds. Finally, problems with rapid stimulus presentation rate refer to a temporal scale of hundreds of milliseconds (inter-stimulus intervals), which is at least an order of magnitude longer/slower than the temporal scale involved in perception of brief individual sounds (the sound “brevity” account). Therefore, it is unlikely that these three deficiencies (problems with brief stimuli, transitional stimuli, and rapid stimulus presentation) originate from the same processing deficit. This may explain some of the apparent inconsistencies among the above-mentioned studies.

Spectral processing in LI

Interestingly, in the Wright et al. study (Wright, et al., 1997), children with LI needed a larger frequency notch in the masking noise than their controls in order to overcome masking interference. This suggested a diminished capacity of spectral resolution in LI. Consistently, several early studies by Tallal’s group that were designed to address temporal processing had also found evidence for spectral processing deficits. Specifically, Tallal and Stark (1981) found that a group of 35 5–8-year-old children with LI had difficulty discriminating syllables /ba/ from /da/ with 40-ms CVTs and also syllables /sa/ and /sha/, in spite of the long (130-ms) duration of the fricative interval. Both of the above contrasts are spectrum-based. Further, Stark and Heinz (1996) found that perceptual similarity, and not the duration, of vowel stimuli made their discrimination challenging for children with LI. Finally, abnormal electrophysiological indices of detection of change in tone frequency (Holopainen, Korpilahti, Juottonen, Lang, & Sillanpää, 1997; Korpilahti & Lang, 1994) and in vowels (Shafer, Morr, Datta, Kurtzberg, & Schwartz, 2005) in children with LI are also consistent with a spectral processing deficit in this population.

Both temporal and spectral perception in the same children with LI was examined by McArthur and Bishop (McArthur & Bishop, 2004a; 2004b; McArthur & Bishop, 2005). These authors found that frequency discrimination thresholds were elevated in about one-third of children with LI, both for brief (25-ms) and long (250-ms) simple tones, complex tones, and vowels. Further, these thresholds correlated with non-word reading abilities and did not correlate with non-verbal abilities. Stimulus duration did not affect performance in either the typically developing children or those with LI, whereas the spectral stimulus complexity did (McArthur & Bishop, 2005). In general, vowels induced the highest discrimination thresholds in all groups. In addition, younger children with LI showed elevated discrimination thresholds, as compared with their peers, for vowels and complex tones. Therefore, McArthur and Bishop concluded that it is the spectral complexity, rather than the phonemic nature, of stimuli that challenged the auditory system of children with LI. Consistent with this idea, Bishop et al. (2005a) found that 9–12-year-old children with LI were not impaired on the discrimination of direction (up or down) of one-formant frequency glides, either as a function of the glide’s duration or of its frequency span. However, these same children performed more poorly than their controls on speech-in-noise discrimination. Therefore, behavioral and electrophysiological evidence suggests that children with LI have problems in the spectral processing domain. However, links between these deficits and language abilities remain elusive.

Maturational account of LI

The finding that younger children with LI demonstrated elevated frequency discrimination thresholds led to proposing a maturational delay account of LI (Bishop & McArthur, 2004; 2005; McArthur & Bishop, 2004b). In three event-related potential (ERP) studies, McArthur and Bishop found that the N1-P2 region of auditory ERPs was age-inappropriate in children with LI, including subgroups with poor as well as normal frequency discrimination thresholds. Intra-class correlations between the N1-P2 elicited by a single 25-ms tone and the N1-P2 elicited by tone pairs with 20, 50, or 150 ms intra-pair intervals in 14–19-year-old children with LI were statistically comparable to the corresponding intra-class correlations in 11–13-year-old typically developing children (Bishop & McArthur, 2004). This suggested a maturational lag in the children with LI in the ability to differentiate, at the electrophysiological level, two closely spaced tones. However, no interactions were found between the tone separation interval and language group. Therefore, these findings did not support the idea of delayed maturation of rapid processing skills or of a link between the maturation of the brain’s electrophysiological response and that of perceptual frequency discrimination.

Spectral processing in younger children with LI was also studied by Sussman (1993), who found that 5–6-year-old children with LI discriminated a spectrum-based /ba/-/da/ continuum better than language-matched 4-year-old children, but performed worse than their peers on a phonetic categorization task with these same stimuli. These results suggested delayed maturation of phonological processing alongside non-delayed sensory discrimination abilities. Therefore, while both behavioral and electrophysiological measures suggest maturational delay in LI, a correspondence between these findings and language remains to be demonstrated.

The goal of the present study was to further clarify the nature of auditory sensory deficits in LI. In particular, we addressed the interaction between temporal and spectral processing by examining auditory spectral encoding as a function of stimulus duration, an approach similar to that undertaken by Bishop and McArthur (McArthur & Bishop, 2004a; 2004b; McArthur & Bishop, 2005). The stimuli for the present study were chosen to maximize the challenge to the auditory system as well as the relevance for language functioning. Specifically, we used complex frequency sweeps of 3 different durations: consonant-to-vowel transitions isolated from spoken syllables. Transitions like these are acoustic events that are encountered frequently in both verbal and non-verbal sounds (e.g., animal calls, environmental sounds) and can be critical for their identification. These frequency sweeps are brief and acoustically rich, though linear in shape, providing an ideal stimulus material to test the spectral - temporal processing interface. The second stimulus type was CV syllables that included the same spectral and temporal variables of the transitions, yet within verbal sounds with a complex large-scale structure. Behavioral syllable discrimination, as well as auditory sensory ERPs elicited by syllable and transition stimuli were obtained in a large group of children and were interpreted in light of the current knowledge of auditory ERPs.

The specific aims of the study were as follows: 1) to assess overall auditory processing capacity by comparing ERP peak amplitudes between the typically developing children and those with LI; 2) to examine an aspect of temporal processing (stimulus duration) by assessing enhancement of sensory ERPs with increasing sound length (Kushnerenko, Čeponienė, Fellman, Huotilainen, & Winkler, 2001); 3) to test spectral encoding in the auditory sensory system by assessing amplitude differences between the ERPs elicited by spectrally different stimuli (the three syllables and the three transitions); 4) to assess the spectral-temporal processing interface by examining behavioral as well as electrophysiological discrimination of spectrally distinct stimuli as a function of their duration; 5) to address a hypothesis of maturational delay by examining age-related changes in auditory processing, including stimulus detection (N1, P2) and content encoding (P2, N2/N4).

Methods

Subjects

Fifty-three children were recruited into the study via advertisements in parent magazines, schools, a network of speech and language therapists, and the Speech and Language Services program of the San Diego Unified School District. All children were native speakers of American English. Three children were excluded from the study due to insufficient data. Of the remaining children, 25 had Language Impairment (LI; mean age 12 years 2 months; 18 males) and 25 were typically developing (TD; mean age 12 years 2 months; 18 males). The TD children were matched to those with LI by gender and age (within 6 months). They were screened for neurological disorders, uncorrected vision, and emotional and behavioral problems. Hearing was screened with a portable audiometer for pure tones of 500, 1000, 2000, and 4000 Hz. A threshold of 20 dB or lower for at least 3 frequencies was required to pass. All participants signed informed consents in accordance with the UCSD Institutional Review Board procedures.

The specific inclusionary/exclusionary criteria for the TD children were as follows: a) normal developmental and medical history, as assessed by a phone screening interview with the parents, a medical and family history questionnaire developed in our Center, and a neurological assessment; b) normal intelligence: a standard score at or above 85 based on IQ screening with age-appropriate Wechsler Vocabulary and Block design sub-tests (Wechsler, 1991; 1997), established to have high validity (validity coefficient [r] > .90; Sattler, 1988); c) normal language as per the Clinical Evaluation of Language Fundamentals – Revised Screening test (CELF-R screen; Semel, Wiig, & Secord, 1989); d) grade-level academic functioning.

The specific inclusionary/exclusionary criteria for the children with language impairment were as follows: a) a non-verbal Performance IQ of 80 or higher; b) Expressive Language scores, computed from CELF-R/CELF-3 sub-tests (Semel, Wiig, & Secord, 1995), 1.5 or more standard deviations below the age-appropriate mean; c) no specific neurological diagnosis (e.g., cerebral palsy, stroke, seizures), autism, or socio-emotional disturbance. Six children with LI had a clinical diagnosis of attention-deficit hyperactivity disorder (ADHD). Standardized testing results are shown in Tables 1 and 2. While there was a significant language group difference in non-verbal intelligence, the group with LI performed within 1 SD of the normative mean, and no significant correlations between the block design scores and ERP or behavioral discrimination measures were found in the overall subject sample.

Table 1
Characteristics of the LI and TD subject samples
Table 2
Characteristics of the three Age sub-groups of LI and TD children

To examine how auditory processing in the population with LI changes with age, children were divided into three age groups roughly corresponding to the maturational ERP periods determined by Bishop et al. (Bishop, Hardiman, Uwer, & von Suchodoletz, 2007a; Table 2). The oldest group, 15–17 years, had only 5 subjects. Therefore, age group analyses on ERP data were conducted using GRP1 (7–10 years) and GRP2 (11–14 years) data only. GRP3 ERP data are presented for descriptive purposes to demonstrate the continuity of the age-related change observed from the age of GRP1 to that of GRP2. Data from all 50 children were used in the behavioral syllable discrimination analyses. Further, since differences in ERP abnormality patterns were found between the children with LI in GRP1 vs. GRP2, these groups were assessed for possible differences in LI severity. As illustrated in Table 2, children with LI in GRP2 showed poorer performance on the CELF-R/3 Total Language score. There were no differences between GRP1 and GRP2 in the proportion of children with a combined LI type, those with dyslexia, or those with a family history of language, speech, or reading problems.

Stimuli

The stimuli for the present study were created using the Semisynthetic Speech Generation method (SSG; Alku, Tiitinen, & Näätänen, 1999). This method combines the use of natural speech with the ability to quantify and modify its features according to the particular goals of a study. For the present study, three consonant-vowel syllables, /ba/, /da/, and /ga/, spoken by a female speaker of American English, were recorded, digitized, and served as raw material for computing the stimulus synthesis parameters with the SSG. Two stimulus sets were created, as described below. The syllable set consisted of the three syllables, with three consonant-vowel transition durations each (9 stimuli). The transition set consisted of the consonant-vowel transitions (CVTs) isolated from these nine syllables (9 stimuli). Each syllable consisted of a pre-voicing bar of 30 ms, a consonant burst of 10 ms, a consonant-vowel transition (CVT) of 20, 50, or 80 ms in duration, and a steady-state vowel /a/ of 60 ms in duration (Fig. 1, upper image). Each CVT represented a syllable segment beginning at the end of the consonant burst and ending at the beginning of the steady-state portion of the vowel (Fig. 1, lower image). Therefore, depending on the CVT duration, the total durations of the syllable stimuli were 120, 150, or 180 ms and those of the isolated consonant-vowel transitions were 20, 50, and 80 ms. The syllable stimuli followed an envelope computed from naturally pronounced samples. The isolated consonant-vowel transitions had rise-and-fall times of 5 ms. The fundamental frequency of all sounds was 183 Hz. The starting formant frequencies for the consonant-vowel transitions were: /ba/, F1 = 605, F2 = 1150, F3 = 2700, and F4 = 3960 Hz; /da/, F1 = 540, F2 = 1895, F3 = 3210, and F4 = 4005 Hz; /ga/, F1 = 410, F2 = 1785, F3 = 2650, and F4 = 3790 Hz. The formants of the steady-state vowel segment, identical in all syllable stimuli, were: F1 = 1075, F2 = 1445, F3 = 2930, and F4 = 4910 Hz. Since the same glottal excitation was used in the synthesis of all syllables, acoustically the stimuli differed only in terms of the plosives and formant transitions, whereas the rest of the features (fundamental frequency, intonation, phonation type, intensity) were held equal. The perceptual quality of the syllable stimuli was confirmed to be excellent via an identification task performed by 17 adult subjects (Čeponienė, et al., 2005).
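For clarity, the timing arithmetic and formant parameters above can be summarized in a short sketch (Python; the constant names, dictionary layout, and print-out are our own illustration and are not part of the SSG toolchain):

```python
# Illustrative summary of the stimulus parameters reported in the Methods.
PREVOICING_MS = 30   # pre-consonant voice bar (cf. Fig. 1 caption)
BURST_MS = 10        # consonant burst
VOWEL_MS = 60        # steady-state vowel /a/
CVT_DURATIONS_MS = (20, 50, 80)

# Starting formant frequencies (Hz) of the consonant-to-vowel transitions
cvt_onset_formants = {
    "ba": {"F1": 605, "F2": 1150, "F3": 2700, "F4": 3960},
    "da": {"F1": 540, "F2": 1895, "F3": 3210, "F4": 4005},
    "ga": {"F1": 410, "F2": 1785, "F3": 2650, "F4": 3790},
}
# Formants (Hz) of the steady-state vowel, identical across syllables
vowel_formants = {"F1": 1075, "F2": 1445, "F3": 2930, "F4": 4910}

for cvt_ms in CVT_DURATIONS_MS:
    total_ms = PREVOICING_MS + BURST_MS + cvt_ms + VOWEL_MS
    print(f"{cvt_ms}-ms CVT syllable -> total duration {total_ms} ms")
# Prints 120, 150, and 180 ms, matching the syllable durations in the text.
```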

Fig. 1
Spectral view of example stimuli. Top panel: 50-ms CVT syllable /ba/ containing a 50-ms consonant-to-vowel transition (CVT). Bottom panel: 50-ms CVT isolated from the 50-ms CVT syllable /ba/. a - pre-consonant voice bar (30 ms); b – consonant burst (10 ms); ...

The three stimuli differing in spectral characteristics were presented at equal probabilities within a stimulus block, with presentation sequences optimized to enhance spectrum-specific modulation of ERP amplitudes. Sensory ERPs show stimulus-specific as well as non-specific refractoriness, and a balance between the two appears to be optimal with 3 to 5 stimuli in a sequence (Čeponienė, et al., 2002; Jacobsen & Schröger, 2001).

Behavioral syllable discrimination task

This task provided a behavioral index of syllable discrimination as a function of CVT in a syllable. During this task, children listened to a total of 180 Same and Different syllable pairs, 50% each, presented with 1-sec intra-pair interval. The children pressed a “happy-face” button if they perceived two syllables to be the same and a “sad-face” button if they perceived two syllables to be different. Three seconds were allowed to respond and the task took 10 to 15 minutes to complete. Syllables with 20-ms and 80-ms CVTs were presented in a random order. Assessed were reaction times as well as no-response-corrected2 %-correct and d′ measures for accuracy on the Same and Different 20-ms and 80-ms sound pairs.
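As a point of reference for the accuracy measures reported below, the following minimal sketch (Python) shows one common way of computing d′ for a same/different task; the function name and the log-linear correction for perfect hit or false-alarm rates are our own illustrative choices, not necessarily those used in the study.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' for a same/different task: z(hit rate) - z(false-alarm rate).

    A log-linear correction keeps perfect hit or false-alarm rates from
    producing infinite z-scores; this particular correction is an
    illustrative choice.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical example: 80 of 90 Different pairs detected, 12 false alarms
# on 90 Same pairs.
print(d_prime(hits=80, misses=10, false_alarms=12, correct_rejections=78))
```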

ERP experiment

In the ERP experiment the stimuli were presented in blocks of 225 sounds. Within a block, sounds of the same type (either syllables or transitions) and the same CVT duration were presented (e.g., a 50-ms CVT syllable block contained the 3 syllables /ba/, /da/, and /ga/ with a 50-ms CVT duration). Stimuli were presented with equal probabilities within a block, for a total of 6 different blocks (2 stimulus types × 3 CVT durations). The intensities of the different-duration stimuli were equalized to 62 dB SPL at the subject’s head, and the inter-stimulus interval (onset-to-onset) varied randomly between 700 and 900 ms. Sounds were delivered by stimulus presentation software (Presentation® software, Version 0.70, www.neurobs.com) and played via 2 loudspeakers situated 120 cm in front of the subject, 40 degrees to either side of the midline. During the experiment, subjects watched self-chosen soundless age-appropriate cartoons on a computer monitor and were asked to ignore the sounds.
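A minimal sketch of how such a block could be scheduled (Python; this only illustrates the equiprobable item selection and the 700–900 ms onset-to-onset jitter described above, and is not the actual Presentation script used in the study):

```python
import random

def make_block(stimuli=("ba", "da", "ga"), n_trials=225, isi_ms=(700, 900)):
    """One block: equiprobable spectral items, jittered onset-to-onset interval."""
    trials = [stimuli[i % len(stimuli)] for i in range(n_trials)]  # 75 of each
    random.shuffle(trials)
    schedule, onset = [], 0.0
    for item in trials:
        schedule.append((onset, item))
        onset += random.uniform(*isi_ms)  # onset-to-onset ISI, 700-900 ms
    return schedule

block = make_block()   # e.g., one 50-ms CVT syllable block
print(block[:3])
```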

EEG recording and averaging

Continuous EEG was recorded using a 32-electrode cap (Electrocap, Inc.) with the following electrodes attached to the scalp, according to the 10–20 system: FP1, FP2, F7, F8, FC6, FC5, F3, Fz, F4, TC3, TC4 (30% distance from T3 to C3 and T4 to C4, respectively), FC1, FC2, C3, Cz, C4, PT3, PT4 (halfway between P3 – T3 and P4 –T4, respectively), T5, T6, CP1, CP2, P3, Pz, P4, PO3, PO4, O1, O2, and the right mastoid. Eye movements were monitored with two electrodes, one attached below the left eye and another at the outer corner of the right eye. During data acquisition, all channels were referenced to the left mastoid; offline, data were re-referenced to the average of the left- and right-mastoid tracings.
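Since the recordings were referenced online to the left mastoid and re-referenced offline to the average of the two mastoids, the offline step reduces to subtracting half of the right-mastoid trace from every channel. A minimal numpy sketch, assuming a channels × samples array in which the right mastoid is one of the rows:

```python
import numpy as np

def reref_to_mastoid_average(eeg, right_mastoid_idx):
    """Re-reference left-mastoid-referenced EEG to the mean of both mastoids.

    With the left mastoid as the online reference, its trace is implicitly
    zero, so the new reference (LM + RM) / 2 equals RM / 2 for every channel.
    eeg: array of shape (n_channels, n_samples).
    """
    return eeg - eeg[right_mastoid_idx] / 2.0

# Example with random numbers standing in for a 31-channel recording
eeg = np.random.randn(31, 1000)
eeg_reref = reref_to_mastoid_average(eeg, right_mastoid_idx=30)
```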

The EEG (0.01 – 100 Hz) was amplified × 20,000 and digitized at 250 Hz for offline analyses. Prior to averaging, an independent-component analysis (Stark & Heinz, 1996) was used to correct for eye blinks and lateral movements. After this, each data set was examined for any remaining artifacts and the artifact cutoff values were adjusted individually for each subject to ensure their rejection during averaging. Epochs containing 100 ms pre-stimulus and 800 ms post-stimulus time were baseline-corrected with respect to the pre-stimulus interval and averaged by stimulus type. Frequencies higher than 60 Hz were filtered out by convolving the data with a Gaussian function. On average, control children’s data contained 256 accepted trials in any given Spectral Item (e.g., syllable/ba/across the three CVT durations) or Duration (e.g., all syllables with 20-ms CVTs) bin; the corresponding LI data contained an average of 241 accepted trials.
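The epoching, baseline-correction, artifact-rejection, and averaging steps can be outlined as follows (Python/numpy; the amplitude cutoff, array shapes, and function name are illustrative assumptions, and the ICA correction and Gaussian low-pass filtering steps are omitted for brevity):

```python
import numpy as np

FS = 250                     # sampling rate, Hz
PRE_MS, POST_MS = 100, 800   # epoch window around stimulus onset

def epoch_and_average(eeg, onset_samples, reject_uv=150.0):
    """Cut epochs, baseline-correct, reject artifacts, and average.

    eeg: (n_channels, n_samples) array; onset_samples: stimulus onsets in
    samples. The fixed amplitude cutoff is illustrative; in the study,
    cutoffs were adjusted per subject after ocular-artifact correction.
    """
    pre = int(PRE_MS * FS / 1000)
    post = int(POST_MS * FS / 1000)
    kept = []
    for onset in onset_samples:
        ep = eeg[:, onset - pre:onset + post]
        ep = ep - ep[:, :pre].mean(axis=1, keepdims=True)  # baseline correction
        if np.abs(ep).max() <= reject_uv:                   # artifact rejection
            kept.append(ep)
    return np.mean(kept, axis=0), len(kept)

# average_erp, n_accepted = epoch_and_average(continuous_eeg, onset_samples=[...])
```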

Data measurement and analysis

Based on visual inspection of syllable and transition grand-average waveforms (Figs. 2 & 3), latency windows for automatic peak search by Matlab-based in-house software were determined as shown in Table 3. Each computer-selected peak was visually verified by an experimenter. If deemed necessary, peak selection was changed. The main criterion for peak adjustment was choosing a deflection closest in latency to that of the corresponding peak at the electrode sites where it was clearly identifiable. Mean amplitudes of the peaks were calculated over time intervals roughly equaling 20% of that peak’s duration in the grand-average waveforms. These were 22 ms for the P1, N1, and P2 peaks and 44 ms for the N2 and N4 peaks. For each subject, these intervals were centered at their peak latency at each electrode.
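The peak-search and mean-amplitude procedure can be sketched as follows (Python; the in-house Matlab software is not available, so the function below only illustrates the logic described above, with variable names and the example window being our own assumptions):

```python
import numpy as np

FS = 250           # sampling rate, Hz
BASELINE_MS = 100  # pre-stimulus portion of each epoch

def peak_mean_amplitude(erp, window_ms, polarity, interval_ms):
    """Locate a peak within a latency window and return its mean amplitude.

    erp: 1-D averaged waveform for one electrode, starting 100 ms pre-stimulus.
    window_ms: (start, end) search window relative to stimulus onset.
    polarity: +1 for positive peaks (P1, P2), -1 for negative peaks (N2, N4).
    interval_ms: measurement interval centred on the peak (22 or 44 ms here).
    """
    def to_samples(ms):
        return int(round((ms + BASELINE_MS) * FS / 1000))

    lo, hi = to_samples(window_ms[0]), to_samples(window_ms[1])
    peak_idx = lo + int(np.argmax(erp[lo:hi] * polarity))
    half = int(round(interval_ms / 2 * FS / 1000))
    return float(erp[peak_idx - half:peak_idx + half + 1].mean())

# Example (window values are illustrative; the actual windows are in Table 3):
# n2_amp = peak_mean_amplitude(average_erp[cz_idx], (180, 300), -1, 44)
```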

Fig. 2
ERPs elicited by syllables with 20-ms, 50-ms, and 80-ms CVTs in the three Age Groups of children with LI and their typically developing peers. Age Group 3 shown for descriptive purposes only. There was a trend for diminished N2-N4 peaks in the group with ...
Fig. 3
ERPs elicited by isolated 20-ms, 50-ms, and 80-ms transitions in the three Age Groups of children with LI and their controls. Age Group 3 shown for descriptive purposes only. There was a trend for diminished N2-N4 peaks in group with LI and an enhanced ...
Table 3
Peak search windows (ms) derived from grand-average waveforms.

In order to assess whether spectral differentials increased as a function of CVT duration, and whether this interacted with Language Group, absolute values of spectral differentials were calculated for the P2 peak that showed the most consistent spectral item effects. The differentials were as follows:/ba/-/da/,/ba/-/ga/,/da/-/ga/. For stimuli with 20-ms and 50-ms CVTs each, the differentials were averaged across the three differences and the 23 scalp sites, resulting in one value per CVT/subject.
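The P2 spectral differential computation can be expressed compactly as follows (Python; the array names and dictionary layout are our own illustration of the procedure described above):

```python
import numpy as np
from itertools import combinations

def p2_spectral_differential(p2_amps):
    """Mean absolute P2 amplitude difference among the three spectral items.

    p2_amps: dict mapping 'ba', 'da', 'ga' to arrays of P2 mean amplitudes
    over the 23 scalp sites, for one subject and one CVT duration. Returns
    one value: the mean of |ba-da|, |ba-ga|, |da-ga| across the 23 sites.
    """
    diffs = [np.abs(p2_amps[a] - p2_amps[b])
             for a, b in combinations(("ba", "da", "ga"), 2)]
    return float(np.mean(diffs))

# Example with made-up amplitudes (uV) at 23 electrodes:
rng = np.random.default_rng(0)
amps = {s: rng.normal(2.0, 0.5, 23) for s in ("ba", "da", "ga")}
print(p2_spectral_differential(amps))
```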

The default electrode set for the ANOVAs comprised 23 electrodes that were averaged into three regions in order to reduce the number of dependent variables: Anterior Region: F7, F3, Fz, F4, F8, FC5, FC1, FC2, FC6; Central Region: TC3, C3, Cz, C4, TC4; Posterior Region: PT3, CP1, CP2, PT4, P3, Pz, P4. Our initial ANOVAs were conducted separately for syllable and transition trials, one each for the ERP peaks of interest (P1, P2, N2, N4). Therefore, the adjusted alpha level for the 8 conducted ANOVAs was .00625. Between-subject factors were Language Group (LI, TD) and Age Group (GRP1, 7–10 yrs; GRP2, 11–14 yrs). The oldest subjects (GRP3, 15–17 yrs) were not used in these analyses because the sample size was too small. Within-subject factors were Spectral Item (ba, da, ga), CVT Duration (20, 50, 80 ms), and Region (Anterior, Central, Posterior). When significant interactions between the variables were found, follow-up ANOVAs were run to clarify their origins. The Huynh-Feldt adjustment was applied as appropriate.
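A minimal sketch of the electrode-to-region averaging and the experiment-wise alpha adjustment (Python; the function and variable names are our own, and the per-peak amplitude array is assumed to be ordered like the channel-name list):

```python
import numpy as np

REGIONS = {
    "Anterior":  ["F7", "F3", "Fz", "F4", "F8", "FC5", "FC1", "FC2", "FC6"],
    "Central":   ["TC3", "C3", "Cz", "C4", "TC4"],
    "Posterior": ["PT3", "CP1", "CP2", "PT4", "P3", "Pz", "P4"],
}

def average_into_regions(peak_amps, channel_names):
    """Collapse per-electrode peak amplitudes into the three scalp regions.

    peak_amps: (n_channels,) amplitudes for one peak and condition;
    channel_names: channel labels in the same order as peak_amps.
    """
    index = {ch: i for i, ch in enumerate(channel_names)}
    return {region: float(np.mean([peak_amps[index[ch]] for ch in chans]))
            for region, chans in REGIONS.items()}

# Experiment-wise Bonferroni-adjusted alpha for the 8 ANOVAs, as in the text:
alpha_adjusted = 0.05 / 8   # = 0.00625
```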

ERP-behavior correlations

In order to examine possible links between the ERP measures, language, and perceptual abilities in the group with LI, two-tailed partial Pearson correlations (controlling for age) were conducted between a composite ERP index of combined N2+N4 amplitudes in syllable and transition trials and the CELF-R/3 expressive and receptive scores. The decision to use N2 and N4 values was made post-hoc since these peaks showed the strongest trends to differ between the two language groups, and because they have been shown to reflect spectral as well as duration stimulus features. The two peaks were combined in order to increase the stability of the data and reduce the number of analyses. The composite ERP index was computed by averaging N2 and N4 amplitudes in response to syllables /ba/, /da/, and /ga/ over 10 fronto-central scalp sites (F3, Fz, F4, FC1, FC2, C3, Cz, C4, CP1, CP2) and across the two peaks. That is, one ERP measure was derived by averaging across 60 mean amplitude values (i.e., 10 electrode locations × 3 syllables × 2 peaks). The same was done for the spectral differentials, except that instead of raw amplitudes of the 3 syllable ERPs, amplitude differences were used: /ba/-/da/, /ba/-/ga/, and /da/-/ga/.
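The composite index and the age-controlled correlation can be sketched as follows (Python; the array layout, function names, synthetic example data, and the use of the standard first-order partial-correlation formula are our own illustrative assumptions):

```python
import numpy as np
from scipy import stats

# Fronto-central sites over which the amplitudes in `amps` are assumed to be taken
FRONTO_CENTRAL = ["F3", "Fz", "F4", "FC1", "FC2", "C3", "Cz", "C4", "CP1", "CP2"]

def composite_n2n4(amps):
    """Composite ERP index: mean over 60 values per subject.

    amps: array of shape (n_subjects, 2 peaks, 3 syllables, 10 electrodes),
    i.e., N2 and N4 mean amplitudes at the 10 fronto-central sites.
    """
    return amps.reshape(amps.shape[0], -1).mean(axis=1)

def partial_corr_controlling_age(x, y, age):
    """First-order partial Pearson correlation of x and y, controlling for age."""
    r_xy = stats.pearsonr(x, y)[0]
    r_xa = stats.pearsonr(x, age)[0]
    r_ya = stats.pearsonr(y, age)[0]
    return (r_xy - r_xa * r_ya) / np.sqrt((1 - r_xa**2) * (1 - r_ya**2))

# Example with synthetic data standing in for 25 subjects:
rng = np.random.default_rng(1)
amps = rng.normal(-3.0, 1.0, size=(25, 2, 3, 10))   # N2/N4 amplitudes (uV)
celf_expressive = rng.normal(70, 10, size=25)        # CELF-R/3 expressive scores
age = rng.uniform(7, 17, size=25)
r = partial_corr_controlling_age(celf_expressive, composite_n2n4(amps), age)
```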

Results

Behavioral Syllable Discrimination Task

Overall, the TD children were more accurate (92% vs. 78% correct; t=3.35, p<.002; d′=3.60 and 2.46, respectively, p<.0001), but not significantly faster (TD: 978 ms, LI: 1052 ms, t=.983, p<.33), than the children with LI (Table 4). Across both Language groups, accuracy was marginally better for stimuli with longer CVT duration (80-ms CVT: 88% correct; 20-ms CVT: 86% correct; F(1,48)=5.47, p<.025). However, the Language Group × Duration interaction was not significant. The main Pairing effect (Same vs. Different) for accuracy was not significant; however, the Language Group x Pairing interaction was (F(1,48)=4.08, p<.05). It showed that the TD children performed similarly on both types of trials whereas the children with LI performed better on Same than Different trials (83% vs. 76% correct, respectively).

Table 4
Accuracy and Reaction Times from behavioral Syllable Discrimination task.

Reaction times did not differ between the groups. In both groups, RTs were shorter for Same than Different pairs (RT; F(1,48)=60.00, p<.0001; Table 4).

Age Effects

Accuracy. Across the two Language Groups, there were no significant effects or interactions involving Age. There was, however, a slight trend for improved performance in GRP2 compared with GRP1 among the TD children (F(1,19) = 3.11, p<.09), but not among the children with LI.

Reaction Times

RTs decreased with age (main Age effect: F(2,44)=10.40, p<.0001); however, the decrease was significant only from GRP1 to GRP2 (post hoc, p<.002). No interactions were found.

ERP results

Overall Language and Age Group Effects

None of the main effects involving overall Language Group remained significant after the experiment-wise Bonferroni correction (alpha of .00625). The Language Group differences in the N2 and N4 peaks, seen in Figures 2 & 3 and Table 5, showed trends (transition N2: F(1,36)=4.12, p<.05; syllable N2 and both N4 peaks, p<0.1).

Table 5
Mean amplitudes of ERP peaks (μV (sem)) in the LI and Control groups. Reported values are averages over 23 electrodes used in ANOVAs.

Across both Language Groups, age-related changes were found in the P1 peak, which was smaller in amplitude in age GRP2 than in GRP1 (syllables: F(1,36)=10.16, p<.003; transitions: F(1,36)=8.82, p<.005). Transition N4 showed a similar trend (F(1,36)=4.99, p<.03). As seen in Figs. 2 & 3 and Table 5, children with LI in GRP2, but not in GRP1, appeared to show diminished N2 and N4 peaks in both syllable and transition trials; however, Language Group × Age interactions did not reach significance in the ANOVAs. There was a trend for Language Group × Age interaction for the transition P2 peak (F(1,36)=4.39, p<.04), due to a tendency for an increased P2 amplitude in children with LI, as compared with the controls, in GRP2 (F(1,18)=8.22, p<.01 in post hoc analysis) but not in GRP1 (p<.34).

Since 5 children with LI in GRP2 had a clinical ADHD diagnosis, their ERPs were compared with those of the GRP2 children with LI who did not have an ADHD diagnosis. As Fig. 4 demonstrates, the ERP differences between the Language Groups were similar across the ADHD(+) and ADHD(−) sub-groups.

Fig. 4
ERPs of children with LI in GRP2 with and without ADHD diagnosis as well as their age-matched controls. The between Language Group differences were similar across the ADHD (+) and ADHD (−) subgroups.

The N1 peak could be reliably measured in age GRP3 only. In this age group, it was roughly twice as large in amplitude (onset-to-peak) in the children with LI as in their TD peers (Figs. 2 & 3; Table 5). However, due to the small sample size of this group, this effect was not statistically assessed.

In summary, no statistically significant ERP differences were found between the two language groups. However, trends were revealed for the diminished long-latency negative peaks (N2 and N4) as well as a developmentally transient (age GRP2) increase in the P2 amplitude in the children with LI.

Overall Sound Duration Effects

As expected, sound duration effects manifested as increased N2 and N4 amplitudes in response to longer sounds (Fig. 5; transition N2: F(2,72) = 13.69, p<.0001; syllable N4: F(2,72)=13.06, p<.0001; transition N4: F(2,72) = 17.48, p<.0001). However, Language Group did not interact with sound duration for any of these peaks, which indicated that children with LI enhanced their responses to longer sounds similarly to the TD children.

Fig. 6
ERPs elicited by syllables/ba/,/da/, and/ga/in the three Age Groups of children with LI and their controls. Across all subjects, significant differences between the three Spectral Items were found in the P2 and N2 amplitudes. The group with LI showed ...

An unexpected finding was the transition P2 amplitude increase with longer stimulus durations (F(2,72) = 7.61, p<.001). Further, sound duration showed a tendency to interact with Language Group (p<.06) such that in the TD children, P2 amplitude increased with sound duration (p<.0001), whereas in the children with LI it was larger than in the TD children for 20-ms and 50-ms transitions and did not increase significantly with longer sounds (p<.5).

Overall Spectral Item effects

As expected, overall Spectral Item (SI) effects manifested as P2 and N2 amplitude variations (Čeponienė, et al., 2001; Čeponienė, Torki, Koyama, Alku, & Townsend, 2008; Fig. 6). The significant transition P2 (F(2,72) = 8.93, p<.0003) and syllable N2 (F(2,72) = 6.56, p<.002) spectral item effects did not interact with Language Group. However, trends were found for the syllable P1 and N4 peaks to interact with Language Group.

Syllable P1 showed a trend for a 3-way interaction between Language Group, Age, and Spectral Item (F(2,72) = 4.38, p<.02). This effect originated from Language Group differences in GRP2 (not GRP1), with the Spectral Item differences significant in the TD group (p<.001) but not in the group with LI (p<.41). Syllable N4 produced a weaker trend for a Language Group × Spectral Item effect (F(2,72) = 2.91, p<.06), due to the TD children tending to show N4 amplitude differences among spectral items (ba < da < ga, p<.05) and the group with LI not showing any (p<.4).

In summary, trends for Language Group × Spectral Item interactions were found in the syllable but not in the transition ERPs. In these trends, the TD group showed indices of better spectral differentiation than the group with LI.

P2 Spectral Differentials as a function of CVT duration

Absolute values of spectral differentials of the P2 peak showed a significant main CVT effect in the whole subject sample (F(1,45)=8.38, p<.006, η2=.16), indicating that with longer stimuli, the amplitude differences between spectral items increased. However, no interactions involving Language Group were found.

ERP-behavior correlations

CELF - ERP

In the LI children, poorer expressive language skills were associated with diminished long-latency negative brain responses. The CELF-R/3 Expressive language sub-scores were correlated with the composite measure of transition N2/N4 amplitudes: r =−.57, p<.005, and showed a trend to correlate with the N2/N4 amplitude differentials (r =−.36, p<.08). Correlations involving syllable N2/N4 indexes and Receptive CELF sub-scores were not significant.

Discussion

This study examined temporal and spectral aspects of auditory processing in LI using perceptual and electrophysiological measures of syllable and consonant-to-vowel transition (CVT) processing. The children with LI did not discriminate syllables as well as the TD children, regardless of the CVT duration. Although in general the brain responses did not show statistically significant main Language Group effects, children with LI showed a tendency for diminished long-latency negative ERP peaks (N2 and N4). In the children with LI, smaller long-latency negative ERP peaks were associated with worse expressive language skills. In addition, syllable, but not transition, P1 and N4 peaks showed trends of diminished amplitude modulations by the three spectral items. However, the group with LI showed a normal overall sound duration - ERP amplitude function. Unexpectedly, it was the middle age group with LI (11–14 years) that showed tendencies for an enhanced P2 peak and diminished spectral differences of the P1 peak.

The current LI sample

All children in the group with LI met the selection criteria. They also showed poorer non-verbal intelligence measures, as compared with their controls (Table 1). However, no significant correlations were found between the block design scores, a measure of PIQ, and ERP or behavioral discrimination measures, indicating that PIQ did not influence the variables of interest of the present study (see also McGregor, Newman, Reilly, & Capone, 2002).

Further, we found marginally poorer CELF-R/3 Total Language scores in children with LI in GRP2 (11–14 years) as compared with those in GRP1 (7–10 years). Children in the LI GRP2 also tended to show more diminished N2/N4 amplitudes, as shown in Figures 2 and 3. However, given the lack of significant N2/N4 age group differences, the question remains whether these trends are due to sample bias or whether the lack of significance is caused by the heterogeneity of the LI population. The latter account is somewhat favored by the fact that, in the whole LI sample, the N2/N4 amplitudes correlated with the Expressive CELF-R/3 Language scores, which suggests that there might be a link between the N2-N4 ERP effects and the severity of language impairment.

Finally, 5 children with LI in GRP2 had a clinical ADHD diagnosis. However, as Fig. 4 demonstrates, the ERP abnormality pattern was similar across the ADHD (+) and ADHD (−) sub-groups.

Behavioral findings

As expected, children with LI were inferior on the behavioral syllable discrimination task. A specific discrimination deficit, as opposed to a general performance bias, is suggested by the fact that performance of children with LI was less accurate on Different than Same stimulus pairs. However, discrimination accuracy in the group with LI was not affected by CVT duration in a syllable (20 vs. 80 ms). In part, this could be attributed to the fact that in the attended conditions, syllable perception is categorical and can be accomplished on the basis of partial cues, i.e., without fine-grained acoustic analysis. Nonetheless, this indicates that short CVT duration did not interfere with syllable perception in the current group with LI.

ERP findings

Overall ERP effects

No main Language Group-related ERP effect survived Bonferroni correction. However, trends were found for the N2 and N4 peaks to be diminished in the overall group with LI and for the P2 peak to be enhanced in 11–14-year-old children with LI. These trends are consistent with previous literature reporting a diminished N2 peak (Korpilahti & Lang, 1994; Tonnquist-Uhlen, 1996) and abnormalities in the N1-P2 range (Adams, Courchesne, Elmasian, & Lincoln, 1987; Bishop & McArthur, 2004; Bishop & McArthur, 2005; Lincoln, Courchesne, Harms, & Allen, 1995; McArthur & Bishop, 2004b; Simon-Cereijido, Bates, Wulfeck, Cummings, Townsend, Williams, & Čeponienė, 2006) in this population. Interestingly, the lack of group effects, the presence of subtle effects, and inconsistent findings across studies appear to be characteristic of research on auditory perception in the LI population. A recurring theme in several researchers’ work, aimed at clarifying this phenomenon, has been the heterogeneity of the LI population (Bishop, Hardiman, Uwer, & von Suchodoletz, 2007b; McArthur and Bishop, 2004a; Neville, Coffey, Holcomb, & Tallal, 1993). Specifically, Bishop and colleagues have repeatedly found that only sub-sets of their children with LI show auditory perceptual difficulties. Almost invariably, these are younger children with LI, but also those with a receptive, rather than expressive, LI subtype or with concomitant reading problems (Bishop, Hardiman, Uwer, & von Suchodoletz, 2007b; McArthur & Bishop, 2004a; 2004b; 2005). The present study has one other major source of variability, a wide age range, which was included in order to evaluate the maturational account of LI. Although the present sample is larger than most reported previously, the wide age range introduced an additional source of variability associated with maturational changes in the ERP waveforms that are likely to differ between the two groups, yet not in a homogeneous manner. Some of these changes could be LI-specific but others might not necessarily be linked with the maturation of the LI-related auditory abilities (McArthur & Bishop, 2005). Therefore, the sub-significant findings of the present study are discussed with an understanding that they might apply only to a sub-set of children with LI. The following discussion is organized by the major questions addressed in the study.

Temporal processing deficit?

The term “temporal processing” has been used rather loosely in reference to a variety of stimulus and neural processing aspects, some of which have little to do with temporal aspects of information processing (Mody, Studdert-Kennedy, & Brady, 1997; Reed, 1989). Here, we define temporal processing as processing of time-domain information. As a result, the range of referent processes narrows down to three major categories. The first category of temporal processing is processing of time itself, in other words, “placing” events in time. The auditory ERP indices, likely reflective of this aspect, are the N1 and P2 peaks. Therefore, relevant discussion on this aspect is presented in the P2 section below.

The second neural property that has been traditionally assigned to “temporal” processing refers to the speed of information sampling, or temporal resolution. Temporal resolution denotes the smallest temporal intervals at which changes in incoming information can be encoded. This is the only way in which information processing speed (as opposed to information transfer speed) can be related to temporal processing: the faster the synaptic transmission and the shorter the refractory state of the engaged circuits, the faster the processing speed and the higher the temporal resolution. However, the scale of temporal resolution of the auditory system is several times finer than that required to process time-domain features relevant for speech perception. For example, voice-onset time differences between voiced and unvoiced consonants are on the order of a few tens of milliseconds, while the temporal resolution of the auditory system is on the order of a few milliseconds (Ehrlich, Casseday, & Covey, 1997). Therefore, the temporal resolution of the auditory system might be more relevant for processing of stimulus spectrum (frequency/pitch) than of time-domain features. This is further discussed in the Spectral Processing section below.

The third category of temporal processing is highly relevant to auditory perception and refers to processing of such time-domain features as duration (as in the example above) and temporal envelope (Casseday, Ehrlich, & Covey, 1994; He, Hashikawa, Ojima, & Kinouchi, 1997; Picton, Woods, & Proulx, 1978). Two previous ERP studies have produced MMN data assessing sound duration discrimination in children with LI (Korpilahti & Lang, 1994; Uwer, et al., 2002). Neither found MMN abnormalities for simple tone duration contrasts of 110 vs. 50 and 175 vs. 100 ms, respectively. Consistent with these MMN findings, the group with LI in the present study showed normal enhancement of negative power during the N2-N4 latency range in response to longer sounds (Fig. 5). Such enhancement of late negativities with longer sounds has been explained by elicitation of a hypothetical duration-selective response as well as of a sustained potential (He, et al., 1997; Kushnerenko, et al., 2001; Picton, Woods, & Proulx, 1978). Further, short CVT durations within syllables did not impair behavioral syllable discrimination in the group with LI more than in the TD group. This echoes findings of McArthur and Bishop (2004a; 2005) showing that spectrum-based discrimination of simple as well as complex stimuli in children with LI does not vary with sound duration (25 vs. 250 ms in their studies). Altogether, these data suggest that no major problems exist in the population with LI in encoding or discriminating sound durations, even of very brief sounds, and fail to demonstrate adverse effects of sound brevity on sensory processing or perception of spectral sound features.

Fig. 5
Sound duration effects on syllable and transition ERPs in the two Language Groups (n=25 each) at the Cz electrode. The groups showed comparable enhancement in long-latency negativities with longer sounds.

Spectral processing deficit?

As in previous studies, in the present study we found that the P2 and N2 peaks of the auditory ERPs vary as a function of stimulus spectrum (Čeponienė, et al., 2008). In the present study, the Spectral Item effects of these two peaks were highly significant and did not show interactions with Language Group. This indicates that certain aspects, or levels, of spectral encoding occur accurately in the population with LI. However, two other peaks, the P1 and N4, showed a tendency to have diminished Spectral Item effects in the group with LI and, notably, in syllable ERPs only. Since the only difference between the three Spectral Items /ba/, /da/, and /ga/ was the spectral content of the consonant bursts and consonant-to-vowel transitions, this finding is suggestive of diminished accuracy, or robustness, of spectral encoding, at least in a sub-set of children with LI, at the processing levels indexed by the P1 and N4 peaks. This notion is in accordance with earlier behavioral data indicating spectral discrimination deficits in about one-third of children with LI (McArthur & Bishop, 2004a; 2004b; McArthur & Bishop, 2005) as well as with ERP studies that have found diminished spectral contrast MMNs in this population (Holopainen, et al., 1997; Korpilahti & Lang, 1994; Korpilahti, 1995; Shafer, et al., 2005; Uwer, et al., 2002). The relative sparing of the transition ERP effects in the group with LI can be explained by the simpler larger-scale structure of these stimuli, permitting easier stimulus-neuropil mapping and less involvement of higher-order sensory integration. This is in line with an earlier finding of transitions eliciting the most consistent Spectral Item effects in typical children and adults (Čeponienė, et al., 2008).

The N2 and especially the N4 peaks (Čeponienė, et al., 2001) have been suggested to reflect integrative processing of sound content, involving the formation of larger-temporal-scale, higher-level spectral-code neural transients on the basis of temporally local spectral, amplitude, and other features (Čeponienė, et al., 2005; 2008; Diesch & Luce, 1997; Karhu, et al., 1997; Woolley, Fremouw, Hsu, & Theunissen, 2005). This processing stage might be instrumental in phoneme matching during online auditory processing, especially in poor audibility conditions or during language acquisition (Näätänen, 2001; Čeponienė, et al., 2008). In light of this evidence, the tendency for the diminished N2/N4 amplitudes in the population with LI may reflect an abnormality specifically in higher-order, integrative intra-cortical sensory processing. This is supported by the fact that, in the group with LI, the raw amplitudes of the transition N2/N4 peaks correlated with their Expressive CELF-R/3 scores (the corresponding spectral differentials showed a similar trend).

In and of itself, such an “integrative” deficiency does not necessarily require a deficit in encoding first- or even second-order acoustic features, which are chiefly defined by thalamo-cortical inputs. Perception-ready neural representations of complex sounds are formed by integration of elementary acoustic features into unitary percepts that can often be described as auditory “objects” (Näätänen, Tervaniemi, Sussman, Paavilainen, & Winkler, 2001), and those for over-learned sounds, such as phonemes of one’s native language, by detecting unique feature combinations serving as recognition cues (Diesch & Luce, 1997). These functions are cortical and involve higher-order mechanisms of “sensory intelligence” (Näätänen, et al., 2001). Several neuro-functional mechanisms could jeopardize integrative intra-cortical processing. First, intra-cortical signal fidelity can be compromised by a poor signal-to-noise ratio (SNR), which is contingent on the fine attunement of intra-cortical inhibition and excitation (Liu, Wu, Arbuckle, Tao, & Zhang, 2007; Keeler, Pichler, & Ross, 1989). Neuro-computational as well as aging-population studies have shown that SNR levels correlate with perception (Li, Lindenberger, & Sikstrom, 2001; Murphy, Craik, Li, & Schneider, 2000). Perceptual deficits resulting from problems with SNR could include difficulties with brief or rapidly presented sounds (a poor signal may require a longer time for mapping it to perception), perception in noise (Ziegler, et al., 2005; Wright, et al., 1997), underspecified phoneme boundaries (Coady, Evans, Mainela-Arnold, & Kluender, 2007; Evans, Viele, Kass, & Tang, 2002; Sussman, 1993; Stark & Heinz, 1996), and even implicit learning, all found in LI. Another, related cortical mechanism that is likely to be involved in integrative processing is cortical coherence, thought to be instrumental for object-level perception (review in Ribary, 2005). Coherence deficits might be related to a decreased SNR, either as a cause or a consequence, or they might represent an independent abnormality. Finally, a shorter span of the early stages of neural transients (the “echoic” stage of short-term memory), as suggested below based on the trend for enhanced N1-P2 amplitudes, may fail to provide sufficient time for the later-stage integrative processes. Regardless of the precise mechanism, the notion of a higher-order (integrative) sensory processing deficit is in agreement with recent behavioral data demonstrating impaired speech sound perception in children with LI alongside intact perception of basic auditory features (Fernell, et al., 2002; Nagarajan, Mahncke, Salz, Tallal, Roberts, & Merzenich, 1999; Ziegler, et al., 2005).

The P2 enhancement: sensory reactivity, memory, or perception?

The P2 appears to be a dual-nature response, reflecting sensory-attentional interface ontogenetically preceding the N1 mechanism (Čeponienė, et al., 2005). This notion is derived from two lines of evidence, one demonstrating P2 sensitivity to sound audibility and salience, similar to that of the N1 (Čeponienė, et al., 2002; Čeponienė, et al., 2005; Picton, Hillyard, Krausz, & Galambos, 1974; Woods, Knight, & Neville, 1984), and another indicating more consistent perception – ERP feature mapping for the P2 than for the N1 (Čeponienė, et al., 2008; Crowley & Colrain, 2004; Horev, Most, & Pratt, 2007; Lang, et al., 1990; Novak, Ritter, & Vaughan, 1992; Tremblay & Kraus, 2002). An earlier study found an enhanced auditory P2 in developmental dysphasia (Courchesne & Yeung-Courchesne, 1987). The most straightforward explanation of a tendency for the increased P2 (and possibly N1) in the older children with LI is enhanced non-specific sensory reactivity. This has several potential implications for perception. First, there might be a link between enhanced N1-P2 and distractibility (Escera, Alho, Schröger, & Winkler, 2000), leading to poorer voluntary attention in the population with LI (Shafer, Ponton, Datta, Morr, & Schwartz, 2007; Stevens, Sanders, & Neville, 2006). Second, the exaggerated sensory arousal may impede fine-grained feature processing by either suppressing it (lateral inhibition by a strong signal) or by supplying too much facilitation, broadening the population of feature-specific neurons and thus diminishing their feature-specificity. A supporting finding is that in GRP2 (11–14 years) TDs, the P2 increased in amplitude with longer transitions, whereas in the children with LI in GRP2, it was exaggerated with 20-ms transitions and did not show the stimulus feature - ERP (i.e., duration - P2 amplitude) correspondence.

Alternatively, since the P2 has been shown to be related to perception, a possibly enhanced sensory P2 in children with LI might reflect “enhanced” auditory feature processing. Indeed, McArthur and Bishop (McArthur & Bishop, 2004b) showed that the N1-P2 region was abnormal in children with poor as well as with normal frequency discrimination thresholds, and, judging from their waveforms, the ERP hallmark in the normal-threshold subgroup of children with LI was an increased P2 peak. However, while the P2 showed a strong Spectral Item effect in the overall subject sample, there was no interaction involving Language Group that might support the above hypothesis. Therefore, the more likely explanation is that in children with LI the P2 might have a broader-tuned, and thus less precise, stimulus-response function, resulting in an overall larger and yet less specific response.

Finally, the tendency for increased P2 and, possibly, N1 peaks in the older children may reflect reduced persistence of a transient stimulus trace in the population with LI. The N1 and P2 amplitudes diminish (habituate) with stimulus repetition, unlike those of the P1 or N2-N4 (Čeponienė, et al., 2002). The stronger (or longer-lasting) the neural transient, the smaller the N1-P2 amplitude elicited by subsequent stimuli, and vice versa (Conley, Michalewski, & Starr, 1999; Lü, Williamson, & Kaufman, 1992; Sams, Kaukoranta, Hämäläinen, & Näätänen, 1991). Therefore, larger N1 and P2 amplitudes in the population with LI may index weaker, or less durable, neural transients. This interpretation is consistent with earlier findings suggesting impaired short-term auditory memory in this population (Archibald & Gathercole, 2007; Barry, Hardiman, Line, White, Yasin, & Bishop, 2008; Lincoln, Dickstein, Courchesne, Elmasian, & Tallal, 1992; Tallal, et al., 1991; Townsend, Wulfeck, Nichols, & Koch, 1995). The potential link with language abilities is that all levels and forms of sensory memory are instrumental for consolidating transient stimulus traces into long-term representations during language acquisition and, later, during language comprehension. Transient memories of stimulus occurrence may also contribute to neural processing of time, or “placing” events in time. This is the major large-scale temporal structure tapped by the temporal order judgment (TOJ) paradigm used to identify temporal processing deficits in LI. A deficiency in “placing” events in time may make it difficult to determine the rate, or order, of stimuli in a sequence, extract temporal invariances, and construct time-dependent predictions about upcoming stimuli. Many of these abilities have been shown to be impaired in the population with LI.

Aberrance or maturational delay?

Little, if any, direct evidence for a maturational delay in auditory processing was found in the present sample of children with LI. Behaviorally, the inferior syllable discrimination of the children with LI did not improve with age, whereas the TD group showed such a tendency. If anything, the N1-P2 region of the ERP waveform tended to "mature" earlier in these children than in their typically developing peers. This is in line with the findings of Bishop et al. (2007b), who found that, at 100 ms post-stimulus, the ERP waveform in a subgroup of children with receptive SLI tended to be more negative than that of their controls. While this atypical pattern of maturational change does not support the idea of maturational delay, it appears consistent with an idea put forward by Wright and Zecker (2004), who suggested that a combination of delayed maturation of certain auditory skills with an abnormal halt in maturation brought about by puberty might explain the uneven landscape of spared and impaired skills found in LI. That is, specifically the later-maturing, more complex skills remain under-developed in children with LI because of a maturational halt imposed by the onset of puberty. Consistent with this notion, the ERP waveforms of the present study suggest that the N2/N4 diminution was more severe in the two older groups than in the youngest group of children with LI (Figs 2 & 3), although statistically the Language Group × Age interaction was not significant.

On the other hand, the neural basis of the relative lack of the N2/N4 abnormality in the younger, as compared with the older, group with LI might be exuberant synaptic connectivity during the pre-pubertal period, reflecting a lack of experience-driven refinement. This could be likened to an extended "critical period" in animals raised in an unstructured sensory environment (e.g., white noise in the auditory case; Zhou, Nagarajan, Mossop, & Merzenich, 2008). This idea is consistent with the fact that, as a population, children with LI achieve sensory-motor, perceptual, and language milestones later (if at all) than their typical peers. If true, excessive synaptic connectivity could cause a diminished intra-cortical signal-to-noise ratio and under-specified phonemic representations of speech sounds in pre-pubertal children with LI.

Finally, one should keep in mind that the heterogeneity of the population with LI, compounded by the wide age range of the current sample, might have biased the current results or, conversely, diluted the strength of the trends observed here. Nevertheless, the current electrophysiological and behavioral data are consistent with a complex maturational aberrance in LI, possibly related to a pubertal arrest, that cannot be explained by a maturational delay alone.

Overall conclusions

While both behavior and auditory ERPs suggest dampened spectral processing in children with LI, this could not be accounted for by sound brevity. Rather, the results point to a deficit in acoustic feature integration, a higher-order level of sensory processing. The observed maturational trajectory suggests complex developmental deviance rather than a simple maturational delay.

Acknowledgments

This work was funded by the National Institutes of Health Grant NINDS NS22343. We thank Dr. Paavo Alku for stimulus generation and all children and their parents who volunteered their time and efforts to participate in the study.

Footnotes

1In analogy to the visual spectrum of colors, the auditory spectrum refers to sound frequencies, their combinations, and derivatives, such as tone frequencies, vowel formants, or voice pitch.

2Trials with no response were excluded from d′ calculations because the children with LI had more of them (p < .03).
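For readers unfamiliar with the measure, the sketch below shows one common way to compute d′ from hit and false-alarm counts once no-response trials have been dropped; the small rate correction and the example counts are illustrative assumptions, since the text does not specify the exact procedure used.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' from response counts; no-response trials are simply not counted.
    A small correction keeps the hit and false-alarm rates away from 0 and 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts: 40 target trials (32 hits, 6 misses, 2 no-responses dropped)
# and 40 non-target trials (5 false alarms, 33 correct rejections, 2 dropped).
print(round(d_prime(32, 6, 5, 33), 2))  # ~2.0
```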


References

  • Adams J, Courchesne E, Elmasian R, Lincoln A. Increased amplitude of the auditory P2 and P3b components in adolescents with developmental dysphasia. Electroencephalogr Clin Neurophysiol Suppl. 1987;40:577–83. [PubMed]
  • Alku P, Tiitinen H, Näätänen R. A method for generating natural-sounding speech stimuli for cognitive brain research. Clinical Neurophysiology. 1999;110:1329–1333. [PubMed]
  • Archibald LM, Gathercole SE. Nonword repetition in specific language impairment: more than a phonological short-term memory deficit. Psychon Bull Rev. 2007;14(5):919–24. [PubMed]
  • Barry JG, Hardiman MJ, Line E, White KB, Yasin I, Bishop DV. Duration of auditory sensory memory in parents of children with SLI: a mismatch negativity study. Brain Lang. 2008;104(1):75–88. [PubMed]
  • Bishop DV, Adams CV, Nation K, Rosen S. Perception of transient non-speech stimuli is normal in specific language impairment: evidence from glide discrimination. Applied Psycholinguistics. 2005a;26:175–194.
  • Bishop DV, Bishop SJ, Bright P, James C, Delaney T, Tallal P. Different origin of auditory and phonological processing problems in children with language impairment: evidence from a twin study. J Speech Lang Hear Res. 1999b;42(1):155–68. [PubMed]
  • Bishop DV, Hardiman M, Uwer R, von Suchodoletz W. Maturation of the long-latency auditory ERP: step function changes at start and end of adolescence. Dev Sci. 2007a;10(5):565–75. [PMC free article] [PubMed]
  • Bishop DV, Hardiman M, Uwer R, von Suchodoletz W. Atypical long-latency auditory event-related potentials in a subset of children with specific language impairment. Dev Sci. 2007b;10(5):576–87. [PMC free article] [PubMed]
  • Bishop DV, McArthur GM. Immature cortical responses to auditory stimuli in specific language impairment: evidence from ERPs to rapid tone sequences. Dev Sci. 2004;7(4):F11–8. [PubMed]
  • Bishop DV, McArthur GM. Individual differences in auditory processing in specific language impairment: a follow-up study using event-related potentials and behavioural thresholds. Cortex. 2005;41(3):327–41. [PMC free article] [PubMed]
  • Bishop DVM. Listening out for subtle deficits. Nature. 1997;387:129–130. [PubMed]
  • Bishop DVM, Carlyon RP, Deeks JM, Bishop SJ. Auditory temporal processing impairment: Neither necessary nor sufficient for causing language impairment in children. Journal of Speech, Language, and Hearing Research. 1999a;42:1295–1310. [PubMed]
  • Casseday J, Ehrlich D, Covey E. Neural tuning for sound duration: role of inhibitory mechanisms in the inferior colliculus. Science. 1994;264:847–50. [PubMed]
  • Catts HW. The relationship between speech-language impairments and reading disabilities. J Speech Hear Res. 1993;36(5):948–58. [PubMed]
  • Čeponienė R, Alku P, Westerfield M, Torkki M, Townsend J. Event-related potentials differentiate syllable and non-phonetic correlate processing in children and adults. Psychophysiology. 2005;42:391–406. [PubMed]
  • Čeponienė R, Kushnerenko E, Fellman V, Renlund M, Raivio K, Näätänen R. Event-related potential (ERP) features indexing central auditory discrimination in newborns. Cognitive Brain Research. 2002;13:101–113. [PubMed]
  • Čeponienė R, Rinne T, Näätänen R. Maturation of cortical sound processing as indexed by event-related potentials. Clinical Neurophysiology. 2002;113:870–882. [PubMed]
  • Čeponienė R, Shestakova A, Balan P, Alku P, Yaguchi K, Näätänen R. Children’s auditory event-related potentials index stimulus complexity and “speechness”. International Journal of Neuroscience. 2001;109:245–260. [PubMed]
  • Čeponienė R, Torki M, Koyama A, Alku P, Townsend J. Event-related potentials reflect spectral differences in speech and non-speech stimuli in children and adults. Clin Neurophysiol. 2008;119(7):1560–77. [PMC free article] [PubMed]
  • Coady JA, Evans JL, Mainela-Arnold E, Kluender KR. Children with specific language impairments perceive speech most categorically when tokens are natural and meaningful. J Speech Lang Hear Res. 2007;50(1):41–57. [PubMed]
  • Conley EM, Michalewski HJ, Starr A. The N100 auditory cortical evoked potential indexes scanning of auditory short-term memory. Clin Neurophysiol. 1999;110(12):2086–93. [PubMed]
  • Courchesne E, Yeung-Courchesne R. Event-related brain potentials. In: Rutter M, Tuma A, Lann I, editors. Assessment and diagnosis in child psychopathology. NY: The Guilford Press; 1987. pp. 264–299.
  • Cowan N, Lichty W, Grove TR. Properties of memory for unattended spoken syllables. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1990;16:258–269. [PubMed]
  • Crowley KE, Colrain IM. A review of the evidence for P2 being an independent component process: age, sleep and modality. Clin Neurophysiol. 2004;115(4):732–44. [PubMed]
  • Diesch E, Luce T. Magnetic fields elicited by tones and vowel formants reveal tonotopy and nonlinear summation of cortical activation. Psychophysiology. 1997;34(5):501–10. [PubMed]
  • Ehrlich D, Casseday JH, Covey E. Neural tuning to sound duration in the inferior colliculus of the big brown bat, Eptesicus fuscus. J Neurophysiol. 1997;77(5):2360–72. [PubMed]
  • Escera C, Alho K, Schröger E, Winkler I. Involuntary attention and distractibility as evaluated with event-related brain potentials. Audiology & Neuro-otology. 2000;5:151–66. [PubMed]
  • Evans JL, Viele K, Kass RE, Tang F. Grammatical morphology and perception of synthetic and natural speech in children with specific language impairments. J Speech Lang Hear Res. 2002;45(3):494–504. [PubMed]
  • Fernell E, Norrelgen F, Bozkurt I, Hellberg G, Lowing K. Developmental profiles and auditory perception in 25 children attending special preschools for language-impaired children. Acta Paediatr. 2002;91(10):1108–15. [PubMed]
  • He J, Hashikawa T, Ojima H, Kinouchi Y. Temporal integration and duration tuning in the dorsal zone of cat auditory cortex. J Neurosci. 1997;17(7):2615–25. [PubMed]
  • Helzer JR, Champlin CA, Gillam RB. Auditory temporal resolution in specifically language-impaired and age-matched children. Percept Mot Skills. 1996;83(3 Pt 2):1171–81. [PubMed]
  • Holopainen IE, Korpilahti P, Juottonen K, Lang H, Sillanpää M. Attenuated auditory event-related potential (mismatch negativity) in children with developmental dysphasia. Neuropediatrics. 1997;28:253–256. [PubMed]
  • Horev N, Most T, Pratt H. Categorical Perception of Speech (VOT) and Analogous Non-Speech (FOT) signals: Behavioral and electrophysiological correlates. Ear Hear. 2007;28(1):111–28. [PubMed]
  • Jacobsen T, Schröger E. Is there pre-attentive memory-based comparison of pitch? Psychophysiol. 2001;38:723–727. [PubMed]
  • Jung TP, Makeig S, Westerfield M, Townsend J, Courchesne E, Sejnowski TJ. Removal of eye activity artifacts from visual event-related potentials in normal and clinical subjects. Clin Neurophysiol. 2000b;111:1745–58. [PubMed]
  • Karhu J, Herrgård E, Pääkkönen A, Luoma L, Airaksinen E, Partanen J. Dual cerebral processing of elementary auditory input in children. NeuroReport. 1997;8:1327–1330. [PubMed]
  • Keeler JD, Pichler EE, Ross J. Noise in neural networks: thresholds, hysteresis, and neuromodulation of signal-to-noise. Proc Natl Acad Sci U S A. 1989;86(5):1712–6. [PubMed]
  • Korpilahti P. Auditory discrimination and memory functions in SLI children: A comprehensive study with neurophysiological and behavioral methods. Scandinavian Journal of Logopedics and Phonetics. 1995;20:131–139.
  • Korpilahti P, Lang HA. Auditory ERP components and mismatch negativity in dysphasic children. Electroencephalography and Clinical Neurophysiology. 1994;91:256–264. [PubMed]
  • Kraus N, McGee TJ, Carrell TD, Zecker SG, Nicol TG, Koch DB. Auditory neurophysiologic responses and discrimination deficits in children with learning problems. Science. 1996;273:971–973. [PubMed]
  • Kushnerenko E, Ceponiene R, Fellman V, Huotilainen M, Winkler I. Event-related potential correlates of sound duration: similar pattern from birth to adulthood. NeuroReport. 2001;12:3777–3781. [PubMed]
  • Lang H, Nyrke T, Ek M, Aaltonen O, Raimo I, Näätänen R. Pitch discrimination performance and auditory event-related potentials. In: Brunia CHM, Gaillard AWK, Kok A, Mulder G, Verbaten MN, editors. Psychophysiological Brain Research. Vol. 1. Tilburg: Tilburg University Press; 1990. pp. 294–298.
  • Leonard LB. Children with Specific Language Impairment. Cambridge, MA: MIT Press; 1997.
  • Li SC, Lindenberger U, Sikstrom S. Aging cognition: from neuromodulation to representation. Trends Cogn Sci. 2001;5(11):479–486. [PubMed]
  • Lincoln AJ, Courchensne E, Harms L, Allen M. Sensory modulation of auditory stimuli in children with autism and receptive developmental language disorder: event-related brain potential evidence. Journal of Autism and Developmental Disorders. 1995;25:521–539. [PubMed]
  • Lincoln AJ, Dickstein P, Courchesne E, Elmasian R, Tallal P. Auditory processing abilities in non-retarded adolescents and young adults with developmental receptive language disorder and autism. Brain Lang. 1992;43(4):613–22. [PubMed]
  • Liu BH, Wu GK, Arbuckle R, Tao HW, Zhang LI. Defining cortical frequency tuning with recurrent excitatory circuitry. Nat Neurosci. 2007;10(12):1594–600. [PMC free article] [PubMed]
  • Lowe AD, Campbell RA. Temporal discrimination in aphasoid and normal children. J Speech Hear Res. 1965;8(3):313–4. [PubMed]
  • Lü ZL, Williamson SJ, Kaufman L. Human auditory primary and association cortex have differing lifetimes for activation traces. Brain Research. 1992;572:236–241. [PubMed]
  • Marler JA, Champlin CA, Gillam RB. Auditory memory for backward masking signals in children with language impairment. Psychophysiology. 2002;39(6):767–80. [PubMed]
  • McArthur GM, Bishop DV. Frequency discrimination deficits in people with specific language impairment: reliability, validity, and linguistic correlates. J Speech Lang Hear Res. 2004a;47(3):527–41. [PubMed]
  • McArthur GM, Bishop DV. Which people with specific language impairment have auditory processing deficits? Cogn Neuropsychol. 2004b;21(1):79–94. [PubMed]
  • McArthur GM, Bishop DV. Speech and non-speech processing in people with specific language impairment: a behavioural and electrophysiological study. Brain Lang. 2005;94(3):260–73. [PubMed]
  • McGregor KK, Newman RM, Reilly RM, Capone NC. Semantic representation and naming in children with specific language impairment. J Speech Lang Hear Res. 2002;45(5):998–1014. [PubMed]
  • Mody M, Studdert-Kennedy M, Brady S. Speech perception deficits in poor readers. Auditory processing or phonological coding? Journal of Experimental Child Psychology. 1997;64:199–231. [PubMed]
  • Montgomery JW. Working memory and comprehension in children with specific language impairment: what we know so far. J Commun Disord. 2003;36(3):221–31. [PubMed]
  • Murphy DR, Craik FI, Li KZ, Schneider BA. Comparing the effects of aging and background noise on short-term memory performance. Psychol Aging. 2000;15(2):323–34. [PubMed]
  • Näätänen R, Tervaniemi M, Sussman E, Paavilainen P, Winkler I. “Primitive intelligence” in the auditory cortex. Trends Neurosci. 2001;24:283–8. [PubMed]
  • Nagarajan S, Mahncke H, Salz T, Tallal P, Roberts T, Merzenich MM. Cortical auditory signal processing in poor readers. Proc Natl Acad Sci U S A. 1999;96(11):6483–8. [PubMed]
  • Neville HJ, Coffey SA, Holcomb PJ, Tallal P. The neurobiology of sensory and language processing in language-impaired children. Journal of Cognitive Neuroscience. 1993;5:235–253. [PubMed]
  • Norrelgen F, Lacerda F, Forssberg H. Temporal resolution of auditory perception and verbal working memory in 15 children with language impairment. J Learn Disabil. 2002;35(6):540–46. [PubMed]
  • Novak G, Ritter W, Vaughan HGJ. Mismatch detection and the latency of temporal judgements. Psychophysiology. 1992;29:398–411. [PubMed]
  • Picton TW, Hillyard SA, Krausz HI, Galambos R. Human auditory evoked potentials. I. Evaluation of components. Electroencephalography and Clinical Neurophysiology. 1974;36:179–190. [PubMed]
  • Picton TW, Woods DL, Proulx GB. Human auditory sustained potentials. I. The nature of the response. Electroencephalography and Clinical Neurophysiology. 1978;45:187–197. [PubMed]
  • Picton TW, Woods DL, Proulx GB. Human auditory sustained potentials. II. Stimulus relationships. Electroencephalography and Clinical Neurophysiology. 1978;45:198–210. [PubMed]
  • The Agency for Healthcare Research and Quality. Criteria for Determining Disability in Speech-Language Disorders. 2002. Report 52.
  • Reed A. Speech perception and the discrimination of brief auditory cues in reading disabled children. Journal of Experimental Child Psychology. 1989;48:270–292. [PubMed]
  • Ribary U. Dynamics of thalamo-cortical network oscillations and human perception. Prog Brain Res. 2005;150:127–42. [PubMed]
  • Rice ML, Wexler K, Redmond SM. Grammaticality judgements of an extended optional infinitive grammar: evidence from English-speaking children with specific language impairment. J Speech Lang Hear Res. 1999;42(4):943–61. [PubMed]
  • Sams M, Kaukoranta E, Hämäläinen M, Näätänen R. Cortical activity elicited by changes in auditory stimuli: Different sources for the magnetic N100 and mismatch responses. Psychophysiology. 1991;28:21–29. [PubMed]
  • Sattler J. Assessment of Children. 3. San Diego, CA: Jerome M. Sattler; 1988.
  • Semel E, Wiig EH, Secord W. Clinical Evaluation of Language Fundamentals - Revised. Screening test. San Antonio, TX: The Psychological Corporation Harcourt Brace Jovanovich, Inc; 1989.
  • Semel E, Wiig EH, Secord W. Clinical Evaluation of Language Fundamentals - Third Edition (CELF-3) San Antonio, TX: The Psychological Corporation Harcourt Brace Jovanovich, Inc; 1995.
  • Shafer VL, Morr ML, Datta H, Kurtzberg D, Schwartz RG. Neurophysiological indexes of speech processing deficits in children with specific language impairment. J Cogn Neurosci. 2005;17(7):1168–80. [PubMed]
  • Shafer VL, Ponton C, Datta H, Morr ML, Schwartz RG. Neurophysiological indices of attention to speech in children with specific language impairment. Clin Neurophysiol. 2007;118(6):1230–43. [PMC free article] [PubMed]
  • Simon-Cereijido G, Bates E, Wulfeck B, Cummings A, Townsend J, Williams C, Ceponiene R. Picture naming in children with Specific Language Impairment: Differences in neural patterns throughout development. Paper presented at the Symposium on Research in Child Language Disorders; Madison, WI; June 1–5, 2006.
  • Stark RE, Heinz JM. Perception of stop consonants in children with expressive and receptive-expressive language impairments. J Speech Hear Res. 1996;39(4):676–86. [PubMed]
  • Stevens C, Sanders L, Neville H. Neurophysiological evidence for selective auditory attention deficits in children with specific language impairment. Brain Res. 2006;1111(1):143–52. [PubMed]
  • Sussman E, Ritter W, Vaughan HG., Jr Predictability of stimulus deviance and the mismatch negativity. Neuroreport. 1998;9(18):4167–70. [PubMed]
  • Sussman JE. Perception of formant transition cues to place of articulation in children with language impairments. J Speech Hear Res. 1993;36(6):1286–99. [PubMed]
  • Tallal P, Merzenich MM, Miller S, Jenkins W. Language learning impairments: integrating basic science, technology, and remediation. Exp Brain Res. 1998;123(1–2):210–9. [PubMed]
  • Tallal P, Piercy M. Defects of non-verbal auditory perception in children with developmental aphasia. Nature. 1973;241(5390):468–9. [PubMed]
  • Tallal P, Piercy M. Developmental aphasia: Rate of auditory processing and selective impairment of consonant perception. Neuropsychologia. 1974;12:83–94. [PubMed]
  • Tallal P, Piercy M. Developmental aphasia: The perception of brief vowels and extended stop consonants. Neuropsychologia. 1975;13:69–74. [PubMed]
  • Tallal P, Stark R, Kallman C, Mellits D. A reexamination of some nonverbal perceptual abilities of language-impaired and normal children as a function of age and sensory modality. Journal of Speech and Hearing Research. 1981b;24:351–357. [PubMed]
  • Tallal P, Stark RE. Speech acoustic-cue discrimination abilities of normally developing and language-impaired children. J Acoust Soc Am. 1981;69(2):568–74. [PubMed]
  • Tallal P, Stark RE, Mellits ED. Identification of language-impaired children on the basis of rapid perception and production skills. Brain and Language. 1985a;25:314–322. [PubMed]
  • Tallal P, Stark RE, Mellits ED. The relationship between auditory temporal analysis and receptive language development: evidence from studies of developmental language disorder. Neuropsychologia. 1985b;23:527–534. [PubMed]
  • Tallal P, Townsend J, Curtiss S, Wulfeck B. Phenotypic profiles of language-impaired children based on genetic/family history. Brain Lang. 1991;41(1):81–95. [PubMed]
  • Tian B, Rauschecker JP. Processing of frequency-modulated sounds in the lateral auditory belt cortex of the rhesus monkey. J Neurophysiol. 2004;92(5):2993–3013. [PubMed]
  • Tonnquist-Uhlen I. Topography of auditory evoked long-latency potentials in children with severe language impairment: the P2 and N2 components. Ear & Hearing. 1996;17:314–326. [PubMed]
  • Townsend J, Wulfeck B, Nichols S, Koch L. Technical Report # CND-9503. Center for Research in Language. San Diego: University of California, San Diego; 1995. Attentional deficits in children with developmental language disorder.
  • Tremblay KL, Kraus N. Auditory training induces asymmetrical changes in cortical neural activity. J Speech Lang Hear Res. 2002;45(3):564–72. [PubMed]
  • Ullman MT, Pierpont EI. Specific language impairment is not specific to language: the procedural deficit hypothesis. Cortex. 2005;41(3):399–433. [PubMed]
  • Uwer R, Albrecht R, von Suchodoletz W. Automatic processing of tones and speech stimuli in children with specific language impairment. Dev Med Child Neurol. 2002;44:527–32. [PubMed]
  • Wechsler D. Wechsler Intelligence Scale for Children. 3. San Antonio, TX: The Psychological Corporation; 1991.
  • Wechsler D. Wechsler Adult Intelligence Scale. 3. San Antonio, TX: The Psychological Corporation; 1997.
  • Winkler I, Lehtokoski A, Alku P, Vainio M, Czigler I, Csépe V, Aaltonen O, Raimo I, Alho K, Lang H, Iivonen A, Näätänen R. Pre-attentive detection of vowel contrasts utilizes both phonetic and auditory memory representations. Cognitive Brain Research. 1999;7:357–369. [PubMed]
  • Woods DL, Knight RT, Neville HJ. Bitemporal lesions dissociate auditory evoked potentials and perception. Electroencephalography and Clinical Neurophysiology. 1984;57:208–220. [PubMed]
  • Woolley SM, Fremouw TE, Hsu A, Theunissen FE. Tuning for spectro-temporal modulations as a mechanism for auditory discrimination of natural sounds. Nat Neurosci. 2005;8(10):1371–9. [PubMed]
  • Wright BA, Lombardino LJ, King WM, Puranik CS, Leonard CM, Merzenich MM. Deficits in auditory temporal and spectral resolution in language-impaired children. Nature. 1997;387:176–177. [PubMed]
  • Wright BA, Zecker SG. Learning problems, delayed development, and puberty. Proc Natl Acad Sci U S A. 2004;101(26):9942–6. [PubMed]
  • Zhou X, Nagarajan N, Mossop BJ, Merzenich MM. Influences of unmodulated acoustic inputs on functional maturation and critical-period plasticity of the primary auditory cortex. Neuroscience. 2008. [PMC free article] [PubMed]
  • Ziegler JC, Pech-Georgel C, George F, Alario FX, Lorenzi C. Deficits in speech perception predict language learning impairment. Proc Natl Acad Sci U S A. 2005;102(39):14110–5. [PubMed]