Results 1–25 of 545,683

1.  Cracking the Language Code: Neural Mechanisms Underlying Speech Parsing 
Word segmentation, detecting word boundaries in continuous speech, is a critical aspect of language learning. Previous research in infants and adults demonstrated that a stream of speech can be readily segmented based solely on the statistical and speech cues afforded by the input. Using functional magnetic resonance imaging (fMRI), the neural substrate of word segmentation was examined on-line as participants listened to three streams of concatenated syllables, containing either statistical regularities alone, statistical regularities and speech cues, or no cues. Despite the participants’ inability to explicitly detect differences between the speech streams, neural activity differed significantly across conditions, with left-lateralized signal increases in temporal cortices observed only when participants listened to streams containing statistical regularities, particularly the stream containing speech cues. In a second fMRI study, designed to verify that word segmentation had implicitly taken place, participants listened to trisyllabic combinations that occurred with different frequencies in the streams of speech they just heard (“words,” 45 times; “partwords,” 15 times; “nonwords,” once). Reliably greater activity in left inferior and middle frontal gyri was observed when comparing words with partwords and, to a lesser extent, when comparing partwords with nonwords. Activity in these regions, taken to index the implicit detection of word boundaries, was positively correlated with participants’ rapid auditory processing skills. These findings provide a neural signature of on-line word segmentation in the mature brain and an initial model with which to study developmental changes in the neural architecture involved in processing speech cues during language learning.
doi:10.1523/JNEUROSCI.5501-05.2006
PMCID: PMC3713232  PMID: 16855090
fMRI; language; speech perception; word segmentation; statistical learning; auditory cortex; inferior frontal gyrus
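The statistical segmentation mechanism described in this abstract rests on transitional probabilities between adjacent syllables: TP(x→y) = count(xy) / count(x), which is high within a word and dips at word boundaries. This is not code from the study; it is a minimal sketch of that computation on a made-up two-word syllable stream ("tupiro", "golabu"), with a threshold chosen for this deterministic toy language.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """TP(x -> y) = count(xy) / count(x), over adjacent syllable pairs."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(x, y): c / first_counts[x] for (x, y), c in pair_counts.items()}

def segment(syllables, tps, threshold=0.8):
    """Insert a word boundary wherever the TP dips below the threshold."""
    words, current = [], [syllables[0]]
    for x, y in zip(syllables, syllables[1:]):
        if tps[(x, y)] < threshold:
            words.append("".join(current))
            current = []
        current.append(y)
    words.append("".join(current))
    return words

# Toy stream built from two hypothetical "words", tu-pi-ro and go-la-bu:
stream = "tu pi ro go la bu tu pi ro tu pi ro go la bu go la bu tu pi ro".split()
tps = transitional_probabilities(stream)
print(segment(stream, tps))
# → ['tupiro', 'golabu', 'tupiro', 'tupiro', 'golabu', 'golabu', 'tupiro']
```

Within-word TPs here are 1.0 while boundary TPs fall to 1/3–2/3, which is the contrast the fMRI streams with "statistical regularities" provide and the no-cue stream lacks.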
2.  The Neural Basis of Speech Parsing in Children and Adults 
Developmental science  2010;13(2):385-406.
Word segmentation, detecting word boundaries in continuous speech, is a fundamental aspect of language learning that can occur solely by the computation of statistical and speech cues. Fifty-four children underwent functional magnetic resonance imaging (fMRI) while listening to three streams of concatenated syllables, which contained either high statistical regularities, high statistical regularities and speech cues, or no easily-detectable cues. Significant signal increases over time in temporal cortices suggest that children utilized the cues to implicitly segment the speech streams. This was confirmed by the findings of a second fMRI run where children displayed reliably greater activity in left inferior frontal gyrus when listening to ‘words’ that occurred more frequently in the streams of speech they just heard. Finally, comparisons between activity observed in these children vs. previously-studied adults indicate significant developmental changes in the neural substrate of speech parsing.
doi:10.1111/j.1467-7687.2009.00895.x
PMCID: PMC3229831  PMID: 20136936
fMRI; language; development; speech perception; word segmentation; statistical learning
3.  Age and experience shape developmental changes in the neural basis of language-related learning 
Developmental science  2011;14(6):1261-1282.
Very little is known about the neural underpinnings of language learning across the lifespan and how these might be modified by maturational and experiential factors. Building on behavioral research highlighting the importance of early word segmentation (i.e. the detection of word boundaries in continuous speech) for subsequent language learning, here we characterize developmental changes in brain activity as this process occurs online, using data collected in a mixed cross-sectional and longitudinal design. One hundred and fifty-six participants, ranging from age 5 to adulthood, underwent functional magnetic resonance imaging (fMRI) while listening to three novel streams of continuous speech, which contained either strong statistical regularities, strong statistical regularities and speech cues, or weak statistical regularities providing minimal cues to word boundaries. All age groups displayed significant signal increases over time in temporal cortices for the streams with high statistical regularities; however, we observed a significant right-to-left shift in the laterality of these learning-related increases with age. Interestingly, only the 5- to 10-year-old children displayed significant signal increases for the stream with low statistical regularities, suggesting an age-related decrease in sensitivity to more subtle statistical cues. Further, in a sample of 78 10-year-olds, we examined the impact of proficiency in a second language and level of pubertal development on learning-related signal increases, showing that the brain regions involved in language learning are influenced by both experiential and maturational factors.
doi:10.1111/j.1467-7687.2011.01075.x
PMCID: PMC3717169  PMID: 22010887
4.  Bilingualism and Inhibitory Control Influence Statistical Learning of Novel Word Forms 
We examined the influence of bilingual experience and inhibitory control on the ability to learn a novel language. Using a statistical learning paradigm, participants learned words in two novel languages that were based on the International Morse Code. First, participants listened to a continuous stream of words in a Morse code language to test their ability to segment words from continuous speech. Since Morse code does not overlap in form with natural languages, interference from known languages was minimized. Next, participants listened to another Morse code language composed of new words that conflicted with the first Morse code language. Interference in this second language was high due to conflict between languages and due to the presence of two colliding cues (compressed pauses between words and statistical regularities) that competed to define word boundaries. Results suggest that bilingual experience can improve word learning when interference from other languages is low, while inhibitory control ability can improve word learning when interference from other languages is high. We conclude that the ability to extract novel words from continuous speech is a skill that is affected both by linguistic factors, such as bilingual experience, and by cognitive abilities, such as inhibitory control.
doi:10.3389/fpsyg.2011.00324
PMCID: PMC3223905  PMID: 22131981
language acquisition; statistical learning; bilingualism; inhibitory control; Morse code; Simon task
5.  Rhythmic grouping biases constrain infant statistical learning 
Linguistic stress and sequential statistical cues to word boundaries interact during speech segmentation in infancy. However, little is known about how the different acoustic components of stress constrain statistical learning. The current studies were designed to investigate whether intensity and duration each function independently as cues to initial prominence (trochaic-based hypothesis) or whether, as predicted by the Iambic-Trochaic Law (ITL), intensity and duration have characteristic and separable effects on rhythmic grouping (ITL-based hypothesis) in a statistical learning task. Infants were familiarized with an artificial language (Experiments 1 & 3) or a tone stream (Experiment 2) in which there was an alternation in either intensity or duration. In addition to potential acoustic cues, the familiarization sequences also contained statistical cues to word boundaries. In speech (Experiment 1) and non-speech (Experiment 2) conditions, 9-month-old infants demonstrated discrimination patterns consistent with an ITL-based hypothesis: intensity signaled initial prominence and duration signaled final prominence. The results of Experiment 3, in which 6.5-month-old infants were familiarized with the speech streams from Experiment 1, suggest that there is a developmental change in infants’ willingness to treat increased duration as a cue to word offsets in fluent speech. Infants’ perceptual systems interact with linguistic experience to constrain how infants learn from their auditory environment.
doi:10.1111/j.1532-7078.2011.00110.x
PMCID: PMC3667627  PMID: 23730217
linguistic stress; rhythmic grouping; statistical learning; perceptual biases; speech segmentation; language acquisition
6.  Deviant fMRI patterns of brain activity to speech in 2–3 year-old children with autism spectrum disorder 
Biological psychiatry  2008;64(7):589-598.
Background
A failure to develop normal language is one of the most common first signs that a toddler might be at risk for autism. Currently the neural bases underlying this failure to develop language are unknown.
Methods
In this study, functional magnetic resonance imaging (fMRI) was utilized to identify the brain regions involved in speech perception in twelve 2- to 3-year-old children with autism spectrum disorder (ASD) during natural sleep. We also recorded fMRI data from two typically developing control groups: a mental age-matched (MA) (n=11) and a chronological age-matched (CA) (n=12) group. During fMRI data acquisition, forward and backward speech stimuli were presented with intervening periods of no sound presentation.
Results
Direct statistical comparison between groups revealed significant differences in regions recruited to process speech. In comparison to their MA-matched controls, the ASD group showed reduced activity in an extended network of brain regions, which are recruited in typical early language acquisition. In comparison to their CA-matched controls, ASD participants showed greater activation primarily within right and medial frontal regions. Laterality analyses revealed a trend towards greater recruitment of right hemisphere regions in the ASD group and left hemisphere regions in the CA group during the forward speech condition. Furthermore, correlation analyses revealed a significant positive relationship between right hemisphere frontal and temporal activity to forward speech and receptive language skill.
Conclusions
These findings suggest that at 2–3 years, children with ASD may be on a deviant developmental trajectory characterized by a greater recruitment of right hemisphere regions during speech perception.
doi:10.1016/j.biopsych.2008.05.020
PMCID: PMC2879340  PMID: 18672231
language; development; laterality; fMRI; pediatric; sleep
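The laterality analyses mentioned in the Results above are commonly reduced to a laterality index, LI = (L − R) / (L + R), ranging from +1 (fully left-lateralized) to −1 (fully right-lateralized). The sketch below is illustrative only, not the study's analysis; the voxel counts are hypothetical, and studies differ in whether L and R are suprathreshold voxel counts or summed activation.

```python
def laterality_index(left, right):
    """LI = (L - R) / (L + R): +1 fully left-lateralized, -1 fully right."""
    if left + right == 0:
        raise ValueError("no suprathreshold activity in either hemisphere")
    return (left - right) / (left + right)

# Hypothetical suprathreshold voxel counts for a forward-speech contrast:
print(laterality_index(340, 260))  # positive: modestly left-lateralized
print(laterality_index(120, 300))  # negative: right-lateralized, as reported for the ASD trend
```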
7.  Altered integration of speech and gesture in children with autism spectrum disorders 
Brain and Behavior  2012;2(5):606-619.
The presence of gesture during speech has been shown to impact perception, comprehension, learning, and memory in normal adults and typically developing children. In neurotypical individuals, the impact of viewing co-speech gestures representing an object and/or action (i.e., iconic gesture) or speech rhythm (i.e., beat gesture) has also been observed at the neural level. Yet, despite growing evidence of delayed gesture development in children with autism spectrum disorders (ASD), few studies have examined how the brain processes multimodal communicative cues occurring during everyday communication in individuals with ASD. Here, we used a previously validated functional magnetic resonance imaging (fMRI) paradigm to examine the neural processing of co-speech beat gesture in children with ASD and matched controls. Consistent with prior observations in adults, typically developing children showed increased responses in right superior temporal gyrus and sulcus while listening to speech accompanied by beat gesture. Children with ASD, however, exhibited no significant modulatory effects in secondary auditory cortices for the presence of co-speech beat gesture. Rather, relative to their typically developing counterparts, children with ASD showed significantly greater activity in visual cortex while listening to speech accompanied by beat gesture. Importantly, the severity of their socio-communicative impairments correlated with activity in this region, such that the more impaired children demonstrated the greatest activity in visual areas while viewing co-speech beat gesture. These findings suggest that although the typically developing brain recognizes beat gesture as communicative and successfully integrates it with co-occurring speech, information from multiple sensory modalities is not effectively integrated during social communication in the autistic brain.
doi:10.1002/brb3.81
PMCID: PMC3489813  PMID: 23139906
Autism spectrum disorders; fMRI; gesture; language; superior temporal gyrus
8.  Statistical language learning in neonates revealed by event-related brain potentials 
BMC Neuroscience  2009;10:21.
Background
Statistical learning is a candidate for one of the basic prerequisites underlying the expeditious acquisition of spoken language. Infants from 8 months of age exhibit this form of learning to segment fluent speech into distinct words. To test the statistical learning skills at birth, we recorded event-related brain responses of sleeping neonates while they were listening to a stream of syllables containing statistical cues to word boundaries.
Results
We found evidence that sleeping neonates are able to automatically extract statistical properties of the speech input and thus detect the word boundaries in a continuous stream of syllables containing no morphological cues. Syllable-specific event-related brain responses found in two separate studies demonstrated that the neonatal brain treated the syllables differently according to their position within pseudowords.
Conclusion
These results demonstrate that neonates can efficiently learn transitional probabilities or frequencies of co-occurrence between different syllables, enabling them to detect word boundaries and in this way isolate single words out of fluent natural speech. The ability to adopt statistical structures from speech may play a fundamental role as one of the earliest prerequisites of language acquisition.
doi:10.1186/1471-2202-10-21
PMCID: PMC2670827  PMID: 19284661
9.  Deficient Brainstem Encoding of Pitch in Children with Autism Spectrum Disorders 
Objective
Deficient prosody is a hallmark of the pragmatic (socially contextualized) language impairment in Autism Spectrum Disorders (ASD). Prosody communicates emotion and intention and is conveyed through acoustic cues such as pitch contour. Thus, the objective of this study was to examine the subcortical representations of prosodic speech in children with ASD.
Methods
Using passively-evoked brainstem responses to speech syllables with descending and ascending pitch contours, we examined sensory encoding of pitch in children with ASD who had normal intelligence and hearing and were age-matched with typically-developing (TD) control children.
Results
We found that some children on the autism spectrum show deficient pitch tracking (evidenced by increased frequency and slope errors and reduced phase locking) compared with TD children.
Conclusions
This is the first demonstration of subcortical involvement in prosody encoding deficits in this population of children.
Significance
Our findings may have implications for diagnostic and remediation strategies in a subset of children with ASD and open up an avenue for future investigations.
doi:10.1016/j.clinph.2008.01.108
PMCID: PMC2536645  PMID: 18558508
auditory brainstem; autism; pitch tracking; prosody
10.  Isolated words enhance statistical language learning in infancy 
Developmental Science  2011;14(6):1323-1329.
Infants are adept at tracking statistical regularities to identify word boundaries in pause-free speech. However, researchers have questioned the relevance of statistical learning mechanisms to language acquisition, since previous studies have used simplified artificial languages that ignore the variability of real language input. The experiments reported here embraced a key dimension of variability in infant-directed speech. English-learning infants (8–10 months) listened briefly to natural Italian speech that contained either fluent speech only or a combination of fluent speech and single-word utterances. Listening times revealed successful learning of the statistical properties of target words only when words appeared both in fluent speech and in isolation; brief exposure to fluent speech alone was not sufficient to facilitate detection of the words’ statistical properties. This investigation suggests that statistical learning mechanisms actually benefit from variability in utterance length, and provides the first evidence that isolated words and longer utterances act in concert to support infant word segmentation.
doi:10.1111/j.1467-7687.2011.01079.x
PMCID: PMC3280507  PMID: 22010892
11.  Biological changes in auditory function following training in children with autism spectrum disorders 
Background
Children with pervasive developmental disorders (PDD), such as children with autism spectrum disorders (ASD), often show auditory processing deficits related to their overarching language impairment. Auditory training programs such as Fast ForWord Language may potentially alleviate these deficits through training-induced improvements in auditory processing.
Methods
To assess the impact of auditory training on auditory function in children with ASD, brainstem and cortical responses to speech sounds presented in quiet and noise were collected from five children with ASD who completed Fast ForWord training.
Results
Relative to six control children with ASD who did not complete Fast ForWord, training-related changes were found in brainstem response timing (three children) and pitch-tracking (one child), and cortical response timing (all five children) after Fast ForWord use.
Conclusions
These results provide an objective indication of the benefit of training on auditory function for some children with ASD.
doi:10.1186/1744-9081-6-60
PMCID: PMC2965126  PMID: 20950487
12.  Learning and Long-Term Retention of Large-Scale Artificial Languages 
PLoS ONE  2013;8(1):e52500.
Recovering discrete words from continuous speech is one of the first challenges facing language learners. Infants and adults can make use of the statistical structure of utterances to learn the forms of words from unsegmented input, suggesting that this ability may be useful for bootstrapping language-specific cues to segmentation. It is unknown, however, whether performance shown in small-scale laboratory demonstrations of “statistical learning” can scale up to allow learning of the lexicons of natural languages, which are orders of magnitude larger. Artificial language experiments with adults can be used to test whether the mechanisms of statistical learning are in principle scalable to larger lexicons. We report data from a large-scale learning experiment that demonstrates that adults can learn words from unsegmented input in much larger languages than previously documented and that they retain the words they learn for years. These results suggest that statistical word segmentation could be scalable to the challenges of lexical acquisition in natural language learning.
doi:10.1371/journal.pone.0052500
PMCID: PMC3534673  PMID: 23300975
13.  Beneficial effects of word final stress in segmenting a new language: evidence from ERPs 
BMC Neuroscience  2008;9:23.
Background
How do listeners manage to recognize words in an unfamiliar language? The physical continuity of the signal, in which real silent pauses between words are lacking, makes it a difficult task. However, there are multiple cues that can be exploited to localize word boundaries and to segment the acoustic signal. In the present study, word-stress was manipulated with statistical information and placed in different syllables within trisyllabic nonsense words to explore the result of the combination of the cues in an online word segmentation task.
Results
The behavioral results showed that words were segmented better when stress was placed on the final syllables than when it was placed on the middle or first syllable. The electrophysiological results showed an increase in the amplitude of the P2 component, which seemed to be sensitive to word-stress and its location within words.
Conclusion
The results demonstrated that listeners can integrate specific prosodic and distributional cues when segmenting speech. An ERP component related to word-stress cues was identified: stressed syllables elicited larger amplitudes in the P2 component than unstressed ones.
doi:10.1186/1471-2202-9-23
PMCID: PMC2263048  PMID: 18282274
14.  Audiovisual speech integration in autism spectrum disorder: ERP evidence for atypicalities in lexical-semantic processing 
Lay Abstract
Language and communicative impairments are among the primary characteristics of autism spectrum disorders (ASD). Previous studies have examined auditory language processing in ASD. However, during face-to-face conversation, auditory and visual speech inputs provide complementary information, and little is known about audiovisual (AV) speech processing in ASD. It is possible to elucidate the neural correlates of AV integration by examining the effects of seeing the lip movements accompanying the speech (visual speech) on electrophysiological event-related potentials (ERP) to spoken words. Moreover, electrophysiological techniques have a high temporal resolution and thus enable us to track the time-course of spoken word processing in ASD and typical development (TD). The present study examined the ERP correlates of AV effects in three time windows that are indicative of hierarchical stages of word processing. We studied a group of TD adolescent boys (n=14) and a group of high-functioning boys with ASD (n=14). Significant group differences were found in AV integration of spoken words in the 200–300ms time window when spoken words start to be processed for meaning. These results suggest that the neural facilitation by visual speech of spoken word processing is reduced in individuals with ASD.
Scientific Abstract
In typically developing (TD) individuals, behavioural and event-related potential (ERP) studies suggest that audiovisual (AV) integration enables faster and more efficient processing of speech. However, little is known about AV speech processing in individuals with autism spectrum disorder (ASD). The present study examined ERP responses to spoken words to elucidate the effects of visual speech (the lip movements accompanying a spoken word) on the range of auditory speech processing stages from sound onset detection to semantic integration. The study also included an AV condition which paired spoken words with a dynamic scrambled face in order to highlight AV effects specific to visual speech. Fourteen adolescent boys with ASD (15–17 years old) and 14 age- and verbal IQ-matched TD boys participated. The ERP of the TD group showed a pattern and topography of AV interaction effects consistent with activity within the superior temporal plane, with two dissociable effects over fronto-central and centro-parietal regions. The posterior effect (200–300ms interval) was specifically sensitive to lip movements in TD boys, and no AV modulation was observed in this region for the ASD group. Moreover, the magnitude of the posterior AV effect to visual speech correlated inversely with ASD symptomatology. In addition, the ASD boys showed an unexpected effect (P2 time window) over the frontal-central region (pooled electrodes F3, Fz, F4, FC1, FC2, FC3, FC4) which was sensitive to scrambled face stimuli. These results suggest that the neural networks facilitating processing of spoken words by visual speech are altered in individuals with ASD.
doi:10.1002/aur.231
PMCID: PMC3586407  PMID: 22162387
Auditory; ASD; ERP; Language; Multisensory; Visual
15.  Implicit language learning: Adults’ ability to segment words in Norwegian* 
Bilingualism (Cambridge, England)  2010;13(4):513-523.
Previous language learning research reveals that the statistical properties of the input offer sufficient information to allow listeners to segment words from fluent speech in an artificial language. The current pair of studies uses a natural language to test the ecological validity of these findings and to determine whether a listener’s language background influences this process. In Study 1, the “guessibility” of potential test words from the Norwegian language was presented to 22 listeners who were asked to differentiate between true words and nonwords. In Study 2, 22 adults who spoke one of 12 different primary languages learned to segment words from continuous speech in an implicit language learning paradigm. The task consisted of two sessions, approximately three weeks apart, each requiring participants to listen to 7.2 minutes of Norwegian sentences followed by a series of bisyllabic test items presented in isolation. The participants differentially accepted the Norwegian words and Norwegian-like nonwords in both test sessions, demonstrating the capability to segment true words from running speech. The results were consistent across three broadly-defined language groups, despite differences in participants’ language background.
doi:10.1017/S1366728910000039
PMCID: PMC3079201  PMID: 21512605
implicit learning; language; statistical learning; second language acquisition
16.  The neural underpinnings of prosody in autism 
This study examines the processing of prosodic cues to linguistic structure and to affect, drawing on fMRI and behavioral data from 16 high-functioning adolescents with autism spectrum disorders (ASD) and 11 typically-developing controls. Stimuli were carefully matched on pitch, intensity, and duration, while varying systematically in conditions of affective prosody (angry versus neutral speech) and grammatical prosody (questions versus statements). To avoid conscious attention to prosody, which normalizes responses in young people with ASD, the implicit comprehension task directed attention to semantic aspects of the stimuli. Results showed that when perceiving prosodic cues, both affective and grammatical, activation of neural regions was more generalized in ASD than in typical development, and areas recruited reflect heightened reliance on cognitive control, reading of intentions, attentional management, and visualization. This broader recruitment of executive and “mind-reading” brain areas for a relatively simple language processing task may be interpreted to suggest that speakers with high-functioning autism (HFA) have developed less automaticity in language processing, and may also suggest that “mind-reading” or theory of mind deficits are intricately bound up in language processing. Data provide support for both a right-lateralized as well as a bilateral model of prosodic processing in typical individuals, depending upon the function of the prosodic information.
doi:10.1080/09297049.2011.639757
PMCID: PMC3461129  PMID: 22176162
17.  Statistical Learning in a Natural Language by 8-Month-Old Infants 
Child development  2009;80(3).
Numerous studies over the past decade support the claim that infants are equipped with powerful statistical language learning mechanisms. The primary evidence for statistical language learning in word segmentation comes from studies using artificial languages, continuous streams of synthesized syllables that are highly simplified relative to real speech. To what extent can these conclusions be scaled up to natural language learning? In the current experiments, English-learning 8-month-old infants’ ability to track transitional probabilities in fluent infant-directed Italian speech was tested (N = 72). The results suggest that infants are sensitive to transitional probability cues in unfamiliar natural language stimuli, and support the claim that statistical learning is sufficiently robust to support aspects of real-world language acquisition.
doi:10.1111/j.1467-8624.2009.01290.x
PMCID: PMC3883431  PMID: 19489896
18.  From Acoustic Segmentation to Language Processing: Evidence from Optical Imaging 
During language acquisition in infancy and when learning a foreign language, the segmentation of the auditory stream into words and phrases is a complex process. Intuitively, learners use “anchors” to segment the acoustic speech stream into meaningful units like words and phrases. Regularities on a segmental (e.g., phonological) or suprasegmental (e.g., prosodic) level can provide such anchors. Regarding the neuronal processing of these two kinds of linguistic cues a left-hemispheric dominance for segmental and a right-hemispheric bias for suprasegmental information has been reported in adults. Though lateralization is common in a number of higher cognitive functions, its prominence in language may also be a key to understanding the rapid emergence of the language network in infants and the ease at which we master our language in adulthood. One question here is whether the hemispheric lateralization is driven by linguistic input per se or whether non-linguistic, especially acoustic factors, “guide” the lateralization process. Methodologically, functional magnetic resonance imaging provides unsurpassed anatomical detail for such an enquiry. However, instrumental noise, experimental constraints and interference with EEG assessment limit its applicability, pointedly in infants and also when investigating the link between auditory and linguistic processing. Optical methods have the potential to fill this gap. Here we review a number of recent studies using optical imaging to investigate hemispheric differences during segmentation and basic auditory feature analysis in language development.
doi:10.3389/fnene.2010.00013
PMCID: PMC2912026  PMID: 20725516
optical imaging; infants; language acquisition; acoustic segmentation; NIRS
19.  The link between statistical segmentation and word learning in adults 
Cognition  2008;108(1):271-280.
Many studies have shown that listeners can segment words from running speech based on conditional probabilities of syllable transitions, suggesting that this statistical learning could be a foundational component of language learning. However, few studies have shown a direct link between statistical segmentation and word learning. We examined this possible link in adults by following a statistical segmentation exposure phase with an artificial lexicon learning phase. Participants were able to learn all novel object-label pairings, but pairings were learned faster when labels contained high probability (word-like) or non-occurring syllable transitions from the statistical segmentation phase than when they contained low probability (boundary-straddling) syllable transitions. This suggests that, for adults, labels inconsistent with expectations based on statistical learning are harder to learn than consistent or neutral labels. In contrast, infants seem to learn consistent labels, but not inconsistent or neutral labels.
doi:10.1016/j.cognition.2008.02.003
PMCID: PMC2486406  PMID: 18355803
statistical learning; word segmentation; word learning; language acquisition
20.  Brain Activity of Adolescents with High Functioning Autism in Response to Emotional Words and Facial Emoticons 
PLoS ONE  2014;9(3):e91214.
Studies of social dysfunction in patients with autism spectrum disorder (ASD) have generally focused on the perception of emotional words and facial affect. Brain imaging studies have suggested that the fusiform gyrus is associated with both the comprehension of language and face recognition. We hypothesized that patients with ASD would have decreased ability to recognize affect via emotional words and facial emoticons, relative to healthy comparison subjects. In addition, we expected that this decreased ability would be associated with altered activity of the fusiform gyrus in patients with ASD. Ten male adolescents with ASD and ten age- and sex-matched healthy comparison subjects were enrolled in this case-control study. The diagnosis of autism was further evaluated with the Autism Diagnostic Observation Schedule. Brain activity was assessed using functional magnetic resonance imaging (fMRI) in response to emotional words and facial emoticon presentation. Sixty emotional words (45 pleasant words + 15 unpleasant words) were extracted from a report on Korean emotional terms and their underlying dimensions. Sixty emoticon faces (45 pleasant faces + 15 unpleasant faces) were extracted and modified from on-line sites. Relative to healthy comparison subjects, patients with ASD showed increased activation of the fusiform gyrus in response to emotional aspects of words. In contrast, patients with ASD showed decreased activation of the fusiform gyrus in response to facial emoticons, relative to healthy comparison subjects. We suggest that patients with ASD are more familiar with word descriptions than facial expressions as depictions of emotion.
doi:10.1371/journal.pone.0091214
PMCID: PMC3951306  PMID: 24621866
21.  Neural basis of irony comprehension in children with autism: the role of prosody and context 
Brain : a journal of neurology  2006;129(Pt 4):932-943.
While individuals with autism spectrum disorders (ASD) are typically impaired in interpreting the communicative intent of others, little is known about the neural bases of higher-level pragmatic impairments. Here, we used functional MRI (fMRI) to examine the neural circuitry underlying deficits in understanding irony in high-functioning children with ASD. Participants listened to short scenarios and decided whether the speaker was sincere or ironic. Three types of scenarios were used in which we varied the information available to guide this decision. Scenarios included (i) both knowledge of the event outcome and strong prosodic cues (sincere or sarcastic intonation), (ii) prosodic cues only or (iii) knowledge of the event outcome only. Although children with ASD performed well above chance, they were less accurate than typically developing (TD) children at interpreting the communicative intent behind a potentially ironic remark, particularly with regard to taking advantage of available contextual information. In contrast to prior research showing hypoactivation of regions involved in understanding the mental states of others, children with ASD showed significantly greater activity than TD children in the right inferior frontal gyrus (IFG) as well as in bilateral temporal regions. Increased activity in the ASD group fell within the network recruited in the TD group and may reflect more effortful processing needed to interpret the intended meaning of an utterance. These results confirm that children with ASD have difficulty interpreting the communicative intent of others and suggest that these individuals can recruit regions activated as part of the normative neural circuitry when task demands require explicit attention to socially relevant cues.
doi:10.1093/brain/awl032
PMCID: PMC3713234  PMID: 16481375
autism; brain development; fMRI; language pragmatics; social cognition
22.  The Hypothesis of Apraxia of Speech in Children with Autism Spectrum Disorder 
In a sample of 46 children aged 4 to 7 years with Autism Spectrum Disorder (ASD) and intelligible speech, there was no statistical support for the hypothesis of concomitant Childhood Apraxia of Speech (CAS). Perceptual and acoustic measures of participants’ speech, prosody, and voice were compared with data from 40 typically-developing children, 13 preschool children with Speech Delay, and 15 participants aged 5 to 49 years with CAS in neurogenetic disorders. Speech Delay and Speech Errors, respectively, were modestly and substantially more prevalent in participants with ASD than reported population estimates. Double dissociations in speech, prosody, and voice impairments in ASD were interpreted as consistent with a speech attunement framework, rather than with the motor speech impairments that define CAS.
doi:10.1007/s10803-010-1117-5
PMCID: PMC3033475  PMID: 20972615
apraxia; dyspraxia; motor speech disorder; speech sound disorder
23.  Use of Prosody and Information Structure in High Functioning Adults with Autism in Relation to Language Ability 
Abnormal prosody is a striking feature of the speech of those with Autism spectrum disorder (ASD), but previous reports suggest large variability among those with ASD. Here we show that part of this heterogeneity can be explained by level of language functioning. We recorded semi-spontaneous but controlled conversations in adults with and without ASD and measured features related to pitch and duration to determine (1) general use of prosodic features, (2) prosodic use in relation to marking information structure, specifically, the emphasis of new information in a sentence (focus) as opposed to information already given in the conversational context (topic), and (3) the relation between prosodic use and level of language functioning. We found that, compared to typical adults, those with ASD with high language functioning generally used a larger pitch range than controls but did not mark information structure, whereas those with moderate language functioning generally used a smaller pitch range than controls but marked information structure appropriately to a large extent. Both impaired general prosodic use and impaired marking of information structure would be expected to seriously impact social communication and thereby lead to increased difficulty in personal domains, such as making and keeping friendships, and in professional domains, such as competing for employment opportunities.
doi:10.3389/fpsyg.2012.00072
PMCID: PMC3312270  PMID: 22470358
prosody; language ability; information structure; pitch; duration; Autism
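The two prosodic measures this abstract contrasts, overall pitch range and pitch marking of focus versus topic, can be sketched briefly. This is a hypothetical illustration with invented f0 values and an invented semitone gap criterion, not the authors' analysis pipeline.

```python
# Hypothetical sketch: pitch range in semitones, and whether new (focus)
# words sit audibly above given (topic) words. All f0 values are invented.
import math

def semitone_range(f0_hz):
    """Pitch range in semitones between the lowest and highest f0."""
    return 12 * math.log2(max(f0_hz) / min(f0_hz))

def marks_focus(focus_f0, topic_f0, min_gap_st=1.0):
    """True if focus words average at least `min_gap_st` semitones above topic words."""
    mean = lambda xs: sum(xs) / len(xs)
    gap = 12 * math.log2(mean(focus_f0) / mean(topic_f0))
    return gap >= min_gap_st

# Toy speaker: f0 samples over an utterance, plus f0 on focus and topic words.
speaker = {"f0": [110, 140, 180, 95, 210], "focus": [200, 210], "topic": [110, 120]}
print(round(semitone_range(speaker["f0"]), 1))
print(marks_focus(speaker["focus"], speaker["topic"]))
```

Semitones rather than raw Hz are the usual choice here because pitch perception is roughly logarithmic, which makes ranges comparable across speakers with different baseline f0.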
24.  The frequency modulated auditory evoked response (FMAER), a technical advance for study of childhood language disorders: cortical source localization and selected case studies 
BMC Neurology  2013;13:12.
Background
Language comprehension requires decoding of complex, rapidly changing speech streams. Detecting changes of frequency modulation (FM) within speech is hypothesized as essential for accurate phoneme detection, and thus, for spoken word comprehension. Despite past demonstration of FM auditory evoked response (FMAER) utility in language disorder investigations, it is seldom utilized clinically. This report's purpose is to facilitate clinical use by explaining analytic pitfalls, demonstrating sites of cortical origin, and illustrating potential utility.
Results
FMAERs collected from children with language disorders, including Developmental Dysphasia, Landau-Kleffner syndrome (LKS), and autism spectrum disorder (ASD) and also normal controls - utilizing multi-channel reference-free recordings assisted by discrete source analysis - provided demonstrations of cortical origin and examples of clinical utility. Recordings from inpatient epileptics with indwelling cortical electrodes provided direct assessment of FMAER origin. The FMAER is shown to normally arise from bilateral posterior superior temporal gyri and immediate temporal lobe surround. Childhood language disorders associated with prominent receptive deficits demonstrate absent left or bilateral FMAER temporal lobe responses. When receptive language is spared, the FMAER may remain present bilaterally. Analyses based upon mastoid or ear reference electrodes are shown to result in erroneous conclusions. Serial FMAER studies may dynamically track status of underlying language processing in LKS. FMAERs in ASD with language impairment may be normal or abnormal. Cortical FMAERs can locate language cortex when conventional cortical stimulation does not.
Conclusion
The FMAER measures the processing by the superior temporal gyri and adjacent cortex of rapid frequency modulation within an auditory stream. Clinical disorders associated with receptive deficits are shown to demonstrate absent left or bilateral responses. Serial FMAERs may be useful for tracking language change in LKS. Cortical FMAERs may augment invasive cortical language testing in epilepsy surgical patients. The FMAER may be normal in ASD and other language disorders when pathology spares the superior temporal gyrus and surround but presumably involves other brain regions. Ear/mastoid reference electrodes should be avoided and multichannel, reference-free recordings utilized. Source analysis may assist in better understanding of complex FMAER findings.
doi:10.1186/1471-2377-13-12
PMCID: PMC3582442  PMID: 23351174
Frequency modulation; Auditory evoked response; FMAER; Cortical; Source analysis; Language disorder; Landau-Kleffner syndrome; Autism; Children; Epilepsy surgery
25.  Auditory Magnetic Mismatch Field Latency: A Biomarker for Language Impairment in Autism 
Biological psychiatry  2011;70(3):263-269.
Background
Auditory processing abnormalities are frequently observed in Autism Spectrum Disorders (ASD), and these abnormalities may have sequelae in terms of clinical language impairment (LI). The present study assessed associations between language impairment and the amplitude and latency of the superior temporal gyrus magnetic mismatch field (MMF) in response to changes in an auditory stream of tones or vowels.
Methods
Fifty-one children with ASD and 27 neurotypical controls, all aged 6-15 years, underwent neuropsychological evaluation, including tests of language function, as well as magnetoencephalographic (MEG) recording during presentation of tones and vowels. The MMF was identified in the difference waveform obtained by subtracting responses to standard stimuli from responses to deviant stimuli.
Results
MMF latency was significantly prolonged (p<0.001) in children with ASD compared to neurotypical controls. Furthermore, this delay was most pronounced (~50 ms) in children with concomitant LI, with significant differences in latency between children with ASD with LI and those without (p<0.01). Receiver operating characteristic analysis indicated a sensitivity of 82.4% and specificity of 71.2% for diagnosing LI based on MMF latency.
Conclusion
Neural correlates of auditory change detection (the MMF) are significantly delayed in children with ASD, and especially in those with concomitant LI, suggesting both a neurobiological basis for LI and a clinical biomarker for LI in ASD.
doi:10.1016/j.biopsych.2011.01.015
PMCID: PMC3134608  PMID: 21392733
autism spectrum disorders; mismatch negativity; language impairment; magnetoencephalography; biomarker; electrophysiology
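The mismatch-field computation this abstract outlines reduces to two steps: subtract the averaged standard-stimulus response from the averaged deviant-stimulus response, then read off the latency of the resulting peak. A minimal sketch, with invented toy waveforms and sampling rate rather than real MEG data:

```python
# Hypothetical sketch: difference waveform (deviant minus standard) and
# its peak latency, the quantity reported as delayed in ASD with LI.

def mismatch_field(deviant, standard):
    """Difference waveform: deviant minus standard, sample by sample."""
    return [d - s for d, s in zip(deviant, standard)]

def peak_latency_ms(waveform, sample_rate_hz=1000):
    """Latency in ms of the largest absolute deflection."""
    peak = max(range(len(waveform)), key=lambda i: abs(waveform[i]))
    return 1000 * peak / sample_rate_hz

# Toy averaged evoked responses (arbitrary units, 1 kHz sampling).
# Both share an early obligatory response; only the deviant adds a
# later deflection, which survives the subtraction as the MMF.
standard = [0, 1, 3, 1, 0, 0, 0, 0]
deviant  = [0, 1, 3, 1, 0, 5, 2, 0]
mmf = mismatch_field(deviant, standard)
print(peak_latency_ms(mmf))
```

The subtraction is what isolates change detection: activity common to both stimulus types cancels, so the peak of the difference reflects the brain's response to the deviation itself.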
