While individuals with autism spectrum disorders (ASD) are typically impaired in interpreting the communicative intent of others, little is known about the neural bases of higher-level pragmatic impairments. Here, we used functional MRI (fMRI) to examine the neural circuitry underlying deficits in understanding irony in high-functioning children with ASD. Participants listened to short scenarios and decided whether the speaker was sincere or ironic. Three types of scenarios were used in which we varied the information available to guide this decision. Scenarios included (i) both knowledge of the event outcome and strong prosodic cues (sincere or sarcastic intonation), (ii) prosodic cues only or (iii) knowledge of the event outcome only. Although children with ASD performed well above chance, they were less accurate than typically developing (TD) children at interpreting the communicative intent behind a potentially ironic remark, particularly with regard to taking advantage of available contextual information. In contrast to prior research showing hypoactivation of regions involved in understanding the mental states of others, children with ASD showed significantly greater activity than TD children in the right inferior frontal gyrus (IFG) as well as in bilateral temporal regions. Increased activity in the ASD group fell within the network recruited in the TD group and may reflect more effortful processing needed to interpret the intended meaning of an utterance. These results confirm that children with ASD have difficulty interpreting the communicative intent of others and suggest that these individuals can recruit regions activated as part of the normative neural circuitry when task demands require explicit attention to socially relevant cues.
autism; brain development; fMRI; language pragmatics; social cognition
Individuals with ASD show consistent impairment in processing pragmatic language when attention to multiple social cues (e.g., facial expression, tone of voice) is often needed to navigate social interactions. Building upon prior fMRI work examining how facial affect and prosodic cues are used to infer a speaker's communicative intent, the authors examined whether children and adolescents with ASD differ from typically developing (TD) controls in their processing of sincere versus ironic remarks. At the behavioral level, children and adolescents with ASD and matched TD controls were able to determine whether a speaker's remark was sincere or ironic equally well, with both groups showing longer response times for ironic remarks. At the neural level, for both sincere and ironic scenarios, an extended cortical network—including canonical language areas in the left hemisphere and their right hemisphere counterparts—was activated in both groups, albeit to a lesser degree in the ASD sample. Despite overall similar patterns of activity observed for the two conditions in both groups, significant modulation of activity was detected when directly comparing sincere and ironic scenarios within and between groups. While both TD and ASD groups showed significantly greater activity in several nodes of this extended network when processing ironic versus sincere remarks, increased activity was largely confined to left language areas in TD controls, whereas the ASD sample showed a more bilateral activation profile which included both language and “theory of mind” areas (i.e., ventromedial prefrontal cortex). These findings suggest that, for high-functioning individuals with ASD, increased activity in right hemisphere homologues of language areas in the left hemisphere, as well as regions involved in social cognition, may reflect compensatory mechanisms supporting normative behavioral task performance.
Ironic remarks are frequent in everyday language and represent an important form of social cognition. Increasing evidence indicates a deficit in irony comprehension in schizophrenia. Several models of defective comprehension have been proposed, including possible roles of the medial prefrontal lobe, default mode network, inferior frontal gyri, mirror neurons, the right cerebral hemisphere, and a possible mediating role of schizotypal personality traits. We investigated the neural correlates of irony comprehension in schizophrenia by using event-related functional magnetic resonance imaging (fMRI). In a prosody-free reading paradigm, 15 female patients with schizophrenia and 15 healthy female controls silently read ironic and literal text vignettes during fMRI. Each text vignette ended in either an ironic (n = 22) or literal (n = 22) statement. Ironic and literal text vignettes were matched for word frequency, length, grammatical complexity, and syntax. After fMRI, the subjects performed an off-line test to assess error rates, indicating by button press whether the target sentence had ironic, literal, or meaningless content. Schizotypal personality traits were assessed using the German version of the Schizotypal Personality Questionnaire (SPQ). At the behavioural level, patients with schizophrenia made significantly more errors than controls (correct answers, 85.3% vs. 96.3%). Patients showed attenuated blood oxygen level-dependent (BOLD) response during irony comprehension mainly in right hemisphere temporal regions (ironic > literal contrast) and in posterior medial prefrontal and left anterior insula regions (for ironic > visual baseline, but not for literal > visual baseline). In patients with schizophrenia, the parahippocampal gyrus showed increased activation. Across all subjects, BOLD response in the medial prefrontal area was negatively correlated with the SPQ score.
These results highlight the role of the posterior medial prefrontal and right temporal regions in defective irony comprehension in schizophrenia and the mediating role of schizotypal personality traits.
Understanding a speaker’s communicative intent in everyday interactions is likely to draw on cues such as facial expression and tone of voice. Prior research has shown that individuals with autism spectrum disorders (ASD) show reduced activity in brain regions that respond selectively to the face and voice. However, there is also evidence that activity in key regions can be increased if task demands allow for explicit processing of emotion.
To examine the neural circuitry underlying impairments in interpreting communicative intentions in ASD using irony comprehension as a test case, and to determine whether explicit instructions to attend to facial expression and tone of voice will elicit more normative patterns of brain activity.
Design, Setting, and Participants
Eighteen boys with ASD (aged 7–17 years, full-scale IQ >70) and 18 typically developing (TD) boys underwent functional magnetic resonance imaging at the Ahmanson-Lovelace Brain Mapping Center, University of California, Los Angeles.
Main Outcome Measures
Blood oxygenation level–dependent brain activity during the presentation of short scenarios involving irony. Behavioral performance (accuracy and response time) was also recorded.
Reduced activity in the medial prefrontal cortex and right superior temporal gyrus was observed in children with ASD relative to TD children during the perception of potentially ironic vs control scenarios. Importantly, a significant group × condition interaction in the medial prefrontal cortex showed that activity was modulated by explicit instructions to attend to facial expression and tone of voice only in the ASD group. Finally, medial prefrontal cortex activity was inversely related to symptom severity in children with ASD such that children with greater social impairment showed less activity in this region.
Explicit instructions to attend to facial expression and tone of voice can elicit increased activity in the medial prefrontal cortex, part of a network important for understanding the intentions of others, in children with ASD. These findings suggest a strategy for future intervention research.
Knowledge of word meanings and grammatical rules alone does not allow a listener to grasp the intended meaning of a speaker’s utterance; pragmatic inferences on the part of the listener are also required. The present work focuses on the processing of ironic utterances (imagine a slow day being described as “really productive”) because these clearly require the listener to go beyond the linguistic code. Such utterances are advantageous experimentally because they can serve as their own controls in the form of literal sentences (now imagine an active day being described as “really productive”) as we employ techniques from electrophysiology (EEG). Importantly, the results confirm previous ERP findings showing that irony processing elicits an enhancement of the P600 component (Regel et al., 2011). More original are the findings drawn from time-frequency analysis (TFA), especially the increase of power in the gamma band in the 280–400 ms time window, which points to an integration among different streams of information relatively early in the comprehension of an ironic utterance. This represents a departure from traditional accounts of language processing, which generally view pragmatic inferences as late-arriving. We propose that these results indicate that unification operations between the linguistic code and contextual information play a critical role throughout the course of irony processing, and earlier than previously thought.
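The band-power measure underlying such time-frequency findings can be illustrated with a minimal sketch. The snippet below estimates mean spectral power in a gamma band within a post-stimulus window; the 30–80 Hz band edges, the sampling rate, and the simple FFT periodogram are illustrative assumptions, standing in for the wavelet-based time-frequency decompositions EEG studies typically use, and the 280–400 ms window is taken from the abstract above.

```python
import numpy as np

def band_power(signal, fs, band=(30.0, 80.0)):
    """Mean spectral power within a frequency band, via an FFT periodogram.

    A simplified stand-in for wavelet-based time-frequency analysis;
    the default 30-80 Hz range is one common (assumed) gamma band.
    """
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * n)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def window_gamma(epoch, fs, t0_ms=280, t1_ms=400):
    """Gamma power restricted to a post-onset window (here 280-400 ms),
    assuming the epoch starts at stimulus onset."""
    i0 = int(t0_ms * fs / 1000)
    i1 = int(t1_ms * fs / 1000)
    return band_power(epoch[i0:i1], fs)
```

With a 1000 Hz sampling rate, a 40 Hz oscillation yields far more power from `window_gamma` than a 10 Hz one, which is the kind of condition contrast (irony vs. literal) such analyses quantify.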
Increasing evidence supports the existence of distinct neural systems that subserve two dimensions of affect – arousal and valence. Ten adult participants underwent functional magnetic resonance imaging during which they were presented a range of standardized faces and then asked, during the scan, to rate the emotional expressions of the faces along the dimensions of arousal and valence. Lower ratings of arousal accompanied greater activity in the amygdala complex, cerebellum, dorsal pons, and right medial prefrontal cortex. More negative ratings of valence accompanied greater activity in the dorsal anterior cingulate and parietal cortices. Extreme ratings of valence (highly positive and highly negative ratings) accompanied greater activity in the temporal cortex and fusiform gyrus. Building on an empirical literature which suggests that the amygdala serves as a salience and ambiguity detector, we interpret our findings as showing that a face rated as having low arousal is more ambiguous and a face rated as having extreme valence is more personally salient. This explains how both low arousal and extreme valence lead to greater activation of an ambiguity/salience system subserved by the amygdala, cerebellum, and dorsal pons. In addition, the right medial prefrontal cortex appears to down-regulate individual ratings of arousal, whereas the fusiform and related temporal cortices seem to up-regulate individual assessments of extreme valence when individual ratings are studied relative to group reference ratings for each stimulus. The simultaneous assessment of the effects of arousal and valence proved essential for the identification of neural systems contributing to the processing of emotional faces.
fMRI; Affect; Circumplex; Arousal; Valence; Face processing
Tone production is particularly important for communicating in tone languages such as Mandarin Chinese. In the present study, an artificial neural network was used to recognize tones produced by adult native speakers. The purposes of the study were (1) to test the sensitivity of the neural network to the speaker variation typically found in adult speaker groups, (2) to evaluate two normalization procedures to overcome the effects of speaker variation, and (3) to compare the tone recognition performance of the neural network with that of human listeners.
A feedforward multilayer neural network was used. Twenty-nine adult native Mandarin Chinese speakers were recruited to record tone samples. The F0 contours of the vowel part of the 1044 monosyllabic words recorded were extracted using an autocorrelation method. Samples from the F0 contours were used as inputs to the neural network. The efficacy of the neural network was first tested by varying the number of inputs and the number of neurons in the hidden layer from 1 to 16. The sensitivity of the neural network to speaker variation was tested by (1) using the raw F0 data from speech tokens of a number of randomly drawn speakers that varied from 1 to 29, (2) using the raw F0 data from speech tokens of either male-only or female-only speakers, and (3) using two sets of normalized F0 data (i.e., tone 1-based normalization and first-order derivative) from speech tokens from a number of randomly drawn speakers that varied from 1 to 29. The recognition performance of the neural network under several experimental conditions was compared with the corresponding recognition performance of 10 normal-hearing, native Mandarin Chinese speaking adult listeners.
Three inputs and four hidden neurons were found to be sufficient for the neural network to perform at about 85% correct using speech samples without normalization. The performance of the neural network was affected by variation across speakers particularly between genders. Using the tone 1-based normalization procedure, the performance of the neural network improved significantly. The recognition accuracy of the neural network as a whole or for each tone was comparable with that of the human listeners.
The neural network can be used to evaluate the tone production of Mandarin Chinese speaking adults with human listener-like recognition accuracy. The tone 1-based normalization procedure improves the performance of the neural network to human listener-like accuracy. The success of our neural network in recognizing tones from multiple speakers supports its utility for evaluating tone production. Further testing of the neural network with hearing-impaired speakers might reveal its potential use for clinical evaluation of tone production.
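The architecture described above (three F0 inputs, four hidden neurons, four tone classes) and the effect of speaker normalization can be sketched in a few lines of Python/NumPy. Everything below is illustrative: the idealized contour shapes, the reading of "tone 1-based normalization" as dividing F0 by a speaker's mean tone-1 level, and the training details are assumptions for demonstration, not the study's actual implementation.

```python
import numpy as np

def tone1_normalize(f0_samples, speaker_tone1_mean):
    # One plausible reading of "tone 1-based normalization":
    # express F0 relative to the speaker's mean tone-1 (high level) F0.
    return np.asarray(f0_samples) / speaker_tone1_mean

def synth_contour(tone, base_f0, rng):
    # Idealized 3-point F0 contours (onset, mid, offset) for the four
    # Mandarin tones: 1 high level, 2 rising, 3 dipping, 4 falling.
    shapes = {1: (1.00, 1.00, 1.00),
              2: (0.80, 0.90, 1.10),
              3: (0.85, 0.70, 0.90),
              4: (1.20, 1.00, 0.75)}
    return base_f0 * np.array(shapes[tone]) + rng.normal(0.0, 1.0, 3)

class ToneNet:
    """Feedforward net: 3 inputs -> 4 tanh hidden units -> 4-way softmax,
    matching the smallest configuration the study found sufficient."""
    def __init__(self, rng):
        self.W1 = rng.normal(0.0, 0.5, (3, 4)); self.b1 = np.zeros(4)
        self.W2 = rng.normal(0.0, 0.5, (4, 4)); self.b2 = np.zeros(4)

    def forward(self, X):
        self.H = np.tanh(X @ self.W1 + self.b1)   # hidden activations
        Z = self.H @ self.W2 + self.b2
        E = np.exp(Z - Z.max(axis=1, keepdims=True))
        return E / E.sum(axis=1, keepdims=True)    # softmax over tones

    def train(self, X, y, lr=0.3, epochs=4000):
        Y = np.eye(4)[y]                           # one-hot tone labels
        for _ in range(epochs):
            P = self.forward(X)
            dZ = (P - Y) / len(X)                  # softmax cross-entropy grad
            dH = (dZ @ self.W2.T) * (1.0 - self.H ** 2)
            self.W2 -= lr * self.H.T @ dZ; self.b2 -= lr * dZ.sum(0)
            self.W1 -= lr * X.T @ dH;      self.b1 -= lr * dH.sum(0)

    def predict(self, X):
        return self.forward(X).argmax(axis=1)

def build_dataset(rng, bases=(110.0, 150.0, 210.0, 240.0), reps=5):
    # Four hypothetical "speakers" with different F0 ranges; each contour
    # is normalized by that speaker's tone-1 baseline before training.
    X, y = [], []
    for base in bases:
        for tone in (1, 2, 3, 4):
            for _ in range(reps):
                X.append(tone1_normalize(synth_contour(tone, base, rng), base))
                y.append(tone - 1)
    return np.array(X), np.array(y)
```

Trained on normalized contours, the four tone shapes are easily separated; without normalization, the same inputs confound speaker F0 range with tone shape, which mirrors the cross-speaker and cross-gender degradation the study reports.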
A precondition for successful communication between people is the detection of signals indicating the intention to communicate, such as eye contact or calling a person's name. In adults, establishing communication by eye contact or calling a person's name results in overlapping activity in right prefrontal cortex, suggesting that, regardless of modality, the intention to communicate is detected by the same brain region. We measured prefrontal cortex responses in 5-month-olds using near-infrared spectroscopy (NIRS) to examine the neural basis of detecting communicative signals across modalities in early development. Infants watched human faces that either signaled eye contact or directed their gaze away from the infant, and they also listened to voices that addressed them with their own name or another name. The results revealed that infants recruit adjacent but non-overlapping regions in the left dorsal prefrontal cortex when they process eye contact and own name. Moreover, infants that responded sensitively to eye contact in the one prefrontal region were also more likely to respond sensitively to their own name in the adjacent prefrontal region as revealed in a correlation analysis, suggesting that responding to communicative signals in these two regions might be functionally related. These NIRS results suggest that infants selectively process and attend to communicative signals directed at them. However, unlike adults, infants do not seem to recruit a common prefrontal region when processing communicative signals of different modalities. The implications of these findings for our understanding of infants’ developing communicative abilities are discussed.
eye contact; name; communication; intention; prefrontal cortex; infancy
The present study examined children’s and adults’ categorization and moral judgment of truthful and untruthful statements. 7-, 9-, and 11-year-old Chinese children and college students read stories in which story characters made truthful or untruthful statements and were asked to classify and evaluate the statements. The statements varied in terms of whether the speaker intended to help or harm a listener and whether the statement was made in a setting that called for informational accuracy or politeness. Results showed that the communicative intent and setting factors jointly influence children’s categorization of lying and truth-telling, which extends an earlier finding (Lee & Ross, 1997) to childhood. Also, we found that children’s and adults’ moral judgments of lying and truth-telling were influenced by the communicative intent but not the setting factor. The present results were discussed in terms of Sweetser’s (1987) folkloristic model of lying.
lying; moral evaluation; white lies; deception; politeness; Chinese; culture
Speech-in-noise (SIN) perception is one of the most complex tasks faced by listeners on a daily basis. Although listening in noise presents challenges for all listeners, background noise inordinately affects speech perception in older adults and in children with learning disabilities. Hearing thresholds are an important factor in SIN perception, but they are not the only factor. For successful comprehension, the listener must perceive and attend to relevant speech features, such as the pitch, timing, and timbre of the target speaker’s voice. Here, we review recent studies linking SIN and brainstem processing of speech sounds.
To review recent work that has examined the ability of the auditory brainstem response to complex sounds (cABR), which reflects the nervous system’s transcription of pitch, timing, and timbre, to be used as an objective neural index for hearing-in-noise abilities.
We examined speech-evoked brainstem responses in a variety of populations, including children who are typically developing, children with language-based learning impairment, young adults, older adults, and auditory experts (i.e., musicians).
Data Collection and Analysis
In a number of studies, we recorded brainstem responses in quiet and babble noise conditions to the speech syllable /da/ in all age groups, as well as in a variable condition in children in which /da/ was presented in the context of seven other speech sounds. We also measured speech-in-noise perception using the Hearing-in-Noise Test (HINT) and the Quick Speech-in-Noise Test (QuickSIN).
Children and adults with poor SIN perception have deficits in the subcortical spectrotemporal representation of speech, including low-frequency spectral magnitudes and the timing of transient response peaks. Furthermore, auditory expertise, as engendered by musical training, provides both behavioral and neural advantages for processing speech in noise.
These results have implications for future assessment and management strategies for young and old populations whose primary complaint is difficulty hearing in background noise. The cABR provides a clinically applicable metric for objective assessment of individuals with SIN deficits, for determination of the biologic nature of disorders affecting SIN perception, for evaluation of appropriate hearing aid algorithms, and for monitoring the efficacy of auditory remediation and training.
Auditory brainstem response; evoked potentials; frequency; musicians; speech in noise; timing
Higher levels of discourse processing evoke patterns of cognition and brain activation that extend beyond the literal comprehension of sentences. We used fMRI to examine brain activation patterns while 16 healthy participants read brief three-sentence stories that concluded with either a literal, metaphoric, or ironic sentence. The fMRI images acquired during the reading of the critical sentence revealed a selective response of the brain to the two types of nonliteral utterances. Metaphoric utterances resulted in significantly higher levels of activation in the left inferior frontal gyrus and in bilateral inferior temporal cortex than the literal and ironic utterances. Ironic statements resulted in significantly higher activation levels than literal statements in the right superior and middle temporal gyri, with metaphoric statements resulting in intermediate levels in these regions. The findings show differential hemispheric sensitivity to these aspects of figurative language, and are relevant to models of the functional cortical architecture of language processing in connected discourse.
Figurative language; Irony; Metaphor; Pragmatics; fMRI
While sarcasm can be conveyed solely through contextual cues such as counterfactual or echoic statements, face-to-face sarcastic speech may be characterized by specific paralinguistic features that alert the listener to interpret the utterance as ironic or critical, even in the absence of contextual information. We investigated the neuroanatomy underlying failure to understand sarcasm from dynamic vocal and facial paralinguistic cues. Ninety subjects (20 frontotemporal dementia, 11 semantic dementia [SemD], 4 progressive nonfluent aphasia, 27 Alzheimer’s disease, 6 corticobasal degeneration, 9 progressive supranuclear palsy, 13 healthy older controls) were tested using the Social Inference – Minimal subtest of The Awareness of Social Inference Test (TASIT). Subjects watched brief videos depicting sincere or sarcastic communication and answered yes-no questions about the speaker’s intended meaning. All groups interpreted Sincere (SIN) items normally, and only the SemD group was impaired on the Simple Sarcasm (SSR) condition. Patients failing the SSR performed more poorly on dynamic emotion recognition tasks and had more neuropsychiatric disturbances, but had better verbal and visuospatial working memory than patients who comprehended sarcasm. Voxel-based morphometry analysis of SSR scores in SPM5 demonstrated that poorer sarcasm comprehension was predicted by smaller volume in bilateral posterior parahippocampii (PHc), temporal poles, and R medial frontal pole (pFWE<0.05). This study provides lesion data suggesting that the PHc may be involved in recognizing a paralinguistic speech profile as abnormal, leading to interpretive processing by the temporal poles and right medial frontal pole that identifies the social context as sarcastic, and recognizes the speaker’s paradoxical intentions.
Regions of human ventral extrastriate visual cortex develop specializations for natural categories (e.g., faces) and cultural artifacts (e.g., words). In adults, category-based specializations manifest as greater neural responses in visual regions of the brain (e.g., fusiform gyrus) to some categories over others. However, few studies have examined how these specializations originate in the brains of children. Moreover, it is as yet unknown whether the development of visual specializations hinges on “increases” in the response to the preferred categories, “decreases” in the responses to nonpreferred categories, or “both.” This question is relevant to a long-standing debate concerning whether neural development is driven by building up or pruning back representations. To explore these questions, we measured patterns of visual activity in 4-year-old children for 4 categories (faces, letters, numbers, and shoes) using functional magnetic resonance imaging. We report 2 key findings regarding the development of visual categories in the brain: 1) the categories “faces” and “symbols” doubly dissociate in the fusiform gyrus before children can read and 2) the development of category-specific responses in young children depends on cortical responses to nonpreferred categories that decrease as preferred category knowledge is acquired.
development; fMRI; fusiform gyrus; pruning
The brain is the central organ of stress and adaptation to stress because it perceives and determines what is threatening, as well as the behavioral and physiological responses to the stressor. The adult brain, as well as the developing brain, possesses a remarkable ability to show reversible structural and functional plasticity in response to stressful and other experiences, including neuronal replacement, dendritic remodeling, and synapse turnover. This is particularly evident in the hippocampus, where all three types of structural plasticity have been recognized and investigated, using a combination of morphological, molecular, pharmacological, electrophysiological and behavioral approaches. The amygdala and the prefrontal cortex, brain regions involved in anxiety and fear, mood, cognitive function and behavioral control, also show structural plasticity. Acute and chronic stress cause an imbalance of neural circuitry subserving cognition, decision making, anxiety and mood that can increase or decrease expression of those behaviors and behavioral states. In the short term, such as for increased fearful vigilance and anxiety in a threatening environment, these changes may be adaptive; but, if the danger passes and the behavioral state persists along with the changes in neural circuitry, such maladaptation may need intervention with a combination of pharmacological and behavioral therapies, as is the case for chronic anxiety or mood disorders. We shall review cellular and molecular mechanisms, as well as recent work on individual differences in anxiety-like behavior and also developmental influences that bias how the brain responds to stressors. Finally, we suggest that such an approach needs to be extended to other brain areas that are also involved in anxiety and mood.
Optimal brain sensitivity to the fundamental frequency (F0) contour changes in the human voice is important for understanding a speaker’s intonation, and consequently, the speaker’s attitude. However, whether sensitivity in the brain’s response to a human voice F0 contour change varies with an interaction between an individual’s traits (i.e., autistic traits) and a human voice element (i.e., presence or absence of communicative action such as calling) has not been investigated. In the present study, we investigated the neural processes involved in the perception of F0 contour changes in the Japanese monosyllables “ne” and “nu.” “Ne” is an interjection that means “hi” or “hey” in English; pronunciation of “ne” with a high falling F0 contour is used when the speaker wants to attract a listener’s attention (i.e., social intonation). Meanwhile, the Japanese concrete noun “nu” has no communicative meaning. We applied an adaptive spatial filtering method to the neuromagnetic time course recorded by whole-head magnetoencephalography (MEG) and estimated the spatiotemporal frequency dynamics of event-related cerebral oscillatory changes in beta band during the oddball paradigm. During the perception of the F0 contour change when “ne” was presented, there was event-related de-synchronization (ERD) in the right temporal lobe. In contrast, during the perception of the F0 contour change when “nu” was presented, ERD occurred in the left temporal lobe and in the bilateral occipital lobes. ERD that occurred during the social stimulus “ne” in the right hemisphere was significantly correlated with a greater number of autistic traits measured according to the Autism Spectrum Quotient (AQ), suggesting that the differences in human voice processing are associated with higher autistic traits, even in non-clinical subjects.
Theory of mind (ToM), our ability to predict the behavior of others in terms of their underlying intentions, has been examined through verbal and nonverbal false-belief (FB) tasks. Previous brain imaging studies of ToM in adults have implicated the medial prefrontal cortex (mPFC) and temporo-parietal junction (TPJ) in adults’ ToM ability. To examine age- and modality-related differences and similarities in the neural correlates of ToM, we tested 16 adults (18–40 years old) and 12 children (8–12 years old) with verbal (story) and nonverbal (cartoon) FB tasks, using functional magnetic resonance imaging (fMRI). Both age groups showed significant activity in the TPJ bilaterally and the right inferior parietal lobule (IPL) in a modality-independent manner, indicating that these areas are important for ToM during both adulthood and childhood, regardless of modality. We also found significant age-related differences in ToM condition-specific activity for the story and cartoon tasks in the left inferior frontal gyrus (IFG) and left TPJ. These results suggest that, depending on the modality, adults may utilize different brain regions from children in understanding ToM.
fMRI; Theory of Mind; Cognitive Development; Language; Temporo-parietal junction
Emotional prosody comprehension (EPC), the ability to interpret another person's feelings by listening to their tone of voice, is crucial for effective social communication. Previous studies assessing the neural correlates of EPC have found inconsistent results, particularly regarding the involvement of the medial prefrontal cortex (mPFC). It remains unclear whether the involvement of the mPFC is linked to an increased demand on socio-cognitive components of EPC, such as mental state attribution, and whether basic perceptual processing of EPC can be performed without the contribution of this region.
fMRI was used to delineate neural activity during the perception of prosodic stimuli conveying simple and complex emotion. Emotional trials in general, as compared to neutral ones, activated a network comprising temporal and lateral frontal brain regions, while complex emotion trials specifically showed an additional involvement of the mPFC, premotor cortex, frontal operculum and left insula.
These results indicate that the mPFC and premotor areas might be associated, but are not crucial to EPC. However, the mPFC supports socio-cognitive skills necessary to interpret complex emotion such as inferring mental states. Additionally, the premotor cortex involvement may reflect the participation of the mirror neuron system for prosody processing particularly of complex emotion.
Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker's voice amidst others). Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and non-musicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not non-musicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians’ neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development and maintenance of language-related skills, musical training may aid in the prevention, habilitation, and remediation of individuals with a wide range of attention-based language, listening and learning impairments.
attention; cortical variability; auditory-evoked potentials; music; musicians; speech in noise; prefrontal cortex; language
Autism spectrum disorders (ASD) impact social functioning and communication, and individuals with these disorders often have restrictive and repetitive behaviors. Accumulating data indicate that ASD is associated with alterations of neural circuitry. Functional MRI (FMRI) studies have focused on connectivity in the context of psychological tasks. However, even in the absence of a task, the brain exhibits a high degree of functional connectivity, known as intrinsic or resting connectivity. Notably, the default network, which includes the posterior cingulate cortex, retro-splenial, lateral parietal cortex/angular gyrus, medial prefrontal cortex, superior frontal gyrus, temporal lobe, and parahippocampal gyrus, is strongly active when there is no task. Altered intrinsic connectivity within the default network may underlie offline processing that may actuate ASD impairments. Using FMRI, we sought to evaluate intrinsic connectivity within the default network in ASD. Relative to controls, the ASD group showed weaker connectivity between the posterior cingulate cortex and superior frontal gyrus and stronger connectivity between the posterior cingulate cortex and both the right temporal lobe and right parahippocampal gyrus. Moreover, poorer social functioning in the ASD group was correlated with weaker connectivity between the posterior cingulate cortex and the superior frontal gyrus. In addition, more severe restricted and repetitive behaviors in ASD were correlated with stronger connectivity between the posterior cingulate cortex and right parahippocampal gyrus. These findings indicate that ASD subjects show altered intrinsic connectivity within the default network, and connectivity between these structures is associated with specific ASD symptoms.
In verbal communication, not only does the meaning of the words convey information; the tone of voice (prosody) also conveys crucial information about the emotional state and intentions of others. In various studies, right frontal and right temporal regions have been found to play a role in emotional prosody perception. Here, we used triple-pulse repetitive transcranial magnetic stimulation (rTMS) to shed light on the precise time course of involvement of the right anterior superior temporal gyrus and the right fronto-parietal operculum. We hypothesized that information would be processed in the right anterior superior temporal gyrus before being processed in the right fronto-parietal operculum. Right-handed healthy subjects performed an emotional prosody task. While subjects listened to each sentence, a triplet of TMS pulses was applied to one of the regions at one of six time points (400–1900 ms). Results showed a significant main effect of Time for the right anterior superior temporal gyrus and the right fronto-parietal operculum. The largest interference was observed half-way through the sentence. This effect was stronger for withdrawal emotions than for the approach emotion. A further experiment including an active control condition, TMS over the EEG site POz (midline parietal-occipital junction), revealed stronger effects at the fronto-parietal operculum and anterior superior temporal gyrus relative to the active control condition. No evidence was found for sequential processing of emotional prosodic information from the right anterior superior temporal gyrus to the right fronto-parietal operculum; rather, the results revealed more parallel processing. Our results suggest that both the right fronto-parietal operculum and the right anterior superior temporal gyrus are critical for emotional prosody perception at a relatively late period after sentence onset. This may reflect the fact that emotional cues can still be ambiguous at the beginning of a sentence but become more apparent half-way through it.
A common feature of the antisocial, rule-breaking behavior that is central to criminal, violent and psychopathic individuals is the failure to follow moral guidelines. This review summarizes key findings from brain imaging research on both antisocial behavior and moral reasoning, and integrates these findings into a neural moral model of antisocial behavior. Key areas found to be functionally or structurally impaired in antisocial populations include dorsal and ventral regions of the prefrontal cortex (PFC), amygdala, hippocampus, angular gyrus, anterior cingulate and temporal cortex. Regions most commonly activated in moral judgment tasks consist of the polar/medial and ventral PFC, amygdala, angular gyrus and posterior cingulate. It is hypothesized that the rule-breaking behavior common to antisocial, violent and psychopathic individuals is in part due to impairments in some of the structures (dorsal and ventral PFC, amygdala and angular gyrus) subserving moral cognition and emotion. Impairment of the emotional component that comprises the feeling of what is moral is viewed as the primary deficit in antisocial individuals, although some disruption to the cognitive and cognitive-emotional components of morality (particularly self-referential thinking and emotion regulation) cannot be ruled out. While this neurobiological predisposition is likely only one of several biosocial processes involved in the etiology of antisocial behavior, it raises significant moral issues for the legal system and neuroethics.
antisocial; psychopathy; moral; prefrontal; temporal
To determine whether differences between English- and Spanish-speaking parents in ratings of their children’s health care can be explained by the need for interpretive services.
Using the Consumer Assessment of Health Plans Survey (CAHPS) Child Survey, reports about provider communication were compared among 3 groups of parents enrolled in a Medicaid managed care health plan: 1) English speakers, 2) Spanish speakers with no self-reported need for interpretive services, and 3) Spanish speakers with self-reported need for interpretive services. Parents were asked to report how well their providers 1) listened carefully to what was being said, 2) explained things in a way that could be understood, 3) respected their comments and concerns, and 4) spent enough time during medical encounters. Multivariate logistic regression was used to compare the ratings of the 3 groups while controlling for child’s gender, parent’s gender, parent’s educational attainment, child’s health status, and survey year.
Spanish-speaking parents in need of interpretive services were less likely than English-speaking parents to report that providers spent enough time with their children (odds ratio = 0.34; 95% confidence interval = 0.17–0.68). No statistically significant difference was found between Spanish-speaking parents with no need for interpretive services and English-speaking parents.
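For intuition, an odds ratio like the one reported here is the ratio of the two groups' odds of endorsing the item (here, reporting that providers spent enough time). A minimal worked example with invented counts, chosen only to show how an odds ratio near 0.34 can arise, and not taken from the study's data:

```python
# Hypothetical 2x2 table (illustrative counts, not the study's data):
# each group split into parents who did vs. did not report "enough time".
yes_need, no_need = 40, 60    # Spanish speakers needing interpretive services
yes_eng, no_eng = 130, 66    # English-speaking reference group

odds_need = yes_need / no_need   # odds of a positive report in the need group
odds_eng = yes_eng / no_eng      # odds in the reference group
odds_ratio = odds_need / odds_eng

print(round(odds_ratio, 2))  # → 0.34
```

An odds ratio of 0.34 means the odds of a positive report in the interpreter-need group are about one third of the odds in the English-speaking group; the reported regression estimate additionally adjusts for the covariates listed above.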
Among Spanish- versus English-speaking parents, differences in ratings of whether providers spent enough time with children during medical encounters appear to be explained, in part, by need for interpretive services. No other differences in ratings of provider communication were found.
Consumer Assessment of Health Plan Survey; health disparities; Medicaid managed care; ratings of provider communication
Humans possess a remarkable ability to attend to a single speaker’s voice in a multi-talker background [1–3]. How the auditory system manages to extract intelligible speech under such acoustically complex and adverse listening conditions is not known, and, indeed, it is not clear how attended speech is internally represented [4, 5]. Here, using multi-electrode surface recordings from the cortex of subjects engaged in a listening task with two simultaneous speakers, we demonstrate that population responses in non-primary human auditory cortex encode critical features of attended speech: speech spectrograms reconstructed based on cortical responses to the mixture of speakers reveal the salient spectral and temporal features of the attended speaker, as if subjects were listening to that speaker alone. A simple classifier trained solely on examples of single speakers can decode both attended words and speaker identity. We find that task performance is well predicted by a rapid increase in attention-modulated neural selectivity across both single-electrode and population-level cortical responses. These findings demonstrate that the cortical representation of speech does not merely reflect the external acoustic environment, but instead gives rise to the perceptual aspects relevant for the listener’s intended goal.
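The idea of a simple classifier trained solely on single-speaker examples can be illustrated with a nearest-template scheme: the response pattern evoked by each speaker heard alone serves as a template, and the response to the two-speaker mixture is assigned to whichever template it correlates with best. The sketch below uses simulated response patterns and invented mixing weights, not actual cortical recordings:

```python
import numpy as np

rng = np.random.default_rng(1)
n_features = 32  # e.g., electrodes or spectrogram channels

# Simulated mean cortical response patterns to each speaker heard alone.
template_a = rng.standard_normal(n_features)
template_b = rng.standard_normal(n_features)

def decode_attended(response, templates):
    """Nearest-template classifier: index of the best-correlated template."""
    corrs = [np.corrcoef(response, t)[0, 1] for t in templates]
    return int(np.argmax(corrs))

# Response to the mixture while attending speaker A: attention makes the
# population response resemble A's single-speaker pattern more than B's.
mixture_attend_a = 0.8 * template_a + 0.2 * template_b \
    + 0.3 * rng.standard_normal(n_features)

assert decode_attended(mixture_attend_a, [template_a, template_b]) == 0
```

The key property mirrored here is that the decoder never sees mixture data during training; attention-driven selectivity is what makes the mixture response match the attended speaker's template.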
In noisy environments, listeners tend to hear a speaker’s voice yet struggle to understand what is said. The most effective way to improve intelligibility in such conditions is to watch the speaker’s mouth movements. Here we identify the neural networks that distinguish understanding from merely hearing speech, and determine how the brain applies visual information to improve intelligibility. Using functional magnetic resonance imaging, we show that understanding speech-in-noise is supported by a network of brain areas including the left superior parietal lobule, the motor/premotor cortex, and the left anterior superior temporal sulcus (STS), a likely apex of the acoustic processing hierarchy. Multisensory integration likely improves comprehension through improved communication between the left temporal–occipital boundary, the left medial-temporal lobe, and the left STS. This demonstrates how the brain uses information from multiple modalities to improve speech comprehension in naturalistic, acoustically adverse conditions.
While major depressive disorder has been shown to be a significant mental health issue for school-age children, recent research indicates that depression can be observed in children as early as the preschool period. Yet, little work has been done to explore the neurobiological factors associated with this early form of depression. Given research suggesting a relation between adult depression and anomalies in emotion-related neural circuitry, the goal of the current study was to elucidate changes in functional activation during negative mood induction and emotion regulation in school-age children with a history of preschool-onset depression. The results suggest that a history of depression during the preschool period is associated with decreased activity in prefrontal cortex during mood induction and regulation. Moreover, the severity of current depressed mood was associated with increased activity in limbic regions, such as the amygdala, particularly in children with a history of depression. Similar to results observed in adult depression, the current findings indicate disruptions in emotion-related neural circuitry associated with preschool-onset depression.
preschool depression; imaging; emotion regulation; prefrontal; amygdala