In the present study, we investigated brain morphological signatures of dyslexia using a voxel-based asymmetry analysis. Dyslexia is a developmental disorder that affects the acquisition of reading and spelling abilities and is associated with a phonological deficit. Speech perception disabilities have been associated with this deficit, particularly when listening conditions are challenging, such as in noisy environments. These deficits are associated with known neurophysiological correlates, such as a reduction in functional activation or a modification of functional asymmetry in the cortical regions involved in speech processing, such as the bilateral superior temporal areas. These functional deficits have been associated with macroscopic morphological abnormalities, potentially including a reduction in gray and white matter volumes, combined with modifications of the leftward asymmetry along the perisylvian areas. The purpose of this study was to investigate gray/white matter distribution asymmetries in dyslexic adults using automated image processing derived from the voxel-based morphometry technique. Correlations with speech-in-noise perception abilities were also investigated. The results confirmed the presence of gray matter distribution abnormalities in the superior temporal gyrus (STG) and the superior temporal sulcus (STS) in individuals with dyslexia. Specifically, the gray matter of adults with dyslexia was symmetrically distributed over one particular region of the STS, the temporal voice area, whereas normal readers showed a clear rightward gray matter asymmetry in this area. We also identified a region in the left posterior STG in which the white matter distribution asymmetry was correlated with speech-in-noise comprehension abilities in dyslexic adults.
These results provide further information concerning the morphological alterations observed in dyslexia, revealing the presence of both gray and white matter distribution anomalies and the potential involvement of these defects in speech-in-noise deficits.
There is mounting evidence for associations between sedentary behaviours and adverse health outcomes, although the data on occupational sitting and mortality risk remain equivocal. The aim of this study was to determine the association between occupational sitting and cardiovascular, cancer and all-cause mortality in a pooled sample of seven British general population cohorts.
The sample comprised 5380 women and 5788 men in employment who were drawn from five Health Survey for England and two Scottish Health Survey cohorts. Participants were classified as reporting standing, walking or sitting in their work time and were followed up over 12.9 years for mortality. Data were modelled using Cox proportional hazards regression adjusted for age, waist circumference, self-reported general health, frequency of alcohol intake, cigarette smoking, non-occupational physical activity, prevalent cardiovascular disease and cancer at baseline, psychological health, social class, and education.
In total there were 754 all-cause deaths. In women, a standing/walking occupation was associated with lower risk of all-cause (fully adjusted hazard ratio [HR] = 0.68, 95% CI 0.52–0.89) and cancer (HR = 0.60, 95% CI 0.43–0.85) mortality, compared to sitting occupations. There were no associations in men. In analyses with combined occupational type and leisure-time physical activity, the risk of all-cause mortality was lowest in participants with non-sitting occupations and high leisure-time activity.
Sitting occupations are linked to increased risk for all-cause and cancer mortality in women only, but no such associations exist for cardiovascular mortality in men or women.
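The interval arithmetic behind hazard ratios like those reported above is easy to reproduce: a Cox model yields a log-hazard coefficient β with standard error SE, and HR = exp(β) with 95% CI exp(β ± 1.96·SE). A minimal sketch in Python; the standard error here is a hypothetical value chosen so the output matches the reported women's all-cause interval, not a study estimate:

```python
import math

def hazard_ratio_ci(beta, se, z=1.959964):
    """Convert a Cox log-hazard coefficient and its standard error
    into a hazard ratio with a 95% confidence interval."""
    hr = math.exp(beta)
    return hr, math.exp(beta - z * se), math.exp(beta + z * se)

# Illustrative only: beta gives HR = 0.68; se = 0.137 is hypothetical
beta, se = math.log(0.68), 0.137
hr, lo, hi = hazard_ratio_ci(beta, se)
print(f"HR = {hr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # HR = 0.68, 95% CI 0.52-0.89
```

Note that the reported CIs pin down the implied standard errors, which is useful when meta-analysing published hazard ratios.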
The present study investigates the relationship between inter-individual differences in fearful face recognition and amygdala volume. Thirty normal adults were recruited and each completed two identical facial expression recognition tests offline and two magnetic resonance imaging (MRI) scans. Linear regression indicated that the left amygdala volume negatively correlated with the accuracy of recognizing fearful facial expressions and positively correlated with the probability of misrecognizing fear as surprise. Further exploratory analyses revealed that this relationship did not exist for any other subcortical or cortical regions. Nor did such a relationship exist between the left amygdala volume and performance recognizing the other five facial expressions. These mind-brain associations highlight the importance of the amygdala in recognizing fearful faces and provide insights regarding inter-individual differences in sensitivity toward fear-relevant stimuli.
Recent studies have demonstrated that working memory can be improved by training. We recruited healthy adult participants and used adaptive running working memory training tasks in a double-blind design, combined with the event-related potentials (ERPs) approach, to explore the influence of updating-function training on brain activity. Participants in the training group underwent training for 20 days. Compared with the control group, the training group's accuracy (ACC) in the two-back working memory task showed no significant difference after training, but reaction time (RT) was significantly reduced. In addition, the amplitudes of the N160 and P300 components increased significantly, whereas that of the P200 decreased significantly. The results suggest that training may have improved the participants' capacity for both inhibition and updating.
Fractional anisotropy (FA) is the most commonly used quantitative measure of diffusion in the brain. Changes in FA have been reported in many neurological disorders, but the implementation of diffusion tensor imaging (DTI) in daily clinical practice remains challenging. We propose a novel color look-up table (LUT) based on normative data as a tool for screening FA changes. FA was calculated for 76 healthy volunteers using 12 motion-probing gradient directions (MPG); a subset of 59 subjects was additionally scanned using 30 MPG. Population means and 95% prediction intervals for FA in the corpus callosum, frontal gray matter, thalamus and basal ganglia were used to create the LUT. Unique colors were assigned to inflection points with continuous ramps between them. Clinical use was demonstrated on 17 multiple system atrophy (MSA) patients compared to 13 patients with Parkinson disease (PD) and 17 healthy subjects. Four blinded radiologists classified subjects as MSA/non-MSA. Using only the LUT, high sensitivity (80%) and specificity (84%) were achieved in differentiating MSA subjects from PD subjects and controls. The LUTs generated from 12 and 30 MPG were comparable and accentuated FA abnormalities.
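The normative prediction intervals used to build such a LUT can be sketched as follows: for approximately normal regional FA values, a 95% prediction interval for a single new subject is mean ± t·s·√(1 + 1/n). This is a generic illustration on simulated values, not the study's data or exact procedure:

```python
import numpy as np
from scipy import stats

def prediction_interval(values, coverage=0.95):
    """95% prediction interval for one new observation, assuming the
    normative values are approximately normally distributed."""
    x = np.asarray(values, dtype=float)
    n = x.size
    mean, s = x.mean(), x.std(ddof=1)
    t = stats.t.ppf(0.5 + coverage / 2, df=n - 1)
    half = t * s * np.sqrt(1 + 1 / n)
    return mean - half, mean + half

# Hypothetical normative FA values for one region in 76 volunteers
rng = np.random.default_rng(0)
fa = rng.normal(0.70, 0.03, size=76)
lo, hi = prediction_interval(fa)
```

A subject whose regional FA falls outside [lo, hi] would then map to an out-of-range color in the LUT.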
A consolidated approach to the study of the mental representation of word meanings has consisted in contrasting different domains of knowledge, broadly reflecting the abstract-concrete dichotomy. More fine-grained semantic distinctions have emerged in neuropsychological and cognitive neuroscience work, reflecting semantic category specificity, but almost exclusively within the concrete domain. Theoretical advances, particularly within the area of embodied cognition, have more recently put forward the idea that distributed neural representations tied to the kinds of experience maintained with the concepts' referents might distinguish conceptual meanings with a high degree of specificity, including those within the abstract domain. Here we report the results of two psycholinguistic rating studies incorporating such theoretical advances with two main objectives: first, to provide empirical evidence of fine-grained distinctions within both the abstract and the concrete semantic domains with respect to relevant psycholinguistic dimensions; second, to develop a carefully controlled linguistic stimulus set that may be used for auditory as well as visual neuroimaging studies focusing on the parametrization of the semantic space beyond the abstract-concrete dichotomy. Ninety-six participants rated a set of 210 sentences across pre-selected concrete (mouth, hand, or leg action-related) and abstract (mental state-, emotion-, mathematics-related) categories, with respect either to different semantic domain-related scales (rating study 1), or to concreteness, familiarity, and context availability (rating study 2). Inferential statistics and correspondence analyses highlighted distinguishing semantic and psycholinguistic traits for each of the pre-selected categories, indicating that a simple abstract-concrete dichotomy is not sufficient to account for the entire semantic variability within either domain.
Knowledge of word meanings and grammatical rules alone does not allow a listener to grasp the intended meaning of a speaker’s utterance. Pragmatic inferences on the part of the listener are also required. The present work focuses on the processing of ironic utterances (imagine a slow day being described as “really productive”) because these clearly require the listener to go beyond the linguistic code. Such utterances are advantageous experimentally because they can serve as their own controls in the form of literal sentences (now imagine an active day being described as “really productive”) as we employ techniques from electrophysiology (EEG). Importantly, the results confirm previous ERP findings showing that irony processing elicits an enhancement of the P600 component (Regel et al., 2011). More original are the findings drawn from Time Frequency Analysis (TFA), especially the increase of power in the gamma band in the 280–400 ms time-window, which points to an integration of different streams of information relatively early in the comprehension of an irony. This represents a departure from traditional accounts of language processing, which generally view pragmatic inferences as late-arriving. We propose that these results indicate that unification operations between the linguistic code and contextual information play a critical role throughout the course of irony processing, and earlier than previously thought.
Delusions are the persistent and often bizarre beliefs that characterise psychosis. Previous studies have suggested that their emergence may be explained by disturbances in prediction-error-dependent learning. Here we set up complementary studies to examine whether such a disturbance also modulates memory reconsolidation and hence explains their remarkable persistence. First, we quantified individual brain responses to prediction error in a causal learning task in 18 human subjects (8 female). Next, we conducted a placebo-controlled within-subjects study of the impact of ketamine in the same individuals. We determined the influence of this NMDA receptor antagonist (previously shown to induce aberrant prediction error signals and transient alterations in perception and belief) on the evolution of a fear memory over a 72-hour period: subjects initially underwent Pavlovian fear conditioning; 24 hours later, during ketamine or placebo administration, the conditioned stimulus (CS) was presented once, without reinforcement; memory strength was then tested again 24 hours later. Re-presentation of the CS under ketamine led to a stronger subsequent memory than under placebo. Moreover, the degree of strengthening correlated with individual vulnerability to ketamine's psychotogenic effects and with the prediction error brain signal. This finding was partially replicated in an independent sample with an appetitive learning procedure (in 8 human subjects, 4 female). These results suggest a link between altered prediction error, memory strength and psychosis. They point to a core disruption that may explain not only the emergence of delusional beliefs but also their persistence.
Sarcopenia is associated with loss of independence and ill-health in the elderly, although its causes remain poorly understood. We examined the association between two screen-based leisure-time sedentary activities (daily TV viewing time and internet use) and muscle strength.
Methods and Results
We studied 6228 men and women (aged 64.9±9.1 yrs) from wave 4 (2008-09) of the English Longitudinal Study of Ageing, a prospective study of community dwelling older adults. Muscle strength was assessed by a hand grip test and the time required to complete five chair rises. TV viewing and internet usage were inversely associated with one another. Participants viewing TV ≥6hrs/d had lower grip strength (Men, B = −1.20 kg, 95% CI, −2.26, −0.14; Women, −0.75 kg, 95% CI, −1.48, −0.03) in comparison to <2hrs/d TV, after adjustment for age, physical activity, smoking, alcohol, chronic disease, disability, depressive symptoms, social status, and body mass index. In contrast, internet use was associated with higher grip strength (Men, B = 2.43 kg, 95% CI, 1.74, 3.12; Women, 0.76 kg, 95% CI, 0.32, 1.20). These associations persisted after mutual adjustment for both types of sedentary behaviour.
In older adults, the association between sedentary activities and physical function is context specific (TV viewing vs. computer use). Adverse effects of TV viewing might reflect the prolonged sedentary nature of this behavior.
The brain remains electrically and metabolically active during resting conditions. Low-frequency oscillations (LFO) of the functional magnetic resonance imaging (fMRI) blood oxygen level-dependent (BOLD) signal that are coherent across distributed brain regions are known to reflect this activity. However, these intrinsic oscillations may undergo dynamic changes on time scales of seconds to minutes during resting conditions. Here, using wavelet-transform-based time-frequency analysis techniques, we investigated the dynamic nature of default-mode networks from intrinsic BOLD signals recorded from participants maintaining visual fixation during resting conditions. We focused on the default-mode network consisting of the posterior cingulate cortex (PCC), the medial prefrontal cortex (mPFC), the left middle temporal cortex (LMTC) and the left angular gyrus (LAG). The analysis of spectral power and causal flow patterns revealed that the intrinsic LFO undergo significant dynamic changes over time. Dividing the 0.01–0.25 Hz frequency interval of the LFO into four bands, slow-5 (0.01–0.027 Hz), slow-4 (0.027–0.073 Hz), slow-3 (0.073–0.198 Hz) and slow-2 (0.198–0.25 Hz), we further observed significant positive linear relationships between slow-4 in-out flow of network activity and slow-5 node activity, and between slow-3 in-out flow of network activity and slow-4 node activity. The network activity associated with the respiratory-related frequency band (slow-2) was found to have no relationship with node activity in any of the frequency intervals. We found that the net causal flow towards a node in the slow-3 band was correlated with the number of fibers, obtained from diffusion tensor imaging (DTI) data, from the other nodes connecting to that node.
These findings imply that so-called resting state is not ‘entirely’ at rest, the higher frequency network activity flow can predict the lower frequency node activity, and the network activity flow can reflect underlying structural connectivity.
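The division of the LFO range into slow bands can be illustrated with a plain-FFT band-power computation. The study itself used wavelet-based time-frequency analysis, which additionally resolves changes over time; the signal below is synthetic, with a 0.05 Hz oscillation that should land in slow-4:

```python
import numpy as np

# Frequency bands as defined in the study (Hz)
BANDS = {"slow-5": (0.01, 0.027), "slow-4": (0.027, 0.073),
         "slow-3": (0.073, 0.198), "slow-2": (0.198, 0.25)}

def band_power(signal, fs):
    """Mean FFT power of a time series within each slow band."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {name: power[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# Synthetic BOLD-like series: 0.05 Hz oscillation plus noise
fs = 0.5                       # e.g. TR = 2 s
t = np.arange(0, 600, 1 / fs)  # 10 minutes of data
rng = np.random.default_rng(1)
sig = np.sin(2 * np.pi * 0.05 * t) + 0.1 * rng.normal(size=t.size)
p = band_power(sig, fs)        # power concentrates in slow-4
```

The long 10-minute window matters: at a 600 s duration the frequency resolution is 1/600 Hz, which is needed to populate the narrow slow-5 band with enough FFT bins.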
We present an efficient approach to discriminate between typical and atypical brains from macroscopic neural dynamics recorded as magnetoencephalograms (MEG). Our approach is based on the fact that spontaneous brain activity can be accurately described with stochastic dynamics, as a multivariate Ornstein-Uhlenbeck process (mOUP). By fitting the data to a mOUP we obtain: 1) the functional connectivity matrix, corresponding to the drift operator, and 2) the traces of background stochastic activity (noise) driving the brain. We applied this method to investigate functional connectivity and background noise in juvenile patients (n = 9) with Asperger’s syndrome, a form of autism spectrum disorder (ASD), and compared them to age-matched juvenile control subjects (n = 10). Our analysis reveals significant alterations in both functional brain connectivity and background noise in ASD patients. The dominant connectivity change in ASD relative to control shows enhanced functional excitation from occipital to frontal areas along a parasagittal axis. Background noise in ASD patients is spatially correlated over wide areas, as opposed to control, where areas driven by correlated noise form smaller patches. An analysis of the spatial complexity reveals that it is significantly lower in ASD subjects. Although the detailed physiological mechanisms underlying these alterations cannot be determined from macroscopic brain recordings, we speculate that enhanced occipital-frontal excitation may result from changes in white matter density in ASD, as suggested in previous studies. We also venture that long-range spatial correlations in the background noise may result from less specificity (or more promiscuity) of thalamo-cortical projections. All the calculations involved in our analysis are highly efficient and outperform other algorithms to discriminate typical and atypical brains with a comparable level of accuracy. 
Altogether our results demonstrate a promising potential of our approach as an efficient biomarker for altered brain dynamics associated with a cognitive phenotype.
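The core fitting step described above can be sketched compactly. For a multivariate OU process dx = Ax dt + σ dW, an Euler discretization turns drift estimation into a least-squares regression of the increments on the current state. The simulation below uses a made-up 2-node drift matrix to show that the procedure recovers it; this is a sketch of the general idea, not the authors' implementation:

```python
import numpy as np

def fit_mou_drift(x, dt):
    """Least-squares estimate of the drift matrix A of a multivariate
    Ornstein-Uhlenbeck process, from the Euler form dx ~ x @ A.T * dt."""
    dx = np.diff(x, axis=0)                       # increments, shape (T-1, d)
    A_T, *_ = np.linalg.lstsq(x[:-1] * dt, dx, rcond=None)
    return A_T.T

# Simulate a 2-node OU process with a known, stable drift matrix
rng = np.random.default_rng(2)
A_true = np.array([[-1.0, 0.3], [0.2, -1.5]])
dt, T = 0.01, 100_000
x = np.zeros((T, 2))
for t in range(T - 1):
    x[t + 1] = x[t] + x[t] @ A_true.T * dt + 0.1 * np.sqrt(dt) * rng.normal(size=2)

A_hat = fit_mou_drift(x, dt)   # close to A_true
```

The residuals of the same regression give the background noise traces that the method analyses alongside the drift (functional connectivity) matrix.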
Cortical excitability may be subject to changes through training and learning. Motor training can increase cortical excitability in the motor cortex, and facilitation of motor cortical excitability has been shown to correlate positively with improvements in performance on simple motor tasks. Thus, cortical excitability may tentatively be considered a marker of learning and use-dependent plasticity. Previous studies focused on changes in cortical excitability brought about by learning processes; however, the relation between native levels of cortical excitability on the one hand and brain activation and behavioral parameters on the other is as yet unknown. In the present study, we investigated the role of differential native motor cortical excitability in learning a motor sequencing task with regard to post-training changes in excitability, behavioral performance and involvement of brain regions. Our motor task required participants to reproduce and improvise over a pre-learned motor sequence. Over both task conditions, participants with low cortical excitability (CElo) showed significantly higher BOLD activation in task-relevant brain regions than participants with high cortical excitability (CEhi). In contrast, the CElo and CEhi groups did not differ in percentage of correct responses or improvisation level. Moreover, cortical excitability did not change significantly after learning and training in either group, with the exception of a significant decrease in facilitatory excitability in the CEhi group. The present data suggest that the native, unmanipulated level of cortical excitability is related to brain activation intensity, but not to performance quality. The higher BOLD mean signal intensity during the motor task might reflect a compensatory mechanism in CElo participants.
Semantic priming is usually studied by examining ERPs averaged over many trials and subjects. This article aims at detecting semantic priming at the single-trial level. Using machine learning techniques, it is possible to analyse and classify short traces of brain activity, which could, for example, be used to build a Brain-Computer Interface (BCI). This article describes an experiment in which subjects were presented with word pairs and asked to decide whether the words were related or not. A classifier was trained to determine whether the subjects judged words as related or unrelated based on one second of EEG data. The results show that classifier accuracy when training per subject varies between 54% and 67% and is significantly above chance level for all subjects (N = 12). When training across subjects, accuracy varies between 51% and 63% and is significantly above chance level for 11 subjects, pointing to a general effect.
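Whether a per-subject accuracy is "significantly above chance" is typically checked against a binomial null. A minimal sketch; the trial counts below are hypothetical, since they are not reported in this summary:

```python
from scipy.stats import binomtest

def above_chance(n_correct, n_trials, chance=0.5, alpha=0.05):
    """One-sided exact binomial test: is the observed single-trial
    classification accuracy significantly above chance level?"""
    res = binomtest(n_correct, n_trials, p=chance, alternative="greater")
    return res.pvalue, res.pvalue < alpha

# Hypothetical subjects: 67% vs. 54% accuracy over 100 trials each
p_hi, sig_hi = above_chance(67, 100)   # clearly above chance
p_lo, sig_lo = above_chance(54, 100)   # not significant at this n
```

Note that whether a 54% accuracy clears chance depends entirely on the number of trials, which is why per-subject trial counts matter when interpreting such results.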
We report data from an internet questionnaire of sixty number-trivia items. Participants were asked for the number of cups in their house, the number of cities they know, and 58 other quantities. We compare the answers of familial sinistrals – individuals who are left-handed themselves or have a left-handed close blood-relative – with those of pure familial dextrals – right-handed individuals who reported having only right-handed close blood-relatives. We show that familial sinistrals use rounder numbers than pure familial dextrals in their survey responses. Round numbers in the decimal system are those that are multiples of powers of 10, or of half or a quarter of a power of 10. Roundness is a gradient concept, e.g. 100 is rounder than 50 or 200. We show that very round numbers like 100 and 1000 are used with 25% greater likelihood by familial sinistrals than by pure familial dextrals, while pure familial dextrals are more likely to use less round numbers such as 25, 60, and 200. We then use Sigurd’s (1988, Language in Society) index of the roundness of a number and report that familial sinistrals’ responses are significantly rounder on average than those of pure familial dextrals. To explain the difference, we propose that the cognitive effort of using exact numbers is greater for the familial sinistral group because their language and number systems tend to be more distributed over both hemispheres of the brain. Our data support the view that exact and approximate quantities are processed by two separate cognitive systems. Specifically, our behavioral data corroborate the view that the evolutionarily older, approximate number system is present in both hemispheres of the brain, while the exact number system tends to be localized in only one hemisphere.
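The graded notion of roundness can be illustrated with a toy score: count trailing zeros and add a bonus when the remaining stem is a pure power of 10 or a half/quarter stem. This is a hypothetical index in the spirit of the description above, not Sigurd's (1988) actual measure:

```python
def roundness(n):
    """Toy roundness score: trailing zeros plus a bonus for a stem of
    1 (pure power of 10) or 5/25 (half or quarter of a power of 10).
    Illustrative only; not Sigurd's (1988) published index."""
    n = abs(int(n))
    if n == 0:
        return 0
    k = 0
    while n % 10 == 0:
        n //= 10
        k += 1
    if n == 1:
        return k + 2      # 10, 100, 1000, ...
    if n in (5, 25):
        return k + 1      # 50, 500, 25, 250, ...
    return k

# Reproduces the gradient from the text: 100 is rounder than 50 or 200
assert roundness(100) > roundness(200) >= roundness(50) > roundness(13)
```

Averaging such a score over each group's responses would then allow the kind of group comparison the study reports.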
Research on language and aging typically shows that language comprehension is preserved across the life span. Recent neuroimaging results suggest that this good performance is underpinned by age-related neural reorganization [e.g., Tyler, L. K., Shafto, M. A., Randall, B., Wright, P., Marslen-Wilson, W. D., & Stamatakis, E. A. Preserving syntactic processing across the adult life span: The modulation of the frontotemporal language system in the context of age-related atrophy. Cerebral Cortex, 20, 352–364, 2010]. The current study examines how age-related reorganization affects the balance between component linguistic processes by manipulating semantic and phonological factors during spoken word recognition in younger and older adults. Participants in an fMRI study performed an auditory lexical decision task where words varied in their phonological and semantic properties as measured by degree of phonological competition and imageability. Older adults had a preserved lexicality effect, but compared with younger people, their behavioral sensitivity to phonological competition was reduced, as was competition-related activity in left inferior frontal gyrus. This was accompanied by increases in behavioral sensitivity to imageability and imageability-related activity in left middle temporal gyrus. These results support previous findings that neural compensation underpins preserved comprehension in aging and demonstrate that neural reorganization can affect the balance between semantic and phonological processing.
The core of human language, which differentiates it from the communicative abilities of other species, is the set of combinatorial operations called syntax. For over a century researchers have attempted to understand how this essential function is organised in the brain. Here we combine behavioural and neuroimaging methods, with left hemisphere-damaged patients and healthy controls, to identify the pathways connecting the brain regions involved in syntactic processing. In a previous fMRI study (Tyler et al. 2010b) we established that regions of left inferior frontal cortex and left posterior middle temporal cortex were activated during syntactic processing. These clusters were used here as seeds for probabilistic tractography analyses in patients and controls, allowing us to delineate, and measure the integrity of, the white matter tracts connecting the frontal and temporal regions active for syntax. We found that structural disconnection in either of two fibre bundles - the arcuate fasciculus or the extreme capsule fibre system - was associated with syntactic impairment in patients. The results demonstrate the causal role in syntactic analysis of these two major left hemisphere fibre bundles - challenging existing views about their role in language functions, and providing a new basis for future research in this key area of human cognition.
Keywords: diffusion tensor imaging; connectivity; tractography; stroke; grammar
Understanding the relationship between brain and cognition critically depends on data from brain-damaged patients since these provide major constraints on identifying the essential components of brain–behavior systems. Here we relate structural and functional fMRI data with behavioral data in 21 human patients with chronic left hemisphere (LH) lesions and a range of language impairments to investigate the controversial issue of the role of the hemispheres in different language functions. We address this issue within a dual neurocognitive model of spoken language comprehension in which core linguistic functions, e.g., syntax, depend critically upon an intact left frontotemporal system, whereas more general communicative abilities, e.g., semantics, are supported by a bilateral frontotemporal system and may recover from LH damage through normal or enhanced activity in the intact right hemisphere. The fMRI study used a word-monitoring task that differentiated syntactic and semantic aspects of sentence comprehension. We distinguished overlapping interactions between structure, neural activity, and performance using joint independent components analysis, identifying two structural–functional networks, each with a distinct relationship with performance. Syntactic performance correlated with tissue integrity and activity in a left frontotemporal network. Semantic performance correlated with activity in right superior/middle temporal gyri regardless of tissue integrity. Right temporal activity did not differ between patients and controls, suggesting that the semantic network is degenerately organized, with regions in both hemispheres able to perform similar computations. Our findings support the dual neurocognitive model of spoken language comprehension and emphasize the importance of linguistic specificity in investigations of language recovery in patients.
The production and comprehension of human language is thought to involve a network of frontal, parietal, and temporal cortical loci interconnected by two dominant white matter pathways. These two white matter bundles, often referred to as the dorsal and ventral processing tracts, are hypothesized to have markedly different language functions. The dorsal tract is thought to support phonological processing, while the ventral tract is thought to subserve semantics. This proposed functional differentiation of tracts is similar to the ventral and dorsal dichotomy proposed for the visual and auditory systems. The present study evaluated this characterization of the language system in the context of the various components involved in its function. Twenty-four chronic stroke patients completed a battery of 10 language tests designed to measure performance on the comprehension and production of phonology, morphology, semantics, and syntax. The patients also completed diffusion MRI scanning. Lesions were confined to the left hemisphere, but the size and location of the insult varied so that patients had damage to a single tract, both tracts, or neither tract. Individual FA maps were generated, and focal areas of hypointensity served as markers of white matter damage. Whole-brain voxel-by-voxel correlations revealed that only phonological and semantic tasks fit the dual-stream model, while syntax and morphology involved both pathways. ROI analyses of the arcuate fascicle and extreme capsule supported this finding. These data suggest that natural language function is more likely to reflect a synergistic system than a segregated dual-stream system.
In the current event-related potential (ERP) study, we investigated how speech rhythm impacts speech segmentation and facilitates the resolution of syntactic ambiguities in auditory sentence processing. Participants listened to syntactically ambiguous German subject- and object-first sentences that were spoken with either regular or irregular speech rhythm. Rhythmicity was established by a constant metric pattern of three unstressed syllables between two stressed ones that created rhythmic groups of constant size. Accuracy rates in a comprehension task revealed that participants understood rhythmically regular sentences better than rhythmically irregular ones. Furthermore, the mean amplitude of the P600 component was reduced in response to object-first sentences only when embedded in rhythmically regular but not rhythmically irregular context. This P600 reduction indicates facilitated processing of sentence structure possibly due to a decrease in processing costs for the less-preferred structure (object-first). Our data suggest an early and continuous use of rhythm by the syntactic parser and support language processing models assuming an interactive and incremental use of linguistic information during language processing.
C-reactive protein (CRP) is associated with the risk of cardiovascular disease (CVD); whether its effects are modified by diabetes status is still unclear. This study investigated this issue and assessed the added value of CRP to risk prediction.
RESEARCH DESIGN AND METHODS
Participants were drawn from representative samples of adults living in England and Scotland. Cox proportional hazards regression models were used to relate baseline plasma CRP with all-cause and CVD mortality during follow-up in men and women with and without diabetes. The added value of CRP to the predictions was assessed through c-statistic comparison and relative integrated discrimination improvement.
A total of 25,979 participants (4.9% with diabetes) were followed for a median of 93 months, during which period there were 2,767 deaths (957 from CVD). CRP (per SD loge) was associated with a 53% (95% CI 43–64) and 43% (38–49) higher risk of cardiovascular and all-cause mortality, respectively. These associations were log linear and did not differ according to diabetes status (both P ≥ 0.08 for interaction), sex, or other risk factors. Adding CRP to conventional risk factors improved predictions overall and separately by diabetes status but not for CVD mortality, although such improvements were only marginal based on several discrimination statistics.
The association between CRP and CVD was similar across diabetes status, and its effects were broadly similar across levels of other conventional risk factors.
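The c-statistic used above to assess added predictive value is the area under the ROC curve, computable from ranks alone. A small self-contained illustration with toy risk scores, not study data:

```python
import numpy as np

def c_statistic(risk, event):
    """Rank-based c-statistic (ROC AUC): probability that a randomly
    chosen case received a higher risk score than a randomly chosen
    non-case, counting ties as 1/2."""
    risk = np.asarray(risk, dtype=float)
    event = np.asarray(event, dtype=bool)
    cases, controls = risk[event], risk[~event]
    greater = (cases[:, None] > controls[None, :]).sum()
    ties = (cases[:, None] == controls[None, :]).sum()
    return (greater + 0.5 * ties) / (cases.size * controls.size)

# Toy example: risk scores loosely tracking events
risk = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
event = [1, 1, 0, 1, 0, 0, 1, 0]
auc = c_statistic(risk, event)   # 0.75 for this toy data
```

Assessing the added value of a marker then amounts to computing the c-statistic for models with and without CRP and comparing the two.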
Prospective studies report associations between indicators of time spent sitting and obesity risk. Most studies use a single indicator of sedentary behavior and are unable to clearly identify whether sedentary behavior is a cause or a consequence of obesity.
To investigate cross-sectional and prospective associations between multiple sitting time indicators and obesity and examine the possibility of reverse causality.
Using data from the Whitehall II cohort, multiple logistic models were fitted to examine associations between prevalent obesity (BMI ≥30) at Phase 5 (1997–1999), and incident obesity between Phases 5 and 7 (2003–2004) across four levels of five sitting exposures (work sitting, TV viewing, non-TV leisure-time sitting, leisure-time sitting, and total sitting). Using obesity data from three prior phases (1985–1988, 1991–1993, and recalled weight at age 25 years), linear regression models were fitted to examine the association between prior obesity and sitting time at Phase 5. Analyses were conducted in 2012.
None of the sitting exposures were associated with obesity either cross-sectionally or prospectively. Obesity at one previous measurement phase was associated with a 2.43-hour/week (95% CI=0.07, 4.78) increase in TV viewing; obesity at three previous phases was associated with a 7.42-hour/week (95% CI=2.7, 12.46) increase in TV-viewing hours/week at Phase 5.
Sitting time was not associated with obesity cross-sectionally or prospectively. Prior obesity was prospectively associated with time spent watching TV per week but not other types of sitting.
The neural efficiency hypothesis postulates an inverse relationship between intelligence and brain activation. Previous research suggests that gender and task modality represent two important moderators of the neural efficiency phenomenon. Since most existing studies on neural efficiency have used event-related desynchronization (ERD) in the EEG as a measure of brain activation, the central aim of this study was a more detailed analysis of this phenomenon by means of functional MRI. A sample of 20 males and 20 females, who had been screened for their visuo-spatial intelligence, was confronted with a mental rotation task employing an event-related approach. Results suggest that less intelligent individuals show a stronger deactivation of parts of the default mode network, as compared to more intelligent people. Furthermore, we found evidence of an interaction between task difficulty, intelligence and gender, indicating that more intelligent females show an increase in brain activation with an increase in task difficulty. These findings may contribute to a better understanding of the neural efficiency hypothesis, and possibly also of gender differences in the visuo-spatial domain.
In a natural setting, speech is often accompanied by gestures. Like language, speech-accompanying iconic gestures convey semantic information to some extent. However, whether comprehension of the information contained in the auditory and visual modalities depends on the same or different brain networks is largely unknown. In this fMRI study, we aimed to identify the cortical areas engaged in supramodal processing of semantic information. BOLD changes were recorded in 18 healthy right-handed male subjects watching video clips showing an actor who either performed speech (S, acoustic) or gestures (G, visual) in more (+) or less (−) meaningful varieties. In the experimental conditions, familiar speech or isolated iconic gestures were presented; during the visual control condition the volunteers watched meaningless gestures (G−), while during the acoustic control condition a foreign language was presented (S−). The conjunction of visual and acoustic semantic processing revealed activations extending from the left inferior frontal gyrus to the precentral gyrus, and included bilateral posterior temporal regions. We conclude that proclaiming this frontotemporal network the brain's core language system is to take too narrow a view. Our results rather indicate that these regions constitute a supramodal semantic processing network.