J Neurosci. Author manuscript; available in PMC 2010 April 28.
Published in final edited form as:
PMCID: PMC2795346

Predicting language lateralization from gray matter

Abstract

It has long been predicted that the degree to which language is lateralized to the left or right hemisphere might be reflected in the underlying brain anatomy. We investigated this relationship on a voxel by voxel basis across the whole brain using structural and functional MRI images from 86 healthy participants. Structural images were converted to gray matter probability images and language activation was assessed during naming and semantic decision. All images were spatially normalized to the same symmetrical template and lateralization images were generated by subtracting right from left hemisphere signal at each voxel. We show that the degree to which language was left or right lateralized was positively correlated with the degree to which gray matter density was lateralized. Post-hoc analyses revealed a general relationship between gray matter probability and BOLD signal. This is the first demonstration that structural brain scans can be used to predict language lateralization on a voxel by voxel basis in the normal healthy brain.

Keywords: gray matter, BOLD, language, lateralization, fMRI, variability

Introduction

One of the most striking examples of human brain organization is that the left hemisphere is more involved in language processing than the right hemisphere (Broca, 1865; Hecaen et al., 1981; Finger and Roe, 1999). Yet the underpinnings of this functional asymmetry are unclear. Here we investigate whether, and how, language lateralization in the normal healthy brain is predicted by inter-hemispheric differences in brain structure.

Previous studies have already suggested that language lateralization may be reflected, to some extent, by asymmetry in the underlying brain structures (Geschwind and Levitsky, 1968; Galaburda et al., 1987; Witelson and Kigar, 1992; Karbe et al., 1995; Tzourio et al., 1998; Josse et al., 2003; Luders et al., 2003; Toga and Thompson, 2003; Dorsaint-Pierre et al., 2006). For example, an auditory area of the temporal lobe, known as the planum temporale, is typically larger in the left than in the right hemisphere (Geschwind and Levitsky, 1968), even in the human foetus (Chi et al., 1977), leading to speculation that language lateralization may originate from structural asymmetry in this area. However, the very few studies that have actually measured structural asymmetry and language lateralization in the same subjects have used either small samples of normal subjects (n≤20, except for one which reported a negative result (Eckert et al., 2006)), or larger samples of patients who may have abnormal structure-function relationships (Toga and Thompson, 2003; Josse and Tzourio-Mazoyer, 2004; Dorsaint-Pierre et al., 2006). Moreover, previous studies have compared structural asymmetry and language lateralization at different levels, either by comparing local (regional) structural asymmetries with global (hemispheric) measures of language lateralization (Foundas et al., 1994; Foundas et al., 1996; Dorsaint-Pierre et al., 2006) or by comparing structural asymmetry in one region with language lateralization in other regions (Karbe et al., 1995; Tzourio et al., 1998; Josse et al., 2003). This approach is not optimal because the degree to which language is lateralized varies with the tasks and regions that are tested (Josse and Tzourio-Mazoyer, 2004). The relationship between structural asymmetry and language lateralization, for any given task, therefore needs to be investigated within the same brain regions.

Here we used a novel MRI analysis to regress the degree to which brain structure was lateralized with the degree to which functional language activation was lateralized. This was conducted on a voxel by voxel basis across the whole brain in 86 participants. Lateralization was determined, in each voxel, as the signal difference between the left and right hemisphere, following spatial normalization to a symmetrical template. We then used the same voxel by voxel regression analysis to examine the relationship between gray matter and activation for two other tasks that are not typically lateralized to the left or right hemisphere: visual processing and articulation. This demonstrated that the relationship between the lateralized gray matter and lateralized language activation images was likely to be a consequence of a more general relationship between gray matter and activation.


Materials and Methods

Participants

The study was approved by the local Ethics Committee and all 86 participants gave written informed consent to take part in the study (supplementary Table 1). All participants were native English speakers and were free of any psychiatric or neurological abnormalities. The group comprised 41 males and 45 females. Forty-six participants described themselves as right handed and the remaining 40 described themselves as either ambidextrous or left handed. These self-descriptions were concordant with the Edinburgh questionnaire (Oldfield, 1971).

Language lateralization index

For each subject, language lateralization was measured using the procedure described by Nagata et al. (2001), which is independent of the statistical threshold. In brief, this procedure calculates the difference in the number of left and right hemisphere voxels activated for all language tasks relative to all baseline tasks, at a range of different statistical thresholds. The relationship between lateralization and the statistical threshold is reflected in a constant term that is entered into an equation comparing the total difference in left and right hemisphere voxels divided by the total number of voxels activated. The results revealed a wide distribution of lateralization indices along a left>right to right>left continuum (Josse et al., 2008). The large majority of subjects (74/86) showed typical left-lateralization or no lateralization; the remaining 12 subjects had atypical right-lateralization and all but one of these were non-right-handers.
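
The thresholded voxel-counting idea behind this index can be illustrated with a minimal sketch. The constant term from the Nagata et al. procedure is omitted here, and the masks, array shapes and function name are hypothetical, not taken from the study's pipeline:

```python
import numpy as np

def lateralization_index(t_map, left_mask, right_mask, thresholds):
    """Compute (L - R) / (L + R) voxel-count lateralization indices
    over a range of statistical thresholds (illustrative sketch)."""
    lis = []
    for thr in thresholds:
        left = np.sum(t_map[left_mask] > thr)    # suprathreshold left voxels
        right = np.sum(t_map[right_mask] > thr)  # suprathreshold right voxels
        total = left + right
        lis.append((left - right) / total if total > 0 else 0.0)
    return np.array(lis)

# Toy example: a statistical map that is more strongly "active" on the left
rng = np.random.default_rng(0)
t_map = rng.normal(size=(10, 10, 10))
left_mask = np.zeros_like(t_map, dtype=bool)
right_mask = np.zeros_like(t_map, dtype=bool)
left_mask[:5] = True       # hypothetical left-hemisphere voxels
right_mask[5:] = True      # hypothetical right-hemisphere voxels
t_map[left_mask] += 1.0    # shift the left hemisphere upwards

lis = lateralization_index(t_map, left_mask, right_mask, np.linspace(1, 3, 5))
print(lis)  # positive values indicate left > right
```

A positive index at every threshold indicates left-lateralization; the full procedure summarizes the curve of indices across thresholds rather than picking a single one.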

Experimental design

The language activation paradigm used a 2×2 factorial design which resulted in 4 activation conditions (Figure 1). The first factor manipulated “stimulus”, which could be either pictures of objects or their written names. The second factor manipulated “task”, which could be either naming or semantic decision. Each of the four activation conditions had a corresponding sensori-motor baseline condition. For picture stimuli, the baseline involved seeing pictures of nonobjects and for written words, the baseline involved seeing unfamiliar symbols. This resulted in a total of 8 conditions: four activation tasks and four baseline tasks. In all trials for all conditions, three stimuli were simultaneously presented as a “triad”, with one stimulus above and two stimuli below. The task instructions were as follows:

Figure 1
Experimental design

1 (a) Picture naming

Participants were instructed to name aloud all three pictures in the triad, starting with the top stimulus, then the lower left stimulus and finally the lower right stimulus. Participants' verbal responses were recorded and filtered using a noise cancellation procedure to monitor accuracy and response times. The participants were trained to whisper their responses and to minimize jaw and head movements in the scanner (as in Mechelli et al. 2007). Nevertheless, it was still necessary to exclude subjects due to movement artifacts or evidence of more than 2mm movement in the translation direction.

1 (b) Picture naming baseline

Participants articulated “one, two, three” aloud while looking at the three pictures of unfamiliar nonobjects. Responses were recorded as in the naming task.

2 (a) Reading words and (b) Reading baseline

The instructions were the same as for picture naming and its baseline, i.e. read all three object names and say “one, two, three” in response to the unfamiliar symbols.

3 (a) Semantic decisions on pictures of objects

Participants were required to make a finger press response to indicate whether the stimulus above (e.g. glove) was more closely related in meaning to the stimulus on the lower left (e.g. sock) or the lower right (e.g. rope). Reaction times and accuracy were recorded. The selected response (left or right side of screen) was indicated by the spatial position of two fingers on the same hand. Thus, the right middle finger indicated the stimulus on the right whereas the index finger of the same hand indicated the stimulus on the left.

3 (b) Semantic baseline

Participants were instructed to decide whether the unfamiliar stimulus above looked identical to the stimulus on the lower left or lower right, using the same finger press responses as in the semantic conditions.

4(a) Semantic decisions on written names and 4(b) perceptual decisions on unfamiliar symbols

The instructions and recordings were identical to those described for pictures above.

Each subject participated in four scanning runs lasting approximately 6 minutes each, with two runs involving the four articulation tasks and two runs involving the four matching (finger press) tasks. Within each run, stimuli were blocked with 4 triads per block. Each run included 4 blocks (16 triads) of pictures, 4 blocks of words, as well as 2 blocks (8 triads) of nonobjects and 2 blocks of symbols. Each block was preceded by 3.6 seconds of instructions (e.g. “Name”, “Read”, “Say 1,2,3”) and each triad then remained on the screen for 4 seconds followed by 180ms of fixation, adding up to 16.7 seconds for each condition. In addition, we interspersed the task blocks with 4 blocks of fixation each lasting 14.4 seconds. The order of words and pictures was counterbalanced within each run. In addition, the order of task was counterbalanced across runs (either run 1 - articulation, run 2 - matching, run 3 - matching and run 4 - articulation; or run 1 - matching, run 2 - articulation, run 3 - articulation and run 4 - matching).

Stimuli

All stimuli were derived from a set of 192 familiar objects with three to six letter names: 33 had three letter names (e.g. “cat”, “bus”, “hat”), 65 had four letter names (“ship”, “bell”, “frog”, “hand”), 58 had five letter names (“teeth”, “camel”, “snake”) and 36 had six letter names (“spider”, “dagger”, “button”). A pilot study with 8 participants ensured inter-subject agreement on all picture names. The 192 objects were first divided into two different sets of 96 items which we will refer to as set A and set B. One group of participants was presented with set A as written words for reading aloud; set B as pictures for object naming; set B for semantic decisions on words and set A for semantic decisions on pictures. The other group was presented with set B as written words for reading aloud; set A as pictures for object naming; set A for semantic decisions on words and set B for semantic decisions on pictures. Thus, no word or picture was repeated over the experiment although each object concept occurred twice (once as a word and once as a picture). Words and pictures were counterbalanced within and between runs.

In the naming/reading triads, we minimized the semantic relationship between stimuli e.g. “lemon” (above), “cow” (lower left), “pipe” (lower right). In the semantic triads, there was a strong semantic relationship between two items in the triad but not the third item; e.g. “bomb” is more semantically related to “gun” than “bottle”. We did not include triads where the semantic decision could be made on the basis of perceptual attributes or verbal associations (e.g. “cat and dog”; “knife and fork”, “sock and shoe”). A pilot study with 8 participants ensured inter-subject agreement on the expected semantic association.

Stimulus presentation was via a video projector, a back-projection screen and a system of mirrors fastened to the head coil. Words were presented in lower case Arial and occupied 4.9 degrees (width) and 1.2 degrees (height) of the visual field. Each picture was scaled to occupy 7.3 × 8.5 degrees of the visual field.

Data acquisition

A Siemens 1.5T Sonata scanner was used to acquire both anatomical and functional images from all participants. Anatomical T1-weighted images were acquired using a three-dimensional (3D) MDEFT (modified driven equilibrium Fourier transform) sequence and 176 sagittal partitions with an image matrix of 256×224 and a final resolution of 1×1×1 mm [repetition time (TR)/echo time (TE)/inversion time (TI) = 12.24/3.56/530 ms].

Functional T2*-weighted echoplanar images with BOLD contrast comprised 40 axial slices of 2 mm thickness with 1 mm slice interval and 3 × 3 mm in-plane resolution. One-hundred and three volumes were acquired per session, leading to a total of 412 volume images across the 4 sessions. Effective repetition time (TR) was 3.6 s/volume. TR and Stimulus Onset Asynchrony did not match, allowing for distributed sampling of slice acquisition across the experiment (Veltman et al., 2002). To avoid Nyquist ghost artefacts a generalized reconstruction algorithm was used for data processing. After reconstruction, the first four volumes of each session were discarded to allow for T1 equilibration effects.

Data analysis

Image processing and statistical analyses were conducted using Statistical Parametric Mapping (SPM5; Wellcome Trust Centre for Neuroimaging, London, UK) running under Matlab 7 (Mathworks, Natick, MA, USA).

Structural MRI data

Each subject's structural MR image was segmented and spatially normalized using the default options in the unified segmentation/normalization procedure implemented in SPM5 (Ashburner and Friston, 2005). The default options are as follows: {2,2,2,4} for the number of Gaussians in the different brain tissue classes, 25mm for the cut-off of three-dimensional discrete cosine transform (DCT) basis functions for spatial warping, very light bias regularization, and 75mm width for the Gaussian smoothness of intensity bias fields. Only the resulting gray matter probability image was subsequently used for further processing. Each voxel value in that image gives the probability of being gray matter. This probability depends on (1) T1 signal values, (2) the spatial localisation of the voxel, and (3) the prior values of all brain tissues at this voxel location (e.g. a probability of 0.6 of being gray matter implies a probability of 0.4 of being white matter or CSF). The gray matter template used, representing the prior probability of gray matter at each voxel, was a symmetrical version of the default gray matter template in SPM5, which is derived from the standard MNI template (recommended by the International Consortium for Brain Mapping). Our symmetrical gray matter template was created by simply copying, flipping along the X axis and averaging the original and the mirror versions of the template. The symmetrically normalized gray matter images were resampled to 2×2×2mm voxels and smoothed with a 6mm full width half maximum isotropic Gaussian kernel to compensate for residual variability after spatial normalization. The choice of voxel size and smoothing corresponded to our standard preprocessing of the functional images (see below). We report data from unmodulated gray matter images (i.e. without a correction for local volume differences) because this provided the closest match to the processing of the functional images.
However, to ensure that we were not losing any information, we also conducted the same analyses with volume-modulated gray matter images and we found that our results were unaffected. As modulated images represent both volume and probability density, and unmodulated images represent only density, the effects appear to be driven by gray matter density rather than volume. From the resulting symmetrical and unsmoothed gray matter images, we created left-minus-right gray matter lateralization images by copying and flipping the images and then subtracting the flipped images from the unflipped images.
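
The two flip-based operations described above, averaging an image with its mirror to build a symmetrical template, and subtracting the mirror to build a left-minus-right lateralization image, can be sketched in a few lines. This is an illustration, not SPM code, and it assumes the first array axis corresponds to the left-right (X) direction; the array shapes and function names are hypothetical:

```python
import numpy as np

def make_symmetric_template(template):
    """Average an image with its left-right mirror (flip along X)."""
    return 0.5 * (template + np.flip(template, axis=0))

def lateralization_image(image):
    """Left-minus-right image: subtract the X-flipped copy from the original."""
    return image - np.flip(image, axis=0)

# Toy gray matter probability volume with axes (X, Y, Z)
gm = np.random.default_rng(1).random((8, 8, 8))
sym = make_symmetric_template(gm)
lat = lateralization_image(gm)

# The template is mirror-invariant; the lateralization image is antisymmetric
assert np.allclose(sym, np.flip(sym, axis=0))
assert np.allclose(lat, -np.flip(lat, axis=0))
```

The antisymmetry check makes the logic of the design explicit: a positive value in a left-hemisphere voxel of the lateralization image necessarily corresponds to a negative value in its right-hemisphere mirror voxel.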

Functional MRI data


Each subject's functional volumes were realigned to the same space as the corresponding structural image prior to pre-processing. Residual motion-related signal changes were adjusted using the SPM's unwarping routines (Andersson et al., 2001). Once the functional images were in the same space as the original anatomical image, the same spatial normalization parameters could be applied. In other words, the parameters for the spatial normalization of the structural image to the symmetrical template were applied to the functional images. The resulting functional images with 2×2×2mm voxels were then subjected to the same 6mm smoothing as the anatomical images.

First level analysis

At the first level, functional data were analyzed in a subject-specific fashion. Correct responses for each of the eight conditions, the instructions as well as the errors were modeled separately, with event-related delta functions for each trial, convolved with a canonical hemodynamic response function. Condition effects were estimated according to the general linear model. To exclude low frequency confounds, the data were high-pass filtered using a set of discrete cosine basis functions with a cut-off period of 128 seconds. Our effects of interest were extracted from the statistical contrasts as follows:

  • 1) Language activation. This involved the comparison of activation for naming, reading and semantic decisions to activation for perceptual decisions and saying 1,2,3 to nonobjects and symbols. The resulting image was then copied and the copy was flipped along the X axis. The resulting flipped contrast was then subtracted from its original (unflipped) version to create a map of language lateralization (for a similar rationale see Jancke et al., 2002; Josse et al., 2008).
  • 2) Visual processing, common to all conditions. This involved a comparison of activation for all 8 conditions (correct responses for the four activation and four baselines) to fixation.
  • 3) Articulation. This involved a comparison of activation for naming, reading and saying 1,2,3 to nonobjects and symbols with activation for semantic and perceptual decisions.
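
The first-level model described above, event-related delta functions convolved with a canonical hemodynamic response function, can be sketched as follows. The double-gamma shape uses SPM's default parameters (peak at 6 s, undershoot at 16 s); the onsets, scan counts and function names are hypothetical illustrations:

```python
import numpy as np
from math import gamma

def spm_hrf(tr, duration=32.0):
    """Canonical double-gamma HRF sampled at the scan TR
    (default parameters: response peak at 6 s, undershoot at 16 s)."""
    t = np.arange(0, duration, tr)
    def gpdf(t, a, b):
        # Gamma probability density with shape a and rate b
        return (b ** a) * t ** (a - 1) * np.exp(-b * t) / gamma(a)
    h = gpdf(t, 6, 1) - gpdf(t, 16, 1) / 6.0
    return h / h.sum()

def build_regressor(onsets, n_scans, tr):
    """Convolve event-related delta functions with the canonical HRF."""
    sticks = np.zeros(n_scans)
    for onset in onsets:
        sticks[int(round(onset / tr))] = 1.0   # delta function per trial
    return np.convolve(sticks, spm_hrf(tr))[:n_scans]

# Hypothetical trial onsets (seconds) for one condition, TR = 3.6 s as in the study
reg = build_regressor(onsets=[0.0, 18.0, 36.0], n_scans=40, tr=3.6)
```

One such regressor per condition (plus instruction and error regressors) forms the columns of the first-level design matrix against which the condition effects are estimated.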

Combining structural with functional MRI data

First, we computed the average structural and language lateralization maps to show the extent of both types of lateralization across the 86 subjects (see Figure 2). We then computed the following list of images for the 2nd level analyses where structural data were used as the independent variable (i.e. the regressor) and functional data were used as the dependent variable:

Figure 2
Language and gray matter lateralization considered separately

Structural images:

  1. Gray matter lateralization (left-right gray matter probability)
  2. Gray matter probability images (no subtraction between left and right values)

Functional images:

  1. Language lateralization images (left-right language activation) where language activation refers to semantic, naming and reading activation relative to activation for perceptual tasks and saying 1,2,3 (i.e., corresponding to meaningful versus meaningless in Figure 2).
  2. Language activation images (no subtraction between left and right values)
  3. Visual activation relative to fixation
  4. Articulation tasks (naming, reading and saying 1,2,3) relative to finger press tasks (semantic and perceptual matching).

The gray matter lateralization images were used to explain the language lateralization images; the gray matter probability images were used to explain the language, visual and articulation activation images.

Details of the analyses are as follows: to determine the relationship between gray matter and functional activation at a voxel by voxel level, the analyses entered the functional contrast images as the data and used the gray matter images as the regressor. Both the BOLD and gray matter values had been scaled in each voxel by the subject's mean voxel value. This is standard procedure for fMRI data analyses in SPM and was applied here to the structural image processing for consistency. In order to do the analyses, we adapted the 2nd level regression tool in SPM5 so that we could specify a different regressor (gray matter probability) at each voxel (not currently available in the distributed version of the software). A regression model can be fitted between two sets of images by using the classical GLM implemented in SPM.

In general, SPM regression involves fitting the same design matrix to every voxel:

y_i = X β_i + ε_i, for i = 1…N (the voxel positions in the brain),

where y_i is a K-dimensional vector and X is a K×p design matrix.

The spatially varying regression model is intended for the cases where the design matrix depends on the voxel position. For these cases we solve N linear regressions of the form:

y_i = X_i β_i + ε_i

Here, y_i is the functional signal in a given voxel i and X_i contains the structural signal at the same voxel i.

Summary statistics (e.g. contrasts) and T/F values can be computed for each voxel. We also used the classical SPM machinery to correct for multiple comparisons. The smoothness of the images was derived from the residuals and random field theory was applied to compute the p-values corrected for the search volume.

Four different 2nd level analyses were conducted on the images. The first used the language lateralization images as the data and the gray matter lateralization images as the regressors. The three subsequent 2nd level analyses used normalized activation (contrast) and gray matter probability images (rather than the lateralization images). These three analyses respectively used language, visual and articulation activation images as the data.
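
The per-voxel regression can be illustrated with a toy least-squares fit across subjects. This is a sketch of the idea, not the adapted SPM5 tool: the simulated data, shapes and names are hypothetical, and the multiple-comparisons machinery is omitted:

```python
import numpy as np

def voxelwise_regression(func, struct):
    """Fit, at every voxel, a regression of functional values on the
    structural values at the same voxel, across subjects.
    func, struct: arrays of shape (n_subjects, n_voxels)."""
    n, _ = func.shape
    X = struct - struct.mean(axis=0)          # demeaned per-voxel regressor
    y = func - func.mean(axis=0)
    sxx = (X ** 2).sum(axis=0)
    beta = (X * y).sum(axis=0) / sxx          # per-voxel slope
    resid = y - X * beta
    sigma2 = (resid ** 2).sum(axis=0) / (n - 2)
    t = beta / np.sqrt(sigma2 / sxx)          # per-voxel t statistic
    return beta, t

# Toy data: 86 "subjects", 100 "voxels", with a built-in positive
# structure-function relationship (slope 2) plus noise
rng = np.random.default_rng(2)
gm = rng.random((86, 100))
bold = 2.0 * gm + rng.normal(scale=0.5, size=(86, 100))
beta, t = voxelwise_regression(bold, gm)
print(beta.mean())  # close to the simulated slope of 2
```

The key difference from an ordinary SPM second-level regression is that the regressor column (here `X`) changes from voxel to voxel instead of being a single covariate shared by the whole brain.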

Results

Despite normalization of all structural and functional images to the symmetric SPM template, residual asymmetry remained, as shown in Figure 2 (see also supplementary Tables 2 and 3). Cortical maps averaged across 86 subjects show significant left>right language activations and structural asymmetries, as well as right>left structural asymmetries (which could be related to a multitude of other processes not tested in the current experiment). The only significant right>left language asymmetry was located in the cerebellum. Overlap was found between these structural and language lateralization maps in the occipito-temporal, supramarginal and inferior frontal areas (left>right) as well as in the cerebellum (right>left).

The regression between left-minus-right language activation and left-minus-right gray matter probability demonstrated that the degree to which language activation was lateralized was positively predicted by the degree to which the underlying gray matter was lateralized. Figure 3 shows that this positive relationship was observed in the majority of voxels showing language lateralization per se. Figure 4 shows details of the structure-function relationship in the three regions showing the most significant effect (p<0.05 corrected for multiple comparisons across the whole brain in either height or extent). These three regions were located in: the posterior part of the inferior frontal sulcus at the level of the pars opercularis (x = ±40, y = 16, z = 20; Z score for the structure-function relationship = 4.9), the posterior superior temporal region (±62, −20, 2; Z score = 5.0) and a ventral occipito-temporal region (±42, −44, −8; Z score = 4.2). There were no voxels showing a negative relationship between gray matter probability and language activation or lateralization. Finally, there was no evidence for a relationship between the global hemispheric lateralization index and structural asymmetry in any region.

Figure 3
The positive relationship between language and gray matter lateralization
Figure 4
The positive relationship between language and gray matter lateralization (plots)

The right side of Figure 4 illustrates that the relationship between language lateralization and gray matter lateralization arose in the context of a positive relationship between language activation and gray matter in the original normalized images. In other words, the lateralization effects were a consequence of a more general positive relationship between functional activation and gray matter. Indeed, this effect was not even specific to language activation because we also observed a positive relationship between functional activation and gray matter probability in the context of activation for visual processing and articulation (see Figure 5 and supplementary Tables 4 and 5). Left versus right asymmetries were also observed in the activation pattern for visual processing and articulation and, as with the language asymmetry maps, the relationship between structural and functional asymmetries was always positive. We do not report the details of these effects here because, unlike the language asymmetry maps that showed predominantly left lateralized activation, visual and articulation asymmetry maps showed both left > right and right > left asymmetries and their interpretation therefore requires further experimentation.

Figure 5
The positive relationship between gray matter and visual and articulatory activation

Discussion

Our findings demonstrate that the degree to which language is lateralized to the left or right hemisphere is positively predicted, on a voxel by voxel basis, by the degree to which gray matter is lateralized in the left or right hemisphere (Figure 3). Although this positive relationship was observed throughout the language system, it was exceptionally strong in three regions that were located in: (1) the inferior frontal sulcus at the level of the pars opercularis, (2) a superior temporal region that lies in the auditory association cortex, ventral and lateral to the planum temporale, and (3) a left occipito-temporal region on the border between the middle temporal cortex and the occipital lobe but critically less posterior and less ventral than the area known as the visual word form area (Figure 4). The identification of these regions in our study is not surprising. The pars opercularis is associated with both phonological and semantic processing of auditory and visual words (Gold and Buckner, 2002; Devlin et al., 2003; Gitelman et al., 2005; Cone et al., 2008). Likewise, the left occipito-temporal region is associated with semantic processing of both auditory and visual words (Booth et al., 2002; Noppeney and Price, 2002; Vigneau et al., 2006). Although we used only visual stimuli, each of our language tasks involved semantic and/or phonological processing. Moreover, picture naming and reading also activate the auditory cortices when the participants hear the sound of their spoken response (Price et al., 1996; Bohland and Guenther, 2006; Fu et al., 2006). Thus, our paradigm was sensitive to lateralization of a range of language processes.

The strong relationship between structural asymmetry and language lateralization in the vicinity of the planum temporale is consistent with the early studies that noticed that the planum temporale is larger in the left than the right hemisphere in the majority of human brains, and with the subsequent proposal that this structural asymmetry was related to functional lateralization (Geschwind and Levitsky, 1968; Tzourio et al., 1998). In addition, the strong relationship between structural asymmetry and language lateralization in the pars opercularis is consistent with a study that reported that the structure of this region correlated with the degree of global language lateralization as assessed by the Wada test in patients (Dorsaint-Pierre et al., 2006). Our results are concordant with previous studies but offer several new advances. First, a relationship between brain structure and language lateralization is demonstrated in a large sample of healthy subjects. Therefore, unlike previous studies, our results could not be explained by lesion-induced plasticity effects. Second, we estimated language lateralization at the voxel level, which allows us to accommodate regional differences in the degree of lateralization across the brain (Josse and Tzourio-Mazoyer, 2004). This voxel-based approach demonstrates that language lateralization is predicted by distributed asymmetry in structural (i.e. gray matter) images and is not restricted to a single causal region such as the planum temporale or frontal operculum.

Implications of the relationship between structural and functional lateralization

Language lateralization is a well-established functional feature of the human brain, evidenced by studying the site of brain damage in patients with language difficulties (Broca, 1865; Hecaen et al., 1981; Finger and Roe, 1999), the effect of suppressing the function of one hemisphere at a time (Wada and Rasmussen, 1960; Kimura, 1961), the effect of presenting language stimuli independently to the left and right ears/visual fields (Cohen et al., 2002; Tervaniemi and Hugdahl, 2003) and by functional neuroimaging studies comparing left and right hemisphere activations (Mazoyer et al., 1993; Binder et al., 1996; Knecht et al., 2002). Despite this wealth of evidence, it is still unclear why there is language lateralization, why the left hemisphere, rather than the right, is dominant for language in most subjects, and why this feature varies between subjects, with a varying degree of left-lateralization and even right-hemispheric dominance in a minority of subjects (Toga and Thompson, 2003; Josse and Tzourio-Mazoyer, 2004). Structural asymmetries parallel and precede this functional asymmetry, leading to the hypothesis that the former can predict and explain the latter. However, to date, few studies have investigated this hypothesis, and none have answered a more specific, underlying question: does structural asymmetry of a given region predict functional asymmetry in the same region? Our results bring an affirmative answer to this question.

Post hoc analyses of language activation in the left or right hemisphere revealed a further novel observation. Specifically, we found that when language activation was high in a left or right hemisphere voxel, gray matter was also high in that voxel (see Figure 4). Our lateralization results can therefore be explained in terms of a general relationship between gray matter probability and activation. To explore this relationship further, we correlated gray matter probability and functional activation for visual processing and articulation. We found that high gray matter predicts high activation on a voxel by voxel basis in visual and articulatory regions respectively (see Figure 5). This relationship between gray matter probability and functional activation has not been demonstrated before but it has widespread implications for interpreting group differences in fMRI activation studies as well as correlations between behaviour and structural or functional neuroimaging measures. Specifically, there may be some common driving factors that explain a host of diverse results. Our findings therefore motivate future studies to investigate whether the effect of (for instance) age, handedness or gender in functional activation maps correlates with the effect of age, handedness or gender in structural images (Josse and Tzourio-Mazoyer, 2004; Asllani et al., in press). More critically, further studies are required to establish the causal relationship between gray matter and functional activation. For example, does increased gray matter cause increased functional activation (Chi et al., 1977), does increased functional activation increase gray matter (Draganski et al., 2004), or do both gray matter and functional activation depend on a single causal factor (e.g. vasculature)?

In summary, we have shown that the degree to which language is lateralized can be predicted by the degree to which gray matter is lateralized in the same voxels. Post hoc analyses suggest that this observation is a consequence of a direct relationship between activation and the probability that the underlying structure is gray matter. Our findings motivate further investigation of the causal relationship between brain structure, neural activity, behaviour and demographic variables such as age, gender and handedness. From a clinical perspective, future studies are now required to determine whether the structure-function relationships established here in a large cohort can be used to accurately predict language lateralization at the single-patient level.

Supplementary Material

Supplementary Tables


Acknowledgments

This work was funded by the Wellcome Trust and the James S. McDonnell Foundation (conducted as part of the NRG initiative). We would also like to thank our three radiographers (Amanda Brennan, Janice Glensman and David Bradbury) and the following people for recruiting subjects and helping with fMRI and behavioral data collection: Clare Shakeshaft, Laura Stewart, Tom Schofield, Alice Grogan, Hwee Ling Lee, Anneke Haddad, Fulden Babur, Ramona Howson, Caroline Ellis and Odette Megnin.


References

  • Andersson JL, Hutton C, Ashburner J, Turner R, Friston K. Modeling geometric deformations in EPI time series. Neuroimage. 2001;13:903–919.
  • Ashburner J, Friston KJ. Unified segmentation. Neuroimage. 2005;26:839–851.
  • Asllani I, Habeck C, Borogovac A, Brown CM, Brickman AM, Stern Y. Separating function from structure in perfusion imaging of the aging brain. Hum Brain Mapp. In press.
  • Binder JR, Swanson SJ, Hammeke TA, Morris GL, Mueller WM, Fischer M, Benbadis S, Frost JA, Rao SM, Haughton VM. Determination of language dominance using functional MRI: a comparison with the Wada test. Neurology. 1996;46:978–984.
  • Bohland JW, Guenther FH. An fMRI investigation of syllable sequence production. Neuroimage. 2006;32:821–841.
  • Booth JR, Burman DD, Meyer JR, Gitelman DR, Parrish TB, Mesulam MM. Modality independence of word comprehension. Hum Brain Mapp. 2002;16:251–261.
  • Broca P. Sur le siège de la faculté du langage articulé. Bulletins de la Société d'Anthropologie. 1865;6:377–399.
  • Chi JG, Dooling EC, Gilles FH. Left-right asymmetries of the temporal speech areas of the human fetus. Arch Neurol. 1977;34:346–348.
  • Cone NE, Burman DD, Bitan T, Bolger DJ, Booth JR. Developmental changes in brain regions involved in phonological and orthographic processing during spoken language processing. Neuroimage. 2008;41:623–635.
  • Devlin JT, Matthews PM, Rushworth MF. Semantic processing in the left inferior prefrontal cortex: a combined functional magnetic resonance imaging and transcranial magnetic stimulation study. J Cogn Neurosci. 2003;15:71–84.
  • Dorsaint-Pierre R, Penhune VB, Watkins KE, Neelin P, Lerch JP, Bouffard M, Zatorre RJ. Asymmetries of the planum temporale and Heschl's gyrus: relationship to language lateralization. Brain. 2006;129:1164–1176.
  • Draganski B, Gaser C, Busch V, Schuierer G, Bogdahn U, May A. Neuroplasticity: changes in grey matter induced by training. Nature. 2004;427:311–312.
  • Eckert MA, Leonard CM, Possing ET, Binder JR. Uncoupled leftward asymmetries for planum morphology and functional language processing. Brain Lang. 2006;98:102–111.
  • Finger S, Roe D. Does Gustave Dax deserve to be forgotten? The temporal lobe theory and other contributions of an overlooked figure in the history of language and cerebral dominance. Brain Lang. 1999;69:16–30.
  • Foundas AL, Leonard CM, Gilmore R, Fennell E, Heilman KM. Planum temporale asymmetry and language dominance. Neuropsychologia. 1994;32:1225–1231.
  • Foundas AL, Leonard CM, Gilmore RL, Fennell EB, Heilman KM. Pars triangularis asymmetry and language dominance. Proc Natl Acad Sci U S A. 1996;93:719–722.
  • Fu CH, Vythelingum GN, Brammer MJ, Williams SC, Amaro E, Jr., Andrew CM, Yaguez L, van Haren NE, Matsumoto K, McGuire PK. An fMRI study of verbal self-monitoring: neural correlates of auditory verbal feedback. Cereb Cortex. 2006;16:969–977.
  • Galaburda AM, Corsiglia J, Rosen GD, Sherman GF. Planum temporale asymmetry, reappraisal since Geschwind and Levitsky. Neuropsychologia. 1987;25:853–868.
  • Geschwind N, Levitsky W. Human brain: left-right asymmetries in temporal speech region. Science. 1968;161:186–187.
  • Gitelman DR, Nobre AC, Sonty S, Parrish TB, Mesulam MM. Language network specializations: an analysis with parallel task designs and functional magnetic resonance imaging. Neuroimage. 2005;26:975–985.
  • Gold BT, Buckner RL. Common prefrontal regions coactivate with dissociable posterior regions during controlled semantic and phonological tasks. Neuron. 2002;35:803–812.
  • Hécaen H, De Agostini M, Monzon-Montes A. Cerebral organization in left-handers. Brain Lang. 1981;12:261–284.
  • Jancke L, Wustenberg T, Scheich H, Heinze HJ. Phonetic perception and the temporal cortex. Neuroimage. 2002;15:733–746.
  • Josse G, Tzourio-Mazoyer N. Hemispheric specialization for language. Brain Res Brain Res Rev. 2004;44:1–12.
  • Josse G, Mazoyer B, Crivello F, Tzourio-Mazoyer N. Left planum temporale: an anatomical marker of left hemispheric specialization for language comprehension. Brain Res Cogn Brain Res. 2003;18:1–14.
  • Josse G, Seghier ML, Kherif F, Price CJ. Explaining function with anatomy: language lateralization and corpus callosum size. J Neurosci. 2008;28:14132–14139.
  • Karbe H, Wurker M, Herholz K, Ghaemi M, Pietrzyk U, Kessler J, Heiss WD. Planum temporale and Brodmann's area 22. Magnetic resonance imaging and high-resolution positron emission tomography demonstrate functional left-right asymmetry. Arch Neurol. 1995;52:869–874.
  • Kimura D. Cerebral dominance and the perception of verbal stimuli. Canad J Psychol. 1961;15:166–171.
  • Knecht S, Floel A, Drager B, Breitenstein C, Sommer J, Henningsen H, Ringelstein EB, Pascual-Leone A. Degree of language lateralization determines susceptibility to unilateral brain lesions. Nat Neurosci. 2002;5:695–699.
  • Luders E, Rex DE, Narr KL, Woods RP, Jancke L, Thompson PM, Mazziotta JC, Toga AW. Relationships between sulcal asymmetries and corpus callosum size: gender and handedness effects. Cereb Cortex. 2003;13:1084–1093.
  • Mazoyer B, Dehaene S, Tzourio N, Murayama N, Cohen L, Levrier O, Salamon G, Syrota A, Mehler J. The cortical representation of speech. J Cogn Neurosci. 1993;5:467–479.
  • Mechelli A, Josephs O, Lambon Ralph MA, McClelland JL, Price CJ. Dissociating stimulus-driven semantic and phonological effect during reading and naming. Hum Brain Mapp. 2007;28:205–217.
  • Nagata SI, Uchimura K, Hirakawa W, Kuratsu JI. Method for quantitatively evaluating the lateralization of linguistic function using functional MR imaging. AJNR Am J Neuroradiol. 2001;22:985–991.
  • Noppeney U, Price CJ. Retrieval of visual, auditory, and abstract semantics. Neuroimage. 2002;15:917–926.
  • Oldfield RC. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia. 1971;9:97–113.
  • Price CJ, Wise RJS, Warburton EA, Moore CJ, Howard D, Patterson K. Hearing and saying. The functional neuro-anatomy of auditory word processing. Brain. 1996;119:919–931.
  • Tervaniemi M, Hugdahl K. Lateralization of auditory-cortex functions. Brain Res Brain Res Rev. 2003;43:231–246.
  • Toga AW, Thompson PM. Mapping brain asymmetry. Nat Rev Neurosci. 2003;4:37–48.
  • Tzourio N, Nkanga-Ngila B, Mazoyer B. Left planum temporale surface correlates with functional dominance during story listening. Neuroreport. 1998;9:829–833.
  • Veltman DJ, Mechelli A, Friston KJ, Price CJ. The importance of distributed sampling in blocked functional magnetic resonance imaging designs. Neuroimage. 2002;17:1203–1206.
  • Vigneau M, Beaucousin V, Herve PY, Duffau H, Crivello F, Houde O, Mazoyer B, Tzourio-Mazoyer N. Meta-analyzing left hemisphere language areas: phonology, semantics, and sentence processing. Neuroimage. 2006.
  • Wada J, Rasmussen T. Intracarotid injection of sodium amytal for the lateralization of cerebral speech dominance. J Neurosurg. 1960;106:1117–1133.
  • Witelson SF, Kigar DL. Sylvian fissure morphology and asymmetry in men and women: bilateral differences in relation to handedness in men. J Comp Neurol. 1992;323:326–340.