Semantic memory refers to knowledge about people, objects, actions, relations, self, and culture acquired through experience. The neural systems that store and retrieve this information have been studied for many years, but a consensus regarding their identity has not been reached. Using strict inclusion criteria, we analyzed 120 functional neuroimaging studies focusing on semantic processing. Reliable areas of activation in these studies were identified using the activation likelihood estimate (ALE) technique. These activations formed a distinct, left-lateralized network comprising 7 regions: posterior inferior parietal lobe, middle temporal gyrus, fusiform and parahippocampal gyri, dorsomedial prefrontal cortex, inferior frontal gyrus, ventromedial prefrontal cortex, and posterior cingulate gyrus. Secondary analyses showed specific subregions of this network associated with knowledge of actions, manipulable artifacts, abstract concepts, and concrete concepts. The cortical regions involved in semantic processing can be grouped into 3 broad categories: posterior multimodal and heteromodal association cortex, heteromodal prefrontal cortex, and medial limbic regions. The expansion of these regions in the human relative to the nonhuman primate brain may explain uniquely human capacities to use language productively, plan, solve problems, and create cultural and technological artifacts, all of which depend on the fluid and efficient retrieval and manipulation of semantic knowledge.
The human brain has an enormous capacity to acquire knowledge from experience. The characteristic shapes, colors, textures, movements, sounds, smells, and actions associated with objects in the environment, for example, must all be learned from experience. Much of this knowledge is represented symbolically in language and underlies our understanding of word meanings. These relationships between words and the stores of knowledge they signify are known collectively as the “semantics” of a language (Bréal 1897). In this article, we use the more general term “semantic processing” to refer to the cognitive act of accessing stored knowledge about the world.
For most languages, semantic properties of words are readily distinguished from their structural properties. For example, words can have both spoken (phonological) and written (orthographic) forms, but these surface forms are typically related to word meanings only through the arbitrary conventions of a particular vocabulary. There is nothing, for example, about the letter sequences DOG or CHIEN that inherently links these sequences to a particular concept. Conversely, it is trivial to construct surface forms (e.g., CHOG) that possess all of the phonological and orthographic properties of words in a particular language, but which have no meaning in that language. In this article, we hold to a simple, operational distinction between 1) the processes of analyzing surface form (phonology, orthography), and 2) semantic processes, which concern access to knowledge not directly represented in the surface form.
Semantic processing is a defining feature of human behavior, central not only to language, but also to our capacity to access acquired knowledge in reasoning, planning, and problem solving. Impairments of semantic processing figure in a variety of brain disorders such as Alzheimer disease, semantic dementia, fluent aphasia, schizophrenia, and autism. The neural basis of semantic processing has been studied extensively by analyzing patterns of brain damage in such patients (e.g., Alexander et al. 1989; Hart and Gordon 1990; Chertkow et al. 1997; Tranel et al. 1997; Gainotti 2000; Mummery et al. 2000; Hillis et al. 2001; Damasio et al. 2004; Dronkers et al. 2004). On the whole, this evidence suggests a broadly distributed neural representation, with particular reliance on inferotemporal and posterior inferior parietal regions. Semantic processing has also been addressed in a large number of functional neuroimaging studies conducted on healthy volunteers, using positron emission tomography (PET) and functional magnetic resonance imaging (fMRI). The aim of the present study is to conduct a meta-analysis of this functional neuroimaging research, which now includes over 500 published studies. Several excellent reviews on neuroimaging studies of semantic processing have been presented previously, which focused mainly on the evidence for organization of object knowledge by taxonomic categories (Joseph 2001; Martin and Chao 2001; Bookheimer 2002; Thompson-Schill 2003; Damasio et al. 2004; Gerlach 2007). The present analysis differs from these prior efforts in including a larger number and broader range of studies, in adopting specific inclusion and exclusion criteria for identifying semantic processing experiments, and in the construction of probabilistic maps for summarizing the data.
Considered for inclusion were any PET or fMRI studies in which words (either spoken or written) were used as stimulus materials. Thus, the goal of the current study is to identify brain systems that access meaning from words. This approach contrasts with several previous reviews that included and even emphasized studies in which object pictures were used to elicit knowledge retrieval (Joseph 2001; Martin and Chao 2001; Damasio et al. 2004; Gerlach 2007). Our focus on linguistic materials reflects our present concern, which is not how objects are recognized, but rather how conceptual knowledge is organized and accessed. Although the knowledge stores underlying word comprehension may be activated similarly during word and object recognition tasks, there is also evidence that these 2 semantic access routes are not identical. For example, object recognition engages a complex, hierarchical perceptual stream that encodes progressively more abstract representations of object features and their spatial relationships (Marr 1982; Tanaka et al. 1991; Logothetis and Sheinberg 1996; Humphreys and Forde 2001). Certainly not all of these perceptual representations are encoded in language or even available to awareness. At present, it is not clear that comprehension of a word necessarily entails activation of a detailed perceptual representation of the object to which it refers, at least not to the same degree as that evoked by the object itself. In fact, many functional neuroimaging studies suggest different patterns of activation during matched word and picture recognition tasks (Gorno-Tempini et al. 1998; Moore and Price 1999b; Chee et al. 2000; Hasson et al. 2002; Bright et al. 2004; Gates and Yoon 2005; Reinholz and Pollmann 2005). 
The existence of patients with profound visual object recognition disorders but relatively intact word comprehension also argues against a complete overlap between the knowledge systems underlying word and object recognition (Warrington 1985; Lissauer 1889; Farah 1990; Davidoff and De Bleser 1994). In the interest of maintaining a clear focus on the processing of concepts rather than percepts, we thus elected to include in this analysis only those experiments that used words as stimuli.
Specific exclusion criteria are a critical feature of the present study. The numerous published neuroimaging studies concerning semantic processing address a variety of specific topics and employ a wide range of tasks and task comparisons (here referred to as “contrasts”). The authors of these studies often began from very different theoretical perspectives and varied in their interpretation of task demands. One approach to selecting material for a meta-analysis would have been to include any study considered by the original authors to have addressed semantic processing, as indicated, for example, by the study title or list of keywords. We found during initial attempts to review these reports, however, that markedly different, largely nonoverlapping activation patterns were frequently observed across studies. Furthermore, this discordance was related mainly to variability in task selection and the type of task contrast used. Three problematic features of this literature were of particular interest and were found with some regularity. First, many studies employed contrasting tasks that differed on phonological or orthographic processing demands in addition to semantic processing demands (see examples in Table 1). It is a general principle of functional neuroimaging studies, and one that warrants strong emphasis, that the “activations” measured using these methods are in fact representations of the relative differences in neural activity between 2 or more brain states. Thus, the pattern of activated brain regions observed in a study putatively targeting semantic processes depends not only on the cognitive processes elicited by the semantic task, but also on the processes elicited, or not elicited, by the comparison task. 
If the comparison task does not make demands on phonological and orthographic processes that approximate those made by the semantic task (as is the case, e.g., with any control task involving unpronounceable or unnameable stimuli), then the resulting activation is as likely to reflect phonological or orthographic processes as semantic processes. Such contrasts were excluded from the present analysis.
A second, more prevalent problem concerns contrasts in which the conditions differed in task difficulty. Functional neuroimaging measurements are very sensitive to differences in response time, accuracy, and level of effort between tasks (see, e.g., Braver et al. 1997; Jonides et al. 1997; Honey et al. 2000; Adler et al. 2001; Braver et al. 2001; Ullsperger and von Cramon 2001; Gould et al. 2003; Binder et al. 2004; Binder, Medler, et al. 2005; Mitchell 2005; Sabsevitz et al. 2005; Desai et al. 2006; Lehmann et al. 2006; Tregallas et al. 2006). Such differences pose a problem if there are cognitive systems supporting general task performance functions that are modulated by task difficulty. Likely examples of such domain-general systems include a sustained attention network for maintaining arousal, a selective attention system for focusing neural resources on a particular modality or sensory object in the environment (e.g., a visual display), a working memory system for keeping task instructions and task-relevant sensory representations accessible, a response selection mechanism for mapping the contents of working memory to a response, a response inhibition system for preventing premature or prepotent responses from being made in error, and an error-monitoring system for adjusting response criteria and response time deadlines to minimize such errors. These systems, located mainly in frontal, anterior cingulate, and dorsal parietal cortices (Corbetta et al. 1998; Carter et al. 1999; Duncan and Owen 2000; Grosbras et al. 2005; Owen et al. 2005), are necessary for completing any task. If this is the case, and if the level of activation in these systems depends on general task demands, then it follows that activation can never be attributed with certainty to semantic processing when this activation has resulted from a contrast in which general task demands differ. Such contrasts (see examples in Table 2) were therefore excluded from the present analysis.
A final issue concerns the interpretation of states in which no overt task is required. Many neuroimaging studies used states described as “passive” or “resting” in which subjects are given either no task, a minimally demanding task such as fixating a point in the visual field, or a nominal task for which compliance is uncertain and unknowable (e.g., “clear your mind,” “focus on the scanner sounds”). Although such conditions can be useful as a low-level baseline, particularly for sensory stimulation studies, their use in semantic studies is problematic. Most people report experiencing vivid and memorable thoughts and mental images during such conscious, attentive states (James 1890; Antrobus et al. 1966; Pope and Singer 1976; Singer 1993; Giambra 1995; Binder et al. 1999; McKiernan et al. 2006). We argued previously that the processes supporting such “task-unrelated thoughts” are essentially semantic, because they depend on activation and manipulation of acquired knowledge about the world (Binder et al. 1999; McKiernan et al. 2006). If such states involve semantic processing, their use as a baseline runs the risk of subtracting away any semantic processing elicited by an overt semantic task. Such a contrast (an active semantic task compared with no task) also violates the stipulations, discussed above, that task conditions be matched on low-level processing demands and on overall difficulty, and would thus be expected to produce “false positive” activation of surface form processing and general executive systems. Such contrasts were therefore excluded from the present analysis. Also excluded were contrasts in which 2 conditions that both involve passive stimulation were compared, such as passively listening to words versus pseudowords. 
In these cases, the processes underlying generation of task-unrelated thoughts, which we consider to be largely semantic in nature, would predominate in both of the conditions being contrasted, resulting in little or no relative activation of semantic systems.
In summary, the present meta-analysis represents a critical review in which selection criteria are based on specific theoretical distinctions between semantic and surface form processing, and between semantic processes and more general processes required for task execution. Our aim is thus to identify brain regions that contribute specifically to the semantic component of word recognition, that is, the activation of stored conceptual knowledge, apart from the accompanying analysis of surface form or generation of an overt task response.
Procedures for identifying candidate studies were designed to be as inclusive as possible. Candidate studies were identified through searches of the PubMed, Medline, and PsycINFO online databases for the years 1980 through 2007. This search was conducted using the following Boolean operation applied to title, abstract, and keyword fields: <“brain mapping” OR “functional magnetic resonance imaging” OR “fMRI” OR “positron emission tomography” OR “PET” OR “neuroimaging”> AND <“semantics” OR “semantic memory” OR “category” OR “conceptual knowledge”>. This step yielded 2832 unique items. Abstracts from these studies were then initially screened to identify those that used either fMRI or PET and included healthy human participants, yielding 790 articles. Abstracts from these articles were then evaluated in more depth to identify experiments that used word stimuli and included either a general or specific semantic contrast. If this information could not be determined from the abstract, the article was also included. This second screening step yielded 431 articles. Online tables of contents and abstracts for selected journals focusing on cognition and neuroimaging, including “Brain and Language,” “Human Brain Mapping,” “Journal of Cognitive Neuroscience,” and “NeuroImage,” and previously published reviews of semantic neuroimaging studies (Cabeza and Nyberg 2000; Joseph 2001; Martin and Chao 2001; Bookheimer 2002; Devlin, Russell, et al. 2002; Price and Friston 2002; Martin and Caramazza 2003; Thompson-Schill 2003; Damasio et al. 2004; Vigneau et al. 2006; Gerlach 2007) were then manually searched back to 1995 for relevant articles, yielding an additional 72 items. Finally, any additional relevant articles known to the authors, cited in the initial set of articles, or encountered during the review process were added to the list, resulting in a total of 520 articles that underwent full review.
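The Boolean operation above can be composed programmatically. A minimal sketch (the database client itself is omitted; only the query string is built):

```python
# Build the two-group Boolean query used in the literature search:
# terms within a group are OR-ed; the two groups are AND-ed together.
imaging_terms = ["brain mapping", "functional magnetic resonance imaging",
                 "fMRI", "positron emission tomography", "PET", "neuroimaging"]
semantic_terms = ["semantics", "semantic memory", "category",
                  "conceptual knowledge"]

def or_group(terms):
    """Join quoted terms with OR and wrap the group in parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = or_group(imaging_terms) + " AND " + or_group(semantic_terms)
print(query)
```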
The full review process included a complete reading of each article by 1 of the 4 authors, followed by application of the following inclusion criteria:
In an effort to be as inclusive as possible, criteria 6 and 7 were not applied in a rigid manner. For example, studies that did not include adequate documentation of task performance were included if the reviewer judged the tasks to be approximately equal in difficulty. Studies were also included if the control task was more difficult than the semantic task, following the logic that relative activation during the easier semantic task could not in such cases be due to greater demands on general executive processes. Criterion 7 was applied to avoid sampling bias for particular brain regions, but studies were included if nearly all of the cerebrum was imaged. Cases in which adherence to criteria was ambiguous were reviewed by a second author to reach a consensus view.
All studies included after full review by one of the authors were then reviewed by a second author to confirm eligibility. Rare disagreements were resolved by further discussion among the authors.
Two types of contrasts are relevant to the goal of identifying activation due to semantic processing. The first, which we refer to as a “general semantics” contrast, follows from the operational distinction between word structure and meaning discussed above. A general contrast is one between a condition that elicits high levels of access to word meaning and a condition that elicits lower levels of access to word meaning. The contrast must include controls for processing word structure (phonology and orthography) as well as for general executive and response processes. The 3 most common general contrasts were:
The other type of semantic contrast, which we designate “specific,” entails a comparison between hypothetically distinct types of conceptual knowledge. The aim of such studies is not to delineate the entire semantic processing system, but rather to identify putative functional subdivisions within the semantic system. Many such studies, for example, involve comparisons between concrete objects from different categories (e.g., animals vs. tools). Such studies are relevant to our aims because they contribute to the identification of brain regions involved specifically in semantic processing. Because we are not concerned here with particular functional subdivisions within the semantic system, activation data from both sides of the contrast (e.g., activation for animals > tools and for tools > animals) were included. As with the general contrasts, several variations can be distinguished based on whether the contrast pertains to stimulus or task manipulations. For example, in experiments involving a stimulus manipulation, a constant task is used (usually a semantic decision) with 2 (or more) contrasting categories of stimuli. In experiments involving a task manipulation, the attentional focus of the participant is switched to different semantic attributes (e.g., color, action, and size) of the same concepts using changes in task set.
We report here results obtained from separate analyses of the general and specific contrasts as well as a global analysis combining all studies. The specific foci were classified according to type of semantic knowledge represented by each contrast, and separate analyses were conducted for each specific knowledge type for which sufficient data were available.
For each study reviewed, all reported contrasts that met inclusion criteria were included in the analysis. For each such contrast, the reviewer recorded the number of participants contributing to the activation map, the imaging modality used, the type of contrast, a brief description of the stimuli and tasks used in the contrast, the standard space to which the data were normalized, all reported coordinate locations for activation peaks, the Z values associated with each peak (if available), and the published table from which the coordinates were copied.
All coordinates were converted to a single common stereotaxic space. The studies were evenly divided between those that reported coordinates in the standard space of Talairach and Tournoux (1988) and those that used a variation of MNI space (Evans et al. 1993). We converted all MNI coordinates to the Talairach and Tournoux (1988) system using the “icbm2tal” transform (Lancaster et al. 2007). This transform reduces the bias associated with reference frame and scale in MNI–Talairach conversion.
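A conversion of this kind is an affine transform applied in homogeneous coordinates. The sketch below uses a placeholder matrix for illustration only; the actual icbm2tal coefficients published by Lancaster et al. (2007) differ and come in software-specific variants:

```python
import numpy as np

# Placeholder 4x4 affine -- NOT the true icbm2tal matrix, whose
# coefficients (Lancaster et al. 2007) depend on the normalization
# software (e.g., SPM vs. FSL variants).
ICBM2TAL = np.array([
    [0.93, 0.00, 0.00, -1.0],
    [0.00, 0.92, 0.00, -1.0],
    [0.00, 0.00, 0.90,  3.0],
    [0.00, 0.00, 0.00,  1.0],
])

def mni_to_tal(xyz, affine=ICBM2TAL):
    """Apply a 4x4 affine to an (x, y, z) coordinate in homogeneous form."""
    x, y, z = xyz
    out = affine @ np.array([x, y, z, 1.0])
    return tuple(out[:3])
```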
Probabilistic maps of the resulting sets of coordinates were constructed using the “Activation Likelihood Estimate” (ALE) method (Turkeltaub et al. 2002), implemented in the GingerALE software package (Laird et al. 2005) (available at www.brainmap.org), using an 8-mm FWHM 3D Gaussian point spread function and a spatial grid composed of 2 × 2 × 2 mm voxels. This method treats each reported focus as the center of a Gaussian probability distribution. The 3D Gaussian distributions corresponding to all foci included in a given analysis are summed to create a whole-brain map that represents the overlap of activation peaks at each voxel, referred to as the ALE statistic. Subsequent analysis is restricted to a brain volume mask (Kochunov et al. 2002) distributed with the GingerALE software. To determine the null distribution of the ALE statistic for each analysis, a Monte Carlo simulation with 10000 iterations was performed, in which each iteration consisted of a set of foci equal in number to the observed data, placed at random locations within the analysis volume and convolved with the same point spread function (Laird et al. 2005). Based on these null distributions, the ALE statistic maps for each analysis are converted to voxelwise probability maps.
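A one-dimensional toy version of this computation can make the procedure concrete. The sketch assumes the standard ALE combination rule (a probabilistic union, 1 − ∏(1 − p), over per-focus Gaussian probabilities) together with the 8-mm FWHM kernel and 2-mm grid described above; it is an illustration, not the GingerALE implementation:

```python
import numpy as np

def ale_map(grid, foci, fwhm=8.0):
    """Toy 1-D ALE: each focus contributes a Gaussian probability mass;
    voxelwise values are combined as a probabilistic union."""
    ale = np.zeros_like(grid, dtype=float)
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    for f in foci:
        p = np.exp(-((grid - f) ** 2) / (2.0 * sigma ** 2))
        p /= p.sum()                      # normalize to a probability mass
        ale = 1.0 - (1.0 - ale) * (1.0 - p)
    return ale

def null_ale_max(grid, n_foci, n_iter=1000, fwhm=8.0, rng=None):
    """Monte Carlo null distribution: random foci, same kernel;
    record the maximum ALE value from each iteration."""
    rng = np.random.default_rng(rng)
    maxima = np.empty(n_iter)
    for i in range(n_iter):
        foci = rng.uniform(grid.min(), grid.max(), size=n_foci)
        maxima[i] = ale_map(grid, foci, fwhm).max()
    return maxima
```

Comparing the observed ALE values against the simulated maxima yields the voxelwise probabilities referred to in the text.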
The probability of chance formation of suprathreshold clusters was then determined by Monte Carlo simulation using in-house software, with 1000 iterations. Each iteration contained randomly located foci equal in number to the observed data and convolved with the same point spread function. The ALE map was then computed for each iteration, and the number and size of voxel clusters were recorded after thresholding each simulated data set at various voxelwise ALE thresholds. ALE maps from each of the observed data sets were then thresholded at an ALE value that yielded a corrected mapwise α < 0.05 after removing clusters smaller than 1000 μL. For visualization of probability values at each voxel, these corrected ALE maps were applied as masks on the probability maps generated by the GingerALE software. These thresholded probability maps are shown in Figs. 3–6.
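The cluster-extent logic can likewise be illustrated in one dimension: threshold the map, measure contiguous suprathreshold runs, and discard runs below the minimum extent (at 2 × 2 × 2 mm voxels, the 1000-μL minimum corresponds to 125 voxels). The helper names below are hypothetical:

```python
def clusters_above(values, thresh):
    """Sizes of contiguous runs of suprathreshold voxels (1-D analogue
    of connected-component cluster labeling)."""
    sizes, run = [], 0
    for v in values:
        if v > thresh:
            run += 1
        elif run:
            sizes.append(run)
            run = 0
    if run:
        sizes.append(run)
    return sizes

def surviving_clusters(values, thresh, min_size):
    """Keep only clusters at least min_size voxels in extent."""
    return [s for s in clusters_above(values, thresh) if s >= min_size]
```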
The initial search and screening procedures identified 520 articles, which were subsequently reviewed in detail. A total of 120 articles met inclusion criteria and provided 187 semantic contrasts. Figure 1 shows a breakdown of these studies by year published. A list of the included studies is provided in Appendix 1. Six of the included studies (Paulesu et al. 2000; Giraud and Price 2001; Tyler et al. 2001; Devlin, Russell, et al. 2002; Pilgrim et al. 2002; Liu et al. 2006) met all criteria, but no activation was observed for the semantic contrasts of interest. Among the 400 excluded studies, 10 used techniques other than fMRI or PET, or studied special subject populations. Six were reviews or reanalyses of previously published data. About 20% (82) of the excluded studies did not include any semantic contrasts, 13% (52) did not use word stimuli, 9% (37) used a resting or passive condition as the only control, 31% (127) used active control tasks that were less difficult than the semantic task, and 16% (66) used active control tasks that did not control for word-form (orthographic or phonological) processing. In 17% (69) of the excluded studies, no group activation data were reported for the semantic contrast of interest, or foci were reported only for a priori regions of interest.
Of the 187 contrasts that met all inclusion criteria, 87 were of the general type and 100 of the specific type; 126 used fMRI, and 61 used PET. These studies collectively involved 1642 participants and yielded 1145 activation foci (691 general and 454 specific). (The number of unique individuals involved in these studies could be smaller than 1642. We did not count participants more than once if they were involved in more than one contrast from the same publication. However, the same individual could have been included in more than one publication.)
Figure 2 shows activation foci from all of the included studies projected onto an inflated surface model of the cerebral cortex (see Figs. 1 and 2 in the Supplemental Material online for similar images color coded by type of contrast). Of the 1145 published foci, 10 were located in the cerebellum and are therefore not shown. About 68% (771) of the foci were in the left cerebral hemisphere (x < 0) and 32% (362) in the right hemisphere (x > 0), indicating moderate left-hemisphere lateralization. Foci were located throughout the brain, but with strong clustering in some regions, particularly the left inferior parietal lobe. Areas with notably few foci included the precentral and postcentral gyri bilaterally, primary and secondary visual cortices, superior parietal lobules, frontal eye fields, and dorsal anterior cingulate gyri. There was a clear line of demarcation along the lateral bank of the left intraparietal sulcus, separating a large and dense cluster of foci in the inferior parietal lobe from uninvolved cortex in the intraparietal sulcus and superior parietal lobe.
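The hemispheric tally reduces to the sign of each focus's x coordinate (toy coordinates below, not the actual data set; midline foci with x = 0 fall into neither count):

```python
# Tally foci by hemisphere from the sign of the Talairach x coordinate.
# Toy coordinates for illustration only.
foci = [(-44, -60, 24), (-50, -40, -8), (40, -62, 30), (-6, -54, 28), (0, 52, -8)]

left  = sum(1 for x, y, z in foci if x < 0)
right = sum(1 for x, y, z in foci if x > 0)
mid   = sum(1 for x, y, z in foci if x == 0)
lateralized = left + right
print(f"left {left} ({100 * left / lateralized:.0f}%), right {right}")
```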
The thresholded ALE probability map based on all 1145 foci is shown in Figure 3. Activations were lateralized to the left hemisphere and widely distributed in frontal, temporal, parietal, and paralimbic areas. Seven principal regions showed a high likelihood of activation across studies: 1) the angular gyrus (AG) and adjacent supramarginal gyrus (SMG); 2) the lateral temporal lobe, including the entire length of the middle temporal gyrus (MTG) and posterior portions of the inferior temporal gyrus (ITG); 3) a ventromedial region of the temporal lobe centered on the mid-fusiform gyrus and adjacent parahippocampus; 4) dorsomedial prefrontal cortex (DMPFC) in the superior frontal gyrus (SFG) and adjacent middle frontal gyrus (MFG); 5) the inferior frontal gyrus (IFG), especially the pars orbitalis; 6) ventromedial and orbital prefrontal cortex; and 7) the posterior cingulate gyrus and adjacent ventral precuneus. Weaker activations occurred at homologous locations in the right hemisphere, principally the AG, posterior MTG, and posterior cingulate gyrus. See Appendix 2 for details regarding activation of each of these regions in each of the included studies.
Figure 4 shows the thresholded ALE map for the 691 general semantic foci. This map is very similar to the one derived from all foci, though with generally smaller clusters. Activation in the IFG is more clearly localized to the pars orbitalis. Activation in the left fusiform gyrus extends somewhat farther anteriorly, and there is more extensive involvement of the left ventromedial prefrontal region.
The specific contrasts were further categorized as to the putative type of semantic knowledge examined by each. Many of these types (e.g., auditory, gustatory, olfactory, tactile, visual motion, characteristic object location, emotion, causation, and self knowledge) were examined in only a few studies and thus had too few activation foci to examine separately by meta-analysis. There were 10 studies that examined contrasts between living things (usually animals) and manipulable artifacts (usually tools) (Mummery et al. 1996, 1998; Cappa et al. 1998; Perani et al. 1999; Grossman et al. 2002a; Laine et al. 2002; Kounios et al. 2003; Thioux et al. 2005; Wheatley et al. 2005; Goldberg et al. 2006), providing 41 “living” and 29 “artifact” foci. “Living” foci showed no significant overlap in the ALE analysis. As shown in Figure 5, “artifact” foci showed significant overlap at the left lateral temporal–occipital junction, where posterior MTG and ITG meet anterior occipital cortex (roughly BA 37), and in the ventral left SMG (BA 40) near the junction with the superior temporal gyrus (STG).
There were 10 studies that examined action knowledge relative to other types (Martin et al. 1995; Noppeney and Price 2003; Tyler, Stamatakis, et al. 2003; Davis et al. 2004; Boronat et al. 2005; Noppeney et al. 2005; Baumgaertner et al. 2007; Eschen et al. 2007; Ruschmeyer et al. 2007; Tomasino et al. 2007), providing 40 “action” foci. Significant overlap for these foci occurred in the ventral left SMG and posterior left MTG (BA 37). As shown in Figure 5, the SMG focus overlaps the SMG region observed in the “artifact” studies. The posterior MTG cluster associated with action knowledge was slightly dorsal and lateral to the temporal–occipital cluster of “artifact” foci.
There were 17 studies that examined the distinction between perceptually encoded knowledge (i.e., knowledge of concrete objects derived from sensory-motor experience) and verbally encoded knowledge (i.e., knowledge acquired through language) (Paivio 1986). The majority of these studies used contrasts between concrete and abstract concepts (Jessen et al. 2000; Wise et al. 2000; Grossman et al. 2002b; Fiebach and Friederici 2003; Giesbrecht et al. 2004; Noppeney and Price 2004; Whatmough et al. 2004; Binder, Medler, et al. 2005; Binder, Westbury, et al. 2005; Sabsevitz et al. 2005; Wallentin, Østergaard, et al. 2005; Bedny and Thompson-Schill 2006; Fliessbach et al. 2006), whereas a few others examined this distinction using tasks that required explicit knowledge of perceptual versus verbal facts (Fletcher et al. 1995; Noppeney and Price 2003; Lee and Dapretto 2006; Ebisch et al. 2007). Significant overlap for the 113 “perceptual” foci occurred in the AG bilaterally, left mid-fusiform gyrus, left DMPFC, and left posterior cingulate (Fig. 6). Significant overlap for the 34 “verbal” foci occurred in the left IFG (mainly pars orbitalis) and left anterior superior temporal sulcus (STS).
PET and fMRI activation studies are based, directly or indirectly, on differences between 2 or more brain activity states. The “activation maps” produced by these methods represent relative changes in brain activity, not absolute activity levels. Interpretation of such activations, therefore, cannot be based solely on the processing demands of one of the task states, but rather requires a joint analysis of the processing demands elicited by each of the task states and the degree to which they differ. The goal of the present meta-analysis was to clarify the brain regions specifically involved in semantic processing, a topic that has been the source of much debate (for a sample of conflicting views, see Wernicke 1874; Head 1926; Petersen et al. 1988; Démonet et al. 1993; Thompson-Schill et al. 1997; Tranel et al. 1997; Hillis et al. 2001; Martin and Caramazza 2003; Patterson et al. 2007). In contrast to previous meta-analyses on this topic, we used explicit criteria to define activation representing semantic processing; these criteria referred to differences between the stimuli and tasks used to generate each activation map. To be considered for inclusion, a contrast had to involve a difference in either the degree to which stored knowledge was accessed (“general” contrast) or the specific type of knowledge accessed (“specific” contrast). These differences in stored knowledge access could be elicited through manipulation of stimulus characteristics (e.g., words vs. pseudowords, meaningful vs. meaningless sentences, famous vs. unfamiliar names, and animals vs. tools), through manipulation of the subject's attention via task instructions (e.g., semantic vs. phonological decisions, color vs. action decisions), or both.
Another central feature of the present study that distinguishes it from previous reviews was the application of strict exclusion criteria. To minimize contamination of the results by nonsemantic processes, studies were excluded if the semantic condition of interest also made greater demands on low-level sensory, orthographic, phonological, syntactic, working memory, attentional, response selection, or motor processes. (Note that studies were excluded only when the semantic condition of interest made greater demands on these processes, not when the comparison condition made greater demands.) The 2 most common reasons for exclusion were inadequate controls for phonological processing due to use of unpronounceable or nonlinguistic stimuli in the comparison condition, and inadequate controls for general task performance processes. The latter type of exclusion was the most common and warrants further discussion, because in our view many prior studies have not clearly distinguished knowledge access processes from more general cognitive processes that are not specific to semantic tasks. The critical point we wish to make is that all consciously executed, goal-directed tasks require at minimum a set of domain-general processes that include maintenance of attention, direction of attention to relevant information (external or internal), maintenance of this relevant information in a short-term memory store, maintenance of the task goal and task procedures in working memory, decision, response selection, and error monitoring. These processes are necessary for all goal-directed cognitive tasks, including semantic tasks; however, our aim here was to identify brain regions engaged specifically in semantic processes. Thus, it was critical to exclude activation contrasts in which the semantic condition of interest engaged these processes to a greater degree than the comparison condition, including all contrasts in which the semantic task was more difficult than the comparison task. 
In addition to the examples given in the introduction, another illustrative case is the large set of studies involving either semantic priming or repetition suppression, in which unprimed or new words are compared with primed or repeated words (e.g., Mummery et al. 1999; Wagner et al. 2000; Yasuno et al. 2000; Rossell et al. 2001; Copland et al. 2003; Rossell et al. 2003; Matsumoto et al. 2005; Wible et al. 2006). One underlying hypothesis of these studies is that unprimed or new words require more semantic processing than primed or repeated words that have already been processed, and behavioral data uniformly support this hypothesis by showing longer response times for the unprimed/new items. Though it is likely that unprimed/new items elicit more extensive semantic processing, it is also an inescapable fact, in our view, that they require greater attentional and executive resources. Exclusion of these studies from the present meta-analysis was therefore necessary to isolate the semantic processes of interest, even though the activation maps from these contrasts probably do reflect, at least in part, semantic processes.
The meta-analysis links the following 7 brain regions with semantic processes: 1) posterior inferior parietal lobe (AG and portions of SMG), 2) lateral temporal cortex (MTG and portions of ITG), 3) ventral temporal cortex (mid-fusiform and adjacent parahippocampal gyrus), 4) DMPFC, 5) IFG, 6) ventromedial prefrontal cortex (VMPFC), and 7) posterior cingulate gyrus. One common attribute of these regions is their likely role in high-level integrative processes. All are known to receive extensively processed, multimodal and supramodal input. Recent studies show that even cortical regions formerly considered “unimodal” receive multisensory inputs (Schroeder and Foxe 2004; Cappe and Barone 2005), blurring the traditional distinction between unimodal and heteromodal cortex (Mesulam 1985). A useful qualitative distinction can still be drawn, however, between “modal” cortex, where processing reflects a dominant sensory or motor modality, and “amodal” cortex, where input from multiple modalities is more nearly balanced and highly convergent. For continuity with previous work, we refer to these latter regions as heteromodal, though alternative terms such as supramodal or amodal are perhaps equally valid. The human semantic system thus corresponds in large measure to the network of parietal, temporal, and prefrontal heteromodal association areas, which are greatly expanded in the human relative to the nonhuman primate brain (von Bonin 1962; Geschwind 1965; Brodmann 1994/1909). Evidence supports the subdivision of this network into posterior (temporal/parietal) and frontal components corresponding to storage and retrieval aspects of semantic processing (see discussion below). A second general feature of the semantic system is that it is lateralized to the left hemisphere, though with some bilateral representation (particularly in the AG and posterior cingulate gyrus). 
The following discussion reviews each of the nodes in this network in greater detail, examining their anatomical characteristics and likely functional roles based on imaging and neuropsychological data.
The most dense concentration of activation foci was in the posterior aspect of the left inferior parietal lobule, a region known historically as the angular gyrus or “pli courbe” (French: “curved gyrus”) (Déjerine 1895). The AG consists of cortex surrounding the parietal extension of the STS; it is formed essentially by the continuation of the superior and middle temporal gyri into the inferior parietal lobe. Its medial boundary is the intraparietal sulcus, which separates it from the superior parietal lobule. The anterior boundary with the SMG is defined by the first intermediate sulcus of Jensen, though this landmark is not always present. Its posterior boundary with the occipital lobe is not well defined. The AG corresponds approximately to BA 39 and in recent cytoarchitectonic studies to PGa and PGp (Caspers et al. 2006). This region is practically nonexistent in lower primates (Brodmann 1994/1909) and is greatly expanded in the human brain relative to its probable homolog in the macaque monkey, area PG/7a (von Bonin and Bailey 1947; Hyvarinen 1982). It is anatomically connected almost entirely with other association regions and receives little or no direct input from primary sensory areas (Mesulam et al. 1977; Hyvarinen 1982; Seltzer and Pandya 1984; Cavada and Goldman-Rakic 1989a, 1989b; Andersen et al. 1990).
Though we use the term angular gyrus, a variety of other labels for activations in this region were encountered in the studies reviewed. Despite its location in the parietal lobe, the AG is often referred to erroneously as the middle temporal gyrus; other authors use the terms temporoparietal junction or temporal–parietal–occipital cortex. These concatenated terms strike us as unnecessarily imprecise in this context and should probably be reserved for describing large activations that straddle the boundaries between lobes or extend beyond the parietal lobe. AG activations were also not infrequently mislabeled with BA numbers 40 and 19. (Some of this confusion may be a historical accident stemming from Brodmann's famous illustration, which shows area 39 shrunken to a fraction of its true size relative to surrounding structures [Brodmann 1994/1909]. Other cytoarchitectonic studies have portrayed this region as much more extensive [von Economo and Koskinas 1925; Sarkissov et al. 1955]. Brodmann's intent seems to have been to show both lateral and dorsal brain regions in a single lateral view. This required that the inferior parietal lobule on the lateral surface be reduced in size to accommodate areas on the superior parietal lobule, which are normally not well seen from a lateral view.)
On the other hand, some of the activation foci in this large cluster probably lie outside the AG. Several are just posterior, in what is likely BA 19. Given this evidence from functional imaging, it is possible that at least some cortex in the anterior occipital lobe classically identified as BA 19 may serve a semantic rather than a modal visual associative function. Alternatively, BA 39 may extend farther posteriorly than is typically portrayed. Several other foci in this large cluster were in the SMG (BA 40) just anterior to the AG.
Lesions of the left AG produce a variety of cognitive deficits, including alexia and agraphia (Déjerine 1892; Benson 1979; Cipolotti et al. 1991), anomia (Benson 1979), transcortical sensory aphasia (Damasio 1981; Kertesz et al. 1982; Rapcsak and Rubens 1994), sentence comprehension impairment (Dronkers et al. 2004), acalculia (Gerstmann 1940; Benton 1961; Cipolotti et al. 1991; Dehaene and Cohen 1997), visual-spatial and body schema disorders (Gerstmann 1940; Critchley 1953), ideomotor apraxia (Haaland et al. 2000; Buxbaum et al. 2005; Jax et al. 2006), and dementia (Benson et al. 1982). Perhaps the main conclusion to be drawn from this evidence is that the AG likely plays a role in complex information integration and knowledge retrieval. Given its anatomical location adjoining visual, spatial, auditory, and somatosensory association areas, the AG may be the single best candidate for a high-level, supramodal integration area in the human brain (Geschwind 1965). Several functional imaging studies have shown that the AG is activated in response to semantically anomalous words embedded in sentences, suggesting that it plays a role in integrating individual concepts into a larger whole (Ni et al. 2000; Friederici et al. 2003; Newman et al. 2003). One recent fMRI study found that during auditory sentence comprehension, the AG, alone among the regions activated, showed a late activation relative to baseline that began at the end of the sentence and occurred only when the constituent words could be integrated into a coherent meaning (Humphries et al. 2007). Three studies comparing processing of connected discourse to processing of unrelated sentences or phrases have also shown activation of the AG (Fletcher et al. 1995; Homae et al. 2003; Xu et al. 2005). Considering these various lines of evidence, we propose that the AG occupies a position at the top of a processing hierarchy underlying concept retrieval and conceptual integration. Though it is involved in all aspects of semantic processing, it may play a particular role in behaviors requiring fluent conceptual combination, such as sentence comprehension, discourse, problem solving, and planning.
The meta-analysis identified several regions in the lateral and ventral left temporal lobe, including most of the MTG and portions of the ITG, fusiform gyrus, and parahippocampus. MTG, ITG, and ventral temporal lobe have often been considered modal visual association cortex by analogy with lateral and ventral temporal cortex in the macaque monkey (von Bonin and Bailey 1947; Mesulam 1985); however, the present analysis argues against such an interpretation in the human brain. In fact, many functional imaging studies have demonstrated activation of these regions by auditory stimuli, particularly during language tasks (e.g., Démonet et al. 1992; Binder et al. 1997; Wise et al. 2000; Noppeney et al. 2003; Rissman et al. 2003; von Kriegstein et al. 2003; Xiao et al. 2005; Humphries et al. 2006; Orfanidou et al. 2006; Spitsyna et al. 2006; Baumgaertner et al. 2007). Thus, these regions in the human brain are likely heteromodal cortex involved in supramodal integration and concept retrieval. As in the inferior parietal lobe, the relative expansion of this high-level integrative cortex in the temporal lobe has resulted in modal visual cortex being “pushed” posteriorly and reduced in relative surface area (Orban et al. 2004).
Focal damage to the MTG, though somewhat rare, is strongly associated with language comprehension and semantic deficits (e.g., Hart and Gordon 1990; Hillis and Caramazza 1991; Kertesz et al. 1993; Chertkow et al. 1997; Dronkers et al. 2004). The anterior ventral temporal lobe, including anterior MTG, ITG, and fusiform gyrus, is frequently damaged (usually bilaterally) in herpes simplex encephalitis, often resulting in profound semantic deficits (Warrington and Shallice 1984; Kapur, Barker, et al. 1994; Gitelman et al. 2001; Lambon Ralph et al. 2007; Noppeney et al. 2007). Semantic dementia, the temporal lobe variant of frontotemporal dementia, is characterized by progressive degeneration of the anterior ventrolateral temporal lobes and gradual loss of semantic knowledge (Warrington 1975; Snowden et al. 1989; Hodges et al. 1992, 1995; Mummery et al. 2000; Jefferies and Lambon Ralph 2006; Lambon Ralph et al. 2007; Noppeney et al. 2007). Large lesions of the ventral left temporal lobe have been associated with transcortical sensory aphasia (Damasio 1981; Kertesz et al. 1982; Alexander et al. 1989; Berthier 1999). A striking aspect of many of these temporal lobe injuries is a dissociation in performance across object categories. Patients with anterior temporal damage, for example, occasionally show greater impairment in processing concepts related to living things compared with artifacts (Warrington and Shallice 1984; Warrington and McCarthy 1987; Forde and Humphreys 1999; Gainotti 2000; Lambon Ralph et al. 2007), and the opposite pattern has been reported in patients with posterior temporal and parietal lesions (Warrington and McCarthy 1987, 1994; Hillis and Caramazza 1991; Gainotti 2000). These category-related deficits suggest that the temporal lobe may be a principal site for storage of perceptual information about objects and their attributes. 
A large number of functional imaging studies provide support for this hypothesis by showing selective activation of the posterior lateral temporal lobe by tool and action concepts (Martin et al. 1995, 1996; Cappa et al. 1998; Chao et al. 1999, 2002; Moore and Price 1999a; Perani et al. 1999; Grossman et al. 2002a; Kable et al. 2002; Phillips et al. 2002; Noppeney et al. 2003; Tyler, Stamatakis, et al. 2003; Davis et al. 2004; Kable et al. 2005; Noppeney et al. 2005; Wallentin, Lund et al. 2005).
Semantic foci in the fusiform and parahippocampal gyri were concentrated in a relatively focal region near the mid-point of these gyri, centered at y = −35 in the Talairach–Tournoux system. The specific role of this region is unknown. It may correspond to the “basal temporal language area” described in electrocortical stimulation mapping studies (Lüders et al. 1991). It is anterior to activation sites observed in functional imaging studies comparing different categories of object pictures (e.g., Perani et al. 1995; Martin et al. 1996; Kanwisher et al. 1997; Epstein and Kanwisher 1998; Chao et al. 1999; Ishai et al. 1999; Moore and Price 1999a; Gorno-Tempini et al. 2000; Okada et al. 2000; Haxby et al. 2001; Chao et al. 2002; Whatmough et al. 2002; Tyler, Bright, et al. 2003; Gerlach et al. 2004; Gerlach 2007). These more posterior activations (typically y < −50) are rarely observed in studies using words, suggesting that they arise from systematic differences between object categories in their constituent visual attributes, which are in turn processed by somewhat different visual perceptual mechanisms (Humphreys and Forde 2001; Hasson et al. 2002). Given its close proximity to these object perception areas, however, several authors have proposed that the mid-fusiform gyrus plays a particular role in retrieving knowledge about the visual attributes of concrete objects (D'Esposito et al. 1997; Chao and Martin 1999; Thompson-Schill, Aguirre, et al. 1999; Wise et al. 2000; Kan et al. 2003; Vandenbulcke et al. 2006; Simmons et al. 2007). This region is also near the hippocampus and massive cortical afferent pathways to the hippocampal formation via parahippocampal and entorhinal cortex (Van Hoesen 1982; Insausti et al. 1987; Suzuki and Amaral 1994). It is thus possible that the parahippocampal component of this cluster acts as an interface between lateral semantic memory and medial episodic memory encoding networks (Levy et al. 2004).
The present analysis provides little evidence for involvement of the STG in semantic processing. The STG has long been considered to play a central role in language comprehension (e.g., Wernicke 1874; Geschwind 1971; Bogen and Bogen 1976; Hillis et al. 2001), but anatomical and functional data suggest that it contains mainly modal auditory cortex (von Economo and Koskinas 1925; Galaburda and Sanides 1980; Baylis et al. 1987; Kaas and Hackett 2000; Poremba et al. 2003). Its role in language relates primarily to speech perception and phonological processing rather than to retrieval of word meaning (Henschen 1918–1919; Binder et al. 2000; Wise et al. 2001; Binder 2002; Hickok et al. 2003; Scott and Johnsrude 2003; Indefrey and Levelt 2004; Liebenthal et al. 2005; Buchsbaum and D'Esposito 2008; Graves et al. 2008). Several studies, however, suggest that portions of the left superior temporal sulcus, which includes ventral STG, play a role in processing abstract concepts (see below).
We draw particular attention to this region, which has been largely overlooked in reviews on semantic processing despite its consistent activation. It forms a distinctive, diagonally oriented band extending from the posterior–medial aspect of the MFG, across the superior frontal sulcus and dorsal SFG, and onto the medial surface of the SFG. It corresponds roughly to BA 8, extending into BA 9 medially. We use the term “dorsomedial” to distinguish this region from “dorsolateral prefrontal cortex” located lateral and ventral to it in the lateral MFG and inferior frontal sulcus.
Lesions of the left dorsal and medial frontal lobe cause transcortical motor aphasia, a syndrome characterized by sparse speech output but otherwise normal phonological abilities (Luria and Tsvetkova 1968; Freedman et al. 1984; Alexander and Benson 1993). There is typically a striking disparity between cued and uncued speech production in this syndrome. Patients can repeat words and name objects relatively normally, but are unable to generate lists of words within a category or invent nonformulaic responses in conversation. In other words, patients perform well when a simple response is fully specified by the stimulus (a word to be repeated or object to be named) but poorly when a large set of responses is possible (Robinson et al. 1998). This pattern suggests a deficit specifically affecting self-guided, goal-directed retrieval of semantic information. The location of the DMPFC, adjacent to motivation and sustained attention networks in the anterior cingulate gyrus and just anterior to premotor cortex, makes this region a likely candidate for this semantic retrieval role.
The left DMPFC has not been delineated in previous discussions of the prefrontal cortex and semantic retrieval processes. Analysis of medial frontal lesions in transcortical motor aphasia has usually centered on the supplementary motor area (SMA), a region of medial premotor cortex (BA 6) posterior to the DMPFC, perhaps because of the attention drawn to this region in earlier stimulation mapping studies (Penfield and Roberts 1959). Some authors, citing the involvement of SMA and anterior cingulate cortex in motor planning, attention, and motivation processes, dismissed the deficits in patients with left medial frontal lesions as nonlinguistic in nature (Damasio 1981). Others have recognized the linguistic nature of the retrieval deficit while attributing this to SMA damage (Masdeu et al. 1978; Freedman et al. 1984; Goldberg 1985). The DMPFC and SMA have a common arterial supply (the callosomarginal branch of the anterior cerebral artery) and for this reason are usually damaged together in ischemic lesions. Although we agree that focal SMA damage is unlikely to produce a linguistic deficit, we propose that the specific linguistic deficit affecting fluent semantic retrieval in many of these patients is due to DMPFC damage anterior to the SMA.
The left IFG was implicated in several early imaging studies of semantic processing (Petersen et al. 1988; Frith et al. 1991; Kapur, Rose, et al. 1994), and much subsequent discussion has focused on this region (e.g., Démonet et al. 1993; Buckner et al. 1995; Fiez 1997; Thompson-Schill et al. 1997; Gabrieli et al. 1998; Poldrack et al. 1999; Thompson-Schill et al. 1999; Wagner et al. 2000; Roskies et al. 2001; Wagner et al. 2001; Bookheimer 2002; Chee et al. 2002; Gold and Buckner 2002; Devlin et al. 2003; Nyberg et al. 2003; Simmons et al. 2005; Goldberg et al. 2007). Consistent with prior reviews (Fiez 1997; Bookheimer 2002), the meta-analysis shows clear involvement of the anterior–ventral left IFG in semantic processing. This region corresponds to the “pars orbitalis” (BA 47). More posterior and dorsal parts of the IFG were also activated, though less consistently.
Imaging studies have also frequently implicated the left IFG in phonological, working memory, and syntactic processes (e.g., Démonet et al. 1992; Zatorre et al. 1992; Paulesu et al. 1993; Buckner et al. 1995; Fiez 1997; Smith et al. 1998; Fiez et al. 1999; Poldrack et al. 1999; Burton et al. 2000; Embick et al. 2000; Poldrack et al. 2001; Gold and Buckner 2002; Friederici et al. 2003; Nyberg et al. 2003; Davis et al. 2004; Indefrey and Levelt 2004; Fiebach et al. 2005; Owen et al. 2005; Tan et al. 2005; Grodzinsky and Friederici 2006). Many studies have also shown increased BOLD responses in the IFG as task difficulty increases, possibly due to increased working memory or phonological processing demands (e.g., Braver et al. 1997, 2001; Jonides et al. 1997; Honey et al. 2000; Adler et al. 2001; Ullsperger and von Cramon 2001; Gould et al. 2003; Binder, Medler, et al. 2005; Mitchell 2005; Sabsevitz et al. 2005; Desai et al. 2006; Lehmann et al. 2006; Tregallas et al. 2006). Though we attempted to remove contrasts in which semantic processing was confounded with phonological processing or overall difficulty, these screening efforts were likely imperfect due to the absence of appropriate behavioral data in many published studies. It is thus possible that some of the IFG activation foci, particularly those outside the pars orbitalis, are the result of residual phonological or working memory confounds.
As is well known, IFG lesions typically impair phonological, articulatory planning, and syntactic rather than semantic processes (Broca 1861; Mohr 1976; Caramazza et al. 1981; Alexander and Benson 1993), though a few cases of transcortical sensory aphasia have been reported (Otsuki et al. 1998; Maeshima et al. 1999, 2004; Sethi et al. 2007). Strokes affect the posterior aspect of the IFG more commonly than the anterior region; isolated lesions of the pars orbitalis are practically unknown. Devlin et al. (2003) applied transcranial magnetic stimulation (TMS) to the anterior IFG in 8 healthy participants during performance of semantic decision and perceptual (size) decision tasks. TMS slowed participants’ reaction time on the semantic but not on the control task, supporting a role for this region in semantic processing. Accuracy on the semantic task, however, was not affected by TMS. It may be that the anterior–ventral IFG contributes to semantic processing in the healthy brain but is not absolutely necessary for task completion (Price et al. 1999). Damage to this region thus impairs processing “efficiency,” resulting in slowing of responses without actual errors.
This group of foci occupies cortex in the cingulate gyrus and medial SFG anterior to the genu of the corpus callosum, the subgenual cingulate gyrus, gyrus rectus, and medial orbital frontal cortex. The involved region of anterior cingulate cortex is anterior and ventral to the more dorsal region of anterior cingulate cortex implicated in many studies of working memory, response conflict, error detection, and executive control functions (e.g., Carter et al. 1999; Duncan and Owen 2000; Barch et al. 2001; van Veen and Carter 2002; Owen et al. 2005). We use the term “rostral cingulate gyrus” to emphasize this distinction. The VMPFC corresponds to portions of BA 10, 11, 24, 25, and 32. This region has been linked with motivation, emotion, and reward processing and probably plays a central role in processing the affective significance of concepts (Damasio 1994; Drevets et al. 1997; Mayberg et al. 1999; Bechara et al. 2000; Phillips et al. 2003). It has also been activated in many general semantic contrasts, however, possibly due to incidental processing of the emotional attributes of words (Kuchinke et al. 2005).
This region, which corresponds to BA 23, BA 31, and the retrosplenial region (BA 26, 29, and 30), was one of the most consistently activated. Activation peaks occurred in both hemispheres but more often on the left. A few foci in this cluster were located in the ventral aspect of the precuneus just dorsal to the posterior cingulate, in the region of the subparietal sulcus separating these gyri, or in the ventral parieto-occipital sulcus, which separates the posterior cingulate gyrus from the occipital lobe.
This general region has been linked with episodic and visuospatial memory functions (Valenstein et al. 1987; Rudge and Warrington 1991; Gainotti et al. 1998; Aggleton and Pearce 2001; Vincent et al. 2006; Epstein et al. 2007), emotion processing (Maddock 1999), spatial attention (Mesulam 1990; Small et al. 2003), visual imagery (Hassabis et al. 2007; Johnson et al. 2007; Burgess 2008), and other processes (Vogt et al. 2006). Of these, the association with episodic memory may be most likely. Posterior cingulate and adjacent retrosplenial cortex have strong reciprocal connections with the hippocampal complex via the cingulum bundle (Morris et al. 1999; Kobayashi and Amaral 2003, 2007). A number of patients with focal lesions to this region have presented with amnestic syndromes (Valenstein et al. 1987; Heilman et al. 1990; Rudge and Warrington 1991; Takayama et al. 1991; Katai et al. 1992; Gainotti et al. 1998; McDonald et al. 2001). Retrosplenial and surrounding posterior cingulate cortex are affected early in the course of Alzheimer disease, which typically presents as an episodic memory encoding deficit (Desgranges et al. 2002; Nestor et al. 2003).
If posterior cingulate cortex is involved primarily in encoding episodic memories, why is it consistently activated in contrasts that emphasize semantic processing? The likely answer has to do with the nature of episodic memory, the presumed evolutionary purpose of which is to form a record of past experience for use in guiding future behavior. Not all experiences are equally useful in this regard; thus, the brain has evolved a strategy of preferentially recording highly meaningful experiences, that is, experiences that evoke associations and concepts. Familiar examples of this phenomenon include the enhanced learning of words encoded during semantic relative to perceptual tasks, imageable relative to abstract words, and emotional relative to neutral words (Paivio 1968; Craik 1972; Bock 1986). In each case, the enhanced retrieval of conceptual information (semantic retrieval) leads to enhanced episodic encoding. Several related theories of this phenomenon have been proposed (Cohen and Eichenbaum 1993; McClelland et al. 1995; O'Reilly and Rudy 2001), all of which postulate that episodic memory encoding involves the formation of large-scale representations through interactions between neocortex and the hippocampal system. The role of the neocortex is to compute ongoing perceptual, semantic, affective, and motor representations during the episode, while the hippocampal system binds these spatiotemporal cortical events into a unique event configuration. The important point is that the amount of episodic encoding that occurs is highly correlated with the degree of semantic processing evoked by the episode. We propose that the posterior cingulate gyrus, by virtue of its strong connections with the hippocampus, acts as an interface between the semantic retrieval and episodic encoding systems, similar to the role postulated above for the parahippocampal gyrus.
The posterior inferior parietal lobe of the macaque monkey, variously designated 7a (Vogt and Vogt 1919) or PG (von Bonin and Bailey 1947), and more recently subdivided into 2 subregions, PG and Opt (Pandya and Seltzer 1982; Gregoriou et al. 2006), is a likely homologue of the human AG with similar heteromodal functional characteristics (Hyvarinen 1982). Its principal connections are with visual and “polysensory” regions in the upper bank and fundus of the STS (areas TPO, STP, MST, and IPa), the parahippocampal gyrus (areas TF and TH), dorsolateral prefrontal cortex (mainly area 46), rostrolateral orbitofrontal cortex (area 11), and posterior cingulate gyrus (Jones and Powell 1970; Mesulam et al. 1977; Leichnitz 1980; Petrides and Pandya 1984; Seltzer and Pandya 1984, 1994; Selemon and Goldman-Rakic 1988; Cavada and Goldman-Rakic 1989a, 1989b; Andersen et al. 1990). Notably, the same STS, parahippocampal, prefrontal, and posterior cingulate regions with which PG/7a is connected are themselves all strongly interconnected (Jones and Powell 1970; Seltzer and Pandya 1976, 1978, 1989, 1994; Baleydier and Mauguiere 1980; Vogt and Pandya 1987; Selemon and Goldman-Rakic 1988; Morris et al. 1999; Blatt et al. 2003; Kobayashi and Amaral 2003, 2007; Padberg et al. 2003; Parvizi et al. 2006). These 6 regions thus form a distinct, large-scale cortical network that is strikingly similar in location and function to the human semantic system (Fig. 7). The other chief component of the human system, VMPFC, is roughly homologous to the medial orbitofrontal (BA 10, 14, 25, 32) region of the macaque. Although this region has no connection with PG/7a, it is strongly connected to middle and anterior STS, posterior cingulate and retrosplenial cortex, parahippocampus, and hippocampus (Seltzer and Pandya 1989; Barbas 1993; Cavada 2000; Blatt et al. 2003; Kobayashi and Amaral 2003; Saleem et al. 2007). 
Thus, the macaque brain contains a well-defined network of polysensory, heteromodal, and paralimbic areas that are several processing stages removed from primary sensory and motor regions and likely to be involved in computation of complex, nonperceptual information. We propose that this network is a nonhuman primate homologue of the human semantic system, responsible for storage of abstract knowledge about conspecifics, food sources, objects, actions, and emotions. Anatomical differences between the human and macaque systems are consistent with the known expansion of prefrontal, parietal, and temporal heteromodal cortex in the human brain, which has enabled further abstraction of knowledge from perceptual events in humans, culminating in the development of formal symbol systems to represent and communicate this knowledge.
The macaque parietal/frontal/STS network illustrated in Figure 7 has often been interpreted as playing a central role in visuospatial processing and spatial allocation of attention (Mesulam 1981; Hyvarinen 1982; Seltzer and Pandya 1984; Selemon and Goldman-Rakic 1988). This view is supported by a large number of studies showing cells in the posterior inferior parietal lobe of the macaque that respond to oculomotor, limb movement, and spatial attention tasks (Mountcastle et al. 1975; Hyvarinen 1982; Andersen et al. 1997). This model is clearly at odds, however, with our proposal that these regions are involved in long-term storage and retrieval of object and action knowledge. In our view, the characterization of this network as visuospatial/attentional does not account for the prominent connections of these frontoparietal areas with polysensory and paralimbic areas. We believe these models can be reconciled by a consideration of known subdivisions of the macaque posterior parietal lobe. Research over the past 20 years has clarified the connectivity and functional properties of several areas immediately anteromedial to PG/7a in the macaque intraparietal sulcus (LIP, VIP, and MIP), which appear to play a greater role in visuospatial and attention processes than PG/7a (Andersen et al. 1990, 1997; Rushworth et al. 1997; Chafee and Goldman-Rakic 1998; Duhamel et al. 1998). Unlike PG/7a, these IPS regions have little or no connectivity with the temporal lobe or paralimbic regions (Seltzer and Pandya 1984; Cavada and Goldman-Rakic 1989a; Suzuki and Amaral 1994). They are connected strongly to the frontal eye fields and premotor cortex, whereas PG/7a is connected to more anterior and dorsal prefrontal regions (area 46) (Cavada and Goldman-Rakic 1989b; Andersen et al. 1990). Numerous functional imaging studies have also clearly linked the IPS and frontal eye fields in humans with visuospatial and attention functions (Corbetta et al. 1998; Grefkes and Fink 2005; Grosbras et al. 2005). Thus, we propose that the posterior parietal spatial attention system in both the human and macaque is confined mainly to cortex in the IPS and superior parietal lobule, and that there is a distinct functional and anatomical boundary between this IPS system and adjacent inferior parietal cortex involved in semantic knowledge representation. This boundary line appears to correspond in both species to the superior margin of the lateral (posterior) bank of the IPS (see Fig. 2).
In addition to brain networks supporting semantic processing in general, particular regions may be relatively specialized for processing specific object categories, attributes, or types of knowledge. Prior reviews on this topic have included studies that used object pictures as stimuli, whereas the present meta-analysis was confined to studies using words. The number of such studies that examined specific types of semantic knowledge was relatively small, and activation peaks from these studies showed little overlap. The clearest pattern emerged from the 10 studies examining action knowledge (Fig. 5). Two distinct activation clusters were observed in left SMG and posterior MTG. Lesions in these areas have been associated with impairments of action knowledge and ideomotor apraxia in many neuropsychological studies (Tranel et al. 1997, 2003; Haaland et al. 2000; Buxbaum et al. 2005; Jax et al. 2006). The SMG focus lies just posterior to somatosensory association cortex; thus, it seems likely that this region stores abstract somatosensory (e.g., proprioceptive) knowledge acquired during learning and performance of complex motor sequences. The likely homologue of this region in the macaque monkey is area PF in the anterior inferior parietal lobe, a region known to contain mirror neurons responsive to both action observation and performance (Rizzolatti and Craighero 2004). Activation of this region in humans by words, which merely refer conceptually to actions, lends support to the idea that the information stored there is semantic in nature, coding complex actions performed on objects for a specific purpose (Rothi et al. 1991; Buxbaum 2001; Buxbaum et al. 2005, 2006; Fogassi et al. 2005). The posterior MTG focus is just anterior to visual motion processing areas in the MT complex (Tootell et al. 1995), suggesting that this region stores knowledge about the visual attributes of actions. As previously suggested (Martin et al. 2000), this specialization of posterior MTG for processing action knowledge may explain the frequently observed preferential activation of this region by pictures of tools (Martin et al. 1996; Mummery et al. 1996; Cappa et al. 1998; Chao et al. 1999, 2002; Moore and Price 1999a; Perani et al. 1999; Devlin, Moore, et al. 2002; Phillips et al. 2002; Damasio et al. 2004). Indeed, the studies examining knowledge of manipulable artifacts (relative to living things) produced areas of overlap at very similar sites in the SMG and near the junction of posterior MTG, ITG, and lateral occipital lobe. In contrast, the studies examining knowledge of living things (usually animals) relative to other categories produced no significant areas of overlap, consistent with several prior reviews (Devlin, Russell, et al. 2002; Gerlach 2007).
Given the presence of mirror neurons in premotor cortex (Rizzolatti and Craighero 2004) and prior imaging evidence that the inferior frontal region responds to pictures of manipulable objects (Martin et al. 1996; Binkofski et al. 1999; Perani et al. 1999; Chao and Martin 2000; Gerlach et al. 2002; Kellenbach et al. 2003; Buxbaum et al. 2006), it is somewhat surprising that frontal cortex did not show consistent activation in studies of action or artifact words. Several of the included action and artifact studies did report activation in this general region (Martin et al. 1995; Kounios et al. 2003; Tyler, Stamatakis, et al. 2003; Wheatley et al. 2005; Ruschemeyer et al. 2007), yet these foci did not cluster sufficiently to produce an activation in the ALE analysis. Evidence suggests that inferior frontal and inferior parietal cortices play somewhat different roles in action processing, with the parietal system more closely associated with knowledge of specific object-related actions (Buxbaum 2001; Buxbaum et al. 2005; Creem-Regehr and Lee 2005; Fogassi et al. 2005). Consistent with this distinction, patients with inferior parietal lesions may have impaired recognition of object-related pantomimes performed by others (“representational ideomotor apraxia”), whereas patients with inferior frontal lesions apparently do not (Varney and Damasio 1987; Rothi et al. 1991; Buxbaum et al. 2005). Thus, it is possible that action and artifact words, which in any case do not seem to activate motor representations as readily as pictures do (Rumiati and Humphreys 1998), engage inferior frontal systems only weakly compared with inferior parietal areas concerned with object-related action knowledge.
Some theorists have emphasized a distinction between perceptual (“image-based”) and verbal representations in semantic memory (Paivio 1986), exemplified by the contrast between concrete and abstract words. A number of functional imaging studies have examined this distinction (see Binder 2007 for a review). Thirteen studies included in the present meta-analysis showed areas of stronger activation for concrete compared with abstract words, with overlap in bilateral AG, left mid-fusiform gyrus, left DMPFC, and left posterior cingulate cortex (Fig. 6). This pattern is very similar to the network observed in the general semantic meta-analysis (Fig. 4). Eight studies showed areas of stronger activation for verbal compared with perceptual knowledge. These included comparisons between abstract and concrete words, abstract and concrete stories, and encyclopedic (i.e., verbal factual) versus perceptual knowledge. Abstract concepts are generally more difficult to process than concrete concepts, and several other studies had to be excluded from the meta-analysis because of this confound. Areas associated with verbal semantic processing included the left IFG (mainly pars orbitalis) and left anterior STS. These dissociations support a distinction between perceptually based knowledge, stored in heteromodal association areas such as the AG and ventral temporal lobe, and verbally encoded knowledge, which places greater demands on left anterior perisylvian regions. This dissociation is supported by a range of neuropsychological studies showing relative impairment of abstract word processing in patients with perisylvian lesions (Goodglass et al. 1969; Coltheart et al. 1980; Roeltgen et al. 1983; Katz and Goodglass 1990; Franklin et al. 1995) and relative impairment of concrete word processing in patients with extrasylvian (mainly ventral temporal lobe) lesions (Warrington 1975, 1981; Warrington and Shallice 1984; Breedin et al. 1995; Marshall et al. 1998).
The semantic system identified here is virtually identical to the large-scale network identified in recent studies of autobiographical memory retrieval (Maguire 2001; Svoboda et al. 2006). (In their comprehensive review of the autobiographical memory literature, Svoboda et al. (2006) refer to the AG as “temporoparietal junction,” though all of the foci they report in this region are in the inferior parietal lobe.) There are cogent theoretical and empirical reasons to distinguish general semantic from autobiographical memory (Tulving 1972; De Renzi et al. 1987; Yasuda et al. 1997), yet this nearly complete overlap observed in functional imaging studies is striking. Autobiographical memory refers to knowledge about one's own past, including both remembered events, known as episodic autobiographical memory (e.g., “I remember playing tennis last weekend”), and static facts about the self, known as semantic autobiographical memory (e.g., “I like to play tennis”). Most autobiographical memory begins as detailed knowledge about recently experienced events (i.e., episodic memories). With time, these specific memories lose their perceptual detail and come to more closely resemble semantic knowledge (Johnson et al. 1988; Levine et al. 2002; Addis et al. 2004).
There are several reasons to expect overlap between the general semantic and autobiographical memory retrieval systems. First, semantic autobiographical memories, though they differ from general semantic knowledge in referring to the self, are essentially learned facts and therefore might be supported by the same system that stores and retrieves other learned facts. Second, several theorists have proposed that retrieval of general concepts, such as particular temporal and spatial locations, people, objects, and emotions, is an early processing stage in the retrieval of autobiographical memories and serves to cue the retrieval of these more personal memories (Barsalou 1998; Conway and Pleydell-Pearce 2000). Finally, it is worth emphasizing the perhaps obvious point that autobiographical memories are necessarily composed of concepts and that there could be no retrieval of an autobiographical memory without retrieval of concepts. To recall, for example, that “I played tennis last weekend” logically entails retrieval of the concepts “tennis,” “play,” and “weekend.” Thus, the essential distinction between general semantic retrieval and autobiographical memory retrieval lies in the self-referential aspect of the latter, which may be a relatively minor component of the overall process, at least from a neural standpoint.
A few neuroimaging studies have directly compared autobiographical and general semantic retrieval and reported greater activation in the semantic network during the autobiographical condition, particularly in the medial prefrontal cortex, AG, and posterior cingulate region (Graham et al. 2003; Maguire and Frith 2003; Addis et al. 2004; Levine et al. 2004). Rather than indicating a specialization for autobiographical memory processing in these regions, we believe these data reflect the fact that semantic autobiographical memories are typically more perceptually vivid, detailed, contextualized, and emotionally meaningful than general semantic memories. Thus, we propose that semantic autobiographical and general semantic memory processing are supported by largely identical neural networks, but that retrieval of richer and more detailed memories (such as autobiographical memories) engages this network to a greater degree than retrieval of less detailed knowledge.
As illustrated in Figure 8, the semantic system identified here is also strikingly similar to the human brain “default network” thought to be active during the conscious resting state (Binder et al. 1999; Raichle et al. 2001). This network was originally defined as a set of brain areas that consistently show “task-induced deactivation” in functional imaging studies (Shulman et al. 1997; Binder et al. 1999; Mazoyer et al. 2001; Raichle et al. 2001; McKiernan et al. 2003). Task-induced deactivation refers to decreases in blood flow or BOLD signal during effortful tasks compared with more passive states such as resting, fixation, and passive sensory stimulation.
There is now a general consensus that task-induced deactivation occurs because certain types of neural processes active during passive states are “interrupted” when subjects are engaged in effortful tasks (Binder et al. 1999; Raichle 2006), though the precise nature of these default neural processes remains a topic of debate. Evidence suggests that high levels of ongoing “spontaneous” neural activity are necessary for maximizing responsiveness of the cortex to changes in input (Ho and Destexhe 2000; Salinas and Sejnowski 2001; Chance et al. 2002), and this ongoing activity seems to account for much of the high resting metabolic needs of the cortex (Attwell and Laughlin 2001). The mere fact that there are high levels of neural activity in the resting state does not explain, however, why this activity decreases in particular brain regions during active task states. Such an explanation would seem to depend on an account of resting state activity in terms of the cognitive and affective processes in which subjects preferentially engage during passive states, such as episodic memory encoding and retrieval, monitoring and evaluating the internal and external environment, visual imagery, emotion processing, and working memory (Andreasen et al. 1995; Shulman et al. 1997; Mazoyer et al. 2001; Raichle et al. 2001; Simpson et al. 2001; Stark and Squire 2001).
Binder et al. (1999) proposed that semantic processing constitutes a large component of the cognitive activity occurring during passive states. This proposal was based first on phenomenological considerations. The everyday experience of spontaneous thoughts, mental narratives, and imagery that James referred to as the “stream of consciousness” (James 1890) is ubiquitous and undeniable. Participants in controlled laboratory conditions reliably report such “task-unrelated thinking,” the content of which includes concepts, emotions, and images (Antrobus et al. 1966; Antrobus 1968; Horowitz 1975; Pope and Singer 1976; Teasdale et al. 1993, 1995; Giambra 1995; McKiernan et al. 2003). Performance of effortful perceptual and short-term memory tasks reliably suppresses task-unrelated thoughts, suggesting a direct competition between exogenous and endogenous signals for attentional and executive resources. “Interrupting the stream of consciousness” thus provides a straightforward explanation for task-induced deactivation.
The second consideration emphasized by Binder et al. was the potential adaptive advantage of systems that allow “ongoing” retrieval of conceptual knowledge. In contrast to the sensory, spatial attention, and motor processes engaged when an external stimulus requires a response, ongoing conceptual processes operate primarily on internal stores of knowledge built from past experience. They are not random or “spontaneous” but rather play a profoundly important role in human behavior. Their purpose is to “make sense” of prior experience, solve problems that require computation over long periods of time, and create effective plans governing behavior in the future. This uniquely human capability to perform high-level computations “off-line” is surely the principal explanation for our ability to adapt, create culture, and invent technology.
The final evidence offered by Binder et al. was an fMRI study in which participants were scanned while resting, while performing a perceptual task with no semantic content, and while making semantic decisions about words. Relative to the resting state, the perceptual task produced deactivation in the usual default network (AG, posterior cingulate gyrus, VMPFC, and DMPFC). Critically, however, the semantic decision task did not deactivate these regions. Finally, the same regions were activated when contrasting the semantic decision task with a phonological task that controlled for sensory and executive processes. These results show not only that the semantic network (as defined by the semantic–phonological task contrast) is virtually identical to the default state network, but also that task-induced deactivation in these regions can be modulated by task content. Deactivation occurs when the task or stimuli make little or no demands on semantic processing. When the task itself engages the semantic system, however, deactivation does not occur. This pattern indicates that the default network itself is engaged in semantic processing, both during passive states and when presented with a semantic task. This pattern has since been replicated in a number of studies comparing word tasks, pseudoword tasks, and passive or resting conditions. Pseudowords reliably deactivate the default network relative to the resting state, whereas words do not (Baumgaertner et al. 2002; Henson et al. 2002; Mechelli et al. 2003; Rissman et al. 2003; Ischebeck et al. 2004; Binder, Medler, et al. 2005; Xiao et al. 2005; Humphries et al. 2007). This result is fully consistent with the model just proposed, because pseudowords have little meaning and thus do not engage the semantic system.
The observation that semantic processing engages a network of heteromodal association areas distinct from modal sensory and motor systems provides support for the traditional distinction between conceptual and perceptual processes (Fodor 1983). Under this view, conceptual processes operate on “internal” sources of information, such as semantic and episodic memory, which can be retrieved and manipulated at any time, independent of ongoing external events. In contrast, perceptual processes operate on “external” information derived from immediate, ongoing sensory and motor processes that coordinate interactions with the external environment. Many tasks, of course, require simultaneous processing of both internal and external information, including, for example, any behavior involving recognition of a familiar stimulus such as a word or picture. In such instances, external sensory information is given meaning by activation of associated internal information (Neisser 1967; Aurell 1979).
Functional imaging evidence for such a distinction is illustrated dramatically in Figure 9, which shows areas identified with semantic processing in the present meta-analysis in red. Yellow areas in the figure were activated during a task in which subjects read pseudowords aloud, thereby engaging ventral visual form perception, dorsal visual and spatial attention, motor articulation, and auditory perceptual systems bilaterally (Binder, Medler, et al. 2005). Apart from a few areas of overlap in the anterior ventral visual stream, STS, and IFG, these large-scale networks are largely distinct and complementary, together covering much of the cortical surface. Similar imaging evidence supporting a general distinction between large-scale “intrinsic” and “extrinsic” networks has been reported recently by other researchers (Fox et al. 2005; Fransson 2005; Golland et al. 2007).
Although these observations confirm a general distinction between conceptual and perceptual systems in the brain, other evidence suggests that this distinction is not absolute (Gallese and Lakoff 2005). For example, several studies have shown involvement of primary motor and premotor cortex in the comprehension of action verbs (Hauk et al. 2004; Pulvermuller et al. 2005; Tettamanti et al. 2005). Similarly, there is evidence that high-level visual cortex participates in the processing of object nouns (Martin et al. 1995; James and Gauthier 2003; Kan et al. 2003; Simmons et al. 2007). Word-related activation of these motor and sensory areas is likely to be somewhat subtle compared with activation of heteromodal conceptual regions and to depend to a greater extent on the specific sensorimotor attributes of the word concept. Furthermore, there are as yet few published studies that have focused on such specific attributes. Thus, the present meta-analysis may underrepresent the involvement of sensory and motor systems in comprehension of word stimuli. Defining the extent of this involvement is an important topic for future research.
The current results differ somewhat from conclusions reached in previous reviews. Cabeza and Nyberg (2000) reviewed functional imaging studies published prior to 1999. From an initial sample of 275 studies in a range of cognitive domains, they found 31 involving semantic memory retrieval tasks. Activation foci from these studies clustered mainly in the left IFG, with a few additional foci in the left MTG. The authors provided a brief description of the task contrasts used in each study and drew attention to the importance of control tasks, but made no attempt to exclude studies with word-form or task difficulty confounds. Of these 31 studies, only 4 passed our criteria for inclusion, with the remainder excluded mainly for phonological or task difficulty confounds.
Vigneau et al. (2006) collected 65 studies published prior to 2005 that concerned semantic processing. The authors did not report how the studies were identified but indicated that they excluded contrasts that used visual fixation or resting controls. Both object and word studies were included. No attempt was made to identify phonological or task difficulty confounds. Only activation peaks in the lateral temporal, lateral frontal, and inferior parietal lobes were included in the analysis. Foci were found throughout these regions, including the STG, MTG, IFG, and premotor cortex. The findings of Vigneau et al. thus differ from the current results in showing numerous foci in the STG (due to inclusion of studies comparing speech with non–speech sounds) and in the posterior IFG and premotor cortex (likely due to phonological and working memory confounds), and in the a priori exclusion of data from ventral temporal, dorsomedial prefrontal, posterior cingulate, and ventromedial frontal areas. Of the 65 studies selected by Vigneau et al., only 17 passed our criteria for inclusion; most were excluded because of task confounds or use of object stimuli. Another 53 studies that passed our criteria and were published prior to 2005 were not identified by Vigneau et al.
Future meta-analyses of cognitive imaging studies would benefit from the establishment of a more formal ontological system for defining the cognitive processes represented by an “activation.” The aim of such meta-analyses is to identify the neural correlates of a specific cognitive process or related set of processes, but this enterprise cannot logically succeed without an objective means of defining the cognitive components represented by an activation. The “objects” in such an ontology would correspond to particular experimental conditions (i.e., cognitive processing states), specified by sets of operationally defined stimulus and task attributes. Contrasts between experimental conditions would constitute “relations” defining the cognitive processes that differ between conditions. The system would rest on a set of agreed-upon axioms concerning stimuli, tasks, and their attributes (e.g., “words are familiar symbols,” “familiar symbols have associations,” “associations are stored in semantic memory,” etc.). The interpretive power of such an ontology would be invaluable not only for retrospective meta-analysis but also for designing and interpreting future studies within a common theoretical framework.
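A minimal sketch of how such an ontology might be formalized, with conditions as attribute sets and contrasts as set differences, is shown below. All attribute names and the decomposition of each condition are illustrative assumptions, not a proposal from this article:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Condition:
    """An experimental condition (a cognitive processing state),
    specified by a set of operationally defined stimulus and task attributes."""
    name: str
    attributes: frozenset

def contrast(active: Condition, control: Condition) -> frozenset:
    """A contrast is a relation between conditions: it isolates the
    cognitive components present in the active condition but absent
    from the control condition."""
    return active.attributes - control.attributes

# Hypothetical decomposition of the W(SD)-N(PD) contrast described in the
# table notes: semantic decision on words vs. phonological decision on
# pseudowords. The shared attributes are assumptions for illustration.
w_sd = Condition("W(SD)", frozenset({"visual input", "orthographic form",
                                     "phonological access", "decision",
                                     "semantic access"}))
n_pd = Condition("N(PD)", frozenset({"visual input", "orthographic form",
                                     "phonological access", "decision"}))

print(contrast(w_sd, n_pd))  # only "semantic access" survives subtraction
```

Under this toy scheme, a well-matched control subtracts away everything but the process of interest; a confounded contrast would leave extra attributes (e.g., a task difficulty attribute) in the difference set.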
The neural systems specialized for storage and retrieval of semantic knowledge are widespread and occupy a large proportion of the cortex in the human brain. The areas implicated in these processes can be grouped into 3 broad categories: posterior heteromodal association cortex (AG, MTG, and fusiform gyrus), specific subregions of heteromodal prefrontal cortex (dorsal, ventromedial, and inferior prefrontal cortex), and medial paralimbic regions with strong connections to the hippocampal formation (parahippocampus and posterior cingulate gyrus). The widespread involvement of heteromodal cortex is notable in that these regions are greatly expanded in the human relative to the nonhuman primate brain. This evolutionary expansion of neural systems supporting conceptual processing probably accounts for uniquely human capacities to use language productively, plan the future, solve problems, and create cultural and technological artifacts, all of which depend on the fluid and efficient retrieval and manipulation of conceptual knowledge.
National Institutes of Health (grants R01 NS33576, R01 NS35929, R01 DC006287, R03 DC008416, T32 MH019992, and F32 HD056767).
The authors thank B. Douglas Ward for invaluable technical assistance. Conflict of Interest: None declared.
No. | 1st Author | N | Method | Type | Contrast | Activation in brain regions of interest
Note: The Contrast column provides a code indicating the contrast used, with tasks indicated in parentheses after stimuli. The terms Action, Animal, Artifact, Association, Auditory, Category, Causation, Color, Emotion, Figurative, Function, General, Literal, Location, Motion, Natural, Number, Perceptual, Self, Size, Specific, Taste, Tool, Verbal, and Visual refer to the object category or knowledge domain emphasized by the stimulus or task. “Other” indicates a miscellaneous combination of categories or domains. Abbreviations used for stimuli: Abst = abstract words, Conc = concrete words, Fam = familiar concepts, Unfam = unfamiliar concepts, N = nonwords (i.e., pseudowords), W = words, HighM = sentences or words with high meaningfulness, LowM = sentences or words with low meaningfulness, HighA = highly associated word pairs, LowA = weakly associated word pairs, Rel = semantically related words, Unrel = semantically unrelated words, Sent = sentences, Story = connected narrative or discourse, UnrelSent = unrelated sentences, SemSP = semantically distinct sentence pairs, SynSP = syntactically distinct sentence pairs, ThM = theory-of-mind narrative, Verb1 = single-argument verb, Verb2 = 2-argument verb. Abbreviations for tasks: GJ = grammaticality judgment, LD = lexical decision, Match = identity matching, Mem = memorization, OD = orthographic decision, PD = phonological decision, cRead = covert reading, Read = oral reading, Rep = oral repetition, Rec = recognition, SD = semantic decision, PWG = phonological word generation, SWG = semantic word generation, cSWG = covert semantic word generation, VTarg = detect a target word or phrase, NVTarg = detect a target nonverbal feature. Examples: W(SD)–N(PD) indicates a semantic decision task on words contrasted with a phonological decision on pseudowords; Conc–Abst(LD) indicates a contrast between concrete and abstract words presented during a lexical decision task. 
Specific contrasts were almost always performed in both directions (e.g., Animal–Tool and Tool–Animal); for simplicity such complementary contrasts are collapsed into a single row in the table.
Other abbreviations: N = number of subjects in study; Gen = general contrast; Spec = specific contrast; IPL = inferior parietal lobe (angular and supramarginal gyri); MTG = middle temporal gyrus; FG/PH = fusiform and parahippocampal gyri; DMPFC = dorsomedial prefrontal cortex; IFG = inferior frontal gyrus; VMPFC = ventromedial prefrontal cortex; PC = posterior cingulate gyrus.
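The contrast codes defined in the notes above follow a regular Stimulus(Task) pattern. As an illustration only (the parser and its field handling are our own, not part of the article's coding scheme), the codes can be unpacked mechanically:

```python
import re

# One condition code: a stimulus abbreviation with an optional task in
# parentheses, e.g. "W(SD)", or bare "Conc" as in "Conc-Abst(LD)".
COND = re.compile(r"(?P<stim>[A-Za-z0-9]+)(?:\((?P<task>[A-Za-z]+)\))?")

def parse_contrast(code):
    """Split a contrast code like 'W(SD)-N(PD)' into two (stimulus, task)
    tuples. A task written only once, as in 'Conc-Abst(LD)', is assumed
    to apply to both conditions."""
    left, right = re.split(r"[\u2013-]", code, maxsplit=1)  # en dash or hyphen
    conds = [COND.fullmatch(part.strip()).groupdict() for part in (left, right)]
    shared = next((c["task"] for c in conds if c["task"]), None)
    for c in conds:
        c["task"] = c["task"] or shared
    return [(c["stim"], c["task"]) for c in conds]

print(parse_contrast("W(SD)–N(PD)"))    # [('W', 'SD'), ('N', 'PD')]
print(parse_contrast("Conc–Abst(LD)"))  # [('Conc', 'LD'), ('Abst', 'LD')]
```

The first example is the semantic decision on words versus phonological decision on pseudowords contrast; the second is concrete versus abstract words within a single lexical decision task, matching the worked examples in the table notes.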