Brain Lang. Author manuscript; available in PMC 2017 August 1. PMCID: PMC5155332; NIHMSID: NIHMS794544

Does the Sound of a Barking Dog Activate its Corresponding Visual Form? An fMRI Investigation of Modality-Specific Semantic Access

Abstract

Much remains to be learned about the neural architecture underlying word meaning. Fully distributed models of semantic memory predict that the sound of a barking dog will conjointly engage a network of distributed sensorimotor spokes. An alternative framework holds that modality-specific features additionally converge within transmodal hubs. Participants underwent functional MRI while covertly naming familiar objects versus newly learned novel objects from only one of their constituent semantic features (visual form, characteristic sound, or point-light motion representation). Relative to the novel object baseline, familiar concepts elicited greater activation within association regions specific to that presentation modality. Furthermore, visual form elicited activation within high-level auditory association cortex. Conversely, environmental sounds elicited activation in regions proximal to visual association cortex. Both conditions commonly engaged a putative hub region within lateral anterior temporal cortex. These results support hybrid semantic models in which local hubs and distributed spokes are dually engaged in service of semantic memory.

Keywords: Semantic cognition, Anterior Temporal Lobe, Semantic Access, Concept Acquisition, Naming

1. Introduction

The binding problem reflects a longstanding question within the cognitive neurosciences regarding the manner(s) in which the human brain amalgamates features from disparate sensorimotor modalities into coherent conceptual representations. Object knowledge comprises complex conjunctions of features from the visual, auditory, olfactory, haptic and motoric modalities, in addition to a range of other affective, episodic and lexical associations. Much of the early-stage perceptual processing associated with these modalities occurs within distinct and remote regions of the brain. Accordingly, there exists a rich and long-standing debate regarding the mechanistic neural architecture that binds multimodal feature information into coherent wholes.

Perhaps the most historically dominant neurocognitive hypothesis of conceptualization is that it reflects auto-associated and conjoint activation of information stored in the multiple remote modality-specific regions of the brain (Wernicke, 1874; Hauk, Johnsrude, & Pulvermüller, 2004; Eggert, 1977; Gage & Hickok, 2005; Pulvermüller, 2001). For example, hearing the distinct ‘meow’ of a cat will activate auditory association cortices in addition to a range of associated features in other modalities (e.g., form, color) (See Figure 1, Panel 1). There is considerable evidence for these distributed-only accounts in the form of functional neuroimaging studies that have demonstrated that different types of concept (e.g., action concepts vs. object concepts) activate modality-specific association cortices (motor cortex vs. visual cortex) linked to their most crucial sources of defining information (motor sequences to perform them vs. the way they look) (Hoenig, Sim, Bochev, Herrnberger, & Kiefer, 2008; Kellenbach, Brett, & Patterson, 2001; Simmons, Martin, & Barsalou, 2005; Trumpp, Kliese, Hoenig, Haarmeier, & Kiefer, 2012).

Figure 1
Illustration of two competing models of cortical networks underpinning semantic memory. See main text for further description.

Alternative perspectives argue for the necessity of transmodal brain regions: tertiary cortical areas that transcend particular modalities. It has been hypothesized that one or more of these regions fulfill a central organizing principle by which multiple modalities can be drawn together for the process of conceptualization (see Figure 1, Panel 2). In particular, such a region constitutes a convergence zone that is massively and reciprocally linked to the distributed network of sensorimotor regions and thus uniquely suited as a nexus point for the coordination of polymodal feature processing (Damasio, 1989). It has been further proposed that such ‘hubs’ play a necessary role in the computations required for coherent conceptual representation (Lambon Ralph, Sage, Jones, & Mayberry, 2010; Rogers & McClelland, 2004).

Patterns of behavioral performance among patients with relatively focal versus diffuse neuropathologies present a compelling test of these competing hypotheses. A theoretical model with no center (i.e., distributed only) predicts that only catastrophic, diffuse, bilateral brain damage is sufficient to cause multi-modal semantic impairment. In contrast, the hub perspective holds that profound semantic memory disorders can result from relatively circumscribed damage focused upon regions of transmodal representational cortex. The case for the existence of a transmodal hub for conceptual representation has historically relied heavily on studies of patients with semantic dementia. This clinical population exhibits progressive, yet relatively circumscribed, bilateral anterior temporal atrophy coupled with a selective dissolution of conceptual knowledge that transcends task and modality (Bozeat, Lambon Ralph, Patterson, Garrard, & Hodges, 2000; Coccia, Bartolini, Luzzi, Provinciali, & Lambon Ralph, 2004; Goll et al., 2010; Piwnica-Worms, Omar, Hailstone, & Warren, 2010). Decades’ worth of detailed neuropsychological investigations of semantic dementia provide a compelling foundation of support for the hypothesis that the bilateral anterior temporal lobes play key roles in computing transmodal conceptual representations (Lambon Ralph, 2014; Patterson, Nestor, & Rogers, 2007). These assertions, primarily informed by patient-based dissociations, have been bolstered by a growing body of convergent evidence from functional imaging and neurostimulation studies (Binney, Embleton, Jefferies, Parker, & Lambon Ralph, 2010; Halgren et al., 2006; Marinkovic et al., 2003; Mion et al., 2010; Pobric, Jefferies, & Lambon Ralph, 2007; Shimotake et al., 2014; Vandenberghe, Price, Wise, Josephs, & Frackowiak, 1996; Visser & Lambon Ralph, 2011).

Evidence for both distributed and local (hub) components of the neural architecture subserving semantic cognition is reconciled within a class of hybrid, pluralistic theories, the most influential to date being the Hub and Spoke model proposed by Patterson, Lambon Ralph, Rogers and colleagues (Hoffman, Jones, & Lambon Ralph, 2012; Lambon Ralph & Patterson, 2008; Rogers & McClelland, 2004). The architecture and functioning of the Hub and Spoke model are made explicit by the connectionist computational implementation of Rogers et al. (Rogers et al., 2004; see also Patterson et al., 2007), wherein modality-specific spokes are interfaced via a central transmodal hub. According to Lambon Ralph (2014) and colleagues, the spokes are the substrate of invariant representations important for the recognition of perceptual objects within their respective modality and the coding of similarities between different perceptual objects. The surface similarities computed by the spokes are necessary but insufficient for the formation of coherent and generalizable concepts, providing only a fragmentary guide to meaning. Herein lies the importance of a hub. The additional representational layer afforded by the ATL hub provides a tertiary level of abstraction allowing for distillation of the highly complex, non-linear transmodal relationships between the multi-modal features that comprise concepts (Lambon Ralph et al., 2010; Reilly, Peelle, Garcia, & Crutch, 2016). Importantly, this does not imply that conceptualization can be achieved solely by activation of the representations subserved by the hub. Instead, in the Rogers et al. model, the hub and spokes are bi-directionally connected and complete conceptualization arises from the conjoint action of both the transmodal hub and each of the modality-specific sources of information (Pobric, Jefferies, & Lambon Ralph, 2010). This approach, therefore, predicts that conceptual processing will be reflected in activation of both transmodal representational cortex (in particular, the ATL) and the distributed network of modality-specific association regions, and that this full network will be activated regardless of the input modality through which the concept is probed.

Neuroimaging techniques such as fMRI are useful tools to explore this hypothesis, given that they enable visualization of activation across the whole brain while individuals perform semantic tasks. Several prior fMRI and PET studies have compared activation associated with conceptual processing probed in different modalities (Taylor, Moss, Stamatakis, & Tyler, 2006; Thierry & Price, 2006; Tranel, Grabowski, Lyon, & Damasio, 2005) or probed via verbal versus non-verbal (picture) representational formats (Jouen et al., 2014; Thierry & Price, 2006; Vandenberghe et al., 1996; Visser & Lambon Ralph, 2011). However, the prevailing emphasis in such studies has been upon identifying regions that are commonly activated across modalities, in the hope of identifying putative transmodal ‘hub’ regions (Binder, Desai, Graves, & Conant, 2009). Less attention has been paid to the contribution of spoke regions. Indeed, there exists no clear operationalization of what constitutes a spoke region within the brain. As far as we are aware, there have been no explicit predictions under the hub and spoke framework beyond the notion that spokes are higher-order modality-specific association regions that lie at the apex of each unimodal processing stream.
Some predictions could be derived from a parallel set of functional imaging studies motivated by the distributed-only perspective that, as discussed above, holds that conceptual processing recruits the same modality-specific regions involved in perception and action. Such studies have identified lateral and ventral temporo-occipital regions that show responses selective to the processing of visual features of objects and object nouns, such as visual form, color and motion, and have shown that certain sub-areas of these brain regions preferentially respond to certain semantic categories of objects such as faces, animals and tools (Chao, Haxby, & Martin, 1999; Chao, Weisberg, & Martin, 2002; Kanwisher & Yovel, 2006; Wheatley, Weisberg, Beauchamp, & Martin, 2005). Other studies have found similarly selective profiles for regions of the lateral temporal cortex for auditory features (e.g., environmental sounds) (Kiefer, Sim, Herrnberger, Grothe, & Hoenig, 2008), the ventral premotor cortex for object manipulability (Chao & Martin, 2000), and the orbito-frontal cortex for olfactory and gustatory features (Goldberg, Perfetti, & Schneider, 2006). There is also complementary evidence for a contribution of distributed sensorimotor association regions to concepts stemming from neuropsychological investigations and electrophysiological and neurostimulation studies (Reilly et al., 2011; Kemmerer, Castillo, Talavage, Patterson, & Wiley, 2008; Pulvermüller, 2013; Pulvermüller, Shtyrov, & Ilmoniemi, 2005; Shtyrov & Pulvermüller, 2007; Pobric et al., 2010). In the present study, we sought to investigate whether some of these potential candidate ‘spoke’ regions (see Methods) would show conjoint activation with the ATL hub region during a task involving semantic processing, and whether the differential roles of the hub and spoke regions can be observed by virtue of their response profiles. In particular, spoke regions should show an effect of modality in line with their unimodal function, but we were specifically interested in whether they exhibit a differential response according to a semantic manipulation, which would indicate that they are involved in conceptual processing. Moreover, in line with the connectionist implementation of the hub and spoke model, we hypothesized that the spoke regions would show a semantic effect on their activation regardless of the modality of stimulus presentation. Regarding the ATL hub, we hypothesized an effect of the semantic manipulation but no effect of modality (i.e., all representational modalities ultimately converge upon an amodal semantic representation), in line with its purported transmodal function.

To the best of our knowledge, there have been no prior functional imaging studies that have attempted to simultaneously test hypotheses regarding interactivity between hub and spoke regions in this manner. This could reflect, at least in part, the considerable methodological challenge of isolating activation specific to semantic processes from that associated with sensorimotor perceptual processes per se. Conventional subtraction analyses in fMRI typically contrast activation associated with semantic tasks (unimodal or crossmodal) with that associated with non-semantic perceptual judgments (unimodal or crossmodal) in order to subtract out low-level perceptual processes or cross-modal integration per se (i.e., at a pre-semantic stage) and to control for domain-general functions such as decision-making processes. The baseline control tasks often use stimuli, such as false fonts or noise-vocoded speech, that have been distorted or scrambled to remove their ‘meaningfulness’ but retain, at least partially, perceptual complexity and the decision-making element of the task. However, the complexity of stimuli and the task difficulty often remain variable between the semantic (experimental) and control (low-level perceptual baseline) conditions, and thus semantic processes continue to be confounded with other processes in the interpretation of activations. Moreover, control conditions for different modalities may vary in their effectiveness in controlling for low-level processes and, as such, comparisons between activations in one modality and another are difficult to interpret. Alternative approaches include parametric modulation of the ‘meaningfulness’ of stimuli, with which activation related to semantic processing should covary. We undertook a novel approach to addressing the role of ‘spoke’ regions in semantic representation by contrasting activation when subjects named familiar entities compared to newly learned object concepts. Our rationale for choosing newly learned objects as a baseline was that the familiar and novel stimuli would be well matched in terms of perceptual complexity and that task requirements would be identical for the two stimulus sets. Moreover, the familiar/novel exemplar distinction reflects a manipulation of ‘meaningfulness’ that may be used to yield activations associated with the evocation of conceptual representations. The novel entities were fictional animals and artifacts with distinct forms, motions and sounds. Thus, these objects might be considered meaningful novel exemplars of superordinate categories (e.g., animals) but with relatively impoverished representations as basic-level concepts or unique exemplars/entities, as they do not have the multiple episodic, affective or encyclopedic associations (to name a few) that enrich the conceptualization of familiar animals/artifacts. Within connectionist frameworks, we hypothesized that the familiar concepts would have stronger connection weightings and more robust neural representations, and thus would evoke more robust activation across the neural network subserving conceptual representation. Under this assumption, we used this contrast (familiar > novel) to yield semantic activations while controlling for pre- and post-semantic processes/activations.
We examined activations associated with semantic processing when the input was constrained to one of three feature types (an object’s static visual form, point-light motion or characteristic sound), each of which has dissociable cortical regions associated with perceptual processing (see Methods). We used whole-brain and targeted region of interest analyses to reveal differences and commonalities between the networks activated by conceptual processing probed by different stimulus presentation modalities.

2. Method

2.1. Study Design

We examined patterns of activation for a series of familiar object concepts (i.e., bird, dog, scissors, clock) relative to four newly learned object concepts (plufky, korbok, wilzig, blerga). During the learning phase, participants learned the names of these novel concepts by watching animated videos of the target concepts in action, moving in distinctive paths/manners and making unique animal/tool noises while a narrated voiceover announced each item’s name. Three days later, the participants underwent the same exposure condition. One week after initiation of the learning phase, we scanned participants as they repeatedly named novel and familiar items upon exposure to only one of their modality-specific semantic features. We segregated the three input modalities (i.e., visual form, environmental sound, and point-light motion) into separate scanning runs. For example, in the visual form session participants named DOG from a grayscale photograph. In the auditory session, participants named DOG from the sound of a barking dog, and in the motion session participants named DOG from a point-light video depicting the canonical motion of a dog walking. Our analyses were aimed at detecting both activations associated with each input modality and areas common across all the modalities. In contrasting familiar versus novel items, we aimed to isolate deep semantic processing via a cognitive subtraction. Novel objects were matched to familiar objects in terms of perceptual processing (items were approximately comparable in visual complexity; see Figure 2) and phonological encoding demands. Training of novel items was continued until 100% accuracy was achieved, such that differences in activations for novel and familiar items were unlikely to be attributable to difficulty. As such, the familiar-novel contrast was intended to isolate enhanced activation associated with retrieval of conceptual knowledge associated with familiar objects.

Figure 2
Stimuli characteristics

2.2. Participants

Eighteen healthy young adults (14 females) were recruited from the University of Florida. Participants reported no prior neurological injury or learning disability (e.g., dyslexia) and were not on a current regimen of sedative medications. All were right-hand dominant, native English speakers. Mean age was 22.8 years and mean education was 16.1 years. Participants provided informed consent in accord with the Institutional Review Board of the University of Florida and were nominally compensated for taking part in the study.

2.3. Stimuli

The familiar objects included two animals (dog, bird) and two manufactured artifacts (clock, scissors). Items from both the living and non-living domains were included so that our results would not be entangled with category-specific effects. Visual form presentation involved grayscale, cartoon-like photographs of the target items on a black background. Auditory stimuli included clips (e.g., a ticking clock) originally obtained from online sound repositories, which we subsequently trimmed to 2000 ms, matched in sampling rate and normalized on amplitude using the GoldWave waveform editor. The motion stimuli involved 2000 ms videos of white points placed along articulated joints and edges against a black background.

Four novel objects were learned one week before scanning. Participants learned these names by watching 20-s videos of the target objects moving in a distinctive path/manner and making a distinct sound while a male narrator announced each item’s name. Novel sounds were created by first subjecting real sounds to low-pass filtering and warbling effects using the GoldWave waveform editor. The audio clips were then all matched in duration and batch normalized for root mean square amplitude (i.e., perceived as roughly equal volume). Point-light videos analogous to those for the familiar items were created using LightWave animation software. Figure 2 illustrates the stimuli characteristics.
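For illustration, the trimming and amplitude normalization described above could be implemented along the following lines. The study used the GoldWave editor, so this Python sketch is only a hypothetical equivalent; the filenames and target level are placeholders.

```python
# Minimal sketch of clip trimming and RMS normalization (assumes the
# soundfile and numpy packages; filenames and target level hypothetical).
import numpy as np
import soundfile as sf

def trim_and_normalize(in_path, out_path, clip_s=2.0, target_rms=0.1):
    """Trim a clip to clip_s seconds and rescale it so its root mean
    square amplitude equals target_rms (roughly equal perceived volume)."""
    data, rate = sf.read(in_path)
    if data.ndim > 1:                      # mix stereo down to mono
        data = data.mean(axis=1)
    data = data[: int(clip_s * rate)]      # trim to 2000 ms
    rms = np.sqrt(np.mean(data ** 2))
    sf.write(out_path, data * (target_rms / rms), rate)

for name in ["plufky", "korbok", "wilzig", "blerga"]:
    trim_and_normalize(f"{name}_raw.wav", f"{name}_norm.wav")
```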

2.4. Procedure

2.4.1. Learning phase

On Day 1 of the experiment, participants learned the names of four novel concepts by watching videos depicting simultaneous motion, sound, and form. Participants passively viewed each video twice, and the order of video presentation was randomized. Upon completion of this learning sequence, we tested naming ability by showing truncated (2 s) clips of each target item and asking the participant to name the item. For any participant who failed to name one or more items, we repeated presentation of the entire video sequence (4 items × 2 repetitions) and probed naming until that participant achieved 100% accuracy. The learning session was terminated when participants named all items with 100% accuracy. We repeated this session using the exact same parameters three days later.

2.4.2. Imaging phase

On the day of the scanning session, participants were informed that they were to name the items they had learned a week before, in addition to several untrained but familiar items, but that only one of their constituent features would be presented in a given scanning run. Furthermore, they were instructed that overt naming was required on only a subset of trials and that this would be cued at random during the experiment. Before entering the scanner, we conducted a brief (5 minute) familiarization procedure in which participants performed the task with experimenter feedback in each of the test modalities (point light, sound, picture). Again, 100% accuracy for naming familiar and novel items was required before participants proceeded to scanning.

Participants were instructed to overtly name target stimuli as quickly and accurately as possible from the onset of the cue (a tilde presented in isolation following stimulus presentation). Probes were pseudorandomly interspersed over the duration of a scanning run, with 24 of 64 trials requiring an overt speech response. Participants therefore did not know on any given trial whether an overt response would be required and were thus encouraged to identify the object on every stimulus presentation (covert naming).

Participants completed three functional scanning runs (8 minutes each) spaced over one hour, with approximately 2–4 minutes of inactivity between each. Each functional scanning run contained features presented in only one of the modalities (i.e., visual form, sound, or point-light motion). Each object was presented 8 times for a duration of approximately 2000 ms, for a total of 64 pseudo-randomized stimulus presentations per scanning run. Each object was cued for naming 3 times per scanning run. The interval between stimulus presentation and onset of the cue was 2 s. An interstimulus interval of either 2 s or 4 s was pseudorandomly assigned across stimuli. Figure 3 illustrates the trial structure and timing parameters; a sketch of the trial-list generation follows the figure.

Figure 3
Example fMRI trial procedure
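As a concrete illustration of the trial structure just described, the following minimal Python sketch generates one run's pseudorandomized trial list (64 presentations of 8 objects, 24 probed trials, 2 s or 4 s interstimulus intervals). The exact randomization constraints used in the study are not specified, so this is an assumption-laden approximation.

```python
# Sketch of one run's trial list: 8 objects x 8 repetitions = 64 trials,
# each object probed on 3 of its 8 presentations (24 overt-naming trials),
# with a 2 s or 4 s ISI assigned pseudorandomly. Constraint details are
# assumptions; the paper does not specify the randomization scheme.
import random

objects = ["bird", "dog", "scissors", "clock",
           "plufky", "korbok", "wilzig", "blerga"]

trials = [{"item": obj, "probe": rep < 3}       # 3 probed repetitions each
          for obj in objects for rep in range(8)]
random.shuffle(trials)

for trial in trials:
    trial["isi_s"] = random.choice([2.0, 4.0])  # interstimulus interval
```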

We presented stimuli using a laptop computer running E-Prime 2.0 Professional software. We synchronized scanning with stimulus presentation by timelocking the offset of each trial to the radiofrequency (RF) pulse using a TTL synchronization box (Nordic NeuroScan, Inc). Trial durations were a maximum of 2000 ms (the TR duration) but sometimes slightly shorter due to receipt of the pulse signal. This procedure ensured that no cumulative timing errors were introduced into the experiment as a result of stimulus buffering.

Participants viewed all picture and motion stimuli on a monitor situated behind the scanner bore via a mirror slotted onto the head coil. Participants heard auditory stimuli over MR compatible pneumatic headphones (ScanSound Inc).

2.5. Imaging acquisition

All imaging was performed on a Philips Achieva 3 Tesla scanner with a 32-channel SENSE head coil. We acquired a high-resolution T1-weighted anatomical image using a magnetization-prepared rapid acquisition with gradient echo (MPRAGE) sequence with the following parameters: in-plane resolution = mm2; Field of View (FOV) = 240 mm2; slice thickness = 1 mm; number of slices = 170; flip angle = 8°; Repetition Time (TR) = 7.0 ms; Echo Time (TE) = 3.2 ms. The gradient-echo echo-planar fMRI sequences were performed with the following parameters: in-plane resolution = 3 mm × 3 mm; FOV = 240 mm2; slice thickness = 4 mm; number of slices = 38; flip angle = 90°; TR = 2000 ms; TE = 30 ms; slice gap = none; interleaved slice acquisition; number of dynamic scans = 242. We restricted the FOV for temporal and inferior parietal lobe coverage, excluding the highest convexities of the dorsal cerebrum.

2.6. fMRI Data Analysis

Analysis was carried out using statistical parametric mapping (SPM8) software (Wellcome Trust Centre for Neuroimaging, UK). Within each subject, the functional EPI volumes from across all three scanning sessions were re-aligned using a 6-parameter rigid-body transform estimated using a least-squares approach and a two-pass procedure. The aligned images and a mean volume were then re-sliced using fourth-degree B-spline interpolation. This procedure corrected for differences in subject positioning between sessions and minor motion artifacts within a session. The T1-weighted anatomical image was registered to the mean functional volume using a six-parameter rigid-body transform and subsequently subjected to SPM8’s unified tissue segmentation and normalization procedure in order to estimate a spatial transformation from subject space to a standard stereotactic space, according to the Montreal Neurological Institute (MNI) protocol, for inter-subject averaging. This transform was applied to each of the subject’s functional volumes, resampling to a 3 × 3 × 3 mm voxel size using trilinear interpolation and preserving the intensities of the original images (i.e., no “modulation”). These volumes were then smoothed with an 8 mm full-width half-maximum (FWHM) Gaussian filter.
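Although the preprocessing was performed in SPM8, the final spatial steps (resampling to 3 mm isotropic voxels with trilinear interpolation and 8 mm FWHM Gaussian smoothing) can be illustrated with a rough nilearn-based equivalent; the input filename here is a hypothetical placeholder.

```python
# Illustrative Python analogue of the last preprocessing steps only
# (the study used SPM8); the filename is a hypothetical placeholder.
import numpy as np
from nilearn.image import resample_img, smooth_img

func_img = "sub01_mni_bold.nii.gz"               # normalized 4D series

resampled = resample_img(func_img,
                         target_affine=np.diag([3.0, 3.0, 3.0]),
                         interpolation="linear")  # trilinear, as in SPM
smoothed = smooth_img(resampled, fwhm=8)          # 8 mm FWHM Gaussian kernel
smoothed.to_filename("sub01_mni_bold_smoothed.nii.gz")
```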

Data were analyzed using a general linear model (GLM) approach. At the individual subject level, each scanning run (and thus presentation modality) was modeled within a separate fixed-effects analysis. Our rationale for analyzing each run/modality in a separate model was that some participants did not complete all three runs/modalities due to technical difficulties with audio presentation. Within the model for a given scanning run, the presentations of familiar items (e.g., Dog) and newly learned items (e.g., Plufky) were modeled as two separate boxcar functions. Instances where subjects made an overt verbal response (probed trials only) were also modeled with a two-second boxcar function in order to capture speech-related changes in BOLD as an independent nuisance covariate. Rest periods were modeled implicitly. A set of six motion parameters for each scan session, which were estimated during the realignment step, was also included as nuisance regressors. Regressors were subsequently convolved with a canonical hemodynamic response function. Data were further treated with a high-pass filter with a cutoff of 128 s. Planned contrasts were calculated to assess differences in activation between naming familiar items and newly learned items [Familiar – Novel]. These contrast images were entered into subsequent multi-subject mixed-effects analyses directed at our a priori hypotheses, as follows.
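A first-level design of this kind (condition boxcars convolved with a canonical HRF, an overt-speech nuisance regressor, six motion regressors, and a 128 s high-pass filter) might be assembled in Python as sketched below. The study itself used SPM8; the onsets and motion-parameter file here are hypothetical placeholders.

```python
# Sketch of an analogous first-level design matrix (nilearn); the actual
# analysis was run in SPM8. Onsets and the motion file are placeholders.
import numpy as np
import pandas as pd
from nilearn.glm.first_level import make_first_level_design_matrix

TR, n_scans = 2.0, 242
frame_times = np.arange(n_scans) * TR

events = pd.DataFrame({                      # three 2-s boxcar conditions
    "trial_type": ["familiar", "novel", "overt_speech"],
    "onset":      [10.0, 14.0, 12.0],        # hypothetical onsets (s)
    "duration":   [2.0, 2.0, 2.0],
})
motion = np.loadtxt("rp_sub01_run1.txt")     # six realignment parameters

design = make_first_level_design_matrix(
    frame_times, events,
    hrf_model="spm",                         # canonical HRF, as in SPM
    drift_model="cosine",
    high_pass=1.0 / 128,                     # 128 s high-pass cutoff
    add_regs=motion,
    add_reg_names=[f"motion_{i}" for i in range(6)],
)
```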

2.7. Whole-brain analyses of modality-specific effects

First, we examined activation for semantic access of familiar entities relative to novel entities in separate whole-brain analyses for each of the stimulus presentation modalities (one-sample t-tests). This allowed for an initial assessment of brain-wide similarities and differences in the topography of activations recruited by semantic processing of stimuli in the different modalities. Statistical parametric maps were thresholded with an uncorrected voxel-height threshold of p < 0.005 and a minimum cluster size of 30 contiguous voxels. Inferences were made on activations surviving this threshold if they occurred in a priori predicted regions. These predicted regions included occipito-temporal visual association cortex and superior temporal auditory association cortex, as well as regions implicated in the semantic neuroscience literature, namely the anterior temporal lobe, the posterolateral temporal lobe, the frontal operculum and ventral parietal cortex. Activations outside of these predicted regions were assessed using a more stringent threshold corrected for the multiple comparisons problem using the topological false discovery rate as implemented in SPM8 (p < 0.05). All reported coordinates are in Montreal Neurological Institute (MNI) space.
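The voxel-height-plus-cluster-extent thresholding described above (as applied in SPM8) can be expressed compactly with nilearn's thresholding utility; the t-map filename below is a hypothetical placeholder.

```python
# Sketch of an uncorrected voxel-height (p < 0.005) threshold combined
# with a 30-voxel cluster-extent criterion; filename is hypothetical.
from nilearn.glm import threshold_stats_img

thresholded_map, cutoff = threshold_stats_img(
    "familiar_gt_novel_tmap.nii.gz",
    alpha=0.005,             # uncorrected voxel-height threshold
    height_control="fpr",
    cluster_threshold=30,    # minimum cluster extent (contiguous voxels)
    two_sided=False,
)
```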

2.8. Whole-brain analyses of common activation to familiar > novel entities across different modalities

Our second planned analysis sought to identify regions activated by semantic processing irrespective of the feature type (form, sound or motion) that was presented, and that thus fulfill the hypothesized characteristics of high-order regions involved in transmodal semantic processing, such as the ATL hub. In particular, we planned a conjunction analysis (within a one-way Analysis of Variance (ANOVA)) targeting voxels that were significantly activated independently in both conditions/modalities (Nichols et al., 2005). The threshold for this analysis was p < 0.01, uncorrected, for each contrast; because voxels must exceed this threshold in both contrasts independently, the effective joint threshold is considerably more stringent (nominally p < 0.0001 for independent tests). We also performed post hoc analyses assessing main effects and interactions of modality and familiarity within a factorial ANOVA. This required calculation of contrasts at the first level for [Visual Familiar] only, [Visual Novel] only, [Auditory Familiar] only, and so on. Statistical parametric maps generated by these analyses were assessed following application of a p < 0.001 uncorrected voxel-height threshold and an FDR-corrected cluster-extent threshold at p < 0.05.
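Under the conjunction null of Nichols et al. (2005), a voxel is declared common to both modalities only if it passes threshold in each contrast independently, which is equivalent to thresholding the minimum statistic. A bare-bones numpy illustration follows, with hypothetical filenames and an approximate z-based cutoff rather than the exact t critical value:

```python
# Minimum-statistic conjunction across the two familiar > novel maps.
# Filenames are hypothetical; 2.33 approximates one-tailed p < 0.01 for
# a z-statistic (the exact t cutoff depends on degrees of freedom).
import numpy as np
import nibabel as nib

t_visual = nib.load("visual_fam_gt_nov_tmap.nii").get_fdata()
t_sound  = nib.load("sound_fam_gt_nov_tmap.nii").get_fdata()

t_min = np.minimum(t_visual, t_sound)  # voxelwise minimum statistic
conjunction = t_min > 2.33             # significant in BOTH contrasts
```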

2.9. Targeted regions of interest (ROI) analyses

In addition to the more conservative whole-brain analyses, we applied a planned, targeted region of interest (ROI) analysis using the MARSBAR toolbox (Brett, Anton, Valabregue, & Poline, 2002). This method is an alternative approach to overcoming the multiple comparisons problem and is ideal for regional hypothesis testing. We reasoned that differences in activation to familiar relative to novel concepts may be subtle, and thus less conservative analyses may be required to detect them. Moreover, plotting the parameter estimates extracted from an ROI can provide a more intuitive means of interpreting differential effects compared to statistical maps. MARSBAR calculates a single summary value to represent activation across all voxels within a given ROI (in the present case, the median of the parameter estimates). Details on ROI selection and definition are provided below. We extracted an estimate for the [Familiar – Novel] contrast for each ROI in each modality and performed group analyses using statistical software outside of SPM. In the group-level statistics (Figure 6), a positive value indicates greater activation for familiar items, a negative value indicates greater activation for novel items, and a null or non-significant value indicates no differential activation between familiar and novel items.

Figure 6
Contrast estimates (and standard measurement error) for familiar minus novel concepts in the auditory and visual presentation modality are displayed for a priori targeted regions of interest (ROIs) in each hemisphere. Significant positive values indicate ...

2.10. Region of Interest Definition

Candidate ‘spoke’ regions of interest (ROIs) were defined on the basis of prior functional imaging data from studies investigating higher-order and/or semantic processing of sensorimotor features within unimodal association cortices. Given the presentation modalities we investigated, we specifically targeted (i) superior temporal (anterior parabelt) auditory association cortex, which is associated with the processing of higher-order auditory objects, and (ii) ventral occipito-temporal visual association cortex, which is particularly associated with higher-order processing of object form. An ROI for motion-associated regions was omitted from this analysis, as was the motion condition data, because no effect was found at the whole-brain level in the predicted temporal or parietal regions, even at the more liberal threshold (see Results). A third ROI represented the ATL hub region. Each ROI was spherical with a radius of 10 mm (see Figure 6 for a visual depiction).

The precise location of the superior temporal ROI was defined on the basis of work from Warren, Jennings and Griffiths (2005) who identified regions associated with the higher order and abstracted spectral analysis of sounds. We averaged the coordinates of peaks reported by these authors for the contrast they interpreted as relating to an abstraction of spectral shape, which may be relevant to the analysis of auditory sources or objects, such as voices. This average coordinate (57,−22,3) defined the center of mass for the superior temporal ROI. Given the 10mm radius of our ROI, it encompassed many of the individual peak coordinates. We had no prior hypotheses regarding lateralization of activation in this region, and thus included a left hemisphere homologue (−57,−22,3) ROI in the analysis.

It is well established that the ventral occipito-temporal cortex (vOTC) is involved in the processing of higher-level visual properties of objects, such as global shape/form. Moreover, it has been demonstrated that portions of the mid-to-posterior fusiform gyrus preferentially respond to certain categories of objects, such as animals or tools (Chao et al., 1999; Chao & Martin, 2000; Wheatley et al., 2005), implying at least a near-conceptual level of processing, but still within modality-specific cortex. These results are usually discussed as fitting the predictions of the distributed-only perspective, but they also suggest this region is a good candidate for a visual form ‘spoke’ in the context of the hub-and-spoke model. We used reported coordinates from the fMRI study of Chao et al. (1999) to define our ventral occipito-temporal cortex ROI. Given that we were not interested in distinctions between categories in the present study, we averaged their coordinates for the animal- and tool-selective regions (transformed from Talairach coordinates to MNI coordinates using the tal2icbm function; http://biad02.uthscsa.edu/icbm2tal/tal2icbm_spm.m). We included both left (−34, −56, −18) and right (35, −53, −20) hemisphere vOTC ROIs.

Recent refinements of hypotheses regarding ATL involvement in conceptual knowledge representation have pinpointed the ventral surface in particular as a transmodal substrate (Binney et al., 2010; Mion et al., 2010; Tyler et al., 2013). Conventional gradient-echo EPI imaging protocols, such as that used in the present study, suffer from signal loss and distortion in this region (Devlin et al., 2000; Embleton, Haroon, Morris, Lambon Ralph, & Parker, 2010) and, therefore, reliable BOLD measurements can be difficult to achieve. However, it is possible to obtain good signal within lateral ATL regions that have been equally implicated, bilaterally, on the basis of fMRI and TMS studies, albeit more variably and as a function of task demands or input/output modality (Binder, Desai, Graves, & Conant, 2009; Binney et al., 2010; Hoffman, Binney, & Lambon Ralph, 2015; Lambon Ralph, Pobric, & Jefferies, 2009; Zahn et al., 2007). We calculated the average of the two sets of coordinates reported for the left lateral ATL in Binney et al.’s (2010) study and used this (−54, 6, −28) and its right hemisphere homologue (54, 6, −28) to define the centers of mass for two ATL ROIs for the present study.
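Putting the ROI definitions together, the median contrast estimate within each 10 mm sphere (the summary value MARSBAR computes) can be reproduced in a few lines of Python. The centers below are the MNI coordinates reported above, while the contrast-image filename is a hypothetical placeholder.

```python
# Sketch of the ROI summary: median of the [Familiar - Novel] contrast
# estimates within a 10 mm sphere around each a priori MNI center. The
# study used the MATLAB MARSBAR toolbox; this numpy/nibabel analogue is
# illustrative only, and the contrast filename is hypothetical.
import numpy as np
import nibabel as nib

centers_mni = {
    "L_STG": (-57, -22, 3),    "R_STG": (57, -22, 3),
    "L_vOTC": (-34, -56, -18), "R_vOTC": (35, -53, -20),
    "L_ATL": (-54, 6, -28),    "R_ATL": (54, 6, -28),
}

img = nib.load("con_familiar_minus_novel.nii")
data = img.get_fdata()

# Millimeter coordinates of every voxel, via the image affine.
ijk = np.indices(data.shape).reshape(3, -1).T
xyz = nib.affines.apply_affine(img.affine, ijk)

for name, center in centers_mni.items():
    in_sphere = np.linalg.norm(xyz - np.asarray(center), axis=1) <= 10.0
    print(name, np.median(data.reshape(-1)[in_sphere]))
```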

3. Results

We acquired full datasets for 13 of 18 participants. For the remaining five participants, the auditory run failed due to an amplifier malfunction. Thus, in the analyses to follow we modeled activation via an unbalanced design including the visual form and point-light motion runs for all 18 participants and auditory runs for 13 participants.

3.1. Whole-brain analyses of modality-specific effects

3.1.1. Activation associated with visual form for Familiar relative to Novel Entities [Visfam-Visnov]

Results of the whole-brain analysis for familiar > novel entities in the visual form condition are displayed in Figure 4, Row A and Table 1. A large right-hemisphere cluster included several regions associated with transmodal semantic processing and language processing (albeit more so in the left hemisphere), namely the lateral anterior temporal lobe (peaking in the middle temporal gyrus), the inferior frontal gyrus, the posterolateral temporal lobe (including the MTG, STG and STS) and the ventral parietal lobe. This cluster extended ventromedially to reveal greater activation for familiar entities in the posterior and mid parahippocampal gyrus and in visual association cortex within the bilateral lingual gyri. As such, familiar concepts evoked greater activation than novel entities in the association cortex tied to the modality of presentation, despite the equivalent complexity of the perceptual features of the stimuli.

Figure 4
Whole-brain analyses of familiar minus novel concept processing probed by (a) environmental sound, (b) visual form, and (c) point-light motion. Statistical maps are displayed on a glass brain rendering with an uncorrected voxel-height threshold of p<0.005 ...
Table 1
Whole-brain analyses of greater activation for familiar concepts relative to novel concepts in each presentation modality

The same cluster extended to auditory association cortex in the anterior superior temporal gyrus and the superior temporal sulcus, and also to the posterior planum temporale adjacent to Heschl’s gyrus. Therefore, the increased activation for familiar entities extends beyond the association cortex of the presentation modality and transmodal association cortex to the association cortex of other modalities (in this case, auditory cortex). This large cluster also extended to the right hippocampus, the right insula, the right orbitofrontal cortex and across ventromedial frontal and anterior cingulate cortex bilaterally. A small cluster was also observed in the right thalamus, although it did not survive correction for multiple comparisons.

Four left hemisphere clusters mirrored many of the activations in the right hemisphere cluster. The largest occurred predominantly in the left parahippocampal gyrus and left ventral occipitotemporal visual cortex. Another cluster extended over a large swathe of the dorsal posterolateral temporal cortex, ascending to ventral parietal regions and descending to dorsolateral extrastriate visual cortex. A third cluster was observed in left orbitofrontal cortex, while a fourth cluster encompassed portions of the left planum temporale and left insula, extending toward medial temporal regions. These final two clusters did not survive correction for multiple comparisons.

3.1.2. Activation associated with environmental Sound for Familiar relative to Novel Entities [Soundfam-Soundnov]

The familiar > novel entity contrast in the sound condition (Figure 4, Row B and Table 1) revealed a largely symmetrical distribution of activation across the hemispheres. Large clusters were observed bilaterally at the occipito-temporal-parietal junction, encompassing the supramarginal gyri, the posterior superior temporal gyri/sulci and the posterior middle temporal gyri. In the right hemisphere, the cluster extended rostrally to auditory cortex in the mid superior temporal gyrus and planum temporale. In the left hemisphere, primary and secondary auditory cortex activation was observed in two smaller clusters, one in the posterior superior temporal gyrus and another, more anterior, which stretched across the planum temporale to insular cortex. These clusters did not survive correction for multiple comparisons but nevertheless fell within a priori predicted regions.

In both hemispheres, the largest clusters also extended ventrally from the posterior middle temporal gyrus to extrastriate visual cortex. In the right hemisphere, an additional smaller cluster was observed within lateral occipito-temporal visual association cortex, with an extension into ventral occipito-temporal regions. This cluster did not survive correction for multiple comparisons. In contrast to the visual form modality, parahippocampal cortex was not activated.

Three bilateral clusters were observed in posterior cingulate cortex, the precuneus including retrosplenial cortex (this cluster also encroached on the lingual gyri bilaterally) and medial prefrontal cortex.

3.1.3. Activation associated with point-light Motion for Familiar relative to Novel Entities [Motionfam-Motionnov]

In the point-light motion condition, the familiar minus novel contrast revealed two small clusters of activation (Figure 4, Row C and Table 1). These clusters were located within the ventral central sulcus and ventral postcentral gyrus in the left and right hemisphere, respectively. Neither cluster fell in a priori predicted regions, nor did either survive correction for multiple comparisons. In light of this null result, we excluded the point-light motion condition from subsequent analyses, reasoning that it would likewise produce a null result in a conjunction analysis.

3.2. Whole-brain analyses of common activation to familiar > novel entities across the visual form and sound conditions

The next set of analyses continued with a whole-brain approach but sought to identify brain regions activated by semantic processing irrespective of the feature type (visual form or sound) that was presented. In particular, we planned a conjunction analysis to reveal overlap of regions activated in the familiar-novel contrast in each modality. Before we present those findings, however, we will describe the result of a post-hoc analysis in which we used a factorial ANOVA to examine the main effect of Familiarity, as well as that of Modality and the interaction effect.

The results of an F-test on the main effect of Modality are presented in Table 2 and Figure 5, Row A. Modality-specific activations were observed in primary and association visual cortex, from the occipital poles to medial occipito-temporal cortex bilaterally, and in bilateral auditory cortex, including Heschl’s gyrus, the planum temporale and the mid-to-posterior superior temporal gyrus. The results of a T-test on the positive effect of Familiarity (Familiar > Novel), a manipulation we hypothesized would reveal semantic processing, are presented in Table 3 and Figure 5, Row B. Greater activation for familiar objects was observed within the occipito-temporal-parietal junctions bilaterally, consistent with the results of the Familiar > Novel contrasts in the sound and visual form modalities separately (Figure 4, Rows A and B). In the left hemisphere, the cluster peaked in the posterior middle temporal gyrus where it abuts the extrastriate visual cortex. In the right hemisphere, the cluster peaked in the posterior superior temporal gyrus and extended more dorsally into the ventral parietal lobe than the left hemisphere cluster. The right-lateralized cluster also extended rostrally into auditory cortex in the mid-to-posterior superior temporal gyrus. In the left hemisphere, auditory cortex was revealed in a separate cluster in the planum temporale that also extended to the posterior insula. A right hemisphere cluster was observed in the parahippocampal gyrus, an area associated with high-level visual processing and object recognition. Indeed, some of the regions showing an effect of Familiarity were located just anterior to those showing a main effect of Modality, suggesting that conceptual-level processing occurs downstream in the modality-specific pathways. In the superior temporal gyrus, the effect was observed in the most anterior aspect of the area showing the effect of modality. Familiarity also drove greater activation in lateral occipito-temporal regions adjacent to and including extrastriate regions around the posterior middle temporal gyrus, whereas effects of modality were in early visual cortex and medial occipito-temporal regions.

Figure 5
Whole-brain analysis of common activation to familiar > novel entities across the visual form and sound conditions. (a) F-test on the main effect of Modality, (b) T-test on the positive main effect of familiarity. (c) Conjunction of the independent ...
Table 2
Whole-brain analyses of Main effect of Modality (Sound versus Form only)
Table 3
Whole-brain analyses of positive main effect of Familiarity (Familiar > Novel) across the Form and Sound presentation modalities

Consistent with a role for the region in transmodal semantic processing, there was a cluster in both the left and right anterior temporal lobe, peaking at the middle temporal gyrus, which responded to the familiarity manipulation (but did not reveal a modality effect). When each modality was considered separately, we observed only right ATL activation, and only in the visual form condition. It is possible that the bilateral ATL activation in the present analysis reflects greater statistical power to detect subtle effects owing to collapsing across the conditions. In particular, the absence of ATL activation in the sound condition may reflect lower statistical power than the visual form condition due to the lower number of subjects. Moreover, as observed in the above analyses (Section 3.1), there was also greater activation in the medial prefrontal cortex, the precuneus and the posterior cingulate cortex. While it is possible that the results of this analysis reflect common processing across the presentation modalities, they could be predominantly driven by strong activation in only one of the two conditions (Nichols, Brett, Andersson, Wager, & Poline, 2005). Therefore, a conjunction analysis, which reveals only voxels that were activated independently in both conditions, was necessary. The results of the conjunction analysis are displayed in Figure 5, Row C. It revealed largely the same clusters (albeit much smaller, owing to the much more stringent nature of the analysis), with the exception of the posterior cingulate cortex, which was absent. Although not clearly shown in Figure 5, there was bilateral posterior parahippocampal gyrus activation. The left ATL cluster was greatly reduced in size to two small separate clusters in the anterior MTG. This is likely driven by the lack of supra-threshold activation in the auditory modality, which again may reflect reduced power due to there being fewer subjects than in the visual form modality.

No interactions between Familiarity and Modality were observed in the whole-brain analysis. A strong form of our hypothesis regarding contributions of modality-specific regions to conceptual processing would be that an interaction should be observed, such that these regions respond equally to novel and familiar concepts in the associated input modality, whereas other presentation modalities evoke activation only for familiar concepts that are more robustly represented across ‘hub’ and ‘spoke’ regions. Statistically speaking, we believe this is unlikely to be observed, especially in a conservative whole-brain approach, as the effects of familiarity are in all probability much more subtle than modality effects, and this has the potential to cloud an interaction effect. Additionally, the stimuli in each presentation modality differ not only in modality but in perceptual richness and task difficulty. These confounds are likely to introduce noise that makes direct comparisons between modalities difficult to interpret and interactions unlikely. As such, we focused primarily upon regional effects of familiarity independently within each modality.

3.3. Targeted regions of interest (ROI) analyses

The locations of the ROIs and the results of this analysis are displayed in Figure 6. The results are largely consistent with those of the whole-brain analyses above (Sections 3.1 and 3.2). Both the left and the right ATL ROIs were significantly more active for familiar entities regardless of whether the stimulus was the visual form of, or the sound made by, the objects. In the whole-brain analyses in Section 3.1, only the right ATL was activated, and only in the visual form condition. The greater statistical power afforded by the ROI analyses therefore appeared to greatly improve sensitivity to effects of familiarity.

Likewise, the superior temporal gyrus (STG) ROI, positioned over auditory association cortex, was significantly more active for familiar > novel entities both when probed in the auditory domain and when probed by visual form, albeit the effect was smaller in the latter. This pattern held for both the left and right hemisphere homologues. In contrast, no significant effects of familiarity were observed in the ventral occipito-temporal cortex (vOTC) ROI, although there was a trend in the sound condition in the right hemisphere ROI (p = 0.11). This ROI was situated more posteriorly than the activation observed in the whole-brain analysis and thus may not have been optimally situated to detect such an effect. The whole-brain results implicated the mid parahippocampal gyri and the lateral extrastriate visual cortex, further downstream in the visual processing hierarchy. This is consistent with prior observations that increased cortical activity of visual regions associated with recognition shifts more anteriorly with increased certainty of an object’s identity (Bar et al., 2001).

4. Discussion

In this fMRI study, we set out to explore the roles of unimodal sensorimotor association areas and transmodal temporal association cortices in semantic processing. Distributed-only “embodied” theories propose that conceptualization is underpinned by reactivation of perception-action traces distributed across a network of remote unimodal association areas (Allport, 1985; Pulvermüller, Moseley, Egorova, Shebani, & Boulenger, 2014). An alternative theory suggests that, in addition to the contribution of sensorimotor regions, or “spokes”, there is convergence of multi-modal information and further abstraction/computation within a centralized transmodal representational “hub”. Critically, the ‘hub-and-spoke’ model, as instantiated computationally by Rogers et al. (2004), requires the conjoint, simultaneous action/computations of all the distributed ‘spokes’ and the ‘hub’ for conceptual representation. The hub is purported to be located specifically in the anterior temporal lobes, bilaterally. We tested a hypothesis congruent with the hub and spoke model, specifically that activation during semantic processing tasks should be observed both in a distributed set of unimodal association areas and in the ATL. Moreover, this distributed pattern of activation should be observed irrespective of the input modality, such that association areas not directly engaged by the stimulus should be activated in addition to those directly stimulated (e.g., auditory areas should be engaged by visual semantic stimuli). In order to disentangle activation related to semantic processing from that related to modality-specific perceptual processes, we employed a semantic manipulation of familiarity. Familiar and novel entities were closely matched in terms of perceptual complexity, and therefore we hypothesized that any difference in activation between these groups of entities would reflect ‘depth’ of semantic encoding rather than pre-semantic perceptual differences. Familiar object concepts are likely to have more coherent representations in the sensorimotor spokes, and as such their recognition will evoke greater, more robust activations (see also Bar et al., 2001).

We interpret our findings as broadly supportive of the overarching predictions of the hub-and-spoke model. The bilateral ATL exhibited a greater response to familiar items compared to newly learned entities. The ROI analyses demonstrated that this ATL activation occurred irrespective of the input modality (environmental sound or visual form), a finding that is consistent with previous investigations delineating this cortical region as a transmodal hub (Vandenberghe et al., 1996; Visser, Jefferies, & Lambon Ralph, 2010). Our major finding relates to the contribution of unimodal spoke regions to semantic processing. We demonstrated that activation of auditory and visual association regions during a semantic task does not demand direct sensory stimulation. Auditory association cortex exhibited activation not only when semantic knowledge was probed by auditory stimuli but also, albeit less strongly, when it was probed via visual form. This result suggests that presentation of the visual form of, for example, a dog activates representations of its characteristic environmental sound. Our results regarding the contribution of visual association regions were less straightforward, although we believe they are not at odds with our hypothesis.

Our a priori hypothesis regarding the location of a visual spoke was derived from previous functional imaging studies that demonstrated semantic category-related patterns of activation in the ventral occipito-temporal cortex, including the posterior fusiform gyrus (Aguirre, Zarahn, & D’Esposito, 1998; Chao et al., 1999; Kanwisher, McDermott, & Chun, 1997). This region, situated anterior to early visual cortex (and to the effect of modality in the present study), is associated with high-level visual processing, including object identification. Under the distributed-only framework, these category-related effects have been interpreted as indicative of an exclusive role in representing visual semantic feature information, which renders it a candidate visual spoke region (e.g., Martin et al., 2000). However, we failed to observe any significant differential response to familiar objects compared to newly learned objects within this region of interest that would support a conceptual-level function, particularly one of representing object-specific information. Category-related patterns of activity may, therefore, reflect a function that is limited to global visual feature conjunctions shared across many category exemplars. Tyler and colleagues (2004) have argued that there exists a gradient of specificity along the posterior-anterior axis of the ventral temporal lobe, such that posterior regions encode gross domain distinctions, with progressive conjunctions of features occurring along the anterior axis, giving rise to specific person- and object-knowledge distinctions in the most anterior portions of the ventral visual pathway, including perirhinal cortex and proximal regions such as parahippocampal cortex (see also Bar et al., 2001). Our whole-brain analyses revealed greater activation for familiar objects in the medial temporal cortex bilaterally. This raises the possibility that the visual spoke is located anterior to our ROI. The effect of familiarity in the medial temporal lobe was, however, only observed in the visual form condition. Therefore, our hypothesis that an object’s sound would activate visual association regions, reflecting recapitulation of its form, was not supported in this region. Still, the failure to reject the null hypothesis may stem from the lower number of subjects in the sound condition and a consequent lack of statistical power to detect this effect in this region.

The lateral posterior temporal and ventral parietal cortex abutting the lateral extrastriate visual cortex exhibited an effect of familiarity in both the visual form and sound conditions. The lack of a modality-specific effect implicates some of this region as omni-modal in nature. Indeed, this area has been described as heteromodal cortex important for conceptual representation (Bonner, Peelle, Cook, & Grossman, 2013; Price, Bonner, Peelle, & Grossman, 2015), and the fact that it is located between dorsal auditory regions and ventral visual regions makes it ideally suited for cross-modal integration (Beauchamp, Argall, Bodurka, Duyn, & Martin, 2004; Binney, Parker, & Lambon Ralph, 2012). It has also been implicated as part of a network subserving an executive component of semantic processing (Jefferies, 2013; Noonan, Jefferies, Visser, & Lambon Ralph, 2013). On the other hand, this region has also been demonstrated to exhibit category-specific patterns of activation (Chao et al., 1999) and thus might also be considered a candidate spoke region, possibly even a visual spoke. More specifically, it has been suggested that it may be a site for stored information about object motion (Martin, 2000; Beauchamp et al., 2003). Activation in this region could reflect retrieval of object motion features. However, the present results could feasibly be explained by either one of these accounts, and our study design cannot adjudicate between them.

Analysis of the point-light motion condition failed to reveal any significant effects of familiarity. This brings into question whether the stimuli were suited to detecting neural responses associated with access to object-specific information. Point-light displays have been shown to evoke responses in the lateral posterior temporal cortex (Beauchamp, Lee, Haxby, & Martin, 2003), but those responses were modulated by category-level distinctions (biological vs. non-biological motion) that we did not explicitly contrast in the present study. Moreover, the lateral posterior temporal cortex may be less tuned to the finer-grained, within-category distinctions probed by our familiarity manipulation. It is unclear, however, why there was no clear differential response to familiar versus newly learned concepts anywhere in the whole-brain analysis (with the exception of a small motor cortex cluster). One possibility is that point-light displays are an ecologically unnatural means of probing object recognition and subsequent conceptualization compared to the form and sound conditions (i.e., we have no evolutionary precedent for naming objects depicted by point lights). As such, point-light stimuli may elicit high inter-subject and inter-item variability that consequently produces null results at the group level.

The cognitive subtraction of novel from familiar objects offers a unique approach to isolating semantic activation. We observed such activation in regions considered to be key components of the core network subserving semantic cognition and language comprehension (Jefferies, 2013; Turken & Dronkers, 2011), including the bilateral ATL, the posterior middle temporal gyrus and the temporo-parietal junction, which supports the effectiveness of this approach. This also suggests that the greater activation for familiar relative to novel objects observed in unimodal sensory association regions reflects semantic upregulation of these regions. It remains to be demonstrated, however, whether this reflects processes key to conceptualization or other cognitive or perceptual processes that are differentially engaged by familiar objects/concepts but not directly associated with conceptual processing (see also Mahon, 2014; Reilly et al., 2014). Limitations in the temporal resolution of fMRI preclude addressing these contingencies directly in the present study. Pending future investigations that can, some alternatives are worthy of consideration. Rather than reflecting a process of conceptualization, greater activation of a directly stimulated modality-specific region by familiar compared to newly learned objects could reflect greater semantic feedback to sensorimotor regions acting as a top-down influence on high-level perceptual processing (Hon, Thompson, Sigala, & Duncan, 2009). When the modality-specific region is not directly engaged by the stimulus, the differential activity could be interpreted as reflecting enhanced mental imagery or perceptual simulation (although not necessarily at the level of conscious awareness). This might follow from familiar items having a more coherent representation in the sensorimotor domain and richer associations in episodic memory. Indeed, we also observed greater activation for familiar objects in posterior ventral cingulate cortex, which has been linked to episodic memory processes, including the internal generation and maintenance of mental imagery (Vann, Aggleton, & Maguire, 2009). That is to say, activation of action-perception traces and episodic associations could be a result of feedforward propagation and secondary to conceptual processing, playing an elaborative rather than integral role. Mahon (2014) likened this phenomenon to the domain of speech perception, where a common finding is that, as we hear words, their corresponding orthographic representations are rapidly activated. This form of ‘passive priming’ may occur, but few would argue that activation of orthography is a necessary pre- or co-requisite for effective speech perception.
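To make the logic of this subtraction concrete, the following minimal sketch (using simulated data rather than the study’s actual GLM outputs; all names and values are hypothetical) illustrates a group-level familiar > novel contrast:

```python
import numpy as np
from scipy import stats

# Simulated per-subject GLM beta maps for each condition, with shape
# (n_subjects, n_voxels). These stand in for estimates that a real
# pipeline (e.g., SPM) would produce.
rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 5000
beta_familiar = rng.normal(0.3, 1.0, size=(n_subjects, n_voxels))
beta_novel = rng.normal(0.0, 1.0, size=(n_subjects, n_voxels))

# Cognitive subtraction: each subject's familiar > novel contrast image.
contrast = beta_familiar - beta_novel

# Group-level random-effects test: one-sample t-test against zero
# at each voxel.
t_vals, p_vals = stats.ttest_1samp(contrast, popmean=0.0, axis=0)

# Naive Bonferroni threshold; real analyses typically apply
# cluster-level or family-wise error correction instead.
n_sig = int((p_vals < 0.05 / n_voxels).sum())
print(f"{n_sig} of {n_voxels} voxels survive correction")
```

As the discussion above makes clear, the interpretive burden lies not in computing such a contrast but in deciding what the surviving voxels reflect.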

Contrasting familiar and novel entities to reveal semantic activation may also come with pitfalls. In particular, naming familiar entities is likely to be easier than naming novel entities, and thus contrasting the two has the potential to yield activation patterns not entirely associated with conceptual processing. Such contrasts may, for example, engage regions considered to play a functional role in the Default Mode Network (DMN). The DMN is an anatomically defined network that shows deactivation during goal-directed tasks as compared to rest, with some component regions showing stronger task-induced deactivations as task difficulty increases (Humphreys, Hoffman, Visser, Binney, & Lambon Ralph, 2015; Seghier & Price, 2012). The DMN includes medial prefrontal cortex, the precuneus, the posterior cingulate cortex, the hippocampus and the temporo-parietal junction (including the angular gyrus) (Buckner, Andrews-Hanna, & Schacter, 2008). Many of these DMN regions were activated in the familiar > novel contrasts we reported here. One explanation for this is that during rest, the brain is afforded more opportunity for task-unrelated thought, which may engage processes such as internal speech and mental imagery (Binder et al., 1999). Along the same lines, if the familiar items are easier to name, then components of the DMN may appear to activate more during these trials than during novel item trials, not because they are more engaged by the familiar items but because they are less deactivated by the easier task. Although we trained participants on both familiar and novel items to 100% accuracy, it remains possible, even probable, that naming novel items was more cognitively demanding than naming familiar items. Thus, task difficulty poses an alternative explanation for the large areas of activation we observed in the posterior cingulate cortex, precuneus and medial frontal cortex.
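To make this alternative concrete, the toy computation below (with assumed percent-signal-change values, not measured data) shows how a familiar > novel subtraction can be positive in a DMN region even when both conditions deactivate it relative to rest:

```python
# Assumed % BOLD signal change vs. rest in a hypothetical DMN region;
# harder tasks induce stronger DMN deactivation.
signal_familiar = -0.2  # easier trials: mild task-induced deactivation
signal_novel = -0.8     # harder trials: strong task-induced deactivation

# The subtraction is positive even though both conditions sit below
# rest: apparent "activation" can reflect weaker task engagement on
# familiar trials rather than semantic processing per se.
contrast = signal_familiar - signal_novel
print(f"familiar > novel: {contrast:+.1f}% signal change")
```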

In conclusion, we provide novel empirical support for the hypothesis that both the ATL hub and modality-specific spokes are engaged in conceptual processing. Future work will address some of the issues outlined above and determine whether the observed involvement of modality-specific ‘spoke’ regions is necessary for the performance of semantic tasks. The question of necessity arguably cannot be answered through fMRI or any other single experimental method currently at our disposal. For example, in the current study, we cannot know whether spokes are engaged in parallel or are instead sequentially “turned on” by rapid distillation through a hub. Answers regarding the necessity of spoke activation will be gleaned through converging, and perhaps tandem, evidence from a variety of sources with varying scales of temporal and spatial resolution (e.g., MEG, TMS, neuropsychology, fMRI).

Highlights

  • Semantic activation isolated by contrasting familiar and newly learned concepts
  • Activation in unimodal association cortex was greater for familiar concepts
  • This activation extends to unimodal cortex not directly engaged by the stimulus
  • It also extends to a putative anterolateral temporal lobe hub
  • Demonstrates conjoint involvement of unimodal spokes and a transmodal hub in semantic processing

Acknowledgments

We thank Alison O’Donoughue for her assistance in scheduling and recruiting participants. This study was supported by US Public Health Service Grants R01 DC013063 (JR) and T32 AG020499 (AG).

REFERENCES

  • Aguirre GK, Zarahn E, D’Esposito M. An area within human ventral cortex sensitive to “building” stimuli: evidence and implications. Neuron. 1998;21(2):373–383.
  • Allport DA. Distributed memory, modular subsystems and dysphasia. In: Newman SK, Epstein R, editors. Current perspectives in dysphasia. Edinburgh: Churchill Livingstone; 1985. pp. 207–244.
  • Bar M, Tootell RB, Schacter DL, Greve DN, Fischl B, Mendola JD, Dale AM. Cortical mechanisms specific to explicit visual object recognition. Neuron. 2001;29(2):529–535.
  • Beauchamp MS, Argall BD, Bodurka J, Duyn JH, Martin A. Unraveling multisensory integration: Patchy organization within human STS multisensory cortex. Nature Neuroscience. 2004;7(11):1190–1192.
  • Beauchamp MS, Lee KE, Haxby JV, Martin A. FMRI responses to video and point-light displays of moving humans and manipulable objects. Journal of Cognitive Neuroscience. 2003;15(7):991–1001. http://doi.org/10.1162/089892903770007380
  • Binder JR, Desai RH, Graves WW, Conant LL. Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cerebral Cortex. 2009;19(12):2767–2796.
  • Binder JR, Frost JA, Hammeke TA, Bellgowan PSF, Rao SM, Cox RW. Conceptual processing during the conscious resting state: a functional MRI study. Journal of Cognitive Neuroscience. 1999;11(1):80–93.
  • Binney RJ, Embleton KV, Jefferies E, Parker GJM, Lambon Ralph MA. The ventral and inferolateral aspects of the anterior temporal lobe are crucial in semantic memory: Evidence from a novel direct comparison of distortion-corrected fMRI, rTMS, and semantic dementia. Cerebral Cortex. 2010;20(11):2728–2738. http://doi.org/10.1093/cercor/bhq019
  • Binney RJ, Parker GJM, Lambon Ralph MA. Convergent connectivity and graded specialization in the rostral human temporal lobe as revealed by diffusion-weighted imaging probabilistic tractography. Journal of Cognitive Neuroscience. 2012;24(10):1998–2014.
  • Bonner MF, Peelle JE, Cook PA, Grossman M. Heteromodal conceptual processing in the angular gyrus. Neuroimage. 2013;71:175–186.
  • Bozeat S, Lambon Ralph MA, Patterson K, Garrard P, Hodges JR. Non-verbal semantic impairment in semantic dementia. Neuropsychologia. 2000;38:1207–1215.
  • Brett M, Anton J-L, Valabregue R, Poline J-B. Region of interest analysis using the MarsBar toolbox for SPM 99. Neuroimage. 2002;16(2):S497.
  • Buckner RL, Andrews-Hanna JR, Schacter DL. The brain’s default network. Annals of the New York Academy of Sciences. 2008;1124(1):1–38.
  • Chao LL, Haxby JV, Martin A. Attribute-based neural substrates in temporal cortex for perceiving and knowing about objects. Nature Neuroscience. 1999;2(10):913–919.
  • Chao LL, Martin A. Representation of manipulable man-made objects in the dorsal stream. Neuroimage. 2000;12:478–484.
  • Chao LL, Weisberg J, Martin A. Experience-dependent modulation of category-related cortical activity. Cerebral Cortex. 2002;12(5):545–551.
  • Coccia M, Bartolini M, Luzzi S, Provinciali L, Lambon Ralph MA. Semantic memory is an amodal, dynamic system: Evidence from the interaction of naming and object use in semantic dementia. Cognitive Neuropsychology. 2004;21(5):513–527.
  • Damasio AR. Time-locked multiregional retroactivation: A systems-level proposal for the neural substrates of recall and recognition. Cognition. 1989;33(1–2):25–62.
  • Devlin JT, Russell RP, Davis MH, Price CJ, Wilson J, Moss HE, Tyler LK. Susceptibility-induced loss of signal: comparing PET and fMRI on a semantic task. Neuroimage. 2000;11(6):589–600.
  • Eggert GH. Wernicke’s Works on Aphasia: A Sourcebook and Review. Vol. 1. The Hague, Netherlands: Mouton; 1977.
  • Embleton KV, Haroon HA, Morris DM, Lambon Ralph MA, Parker GJM. Distortion correction for diffusion-weighted MRI tractography and fMRI in the temporal lobes. Human Brain Mapping. 2010;31(10):1570–1587.
  • Gage N, Hickok G. Multiregional cell assemblies, temporal binding and the representation of conceptual knowledge in cortex: A modern theory by a “classical” neurologist, Carl Wernicke. Cortex. 2005;41:823–832.
  • Gallese V, Lakoff G. The brain’s concepts: The role of the sensory-motor system in conceptual knowledge. Cognitive Neuropsychology. 2005;22(3):455–479. http://doi.org/10.1080/02643290442000310
  • Goldberg RF, Perfetti CA, Schneider W. Perceptual knowledge retrieval activates sensory brain regions. The Journal of Neuroscience. 2006;26(18):4917–4921.
  • Goll JC, Crutch SJ, Loo JH, Rohrer JD, Frost C, Bamiou D-E, Warren JD. Non-verbal sound processing in the primary progressive aphasias. Brain. 2010;133(1):272–285.
  • Halgren E, Wang C, Schomer DL, Knake S, Marinkovic K, Wu J, Ulbert I. Processing stages underlying word recognition in the anteroventral temporal lobe. Neuroimage. 2006;30(4):1401–1413.
  • Hauk O, Johnsrude I, Pulvermüller F. Somatotopic representation of action words in human motor and premotor cortex. Neuron. 2004;41(2):301–307.
  • Hoenig K, Sim EJ, Bochev V, Herrnberger B, Kiefer M. Conceptual flexibility in the human brain: dynamic recruitment of semantic maps from visual, motor, and motion-related areas. Journal of Cognitive Neuroscience. 2008;20(10):1799–1814.
  • Hoffman P, Jones RW, Lambon Ralph MA. The degraded concept representation system in semantic dementia: damage to pan-modal hub, then visual spoke. Brain. 2012;135(Pt 12):3770–3780. http://doi.org/10.1093/brain/aws282
  • Hon N, Thompson R, Sigala N, Duncan J. Evidence for long-range feedback in target detection: Detection of semantic targets modulates activity in early visual areas. Neuropsychologia. 2009;47(7):1721–1727.
  • Humphreys GF, Hoffman P, Visser M, Binney RJ, Lambon Ralph MA. Establishing task- and modality-dependent dissociations between the semantic and default mode networks. Proceedings of the National Academy of Sciences USA. 2015;112(25):7857–7862.
  • Jefferies E. The neural basis of semantic cognition: Converging evidence from neuropsychology, neuroimaging and TMS. Cortex. 2013;49(3):611–625. http://doi.org/10.1016/j.cortex.2012.10.008
  • Jouen AL, Ellmore TM, Madden CJ, Pallier C, Dominey PF, Ventre-Dominey J. Beyond the word and image: characteristics of a common meaning system for language and vision revealed by functional and structural imaging. NeuroImage. 2014. http://doi.org/10.1016/j.neuroimage.2014.11.024
  • Kanwisher N, McDermott J, Chun MM. The fusiform face area: a module in human extrastriate cortex specialized for face perception. The Journal of Neuroscience. 1997;17(11):4302–4311.
  • Kanwisher N, Yovel G. The fusiform face area: a cortical region specialized for the perception of faces. Philosophical Transactions of the Royal Society of London B: Biological Sciences. 2006;361(1476):2109–2128.
  • Kellenbach ML, Brett M, Patterson K. Large, colorful, or noisy? Attribute- and modality-specific activations during retrieval of perceptual attribute knowledge. Cognitive, Affective, & Behavioral Neuroscience. 2001;1(3):207–221.
  • Kemmerer D. Are the motor features of verb meanings represented in the precentral motor cortices? Yes, but within the context of a flexible, multilevel architecture for conceptual knowledge. Psychonomic Bulletin & Review. (in press)
  • Kemmerer D, Castillo JG, Talavage T, Patterson S, Wiley C. Neuroanatomical distribution of five semantic components of verbs: Evidence from fMRI. Brain and Language. 2008;107(1):16–43.
  • Kiefer M, Sim E-J, Herrnberger B, Grothe J, Hoenig K. The sound of concepts: four markers for a link between auditory and conceptual brain systems. The Journal of Neuroscience. 2008;28(47):12224–12230.
  • Lambon Ralph MA. Neurocognitive insights on conceptual knowledge and its breakdown. Philosophical Transactions of the Royal Society B. 2014;369:20120392.
  • Lambon Ralph MA, McClelland JL, Patterson K, Galton CJ, Hodges JR. No right to speak? The relationship between object naming and semantic impairment: Neuropsychological evidence and a computational model. Journal of Cognitive Neuroscience. 2001;13(3):341–356.
  • Lambon Ralph MA, Patterson K. Generalization and differentiation in semantic memory: insights from semantic dementia. Annals of the New York Academy of Sciences. 2008;1124:61–76. http://doi.org/10.1196/annals.1440.006
  • Lambon Ralph MA, Pobric G, Jefferies E. Conceptual knowledge is underpinned by the temporal pole bilaterally: convergent evidence from rTMS. Cerebral Cortex. 2009;19(4):832–838. http://doi.org/10.1093/cercor/bhn131
  • Lambon Ralph MA, Sage K, Jones RW, Mayberry EJ. Coherent concepts are computed in the anterior temporal lobes. Proceedings of the National Academy of Sciences USA. 2010;107(6):2717–2722. http://doi.org/10.1073/pnas.0907307107
  • Mahon BZ. What is embodied about cognition? Language, Cognition, and Neuroscience. 2014;30(4):420–429.
  • Marinkovic K, Dhond RP, Dale AM, Glessner M, Carr V, Halgren E. Spatiotemporal dynamics of modality-specific and supramodal word processing. Neuron. 2003;38(3):487–497.
  • Mion M, Patterson K, Acosta-Cabronero J, Pengas G, Izquierdo-Garcia D, Hong YT, Nestor PJ. What the left and right anterior fusiform gyri tell us about semantic memory. Brain. 2010;133(11):3256–3268. http://doi.org/10.1093/brain/awq272
  • Nichols T, Brett M, Andersson J, Wager T, Poline J-B. Valid conjunction inference with the minimum statistic. Neuroimage. 2005;25(3):653–660.
  • Noonan KA, Jefferies E, Visser M, Lambon Ralph MA. Going beyond inferior prefrontal involvement in semantic control: evidence for the additional contribution of dorsal angular gyrus and posterior middle temporal cortex. Journal of Cognitive Neuroscience. 2013;25(11):1824–1850. http://doi.org/10.1162/jocn_a_00442
  • Patterson K, Nestor PJ, Rogers TT. Where do you know what you know? The representation of semantic knowledge in the human brain. Nature Reviews Neuroscience. 2007;8(12):976–987.
  • Piwnica-Worms KE, Omar R, Hailstone JC, Warren JD. Flavour processing in semantic dementia. Cortex. 2010;46(6):761–768.
  • Pobric G, Jefferies E, Lambon Ralph MA. Anterior temporal lobes mediate semantic representation: mimicking semantic dementia by using rTMS in normal participants. Proceedings of the National Academy of Sciences USA. 2007;104(50):20137–20141. http://doi.org/10.1073/pnas.0707383104
  • Pobric G, Jefferies E, Lambon Ralph MA. Category-specific versus category-general semantic impairment induced by transcranial magnetic stimulation. Current Biology. 2010;20(10):964–968.
  • Price AR, Bonner MF, Peelle JE, Grossman M. Converging evidence for the neuroanatomic basis of combinatorial semantics in the angular gyrus. The Journal of Neuroscience. 2015;35(7):3276–3284. http://doi.org/10.1523/JNEUROSCI.3446-14.2015
  • Pulvermüller F. Brain reflections of words and their meaning. Trends in Cognitive Sciences. 2001;5(12):517–524. http://doi.org/10.1016/s1364-6613(00)01803-9
  • Pulvermüller F. Semantic embodiment, disembodiment or misembodiment? In search of meaning in modules and neuron circuits. Brain and Language. 2013;127(1):86–103. http://doi.org/10.1016/j.bandl.2013.05.015
  • Pulvermüller F, Moseley RL, Egorova N, Shebani Z, Boulenger V. Motor cognition-motor semantics: Action perception theory of cognition and communication. Neuropsychologia. 2014;55:71–84. http://doi.org/10.1016/j.neuropsychologia.2013.12.002
  • Pulvermüller F, Shtyrov Y, Ilmoniemi R. Brain signatures of meaning access in action word recognition. Journal of Cognitive Neuroscience. 2005;17(6):884–892.
  • Reilly J, Harnish S, Garcia A, Hung J, Rodriguez AD, Crosson B. Lesion symptom mapping of manipulable object naming in nonfluent aphasia: can a brain be both embodied and disembodied? Cognitive Neuropsychology. 2014;31(4):287–312. http://doi.org/10.1080/02643294.2014.914022
  • Reilly J, Peelle JE, Garcia A, Crutch SJ. Linking somatic and symbolic representation in semantic memory: The Dynamic Multilevel Reactivation framework. Psychonomic Bulletin & Review. (in press)
  • Rogers TT, Lambon Ralph MA, Garrard P, Bozeat S, McClelland JL, Hodges JR, Patterson K. Structure and deterioration of semantic memory: A neuropsychological and computational investigation. Psychological Review. 2004;111(1):205–235.
  • Rogers TT, McClelland JL. Semantic cognition: A parallel distributed processing approach. Cambridge, MA: MIT Press; 2004.
  • Seghier ML, Price CJ. Functional heterogeneity within the default network during semantic processing and speech production. Frontiers in Psychology. 2012;3(10).
  • Shimotake A, Matsumoto R, Ueno T, Kunieda T, Saito S, Hoffman P, Kikuchi T, Fukuyama H, Miyamoto S, Takahashi R, Ikeda A, Lambon Ralph MA. Direct exploration of the role of the ventral anterior temporal lobe in semantic memory: cortical stimulation and local field potential evidence from subdural grid electrodes. Cerebral Cortex. 2015;25(10):3802–3817.
  • Shtyrov Y, Pulvermüller F. Early MEG activation dynamics in the left temporal and inferior frontal cortex reflect semantic context integration. Journal of Cognitive Neuroscience. 2007;19(10):1633–1642.
  • Simmons WK, Martin A, Barsalou LW. Pictures of appetizing foods activate gustatory cortices for taste and reward. Cerebral Cortex. 2005;15(10):1602–1608. http://doi.org/10.1093/cercor/bhi038
  • Taylor KI, Moss HE, Stamatakis EA, Tyler LK. Binding crossmodal object features in perirhinal cortex. Proceedings of the National Academy of Sciences USA. 2006;103(21):8239–8244.
  • Thierry G, Price CJ. Dissociating verbal and nonverbal conceptual processing in the human brain. Journal of Cognitive Neuroscience. 2006;18(6):1018–1028.
  • Tranel D, Grabowski TJ, Lyon J, Damasio H. Naming the same entities from visual or from auditory stimulation engages similar regions of left inferotemporal cortices. Journal of Cognitive Neuroscience. 2005;17(8):1293–1305. http://doi.org/10.1162/0898929055002508
  • Trumpp NM, Kliese D, Hoenig K, Haarmeier T, Kiefer M. Losing the sound of concepts: Damage to auditory association cortex impairs the processing of sound-related concepts. Cortex. 2012.
  • Turken AU, Dronkers NF. The neural architecture of the language comprehension network: converging evidence from lesion and connectivity analyses. Frontiers in Systems Neuroscience. 2011;5:1. http://doi.org/10.3389/fnsys.2011.00001
  • Tyler LK, Chiu S, Zhuang J, Randall B, Devereux BJ, Wright P, Taylor KI. Objects and categories: feature statistics and object processing in the ventral stream. Journal of Cognitive Neuroscience. 2013;25(10):1723–1735.
  • Tyler LK, Stamatakis EA, Bright P, Acres K, Abdallah S, Rodd JM, Moss HE. Processing objects at different levels of specificity. Journal of Cognitive Neuroscience. 2004;16(3):351–362.
  • Vandenberghe R, Price C, Wise R, Josephs O, Frackowiak RS. Functional anatomy of a common semantic system for words and pictures. Nature. 1996;383(6597):254–256.
  • Vann SD, Aggleton JP, Maguire EA. What does the retrosplenial cortex do? Nature Reviews Neuroscience. 2009;10(11):792–802.
  • Visser M, Jefferies E, Lambon Ralph MA. Semantic processing in the anterior temporal lobes: a meta-analysis of the functional neuroimaging literature. Journal of Cognitive Neuroscience. 2010;22(6):1083–1094.
  • Visser M, Lambon Ralph MA. Differential contributions of bilateral ventral anterior temporal lobe and left anterior superior temporal gyrus to semantic processes. Journal of Cognitive Neuroscience. 2011;23(10):3121–3131. http://doi.org/10.1162/jocn_a_00007
  • Warren JD, Jennings AR, Griffiths TD. Analysis of the spectral envelope of sounds by the human brain. Neuroimage. 2005;24(4):1052–1057.
  • Wernicke C. Der aphasische Symptomencomplex: Eine psychologische Studie auf anatomischer Basis. Breslau: Cohn und Weigert; 1874.
  • Wheatley T, Weisberg J, Beauchamp MS, Martin A. Automatic priming of semantically related words reduces activity in the fusiform gyrus. Journal of Cognitive Neuroscience. 2005;17(12):1871–1885.
  • Zahn R, Moll J, Krueger F, Huey ED, Garrido G, Grafman J. Social concepts are represented in the superior anterior temporal cortex. Proceedings of the National Academy of Sciences USA. 2007;104(15):6430–6435. http://doi.org/10.1073/pnas.0607061104