The most striking result of this study was the lack of activation within the mirror neuron system (inferior frontal gyrus, ventral premotor cortex, and inferior parietal lobule) for deaf ASL signers when passively viewing either signs or communicative gestures, compared to a fixation baseline (see ). In contrast, hearing non-signers showed robust activation within the MNS for both sets of stimuli, despite the fact that, for these participants, pantomimes are meaningful and signs are not. These findings replicate previous studies of human action observation (e.g., Buccino et al., 2001; Grèzes, Armony, Rowe, and Passingham, 2003; Villarreal et al., 2008) and extend Corina et al.'s (2007) study, which found that deaf signers did not engage fronto-parietal cortices during passive viewing of human actions with objects. We interpret these results as indicating that extensive experience with meaningful hand movements (sign language and pantomime) substantially reduces or eliminates the role of the MNS in passively viewing actions that are communicative, whether linguistic in form (ASL signs) or not (pantomimes). In addition, such results argue against a universal, automatic resonance of the motor system during action observation (e.g., Rizzolatti and Craighero, 2004). If activation within the MNS occurred automatically in response to observed human body actions, we would expect neural activity in fronto-parietal cortices when signers, as well as non-signers, passively view hand and body movements.
However, it is not the case that deaf signers never engage the MNS, or elements of the MNS, when comprehending communicative gestures and signs, particularly when an active task is added to the paradigm. Many studies report activation within the left inferior frontal gyrus (BA 44/45) and inferior parietal cortex (BA 40) when signers perform a semantic judgment or memory task with sign language (e.g., Bavelier et al., 2008; Neville et al., 1998; MacSweeney et al., 2002; MacSweeney et al., 2006). As noted earlier, MacSweeney et al. (2004) found fronto-parietal activation when signers were asked to look for meaning in Tic Tac gestures. Thus, when additional linguistic or cognitive demands are superimposed upon passive viewing, activation within the MNS is observed for signers. Interestingly, activation in these areas is observed for non-signers even without such demands.
Mounting evidence from functional neuroimaging reveals that practice or repeated exposure to a particular task can produce significant changes in neural representations and functional connectivity (see Clare Kelly and Garavan, 2005, for review). Practice and experience can result in either an increase or a decrease in activation within task-relevant brain areas, or they can cause functional reorganization of neural activity (both increases and decreases across cortical regions). Decreases in the extent or intensity of activation are most commonly attributed to increases in neural efficiency. For example, the overall activation level in cortices that support distributed representations may be reduced because only a minority of neurons fire in response to relevant stimuli, while activity in the majority of other neurons is suppressed (Poldrack, 2000). We hypothesize that deaf signers recognize signs and pantomimes quickly and relatively automatically, and that this ease of processing leads to a substantial reduction in, or complete absence of, neuronal firing within the MNS, in contrast to hearing non-signers, who do not have life-long experience with a conventional sign language.
At first glance, our findings and explanation may appear at odds with the results of Calvo-Merino and colleagues. Calvo-Merino et al. (2006) and Calvo-Merino et al. (2005) reported increased activation within the MNS when expert dancers viewed dance movements that they had been trained to perform, compared to movements on which they had not been trained. However, as in the sign language experiments of Bavelier et al. (2008), Neville et al. (1998), and MacSweeney et al. (2002), the dancers did not passively view dance movements. Rather, they were asked to perform judgment tasks for each video (e.g., "How tiring is each movement?" or "How symmetric is each movement?"). Given the previously mentioned effects of training and experience on neural activity, we suggest that passive viewing of dance movements might reveal reduced activation in the MNS for expert dancers compared to non-dancers. But dancing and manual communication are different phenomena. For deaf signers, manual communication (particularly signing) occurs throughout the day, and sign recognition is immediate and automatic, as evidenced by sign-based Stroop effects (e.g., Vaid and Corina, 1989; Marschark and Shroyer, 1993). For expert dancers, dance movements may be processed by different mechanisms that are asymbolic and more tightly linked to the motor system.
Corina et al. (2007) found evidence for functional reorganization when deaf signers observed non-communicative human actions. These actions were non-communicative because the actor had no intent to convey information when producing self-oriented gestures (e.g., scratching oneself) or interacting with objects (the fMRI analyses combined both action types). In contrast, we found no evidence for functional reorganization when deaf signers observed communicative human actions; that is, no evidence that deaf signers engaged a qualitatively distinct neural system compared to hearing non-signers when they observed pantomimes, although we did observe unique functional connectivity patterns for the deaf group (see and ). Interestingly, when pantomimes were contrasted directly with ASL verbs, activation within right superior temporal cortex was observed only for deaf signers (see ). Saxe et al. (2004) have argued that a region in the right posterior superior temporal sulcus responds specifically to observed intentional actions. The activation peak for observing pantomimes [63, –45, 15] was just lateral to the local maxima in right STS reported by Saxe et al. (2004) [Exp. 1: 54, –42, 9; Exp. 2: 51, –42, 18]. For deaf signers, a critical difference between observing pantomimes and ASL verbs is that pantomimes depict intentional actions themselves, whereas ASL verbs are linguistic labels for actions. For this reason, right posterior superior temporal cortex may have been more engaged for pantomimes than for ASL verbs. We hypothesize that hearing non-signers did not show differential activation in right STS for this contrast because they may have been attempting to work out the intentions of the model when she produced ASL verbs.
Replicating previous results, we found that, for the hearing group, pantomimes engaged more left hemisphere structures than ASL signs and also engaged parietal regions to a greater extent than ASL signs (see ). Most of the pantomimes involved grasping movements (e.g., pretending to hold and manipulate objects like a hammer, a telephone, or a dart), reaching movements of the arm and hand (e.g., pretending to conduct an orchestra, shampoo one's hair, or play the piano), or full body movements (e.g., dancing in place, pretending to jog or swing a golf club). Stronger activation within parietal cortices (BA 40 and BA 7) for pantomimes most likely reflects the recognition and understanding of the model's depiction of picking up, moving, and holding various objects. Several studies have found that the inferior parietal lobule is engaged when observing grasping actions (e.g., Grafton et al., 1996), while the superior parietal lobule is engaged when observing reaching movements of the hand and arm (Filimon et al., 2007), as well as when producing pantomimes (Choi et al., 2001).
For deaf signers, there were no regions that were significantly more engaged for ASL verbs than for pantomimes. Pantomimes do not have stored lexical representations, but they do convey sentence-level concepts with both an agent (the actress) and a patient (the manipulated object). Thus, pantomimes are likely to involve more extensive semantic processing than single lexical verbs, which might have obscured potential differences in regional activation between signs and pantomimes in a direct contrast. We note that the contrast between ASL verbs and fixation baseline revealed activation in left inferior frontal gyrus (BA 45) that was not present for pantomimes compared to fixation (see ). Overall, these findings are consistent with those of MacSweeney et al. (2004), who reported largely overlapping patterns of activation for signed sentences and strings of meaningless Tic Tac gestures, with signed sentences exhibiting stronger activation in left IFG and left posterior STS extending into supramarginal gyrus. It is likely that we observed even less sign-specific activation than MacSweeney et al. (2004) because pantomimes are meaningful and, in this sense, may be more sign-like than Tic Tac gestures.
For pantomimes, the functional connectivity analyses revealed correlated activity between left premotor cortex and left IPL for the hearing group (), indicating robust integration within the MNS when viewing either meaningful pantomimes or meaningless gestures. In addition, functional connections of both the left anterior (premotor) and left posterior (IPL) components of the MNS were strongly left lateralized for the hearing group. For the fusiform seed voxel (a region outside the MNS, activated by both the hearing and deaf groups), activity was coupled with the anterior component of the MNS (ventral premotor cortex) for pantomimes, but only for the hearing group. Again, functional connectivity was strongly left lateralized for hearing non-signers and bilateral for deaf signers ().
Greater right hemisphere connectivity for deaf signers might be an effect of experience with sign language. Some previous studies have found more right hemisphere activation during sign language comprehension than during spoken language comprehension (Neville et al., 1998; Capek et al., 2004; but see MacSweeney et al., 2002; Hickok, Bellugi, and Klima, 1998). In addition, a recent morphometry study by Allen et al. (2008) found that both deaf and hearing signers exhibited increased white matter volume in the right insula compared to hearing non-signers. Allen et al. (2008) speculated that the distinct morphology of the right insula in ASL signers might arise from enhanced connectivity due to an increased reliance on cross-modal sensory integration in sign language compared to spoken language. If sign language processing recruits right hemisphere structures to a greater extent than spoken language processing, signers may develop more extensive functional networks connecting the left and right hemispheres.
Another possibility, not mutually exclusive with the first, is that the bilateral connectivity we observed might reflect the effects of congenital, life-long auditory deprivation on brain organization. Kang et al. (2003) investigated the functional connectivity of auditory cortex in deaf children and adults by examining interregional metabolic correlations with ¹⁸F-FDG PET. In this study, the mean FDG uptake in the cytoarchitectonically defined A1 region served as a covariate in the interregional and interhemispheric correlation analyses. The authors reported that metabolism in left auditory cortex was strongly correlated with that in right hemisphere auditory cortices for deaf adults and older deaf children (ages 7–15 years), but no such correlation was evident for normally hearing adults. The cross-hemispheric correlation was stronger for deaf adults than for deaf children. The absence of auditory input, perhaps in conjunction with sign language experience, may lead to plastic changes in functional connectivity between the two hemispheres for deaf individuals.
In summary, deaf signers exhibited distinct patterns of brain activity and functional connectivity when passively viewing pantomimes and ASL signs, compared to hearing non-signers. The fact that no activation was found in the anterior (ventral premotor) or posterior (inferior parietal) elements of the MNS for deaf signers, for either signs or pantomimes, argues against an account of human communication that depends upon automatic sensorimotor resonance between perception and action (see also Toni et al., 2008). We hypothesize that life-long experience with manual communication reduces or eliminates involvement of the mirror neuron system during passive viewing of communication via the hands. For signers, the neural regions engaged in pantomime and sign language perception were very similar, but not identical. Recognizing pantomimes recruited right posterior superior temporal cortex, near a region hypothesized to respond to observed intentional actions (Saxe et al., 2004), whereas recognizing ASL verbs preferentially engaged regions within the left inferior frontal gyrus. For hearing non-signers, processing pantomimes engaged parietal cortices to a greater extent than processing meaningless hand movements (ASL signs), reflecting the role of the inferior and superior parietal cortices in grasping and reaching movements. Finally, functional connectivity analyses revealed greater cross-hemisphere correlations for deaf signers and greater left hemisphere integration for hearing non-signers, particularly when viewing pantomimes. We speculate that greater right hemisphere integration for deaf signers might arise from long-term experience with sign language processing, from plastic changes associated with sensory deprivation, or from both.