Hum Brain Mapp. Author manuscript; available in PMC 2009 March 1.
PMCID: PMC2644740
NIHMSID: NIHMS69175

Giving Speech a Hand: Gesture Modulates Activity in Auditory Cortex During Speech Perception

Abstract

Viewing hand gestures during face-to-face communication affects speech perception and comprehension. Despite the visible role played by gesture in social interactions, relatively little is known about how the brain integrates hand gestures with co-occurring speech. Here we used functional magnetic resonance imaging (fMRI) and an ecologically valid paradigm to investigate how beat gesture – a fundamental type of hand gesture that marks speech prosody – might impact speech perception at the neural level. Subjects underwent fMRI while listening to spontaneously-produced speech accompanied by beat gesture, nonsense hand movement, or a still body; as additional control conditions, subjects also viewed beat gesture, nonsense hand movement, or a still body all presented without speech. Validating behavioral evidence that gesture affects speech perception, bilateral nonprimary auditory cortex showed greater activity when speech was accompanied by beat gesture than when speech was presented alone. Further, the left superior temporal gyrus/sulcus showed stronger activity when speech was accompanied by beat gesture than when speech was accompanied by nonsense hand movement. Finally, the right planum temporale was identified as a putative multisensory integration site for beat gesture and speech (i.e., here activity in response to speech accompanied by beat gesture was greater than the summed responses to speech alone and beat gesture alone), indicating that this area may be pivotally involved in synthesizing the rhythmic aspects of both speech and gesture. Taken together, these findings suggest a common neural substrate for processing speech and gesture, likely reflecting their joint communicative role in social interactions.

Keywords: gestures, speech perception, auditory cortex, magnetic resonance imaging, nonverbal communication

Introduction

Successful social communication involves the integration of simultaneous input from multiple sensory modalities. In addition to speech, features such as tone of voice, facial expression, body posture, and gesture all contribute to the perception of meaning in face-to-face interactions. Hand gestures, for example, can alter the interpretation of speech, disambiguate speech, increase comprehension and memory, and convey information not delivered by speech (e.g., Kendon, 1972; McNeill et al., 1992, 1994; Kelly et al., 1999; Goldin-Meadow & Singer, 2003; Cook et al., 2007). Despite the visible role of gesture in everyday social communication, relatively little is known about how the brain processes natural speech accompanied by gesture.

Studies examining the neural correlates of co-occurring gesture and speech have focused almost entirely on iconic gestures (i.e., hand movements portraying an object or activity). For instance, Holle et al. (2007) showed that viewing iconic gestures (as compared to viewing self-grooming movements) led to increased activity in STS, inferior parietal lobule, and precentral sulcus. Willems et al. (2006) observed similar activity in Broca’s area in response to both word-word and word-gesture mismatches. Using Transcranial Magnetic Stimulation, Gentilucci et al. (2006) also demonstrated Broca’s area involvement in iconic gesture processing. Further, studies using Event Related Potentials suggest that iconic gestures engage semantic processes similar to those evoked by pictures and words (Wu & Coulson, 2005), that iconic gestures are integrated with speech during language processing (Kelly et al., 2004; Özyürek et al., 2007), and that integration of gesture and speech is impacted by the meaningfulness of gesture (Holle & Gunter, 2007).

The integration of auditory and visual cues during speech has been studied more extensively in the context of “visual speech” (i.e., the speech-producing movements of the lips, mouth, and tongue). Behavioral effects related to visual speech are similar to those observed for gesture, as concordant visual speech can aid speech perception (Sumby & Pollack, 1954) whereas discordant visual speech can alter auditory perception (McGurk & MacDonald, 1976). Neuroimaging studies have shown that listening to speech accompanied by concordant visual speech yields greater hemodynamic activity in auditory cortices than listening to speech alone (e.g., Calvert et al., 1999, 2003). In addition, multisensory integration of auditory and visual speech has been observed in the left planum temporale (PT), left superior temporal gyrus and sulcus (STG/S), and left middle temporal gyrus (MTG; Calvert et al., 2000; Campbell et al., 2001; Callan et al., 2001, 2003, 2004; Pekkola et al., 2006).

Here we used functional magnetic resonance imaging (fMRI) paired with an ecologically valid paradigm to investigate how speech perception might be affected by the rhythmic gesture that accompanies speech. Descriptions of gestures matching the cadence of speech date back at least to 60 A.D. (Quintilian, 2006); these gestures have since been dubbed “batons,” “beats,” and “beat gesture” (i.e., rapid movements of the hands that provide ‘temporal highlighting’ to the accompanying speech; McNeill, 1992). Beat gesture has been shown to impact the perception and production of speech prosody (i.e., the rhythm and intonation of speech; Krahmer and Swerts, in press), as well as to establish context in narrative discourse (McNeill, 1992). Given that no prior study has focused on the neural correlates of beat gesture, and that both visual speech and gesture (1) affect speech comprehension and (2) involve biological motion, we hypothesized that they might be subserved by overlapping neural substrates. Accordingly, we focused on superior temporal cortices as our a priori regions of interest.

Materials and Methods

Subjects

Thirteen adult subjects (3 females; 27.51 ± 7.10 years of age) were recruited at Advanced Telecommunications Research Institute in Kyoto from a cohort of international visitors. All subjects were healthy, right-handed, native English speakers who neither spoke nor understood American Sign Language.

Stimulus material

All video segments comprising the stimuli were culled from two hours of spontaneous speech recorded in a naturalistic setting (i.e., the kitchen of a house). The recording featured a female native speaker of North American English who was naïve to the purpose of the recording. A set of questions relevant to the speaker’s life and experiences was prepared prior to the recording. During the recording, the speaker was asked to stand in the kitchen and answer questions posed to her by the experimenter in the adjacent room. Great care was taken to remove the speech articulators and other visible indices of fundamental frequency from view in an uncontrived, ecologically valid manner. The illusion of a cupboard occluding the speaker’s face was created by affixing a piece of plywood (stained to match the wood in the kitchen) to the wall above the stove. The recording was produced using a Sony DCR-HC21 Mini DV Handycam camcorder secured on a tripod and tilted downward so that only the speaker’s lower neck, torso, and upper legs were visible. The speaker moved freely and expressed herself in a natural, conversational style throughout the recording. Importantly, although her head was behind the plywood board, her gaze was free to shift from the board directly in front of her to the observer sitting on the couch in the adjacent room.

Following the spontaneous speech recording, 12 picture sequences were affixed to the plywood board in front of the speaker’s face. The pictures depicted movements that represent words in American Sign Language (ASL) but that lack obvious iconic meaning to non-signers. The speaker, who neither spoke nor understood ASL, produced each set of movements one time. There were no words written on the pictures, and the speaker did not talk while producing the hand movements. Finally, the speaker was recorded as she stood motionless.

Videos were captured with a Sony Mini DV GV-D900 and imported using Macintosh OS X and iMovie. Final Cut Pro HD 4.5 was used to cut and export twenty-four 18-second segments of speech with beat gesture to .avi movie files. Since the 24 segments were selected from two hours of free-flowing speech with gesture, the inclusion or exclusion of gesture types could be controlled by cropping; that is, it was possible to eliminate movements that communicated consistent semantic information in the absence of speech by beginning an 18-second segment after such a gesture had occurred. As the benefits of segregating gesture into strict categories have recently come under scrutiny (McNeill, 2005), beat gesture (i.e., rhythmic gesture) was not limited to flicks of the hand for the purposes of this study, in order to maintain ecological validity. The stimulus segments contained both beat gesture (strictly defined) and rhythmic movements possessing minimal iconicity and metaphoricity. All three types of beat gesture described in Li et al. (2003) occurred in our stimuli: beats with post-stroke holds, beats without post-stroke holds, and movement to a different gesture space for a subsequent beat gesture.

Tuite (1993) and Kendon (1972) describe relationships between gestural and speech rhythm, but methods for studying this complex relationship remain elusive. Shattuck-Hufnagel et al. (in press) and Yasinnik (2004) are among the first to attempt to develop systematic, quantitative methods for investigating speech and gesture timing; their ongoing work (Shattuck-Hufnagel et al., in press) seeks to represent the relationship between pitch accents and corresponding gestural events.

In the absence of an established method for determining the direct relationship between speech and gesture timing in free-flowing speech, we obtained 18-second segments of rhythmic gesture and speech by removing highly iconic gestures. A group of eight viewers (who were not subjects in the study) reported that semantic information could not be discerned by viewing the 24 video segments in the absence of speech. Additionally, one 18-second segment with a still frame of the speaker’s body and 12 segments of ASL-based movements, consisting of 65 different signs, were selected. The selected ASL movements were non-iconic, and a group of eight viewers (who did not participate in the study) confirmed that the movements did not elicit semantic information. The 24 segments of beat gesture and speech were used in the beat gesture with speech condition (as originally recorded) and in the beat gesture without speech condition (where the audio was removed; Figure 1). The 12 ASL-based segments were used in the nonsense hand movement without speech condition (as originally recorded) and in the nonsense hand movement with speech condition (where they were paired with speech from the former 24 segments that were originally accompanied by beat gesture). Finally, the motionless recording of the speaker was used in the still frame without speech condition, which served as baseline, and in the still frame with speech condition (where it was paired with speech from the 24 segments originally accompanied by beat gesture). One 18-second segment was shown per block (each block thus lasted 18 seconds), with a 3-second cream-colored screen separating segments.

Figure 1
Experimental paradigm. There were six conditions, obtained by crossing movement type (beat gesture, nonsense hand movement, still frame) by speech (present or absent). In the actual experiment, blocks were presented in pseudorandom orders counterbalanced ...

The RMS energy of the audio segments was adjusted to be identical across stimuli. To prevent specific item effects (in terms of speech content), stimuli were counterbalanced across subjects such that one subject might hear and see segment no. 1 with the original beat gesture and speech, another subject might hear the speech of segment no. 1 while viewing one of the segments of nonsense hand movement, and yet another subject might hear the speech of segment no. 1 while viewing the still frame. For each subject, any part (speech and/or body movements) of the original 24 beat gesture segments and 12 nonsense hand movement segments occurred exactly once over the two sessions. The order of presentation of the video segments was randomized subject to the constraints that there would be no serial occurrence of (i) two identical conditions, (ii) three segments with speech, or (iii) three segments without speech. Each subject viewed a different randomization of the video sequences.
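The two procedural details above can be illustrated with short sketches. First, a minimal Python sketch of RMS equalization; the file names and the use of the mean RMS as the common target are assumptions for illustration, not details reported in the paper.

```python
import numpy as np
import soundfile as sf  # assumed WAV I/O library; any equivalent would do

def rms(x):
    """Root-mean-square energy of an audio signal."""
    return np.sqrt(np.mean(np.square(x)))

def match_rms(signal, target):
    """Scale `signal` so that its RMS energy equals `target`."""
    return signal * (target / rms(signal))

# Hypothetical file names for the 24 audio tracks (18-s segments).
clips = [sf.read(f"segment_{i:02d}.wav")[0] for i in range(1, 25)]
target = np.mean([rms(c) for c in clips])          # common reference level (assumed)
equalized = [match_rms(c, target) for c in clips]  # identical RMS across stimuli
```

Second, a rejection-sampling sketch of the ordering constraints (no two identical conditions in a row; no three consecutive segments with speech, or without speech). The condition labels are placeholders; the study’s actual randomization procedure is not described beyond these rules.

```python
import random

def constrained_order(blocks, max_tries=10_000):
    """Shuffle block labels until the order satisfies the constraints above."""
    def valid(seq):
        if any(a == b for a, b in zip(seq, seq[1:])):      # (i) no immediate repeats
            return False
        for trio in zip(seq, seq[1:], seq[2:]):
            speech = [label.endswith("+speech") for label in trio]
            if all(speech) or not any(speech):             # (ii) and (iii)
                return False
        return True

    for _ in range(max_tries):
        candidate = blocks[:]
        random.shuffle(candidate)
        if valid(candidate):
            return candidate
    raise RuntimeError("no valid order found")

# Six conditions, each occurring three times per run (see Experimental procedures).
conditions = ["beat+speech", "nonsense+speech", "still+speech",
              "beat", "nonsense", "still"]
run_order = constrained_order(conditions * 3)
```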

After the fMRI scan, subjects were given a short test with three multiple-choice questions based on the audio content of the final three audiovisual segments appearing in the subject’s fMRI session. Since stimuli were randomized and counterbalanced, each subject received a different set of test questions. This test was intended to promote attention during the passive fMRI task, as subjects were informed about the post-scan test at the beginning of the fMRI session. The average accuracy for the 13 subjects was 90% correct.

Experimental procedures

Prior to the fMRI scan, subjects received a short introduction to the task. They were shown a still picture of the video and told that the speaker, whose head was blocked by a cupboard in the kitchen, was talking to a person in the adjacent room. They were told to keep their eyes fixated on the speaker’s torso at all times, even throughout the silent segments. Subjects were advised to pay attention during the entire scan because they would be given a post-scan test on what they saw and heard.

Subjects lay supine on the scanner bed while undergoing two consecutive fMRI scans; in each of these 6-minute-30-second scans, each condition occurred three times. Visual and auditory stimuli were presented to subjects under computer control using a magnet-compatible projection system and headphones. Subjects viewed visual stimuli via a Victor (Japan) projector. The audiovisual stimuli were presented full-screen in RealPlayer to ensure that subjects saw no words, numbers, or time bars while viewing the stimuli. The images were first projected onto a 150 by 200 mm screen placed just behind the subject’s head; subjects then viewed a reflection of the 110 by 150 mm images (screen resolution 1024 by 768) via a small mirror adjusted to eye level. Auditory stimuli were presented using a Hitachi Advanced headset.

Images were acquired at the Advanced Telecommunications Research Institute in Kyoto, Japan using a Shimadzu 1.5 T whole-body scanner. A 2D spin-echo image (TR = 300 ms, TE = 12.1 ms, matrix size 384 by 512, 5-mm thick, 5-mm gap) was acquired in the sagittal plane to allow prescription of the remaining scans. For each participant, a structural T2-weighted fast spin-echo imaging volume (TR = 5468 ms, TE = 80 ms, matrix size 256 by 256, FOV = 224 mm, 30 slices, 0.875-mm in-plane resolution, 5-mm thick) was acquired coplanar with the functional scans to allow for spatial registration of each subject’s data into a common space. The functional data were acquired during two whole-brain scans, each lasting 6 minutes and 30 seconds (264 images, EPI gradient-echo, TR = 3000 ms, TE = 49 ms, flip angle = 90°, matrix size = 64 by 64, 3.5-mm in-plane resolution, 5-mm thick, 0-mm gap).

Data analysis

Following image conversion, the functional data were analyzed using Statistical Parametric Mapping 2 (SPM2; http://www.fil.ion.ucl.ac.uk/spm/software/spm2/). Functional images for each participant were realigned to correct for head motion, normalized into MNI space (Collins et al., 1994; Mazziotta et al., 2001), and smoothed with a 7 mm Gaussian kernel. For each subject, condition effects were estimated according to the General Linear Model using a boxcar reference function delayed by six seconds. The still frame condition was implicitly modeled as baseline. The resulting contrast images were entered into second-level analyses using random-effects models to allow inferences to be made at the population level (Friston et al., 1999). Group activation maps were thresholded at p < 0.01 for magnitude, with whole-volume correction for multiple comparisons applied at the cluster level (p < 0.05). The SPM2 toolbox MarsBaR (Brett et al., 2002) was used to extract parameter estimates for each participant from regions of interest. Small volume correction was applied for selected contrasts of interest based upon previous research identifying superior temporal cortices (i.e., PT and STG/S) as areas of increased activity while viewing visual speech during speech perception and as putative sites of multisensory integration. For the contrast of speech with beat gesture versus speech with nonsense hand movement (Figure 2b), small volume correction was based on a 14,000 mm³ volume, a conservative estimate according to measurements of the auditory belt and parabelt regions reported in Sweet et al. (2005). This volume was defined by a sphere of 15 mm radius centered at the functional maximum (x = −57, y = −12, z = 8). For the contrast of bimodal (beat gesture and speech) versus unimodal (still body and speech, beat gesture only) presentation (Figure 2c), small volume correction was based on a 4,100 mm³ volume defined by a sphere of 10 mm radius centered at the functional maximum (x = 57, y = −27, z = 8; identified as planum temporale per anatomical maps reported in a previous structural MRI study, Westbury et al., 1999). Cluster sizes and coordinates for peaks of activity for all contrasts of interest are presented in Supplementary Table 1.
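As an illustration of the first-level modeling choice described above (a boxcar reference function delayed by six seconds, with the still-frame-without-speech condition left out of the design as implicit baseline), a minimal Python sketch follows. This is a re-implementation for clarity, not the SPM2 code used in the study; the scan count, block onsets, and toy voxel time series are assumptions.

```python
import numpy as np

TR = 3.0      # s, EPI repetition time (see Image acquisition)
BLOCK = 18.0  # s, duration of each stimulus block
DELAY = 6.0   # s, fixed delay applied to the boxcar reference function

def delayed_boxcar(onsets, n_scans, tr=TR, duration=BLOCK, delay=DELAY):
    """0/1 regressor sampled at each TR: the block boxcar shifted by `delay`
    to approximate hemodynamic lag."""
    t = np.arange(n_scans) * tr
    reg = np.zeros(n_scans)
    for onset in onsets:
        reg[(t >= onset + delay) & (t < onset + delay + duration)] = 1.0
    return reg

n_scans = 130                      # roughly 6.5 min at TR = 3 s (assumed)
onsets = {                         # hypothetical block onsets in seconds
    "beat+speech":     [0.0, 126.0, 252.0],
    "nonsense+speech": [21.0, 147.0, 273.0],
    "still+speech":    [42.0, 168.0, 294.0],
    "beat":            [63.0, 189.0, 315.0],
    "nonsense":        [84.0, 210.0, 336.0],
    # still frame without speech is not modeled: it serves as implicit baseline
}

# Design matrix: one delayed-boxcar column per modeled condition, plus a constant.
X = np.column_stack([delayed_boxcar(v, n_scans) for v in onsets.values()]
                    + [np.ones(n_scans)])

y = np.random.randn(n_scans)                  # stand-in for one voxel's time series
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # per-condition effect estimates
```

Per-subject contrast images computed from such condition estimates were then carried to the second level for the random-effects analyses described above.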

Figure 2
Neural activity related to processing speech and speech accompanied by beat gesture. (a) Clusters depict areas of greater activity while listening to speech accompanied by beat gesture as compared to listening to speech accompanied by a still body. Areas ...

Results

A direct comparison between speech accompanied by beat gesture versus speech accompanied by a still body (statistical activation map in Figure 2a; Supplementary Table 1) revealed greater activity in bilateral PT and posterior STG, two areas known to underlie both speech perception and the processing of biological motion. Greater activity for this contrast was also observed in visual cortices, associated with sensory processing, as well as in bilateral premotor and left parietal regions, perhaps reflecting “mirror neuron system” activity associated with the perception of meaningful actions (Rizzolatti & Craighero, 2004). As compared to baseline (i.e., viewing a still body without speech), viewing speech accompanied by beat gesture (blue outlines in Figure 2a; Figure 3b; Supplementary Table 1) led to increased activity in bilateral visual cortices (including visual motion area MT), primary auditory cortices, STG/S, MTG, inferior frontal gyrus (IFG), middle frontal gyrus, postcentral gyrus, and superior colliculi. Viewing speech accompanied by a still body (as compared to baseline) led to increased activity in several overlapping areas (green outlines in Figure 2a; Supplementary Table 1), such as bilateral STG/S, MTG, and IFG. Viewing beat gesture without speech as compared to baseline (Figure 3a; Supplementary Table 1) yielded significant increases in bilateral occipito-temporal areas including MT, right postcentral gyrus and intraparietal sulcus, and posterior MTG and STG/S.

Figure 3
Neural activity related to processing beat gesture and nonsense hand movements in the presence and absence of speech. Clusters depict areas of greater activity while (a) viewing beat gesture as compared to viewing a still body, (b) listening to speech ...

As compared to baseline, viewing speech accompanied by nonsense hand movement (Figure 3d; Supplementary Table 1) led to increased activity throughout visual and temporal cortex bilaterally as well as in bilateral postcentral gyri, intraparietal sulci, and superior colliculi, and left middle frontal gyrus. Viewing nonsense hand movement without speech as compared to baseline (Figure 3c; Supplementary Table 1) yielded significant increases in bilateral occipito-temporal areas including MT, postcentral gyri, intraparietal sulci, superior and middle frontal gyri, and right cerebellum.

To identify regions where increased activity might specifically reflect the integration of beat gesture and speech, we directly compared neural responses to speech accompanied by beat gesture versus speech accompanied by nonsense hand movement. Notably, this contrast revealed significant activity in left STG/S (Figure 2b; Supplementary Table 1), indicating that beat gesture, like visual speech, modulates activity in left nonprimary auditory cortices during speech perception. For the inverse contrast (nonsense hand movement versus beat gesture), the left cerebellum, postcentral gyrus, and intraparietal sulcus were significantly more active (Supplementary Table 1).

To further examine regions where the presence of speech impacts gesture processing, we contrasted summed responses to unimodal conditions (still body with speech and beat gesture only) with responses to the bimodal condition (beat gesture with speech). Significantly greater responses to the bimodal presentation of beat gesture and speech (speech with beat gesture > speech with still body + beat gesture with no speech) were observed in right PT (Figure 2c; Supplementary Table 1). Parameter estimates for each condition in this contrast show that activity while silently viewing beat gesture was neither significantly below nor above baseline. Hence, right PT was recruited when beat gesture was presented in the context of speech, whereas in the absence of speech, gesture had no effect. No areas demonstrated superadditive properties for the combination of nonsense hand movement and speech (speech with nonsense hand movement > speech with still body + nonsense hand movement with no speech).
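In terms of the condition parameter estimates from the first-level model, the superadditivity test applied here can be written as

\[
\hat{\beta}_{\text{speech + beat gesture}} \;>\; \hat{\beta}_{\text{speech + still body}} \;+\; \hat{\beta}_{\text{beat gesture, no speech}},
\]

which corresponds to a contrast with weights (+1, −1, −1) on these three conditions (and 0 elsewhere), evaluated against zero across subjects. The inequality restates the contrast reported above; the specific weight vector is our notation for it rather than a detail given in the paper.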

Discussion

Few studies have attempted to characterize the brain’s response to concurrently and spontaneously produced gesture and speech. We hypothesized that neural responses to natural, rhythmic gesture accompanying speech would be observed not only in visual cortex but also in STG and STS, areas well-known for their role in speech processing. This hypothesis was guided by research on iconic gestures and deaf signers which indicates that STG/S plays a role in processing movement. Additional cues were provided by studies on visual speech showing STG/S to be crucially involved in audiovisual integration of speech with accompanying mouth movement. Supporting our hypothesis, bilateral posterior STG/S (including PT) responses were significantly greater when subjects listened to speech accompanied by beat gesture than when they listened to speech accompanied by a still body. Further, left anterior STG/S responses were significantly greater when listening to speech accompanied by beat gesture than when listening to speech accompanied by a control movement (i.e., nonsense hand movement). Finally, right posterior STG/S showed increased responses only to beat gesture presented in the context of speech, and not to beat gesture presented alone, suggesting a possible role in multisensory integration of gesture and speech. Related research on biological motion, deaf signers, visual speech, and iconic gesture highlights the significance of the present data.

As would be expected, canonical speech perception regions in STG/S showed increased bilateral activity while subjects heard speech accompanied by a still body or speech accompanied by beat gesture. Importantly, when directly comparing these two conditions, responses in the posterior portion of bilateral STG (including PT) were significantly greater when speech was accompanied by beat gesture. These data provide further support for STG/S as a polysensory area, as was originally suggested by studies in rhesus and macaque monkeys (Desimone & Gross, 1979; Bruce et al., 1981; Padberg et al., 2003). Neuroimaging data have shown that STG/S, especially its posterior portion, is responsive to both visual and auditory input in humans. Studies in hearing and nonhearing signers strongly implicate posterior superior temporal cortex in language-related processing, regardless of whether the language input is auditory or visual in nature (MacSweeney et al., 2004, 2006). Most recently, Holle et al. (2007) reported that the posterior portion of left STS showed increased activity for viewing iconic gestures as compared to viewing grooming-related hand movements. Wilson et al. (2007) found a greater degree of intersubject correlation in the right STS when subjects viewed an entire body (e.g., head, face, hands, and torso) producing natural speech as compared to when they heard speech alone. STG/S has also been shown to be more active while listening to speech accompanied by a moving mouth than while listening to speech accompanied by a still mouth (Calvert et al., 1997; 2003). Interestingly, the stimuli in these studies may all be said to have communicative intent, suggesting that the degree of STG/S involvement may be mediated by the viewer’s perception of the stimuli as potentially communicative. Such a characteristic of STG/S would be congruent with Kelly et al.’s (2006) finding that the central N400 effect (i.e., a response known to occur when incongruent stimuli are presented) can be eliminated when subjects know that gesture and speech are being produced by different speakers.

It is important to distinguish between the posterior portion of STG/S and STSp (the posterior superior temporal sulcus). The latter is a much-discussed area in the study of biological motion (for review, see Blake and Shiffrar, 2007), as STSp has consistently shown increased activity for viewing point-light representations of biological motion (Grossman et al., 2004; Grossman and Blake, 2002). Qualitative comparisons suggest that silently viewing beat gesture (versus a still body) leads to increased activity in the vicinity of STSp (right hemisphere) as reported in Grossman et al. (2004), Grossman and Blake (2002), and Bidet-Caulet et al. (2005). Significant increases for speech-accompanied beat gesture over a speech-accompanied still body, however, are anterodorsal to STSp. That is, speechless biological motion versus a still body yields significant increases in regions known to underlie processing of biological motion, but when accompanied by speech, biological motion versus a still body yields significant increases in an area more dorsal and anterior to that identified by biological motion localizers. Once again, this suggests that the intent to participate in a communicative exchange (e.g., listening to speech) is a crucial determinant of how movement is processed. The idea that perception of gesture can be altered by the presence or absence of speech complements behavioral findings on gesture production, where it has been shown that the presence of speech impacts what is conveyed through gesture (So et al., 2005).

We would like to suggest that processing of movement may, in many cases, be context driven. Rather than processing speech-accompanying movement in canonical biological motion regions, perhaps movement is processed differently when it is interpreted (consciously or unconsciously) as having communicative intent. We are not the first to suggest that, in the case of language, the brain may not break sensory input down into its smallest units and then build meaning from those pieces. In a survey of speech perception studies, Indefrey and Cutler (2004) found that regions which are active while listening to single phonemes are not necessarily active while listening to a speech stream. Hence, it appears that the brain does not break the speech stream down into its component parts in order to extract meaning; instead, the context in which the phonemes are received (e.g., words, sentences) determines neural activity. We suggest that this may be the case for biological motion as well: biological motion with speech and without speech may be processed differently due to the contextualization afforded by speech.

When exploring activity within STG/S for the contrast of speech accompanied by beat gesture versus speech accompanied by a still body, it is also notable that the STG/S activity for this contrast includes the planum temporale bilaterally. Within this study, PT has emerged as a potentially critical site for the integration of beat gesture and speech. Contrasting responses to the co-presentation of speech and beat gesture with the summed responses to unimodal presentation of speech (with a still body) and beat gesture (without speech), the right PT was identified as a putative site of gesture and speech integration (Figure 2c).1 In other words, in the right PT, beat gesture had no effect in the absence of speech; in the presence of speech, however, beat gesture resulted in a reliable signal increase.

Significant activity in bilateral PT (as well as inferior, middle, and superior temporal gyri) was observed by MacSweeney et al. (2004) while hearing nonsigners viewed blocks of British Sign Language and Tic Tac (a communicative code used at racecourses). We observed no activity in planum temporale for either beat gesture or nonsense hand movements (which are based on ASL signs) when viewed without speech. MacSweeney et al. (2004), in addition to including a highly animated face in their stimuli, informed participants that the stimuli would be communicative and asked them to judge which strings of movements were incorrect. Thus, their participants had several cues indicating that they should search for meaning in the hand movements. In the current study, participants had no explicit instruction to assign meaning to the hand movements. Increased activity in planum temporale was observed only when beat gesture was accompanied by speech and not when beat gesture was presented silently. Hence, it appears that PT activity, in particular, is modulated by imbuing movement with the potential to convey meaning.

Considering what is known about PT activity, it is likely that beat gesture establishes meaning through its connection to speech prosody. PT has been shown to process meaningful prosodic and melodic input, as significantly greater activity has been observed in this area for producing or perceiving song melody versus speech (Callan et al., 2006; Saito et al., 2006) and for listening to speech with strong prosodic cues (Meyer et al., 2004). Greater activity in PT has also been observed for listening to music with salient metrical rhythm (Chen et al., 2006), processing pitch modulations (Warren et al., 2005; Barrett & Hall, 2006), singing versus speaking, and synchronized production of song lyrics (Saito et al., 2006). The observed right lateralization of multisensory responses for beat gesture and speech may be a further reflection of the link between speech prosody and beat gesture (Krahmer and Swerts, in press). Numerous fMRI, neurophysiological, and lesion studies have demonstrated a strong right hemisphere involvement in processing speech prosody (for review, see Kotz et al., 2006). Along these lines, it has also been suggested that the right hemisphere is better suited for musical processing (Zatorre et al., 2002).

Our findings both confirm the role of PT in processing rhythmic aspects of speech and suggest that this region also plays a pivotal role in processing speech-accompanying gesture. This warrants future work to determine the degree to which PT responses may be modulated by temporal synchrony between beat gesture and speech. Additionally, further studies will be necessary to determine the impact of beat gesture in the presence of other speech-accompanying movement (e.g., head and mouth movement). In order to begin to investigate the neural correlates of beat gesture independently from other types of speech-accompanying movement, the current study recreates environmental conditions where gesture is the only speech-accompanying movement that can be perceived (e.g., viewing a speaker whose face is blocked by an environmental obstacle, or viewing, from the back of a large auditorium, a speaker whose face is barely visible).

Whereas the contrast of beat gesture with speech versus still body with speech showed significant increases in bilateral posterior areas of STG/S, the contrast of beat gesture with speech versus nonsense hand movement with speech showed significant increases in left anterior areas of STG/S. In light of the role of left anterior STG/S in speech intelligibility (Scott et al., 2000; Davis and Johnsrude, 2003), these data suggest that natural beat gesture may impact speech processing at a number of stages. Humphries et al. (2005) found that the left posterior temporal lobe was most sensitive to speech prosody. It may be the case that beat gesture focuses viewers’ attention on speech prosody which, in turn, leads to increased intelligibility and comprehension. Considering that responses to speech-accompanied beat gesture and nonsense hand movement did not differ significantly within right PT, the synchrony of beat gesture with speech (or the asynchrony of the random movements) may contribute to the differential responses observed in anterior temporal cortex for listening to speech accompanied by these two types of movement.

Willems and Hagoort (2007) have suggested that the link between language and gesture stems from a more general interplay between language and action. Perhaps attesting to this interplay, no regions besides the anterior STG/S were more active for speech with beat gesture than for speech with nonsense hand movement. The stimuli and design of the present study were also markedly different from those of another recent study, which showed increased responses in Broca’s area for gesture-word mismatches (Willems et al., 2006). Willems et al.’s findings are complementary to those of the current study in that we investigated responses to gesture carrying very little semantic information, whereas Willems et al. examined the impact of semantic incongruency between gesture and speech.

Besides posterior temporal regions, we also observed greater activity for speech with beat gesture (as compared to speech with a still body) in bilateral premotor cortices and inferior parietal regions. This may reflect activation of the “mirror neuron system” (for review, see Rizzolatti & Craighero, 2004, and Iacoboni & Dapretto, 2006), whereby regions responsible for action execution (in this case, gesture production) are thought to likewise be involved in action observation. Wilson et al. (2007) also reported bilateral premotor activity for audiovisual speech (but not for audio-only speech), although this activity was ventral to that observed in the present study and did not reach significance when audiovisual and audio-only conditions were compared directly. This difference in localization might reflect the fact that, unlike in the stimuli used in the current study, the speaker’s head, face, and speech articulators were fully visible in the stimuli used by Wilson and colleagues (i.e., hand muscles are known to be represented dorsal to head and face muscles within the premotor cortex).

An important area for the processing of our ASL-derived nonsense hand movement was the parietal cortex. Parietal activity was consistently observed when beat gesture and nonsense hand movement (both with and without speech) were compared to baseline. In addition, parietal activity was significantly greater both when viewing nonsense hand movement accompanied by speech (as compared to viewing beat gesture accompanied by speech) and when viewing nonsense hand movement without speech (as compared to viewing beat gesture without speech). Interestingly, Emmorey et al. (2004, 2005, 2007) have identified parietal activity as being crucial to the production of sign language. Considering that neither our subjects nor the woman appearing in our stimuli spoke or understood ASL, our data suggest that parietal regions may be preferentially engaged by the types of movement used in ASL, even in the absence of sign language knowledge.

To conclude, our findings of increased activity in posterior STG/S (including PT) for beat gesture with speech indicate that canonical speech perception areas in temporal cortices may process and integrate not only auditory but also visual cues during speech perception. Additionally, our finding that activity in anterior STG/S is impacted by speech-accompanying beat gesture suggests differential but intertwined roles for anterior and posterior sections of STG/S during speech perception, with anterior areas showing effects related to enhanced speech intelligibility and posterior areas showing effects related to the presence of multimodal input. In line with extensive research showing that speech-accompanying gesture impacts social communication (e.g., McNeill, 1992) and evidence of a close link between hand action and language (for review, see Willems and Hagoort, 2007), our findings highlight the important role of multiple sensory modalities in communicative contexts.

Acknowledgments

We thank Olga Griswold for invaluable discussion and two anonymous reviewers for their helpful comments. This work was supported by the Foundation for Psychocultural Research–UCLA Center for Culture, Brain, and Development, ATR International, and the NSF EAPSI Program. For generous support the authors also wish to thank the Brain Mapping Medical Research Organization, Brain Mapping Support Foundation, Pierson-Lovelace Foundation, The Ahmanson Foundation, William M. and Linda R. Dietel Philanthropic Fund at the Northern Piedmont Community Foundation, Tamkin Foundation, Jennifer Jones-Simon Foundation, Capital Group Companies Charitable Foundation, Robson Family and Northstar Fund. The project described was supported by Grant Numbers RR12169, RR13642 and RR00865 from the National Center for Research Resources (NCRR), a component of the National Institutes of Health (NIH); its contents are solely the responsibility of the authors and do not necessarily represent the official views of NCRR or NIH.

Footnotes

1. The multisensory properties demonstrated by right PT were observed by utilizing a test for superadditivity. First described in single-cell studies, superadditivity is a property whereby neuronal responses to bimodal stimulus presentation are greater than the summed responses to unimodal stimulus presentations (Stein et al., 1993). Although activity observed using the test for superadditivity may not reflect the same neuronal activity measured in the single multisensory integration cells which were originally identified with this approach (Stein et al., 1993; Laurienti et al., 2005), this test has been used by researchers of visual speech (Calvert et al., 2000; Campbell et al., 2001; Callan et al., 2003, 2004) to successfully identify areas involved in multisensory integration.

References

  • Barrett DJ, Hall DA. Response preferences for “what” and “where” in human non-primary auditory cortex. Neuroimage. 2006;32:968–77. [PubMed]
  • Bidet-Caulet A, Voisin J, Bertrand O, Fonlupt P. Listening to a walking human activates the temporal biological motion area. Neuroimage. 2005;28:132–9. [PubMed]
  • Blake R, Shiffrar M. Perception of human motion. Annu Rev Psychol. 2007;58:47–73. [PubMed]
  • Brett M, Anton JL, Valabregue R, Poline JB. Region of interest analysis using an SPM toolbox [abstract] Neuroimage. 2002;16:S497.
  • Bruce C, Desimone R, Gross CG. Visual properties of neurons in a polysensory area in superior temporal sulcus of the macaque. J Neurophysiol. 1981;46:369–84. [PubMed]
  • Callan DE, Callan AM, Kroos C, Vatikiotis-Bateson E. Multimodal contribution to speech perception revealed by independent component analysis: a single-sweep EEG case study. Brain Res Cogn Brain Res. 2001;10:349–53. [PubMed]
  • Callan DE, Jones JA, Munhall K, Callan AM, Kroos C, Vatikiotis-Bateson E. Neural processes underlying perceptual enhancement by visual speech gestures. Neuroreport. 2003;14:2213–8. [PubMed]
  • Callan DE, Jones JA, Munhall K, Kroos C, Callan AM, Vatikiotis-Bateson E. Multisensory integration sites identified by perception of spatial wavelet filtered visual speech gesture information. J Cogn Neurosci. 2004;16:805–16. [PubMed]
  • Callan DE, Tsytsarev V, Hanakawa T, Callan AM, Katsuhara M, Fukuyama H, Turner R. Song and speech: brain regions involved with perception and covert production. Neuroimage. 2006;31:1327–42. [PubMed]
  • Calvert GA, Bullmore ET, Brammer MJ, Campbell R, Williams SC, McGuire PK, Woodruff PW, Iversen SD, David AS. Activation of auditory cortex during silent lipreading. Science. 1997;276:593–6. [PubMed]
  • Calvert GA, Brammer MJ, Bullmore ET, Campbell R, Iversen SD, David AS. Response amplification in sensory-specific cortices during crossmodal binding. Neuroreport. 1999;10:2619–23. [PubMed]
  • Calvert GA, Campbell R, Brammer MJ. Evidence from Functional Magnetic Resonance Imaging of Crossmodal Binding in the Human Heteromodal Cortex. Curr Biol. 2000;10:649–657. [PubMed]
  • Calvert GA, Campbell R. Reading speech from still and moving faces: the neural substrates of visible speech. J Cogn Neurosci. 2003;15:57–70. [PubMed]
  • Campbell R, MacSweeney M, Surguladze S, Calvert G, McGuire P, Suckling J, Brammer MJ, David AS. Cortical substrates for the perception of face actions: an fMRI study of the specificity of activation for seen speech and for meaningless lower-face acts (gurning) Brain Res Cogn Brain Res. 2001;12:233–43. [PubMed]
  • Chen JL, Zatorre RJ, Penhune VB. Interactions between auditory and dorsal premotor cortex during synchronization to musical rhythms. Neuroimage. 2006;32:1771–81. [PubMed]
  • Collins DL, Neelin P, Peters TM, Evans AC. Automatic 3D intersubject registration of MR volumetric data in standardized Talairach space. J Comput Assist Tomogr. 1994;18:192–205. [PubMed]
  • Cook SW, Mitchell Z, Goldin-Meadow S. Gesturing makes learning last. Cognition. 2007. doi: 10.1016/j.cognition.2007.04.010. [PMC free article] [PubMed] [Cross Ref]
  • Davis MH, Johnsrude IS. Hierarchical processing in spoken language comprehension. J Neurosci. 2003;23:3423–31. [PubMed]
  • Desimone R, Gross CG. Visual areas in the temporal cortex of the macaque. Brain Res. 1979;178:363–80. [PubMed]
  • Emmorey K, Mehta S, Grabowski TJ. The neural correlates of sign versus word production. Neuroimage. 2007;36:202–8. [PMC free article] [PubMed]
  • Emmorey K, Grabowski T, McCullough S, Ponto LL, Hichwa RD, Damasio H. The neural correlates of spatial language in English and American Sign Language: a PET study with hearing bilinguals. Neuroimage. 2005;24:832–40. [PubMed]
  • Emmorey K, Grabowski T, McCullough S, Damasio H, Ponto L, Hichwa R, Bellugi U. Motor-iconicity of sign language does not alter the neural systems underlying tool and action naming. Brain Lang. 2004;89:27–37. [PubMed]
  • Friston KJ, Holmes AP, Price CJ, Buchel C, Worsley KJ. Multisubject fMRI studies and conjunction analyses. Neuroimage. 1999;10:385–96. [PubMed]
  • Gentilucci M, Bernardis P, Crisi G, Dalla Volta R. Repetitive transcranial magnetic stimulation of Broca’s area affects verbal responses to gesture observation. J Cogn Neurosci. 2006;18:1059–74. [PubMed]
  • Goldin-Meadow S, Singer MA. From children’s hands to adults’ ears: gesture’s role in the learning process. Dev Psychol. 2003;39:509–20. [PubMed]
  • Grossman ED, Blake R. Brain Areas Active during Visual Perception of Biological Motion. Neuron. 2002;35:1167–75. [PubMed]
  • Grossman ED, Blake R, Kim CY. Learning to see biological motion: brain activity parallels behavior. J Cogn Neurosci. 2004;16:1669–79. [PubMed]
  • Holle H, Gunter TC, Ruschemeyer SA, Hennenlotter A, Iacoboni M. Neural correlates of the processing of co-speech gestures. Neuroimage. 2007. doi: 10.1016/j.neuroimage.2007.10.055. [PubMed] [Cross Ref]
  • Holle H, Gunter TC. The Role of Iconic Gestures in Speech Disambiguation: ERP Evidence. J Cogn Neurosci. 2007;19:1175–92. [PubMed]
  • Humphries C, Love T, Swinney D, Hickok G. Response of anterior temporal cortex to syntactic and prosodic manipulations during sentence processing. Hum Brain Mapp. 2005;26:128–38. [PubMed]
  • Iacoboni M, Dapretto M. The Mirror Neuron System and the Consequences of Its Dysfunction. Nat Rev Neurosci. 2006;21:191–199. [PubMed]
  • Indefrey P, Cutler A. Prelexical and lexical processing in listening. In: Gazzaniga M, editor. The Cognitive Neurosciences. Cambridge, Mass: MIT Press; 2004.
  • Kelly SD, Barr D, Church RB, Lynch K. Offering a hand to pragmatic understanding: The role of speech and gesture in comprehension and memory. Journal of Memory and Language. 1999;40:577–592.
  • Kelly SD, Kravitz C, Hopkins M. Neural correlates of bimodal speech and gesture comprehension. Brain Lang. 2004;89:253–60. [PubMed]
  • Kelly SD, Ward S, Creigh P, Bartolotti J. An intentional stance modulates the integration of gesture and speech during comprehension. Brain Lang. 2006;101:222–33. [PubMed]
  • Kendon A. Some relationships between body motion and speech: An analysis of an example. In: Siegman AW, Pope B, editors. Studies in dyadic communication. Elmsford, NY: Pergamon Press; 1972. pp. 177–216.
  • Kotz SA, Meyer M, Paulmann S. Lateralization of emotional prosody in the brain: an overview and synopsis on the impact of study design. Prog Brain Res. 2006;156:285–94. [PubMed]
  • Krahmer E, Swerts M. Effects of visual beats on prosodic prominence: acoustic analyses, auditory perception and visual perception. Journal of Memory and Language. 2007;57:396–414.
  • Laurienti PJ, Perrault TJ, Stanford TR, Wallace MT, Stein BE. On the use of superadditivity as a metric for characterizing multisensory integration in functional neuroimaging studies. Exp Brain Res. 2005;166:289–97. [PubMed]
  • Li Q, Nakano Y, Nishida T. Gestures realization for embodied conversational agents. The 17th Annual Conference of the Japanese Society for Artificial Intelligence; Niigata, Japan. 2003.
  • MacSweeney M, Campbell R, Woll B, Brammer MJ, Giampietro V, David AS, Calvert GA, McGuire PK. Lexical and sentential processing in British Sign Language. Hum Brain Mapp. 2006;27:63–76. [PubMed]
  • MacSweeney M, Campbell R, Woll B, Giampietro V, David AS, McGuire PK, Calvert GA, Brammer MJ. Dissociating linguistic and nonlinguistic gestural communication in the brain. Neuroimage. 2004;22:1605–18. [PubMed]
  • Mazziotta J, Toga A, Evans A, Fox P, Lancaster J, Zilles K, Woods R, Paus T, Simpson G, Pike B, Holmes C, Collins L, Thompson P, MacDonald D, Iacoboni M, Schormann T, Amunts K, Palomero-Gallagher N, Geyer S, Parsons L, Narr K, Kabani N, Le Goualher G, Boomsma D, Cannon T, Kawashima R, Mazoyer B. A probabilistic atlas and reference system for the human brain: International Consortium for Brain Mapping (ICBM) Philos Trans R Soc Lond B Biol Sci. 2001;356:1293–322. [PMC free article] [PubMed]
  • McGurk H, MacDonald J. Hearing lips and seeing voices. Nature. 1976;264:746–748. [PubMed]
  • McNeill D. Hand and mind: What gestures reveal about thought. Chicago: University of Chicago Press; 1992.
  • McNeill D. Gesture and Thought. Chicago: University of Chicago Press; 2005.
  • McNeill D, Cassell J, McCoullough KE. Communicative effects of speech mismatched gestures. Research on Language and Social Interaction. 1994;27:223–237.
  • Meyer M, Steinhauer K, Alter K, Friederici AD, von Cramon DY. Brain activity varies with modulation of dynamic pitch variance in sentence melody. Brain Lang. 2004;89:277–89. [PubMed]
  • Özyürek A, Willems RM, Kita S, Hagoort P. On-line integration of semantic information from speech and gesture: insights from event-related brain potentials. J Cogn Neurosci. 2007;19:605–16. [PubMed]
  • Padberg J, Seltzer B, Cusick CG. Architectonics and cortical connections of the upper bank of the superior temporal sulcus in the rhesus monkey: an analysis in the tangential plane. J Comp Neurol. 2003;467:418–34. [PubMed]
  • Pekkola J, Ojanen V, Autti T, Jaaskelainen IP, Mottonen R, Sams M. Attention to visual speech gestures enhances hemodynamic activity in the left planum temporale. Hum Brain Mapp. 2006;27:471–7. [PubMed]
  • Quintilian. Institutes of oratory (Watson JS, Trans.; Honeycutt L, Ed.). 2006. Retrieved November 10, 2006, from http://honeyl.public.iastate.edu/quintilian.
  • Rizzolatti G, Craighero L. The mirror-neuron system. Annual Rev Neurosci. 2004;27:169–92. [PubMed]
  • Saito Y, Ishii K, Yagi K, Tatsumi IF, Mizusawa H. Cerebral networks for spontaneous and synchronized singing and speaking. Neuroreport. 2006;17:1893–7. [PubMed]
  • Scott SK, Blank CC, Rosen S, Wise RJ. Identification of a pathway for intelligible speech in the left temporal lobe. Brain. 2000;123:2400–6. [PubMed]
  • Shattuck-Hufnagel S, Yasinnik Y, Veilleux N, Renwick M. A method for studying the time alignment of gestures and prosody in American English: ‘Hits’ and pitch accents in academic-lecture-style speech. In: Esposito A, Bratanic M, Keller E, Marinaro M, editors. Fundamentals of verbal and nonverbal communication and the biometric issue. Amsterdam: IOS Press; 2007.
  • So C, Coppola M, Licciardello V, Goldin-Meadow S. The seeds of spatial grammar in the manual modality. Cognitive Science. 2005;29:1029–1043. [PubMed]
  • Stein BE, Meredith MA, Wallace MT. The visually responsive neuron and beyond: multisensory integration in cat and monkey. Prog Brain Res. 1993;95:79–90. [PubMed]
  • Sumby WH, Pollack I. Visual contribution to speech intelligibility in noise. J Acoust Soc Am. 1954;26:212–215.
  • Sweet RA, Dorph-Petersen KA, Lewis DA. Mapping auditory core, lateral belt, and parabelt cortices in the human superior temporal gyrus. J Comp Neurol. 2005;491:270–89. [PubMed]
  • Tuite K. The production of gesture. Semiotica. 1993;93:83–105.
  • Warren JD, Jennings AR, Griffiths TD. Analysis of the spectral envelope of sounds by the human brain. Neuroimage. 2005;24:1052–7. [PubMed]
  • Westbury CF, Zatorre RJ, Evans AC. Quantifying variability in the planum temporale: a probability map. Cereb Cortex. 1999;9:392–405. [PubMed]
  • Willems RM, Hagoort P. Neural evidence for the interplay between language, gesture, and action: a review. Brain Lang. 2007;101:278–89. [PubMed]
  • Willems RM, Ozyurek A, Hagoort P. When Language Meets Action: The Neural Integration of Gesture and Speech. Cereb Cortex. 2006 doi: 10.1093/cercor/bhl141. [PubMed] [Cross Ref]
  • Wilson SM, Molnar-Szakacs I, Iacoboni M. Beyond Superior Temporal Cortex: Intersubject Correlations in Narrative Speech Comprehension. Cereb Cortex. 2007 doi: 10.1093/cercor/bhm049. [PubMed] [Cross Ref]
  • Wu YC, Coulson S. Meaningful gestures: electrophysiological indices of iconic gesture comprehension. Psychophysiology. 2005;42:654–67. [PubMed]
  • Yasinnik Y, Renwick M, Shattuck-Hufnagel S. The Timing of Speech-Accompanying Gestures with Respect to Prosody. Proceedings of Sound to Sense, MIT 2004
  • Zatorre RJ, Belin P, Penhune VB. Structure and function of auditory cortex: music and speech. Trends Cogn Sci. 2002;6:37–46. [PubMed]