Manual gestures can take many forms, from those accompanying speech to constituents of a full-fledged language such as American Sign Language (ASL) (McNeill, 1992). In Kendon’s (1988) continuum of gestures (so termed by McNeill, 1992), speech-accompanying gesticulations morph into language-like gestures, which are transformed over time into pantomimes, then emblems, and finally signs of a full-fledged language. As we move along this continuum from gesticulation to sign languages, the presence of speech declines, the presence of language properties increases, and idiosyncratic gestures are transformed into socially regulated signs. McNeill (2005) has described four versions of this continuum, varying in terms of the relationship of the gestures to speech, linguistic properties, conventions, and semiotic properties. In the present study, we investigated the neural bases of different hand gestures that varied along a linguistic-semantic continuum, using functional magnetic resonance imaging (fMRI) of Deaf participants who were native signers of ASL.
The first goal of the study was to provide greater insight into the neural bases of gestural processing by Deaf participants, specifically, whether they ordered the gestures used in the study along a continuum. Four types of gestures were used: linguistically meaningful ASL signs, pseudo-linguistic meaningful gestures such as emblems, pseudo-linguistic non-meaningful gestures, and non-linguistic non-meaningful gestures. The ASL signs were whole-word gestures (tagged as ASL for the rest of the article); the emblematic gestures were ‘thumbs-up’/‘thumbs-down’ gestures (EMB); the pseudo-ASL gestures (pASL) were made-up ASL-like gestures that, like pseudo-words, could be part of the language but had no meaning; and the non-meaningful non-linguistic gestures (NLNM) were made-up gestures that could not be viewed as part of the ASL repertoire1
. The gestures therefore formed a possible continuum, NLNM → pASL → EMB → ASL, with linguistic and semantic features increasing from the NLNM end to the ASL end. The positions of NLNM and ASL are fixed at the extreme ends of this continuum; the positions of pASL and EMB are not. The latter two could be interchanged within the continuum, depending on whether the semantic dimension is the primary focus (EMB is meaningful and hence most like ASL) or the linguistic dimension is being tested (we assume the linguistic properties of pASL, and not EMB, are closer to those of ASL), leading to two different continua. In defining the “linguistic” nature of gestures, we use the same reasoning put forward by MacSweeney et al. (2004) and Petitto et al. (2000): signed and spoken languages exhibit similar levels of linguistic organization, and single gestures may be defined as linguistic if they follow the phonological rules of a language and are used contrastively. MacSweeney et al. (2004) studied the neural bases of a manual-brachial signaling code used by racecourse bookies and defined this code as nonlinguistic because it does not have an “internal contrastive structure based on featural parameters”. By this definition, the pASL gestures, because they are modified ASL signs, are considered linguistic, although they are not meaningful. Defining the linguistic nature of emblems is more complicated. McNeill (1992) has argued that these gestures have some linguistic properties. Emblems are isolated, stand-alone gestures that do not occur naturally as part of a sentence and are understood by both signers and non-signers within a cultural group. Although there is some ambiguity, for the purposes of the study we categorize them as nonlinguistic. The types of gestures used in the study are shown in Figure 1. The gestures varied along the linguistic dimension, with ASL and pASL having greater linguistic properties than EMB and NLNM.
Figure 1. Pictures of the four types of gestures used as stimuli in the study: (a) ASL (American Sign Language), (b) emblems (EMB), (c) pseudo-ASL or ASL-like non-meaningful gestures (pASL), and (d) non-meaningful and non-linguistic gestures (NLNM).
The four gesture types used in our study also differed along the semantic dimension: two types were meaningful (ASL, EMB), whereas the other two (pASL, NLNM) were non-meaningful. The ASL gestures used in the study were whole-word gestures (e.g., ‘cop’, ‘guilty’). The emblems used in the study were variants of the ‘thumbs-up’ and ‘thumbs-down’ gestures. Emblems are stand-alone hand gestures that convey semantic information and are known to members of a particular culture; they are analogous to spoken words. However, they differ from spoken words or signs of a manual language in that they do not occur in syntactic sequences (Venus and Canter, 1987) and do not have a fully contrastive system (McNeill, 2005). Emblematic gestures have been characterized as socially regulated signs that are one step removed from a full-fledged sign language (Kendon, 1988; McNeill, 1992). They are understood equally well by both hearing and Deaf persons belonging to a specific culture. Recently, several researchers have begun to investigate how emblems are processed, mostly in the hearing population. Molnar-Szakacs et al. (2007) found that specific cultural and biological factors modulated the neural response (measured as corticospinal excitability) of their participants during processing of culture-specific emblems. Gunter and Bach (2004) conducted an ERP study with hearing participants to dissociate the processing of meaningful emblems from that of non-meaningful hand gestures. They found that the ERPs elicited by the meaningful gestures were similar to those elicited by abstract words.
The stimuli varied along the linguistic and semantic dimensions, and it is possible that, rather than ordering these gestures systematically along a continuum, the Deaf participants viewed them as belonging to two broad categories, for instance, linguistic versus nonlinguistic or meaningful versus non-meaningful. The fMRI results of the study will help to clarify the differences in the processing of the four types of gestures. These differences will be tested in three ways: (1) along the semantic dimension, by collapsing the EMB and ASL gestures at one end and the pASL and NLNM gestures at the other; (2) along the linguistic dimension, by considering the ASL and pASL gestures at one end and the EMB and NLNM gestures at the other; and (3) as a gradual ordering of the stimuli along a continuum, which would provide support for one of the Kendon-McNeill continua. Note that the Kendon-McNeill continua include pantomime-like gestures, which were not incorporated in the present study because pantomimes tend to be longer in duration than the standardized ASL gestures and emblems used here.
A second goal of the study was to investigate the neural bases of gestural processing in the context of two tasks: a category discrimination task and an identity discrimination task. Our hypothesis was that the two tasks involve different levels of processing: the identity discrimination task engages neural sources and mechanisms involved in phonological-level processing, whereas the category discrimination task engages those involved in whole-“word” or semantic-level processing. Such differences in processing would translate into differences in brain activation patterns. Categorization lies at the transition between sensory and higher-level cognitive processing. In an earlier study (Husain et al., 2006) using auditory stimuli that varied along the dimensions of speech and acoustic speed, we found a set of core regions that were activated to a greater extent for all stimuli in the category discrimination task than in the auditory discrimination task: the bilateral middle and inferior frontal gyri, the dorsomedial frontal gyrus, and the inferior parietal lobule. In that study, auditory discrimination occurred at the level of processing acoustic features such as transients, and category discrimination occurred at the level of phonological processing. There were lateralization differences in these regions, but these differences covaried more with the acoustic features (fast or slow transients) than with the speech/nonspeech nature of the stimuli. Additionally, reaction times were longer for the categorization task than for the discrimination task for all stimulus types. The second goal of the present study is to determine whether the core regions highlighted in the auditory study are also activated during the processing of manual gestures, and if so, what level of processing (i.e., phonological or semantic) engages these core regions.