J Commun Disord. Author manuscript; available in PMC 2016 July 2.
PMCID: PMC4530578; NIHMSID: NIHMS710514

Co-verbal gestures among speakers with aphasia: Influence of aphasia severity, linguistic and semantic skills, and hemiplegia on gesture employment in oral discourse

Abstract

The use of co-verbal gestures is common in human communication and has been reported to assist word retrieval and to facilitate verbal interactions. This study systematically investigated the impact of aphasia severity, integrity of semantic processing, and hemiplegia on the use of co-verbal gestures, with reference to gesture forms and functions, in 131 normal speakers and 48 individuals with aphasia and their matched controls. All participants were native Cantonese speakers. Greater aphasia severity and verbal-semantic impairment were associated with significantly more co-verbal gestures. However, there was no relationship between right-sided hemiplegia and gesture employment. Moreover, significantly more gestures were employed by the speakers with aphasia, although about 10% of them did not gesture at all. Among those who used gestures, content-carrying gestures, including iconic, metaphoric, and deictic gestures as well as emblems, served the functions of enhancing the language content and providing information additional to it. As for the non-content-carrying gestures, beats were used primarily for reinforcing speech prosody or guiding speech flow, while non-identifiable gestures were associated with assisting lexical retrieval or served no specific function. These findings enhance our understanding of the forms and functions of co-verbal gestures in aphasic discourse production. Speech-language pathologists may also refer to the current annotation system and results to guide clinical evaluation and remediation of gesture use in aphasia.

Keywords: co-verbal gestures, aphasia severity, verbal semantic skills, hemiplegia, discourse

1. Introduction

Gestures are used by humans as a natural non-verbal means of communication. They generally refer to arm, hand, or bodily movements for expressing ideas, intentions, or personal and emotional feelings (Knapp & Hall, 1997) and can be culturally specific (Kendon, 1997; McNeill, 1992). McNeill (1992) offered a more precise definition: gestures are the arm and hand movements that synchronize with speech. Co-verbal gestures are commonly found in everyday verbal interactions and serve the purposes of supplementing language content, regulating speech flow, maintaining attention between a speaker and listener during a conversation, shifting a conversational topic, facilitating the continuation of a topic, and emphasizing a particular topic or content (Kendon, 2004; Mather, 2005).

1.1 Connection between gestures and language production

It has been reported in the literature that gesture and language production are highly related and may originate from a single process (Krauss, Chen, & Gottesman, 2000). In particular, when a lexical item is activated at the stage of conceptualization, its corresponding gesture can originate at the same time, interacting and temporally synchronizing with the language output. In other words, gesture use among typical speakers can facilitate lexical retrieval during spontaneous speech production, at least at the conceptual level where mental lexicons are activated (Hadar & Butterworth, 1997; Krauss & Hadar, 1999). Using functional imaging data, Xu, Gannon, Emmorey, Jason, and Braun (2009) suggested that these two forms of human communication are processed by the same neural system in the human brain. This view of a close connection between gesture and language output is consistent with, and further supported by, an earlier report of a higher proportion of gestures being associated with retrieving lexical items of lower familiarity (Morrel-Samuels & Krauss, 1992). Rauscher, Krauss, and Chen (1996) also emphasized the positive effects of gesture specific to lexical access in normal speakers: when participants were restricted from using arm and hand movements, an increase in non-juncture filled pauses and a decrease in speech fluency were found for verbal expression involving spatial content. Moreover, studies examining the relationship between gesture use and language competency among normal speakers have revealed that individuals with lower overall lexical diversity at the discourse level tended to produce more co-verbal gestures (Crowder, 1996).

However, not all studies designed to test this Lexical Retrieval Hypothesis have supported it (see, for example, Beattie & Coughlan, 1999). Drawing on such contradictory findings, some researchers have instead proposed that gestures are employed to package information conceptually before it is coded into a linguistic form for oral output. For example, Kita (2000) described the Information Packaging Hypothesis, which assumes that speakers are active in employing gestures (and, therefore, intend to use gestures) during language production. Instead of a simple concurrent activation of gestural and linguistic information (as well as maintenance of the activated spatial information), co-verbal gestures are produced to structure and package linguistic information into units in the language formulation process. This view was further supported by Kita and Özyürek (2003) and Özyürek, Kita, Allen, Furman, and Brown (2005), who reported that the complexity of gestures employed in a task of orally describing an object's motion paralleled speakers' use of single or multiple clauses. Speakers who produced a single clause to describe the manner and path of a motion tended to use a single gesture, while those who produced multiple clauses tended to employ separate gestures in the task. Moreover, gestures play a primary role in enhancing communication by providing extra information to the listener (see review by de Ruiter, 2006). According to de Ruiter's conclusion, gestures serve as a communicative device that can provide information to compensate for verbal breakdown in language output.

1.2 Independent annotation of gesture forms and functions

Although a relationship between gesture use and language production is apparent, coding gestures with respect to form and function, and quantifying how they may be related to language processes, is far from straightforward. Variations among different gesture coding systems have complicated the annotation and interpretation of gesture use, as well as of gesture function, during production of spontaneous speech (Scharp, Tompkins, & Iverson, 2007). Kong, Law, Kwan, Lai, and Lam (2015) have recently proposed a gesture classification framework that annotates co-verbal gestures independently in terms of their forms and functions. This was motivated by the fact that mixed coding of gesture forms and functions within one quantification system, a characteristic of many existing frameworks (see review by Kong et al., 2015), can be conceptually problematic and may create confusion when it comes to interpreting gesture employment. This is especially the case when a particular gesture form carries more than one function under different communication conditions. In the Kong et al. framework, there are six forms of gestures: (1) iconic gestures that model the shape of an object or the motion of an action, (2) metaphoric gestures that show pictorial content to communicate an abstract idea, (3) deictic gestures, such as familiar pointing gestures, that indicate objects in conversational space, (4) emblems with standard properties, language-like features, and culturally specific conventionalized meanings, (5) beats, i.e., rhythmic beating of a finger, hand, or arm in the form of a simple flick or an up-and-down or back-and-forth movement, and (6) non-identifiable gestures, i.e., finger, hand, and/or arm movements that are uncodable because their connection to the language content is ambiguous or lacks a direct meaning. While the first four forms are content-carrying, the other two are non-content-carrying.

In the dimension of functions, Kong et al. (2015) classified gestures by their primary function in relation to the language content: (1) providing additional information to the message conveyed, i.e., the content of the gesture gives additional information related to the speech, (2) enhancing the language content, i.e., gestures that signal the same meaning as the language content and potentially help a listener decode it, (3) providing alternative means of communication, i.e., gestures that carry meaning or information not included in the language content, (4) guiding and controlling the speech flow, i.e., gestures that reinforce the speech rhythm, with the rate of gesture movement synchronized with the speech pace, (5) reinforcing the intonation or prosody of speech, i.e., gestures by which a speaker intensifies or accentuates a target element in the speech, (6) assisting lexical retrieval, i.e., gestures that facilitate lexical access at times of word-finding difficulty, (7) aiding sentence re-construction, i.e., gestures used when a speaker modifies or refines a syntactic structure, and (8) no specific function, i.e., gestures that do not show a specific function in relation to the language content or that serve unclassifiable functions other than the ones mentioned above. More information about these gesture forms and functions, together with examples, can be found in Appendix A.
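
To make this two-tier scheme concrete, the sketch below shows one possible way to represent independent form and function annotations in code. It is a minimal illustration in Python; the type and field names are our own and are not part of the Kong et al. (2015) framework or of the ELAN tooling.

    from dataclasses import dataclass
    from enum import Enum

    class GestureForm(Enum):
        ICONIC = 1            # content-carrying
        METAPHORIC = 2        # content-carrying
        DEICTIC = 3           # content-carrying
        EMBLEM = 4            # content-carrying
        BEAT = 5              # non-content-carrying
        NON_IDENTIFIABLE = 6  # non-content-carrying

    class GestureFunction(Enum):
        ADDITIONAL_INFO = 1
        ENHANCE_CONTENT = 2
        ALTERNATIVE_MEANS = 3
        GUIDE_SPEECH_FLOW = 4
        REINFORCE_PROSODY = 5
        ASSIST_LEXICAL_RETRIEVAL = 6
        AID_SENTENCE_RECONSTRUCTION = 7
        NO_SPECIFIC_FUNCTION = 8

    CONTENT_CARRYING = {GestureForm.ICONIC, GestureForm.METAPHORIC,
                        GestureForm.DEICTIC, GestureForm.EMBLEM}

    @dataclass
    class CoverbalGesture:
        start_ms: int              # onset of the movement
        end_ms: int                # hand(s) back at rest, or unit boundary
        form: GestureForm          # coded independently of ...
        function: GestureFunction  # ... its primary function

        @property
        def is_content_carrying(self) -> bool:
            return self.form in CONTENT_CARRYING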

When determining the form of a gesture, Kong et al. (2015) considered its relation to the corresponding verbal production. In situations where a specific co-verbal gesture served more than one communicative purpose, analysis and final annotation of its function were based on the primary function in relation to the language content. Kong et al. (2015) examined the videos of 119 healthy native Cantonese Chinese speakers (stratified into three age and two education levels) selected from the Database of Speech and GEsture (DoSaGE), focusing on gesture employment during oral discourse tasks. Based on the aforementioned coding system, it was found that about one third of the normal speakers did not use any gestures. The results also indicated that content-carrying gestures were mainly used for helping listeners to decode language content, while non-content-carrying gestures primarily served the purpose of emphasizing language content. Moreover, older speakers tended to use gestures more frequently, and speakers with a higher level of language proficiency produced fewer gestures. However, how well this annotation system applies to speakers who have problems expressing themselves verbally, especially those with a severe degree of aphasia, has not been investigated.

1.3 Use of co-verbal gestures and aphasia

According to the Sketch model (de Ruiter, 2000), which is an extension of Levelt's (1989) production model, the routes of gesture and language production both originate from the conceptualizing stage, similar to the initial stage proposed in many models of lexical and sentence production (e.g., Garrett, 1975). The condition of acquired language impairment, such as aphasia, provides investigators with a unique opportunity to elucidate the possible (non-)communicative functions gesture may serve. More specifically, if both gesture and speech are employed at the same time during oral expression, speakers with aphasia, who tend to have a diminished capacity for oral production, should rely on the gestural modality to assist communication. The interaction between gesture and speech suggested by the Sketch model has been further supported by Feyereisen (1987), who reported more co-verbal gestures among speakers with aphasia in relation to their less informative speech.

The compensatory role of gestures in aphasia has also been proposed by a number of gesture scholars who compared gestural profiles between individuals with and without aphasia and concluded that speakers with impaired oral ability secondary to language deficits tended to use gestures to compensate for their difficulties. Specifically, with reference to the quantity and types of gestures elicited from a conversational interview, Le May, David, and Thomas (1988) found that hand gestures (predominantly batons, ideographic, and deictic gestures) were employed most by speakers with Broca's aphasia and least by non-neurologically impaired controls. Speakers with Wernicke's aphasia also demonstrated significantly more use of kinetographic gestures (similar to the iconic gestures in Kong et al.'s (2015) system) than their non-aphasic counterparts. Concerning the employment of gestures among individuals with aphasia, Hogrefe, Ziegler, Weidinger, and Goldenberg (2012) have also recently reported that speakers with severe aphasia tended to employ gestures more as a strategy to convey messages through an alternative means of communication. A follow-up study by Hogrefe, Ziegler, Wiesmayer, Weidinger, and Goldenberg (2013) revealed that some speakers with aphasia used gestures spontaneously to compensate for their limited verbal output and, more interestingly, that these co-verbal gestures conveyed more information than the corresponding spoken expression. Parallel to Le May et al.'s (1988) findings of significantly more kinetographs used by speakers with aphasia, Herrmann, Reichle, Lucius-Hoene, Wallesch, and Johannsen-Horbach (1988) reported that speakers with aphasia gestured significantly more frequently, and for a significantly longer period of time, than their normal conversational partners. More importantly, speakers with severe aphasia were found to use gestures in a manner different from the controls; they used significantly fewer language-focused hand movements but significantly more codified gestures than normal speakers, such as emblems containing direct non-verbal translations of a word or phrase that could represent a speaker's mind as a lexical item (Poggi, 2008). These gestures mainly functioned as speech substitutes.

However, one should note that contradictory results have also been reported in the literature (e.g., Feyereisen, 1983; Sekine & Rose, 2013; Sekine, Rose, Foster, Attard, & Lanyon, 2013). In particular, Cicone, Wapner, Foldi, Zurif, and Gardner (1979) found that speakers with relatively preserved expressive language ability produced more gestures than those with impaired expressive language. Note that instead of simply counting the occurrence of co-verbal gestures, the authors compared the physical parameters of the gestures used and how pragmatically appropriately these gestures were employed during the communication task across subjects. Moreover, Glosser, Wiener, and Kaplan (1986) suggested that speakers with a moderate degree of aphasia produced less meaningful gestures than those with a mild degree of aphasia, irrespective of the quantity of gestures produced. It is, therefore, reasonable to conclude that the compensatory role of gestures employed by individuals with aphasia is not as straightforward as one may assume, at least when the semantic, physical, and/or pragmatic properties of gestures are considered.

According to Sekine and Rose (2013), aphasia type and speech fluency have an impact on gesture production. A higher proportion of speakers with a lower degree of speech fluency were found to use concrete deictic gestures and pantomime in their narratives. Furthermore, specific patterns of gesture production were found according to aphasia type. For example, speakers with Broca's and conduction aphasia tended to produce gestures that were more iconic, characterized by a depiction of concrete images or demonstration of actions. Speakers with Wernicke's aphasia, on the other hand, used gestures that were more abstract, such as metaphoric or referential gestures. In contrast, those with anomic and transcortical motor aphasia demonstrated a profile of gesture employment similar to that of unimpaired control speakers, who used fewer iconic gestures, although speakers with transcortical motor aphasia exhibited more concrete deictic and pointing-to-self gestures. In a related study by Sekine et al. (2013), a relationship between the frequency of gesture production and aphasia severity was also reported. Specifically, speakers with a higher degree of severity, as reflected by a lower Aphasia Quotient on the Western Aphasia Battery, tended to use fewer referential but more concrete deictic gestures. Furthermore, speakers who were more fluent tended to produce a higher number of words and gestures.

1.4 Influence of (verbal-) semantic impairment and hemiparesis on gesture use

In any model of language processing, semantic processes play a central role in all modalities, including reading, writing, and speaking as well as gesture use (Hillis, 2001). An impaired verbal semantic system among speakers with aphasia can, therefore, hinder their use of gestures. However, only a few studies have examined the relationship between the use of gestures and semantic dysfunction secondary to aphasia; these studies have mainly investigated the use of gestures in non-verbal semantic tasks to demonstrate the impact of impaired semantic processing. For example, Fucetola, Connor, Perry, Leo, Tucker, and Corbetta (2006) found that non-verbal semantic processing was strongly related to the ability of speakers with aphasia to use gestures and predicted their employment of gestures as a form of functional communication. Hogrefe et al. (2012) also reported that impaired non-verbal semantic processing limited the diversity of hand gestures, which corresponded to the amount of content to convey and the richness of the transmitted information. The possible relationships among gesture employment, verbal semantic skills, and non-verbal semantic processing are, therefore, apparent, although only a few studies have explicitly measured and reported their association.

Two conflicting positions exist concerning the relationship between hand dominance and spontaneous gesture use. The first claims that gestures used during verbal interactions are usually mediated by the dominant hand (McNeill, 1992). Typical right-handers tend to rely on the left hemisphere to control language functions and produce gestures with their dominant right hand (Kimura, 1973). Lausberg and Kita (2003), in contrast, provided a different account based on a study of hand preference in co-verbal iconic gestures. Specifically, they reported that typical right-handed speakers did not differ in displaying left- versus right-hand-mediated iconic gestures, irrespective of whether they were asked to provide an oral narration or to perform a silent gestural demonstration of animations with two moving objects. The authors drew the opposite conclusion that the choice of the left or right hand was, instead, determined more by the semantic aspects of verbal output. According to Kimura and McNeill, right-sided hemiplegia, which usually coincides with aphasia due to left hemispheric lesions, can significantly restrict the employment of co-verbal gestures. One of the very few studies that examined this issue suggested that paralysis of the dominant hand would result in a reduction in the number of gesture components and in the complexity of the spatial configurations of these gestures (Pedelty, 1987). However, for individuals with stroke-induced aphasia who were premorbidly right-handed, opposite findings have also been reported, revealing a lack of influence of hemiparesis on the use of hand gestures (Hogrefe et al., 2012).

1.5 Aims

The aim of this study was to systematically investigate how gesture use differed between speakers with and without aphasia using Kong et al.'s (2015) coding system. Specifically, we (1) examined the distributions of different gesture forms and functions in the normal and aphasic groups, (2) compared the frequency of gestures used between the two groups, (3) assessed whether gestures employed by individuals with aphasia differed as a function of aphasia severity, integrity of the semantic system, and hemiplegia, and (4) determined the inter- and intra-rater reliability of this coding system for aphasia. The results should better our understanding of the relationship between gesture use and language production and provide important insights to speech-language pathologists when assessing speakers with acquired language disorders and planning intervention for these individuals.

2. Method

The data of the current study were drawn from two Cantonese databases. One of them consisted of orthographically transcribed language samples from neurologically unimpaired male and female speakers of different ages and education levels, as well as speakers with aphasia who had suffered a single stroke as verified through neuroimaging and/or a clear medical diagnosis (Kong, Law, & Lee, 2009)1. These samples were elicited using the AphasiaBank protocol (MacWhinney, Fromm, Forbes, & Holland, 2011), adapted to the local Chinese culture, and transcribed in the Child Language ANalyses computer program (CLAN; MacWhinney, 2003). The other database, DoSaGE (Kong et al., 2015)2, contained digitized videos of the same participants, linked to and synchronized with each corresponding language sample using the EUDICO Linguistic ANnotator (ELAN; Max Planck Institute for Psycholinguistics, 2002; Lausberg & Sloetjes, 2009). The DoSaGE corpus contained three independent tiers annotating, respectively, the linguistic information of the transcript, the form of each gesture, and the function of each gesture.

2.1 Participants and data

The ‘normal’ group included 131 right-handed native speakers of Cantonese recruited in Hong Kong. None of them had any history of neurological lesions that would affect everyday communication. They were distributed into three age groups (Young: 18 to 39 years, Middle-aged: 40 to 59 years, and Senior: 60 or above) and two education levels, with secondary school as the ‘low/high’ cut-off for the two younger groups and primary school for the oldest group. Demographic information for the normal group is given in Table 1.

Table 1
Demographic Information on Normal Participants (n=131)

The ‘aphasic’ group included 48 Cantonese-speaking individuals with aphasia who were premorbidly right-handed. All of them had suffered a single stroke, as verified by neuroimaging results, and were at least six months post onset. There were 36 fluent (30 male and 6 female) and 12 non-fluent (8 male and 4 female) speakers, based on the results of the Cantonese version of the Western Aphasia Battery (CAB; Yiu, 1992). The mean CAB Aphasia Quotient (AQ) was 81.31/100 (SD = 15.15, range 40.8 to 99.0). The average age of the group was 56.13 years (SD = 9.02, range 41.33 to 85.92 years), and the average years of education were 8.65 (SD = 3.56, range 0 to 16). With reference to the Frenchay Dysarthria Assessment (Enderby & Palmer, 2008) and the Apraxia Battery for Adults (Dabul, 2000) adapted for Chinese speakers, none of them had co-morbid dysarthria or apraxia of moderate or severe grade. Their demographic information is given in Table 2. A ‘control’ group was also formed by selecting 48 participants from the normal group who were matched in age (±3 years) and education level (±5 years) with each of the aphasic subjects.

Table 2
Demographic Information on Aphasic Participants (n=48)

For each control and aphasic participant, three sets of transcripts and their ELAN files were chosen from the databases: a monologue narrating an important event in their life, story-telling of two highly familiar stories (‘The Hare and the Tortoise’ and ‘The Boy who Cried Wolf’) after presentation of picture cards, and a sequential description of the procedure of making a ham and egg sandwich, with photos of the ingredients remaining in sight during the task.

In addition to the narrative tasks, each participant in the aphasic group was administered the following assessment battery: (1) the Spoken Word-Picture Matching Test adapted for Chinese speakers (SWPM; Law, 2004) to evaluate verbal semantic abilities, (2) selected items from the Pyramid and Palm Trees Test (PPTT; Howard & Patterson, 1992) and the Associative Match Test in the Birmingham Object Recognition Battery (BORB; Riddoch & Humphreys, 1993), modified to be culturally appropriate for Chinese subjects (Law, 2004), to assess non-verbal semantic abilities, (3) object naming of selected items from the Boston Naming Test, Short Form (Kaplan et al., 2001) and action naming of selected items from the Verb Naming Test (Thompson, 2011) to estimate naming abilities, and (4) the Action Research Arm Test3 (ARAT; Lyle, 1981; Yozbatiran, Der-Yeghiaian, & Cramer, 2008) to quantify the degree of left and right upper limb hemiplegia4. Note that based on the results of the PPTT and BORB, 30 aphasic subjects were relatively unimpaired in non-verbal semantic skills.

2.2 Data analysis

Annotation of gestures in this study followed the framework of Kong et al. (2015). Specifically, a unit of gesture was defined as the duration from the start of a movement until the hand(s) returned to the resting position (McNeill, 1992). If the hand(s) did not return to the resting position, gestures were divided by either a pause in the movement or an obvious change in shape or trajectory (Jacobs & Garnham, 2007). Self-adapting motions during the flow of speech, such as touching the face or moving a hand from the lap to the desk, were excluded from the analysis as they lack semantic attachment to the language content (Jacobs & Garnham, 2007). Each co-verbal gesture that appeared in the narrative tasks was then independently coded with one of the six forms as well as one of the eight functions. The frequency of each gesture form and function was obtained for each participant in ELAN. The distributions of gestures used in different forms and functions by the normal and aphasic groups (Aim 1 of the study) were then formulated and compared.
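
As a concrete illustration of these segmentation rules, the following Python sketch splits a stream of pre-annotated movement events into gesture units. The event fields are a simplification of our own; they do not reflect the actual ELAN workflow.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class MovementEvent:
        t_ms: int
        at_rest: bool        # hand(s) back at the resting position
        paused: bool         # pause in the movement
        shape_changed: bool  # obvious change in shape or trajectory
        self_adaptor: bool   # e.g., touching the face; excluded

    def segment_gesture_units(events: List[MovementEvent]):
        """A unit runs from movement onset until the hand(s) return to
        rest; without a return to rest, it is cut at a pause or at an
        obvious change in shape or trajectory."""
        units, current = [], []
        for e in events:
            if e.self_adaptor:
                continue  # self-adapting motions lack semantic attachment
            current.append(e)
            if e.at_rest or e.paused or e.shape_changed:
                units.append(current)
                current = []
        if current:  # movement still ongoing at the end of the recording
            units.append(current)
        return units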

Linguistic analysis was also conducted for all language samples. After segmenting each sample into utterances to obtain the total utterance count, each utterance was classified as either complete or incomplete (i.e., ungrammatical, ill-formed, or with missing elements). All complete utterances were further divided into simple versus compound and complex utterances. The following linguistic measures were then computed for each sample: (1) type-token ratio (TTR), (2) percentage of simple utterances, (3) percentage of complete utterances, (4) percentage of regulators, and (5) percentage of dysfluency. Details of the utterance classification and the above linguistic measures, with examples, are summarized in Appendix B. A ratio of the total number of gestures per word across all discourse tasks was also calculated as a measure of the frequency of gesture use5.
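
The two frequency measures used below can be stated compactly in code. This is a minimal sketch assuming a word-segmented transcript; the function names are illustrative only.

    def type_token_ratio(words):
        """TTR = total number of different words / total number of words."""
        return len(set(words)) / len(words) if words else 0.0

    def gesture_word_ratio(n_gestures, n_words):
        """Frequency of gesture use: gestures per word, pooled across
        the three discourse tasks."""
        return n_gestures / n_words if n_words else 0.0

    # For example, 54 gestures over 300 words give a ratio of 0.18,
    # which equals the aphasic group mean reported in Section 3.1.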

2.3 Statistical Analysis

The normality of the distribution of all dependent variables was tested using the Kolmogorov-Smirnov test (Field, 2009). Box-Cox transformations (Box & Cox, 1964) were employed to normalize negatively skewed data when the data were not normally distributed due to zeros in gesture counts (Osborne, 2010). A natural log transformation was applied when positive skewness was observed (Field, 2009). Non-parametric statistical analyses were implemented if the assumption of normal distribution was still violated after data transformation; otherwise, parametric tests were conducted.
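
A minimal sketch of this screening pipeline, assuming SciPy, is given below. The shift applied before the Box-Cox step (which requires strictly positive data) and the +1 guard before the log are implementation details of the sketch, not choices reported here.

    import numpy as np
    from scipy import stats

    def normalize_if_needed(x, alpha=0.05):
        """Test a dependent variable for normality and transform it
        according to the direction of skew if the test fails."""
        x = np.asarray(x, dtype=float)
        # Kolmogorov-Smirnov test against a normal fitted to the sample
        _, p = stats.kstest(x, 'norm', args=(x.mean(), x.std(ddof=1)))
        if p >= alpha:
            return x, 'none'              # approximately normal
        if stats.skew(x) < 0:             # negative skewness
            shifted = x - x.min() + 1     # Box-Cox requires x > 0
            transformed, _ = stats.boxcox(shifted)
            return transformed, 'box-cox'
        return np.log(x + 1), 'log'       # positive skewness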

For the second aim of the study, the Mann-Whitney test was employed to compare gesture use between the aphasic and control groups (n = 48 for both groups), based on the frequency of gesture use, i.e., the number of gestures per word (gesture/word ratio), across all discourse tasks. The same test was used to determine the effect of hemiplegia (Aim 3) on gesture use by comparing subjects with hemiplegia, who scored 0 on the right-hand performance of the ARAT (R-ARAT; n = 16), and those without hemiplegia (R-ARAT > 53; n = 18).
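
Both group comparisons reduce to the same call. The sketch below uses scipy.stats; the argument arrays are placeholders for the observed gesture/word ratios.

    from scipy.stats import mannwhitneyu

    def compare_groups(ratios_a, ratios_b):
        """Two-sided Mann-Whitney U test on the gesture/word ratios of
        two independent groups (aphasic vs. control, or hemiplegic
        R-ARAT = 0 vs. non-hemiplegic R-ARAT > 53)."""
        return mannwhitneyu(ratios_a, ratios_b, alternative='two-sided')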

To investigate the effect of aphasia severity on gesture use (Aim 3), Spearman's rank-order correlations were computed between the CAB AQ and the gesture/word ratio. For the 30 aphasic subjects who were relatively unimpaired in non-verbal semantic skills, as reflected by the results of the PPTT and BORB, the dissociation between impaired linguistic and relatively intact non-verbal semantic processing allowed an investigation of the relationship between the degree of semantic integrity and co-verbal gesture use (Aim 3). To do so, Spearman's rank-order correlations were performed between their scores on the object and action naming tasks and the gesture/word ratio across all discourse tasks.
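
The correlational analyses are likewise one call in scipy.stats; in this sketch the arrays stand in for the observed scores.

    from scipy.stats import spearmanr

    def severity_vs_gesture(scores, gesture_word_ratios):
        """Spearman's rank-order correlation between CAB AQ (or, for the
        semantic analysis, object/action naming scores) and the
        gesture/word ratio. Returns (rho, p)."""
        return spearmanr(scores, gesture_word_ratios)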

Concerning the relationship between linguistic performance and the use of co-verbal gestures among speakers with aphasia, the 48 subjects were ranked according to their gesture/word ratio. The top 33.3% of the subjects (n = 16) were regarded as speakers with a high frequency of gesture use (High-Gesture group), while the bottom 33.3% were regarded as the Low-Gesture group. The two groups were then compared using independent t-tests with the five language parameters as dependent variables. A Bonferroni-adjusted alpha of 0.01 (0.05/5) was applied to control for Type I errors.
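
A sketch of this grouping and comparison follows, assuming NumPy and SciPy; the dictionary of language parameters is a placeholder for the five measures defined above.

    import numpy as np
    from scipy.stats import ttest_ind

    def compare_gesture_tertiles(gesture_ratios, language_measures,
                                 alpha=0.05):
        """Rank subjects by gesture/word ratio, keep the top and bottom
        thirds (High- vs. Low-Gesture groups), and run one independent
        t-test per language parameter under a Bonferroni-adjusted alpha."""
        order = np.argsort(gesture_ratios)
        n = len(gesture_ratios) // 3               # 48 // 3 = 16 per group
        low, high = order[:n], order[-n:]
        adjusted = alpha / len(language_measures)  # 0.05 / 5 = 0.01
        results = {}
        for name, values in language_measures.items():
            values = np.asarray(values)
            t, p = ttest_ind(values[high], values[low])
            results[name] = (t, p, p < adjusted)
        return results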

Finally, for Aim 4 of the study on coding reliability, 10% of the data (five sets of aphasic data) were randomly selected and re-analyzed by author WW to obtain the intra-rater reliability using Kendall's tau coefficient. The same set of data was independently coded by a speech therapy student to obtain the inter-rater reliability. Point-to-point agreement was also calculated to estimate the reliability of the form and function annotations.
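
The two reliability indices might be computed as below. This is a hedged sketch: we assume Kendall's tau is taken over paired per-category frequency counts and point-to-point agreement over the gesture-by-gesture codes, consistent with the description above but not necessarily the exact procedure used.

    from scipy.stats import kendalltau

    def coding_reliability(counts_r1, counts_r2, codes_r1, codes_r2):
        """Kendall's tau over two raters' per-category frequency counts,
        plus point-to-point agreement over the code each rater assigned
        to every individual gesture."""
        tau, p = kendalltau(counts_r1, counts_r2)
        agreement = (sum(a == b for a, b in zip(codes_r1, codes_r2))
                     / len(codes_r1))
        return tau, p, agreement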

3. Results

3.1 General results

A total of 3242 and 3249 gestures were annotated from the normal and aphasic groups, respectively. About 35% of the normal speakers (n=46) produced no gestures throughout all the discourse tasks, but only about 10% of the speakers with aphasia (one with transcortical motor and four with anomic aphasia) showed an absence of co-verbal gestures. Addressing Aim 1 of the present study, the distribution of forms and functions of gestures employed by the normal and aphasic groups is displayed in Table 3. Among those who employed gestures in the aphasic group (n=43), a higher proportion of content-carrying gestures, mainly serving the function of enhancing language content, was found as compared to the normal speakers who gestured (n=85). Concerning the non-content-carrying gestures, beats were used primarily for reinforcing speech prosody or guiding speech flow, while non-identifiable gestures were mainly associated with assisting lexical retrieval or with no specific function. Unlike in the normal group, 24.7% of the non-identifiable gestures produced by speakers with aphasia were related to assisting lexical retrieval. Results of the Mann-Whitney test also revealed that speakers with aphasia (mean=0.18, SD=0.20) had a significantly higher gesture/word ratio than their age-, gender-, and education-matched controls (mean=0.02, SD=0.03) across all discourse tasks (U=403.5, p<0.0001).

Table 3
Distribution of Forms and Functions of Gestures Employed in Normal Speakers (N = 85) and Speakers with Aphasia (N = 43)

3.2 Aphasia and gestures

Results of the Spearman's rank-order correlation indicated a negative correlation between CAB AQ and frequency of gesture use (r = −0.510, p < 0.0001); that is, speakers with more severe aphasia (lower AQ scores) tended to use more gestures during the discourse tasks. Speakers with aphasia who produced a higher percentage of complete utterances or simple utterances in their narratives also used significantly fewer gestures (Table 4). All in all, our results suggest that for speakers with aphasia who produced co-verbal gestures, more severe aphasia was associated with a higher rate of co-verbal gesture use.

Table 4
Linguistic Performance of High and Low Frequency Gesture Group

3.3 Influence of integrity of verbal-semantics and hemiplegia

Among the 30 aphasic subjects who were relatively unimpaired in non-verbal semantic skills, verbal semantics, in terms of object and action naming, was negatively related to the number of gestures per word (r=−0.507, p<0.01); subjects who scored lower on the naming tasks tended to produce gestures more frequently. Finally, hemiplegia, as quantified by the ARAT, was not found to affect the use of gestures in speakers with aphasia. No significant difference was found between speakers with (mean=0.24, SD=0.24) and without hemiplegia (mean=0.14, SD=0.17) in the gesture/word ratio (U=96.0, p=0.102).

3.4 Inter- and intra-rater reliability

Concerning the inter-rater and intra-rater reliability, Kendall's tau coefficients revealed significant correlations at p<0.05 or better for the coding of all gesture forms and functions (see Table 5). While intra-rater coefficients were higher than inter-rater ones, the annotation of beats and of the gesture functions of providing additional information, guiding speech flow, and assisting lexical retrieval had relatively lower consistency across raters. As for across-rater point-to-point agreement, percentages of 75.5% (260/345) and 71.7% (247/345) were obtained for gesture forms and functions, respectively. Across raters, more disagreements were found between beats and non-identifiable gestures (34% of these gestures mismatched) as well as between 'no specific function' and 'reinforcing language content' (24% mismatched). The within-rater point-to-point agreement was much higher: 92.8% (320/345) and 89.9% (310/345) for gesture forms and functions, respectively.

Table 5
Reliability Measures of Forms and Functions of Gesture

4. Discussion

The difference in the pattern of co-verbal gesture use related to aphasia is well established, although its contributing factors have not been systematically studied. The current study was designed to compare the use of co-verbal gestures during oral discourse tasks among speakers with and without aphasia and to explore the effects of aphasia severity, semantic integrity, and hemiplegia on gesture employment in individuals with aphasia. Our findings suggested that for the subjects who used co-verbal gestures, the average number of gestures used per subject was almost double in the aphasic group (75.56, or 3249/43) as compared to the normal group (38.14, or 3242/85). The gesture/word ratio was significantly higher for the aphasic group than for their age- and education-matched controls. This observation is in line with studies reporting a higher gesture-to-word rate for persons with aphasia (PWA) (e.g., Feyereisen, 1983; Carlomagno & Cristilli, 2006, for persons with non-fluent aphasia). It was also found that aphasia severity and verbal-semantic processing impairment, but not the degree of hemiplegia, affected the employment of gestures among speakers with aphasia.

Following the novel coding framework of independent annotation of co-verbal gestures with reference to forms and functions during verbal expression (Kong et al., 2015), we were able to reveal the difference in the proportion of content-carrying gestures between the aphasic and normal groups: 30.4% (988/3249) in the aphasic group vs. 13.1% (426/3242) in the normal group. Most of these content-carrying gestures served to enhance the language content among typical speakers. The aphasic group, on the other hand, also used a subset of these gestures (iconic and deictic) to provide additional information related to the language content. This confirms the notion that gestures can be used to compensate for naming problems in aphasia (Orgassa, 2005). In a recent review, Gullberg (2013) discussed whether gestures may substitute for speech. Within the context of gestures and verbal output forming an integrated system of communication, it was concluded that recruiting gestures as a compensatory strategy to overcome linguistic difficulties is common but not compulsory when one produces language, at least in the case of linguistically unimpaired adults and children. Instead of gesturing, speakers can use tactics such as circumlocution to overcome speech silence, especially when they are aware of missing information in the verbal output. Acknowledging the multifaceted relationship between oral output and co-verbal gestures, we agree with Gullberg (2013) that a better definition of gestural compensation would lead to a more sophisticated understanding of the role of gestures in aphasic language production.

Regarding the facilitatory effects of gestures in aphasia, the multiple functions of co-verbal deictic gestures employed by the speakers with aphasia in the present study were unique among the four types of content-carrying gestures. One may notice that about 8% of deictic gestures were employed to assist lexical retrieval while, at the same time, another 9% did not correspond to any specific function. We recognize that the presence of a stimulus picture during the task of describing the procedure of sandwich making could have led to a relatively higher percentage of deictic gestures compared with the other two discourse tasks. Deictic gestures are defined by McNeill (1992) as pointing gestures referring to locations or directions. They can be further divided into two subtypes, abstract and concrete deictics: concrete deictics refer to an object or direction in the real physical world, while abstract deictics are used to create or refer to discourse markers in the gesture space in front of the speaker's body. Whether the two subtypes of deictics play different roles in aphasic speech production, and more specifically whether they have different degrees of facilitatory effect on word finding during conversation (Lanyon & Rose, 2009), deserves further examination. As for non-content-carrying gestures, our results showed that beats were employed by both speaker groups to reinforce their speech prosody and intonation as well as to guide and control their speech flow. This is consistent with the findings of Kong et al. (2015). The fact that about one quarter of the non-identifiable gestures (24.7%) was related to assisting lexical retrieval in aphasia was surprising. According to Krauss et al. (2000), 'lexical' gestures, similar to the content-carrying gestures in the present study, play an important role in assisting word retrieval from the mental lexicon when one formulates ideas to be conveyed verbally. The use of these 'lexical' gestures is generally activated as a supplemental mechanism to facilitate spoken language. In other words, the systems for speech and gesture production are intertwined during spontaneous oral output. Our results may provide new evidence for a relationship between 'non-lexical' gestures and word-finding difficulties, but the underlying mechanism of any facilitative effect on word retrieval should be further investigated.

One type of gesture that has been reported in the literature is the interactive gesture, which functions mainly to assist the process of verbal dialogue without conveying any topical information (Bavelas, Chovil, Coates, & Roe, 1995). These gestures, also known as pragmatic gestures (Kendon, 2004), conduit gestures (McNeill, 1985), or thinking gestures (Gullberg, 2011), may occur during communication breakdown. Following the coding criteria in Kong et al. (2015), a gesture was coded with the function of assisting lexical retrieval in the present study when a speaker tried to produce a target word after gesturing at times of word-finding difficulty, as indicated by an observable delay in the retrieval attempt, verbally stating an inability to find a target word, interjections, circumlocution, word or phrase repetitions, or word substitutions within the language output. However, one may argue that because we could not easily and clearly disentangle the interactive (i.e., keeping the floor to continue the flow of verbal output or seeking help from a conversational partner) and referential (i.e., providing self-cueing to overcome anomia) nature of a gesture used in this situation, the inclusion of a new category of interactive function (currently embedded in the final category of 'no specific function') in the coding system should be considered. A more in-depth follow-up study is being conducted to further examine the interactive function of co-verbal gestures and their relationship with linguistic breakdown in spontaneous language tasks.

Regarding the employment of co-verbal gestures as a function of aphasia severity, our observation of a negative correlation between gesture use and AQ scores provides further support for the Sketch model (see de Ruiter, 2000). We demonstrated that due to a more prominent verbal deficit, albeit with relatively unimpaired non-verbal semantic skills, speakers with more severe aphasia had a diminished capacity for oral production and, therefore, relied on the gestural modality to assist communication. This is also consistent with several previous studies, such as Fucetola et al. (2006), who proposed that aphasia severity was a predictor of functional communication abilities including gestures, as well as Herrmann, Reichle, Lucius-Hoene, Wallesch, and Johannsen-Horbach (1988), who reported a higher frequency of gestures to compensate for language deficits among those with more severe aphasia. Note that the current results appear to contradict the claims of Cicone et al. (1979), Glosser, Wiener, and Kaplan (1986), and Mol, Krahmer, and van de Sandt-Koenderman (2013), who suggested that gestures tended to degrade along with verbal language in aphasia (although Mol et al.'s (2013) conclusion may be confounded by limb apraxia, whose influence on gesture production in their severely aphasic subjects was not disentangled). As the degree of verbal vs. non-verbal semantic integrity of the aphasic participants was not measured in those studies, one may ask whether the contradictory results could be related to more impaired verbal and/or non-verbal semantic systems, leading to weaker activation and/or impaired execution of gestures. Nevertheless, the close link between the processes underlying co-verbal gestures and verbal language production remains evident (de Ruiter & de Beer, 2013). One possible way to further examine the relationship between these two mechanisms is to carefully examine the timing and rate of non-verbal communication during an oral task (see Cicone et al., 1979). Alternatively, a systematic quantification of multi-modal communication among speakers with aphasia involving verbal, prosodic, and gestural properties of oral output may also help us better understand how co-verbal gestures are used in relation to incidents of linguistic breakdown due to aphasia (Duncan & Pedelty, 2007; Wilkinson, Beeke, & Maxim, 2010).

According to Mol et al. (2013), gestures found among speakers with aphasia were less informative than those produced by normal speakers, primarily because verbal and non-verbal abilities were impaired to parallel degrees in aphasia. In other words, Mol et al. suggested that speakers with aphasia might not necessarily compensate for impaired verbal expressivity by gesturing. Our findings on the distribution of aphasic gestural functions (Table 3) may provide an alternative account of the functional value of co-verbal gestures. In particular, irrespective of form, the percentages of gestures serving different purposes for enriching verbal communication were between two (e.g., enhancing language content) and twenty (e.g., assisting lexical retrieval) times higher in the aphasic group. Duncan and Pedelty (2007) have also suggested that examining the semantic content of language output together with the corresponding co-verbal gestures in a speaker's continuous narration can be informative as to the specific discourse focus of each utterance. The important role of gestures in aphasic communication is, therefore, obvious. The present results also suggest that verbal-semantic processing impairment can predict the frequency of co-verbal gesture use in speakers with aphasia who have relatively preserved non-verbal semantics. This extends previous conclusions that non-verbal semantic integrity was associated with gesture employment (Fucetola et al., 2006; Hogrefe et al., 2012). The claim by Goldin-Meadow (1999), who demonstrated that speakers employ co-verbal gestures to compensate for diminished language content for listeners, is further clarified and reinforced here.

In light of the general assumption that the use of the dominant hand to perform co-verbal gestures would be limited by right-sided weakness among our subjects with aphasia, the non-significant impact of hemiplegia on co-verbal gesture employment might seem surprising. However, it has been reported in the stroke literature that hand preference among speakers with aphasia can change as the right hemisphere compensates for the damaged left hemisphere (e.g., Foundas, Macauley, Raymer, Maher, Heilman, & Rothi, 1995). Despite the presence of right-sided hemiplegia, our speakers with aphasia seemed to have switched their hand preference from right (dominant pre-morbidly but impaired post-morbidly) to left as a regulatory mechanism for performing daily gross and fine motor functions. The present observation thus has important clinical implications for gesture-based language therapy in aphasia (e.g., Marshall, 2006; Pashek, 1997; Rodriguez, Raymer, & Rothi, 2006; Rose, 2006; Rose, Douglas, & Matyas, 2002), because one may still expect positive outcomes with the use of the post-morbid 'dominant' hand in performing communicative gestures. Improvement in practiced communicative gestures should still be a clinically valid goal for speakers with severe aphasia (Daumüller & Goldenberg, 2010).

Consistent with the inter- and intra-rater reliability results in Kong et al. (2015), the coding of gesture forms and functions in speakers with aphasia continued to demonstrate good consistency across and within raters. While individual differences exist in the employment of co-verbal gestures, we believe that including the following two components in annotation training would further improve the reliability of gestural coding and analyses. First, given the mixed terminology used in the literature to describe even the same type of gesture, a systematic review of various gestures (such as Rose, Raymer, Lanyon, & Attard, 2013) would give future users a good foundation for appreciating how we defined the six gesture forms and eight functions in the present study. Second, more examples such as those given in Appendix A, extracted from individuals with various severities or syndromes of aphasia, would provide users with more thorough information on the varieties of gestures to be coded. Such examples would be particularly useful when one needs to determine the primary function of a gesture with reference to the corresponding language content.

There are two directions in which to extend the current study. First, conversation-based language samples can be collected and analyzed with Kong et al.'s (2015) annotation framework. Although the content of such samples tends to be more open-ended, which may make direct comparisons between subjects' gestural performance and communication content difficult, the effects of various communicative contexts and of speakers'/listeners' intentions on communicative gestures could be examined. The possible limitation of eliciting more deictic gestures through the use of pictorial stimuli, as in the present study, would also be eliminated. Second, how well this framework captures the use of co-verbal gestures by individuals speaking different languages, i.e., whether the distribution of different gesture forms varies across languages or cultures, will be of interest to researchers investigating non-verbal behaviors. It has been suggested that although culturally specific gesture types exist across different languages, a speaker's cultural background has only minimal influence on the rate or pattern of gesture occurrence (Yammiyavar, Clemmensen, & Kumar, 2008). However, speakers from certain cultures (e.g., Italian) seem to have a tendency to use more body language, including co-verbal gestures, for communication than others (Graham & Argyle, 1975). Given that the current dataset of gesture distribution was derived from Cantonese-speaking individuals, further investigation into how well the results generalize to other cultures is warranted. This extension also applies to the annotation of bilingual aphasic cases to quantitatively differentiate the employment of co-verbal gestures as a function of residual, and perhaps differential, linguistic abilities in the two languages. This is especially relevant as the prevalence of bilingual aphasia increases steadily (Ansaldo & Saidi, 2014). Both of the above-mentioned aspects of gesture employment in aphasia have thus far received little attention. Systematic investigations of these areas will enhance our knowledge base of co-verbal gestures and, therefore, provide speech-language pathologists with important evidence for clinical evaluation and rehabilitation.

Highlights

  • We performed independent annotation of gesture forms and functions in aphasia
  • Significantly more gestures were employed by the speakers with aphasia
  • Aphasic gesture use enhanced language content and provided additional information
  • Aphasia severity and verbal-semantic impairment were associated with more gestures

Acknowledgements

This study was supported by a grant from the National Institutes of Health to Anthony Pak-Hin Kong (PI) and Sam-Po Law (Co-I) (project number: NIH-R01-DC010398). Special thanks to the staff members of the following organizations (in alphabetical order) for their help in subject recruitment: Christian Family Service Center (Kwun Tong Community Rehabilitation Day Center), Community Rehabilitation Network of The Hong Kong Society for Rehabilitation, Internal Aphasia Clinic at the University of Hong Kong, Hong Kong Stroke Association, Lee Quo Wei Day Rehabilitation and Care Centre of The Hong Kong Society for Rehabilitation, and Self Help Group for the Brain Damage. The authors would also like to acknowledge the invaluable contribution of the people living with aphasia who participated in this study.

Appendix A. Details and examples of the gesture coding system

The six forms of gestures were based on the classifications by Ekman and Friesen (1969), Mather (2005), and McNeill (1992). The Cantonese utterances in the original examples appeared as embedded images and are represented below by their English glosses.

Iconic: outlines the shape of an object or the motion of an action.
  • Example 1: The speaker twisted his hand in a rotary action, pretending he was turning the knob of a stove, when he said the action phrase (turning on the stove).
  • Example 2: When the speaker said the word (sleeping), he put his palm beside his ear to represent the action of sleeping.

Metaphoric: shows pictorial content of an abstract idea.
  • Example 1: When the speaker said (there is no one around me), his index finger drew a circle in the air to represent the concept of ‘around’.
  • Example 2: When the speaker said (frying both sides of the ham), he flipped his palm up and down to represent ‘both sides’.

Deictic: familiar pointing, indicating objects in conversational space.
  • Example 1: When the speaker said (he ran immediately), he pointed to the left-hand side to refer to the person who ran away.
  • Example 2: The speaker pointed to the picture of an egg while he said the word (egg).

Emblem: gestures with standard well-formed properties and language-like features that are culturally specific.
  • Example 1: When the speaker said (all gone), he opened his arms with palms facing upward to indicate ‘nothing’.
  • Example 2: The speaker patted his chest to indicate the meaning of ‘I’ when he said (when I had a stroke). This gesture is widely recognized as representing oneself.

Beat: rhythmic beating such as a simple up-and-down or back-and-forth hand or arm flick.
  • Example 1: When the speaker said (people were coming from America, Taiwan, and Singapore), he flicked his arm down rhythmically as he named the countries one by one.
  • Example 2: When the speaker said (walking step by step), his hand flicked downwards in synchrony with the word ‘step’.

Non-identifiable: uncodable gestures due to an ambiguous connection or lack of a direct meaning to the language content.
  • Example 1: The speaker flicked his hand up and down, unsynchronized with speech, throughout his description of a story.
  • Example 2: The speaker moved his hand occasionally to a random position in the air during his monologue.
The eight functions of gestures were based on classification systems of several previous studies:
Example 1Example 2
Providing substantive information
to the listener: gives information in
addition to the language content
(Goldin-Meadow, 2003)
When mentioning the action of An external file that holds a picture, illustration, etc.
Object name is nihms-710514-ig0014.jpg
(opening the door), the speaker mimicked the
action of door opening with a twisting action
to give additional information on the way the
door was opened
When saying An external file that holds a picture, illustration, etc.
Object name is nihms-710514-ig0015.jpg (I was tied in
this way), the speaker pretended to be tied by
outlining a circular motion with his hand to
providing additional information about how
he was tied
Enhancing the language content:
gives the same meaning to the
language content (Beattie & Shovelton, 2000)
When the speaker said An external file that holds a picture, illustration, etc.
Object name is nihms-710514-ig0016.jpg
An external file that holds a picture, illustration, etc.
Object name is nihms-710514-ig0017.jpg (you may eat after putting another
piece of bread on top), he pretended to put a
piece of bread on a sandwich.
When the speaker said An external file that holds a picture, illustration, etc.
Object name is nihms-710514-ig0018.jpg (I
cannot move this hand), he pointed to the
weaker arm
Providing alternative means of
communication: carries meaning in
the absence of speech (Le May et al., 1988)
The speaker put his thumb up to indicate
‘good’ without saying anything
The speaker employed the OK sign to
respond to the question of ‘Are you ready to
start?’ without any other verbal responses
Guiding and controlling the flow of speech: reinforces the rhythm of the speech (Jacobs & Garnham, 2007)
Example 1: When the speaker said [Cantonese: 'I received physiotherapy in the morning and occupational therapy in the afternoon'], he flicked his hand twice, on the words 'physiotherapy' and 'occupational therapy'.
Example 2: When the speaker said [Cantonese: 'I was discharged from the hospital on September 14th'], he flicked his hand rhythmically when mentioning the date.
Reinforcing the intonation or prosody of speech: emphasizes the meaning of the speaker's speech
Example 1: When the speaker said [Cantonese: 'I am really unhappy'], his hand flicked at every syllable to emphasize his unhappiness.
Example 2: The speaker tapped his hand on the table to emphasize the word 'wolf' when saying [Cantonese: 'The wolf really comes'].
Assisting lexical retrieval: facilitates word retrieval at times of long pauses, word-finding difficulty, interjections, and circumlocution during speech (Mayberry & Jaques, 2000)
Example 1: When the speaker said [Cantonese: 'first crack… /e/… an egg'], he pointed to the egg in the picture during the interjection of /e/.
Example 2: When the speaker said '/e/… [Cantonese: 'what is this?'] /e/…', he put up his palm and held it in the air while struggling for the target word, which was eventually produced.
Assisting sentence re-construction: modifies the syntactic structure, re-constructs a sentence, or refines a sentence structure (Alibali, Kita, & Young, 2000)
Example 1: When the speaker said [Cantonese: 'The villagers shouted… the shepherd shouted loudly that the wolf was coming'], he raised his hand and then lowered it to the table during the reformulation of the sentence.
Example 2: When the speaker said [Cantonese: 'When I had a stroke… I worked as a janitor'], he moved his palm from left to right during the reformulation of the sentence.
No specific function deduced: does not show any of the above seven functions
Example 1: While describing how he learnt calligraphy, the speaker kept moving his index finger in a circular motion with no synchronization to his sentences.
Example 2: The speaker occasionally put his palm up and down between utterances while describing his stroke story.

Appendix B. Parameters for measuring linguistic performance

II. Type-token ratio (TTR): total number of different words/total number of words.

  1. Total number of different words: each different word, excluding unintelligible utterances and bound morphemes, was counted once
  2. Total number of words: all words in the speech sample, excluding those produced during repetitions and self-corrections

III. Percentage of complete utterances: total number of complete utterances/total number of utterances

  1. Incomplete utterances are ungrammatical or ill-formed utterances, or utterances with omitted elements
  2. Complete utterances should consist of one or more clauses or a phrase in Cantonese with a specific intonation of statement or question (Ma, Ciocca, & Whitehill, 2006; Shen, 1992). Utterance types (i) to (iv) are sub-classified as simple utterances:
    • (i) An utterance with a subject and predicate (verb plus a complement or modifier), e.g., [Cantonese: 'The turtle won the race']
    • (ii) An utterance without a subject but a predicate only, e.g., [Cantonese: 'Turn on the cooker']
    • (iii) An utterance with a predicate only that is still grammatically correct in conjunction with the previous utterance, e.g., [Cantonese: 'How was the rabbit?' 'Very unhappy']
    • (iv) A single-word utterance, e.g., [Cantonese: 'Where?']
    • (v) Compound and complex utterances
      • - Compound utterances refer to utterances joining two or more simple utterances with different subjects and predicates by coordinating conjunctions (such as for, and, but, or so), e.g., [Cantonese: 'The villagers will not trust the boy again because he lied']
      • - Complex utterances refer to utterances containing one or more dependent clauses either at the beginning, middle, or end of the utterance, using subordinating conjunctions or relative pronouns, e.g., [Cantonese: 'The moral of this story is that we should not lie']
  3. Total number of utterances refers to the sum of complete and incomplete utterances.

IV. Percentage of simple utterances: It was computed by dividing the total number of simple utterances (i.e., the sum of utterance types III-2-(i) to III-2-(iv) above) by the total number of utterances

V. Percentage of regulators: Regulators are utterances used for the initiation, shifting, continuation, and termination of conversations (Mather, 2005), such as [Cantonese: 'This is it']. It was computed as the ratio of the total number of regulators to the total number of utterances

VI. Percentage of dysfluency: Dysfluency included repetitions of words or syllables, sound prolongations, pauses, self-corrections, and interjections such as /e/ and /um/ (Mayberry & Jaques, 2000). It was computed as the ratio of dysfluency incidents to the total number of utterances (the computation of measures II to VI is illustrated in the sketch below)
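To make the above definitions concrete, the following minimal sketch computes measures II to VI from utterances that have already been manually coded according to this appendix. It is illustrative only and not part of the original study; the Utterance structure and all names are hypothetical, and tokens are assumed to be pre-filtered per the definitions above (e.g., unintelligible forms and bound morphemes removed, repetitions and self-corrections excluded).

```python
# Minimal illustrative sketch (not from the original study): computes the
# Appendix B discourse measures from manually coded utterances. The
# Utterance structure and all names here are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class Utterance:
    tokens: List[str]        # words, pre-filtered per measure II
    complete: bool = False   # coded per measure III
    simple: bool = False     # coded per measure IV (types III-2-(i) to (iv))
    regulator: bool = False  # coded per measure V
    dysfluencies: int = 0    # incidents coded per measure VI

def discourse_measures(utterances: List[Utterance]) -> dict:
    all_tokens = [t for u in utterances for t in u.tokens]
    if not utterances or not all_tokens:
        raise ValueError("need at least one coded utterance with tokens")
    n_utt = len(utterances)
    return {
        # II. Type-token ratio: different words / total words
        "TTR": len(set(all_tokens)) / len(all_tokens),
        # III. Percentage of complete utterances
        "pct_complete": 100 * sum(u.complete for u in utterances) / n_utt,
        # IV. Percentage of simple utterances
        "pct_simple": 100 * sum(u.simple for u in utterances) / n_utt,
        # V. Percentage of regulators
        "pct_regulators": 100 * sum(u.regulator for u in utterances) / n_utt,
        # VI. Percentage of dysfluency: incidents / total utterances
        "pct_dysfluency": 100 * sum(u.dysfluencies for u in utterances) / n_utt,
    }
```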

Footnotes


1This database only contains orthographic transcriptions of subjects’ (unimpaired as well as aphasic speakers) verbal output and does not have video files of each subject’s performance linked to the corresponding transcripts. The number of participants in the database (Kong et al., 2009) reported in this manuscript was based on data available and analyzed at the time of this study. The total number of participants has increased since then. Permission to access the database for research or teaching purposes can be obtained from the first author.

2As with the corpus of Kong et al. (2009), the number of participants in the DoSaGE database has since increased, to about 100. Permission to access DoSaGE for research or teaching purposes can be obtained from the first author.

3There are 19 tests in the ARAT. Each arm is scored independently. Each test is given an ordinal score of 0, 1, 2, or 3, with higher values indicating better arm motor status. The total ARAT score is the sum of the 19 tests, and thus the maximum score is 57.

4Language tests and collection of discourse samples were done over three to four testing sessions, but the order of these language and discourse tasks was the same for both the aphasic and control groups (except that some assessments such as the CAB were not conducted in controls).

5Due to the large number of subjects and extensive amount of data collected for each participant, data collection and analysis was done by a team of research personnel who received adequate training from the first and second authors. Different personnel were involved in test administration and gesture and linguistic coding. Out of all the language samples in the present study, no more than 5 subjects’ data were collected, coded, and analyzed by the same research personnel.

References

  • Alibali MW, Kita S, Young AJ. Gesture and the process of speech production: We think, therefore we gesture. Language and Cognitive Processes. 2000;15(6):593–613.
  • Ansaldo AI, Saidi LG. Aphasia therapy in the age of globalization: Cross-linguistic therapy effects in bilingual aphasia. Behavioural Neurology. 2014;2014 Article ID 603085, 10 pages. doi: 10.1155/2014/603085. [PMC free article] [PubMed]
  • Bavelas JB, Chovil N, Coates L, Roe L. Gestures specialized for dialogue. Personality and Social Psychology Bulletin. 1995;21:394–405.
  • Beattie G, Coughlan J. An experimental investigation of the role of iconic gestures in lexical access using the tip-of-the-tongue phenomenon. British Journal of Psychology. 1999;90:35–56. doi: 10.1348/000712699161251. [PubMed]
  • Beattie G, Shovelton H. Iconic hand gestures and the predictability of words in context in spontaneous speech. British Journal of Psychology. 2000;91(4):473. [PubMed]
  • Box GEP, Cox DR. An analysis of transformations. Journal of the Royal Statistical Society (Series B). 1964;26:211–246.
  • Carlomagno S, Cristilli C. Semantic attributes of iconic gestures in fluent and non-fluent aphasic adults. Brain and Language. 2006;99(1):102–103.
  • Cicone M, Wapner W, Foldi N, Zurif E, Gardner H. The relation between gesture and language in aphasic communication. Brain and Language. 1979;8(3):324–349. [PubMed]
  • Crowder EM. Gestures at Work in Sense-Making Science Talk. The Journal of the Learning Sciences. 1996;5(3):173–208.
  • Dabul BL. Apraxia Battery for Adults-Second Edition (ABA-2). Pro-Ed Inc.; Austin, TX: 2000.
  • Daumüller M, Goldenberg G. Therapy to improve gestural expression in aphasia: A controlled clinical trial. Clinical Rehabilitation. 2010;24(1):55–65. [PubMed]
  • de Ruiter JP. The production of gesture and speech. In: McNeill D, editor. Language and Gesture. Cambridge University Press; Cambridge, UK: 2000. pp. 284–311.
  • de Ruiter JP. Can gesticulation help aphasic people speak, or rather, communicate? International Journal of Speech-Language Pathology. 2006;8(2):124–127.
  • de Ruiter JP, de Beer C. A critical evaluation of models of gesture and speech production for understanding gesture in aphasia. Aphasiology. 2013;27(9):1015–1030.
  • Duncan S, Pedelty LR. Discourse focus, gesture, and disfluent aphasia. In: Duncan SD, Cassell J, Levy ET, editors. Gesture and the dynamic dimension of language: Essays in honor of David McNeill. John Benjamins Publishing Company; Amsterdam: 2007. pp. 261–283.
  • Ekman P, Friesen WV. The repertoire of nonverbal behavior: Categories, origins, usage, and coding. Semiotica. 1969;1(1):49–98.
  • Enderby P, Palmer R. Frenchay Dysarthria Assessment (FDA-2). Pro-Ed Inc.; Austin, TX: 2008.
  • Feyereisen P. Manual activity during speaking in aphasic subjects. International Journal of Psychology. 1983;18:545–556.
  • Feyereisen P. Gestures and speech, interactions and separations: A reply to McNeill (1985) Psychological Review. 1987;94:492–498.
  • Field AP. Discovering statistics using SPSS. SAGE; London: 2009.
  • Foundas AL, Macauley BL, Raymer AM, Maher LM, Heilman KM, Rothi LJG. Gesture laterality in aphasic and apraxic stroke patients. Brain and Cognition. 1995;29(2):204–213. [PubMed]
  • Fucetola R, Connor LT, Perry J, Leo P, Tucker FM, Corbetta M. Aphasia severity, semantics, and depression predict functional communication in acquired aphasia. Aphasiology. 2006;20:449–461.
  • Garrett MF. Syntactic process in sentence production. Psychology of Learning and Motivation. 1975;9:133–177.
  • Glosser G, Wiener M, Kaplan E. Communicative gestures in aphasia. Brain and Language. 1986;27(2):345–359. [PubMed]
  • Goldin-Meadow S. The role of gesture in communication and thinking. Trends in Cognitive Sciences. 1999;3(11):419–429. [PubMed]
  • Goldin-Meadow S. Hearing gesture: How our hands help us think. Belknap Press of Harvard University Press; Cambridge, Mass; London: 2003.
  • Graham JA, Argyle M. A cross cultural study of the communication of extra-verbal meaning by gestures. International Journal of Psychology. 1975;10(1):57–67.
  • Gullberg M. Thinking, speaking, and gesturing about motion in more than one language. In: Pavlenko A, editor. Thinking and speaking in two languages. Multilingual Matters; Bristol: 2011. pp. 143–169.
  • Gullberg M. So you think gestures are compensatory? Reflections based on child and adult learner data. In: Mattson AF, Norrby C, editors. Language acquisition and use in multilingual contexts: Theory and practice. Media-Tryck, Lund University; Lund: 2013. pp. 39–49.
  • Hadar U, Butterworth B. Iconic gestures, imagery, and word retrieval in speech. Semiotica. 1997;115:147–172.
  • Herrmann M, Reichle T, Lucius-Hoene G, Wallesch C-W, Johannsen-Horbach H. Nonverbal communication as a compensative strategy for severely nonfluent aphasics? – A quantitative approach. Brain and Language. 1988;33(1):41–54. [PubMed]
  • Hillis AE. Cognitive neuropsychological approaches to rehabilitation of language disorders: Introduction. In: Chapey R, editor. Language intervention strategies in aphasia and related neurogenic communication disorders. Lippincott, Williams & Wilkins; Philadelphia: 2001.
  • Hogrefe K, Ziegler W, Weidinger N, Goldenberg G. Non-verbal communication in severe aphasia: Influence of aphasia, apraxia, or semantic processing? Cortex. 2012;48:952–962. [PubMed]
  • Hogrefe K, Ziegler W, Wiesmayer S, Weidinger N, Goldenberg G. The actual and potential use of gestures for communication in aphasia. Aphasiology. 2013;27(9):1070–1089.
  • Howard D, Patterson KE. The Pyramids and Palm Trees Test: A test of semantic access from words and pictures. Thames Valley Test Company; 1992.
  • Jacobs N, Garnham A. The role of conversational hand gestures in a narrative task. Journal of Memory and Language. 2007;56(2):291–303.
  • Kaplan E, Goodglass H, Weintraub S, Segal O, van Loon-Vervoorn A. Boston Naming Test. Pro-Ed Inc.; Austin, TX: 2001.
  • Kendon A. Gesture. Annual Review of Anthropology. 1997;26:109–128.
  • Kendon A. Gesture: Visible action as utterance. Cambridge University Press; New York: 2004.
  • Kimura D. Manual activity during speaking – I. Right handers. Neuropsychologia. 1973;11:45–50. [PubMed]
  • Kita S. How representational gestures help speaking. In: McNeill D, editor. Language and gesture. Cambridge University Press; Cambridge: 2000.
  • Kita S, Özyürek A. What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language. 2003;48:16–32.
  • Knapp ML, Hall JA. Nonverbal communication in human interaction. Harcourt Brace College Publishers; Fort Worth: 1997.
  • Kong APH, Law S-P, Kwan CCY, Lai C, Lam V. A coding system with independent annotations of gesture forms and functions during verbal communication: Development of a Database of Speech and GEsture (DoSaGE). Journal of Nonverbal Behavior. 2015;39(1):93–111. [PMC free article] [PubMed]
  • Kong APH, Law SP, Lee ASY. The construction of a corpus of Cantonese-aphasic-discourse: a preliminary report; Poster presented at The 2009 American Speech-Language-Hearing Association (ASHA) Convention; New Orleans, LA, USA. 2009.
  • Krauss R, Chen Y, Gottesman R. Lexical gestures and lexical access: A process model. In: McNeill D, editor. Language and gesture. Cambridge University Press; Cambridge, UK: 2000. pp. 261–283.
  • Krauss R, Hadar U. The role of speech-related arm/hand gestures in word retrieval. In: Campbell R, Messing L, editors. Gesture, speech, and sign. Oxford University Press; Oxford: 1999. pp. 63–116.
  • Lanyon L, Rose M. Do the hands have it? The facilitation effects of arm and hand gesture on word retrieval in aphasia. Aphasiology. 2009;23(7-8):809–822.
  • Lausberg H, Kita S. The content of the message influences the hand preference in co-speech gestures and in gesturing without speaking. Brain and Language. 2003;86:57–69. [PubMed]
  • Lausberg H, Sloetjes H. Coding gestural behavior with the NEUROGES-ELAN system. Behavior Research Methods. 2009;41(3):841–849. [PubMed]
  • Law S-P. Impairment to phonological processes in a Cantonese-speaking brain-damaged patient. Aphasiology. 2004;18:373–388.
  • Le May A, David R, Thomas PA. The use of spontaneous gesture by aphasic patients. Aphasiology. 1988;2(2):137–145.
  • Levelt WJM. Speaking: From intention to articulation. MIT Press; Cambridge, MA: 1989.
  • Lyle RC. A performance test for assessment of upper limb function in physical rehabilitation treatment and research. International Journal of Rehabilitation Research. 1981;4(4):483–492. [PubMed]
  • Ma JK, Ciocca V, Whitehill TL. Quantitative analysis of intonation patterns in statements and questions in Cantonese; Proceedings of the Third International Conference on Speech Prosody; Dresden, Germany. 2006. pp. 277–280.
  • MacWhinney B. The CHILDES project: Tools for analyzing talk. Lawrence Erlbaum; Hillsdale, NJ: 2003.
  • MacWhinney B, Fromm D, Forbes M, Holland A. AphasiaBank: Methods for studying discourse. Aphasiology. 2011;25:1286–1307. [PMC free article] [PubMed]
  • Marshall J. The role of gesture in aphasia therapy. International Journal of Speech-Language Pathology. 2006;8:110–114.
  • Mather SM. Ethnographic Research on the Use of Visually Based Regulators for Teachers and Interpreters. In: Metzger M, Fleetwood E, editors. Attitudes, innuendo, and regulators. Gallaudet University Press; Washington, DC: 2005. pp. 136–161.
  • Max Planck Institute for Psycholinguistics. ELAN. 2002. http://www.lat-mpi.eu/tools/elan/
  • Mayberry RI, Jaques J. Gesture production during stuttered speech: insights into the nature of gesture-speech integration. In: McNeill D, editor. Language and gesture. Cambridge University Press; New York: 2000. pp. 199–214.
  • McNeill D. So you think gestures are nonverbal? Psychological Review. 1985;92(3):271–295.
  • McNeill D. Hand and mind: What gestures reveal about thought. University of Chicago Press; Chicago: 1992.
  • Mol L, Krahmer E, van de Sandt-Koenderman M. Gesturing by speakers with aphasia: How does it compare? Journal of Speech, Language and Hearing Research. 2013;56(4):1224–1236. [PubMed]
  • Morrel-Samuels P, Krauss R. Word familiarity predicts temporal asynchrony of hand gestures and speech. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1992;18:615–622.
  • Orgassa A. Co-speech gesture in anomic aphasia. Toegepaste Taalwetenschap in Artikelen. 2005;73(1):85–97.
  • Osborne JW. Improving your data transformations: Applying the Box-Cox transformation. Practical Assessment, Research & Evaluation. 2010;15(12):1–9.
  • Özyürek A, Kita S, Allen S, Furman R, Brown A. How does linguistic framing of events influence co-speech gestures? Insights from cross-linguistic variations and similarities. Gesture. 2005;5(1):215–237.
  • Pashek G. A case study of gesturally cued naming in aphasia: Dominant versus nondominant hand training. Journal of Communication Disorders. 1997;30:349–366. [PubMed]
  • Pedelty LL. Gesture in aphasia. Department of Behavioral Sciences, University of Chicago; IL, USA: 1987. Unpublished dissertation.
  • Poggi I. Iconicity in different types of gestures. Gesture. 2008;8(1):45–61.
  • Rauscher F, Krauss R, Chen Y. Speech and lexical access: The role of lexical movements in speech production. Psychological Science. 1996;7:226–231.
  • Riddoch MJ, Humphreys GW. BORB: Birmingham Object Recognition Battery. Lawrence Erlbaum; 1993.
  • Rodriguez A, Raymer A, Rothi LJG. Effects of gesture and verbal and semantic-phonologic treatments for verb retrieval in aphasia. Aphasiology. 2006;20:286–297.
  • Rose ML. The utility of arm and hand gestures in the treatment of aphasia. Advances in Speech-Language Pathology. 2006;8(2):92–109.
  • Rose M, Douglas J, Matyas T. The comparative effectiveness of gesture and verbal treatments for a specific phonologic naming impairment. Aphasiology. 2002;16:1001–1030.
  • Rose ML, Raymer AM, Lanyon LE, Attard MC. A systematic review of gesture treatments for post-stroke aphasia. Aphasiology. 2013;27(9):1090–1127.
  • Scharp VL, Tompkins CA, Iverson JM. Gesture and aphasia: Helping hands? Aphasiology. 2007;21:717–725. [PMC free article] [PubMed]
  • Sekine K, Rose ML. The relationship of aphasia type and gesture production in people with aphasia. American Journal of Speech-Language Pathology. 2013;22(4):662–672. [PubMed]
  • Sekine K, Rose ML, Foster AM, Attard MC, Lanyon LE. Gesture production patterns in aphasic discourse: In-depth description and preliminary predictions. Aphasiology. 2013;27(9):1031–1049.
  • Shen J. On a model of Chinese intonation. Studies on Language. 1992;4:16–24.
  • Thompson CK. Northwestern assessment of verbs and sentences. 2011. http://northwestern.flintbox.com/public/project/19927
  • Wilkinson R, Beeke S, Maxim J. Formulating actions and events with limited linguistic resources: Enactment and iconicity in agrammatic aphasic talk. Research on Language and Social Interaction. 2010;43(1):57–84.
  • Xu J, Gannon PJ, Emmorey K, Jason FS, Braun AR. Symbolic gestures and spoken language are processed by a common neural system. Proceedings of the National Academy of Sciences of the United States of America. 2009;106(49):20664–20669. [PubMed]
  • Yammiyavar P, Clemmensen T, Kumar J. Influence of cultural background on non-verbal communication in a usability testing situation. International Journal of Design. 2008;2(2):31–40.
  • Yiu EML. Linguistic assessment of Chinese-speaking aphasics: Development of a Cantonese aphasia battery. Journal of Neurolinguistics. 1992;7:379–424.
  • Yozbatiran N, Der-Yeghiaian L, Cramer SC. A Standardized Approach to Performing the Action Research Arm Test. Neurorehabilitation and Neural Repair. 2008;22:78–90. [PubMed]