Upon witnessing a causal event, do children’s gestures encode causal knowledge that (a) does not appear in their linguistic descriptions or (b) conveys the same information as their sentential expressions? The former use of gesture is considered supplementary; the latter, reinforcing. Sixty-four English-speaking children aged 2.5 to 5 years described an action in which the experimenter pushed a ball across a small pool with a stick. With age, children produced more complete sentences expressing causal relations, encoding more of the elements in the event. Younger children produced noncausal sentences and location gestures that referred to or highlighted the goal of the action. Older children used both reinforcing and supplementary gestures conveying the instrument (e.g., the stick) and direction (e.g., from left to right) of the action. These findings present a noncausal-to-causal developmental trajectory in both speech and gesture. Among older children, results also suggest that gestures carry causal information before children form complete sentences to express causal events.
Causal understanding is fundamental to recognizing the relationship between objects and events. In physical causal events, one object, often a causal agent, acts upon another object (the “patient,” to borrow terminology from linguistics) by contacting the object and by changing the endstate of the second object’s motion. Research shows that infants infer these physical causal relations by the end of their first year of life (Cohen, Rundell, Spellman, & Cashon, 1999; Golinkoff & Kerr, 1978; Leslie, 1982, 1984). Although toddlers comprehend causal sentences (Bunger & Lidz, 2004; Fisher, 1996; Hirsh-Pasek, Golinkoff, & Naigles, 1996; Naigles, 1990), production seems to lag behind children’s causal understanding in that children at this age are often unable to correctly use causal connectives (e.g., because) and causal verbs that could carry the necessary information (Clark, 2003). Later, preschool-aged children produce sentences expressing causal relations in events that reveal their causal understanding. However, it is not yet clear what types of information they produce to describe physical causal relations (e.g., hitting a ball with a stick). We explore how preschool children use a combination of speech and gesture to express causal information. Our goal is to trace the role of gesture in children who are capable of inferring causal relations, but who might have difficulty producing the components of sentences that describe causal relations. This is one of the first studies asking how children’s gesture production might contribute to the production of sentences expressing causal relations.
Speech and gesture are complementary components of an integrated language system (McNeill, 1992). This system allows children to express meaning in two modalities that are semantically and temporally coherent and in which gesture is complementary to children’s expressions, strengthening the message offered in speech (Kendon, 1980; McNeill, 1998, 2005; Goldin-Meadow, 1998, 2003; Nicoladis, Mayberry, & Genesee, 1999). Research has investigated the role of gestures in children’s language development and in several cognitive tasks (Alibali & Goldin-Meadow, 1993; Broaders, Cook, Mitchell, & Goldin-Meadow, 2007; Church & Goldin-Meadow, 1986; Ehrlich, Levine, & Goldin-Meadow, 2006; Garber & Goldin-Meadow, 2002; Iverson & Goldin-Meadow, 2005; Özçalişkan & Goldin-Meadow, 2005, 2009; Pine, Lufkin, & Messer, 2004). Here we examine the role of gestures in children’s causal speech production.
Children’s spontaneous gestures serve different purposes. First, early gestures preview language, acting as indicators for upcoming changes in verbal expressions. Infants start by using pointing gestures before they produce their first words. These deictic gestures involve, for example, pointing at a cup. Research suggests that the objects children point to are soon thereafter named in words (Bates, 1976; Iverson & Goldin-Meadow, 2005; Özçalişkan & Goldin-Meadow, 2005). Once children start producing words, the form and function of their gestures become more diverse. At this stage, in addition to deictic gestures, children produce representational gestures that refer to an object’s actions or attributes such as moving the hand in a downward action while saying “go down”. Early gestures in both deictic and representational forms have two primary functions: They reinforce meaning given in speech (e.g., pointing at a cup while saying “cup”) or they supplement speech by providing additional information in the gesture domain (e.g., pointing at a cup while saying “mine”). However, as suggested by Özçalişkan and Goldin-Meadow (2009), only supplementary gesture-speech combinations are key to communicating sentence-like meanings and to predicting later language development. For example, Iverson and Goldin-Meadow (2005) demonstrated that the age at which children first use supplementary gestures (e.g., pointing at a cup while saying, “mine” to represent “my cup”) is linked to their initial use of two-word utterances. Children also produce complex gesture-speech constructions before they express the same information in the verbal modality (Özçalişkan & Goldin-Meadow, 2005). Hence, “gesture may pave the way for future developments in language” (Iverson & Goldin-Meadow, 2005, p. 370).
Second, children’s gestures reveal underlying thinking in various cognitive tasks such as counting, Piagetian conservation, the Tower of Hanoi problem, spatial reasoning, and the balance problem (Alibali & Goldin-Meadow, 1993; Broaders, Cook, Mitchell, & Goldin-Meadow, 2007; Church & Goldin-Meadow, 1986; Ehrlich, Levine, & Goldin-Meadow, 2006; Garber & Goldin-Meadow, 2002; Pine, Lufkin, & Messer, 2004). Studies involving these tasks demonstrate that gestures can uncover conceptual knowledge relevant to a specific task. Broaders et al. (2007) suggested that children’s gestures tap into their implicit knowledge by supplementing the information available in the verbal modality. In most cases, such gestures are produced before upcoming changes in knowledge, demonstrating a possible transitional stage (Church & Goldin-Meadow, 1986, 1988; Goldin-Meadow, Alibali, & Church, 1993; Pine, Lufkin, & Messer, 2004).
Preschool-aged children also use gestures to augment their linguistic expression. For example, Kidd and Holler (2009) found that 3- to 5-year-olds used gestures to solve a lexical ambiguity task, in which they were asked to retell a short story involving two homonym senses (e.g., mouse as an animal and mouse as computer equipment). Children’s use of speech and gesture in their retellings was examined. Results showed that children in the youngest age group did not solve the ambiguity in the verbal modality and used many pointing gestures, an ineffective means of disambiguation. Four-year-olds, however, produced both representational and pointing gestures for disambiguation. By age 5, children disambiguated the homonyms in their speech and their use of gestures significantly decreased. That is, 4-year-olds relied on gestures to complement their less sophisticated verbal skills. These results suggest that before age 5 gestures can help children resolve difficult and demanding problems while forecasting future cognitive advancements that will become available in the verbal modality.
Together, these functions of children’s gestures imply that gesture assists and previews early language development as well as children’s transitional knowledge in many cognitive and language tasks. Importantly, these functions of gestures are not mutually exclusive. Gesture becomes an undeniably crucial part of the communication system, providing a tool both to express information and to cope with challenging cognitive information (Goldin-Meadow, 2000; McNeill, 1992; see also Kidd & Holler, 2009).
Although researchers have made great progress in understanding the development of children’s gestures, few studies have explored the role of gesture in one basic area of cognition and language: causal events. Only one study has examined the hypothesis that causal descriptions might be expressed through gesture (Furman, Özyürek, & Allen, 2006). In this study, children were presented with causal events such as a “triangle man” hitting a “tomato man” and the tomato man then rolling down the hill. More than one-third of the time, 3- and 5-year-olds’ descriptions of these events included gestures referring to at least one subevent. For example, children expressed the causing subevent with a sharp horizontal movement of the hand to represent the action “hit”; the result subevent with a diagonal movement of the hand representing the downward action of “roll down”; or a combination of both (e.g., a continuous hand movement joining the horizontal and diagonal actions to represent “push him down”). Five-year-olds produced more gestures for the causing subevent (e.g., “hit”) than for the result subevent, whereas 3-year-olds produced equal numbers of gestures for both subevents. Furman et al. (2006) thus provide evidence that children use gestures to express the components of causal events when they speak and that the use of gestures to represent these caused-motion events changes with age.
What is the relationship between speech and gesture in children’s expressions of causal events? Given that both causal knowledge and causal descriptions undergo significant developmental changes during the preschool period (Bowerman, 1974; Bullock, 1985; Clark, 2003; das Gupta & Bryant, 1989; Krist, Fieberg, & Wilkening, 1993), gesture might assist children’s expressions of causal events.
Before 12 months of age, infants perceive causal events as different from noncausal events (Baillargeon, 1994; Cohen, Rundell, Spellman, & Cashon, 1999; Leslie, 1982, 1984; Oakes & Cohen, 1990; Saxe, Tenenbaum, & Carey, 2005; Saxe, Tzelnic, & Carey, 2007) and attend to the differences between the agent and patient roles (Cohen, Amsel, Redford, & Casasola, 1998; Golinkoff, 1975; Golinkoff & Kerr, 1978; Leslie & Keeble, 1987). Moreover, research documents that within causal events, 12-month-olds are also sensitive to the direction of cause through elements such as source and goal (Lakusta, Wagner, O’Hearn, & Landau, 2007). Thus, infants have early representations of causal relations in the physical domain.
Children’s causal understanding, however, undergoes major developmental changes in the first three years of life (Bullock, 1985; das Gupta & Bryant, 1989; Gopnik & Schulz, 2007; Gopnik & Sobel, 2000; Krist, Fieberg, & Wilkening, 1993). For example, by age 3, children use temporal ordering to refer to the sequence of mechanical causal events (Bullock & Gelman, 1979; Sobel, Tenenbaum, & Gopnik, 2004) and identify invisible causal agents such as light or sound (Shultz, 1982). These findings highlight that children who are generally using multi-word sentences not only understand simple causality but also possess a sophisticated and fairly broad understanding of causal mechanics.
Can children who have the necessary conceptual underpinnings of cause describe a causal event they have just witnessed? With their considerable causal knowledge, children should be able to produce sentences expressing causal relations. They might, however, fall behind in expressing their causal knowledge using language. We propose that, just as children resolve ambiguities with homonyms through gesture (Kidd & Holler, 2009), their gestures might supplement verbal information when they describe causal events.
When describing a simple causal relation such as the act of dropping a pencil, adults use dual-participant sentences, “The girl drops the pencil.” In this case, the verb “drop” denotes a causal relation between the agent “the girl” and the patient “the pencil” (Jackendoff, 1990; Levin, 1993). The same sentence can be expressed using a single participant in a noncausal way. For example, one might describe the same action as “the pencil falls,” using the noncausal verb “fall,” omitting the causal agent “the girl.” A simple causal event might also involve an intervening variable such as “The girl breaks the window with the stone,” in which “the stone” is the proximal cause for the window breaking. Other components described in causal events are the direction, location, and endpoint of the action. In the sentence, “The man kicked the ball to the other side of the field,” the “field” is the location and the “other side of the field” is the endpoint or goal of the action. These spatial components are optionally expressed depending on what the speaker intends to communicate about the causal event.
Given the breadth of young children’s understanding of cause, one might predict that they would express at least some elements of causal relations in their language. Research suggests that in the second year of life, children have several causal verbs in their productive vocabulary (break, cut; Bowerman, 1974; Carey, 1978; Clark, 2003). Early in the third year of life, children make productive errors and use noncausal words to indicate causal relations, such as, “how would you flat it?” (Bowerman, 1974; Carey, 1978). This naturalistic evidence shows that even after children produce several lexical causatives (e.g., break), they continue to express causal relations using noncausal sentences. It is not until around the age of 4 that children reliably use causal verbs and causal connectives to express causal relations in complex sentences (Clark, 2003).
The present study explores the relationship between speech and gesture in children’s expression of causal events. First, we ask how children talk about causal events. In line with previous naturalistic studies, we predict that older children (4- and 5-year-olds) will produce more causal verbs for sentences expressing causal relations, compared to younger children (2.5- and 3-year-olds). Consistent with the use of more sentences expressing causal relations, older children will be more likely to linguistically express the agent, patient, and instrument involved in events. In contrast, the use of direction and location might not differ among age groups, because they are optional components of causal expressions.
Second, we analyze different gesture categories (reinforcing, supplementary, and gesture-only) as well as gesture types (pointing, representational). We have two hypotheses. First, consistent with previous studies on other tasks (Kidd & Holler, 2009), we predict that younger children’s gestures will supplement their verbal production, previewing what they would later express in their speech. These gestures will be mostly deictic gestures. Second, older children may produce reinforcing gestures when expressing the event with causal sentences. These gestures can be both deictic and representational.
Participants were 64 monolingual English-speaking children, balanced for gender and separated evenly into four age groups: 2.5-year-olds (M = 32.91 months, SD = 1.71, range 30.22 – 35.16), 3-year-olds (M = 39.91, SD = 2.40, range 37.00 – 44.08), 4-year-olds (M = 52.76, SD = 4.36, range 48.04 – 58.05), and 5-year-olds (M = 65.16, SD = 4.19, range 60.10 – 71.13). These age groups were chosen to represent the complete developmental trajectory for causal expression. The sample was recruited from suburban Philadelphia using commercially available mailing lists. The majority of participants were from middle-class families and were white, with less than 5% of Hispanic, Asian American, or African-American descent. Data from an additional 7 children were discarded due to failure to respond (4) or experimenter error (3).
This study was part of a larger study examining force dynamics and causal understanding (Wolff, 2003). Children’s understanding of a causal relation involving an instrument was examined with an experimental task in which the experimenter used a stick to push an object (either a ball or a ring) across a pool of water. Children were asked to express what happened in the event. Here, we focus on children’s verbal and gestural expressions of this simple causal event.
Children were tested individually in a quiet room at the laboratory. The experimenter sat next to the child, to the left of a table. A 46 cm × 38 cm × 13 cm rectangular box full of water was situated on the table and a camera captured both the event and children’s responses. During the warm-up phase, the experimenter showed the task materials to the child (the ball, the ring, the stick). By gently tapping the object with her hand in different directions, the experimenter moved the ball and the ring on the water, saying, “Can you see how the ball/the ring moves on the water? Here is the stick; I’ll hold onto it.” Then, the experimenter pushed one of the objects on the water from the left to the right of the box. At the same time the experimenter said, “Can you see how the ring/ball moves when I push like this?” The same pushing action was repeated for the second object.
After the warm-up phase, each child was presented with two test trials. The experimenter pushed one of the objects along either the horizontal or the diagonal side of the box; the order of direction was counterbalanced between test trials. While pushing the ball or the ring, the experimenter said, “Watch me carefully now.” When the experimenter finished the action, she asked the child to describe what happened: “Wow, did you see what just happened? Can you tell me what happened here?” If the child responded “no,” the experimenter asked for the child’s best guess: “What do you think happened here?” Then, the experimenter repeated the same procedure for a second test trial, using a second object.
A native English speaker transcribed all speech. Children’s utterances were coded for their use of causal verbs such as make, push, and hit and noncausal verbs such as go and float, and for the meaning of the entire sentence. Children’s utterances were also coded for the use of various components of a sentence: agent (e.g., you), patient (the ball or the ring), instrument (i.e., the stick), location (e.g., there, here, other side), and direction (e.g., this way, across here). Phrases such as “all the way to there” were coded as direction, whereas the single use of “there” was categorized as location. Children’s speech-only utterances, in which no gesture accompanied speech, were tallied for further analyses. Table 1 shows two samples of speech coding.
Children’s gestures were coded for type and gesture category. For type, gestures were classified as pointing or representational (Furman et al., 2006; Goldin-Meadow, 2003; McNeill, 1992). Pointing gestures included showing an object or location by extending the index finger toward the referent, as when a child pointed to a location in the box to refer to the endpoint at which the ball stopped, as they said, “The ball went over here.” Representational gestures indicated attributes or actions of an object’s direction. For example, if a child said, “when you pushed it” while her hand shape mimicked holding a stick and moving it away from the self, the gesture was coded as representational.
Gesture category involved three kinds of gestures: reinforcing, supplementary, and gesture-only expressions (Özçalişkan & Goldin-Meadow, 2005, 2009). Reinforcing gestures convey the same information as the concurrently used speech. An example would be pointing at the ball while saying, “ball.” Supplementary gestures conveyed different information than offered in concurrently used language such as pointing at the ball while saying, “you pushed.” Gesture-only expressions were produced without concurrent speech such as pointing at the ball in silence.
For each gesture type and category, gestures were divided by the referents: the causal agent, the receiver of the action, patient (the ball or the ring), instrument (the stick), the location and direction of the action. Table 2 presents two samples of children’s use of gestures.
Children’s utterances and gestures were initially coded by the first author. A second person randomly chose and coded 36% of children’s responses. Agreement between coders was 95% (k = .93, n = 386) for speech referents, 86% (k = .82, n = 97) for identifying gestures and assigning category, and 90% (k = .87, n = 58) for gesture referents.
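The agreement statistics above are Cohen’s kappa, which corrects raw percent agreement for the agreement two coders would reach by chance. A minimal Python sketch of the computation, using made-up gesture labels rather than the study’s data, may clarify how the k values were derived:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' categorical labels (illustrative)."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of items both coders labeled identically
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: product of each coder's marginal rates, summed over categories
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    categories = set(coder_a) | set(coder_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two coders classify 10 gestures as pointing ("P")
# or representational ("R"); these labels are invented for illustration.
a = ["P", "P", "R", "P", "R", "P", "P", "R", "P", "P"]
b = ["P", "P", "R", "P", "P", "P", "P", "R", "P", "P"]
print(round(cohens_kappa(a, b), 2))  # → 0.74 (raw agreement is .90)
```

Note that kappa (here .74) is lower than the raw 90% agreement, which is why it is the more conservative reliability index reported alongside the percentages.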
A repeated-measures analysis of variance (ANOVA) with age (2.5-, 3-, 4-, and 5-year-olds) and gender as between-subject variables, and verb type (causal vs. noncausal) and event components (agent, patient, instrument, location, and direction) as within-subject variables yielded no main effect of gender or any interactions with gender. Thus, gender was not considered in further analyses.
Mean number of words used in event descriptions differed across age groups, F (1, 60) = 5.10, p = .03, η2 = .20. Overall, 5-year-olds produced twice as many words as the other age groups (M = 32; Scheffé, ps < .045). However, children’s total expression of event components coded only in speech without gestures did not differ by age.
The mean percentages of causal and noncausal verbs in children’s total speech were also calculated. A one-way ANOVA indicated that children differed in their use of causal verbs, F (3, 60) = 8.57, p < .001, η2 = .30. Four- and 5-year-olds used significantly more causal verbs than the two younger age groups (Scheffé, ps < .023). As Figure 1 depicts, the mean percentage of causal verbs of total verb use differed by age group, F (3, 60) = 10.49, p < .001, η2 = .34, with older children using more causal verbs than younger ones (Scheffé, ps < .019). Paired-samples t-tests showed that younger children produced significantly more noncausal verbs than causal verbs (ts > 4.72, ps < .01). Even though the mean percentages of causal and noncausal verb use did not significantly differ for 4- and 5-year-olds, 4-year-olds produced almost equal numbers of causal and noncausal verbs, and 5-year-olds produced more causal verbs than noncausal ones (see Figure 1). The diversity of verbs used over time was similar: Children’s causal verbs consisted mostly of push and hit; noncausal verbs were float, go, move, and swim.
The number of children who used more causal vs. noncausal verbs differed by age group. Only 9 of 32 2.5- and 3-year-old children used more causal than noncausal verbs to describe the events, whereas 22 of 32 4- and 5-year-olds produced more causal than noncausal verbs in their sentences.
Although children expressed approximately the same number of event components in speech without gestures, the particular components they used differed by age. To analyze the expression of event components, we calculated the percentages of agent, patient, instrument, location, and direction out of children’s total utterances. Only the use of agent differed by age group, F (3, 60) = 5.64, p < .01, η2 = .22 (see Figure 2). Post-hoc analyses showed that compared to 5-year-olds, 2.5- and 3-year-old children used fewer agents in their speech (Scheffé, ps < .013). No differences were found for the expression of other causal components.
Together, the findings indicate that 5-year-olds explained causation using causal verbs, while 4-year-olds produced almost equal numbers of causal and noncausal verbs, and both 2.5- and 3-year-olds tended to use more noncausal than causal verbs. Older children explicitly mentioned the agent more often than younger children.
No gender differences appeared for gesture type (pointing or representational), gesture category (reinforcing, supplementary or gesture-only) or gesture referents (agent, patient, instrument, location, and direction). Gender therefore was not considered in further analyses.
As shown in Table 3, the mean number of gestures children produced differed significantly among age groups, F (3, 60) = 4.47, p = .01, η2 = .18. However, post-hoc analyses indicated a difference only at the extreme ages, between 2.5- and 5-year-olds (Scheffé, p = .02), with 5-year-olds using twice as many gestures as 2.5-year-olds. To control for amount of talk, we calculated children’s proportion of gestures to their overall speech (i.e., the number of gestures per word). This proportion did not differ among age groups, F (3, 60) = 1.24, p = .30, η2 = .06.
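The one-way ANOVAs reported throughout compare group means via the ratio of between-group to within-group variance. As a worked illustration of the F statistic for a one-way design, here is a short Python sketch applied to hypothetical gestures-per-word proportions for four age groups (invented values, not the study’s data):

```python
def one_way_anova_F(groups):
    """F = MS_between / MS_within for a one-way design (illustrative)."""
    k = len(groups)                          # number of groups
    n_total = sum(len(g) for g in groups)    # total observations
    grand = sum(sum(g) for g in groups) / n_total
    # Between-groups sum of squares: each group mean vs. the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-groups sum of squares: each score vs. its own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Hypothetical gestures-per-word proportions for four age groups
groups = [[0.10, 0.12, 0.08], [0.11, 0.09, 0.13],
          [0.12, 0.15, 0.10], [0.14, 0.11, 0.16]]
print(round(one_way_anova_F(groups), 2))
```

A real analysis would use a statistics package to obtain the p-value for F with (k − 1, N − k) degrees of freedom, and the effect size reported here (η2) is simply SS_between divided by SS_total.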
Children in all age groups produced more pointing gestures than representational ones, F (1, 60) = 48.67, p < .001, η2 = .45. Children’s use of representational gestures differed across age groups, F (3, 60) = 3.41, p = .02, η2 = .15 (see Table 3), with only older children using representational gestures.
We examined the event components and types expressed in gesture. All age groups were more likely to use pointing to refer to the instrument than any other event components. Yet, only older children, compared to younger ones, produced representational instrument gestures, F (1, 60) = 16.46, p < .01. For example, older children made one of their hands into a fist to represent holding a stick, rather than simply pointing at the stick itself. As predicted, regardless of the age group, the direction of the action was produced by representational gestures, t (63) = 9.02, p < .01, and the location was indicated only in pointing gestures, t (63) = 12.60, p < .01.
As shown in Figure 2, regardless of age, children produced more reinforcing gestures than supplementary or gesture-only expressions, F (2, 120) = 10.78, p < .001, η2 = .15.
A one-way ANOVA showed that children in all age groups produced similar percentages of reinforcing gestures. However, gesture referents for this category varied by age group. Children’s use of instrument, location, and direction reinforcing gestures differed by age group (Fs > 3.08, ps < .03). Older children produced significantly more instrument and direction gestures than younger age groups to reinforce information already expressed in their sentences expressing causal relations (Scheffé, ps < .05). In contrast, 2.5-year-olds used more location gestures compared to 4-year-olds (Scheffé, p = .03), indicating that younger children reflected goals by producing relatively more location gestures to reinforce their noncausal speech.
Similar to reinforcing gestures, the percentage of supplementary gestures did not differ by age. A one-way ANOVA of the proportion of gesture referents yielded a difference for the use of instrument, F (3, 60) = 4.07, p = .01, η2 = .17, suggesting that children gestured about the proximal cause of the event. Post-hoc analyses showed that 5-year-olds used supplementary instrument gestures more than 2.5- and 3-year-olds (Scheffé, p < .04). These findings indicate that these instrument gestures offered extra information not captured in the verbal modality. All age groups also produced many location and direction supplementary gestures.
When children’s gesture-only expressions were analyzed, results indicated no main effect of age for the use of gestures without speech. Further analyses for each gesture referent (instrument, patient, location, and direction) demonstrated the same results; children produced very few gestures that were not accompanied by speech. However, children produced location gestures more often than other gestures in isolation from speech, F (3, 180) = 16.59, p < .001, η2 = .22.
Last, we examined those children who produced more causal verbs than noncausal ones (9/32 for younger age groups, 22/32 for older age groups). The mean percentage of the use of reinforcing and supplementary gestures was very similar in these two groups (25% for reinforcing and 17% for supplementary for younger groups and 25% and 13%, respectively, for older groups). These results suggest that when children start using sentences that express causal relations, they produce similar numbers of gestures either to reinforce or supplement verbal information.
This study was designed to investigate the relationship between speech and gesture in children’s descriptions of simple causal events that were enacted as they watched. Two main issues framed the investigation. First, we examined children’s speech for whether they expressed possible causal event components (agent, patient, instrument, location, direction). Second, we examined the role of gestures as they accompanied speech by analyzing different gesture types (pointing, representational) and gesture categories (reinforcing, supplementary, and gesture-only).
In verbal descriptions children initially used noncausal verbs, and then lexical causative verbs, such as push and hit, before they formed full sentences expressing causal relations, involving more components such as instrument. Only older children verbally produced the agent of the sentence. The optional components of location and direction were used similarly in all age groups.
Regarding children’s use of gestures, we had two hypotheses: 1) younger children would use more gestures to preview what they would later express in their speech; 2) older children would produce reinforcing gestures to highlight causal information that is present in the verbal modality. The findings were surprising. The first hypothesis was only partially confirmed: Younger children merely pointed at the location to reinforce their speech. However, older children produced more gestures than younger ones, using gesture both to reinforce (same information) and to supplement (additional information) their speech. In particular, older children pointed and used representational gestures about the instrument and the direction of the causal event they witnessed. Although these results seem to contradict previous findings showing a decrease in children’s supplementary gestures with age and with advanced language (Iverson & Goldin-Meadow, 2005; Kidd & Holler, 2009; Özçalişkan & Goldin-Meadow, 2005, 2009), they suggest that older children rely on gestures to supplement their speech before they form complex sentences that express causal relations.
This controlled experiment validates prior naturalistic research showing a noncausal to causal trajectory in causal sentence production. Although children infer the causal meaning of a novel verb from the sentential context at 2 years of age, they fail to appropriately use causal verbs in causal sentences until the preschool years (Bunger & Lidz, 2004; Naigles, 1990; Tomasello, 2000).
At first, younger children’s explanations involved primarily noncausal verbs, even in describing a causal event. When asked to describe the events portrayed in this study, younger children used noncausal verbs, as in “the ball floated on water” or “the ball moves from one side to the other.” One might wonder whether the language children heard in the warm-up trials served as a model for their own speech, as the experimenter intentionally used noncausal language. However, under this interpretation there should be no difference between age groups, when in fact one emerged. Older children still interpreted the action as causal and included the agent (i.e., referring to the experimenter as “you”) in their speech. Younger children did not. These findings also corroborate evidence that young children are more likely to notice goals than sources in dynamic events (Lakusta & Landau, 2005).
Our results show that causal descriptions improve remarkably when children reach age 4. It is of note that when children start to produce sentences expressing causal relations, they usually omit the instrument (e.g., “you hit the ball” rather than “you hit the ball with a stick.”). Direction of the motion appears infrequently in children’s causal descriptions.
Our findings on gesture augment the literature by examining children’s gestures in a causality task. Although a longitudinal study is required in order to fully understand the changes in gesture and speech as well as the different functions of gesture at different ages, we can ask whether some components occurring in gesture reinforce verbal information or preview some information that is not yet realized in speech.
If children reinforce speech with gesture, they might use gesture and speech together to refer to the same causal event components. Our findings support this conclusion, showing that children in all age groups produced more reinforcing gestures than other categories. However, gesture referents varied by age. Similar to their verbal descriptions, younger children were very goal-directed and used location reinforcing gestures. In contrast, older children’s sentences expressing causal relations were more likely to be reinforced by instrument and direction gestures. As children produce more sentences expressing causal relations, they use more gestures to convey the same information. Thus, gesture and speech encode strongly related meanings (Gullberg, de Bot, & Volterra, 2008). Importantly, gesture might offer an alternative way to code and organize spatial-perceptual information and engage in the conceptual planning for speech (e.g., Alibali, Kita, & Young, 2000; Kita, 2000).
Our data also suggest that children use many supplementary gestures that refer to components other than what they express in their speech. Gesture is used to convey information about the instrument and about the spatial components of direction and location. When children start producing sentences with causal verbs, such as “you hit the ball,” gestures referring to the instrument preview speech. For example, a child conveys additional information by pointing to the stick or forming a fist handshape. Thus, only older children produced instrument gestures that might later be expressed in speech. Previous research suggests that children’s supplementary gestures predict their future language development (Iverson & Goldin-Meadow, 2005; Özçalişkan & Goldin-Meadow, 2005, 2009), and preschool children use gestures to supplement their speech in demanding tasks (Kidd & Holler, 2009). Our findings demonstrate that even 5-year-olds convey extra information in gesture before they form complete sentences expressing causal relations. We suspect that there might be a decline in the number of supplementary gestures once children begin to express instruments in their sentences.
Gestures that refer to spatial components might serve different purposes than gestures for the instrument, because children in all age groups produce location supplementary gestures. Older children also use many direction supplementary gestures. In these cases, gestures seem to convey “optional” information for descriptions, providing extra cues about the task. For example, when expressing the action of “hitting the ball,” gesture is used to describe where the ball stopped and which path the ball followed. Similar to supplementary gestures, children in each age group used many location gestures only in the gestural modality without accompanying speech.
We asked children to report on a causal event that they had just witnessed performed by the same person who was asking what they had seen. Alibali and her colleagues (2000) pointed out that “speakers use gesture to explore alternative ways of encoding and organizing spatial and perceptual information” (p. 595). Hence, when perceptual cues are not available to the speaker, gestures might help to conceptualize and organize the information (Alibali et al., 2000; Kita, 2000). It is possible that children in all age groups would have produced more gestures if they had been asked to describe the event in a different context (e.g., another room) or to a different person (e.g., a second experimenter or their parents) who had not witnessed the event. Future studies should tease apart how describing the event to another person influences children’s gesture production. In addition, some children might produce few gestures if they are confident of their answers. The role of context in our task is an empirical question to be addressed in a future study.
Taken together, gesture reinforces, and in some cases precedes, the causal information conveyed in children’s speech. Perhaps gestures are particularly well suited for describing causal relations. Causal relations are fundamentally dynamic, continuous, and contain a number of temporally ordered steps. To express these dynamic relations linguistically, children must package these continuous relations into the categories that language describes (Goldin-Meadow, 2006; Golinkoff & Hirsh-Pasek, 2008), which is a difficult task. For example, if someone uses a stick to hit a ball in a pool of water, the moving stick comes into contact with the ball, the ball begins to move and continues moving across the pool, finally reaching the other side where it comes to a stop. In order to describe the event causally, children must determine when the causation begins and ends (i.e., the boundaries of the event). Although this ambiguity makes causal descriptions particularly hard to learn, gestures might offer a way for children to represent causal events without the burden of placing categorical labels on dynamic events. Thus, this kind of world-to-word mapping might be well suited for representation in gesture before or in concert with speech (Goldin-Meadow, 2003).
Finally, this study raises interesting questions that open a new area of research. The findings suggest a role for gesture in reinforcing and supplementing children’s production of sentences expressing causal relations. Our results using simple causal events demonstrate the trajectory of causal language development in two modalities. This work also expands the definition of “cause” generally investigated in the developmental literature. Not all causes are simple contact causes in which A contacts B to create an effect (Michotte, 1963). Some causal events involve not only contact, but also the direction of contact and forces that can alter that direction (e.g., the wind from a fan redirects a boat to reach the goal; Talmy, 1988; Wolff, 2003, 2007). We know relatively little about how children learn force dynamics and even less about how children express these more complex causal relations in language. Gesture offers a window into children’s understanding of these intervening and invisible variables of causal relations, and might allow children to communicate information that is difficult to express in language.
We explored how children’s gestures assist and supplement their verbal expressions to fully communicate their underlying knowledge about causation. In both verbal and gestural domains, children move from noncausal to causal expressions. Gestures reinforce speech at all ages. They also supplement causal language directly by referring to the proximal causes (i.e., instrument) that are not always expressed, but nonetheless understood. Our results provide additional evidence for the role of gesture in children’s language development, suggesting that even older children use gestures to support complex ideas before they can form full-fledged sentences to convey their causal understanding.
This work was supported by NICHD grant 5R01HD050199 and by NSF grant BCS-0642529 to the second and third authors. We thank everyone at the Temple University Infant Lab for their invaluable contributions to this project. Special thanks to Sarah Roseberry, Kelly Fisher, Wendy Shallcross, Yannos Misitzis, Alon Hafri, and Katrina Ferrara. We thank the children and parents who participated in the study. Finally, we would like to express our appreciation to the Editor and the anonymous reviewers for their comments on previous drafts of the manuscript.
Tilbe Göksun, Temple University.
Kathy Hirsh-Pasek, Temple University.
Roberta Michnick Golinkoff, University of Delaware.