Cogn Dev. Author manuscript; available in PMC 2011 January 1.
Published in final edited form as: Cogn Dev. 2010;25(1):56–68. doi: 10.1016/j.cogdev.2009.11.001
PMCID: PMC2834526; NIHMSID: NIHMS166008

How do preschoolers express cause in gesture and speech?

Abstract

Upon witnessing a causal event, do children’s gestures encode causal knowledge that (a) does not appear in their linguistic descriptions or (b) conveys the same information as their sentential expressions? The former use of gesture is considered supplementary; the latter, reinforcing. Sixty-four English-speaking children aged 2.5 to 5 years described an action in which the experimenter pushed a ball across a small pool with a stick. With age, children produced more complete sentences expressing causal relations, encoding more of the elements in the event. Younger children produced noncausal sentences and location gestures that referred to or highlighted the goal of the action. Older children used both reinforcing and supplementary gestures conveying the instrument (e.g., the stick) and direction (e.g., from left to right) of the action. These findings reveal a noncausal-to-causal developmental trajectory in both speech and gesture. Among older children, results also suggest that gestures carry causal information before children form complete sentences to express causal events.

Keywords: causal event components, causal understanding, gesture, speech

Causal understanding is fundamental to recognizing the relationship between objects and events. In physical causal events, one object, often a causal agent, acts upon another object (the “patient,” to borrow terminology from linguistics) by contacting it and changing the end state of its motion. Research shows that infants infer these physical causal relations by the end of their first year of life (Cohen, Rundell, Spellman, & Cashon, 1999; Golinkoff & Kerr, 1978; Leslie, 1982, 1984). Despite the fact that toddlers comprehend causal sentences (Bunger & Lidz, 2004; Fisher, 1996; Hirsh-Pasek, Golinkoff, & Naigles, 1996; Naigles, 1990), production seems to lag behind children’s causal understanding: children at this age are often unable to correctly use the causal connectives (e.g., because) and causal verbs that could carry the necessary information (Clark, 2003). Later, preschool-aged children produce sentences expressing causal relations in events that reveal their causal understanding. However, it is not yet clear what types of information they produce to describe physical causal relations (e.g., hitting a ball with a stick). We explore how preschool children use a combination of speech and gesture to express causal information. Our goal is to trace the role of gesture in children who are capable of inferring causal relations, but who might have difficulty producing the components of sentences that describe causal relations. This is one of the first studies asking how children’s gesture production might contribute to the production of sentences expressing causal relations.

Speech and gesture are complementary components of an integrated language system (McNeill, 1992). This system allows children to express meaning in two modalities that are semantically and temporally coherent and in which gesture is complementary to children’s expressions, strengthening the message offered in speech (Kendon, 1980; McNeill, 1998, 2005; Goldin-Meadow, 1998, 2003; Nicoladis, Mayberry, & Genesee, 1999). Research has investigated the role of gestures in children’s language development and in several cognitive tasks (Alibali & Goldin-Meadow, 1993; Broaders, Cook, Mitchell, & Goldin-Meadow, 2007; Church & Goldin-Meadow, 1986; Ehrlich, Levine, & Goldin-Meadow, 2006; Garber & Goldin-Meadow, 2002; Iverson & Goldin-Meadow, 2005; Özçalişkan & Goldin-Meadow, 2005, 2009; Pine, Lufkin, & Messer, 2004). Here we examine the role of gestures in children’s causal speech production.

The Role of Gesture in Language and Cognitive Development

Children’s spontaneous gestures serve different purposes. First, early gestures preview language, acting as indicators for upcoming changes in verbal expressions. Infants start by using pointing gestures before they produce their first words. These deictic gestures involve, for example, pointing at a cup. Research suggests that the objects children point to are soon thereafter named in words (Bates, 1976; Iverson & Goldin-Meadow, 2005; Özçalişkan & Goldin-Meadow, 2005). Once children start producing words, the form and function of their gestures become more diverse. At this stage, in addition to deictic gestures, children produce representational gestures that refer to an object’s actions or attributes such as moving the hand in a downward action while saying “go down”. Early gestures in both deictic and representational forms have two primary functions: They reinforce meaning given in speech (e.g., pointing at a cup while saying “cup”) or they supplement speech by providing additional information in the gesture domain (e.g., pointing at a cup while saying “mine”). However, as suggested by Özçalişkan and Goldin-Meadow (2009), only supplementary gesture-speech combinations are key to communicating sentence-like meanings and to predicting later language development. For example, Iverson and Goldin-Meadow (2005) demonstrated that the age at which children first use supplementary gestures (e.g., pointing at a cup while saying, “mine” to represent “my cup”) is linked to their initial use of two-word utterances. Children also produce complex gesture-speech constructions before they express the same information in the verbal modality (Özçalişkan & Goldin-Meadow, 2005). Hence, “gesture may pave the way for future developments in language” (Iverson & Goldin-Meadow, 2005, p. 370).

Second, children’s gestures reveal underlying thinking in various cognitive tasks such as counting, Piagetian conservation, the Tower of Hanoi problem, spatial reasoning, and the balance problem (Alibali & Goldin-Meadow, 1993; Broaders, Cook, Mitchell, & Goldin-Meadow, 2007; Church & Goldin-Meadow, 1986; Ehrlich, Levine, & Goldin-Meadow, 2006; Garber & Goldin-Meadow, 2002; Pine, Lufkin, & Messer, 2004). Studies involving these tasks demonstrate that gestures can uncover conceptual knowledge relevant to a specific task. Broaders et al. (2007) suggested that children’s gestures tap into their implicit knowledge by supplementing the information available in the verbal modality. In most cases, such gestures are produced before upcoming changes in knowledge, demonstrating a possible transitional stage (Church & Goldin-Meadow, 1986, 1988; Goldin-Meadow, Alibali, & Church, 1993; Pine, Lufkin, & Messer, 2004).

Preschool-aged children also use gestures to augment their linguistic expression. For example, Kidd and Holler (2009) found that 3- to 5-year-olds used gestures to solve a lexical ambiguity task, in which they were asked to retell a short story involving two homonym senses (e.g., mouse as an animal and mouse as computer equipment). Children’s use of speech and gesture in their retellings was examined. Results showed that children in the youngest age group did not solve the ambiguity in the verbal modality and used many pointing gestures, an ineffective means of disambiguation. Four-year-olds, however, produced both representational and pointing gestures for disambiguation. By age 5, children disambiguated the homonyms in their speech and their use of gestures decreased significantly. That is, 4-year-olds relied on gestures to complement their less sophisticated verbal skills. These results suggest that before age 5, gestures can help children resolve difficult and demanding problems while forecasting future cognitive advancements that will become available in the verbal modality.

Together, these functions of children’s gestures imply that gesture assists and previews early language development as well as children’s transitional knowledge in many cognitive and language tasks. Importantly, these functions of gestures are not mutually exclusive. Gesture becomes an undeniably crucial part of the communication system, providing a tool both to express information and to cope with challenging cognitive information (Goldin-Meadow, 2000; McNeill, 1992; see also Kidd & Holler, 2009).

Although researchers have made great progress in understanding the development of children’s gestures, few studies have explored the role of gesture in one basic area of cognition and language - causal events. Only one study has examined the hypothesis that causal descriptions might be expressed through gesture (Furman, Özyürek, & Allen, 2006). In this study, children were presented with causal events such as a “triangle man” hitting a “tomato man” and the tomato man then rolling down the hill. More than one-third of the time, 3- and 5-year-olds’ descriptions of these events included gestures referring to at least one subevent. For example, children expressed the causing subevent with a sharp horizontal hand movement representing the action “hit”; the result subevent with a diagonal hand movement representing the downward action of “roll down”; or a combination of both (e.g., a continuous hand movement joining the horizontal and diagonal actions to represent “push him down”). Five-year-olds produced more gestures for the causing subevent (e.g., “hit”) than for the result subevent, whereas 3-year-olds produced equal numbers of gestures for both subevents. Furman et al. (2006) thus provide evidence that children use gestures to express the components of causal events when they speak, and that the use of gestures to represent these caused-motion events changes with age.

What is the relationship between speech and gesture in children’s expressions of causal events? Given that both causal knowledge and causal descriptions undergo significant developmental changes during the preschool period (Bowerman, 1974; Bullock, 1985; Clark, 2003; das Gupta & Bryant, 1989; Krist, Fieberg, & Wilkening, 1993), gesture might assist children’s expressions of causal events.

Understanding and Expression of Physical Causal Events

Before 12 months of age, infants perceive causal events as different from noncausal events (Baillargeon, 1994; Cohen, Rundell, Spellman, & Cashon, 1999; Leslie, 1982, 1984; Oakes & Cohen, 1990; Saxe, Tenenbaum, & Carey, 2005; Saxe, Tzelnic, & Carey, 2007) and attend to the differences between the agent and patient roles (Cohen, Amsel, Redford, & Casasola, 1998; Leslie & Keeble, 1987; Golinkoff, 1975; Golinkoff & Kerr, 1978). Moreover, research documents that within causal events, 12-month-olds are also sensitive to the direction of cause through elements such as source and goal (Lakusta, Wagner, O’Hearn, & Landau, 2007). Thus, infants have early representations of causal relations in the physical domain.

Children’s causal understanding, however, undergoes major developmental changes in the first three years of life (Bullock, 1985; das Gupta & Bryant, 1989; Krist, Fieberg, & Wilkening, 1993; Gopnik & Shulz, 2007; Gopnik & Sobel, 2000). For example, by age 3, children use temporal ordering to refer to the sequence of mechanical causal events (Bullock & Gelman, 1979; Sobel, Tenenbaum, & Gopnik, 2004) and identify invisible causal agents such as light or sound (Shultz, 1982). These findings highlight that children who are generally using multi-word sentences not only understand simple causality but also possess a sophisticated and fairly broad understanding of causal mechanics.

Can children who have the necessary conceptual underpinnings of cause describe a causal event they have just witnessed? With their considerable causal knowledge, children should be able to produce sentences expressing causal relations. They might, however, fall behind in expressing their causal knowledge using language. We propose that, just as children resolve ambiguities with homonyms through gesture (Kidd & Holler, 2009), their gestures might supplement verbal information when they produce causal descriptions.

When describing a simple causal relation such as the act of dropping a pencil, adults use dual-participant sentences: “The girl drops the pencil.” In this case, the verb “drop” denotes a causal relation between the agent “the girl” and the patient “the pencil” (Jackendoff, 1990; Levin, 1993). The same event can also be expressed noncausally with a single participant. For example, one might describe the same action as “the pencil falls,” using the noncausal verb “fall” and omitting the causal agent “the girl.” A simple causal event might also involve an intervening variable, as in “The girl breaks the window with the stone,” in which “the stone” is the proximal cause of the window breaking. Other components described in causal events are the direction, location, and endpoint of the action. In the sentence, “The man kicked the ball to the other side of the field,” the “field” is the location and the “other side of the field” is the endpoint or goal of the action. These spatial components are optionally expressed, depending on what the speaker intends to communicate about the causal event.
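To make this decomposition concrete, the sketch below renders the example sentences above as simple records of event components. It is purely illustrative (our own Python representation, not part of the study); the field names mirror the components just described.

```python
# Illustrative only: the example sentences above, decomposed into the event
# components discussed in the text (field names are ours, not the authors').

examples = [
    {"sentence": "The girl drops the pencil.",
     "verb": "drop", "verb_type": "causal",
     "agent": "the girl", "patient": "the pencil"},

    {"sentence": "The pencil falls.",
     "verb": "fall", "verb_type": "noncausal",
     "agent": None,                          # causal agent omitted
     "patient": "the pencil"},

    {"sentence": "The girl breaks the window with the stone.",
     "verb": "break", "verb_type": "causal",
     "agent": "the girl", "patient": "the window",
     "instrument": "the stone"},             # the proximal cause

    {"sentence": "The man kicked the ball to the other side of the field.",
     "verb": "kick", "verb_type": "causal",
     "agent": "the man", "patient": "the ball",
     "location": "the field",                # optional spatial components
     "endpoint": "the other side of the field"},
]

for ex in examples:
    present = [k for k in ("agent", "patient", "instrument", "location", "endpoint")
               if ex.get(k)]
    print(f"{ex['verb_type']:>9}: {ex['sentence']}  ->  {present}")
```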

Given the breadth of young children’s understanding of cause, one might predict that they would express at least some elements of causal relations in their language. Research suggests that in the second year of life, children have several causal verbs in their productive vocabulary (break, cut; Bowerman, 1974; Carey, 1978; Clark, 2003). Early in the third year of life, children make productive errors and use noncausal words to indicate causal relations, such as “how would you flat it?” (Bowerman, 1974; Carey, 1978). This naturalistic evidence shows that even after children produce several lexical causatives (e.g., break), they continue to express causal relations using noncausal sentences. It is not until around the age of 4 that children reliably use causal verbs and causal connectives to express causal relations in complex sentences (Clark, 2003).

The present study explores the relationship between speech and gesture in children’s expression of causal events. First, we ask how children talk about causal events. In line with previous naturalistic studies, we predict that older children (4- and 5-year-olds) will produce more causal verbs for sentences expressing causal relations, compared to younger children (2.5- and 3-year-olds). Consistent with the use of more sentences expressing causal relations, older children will be more likely to linguistically express the agent, patient, and instrument involved in events. In contrast, the use of direction and location might not differ among age groups, because they are optional components of causal expressions.

Second, we analyze different gesture categories (reinforcing, supplementary, and gesture-only) as well as gesture types (pointing, representational). We have two hypotheses. First, consistent with previous studies on other tasks (Kidd & Holler, 2009), we predict that younger children’s gestures will supplement their verbal production, previewing what they would later express in their speech. These gestures will be mostly deictic gestures. Second, older children may produce reinforcing gestures when expressing the event with causal sentences. These gestures can be both deictic and representational.

Method

Participants

Participants were 64 monolingual English-speaking children, balanced for gender and separated evenly into four age groups: 2.5-year-olds (M = 32.91 months, SD = 1.71, range 30.22 – 35.16), 3-year-olds (M = 39.91, SD = 2.40, range 37.00 – 44.08), 4-year-olds (M = 52.76, SD = 4.36, range 48.04 – 58.05), and 5-year-olds (M = 65.16, SD = 4.19, range 60.10 – 71.13). These age groups were chosen to represent the complete developmental trajectory for causal expression. The sample was recruited from suburban Philadelphia using commercially available mailing lists. The majority of participants were white and from middle-class families, with fewer than 5% of Hispanic, Asian American, or African American descent. Data from an additional 7 children were discarded due to failure to respond (4) or experimenter error (3).

Materials and Procedure

This study was part of a larger study examining force dynamics and causal understanding (Wolff, 2003). Children’s understanding of a causal relation involving an instrument was examined with an experimental task in which the experimenter used a stick to push an object (either a ball or a ring) across a pool of water. Children were asked to express what happened in the event. Here, we focus on children’s verbal and gestural expressions of this simple causal event.

Children were tested individually in a quiet room at the laboratory. The experimenter sat next to the child, to the left of a table. A 46 cm × 38 cm × 13 cm rectangular box full of water was situated on the table, and a camera captured both the event and children’s responses. During the warm-up phase, the experimenter showed the task materials to the child (the ball, the ring, the stick). By lightly hitting the objects with her hand in different directions, the experimenter moved the ball and the ring on the water, saying, “Can you see how the ball/the ring moves on the water? Here is the stick, I’ll hold onto it.” Then, the experimenter pushed one of the objects on the water from the left to the right of the box. At the same time, the experimenter said, “Can you see how the ring/ball moves when I push like this?” The same pushing action was repeated for the second object.

After the warm-up phase, each child was presented with two test trials. In counterbalanced order, the experimenter pushed one of the objects along either the horizontal side or the diagonal of the box; the order of direction was counterbalanced between test trials. While pushing the ball or the ring, the experimenter said, “Watch me carefully now.” When the experimenter finished the action, s/he asked the child to describe what happened: “Wow, did you see what just happened? Can you tell me what happened here?” If the child responded “no,” the experimenter asked for the child’s best guess: “What do you think happened here?” Then, the experimenter repeated the same procedure for a second test trial, using a second object.

Transcription and Coding

Speech

A native English speaker transcribed all speech. Children’s utterances were coded for their use of causal verbs (e.g., make, push, hit) and noncausal verbs (e.g., go, float), and for the meaning of the entire sentence. Children’s utterances were also coded for the use of various sentence components: agent (e.g., you), patient (the ball or the ring), instrument (i.e., the stick), location (e.g., there, here, other side), and direction (e.g., this way, across here). Phrases such as “all the way to there” were coded as direction, whereas the single use of “there” was categorized as location. Children’s speech-only utterances, in which no gesture accompanied speech, were tallied for further analyses. Table 1 shows two samples of speech coding.

Table 1
Sample coding from two children’s speech. Each sentence was coded for the presence of causal vs. noncausal verbs, and components of events (agent, patient, instrument, location, and direction).
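As a rough illustration of the logic behind this speech coding, the sketch below tags a transcribed utterance for verb type and event components using keyword lists. It is hypothetical: the study’s coding was done by trained human coders, and the keyword sets here are only examples drawn from the text above, not the actual coding manual.

```python
# Hypothetical sketch of the speech coding described above. Simple keyword
# matching cannot capture everything the real scheme required (e.g., it cannot
# distinguish "all the way to there" [direction] from a bare "there" [location]).

CAUSAL_VERBS = {"make", "made", "push", "pushed", "hit"}
NONCAUSAL_VERBS = {"go", "went", "float", "floated", "move", "moved", "swim", "swam"}

COMPONENT_CUES = {
    "agent": {"you"},                       # the experimenter acting on the object
    "patient": {"ball", "ring"},            # the object acted upon
    "instrument": {"stick"},                # the proximal cause
    "location": {"there", "here", "side"},  # endpoint of the action
    "direction": {"across", "way"},         # path of the action
}

def code_utterance(utterance):
    """Return a rough code for one utterance: verb type plus event components."""
    words = utterance.lower().replace(",", " ").replace(".", " ").split()
    if any(w in CAUSAL_VERBS for w in words):
        verb_type = "causal"
    elif any(w in NONCAUSAL_VERBS for w in words):
        verb_type = "noncausal"
    else:
        verb_type = None
    components = [name for name, cues in COMPONENT_CUES.items()
                  if any(w in cues for w in words)]
    return {"verb_type": verb_type, "components": components}

print(code_utterance("you pushed the ball with the stick"))
# {'verb_type': 'causal', 'components': ['agent', 'patient', 'instrument']}
print(code_utterance("the ball floated to the other side"))
# {'verb_type': 'noncausal', 'components': ['patient', 'location']}
```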

Gesture

Children’s gestures were coded for gesture type and gesture category. For type, gestures were classified as pointing or representational (Furman et al., 2006; Goldin-Meadow, 2003; McNeill, 1992). Pointing gestures included showing an object or location by extending the index finger toward the referent, as when a child pointed to a location in the box to refer to the endpoint at which the ball stopped while saying, “The ball went over here.” Representational gestures indicated attributes or actions of an object, such as its direction of motion. For example, if a child said, “when you pushed it,” while her hand shape mimicked holding a stick and moving it away from the self, the gesture was coded as representational.

Gesture category involved three kinds of gestures: reinforcing, supplementary, and gesture-only expressions (Özçalişkan & Goldin-Meadow, 2005, 2009). Reinforcing gestures conveyed the same information as the concurrent speech, for example, pointing at the ball while saying, “ball.” Supplementary gestures conveyed information different from that offered in the concurrent speech, such as pointing at the ball while saying, “you pushed.” Gesture-only expressions were produced without concurrent speech, such as pointing at the ball in silence.
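In effect, the three categories reduce to a simple decision about whether a gesture’s referent overlaps the referents expressed in the concurrent speech. A minimal sketch of that decision (our own illustration, assuming the referents have already been identified by a coder):

```python
def gesture_category(gesture_referent, speech_referents):
    """Classify a gesture relative to the speech produced with it.

    gesture_referent: what the gesture refers to (e.g., "patient")
    speech_referents: set of referents in the concurrent speech, or None if silent
    """
    if not speech_referents:
        return "gesture-only"    # e.g., pointing at the ball in silence
    if gesture_referent in speech_referents:
        return "reinforcing"     # e.g., pointing at the ball while saying "ball"
    return "supplementary"       # e.g., pointing at the ball while saying "you pushed"

print(gesture_category("patient", {"patient"}))   # reinforcing
print(gesture_category("patient", {"agent"}))     # supplementary
print(gesture_category("patient", None))          # gesture-only
```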

For each gesture type and category, gestures were classified by referent: the causal agent; the patient, or receiver of the action (the ball or the ring); the instrument (the stick); and the location and direction of the action. Table 2 presents two samples of children’s use of gestures.

Table 2
Sample coding from two children’s gestures. Each gesture was described, and the point at which the gesture matched the speech was numbered in the speech description. Gesture type (pointing vs. representational), category (reinforcing, supplementary, gesture-only), ...

Reliability

Children’s utterances and gestures were initially coded by the first author. A second coder randomly chose and coded 36% of children’s responses. Agreement between coders was 95% (k = .93, n = 386) for speech referents, 86% (k = .82, n = 97) for identifying gestures and assigning category, and 90% (k = .87, n = 58) for gesture referents.
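The k values reported above are, presumably, Cohen’s kappa, which corrects raw percent agreement for the agreement expected by chance: κ = (p_o − p_e) / (1 − p_e). A small sketch with made-up codes (not the study’s data) shows the computation:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders' categorical codes of the same items."""
    assert codes_a and len(codes_a) == len(codes_b)
    n = len(codes_a)
    p_observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_chance = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(codes_a) | set(codes_b))
    return (p_observed - p_chance) / (1 - p_chance)

# Made-up example: two coders assigning referents to 10 gestures.
coder1 = ["instrument", "location", "location", "patient", "direction",
          "instrument", "location", "patient", "instrument", "direction"]
coder2 = ["instrument", "location", "direction", "patient", "direction",
          "instrument", "location", "patient", "instrument", "direction"]
print(round(cohens_kappa(coder1, coder2), 2))  # 0.87 (9 of 10 codes agree)
```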

Results

How do children express causal events in speech?

A preliminary repeated-measures analysis of variance (ANOVA) with age (2.5-, 3-, 4-, and 5-year-olds) and gender as between-subject variables, and verb type (causal vs. noncausal) and event components (agent, patient, instrument, location, and direction) as within-subject variables, yielded no main effect of gender and no interactions with gender. Thus, gender was not considered in further analyses.

Use of causal verbs

Mean number of words used in event descriptions differed across age groups, F (1, 60) = 5.10, p = .03, η2 = .20. Overall, 5-year-olds produced twice as many words as the other age groups (M = 32; Scheffé, ps < .045). However, the total number of event components children expressed in speech alone, without accompanying gestures, did not differ by age.

The mean percentages of causal and noncausal verbs in children’s total speech were also calculated. A one-way ANOVA indicated that children differed in their use of causal verbs, F (3, 60) = 8.57, p = .00, η2 = .30. Four- and 5-year-olds used significantly more causal verbs than the two younger age groups (Scheffé, ps < .023). As Figure 2 depicts, the mean percentage of causal verbs out of total verb use differed by age group, F (3, 60) = 10.49, p = .00, η2 = .34, with older children using more causal verbs than younger ones (Scheffé, ps < .019). Paired-samples t-tests showed that younger children produced significantly more noncausal verbs than causal verbs (ts > 4.72, ps < .01). Even though the mean percentages of causal and noncausal verbs did not significantly differ for 4- and 5-year-olds, 4-year-olds produced almost equal numbers of causal and noncausal verbs, and 5-year-olds produced more causal verbs than noncausal ones (see Figure 2). The diversity of verbs used was similar across age groups: Children’s causal verbs consisted mostly of push and hit; noncausal verbs were float, go, move, and swim.

Figure 1
Experimental procedure and the directions used during warm-up (left picture) and test trials (middle and right pictures).

The number of children who used more causal than noncausal verbs differed by age group. Only 9 of the 32 younger children (2.5- and 3-year-olds) used more causal than noncausal verbs to describe the events, whereas 22 of the 32 older children (4- and 5-year-olds) did.

The use of causal event components in speech alone

Although children produced approximately the same number of words in speech alone, without gestures, the components they used differed by age. To analyze the expression of event components, we calculated the percentages of children’s total utterances that expressed the agent, patient, instrument, location, or direction. Only the use of agent differed by age group, F (3, 60) = 5.64, p = .00, η2 = .22 (see Figure 2). Post-hoc analyses showed that, compared to 5-year-olds, 2.5- and 3-year-old children expressed the agent less often in their speech (Scheffé, ps < .013). No differences were found for the expression of other causal components.

Figure 2
Mean percentage of the use of causal vs. noncausal verbs in each age group (children’s causal vs. noncausal verbs divided by total verbs).

Together, the findings indicate that 5-year-olds explained causation using causal verbs, while 4-year-olds produced almost equal numbers of causal and noncausal verbs, and both 2.5- and 3-year-olds tended to use more noncausal than causal verbs. Older children explicitly mentioned the agent more often than younger children.

How do children use gestures to describe causal events?

No gender differences appeared for gesture type (pointing or representational), gesture category (reinforcing, supplementary or gesture-only) or gesture referents (agent, patient, instrument, location, and direction). Gender therefore was not considered in further analyses.

Number of gestures produced

As shown in Table 3, the mean number of gestures children produced differed significantly among age groups, F (3, 60) = 4.47, p = .01, η2 = .18. However, post-hoc analyses indicated a difference only between the extreme ages, 2.5- and 5-year-olds (Scheffé, p = .02), with 5-year-olds using twice as many gestures as 2.5-year-olds. To control for amount of talk, we calculated the proportion of children’s gestures to their overall speech (i.e., the number of gestures per word). This proportion did not differ among age groups, F (3, 60) = 1.24, p = .30, η2 = .06.

Table 3
The mean number, standard error of the mean (SE), and range of gestures used to describe the events, and the mean frequency, SE, and range of pointing and representational gestures
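The two normalizations used in these analyses, the percentage of causal verbs out of all verbs produced and the number of gestures per word, are simple rates. A sketch with hypothetical counts for one child (not data from the study):

```python
# Hypothetical counts for one child, illustrating the normalizations reported
# in the Results: causal verbs as a percentage of all verbs produced, and
# gestures per word to control for amount of talk.

words_produced = 32
causal_verbs = 3
noncausal_verbs = 2
gestures_produced = 4

percent_causal = 100 * causal_verbs / (causal_verbs + noncausal_verbs)
gestures_per_word = gestures_produced / words_produced

print(f"causal verbs: {percent_causal:.0f}% of all verbs")      # 60% of all verbs
print(f"gesture rate: {gestures_per_word:.3f} gestures/word")   # 0.125 gestures/word
```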

Gesture type

Children in all age groups produced more pointing gestures than representational ones, F (1, 60) = 48.67, p = .00, η2 = .45. Children’s use of representational gestures differed across age groups, F (3, 60) = 3.41, p = .02, η2 = .15 (see Table 3) with only older children using representational gestures.

We examined the event components and types expressed in gesture. All age groups were more likely to use pointing to refer to the instrument than to any other event component. Yet only older children, unlike younger ones, produced representational instrument gestures, F (1, 60) = 16.46, p < .01. For example, older children made one of their hands into a fist to represent holding a stick, rather than simply pointing at the stick itself. As predicted, regardless of age group, the direction of the action was conveyed by representational gestures, t (63) = 9.02, p < .01, and the location was indicated only by pointing gestures, t (63) = 12.60, p < .01.

Gesture categories

As shown in Figure 3, regardless of age, children produced more reinforcing gestures than supplementary or gesture-only expressions, F (2, 120) = 10.78, p = .00, η2 = .15.

A one-way ANOVA showed that children in all age groups produced similar percentages of reinforcing gestures. However, gesture referents for this category varied by age group. Children’s use of instrument, location, and direction reinforcing gestures differed by age group (Fs > 3.08, ps < .03). Older children produced significantly more instrument and direction gestures than the younger age groups to reinforce information already expressed in their sentences expressing causal relations (Scheffé, ps < .05). In contrast, 2.5-year-olds used more location gestures than 4-year-olds (Scheffé, p = .03), indicating that younger children highlighted goals, producing relatively more location gestures to reinforce their noncausal speech.

Similar to reinforcing gestures, the percentage of supplementary gestures did not differ by age. A one-way ANOVA of the proportion of gesture referents yielded a difference for the use of instrument, F (3, 60) = 4.07, p = .01, η2 = .17, suggesting that children gestured about the proximal cause of the event. Post-hoc analyses showed that 5-year-olds used supplementary instrument gestures more than 2.5- and 3-year-olds (Scheffé, p < .04). These findings indicate that these instrument gestures offered extra information not captured in the verbal modality. All age groups also produced many location and direction supplementary gestures.

When children’s gesture-only expressions were analyzed, results indicated no main effect of age for the use of gestures without speech. Further analyses for each gesture referent (instrument, patient, location, and direction) demonstrated the same results; children produced very few gestures that were not accompanied by speech. However, children produced location gestures more often than other gestures in isolation from speech, F (3, 180) = 16.59, p = .00, η2 = .22.

Last, we examined those children who produced more causal verbs than noncausal ones (9 of 32 in the younger age groups, 22 of 32 in the older age groups). The mean percentages of reinforcing and supplementary gestures were very similar in these two groups (25% reinforcing and 17% supplementary for the younger groups; 25% and 13%, respectively, for the older groups). These results suggest that when children start using sentences that express causal relations, they produce similar numbers of gestures to reinforce and to supplement verbal information.

Discussion

This study was designed to investigate the relationship between speech and gesture in children’s descriptions of simple causal events that were enacted as they watched. Two main issues framed the investigation. First, we examined children’s speech for whether they expressed possible causal event components (agent, patient, instrument, location, direction). Second, we examined the role of gestures as they accompanied speech by analyzing different gesture types (pointing, representational) and gesture categories (reinforcing, supplementary, and gesture-only).

In verbal descriptions, children initially used noncausal verbs and then lexical causative verbs, such as push and hit, before they formed full sentences expressing causal relations that involved more components, such as the instrument. Only older children verbally produced the agent of the sentence. The optional components of location and direction were used similarly across age groups.

Regarding children’s use of gestures, we had two hypotheses: 1) younger children would use more gestures to preview what they would later express in their speech; 2) older children would produce reinforcing gestures to highlight causal information already present in the verbal modality. The findings were surprising. The first hypothesis was partially confirmed: younger children only pointed at the location to reinforce their speech. However, older children produced more gestures than younger ones, using gesture both to reinforce (same information) and to supplement (additional information) their speech. In particular, older children pointed and used representational gestures about the instrument and the direction of the causal event they witnessed. Although these results seem to contradict previous findings showing a decrease in children’s supplementary gestures with age and with advancing language (Iverson & Goldin-Meadow, 2005; Kidd & Holler, 2009; Özçalişkan & Goldin-Meadow, 2005, 2009), they suggest that older children rely on gestures to supplement their speech before they can form complex sentences that express causal relations.

Children’s verbal descriptions of causal events

This controlled experiment validates prior naturalistic research showing a noncausal to causal trajectory in causal sentence production. Although children infer the causal meaning of a novel verb from the sentential context at 2 years of age, they fail to appropriately use causal verbs in causal sentences until the preschool years (Bunger & Lidz, 2004; Naigles, 1990; Tomasello, 2000).

At first, younger children’s explanations involve primarily noncausal verbs, even in describing a causal event. When they were asked to describe the events portrayed in this study, younger children used noncausal sentences such as “the ball floated on water” or “the ball moves from one side to the other.” One might wonder whether the language children heard in the warm-up trials served as a model for their own speech, as the experimenter intentionally used noncausal language. However, under this interpretation there should be no difference between age groups, when in fact one emerged. Older children still interpreted the action as causal and included the agent (i.e., referring to the experimenter as “you”) in their speech. Younger children did not. These findings also corroborate evidence that young children are more likely to notice goals than sources in dynamic events (Lakusta & Landau, 2005).

Our results show that causal descriptions improve remarkably when children reach age 4. It is of note that when children start to produce sentences expressing causal relations, they usually omit the instrument (e.g., “you hit the ball” rather than “you hit the ball with a stick”). Direction of the motion appears infrequently in children’s causal descriptions.

Children’s use of gestures in describing causal events

Our findings on gesture augment the literature by examining children’s gestures in a causality task. Although a longitudinal study is required in order to fully understand the changes in gesture and speech as well as the different functions of gesture at different ages, we can ask whether some components occurring in gesture reinforce verbal information or preview some information that is not yet realized in speech.

If children reinforce speech with gesture, they might use gesture and speech together to refer to the same causal event components. Our findings support this conclusion, showing that children in all age groups produced more reinforcing gestures than other categories. However, gesture referents varied by age. Similar to their verbal descriptions, younger children were very goal-directed and used location reinforcing gestures. In contrast, older children’s sentences expressing causal relations were more likely to be reinforced by instrument and direction gestures. As children produce more sentences expressing causal relations, they use more gestures to convey the same information. Thus, gesture and speech encode strongly related meanings (Gullberg, de Bot, & Volterra, 2008). Importantly, gesture might offer an alternative way to code and organize spatial-perceptual information and engage in the conceptual planning for speech (e.g., Alibali, Kita, & Young, 2000; Kita, 2000).

Our data also suggest that children use many supplementary gestures that refer to components other than those they express in their speech. Gesture is used to convey information about the instrument and about the spatial components of direction and location. When children start producing sentences with causal verbs, such as “you hit the ball,” gestures referring to the instrument preview speech. For example, a child conveys additional information by pointing to the stick or making a fist hand shape. Thus, only older children produced instrument gestures that might later be expressed in speech. Previous research suggests that children’s supplementary gestures predict their future language development (Iverson & Goldin-Meadow, 2005; Özçalişkan & Goldin-Meadow, 2005, 2009), and preschool children use gestures to supplement their speech in demanding tasks (Kidd & Holler, 2009). Our findings demonstrate that even 5-year-olds use extra information in gestures before they form complete sentences expressing causal relations. We suspect that there might be a decline in the number of supplementary gestures once children begin to express instruments in their sentences.

Gestures that refer to spatial components might serve different purposes than gestures for the instrument, because children in all age groups produce location supplementary gestures. Older children also use many direction supplementary gestures. In these cases, gestures seem to convey “optional” information for descriptions, providing extra cues about the task. For example, when expressing the action of “hitting the ball,” gesture is used to describe where the ball stopped and which path the ball followed. Similar to supplementary gestures, children in each age group used many location gestures only in the gestural modality without accompanying speech.

We asked children to report on a causal event that they had just witnessed, performed by the same person who was asking what they had seen. Alibali and her colleagues (2000) pointed out that “speakers use gesture to explore alternative ways of encoding and organizing spatial and perceptual information” (p. 595). Hence, when perceptual cues are not available to the speaker, gestures might help to conceptualize and organize the information (Alibali et al., 2000; Kita, 2000). It is possible that children in all age groups would have produced more gestures if they had been asked to describe the event in a different context (e.g., another room) or to a different person (e.g., a second experimenter or their parents) who had not witnessed the event. Future studies should tease apart how describing the event to another person influences children’s gesture production. In addition, some children might produce few gestures if they are confident of their answers. The role of confidence in our task is an empirical question to be addressed in a future study.

Taken together, gesture reinforces and in some cases precedes language, conveying causal information alongside children’s speech. Perhaps gestures are particularly well suited for describing causal relations. Causal relations are fundamentally dynamic and continuous, and they contain a number of temporally ordered steps. To express these dynamic relations linguistically, children must package these continuous relations into the categories that language describes (Goldin-Meadow, 2006; Golinkoff & Hirsh-Pasek, 2008), which is a difficult task. For example, if someone uses a stick to hit a ball in a pool of water, the moving stick comes into contact with the ball, the ball begins to move and continues moving across the pool, finally reaching the other side, where it comes to a stop. In order to describe the event causally, children must determine when the causation begins and ends (i.e., the boundaries of the event). Although this ambiguity makes causal descriptions particularly hard to learn, gestures might offer a way for children to represent causal events without the burden of placing categorical labels on dynamic events. Thus, this kind of world-to-word mapping might be well suited for representation in gesture before or in concert with speech (Goldin-Meadow, 2003).

Finally, this study raises interesting questions that open a new area of research. The findings suggest a role for gesture in reinforcing and supplementing children’s production of sentences expressing causal relations. Our results using simple causal events demonstrate the trajectory of causal language development in two modalities. This work also expands the definition of “cause” generally investigated in the developmental literature. Not all causes are simple contact causes in which A contacts B to create an effect (Michotte, 1963). Some causal events involve not only contact, but also the direction of contact and forces that can alter that direction (e.g., the wind from a fan redirects a boat to reach the goal; Talmy, 1988; Wolff, 2003, 2007). We know relatively little about how children learn force dynamics and even less about how children express these more complex causal relations in language. Gesture offers a window into children’s understanding of the intervening and invisible variables of causal relations and might allow children to communicate information that would not otherwise be possible to express in language.

Conclusion

We explored how children’s gestures assist and supplement their verbal expressions to fully communicate their underlying knowledge about causation. In both the verbal and gestural domains, children move from noncausal to causal expressions. Gestures reinforce speech at all ages. They also supplement causal language directly by referring to proximal causes (i.e., the instrument) that are not always expressed in speech but are nonetheless understood. Our results provide additional evidence for the role of gesture in children’s language development, suggesting that even older children use gestures to support complex ideas before they can form full-fledged sentences to convey their causal understanding.

Figure 3
Mean percentage of the use of speech-only, gesture-only, reinforcing gestures, and supplementary gestures in each age group (the number of times children use each category divided by total use of speech and gesture referents).

Acknowledgments

This work was supported by NICHD grant 5R01HD050199 and by NSF grants BCS-0642529 to the second and third authors. We thank everyone at the Temple University Infant Lab for their invaluable contributions to this project. Special thanks to Sarah Roseberry, Kelly Fisher, Wendy Shallcross, Yannos Misitzis, Alon Hafri, and Katrina Ferrara. We thank the children and parents who participated in the study. Finally, we would like to express our appreciation to the Editor and the anonymous reviewers for their comments on previous drafts of the manuscript.


Contributor Information

Tilbe Göksun, Temple University.

Kathy Hirsh-Pasek, Temple University.

Roberta Michnick Golinkoff, University of Delaware.

References

  • Alibali MW, Goldin-Meadow S. Gesture-speech mismatch and mechanisms of learning: What the hands reveal about a child’s state of mind. Cognitive Psychology. 1993;25:468–523. [PubMed]
  • Alibali MW, Kita S, Young A. Gesture and the process of speech production: We think, therefore we gesture. Language and Cognitive Processes. 2000;15:593–613.
  • Baillargeon R. How do infants learn about the physical world? Current Directions in Psychological Science. 1994;3:133–140.
  • Bates E. Language and context: The acquisition of pragmatics. New York: Academia Press; 1976.
  • Bowerman M. Learning the structure of causative verbs: a study in the relationship of cognitive, semantic and syntactic development. Papers and Reports on Child Language Development. 1974;8:142–178.
  • Broaders S, Cook S, Mitchell Z, Goldin-Meadow S. Making children gesture brings out implicit knowledge and leads to learning. Journal of Experimental Psychology: General. 2007;136(4):539–550. [PubMed]
  • Bullock M. Animism in childhood thinking: A new look at an old question. Developmental Psychology. 1985;21:217–225.
  • Bullock M, Gelman R. Preschool Children’s Assumptions about Cause and Effect: Temporal Ordering. Child Development. 1979;50:89–96.
  • Bunger A, Lidz J. Proceedings of the Annual Boston University Conference on Language Development. Cascadilla Press; Cambridge: 2004.
  • Carey S. The child as word learner. In: Bresnan J, Miller G, Halle M, editors. Linguistic Theory and Psychological Reality. Cambridge, MA: MIT Press; 1978. pp. 264–293.
  • Church RB, Goldin-Meadow S. The mismatch between gesture and speech as an index of transitional knowledge. Cognition. 1986;23:43–71. [PubMed]
  • Clark EV. First Language Acquisition. New York, NY: Cambridge University Press; 2003.
  • Cohen LB, Rundell LJ, Spellman BA, Cashon CH. Infants’ perception of causal chains. Psychological Science. 1999;10:412–418.
  • Cohen LB, Amsel G, Redford MA, Casasola M. The development of infant causal perception. In: Slater A, editor. Perceptual development: Visual, Auditory, and Speech Perception in Infancy. East Sussex, UK: Psychology Press Ltd; 1998. pp. 167–209.
  • das Gupta P, Bryant PE. Young children’s causal inferences. Child Development. 1989;60:1138–1146. [PubMed]
  • Ehrlich SB, Levine S, Goldin-Meadow S. The importance of gesture in children’s spatial reasoning. Developmental Psychology. 2006;42:1259–1268. [PubMed]
  • Fisher C. Structural limits on verb mapping: the role of analogy in children’s interpretations of sentences. Cognitive Psychology. 1996;31:41–81. [PubMed]
  • Furman R, Özyürek A, Allen S. Learning to express causal events across languages: What do speech and gesture patterns reveal? In: Bamman D, Magnitskaia T, Zaller C, editors. Proceedings of the 30th Annual Boston University Conference on Language Development. Somerville, MA: Cascadilla Press; 2006. pp. 190–201.
  • Garber PR, Goldin-Meadow S. Gesture offers insight into problem-solving in adults and children. Cognitive Science. 2002;26:817–831.
  • Goldin-Meadow S. The development of gesture and speech as an integrated system. In: Iverson JM, Goldin-Meadow S, editors. The nature and functions of gesture in children’s communications. San Francisco: Jossey-Bass; 1998. pp. 29–42.
  • Goldin-Meadow S. Hearing gesture: How our hands help us think. Cambridge: Harvard University Press; 2003.
  • Goldin-Meadow S. Talking and Thinking With Our Hands. Current Directions in Psychological Science. 2006;15:34–39.
  • Golinkoff RM. Semantic development in infants: The concept of agent and recipient. Merrill-Palmer Quarterly. 1975;21:181–193.
  • Golinkoff RM, Hirsh-Pasek K. How toddlers begin to learn verbs. Trends in Cognitive Science. 2008;12:397–403. [PubMed]
  • Golinkoff RM, Kerr JL. Infants’ perception of semantically defined action role changes in filmed events. Merrill-Palmer Quarterly. 1978;24:53–61.
  • Gopnik A, Sobel DM. Detecting blickets: How young children use information about novel causal powers in categorization and induction. Child Development. 2000;71:1205–1222. [PubMed]
  • Gopnik A, Shulz L. Causal Learning: Psychology, Philosophy, and Computation. New York: Oxford University Press; 2007.
  • Gullberg M, de Bot K, Volterra V. Gestures and some key issues in language development. Gesture. 2008;8:149–179.
  • Hirsh-Pasek K, Golinkoff R, Naigles L. Young children’s ability to use syntactic frames to derive meaning. In: Hirsh-Pasek K, Golinkoff R, editors. The origins of grammar: Evidence from early language comprehension. Cambridge, MA: MIT Press; 1996. pp. 123–158.
  • Iverson J, Goldin-Meadow S. Gesture paves the way for language development. Psychological Science. 2005;16:367–371. [PubMed]
  • Jackendoff R. Semantic Structures. Cambridge, MA: MIT Press; 1990.
  • Kendon A. Gesture and speech: Two aspects of the process of utterance. In: Key MR, editor. Nonverbal Communication and Language. The Hague: Mouton; 1980. pp. 207–227.
  • Kidd E, Holler J. Children’s use of gesture to resolve lexical ambiguity. Developmental Science. 2009;12(6):903–913. [PubMed]
  • Kita S. How representational gestures help speaking. In: McNeill D, editor. Language and gesture: Window into thought and action. Cambridge, UK: Cambridge University Press; 2000. pp. 162–185.
  • Krist H, Fieberg EL, Wilkening F. Intuitive physics in action and judgment: The development of knowledge about projectile motion. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1993;19:952–966.
  • Lakusta L, Landau B. Starting at the end: The importance of goals in spatial language. Cognition. 2005;96:1–33. [PubMed]
  • Lakusta L, Wagner L, O’Hearn K, Landau B. Conceptual foundations of spatial language: Evidence for a goal bias in infants. Language Learning and Development. 2007;3(3):179–197.
  • Leslie AM. The perception of causality in infants. Perception. 1982;11:173–186. [PubMed]
  • Leslie AM. Spatiotemporal continuity and the perception of causality in infants. Perception. 1984;13:287–305. [PubMed]
  • Leslie AM, Keeble S. Do six-month-old infants perceive causality? Cognition. 1987;25:265–288. [PubMed]
  • Levin B. English Verb Classes and Alternations: A Preliminary Investigation. University of Chicago Press; Chicago, IL: 1993.
  • McNeill D. Hand and mind: What gestures reveal about thought. Chicago: University of Chicago Press; 1992.
  • McNeill D. Speech and gesture integration. In: Iverson JM, Goldin-Meadow S, editors. The nature and functions of gesture in children’s communication. San Francisco: Jossey-Bass Inc, Publishers; 1998. pp. 11–27.
  • McNeill D. Gesture and Thought. Chicago: University of Chicago Press; 2005.
  • Michotte AE. The perception of causality. New York: Basic Books; 1963.
  • Nicoladis E, Mayberry R, Genesee F. Gesture and early bilingual development. Developmental Psychology. 1999;35:514–526. [PubMed]
  • Naigles LR. Children use syntax to learn verb meanings. Journal of Child Language. 1990;17:357–374. [PubMed]
  • Oakes LM, Cohen LB. Infant perception of a causal event. Cognitive Development. 1990;5:193–207.
  • Özçalişkan S, Goldin-Meadow S. Gesture is at the cutting edge of early language development. Cognition. 2005;96:B101–B113. [PubMed]
  • Özçalişkan S, Goldin-Meadow S. When gesture-speech combinations do and do not index linguistic change. Language and Cognitive Processes. 2009;24(2):190–217. [PMC free article] [PubMed]
  • Özyürek A. Do speakers design their co-speech gestures for their addressees? The effects of addressee location on representational gestures. Journal of Memory and Language. 2002;46:688–704.
  • Özyürek A, Kita S, Allen S, Furman R, Brown A, Ishizuka T. Development of cross-linguistic variation in speech and gesture: Motion events in English and Turkish. Developmental Psychology. 2008;44:1040–1054. [PubMed]
  • Pine KJ, Lufkin N, Messer D. More gestures than answers: Children learning about balance. Developmental Psychology. 2004;40:1059–1067. [PubMed]
  • Saxe R, Tenenbaum JB, Carey S. Secret agents: Inferences about hidden causes by 10- and 12-month-old infants. Psychological Science. 2005;16:995–1001. [PubMed]
  • Saxe R, Tzelnic T, Carey S. Knowing who dunnit: Infants identify the causal agent in an unseen causal interaction. Developmental Psychology. 2007;43(1):149–158. [PubMed]
  • Shultz TR. Rules of causal attribution. Monographs of the Society for Research in Child Development. 1982;47(1, Serial No. 194).
  • Sobel DM, Tenenbaum JB, Gopnik A. Children’s causal inferences from indirect evidence: Backwards blocking and Bayesian reasoning in preschoolers. Cognitive Science. 2004;28:303–333.
  • Talmy L. Force dynamics in language and cognition. Cognitive Science. 1988;12:49–100.
  • Tomasello M. The item-based nature of children’s early syntactic development. Trends in Cognitive Sciences. 2000;4:156–163. [PubMed]
  • Wolff P. Direct causation in the linguistic coding and individuation of causal events. Cognition. 2003;88:1–48. [PubMed]
  • Wolff P. Representing causation. Journal of Experimental Psychology: General. 2007;136:82–111. [PubMed]