Cognition. Author manuscript; available in PMC 2010 August 1. PMCID: PMC2750876; NIHMSID: NIHMS121840.

Does language about similarity play a role in fostering similarity comparison in children?

Abstract

Commenting on perceptual similarities between objects stands out as an important linguistic achievement, one that may pave the way towards noticing and commenting on more abstract relational commonalities between objects. To explore whether having a conventional linguistic system is necessary for children to comment on different types of similarity comparisons, we observed four children who had not been exposed to usable linguistic input—deaf children whose hearing losses prevented them from learning spoken language and whose hearing parents had not exposed them to sign language. These children developed gesture systems that have language-like structure at many different levels. Here we ask whether the deaf children used their gestures to comment on similarity relations and, if so, which types of relations they expressed. We found that all four deaf children were able to use their gestures to express similarity comparisons (POINT TO CAT+POINT TO TIGER) resembling those conveyed by 40 hearing children in early gesture+speech combinations (cat+POINT TO TIGER). However, the two groups diverged at later ages. Hearing children, after acquiring the word like, shifted from primarily expressing global similarity (as in cat/tiger) to primarily expressing single-property similarity (as in crayon is brown like my hair). In contrast, the deaf children, lacking an explicit term for similarity, continued to primarily express global similarity. The findings underscore the robustness of similarity comparisons in human communication, but also highlight the importance of conventional terms for comparison as likely contributors to routinely expressing more focused similarity relations.

Keywords: similarity comparison, homesign, deafness, gesture, gesture-speech combination, early language development, x is like y, similes, metaphor

Similarity is a central construct in explanations of knowledge acquisition, and underlies much of children’s early learning about categories (Smith, 1983; Samuelson & Smith, 2000). For example, 18-month-olds can sort objects into categories based on shared perceptual features (e.g., boxes vs. balls; Gopnik & Meltzoff, 1992; Sugarman, 1983), and even preverbal children can use perceptual similarity to categorize animals or human faces (see Oakes & Madole, 2000, for a review). The fact that preverbal children, as well as other nonverbal animals (including pigeons, Herrnstein, Loveland & Cable, 1976, and chimpanzees, Oden, Thompson & Premack, 1990), respond systematically to similarity makes it clear that having a codified language is not essential to recognize similarities between objects. But does learning an explicit term for comparison help promote the routine expression of more abstract similarity relations?

All languages have symbolic markers designed to highlight similarities between objects. The word like in the ‘x is like y’ construction (e.g., the tiger is like a cat) plays this role in English and is frequently found in the talk English-learning children hear (Özçalışkan, Goldin-Meadow & Gentner, 2009). This construction thus offers children a model for their early expressions of similarity. And children take advantage of this model, using the word like to express similarities at a relatively young age. Three- to four-year-old children spontaneously produce novel expressions that highlight similarities between objects (Billow, 1981; Clark, 1973; Chukovsky, 1968; Elbers, 1988; Winner, 1979), describing, for example, a long pencil as looking like a rocket ship (Gardner, Winner, Bechhofer & Wolf, 1978). Children of this age are also able to reliably choose sentence endings based on similarity when asked about expressions that involve comparisons between objects in experimental contexts (e.g., a river is like a snake) (Billow, 1975; Epstein & Gamlin, 1994; Gardner, Kircher, Winner & Perkins, 1975; Mendelsohn, Robinson, Gardner & Winner, 1984; Vosniadou & Ortony, 1983; Winner, McCarthy & Gardner, 1980).

Does having constructions that make comparison explicit (for example, x is like y) in their linguistic input play a role in getting children to comment on similarities between objects? On the one hand, the need to communicate about similarities may be so basic that we might guess that learning words for comparison would make no difference; children might be able to express the same types of similarity comparisons regardless of whether they have explicit terms for comparison in their lexicons. On the other hand, although the simple, global similarity that often holds between objects from the same category (e.g., the similarity between a cat and a tiger) may be salient even to very young children, there is considerable evidence that more focused partial similarities (e.g., the similarity between a red apple and a red book, objects from different categories) are not as obvious (Gentner & Rattermann, 1991; Smith, 1987). Thus, their emergence in child conversation might be more closely tied to the emergence of explicit terms for comparison.

To explore these possibilities, we examined children who have had no exposure to a usable language model and thus no exposure to an explicit term for similarity (i.e., the word like). We asked whether these children comment on similarity between objects and, if so, whether their similarity comparisons resemble those produced by hearing children who do have access to an explicit term that highlights comparison and who can communicate about global similarities between objects from the same basic category (cat is like tiger), as well as more focused, partial similarities between objects from different categories (red apple is like red book).

Deaf children who have hearing losses so profound as to preclude the acquisition of spoken language are unable to profit from the conventional spoken language that surrounds them. If these deaf children are born to hearing parents, they may not be exposed to a conventional sign language until adolescence. Despite their lack of a usable conventional language model, these children invent gesture systems, called homesigns, to communicate with the hearing individuals in their worlds (Feldman, Goldin-Meadow & Gleitman, 1978; Goldin-Meadow, 2003). The deaf children use pointing gestures and invent iconic gestures to refer to objects (Goldin-Meadow, Butcher, Mylander & Dodge, 1994) and therefore might be able to use their gestures to communicate about similarities between objects. We explore here whether deaf children use their homesign gestures to express similarity relations even if they are never exposed to an explicit term for comparison (the word like). If so, we ask whether their similarity comparisons resemble those produced by hearing children who have access to the word like.

How might a deaf child with only a homemade gesture system express a similarity relation? One strategy would be to invent a gesture for like. However, this turns out to be difficult, as the deaf children’s gestures were rarely arbitrary in form. All of the deaf children in our study were being educated using oral methods (e.g., lip-reading, auditory training) and their parents had been advised by educators to talk to their children whenever possible and avoid using sign language or gesture. The children’s gestures therefore had to be transparent enough to be understood by people who shared neither their gesture system nor their desire to communicate with gesture. It is apparently not easy to invent a gesture form that transparently conveys the meaning like and, indeed, none of the deaf children did. An alternative strategy would be to juxtapose two gestures and let the listener infer the similarity relation between them (e.g., POINT TO BALLOON+POINT TO LOLLIPOP). This, in fact, is the strategy that the deaf children adopted.

One problem immediately arises, however—we cannot be certain that a child who merely juxtaposes two gestures intends to convey a similarity comparison. Our solution to this problem was to use similarity expressions produced by young hearing children as the standard against which to assess the deaf children’s gesture+gesture combinations. Before young hearing children produce the ‘x is like y’ construction during the early stages of language-learning (e.g., lollipop is like a balloon), they produce similarity comparisons without using the word like by juxtaposing a gesture and a word (e.g., balloon+POINT TO LOLLIPOP, Özçalışkan & Goldin-Meadow, 2006). We used the similarity expressions that hearing children produce with and without like as a standard against which to measure the deaf children’s gesture+gesture similarity expressions (all of which lacked a term for like).

If having an explicit term for comparison (i.e., the word like) is not instrumental in expressing both global and focused similarity relations, then we would expect the deaf children to gesture about the same kinds of similarity relations that the hearing children talk about. If, however, having an explicit term for comparison is instrumental in expressing the full range of similarity relations, then the deaf children may not communicate about the same types of similarity relations as the hearing children. We describe here the similarity expressions that deaf children produce in the absence of conventional linguistic input, and compare them to similarity expressions produced by hearing children who are learning English.

METHODS

Participants

We examined videotapes of four deaf children (2 boys, 2 girls), referred to here as Abe, David, Marvin and Kathy, each followed longitudinally, starting at ages 2;3, 2;10, 2;11, and 3;1, respectively. The children came from working class families, all of whom spoke English. All four children were profoundly deaf (>90dB bilateral hearing loss across the entire speech range), and were being educated in preschools by an oral method of deaf education that advocated early and intense training in sound sensitivity, lip-reading, and speech production. It is very difficult to acquire language via lip-reading, and none of the four children in our sample had made progress in acquiring spoken English at the time of our observations. Moreover, all four children were being raised by hearing parents who themselves did not know a conventional sign language. Consequently, none of the children had been exposed to sign language, either at home by their parents or in preschool by their teachers.

Nonetheless, all four children developed spontaneous gesture systems to communicate, and these gesture systems were structured in language-like ways (see Goldin-Meadow & Mylander, 1984, for further details on the deaf children’s communicative capacities). The deaf children were recorded on videotape, gesturing while they played with their parents, siblings, or the experimenters. These video sessions took place in their homes for 70–130 minutes at a time, at intervals of approximately two months. The deaf children were followed longitudinally for an average of 3 years and 3 months from age 2;3 to age 4;2.

Although the deaf children were not exposed to a conventional sign language, they did see the gestures that hearing speakers routinely produce when they talk. In previous work, we have found that the hearing mothers of the deaf children in our sample did produce gestures as they spoke to their children (Goldin-Meadow & Mylander, 1983, 1984). However, the gestures that the hearing mothers produced were different on many levels from their children’s gestures. For example, unlike their children, the mothers tended to produce single gestures rather than gesture strings (i.e., gesture+gesture combinations). Moreover, even when mothers did concatenate their gestures into strings, their strings did not show the same structural regularities as their children’s gesture strings. To explore whether the gestures that the hearing mothers produce might have served as a model for the deaf children’s expressions of similarities, we applied the coding system developed to analyze the deaf children’s gestures to the gestures that the mothers produced when talking to their children.

In addition, we examined videotapes of 40 hearing children (22 girls, 18 boys) followed longitudinally for two years, from 1;2 to 2;10 (see Footnote 1). The hearing children were observed in their homes for 90 minutes every four months while interacting with their parents. The parents were told to interact with their children as they normally would and ignore the presence of the experimenter. The hearing children’s families were a heterogeneous mix in terms of family income and ethnicity, and were representative of the population distribution in the greater Chicago area. All hearing children were being raised as monolingual English speakers. Data collection involved home visits for both the deaf and hearing children. However, the experimenter often interacted with the deaf children along with or instead of the child’s parent; the hearing children interacted only with their parents.

Transcription and coding

We transcribed all of the children’s communicative and intelligible words and gestures. The criterion for coding a gesture or a word as communicative was clear behavioral evidence that the child meant to engage the listener. Sounds that were used reliably to refer to entities, properties, or events (doggie, pretty, gone), along with onomatopoeic sounds (e.g., meow, choo-choo) and conventionalized evaluative sounds (e.g., oopsie, uh-oh), were counted as words. Communicative hand movements that did not involve direct manipulation of objects (e.g., twisting a jar open) or a ritualized game (e.g., patty cake) were counted as gestures. The only exception was when the child held up an object to bring it to the listener’s attention; although these movements are direct actions on an object, they serve the same function as pointing gestures and thus were considered gestures. We divided all gesture and speech production into communicative acts. A communicative act was defined as a word or gesture, alone or in combination, preceded and followed by a pause, change in conversational turn, or change in intonational pattern.2

We extracted all communicative acts conveying relations between two objects. Our first concern was that not all juxtapositions of two objects necessarily involved similarity relations. Consequently, we began our analyses by dividing communicative acts juxtaposing two objects into those that conveyed thematic relations (e.g., mommy+POINT TO BALLOON, meaning mommy is holding the balloon) and those that conveyed similarity relations (e.g., lollipop+POINT TO BALLOON, meaning the lollipop is like the balloon) (see Özçalışkan & Goldin-Meadow, 2005, 2009, for more information on thematic relations in the hearing children’s speech and gestures, and Goldin-Meadow & Mylander, 1984, for information on thematic relations in the deaf children’s gestures). We next classified all instances of similarity relations into three categories based on form: (1) similarity comparison in gesture-only (e.g., POINT TO LOLLIPOP+POINT TO BALLOON), (2) similarity comparison in gesture+speech combinations without the word like (e.g., lollipop+POINT TO BALLOON), and (3) similarity comparison in speech, with or without gesture, containing the word like (e.g., balloon is like a lollipop; like a lollipop+POINT TO BALLOON).

Some gesture+gesture and gesture+speech combinations were inherently ambiguous; notably, gestures pointing to two items from the same basic-level category (e.g., POINT TO A TOY WHALE+POINT TO PICTURE OF A WHALE). The child could be pointing out the similarity between the toy whale and the picture of the whale. But he might also be using the picture of the whale to identify the toy whale as a whale, akin to a gesture+speech combination in which a hearing child points at the toy whale and says whale. Because of the inherent ambiguity in gesture+speech and gesture+gesture combinations of this type, we decided to be conservative and exclude all combinations in which the two entities in the comparison were from the same basic-level category; for example, dog+POINT TO DOG TOY, a gesture+speech combination; POINT TO TOY DOG + POINT TO DOG PICTURE, a gesture+gesture combination. On average, the deaf children produced M=6.12 (SD=6.74) gesture+gesture combinations of this type per hour, and the hearing children produced M=13.5 (SD=8.62) gesture+speech combinations of this type per hour.

We further coded all similarity relations in terms of the category membership of the objects compared: The objects either belonged to the same superordinate category or to different superordinate categories. In addition, we coded all similarity relations in terms of the degree of feature overlap: The similarity between objects could be based either on a single feature or on multiple features. Single-feature comparisons always involved one dimension of similarity between the two objects, for example, color, shape, size, smell, sound, or action. Multi-feature comparisons involved two or more dimensions along which the two objects were compared. Single-feature comparisons involving objects from different superordinate categories highlight the partial overlap of features between two objects and thus require a focus on similarity; we therefore refer to these comparisons as focused similarity comparisons. In contrast, multi-feature comparisons involving objects from the same superordinate category are comments on the overall similarity between two objects; we therefore refer to these comparisons as global similarity comparisons. We also classified the objects described in the similarity relations into types: people, animals, body parts, vehicles, clothing, furniture, appliances, kitchen utensils, tools, musical instruments, food, plants, activity toys, and places (see examples in Table 1). To assess the gestural model that the deaf children had for the expression of similarity relations, we coded the gestures that the deaf children’s hearing mothers produced when talking to their children for the same three distinctions: Category membership of the objects being compared (same or different), degree of feature overlap between the objects (single- or multiple-feature), and type of object (people, animals, etc.).
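To make the content coding concrete, the sketch below shows one way the two dimensions (category membership and feature overlap) could be combined to label a comparison as global or focused. This is a minimal illustration in Python; the record fields and function names are ours, not part of the study’s actual coding materials.

```python
from dataclasses import dataclass

@dataclass
class SimilarityComparison:
    """One similarity comparison extracted from a communicative act."""
    same_superordinate: bool   # do the two objects share a superordinate category?
    n_shared_features: int     # number of dimensions compared (color, shape, size, ...)

def classify(c: SimilarityComparison) -> str:
    """Apply the two content dimensions described in the text."""
    if c.n_shared_features == 1 and not c.same_superordinate:
        return "focused"   # single feature, different categories (red apple / red book)
    if c.n_shared_features >= 2 and c.same_superordinate:
        return "global"    # multiple features, same category (cat / tiger)
    return "other"         # remaining cells of the 2x2 coding space

# A cross-category, color-only comparison counts as a focused comparison
print(classify(SimilarityComparison(same_superordinate=False, n_shared_features=1)))
```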

Table 1
Examples of types of comparisons and types of objects that hearing and deaf children used in their similarity expressions

Reliability for gesture coding was assessed on a subset of the videotaped sessions by independent coders. For the hearing children, agreement between coders was 88% for identifying gestures (i.e., presence or absence of a gesture), 91% for assigning meaning glosses to each gesture, and 96% for coding semantic relations (e.g., thematic vs. similarity relation) in multi-word speech and supplementary gesture-speech combinations. For the deaf children and their hearing mothers, agreement ranged between 93% and 97% for identifying gestures, between 93% and 95% for assigning meaning to gestures, and between 94% and 100% for coding semantic relations in gesture-gesture combinations.
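As a rough illustration of the reliability measure (simple percent agreement between two coders, not the authors’ actual reliability software), consider:

```python
def percent_agreement(coder_a, coder_b):
    """Percentage of items to which two coders assigned the same code."""
    assert len(coder_a) == len(coder_b)
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * matches / len(coder_a)

# Hypothetical codes for ten communicative acts ("sim" = similarity, "them" = thematic)
coder_a = ["sim", "them", "them", "sim", "them", "sim", "sim", "them", "them", "sim"]
coder_b = ["sim", "them", "them", "sim", "sim", "sim", "sim", "them", "them", "sim"]
print(percent_agreement(coder_a, coder_b))  # 90.0
```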

RESULTS

Similarity vs. thematic relations in hearing and deaf children’s communications about objects

Figure 1 shows the mean percentage of thematic (black bars) and similarity (white bars) relations observed in the hearing children’s multi-word speech and gesture+speech combinations and in the deaf children’s gesture+gesture combinations across all the observation sessions (see Footnote 3). The majority of the communicative acts conveying relations between two objects involved thematic relations (e.g., mommy + POINT TO JUICE) for both the hearing (speech: 91%, gesture+speech: 61%) and deaf (70%) children. Nevertheless, both groups also expressed a substantial percentage of similarity relations (e.g., cat+POINT AT LION; POINT TO CAR+POINT TO TRUCK): 10–30% for the hearing and deaf children.

Figure 1
Mean percentage of thematic (black bars) and similarity (white bars) relations produced by hearing children in speech or gesture+speech combinations and by deaf children in gesture+gesture combinations

Thus the deaf children, who were not exposed to a usable language model, were nevertheless able to express similarity relations in their homesigns. Moreover, the percentage of similarity vs. thematic relations expressed was comparable in the deaf and hearing children, t(41)=0.34, p=0.73, η²=.003 (see Footnote 4)—in both groups, approximately one third of the children’s early expressions conveying relations between two objects involved similarity comparisons.
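For readers who want to reconstruct the effect size, η² for a t-test can be recovered from the reported t value and degrees of freedom; the conversion below is a standard formula, not one given in the original text:

```latex
\eta^2 = \frac{t^2}{t^2 + df} = \frac{(0.34)^2}{(0.34)^2 + 41} \approx .003
```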

We turn next to the types of similarity relations that the children produced. We begin by describing the similarity comparisons that the 40 hearing children expressed in speech using the word like. These descriptions will establish the standard against which we can evaluate the deaf children. We then describe the hearing children’s similarity comparisons in gesture+speech without the word like. Finally, we describe the deaf children’s similarity comparisons expressed in gesture+gesture.

Similarity expressions with the word like in the hearing children

Emergence of similarity relations in hearing children’s speech with like

As shown in Figure 2 (solid lines), only a few hearing children produced similarity relations with the word like at 26 (N=4) and 30 (N=9) months. However, by 34 months, more than half of the 40 children were producing similarity expressions in speech with like. Across the 6 observation sessions, 17 of the 40 children never produced similarity relations with the word like. Of the 23 children who did express similarity using the word like, 18 either maintained or increased their production of this type of comparison over time, compared to 5 who decreased their production (p<.01, two-tailed sign-test).5

Figure 2
Mean number of similarity relations hearing children produced per hour of observation at each observation session either in a gesture-speech combination (dotted lines) or in speech with the word ‘like’ (solid lines)

The number of hearing children who used gesture in their similarity expressions with like also increased from 2 at 26 months to 11 at 34 months. The hearing children used these gestures to specify an object of comparison not conveyed in speech (e.g., like ice-cream cone+POINT TO MUSHROOM [26 months]) or to clarify an object expressed by a referentially ambiguous proform (e.g., they look like strawberries+POINT TO TOY TOMATOES [30 months]). As these examples suggest, gesture often conveyed the target domain (mushroom, tomatoes) of the comparison, rather than the source domain (ice-cream cone, strawberries). Indeed, in their early similarity expressions with like, children virtually always conveyed the source in speech, relying on gesture or context to convey the target (Özçalışkan & Goldin-Meadow, 2006). This marked asymmetry between source and target is consistent with the general pattern found in adult speech expressing similarity and metaphor (Gentner, 1983; Gleitman et al., 1996; Ortony, 1979; Tversky, 1977).

Types of similarity relations conveyed by hearing children in speech with like

We turn next to the types of similarity comparisons that the children conveyed in similarity expressions containing like. We examine the types of similarity comparisons before, at, and after the 30-month observation session, the moment when like became frequent in the hearing children’s similarity comparisons.

Figure 3A displays the proportion of similarity comparisons with like that the hearing children produced before 30 months, at 30 months, and after 30 months, classified according to whether the objects compared belonged to the same or different superordinate categories. The majority of similarity comparisons in speech with like before 30 months and at 30 months involved objects from the same superordinate category (90% and 67%, respectively). However, after 30 months, we see a shift from same-category object comparisons (e.g., cat and tiger) to different-category object comparisons (e.g., balloon and lollipop). By 34 months, only 30% of the similarity comparisons the hearing children produced in speech with like involved objects that belonged to the same superordinate category; 70% involved objects belonging to different superordinate categories.

Figure 3
Similarity relations produced by hearing children in speech with the word “like” (Panel A) or in gesture+speech without the word “like” (Panel B) and by deaf children in gesture (Panel C), grouped according to whether the objects compared belonged to the same or different superordinate categories

The same pattern emerges if we consider the degree of feature overlap. Figure 4A displays the proportion of similarity comparisons with like that the hearing children produced before 30 months, at 30 months, and after 30 months, classified according to the degree of feature overlap (single feature vs. multiple features). The majority of similarity comparisons in speech with like that the hearing children produced before 30 months and at 30 months were based on multiple features (80% and 92%, respectively). Children’s comparisons became more targeted after 30 months, and by 34 months only 30% of the comparisons the children produced in speech with like were based on multiple features; 70% were based on a single feature (e.g., color, shape, or size similarity between two objects).

Figure 4
Similarity relations produced by hearing children in speech with the word “like” (Panel A) or in gesture+speech without the word “like” (Panel B) and by deaf children in gesture (Panel C), grouped according to whether the comparison was based on a single feature or multiple features

Similarity expressions without like in hearing children

Emergence of similarity relations in hearing children’s gesture+speech without like

The hearing children did not express similarity by juxtaposing two words without like (e.g., balloon lollipop) or by juxtaposing two gestures (e.g., POINT TO BALLOON + POINT TO LOLLIPOP). However, they did produce what appeared to be similarity expressions without like in their gesture + speech combinations (balloon + POINT TO LOLLIPOP [26 months]). Can we be sure that combinations of this sort were used to highlight the similarity between two objects (e.g., roundness of balloon and lollipop)? One type of confirmatory evidence comes from the developmental offset of gesture+speech combinations without like in relation to the onset of similarity expressions with like.

As seen in Figure 2 (dotted lines), the hearing children produced a small number of gesture+speech combinations without like when first observed at 14 months and increased their production of these combinations at 18 months. Interestingly, the number of gesture+speech combinations expressing similarity without like remained stable until 30 months when it began to decline—precisely the age at which the children began producing a sizeable number of similarity expressions with like. Thus, the children became less likely to produce similarity expressions without like at just the point when they were able to produce an explicit comparison marker (i.e., like).

This pattern was also evident at the individual child level—20 children produced their first gesture+speech combination expressing similarity without like before producing their first similarity expression with like; only one child showed the reverse pattern (p<.001, two-tailed sign test). On average, these 21 children produced their first similarity expression in gesture+speech without like at 20.20 (SD=5.45) months, significantly earlier than they produced their first similarity expression with like, which took place at 30.95 (SD=3.07) months, t(20)=8.57, p<.001, η2=.79. Of the remaining 19 hearing children, 15 produced gesture+speech combinations expressing similarity without like during our observation sessions and had not yet produced similarity expressions with like; 2 produced their first similarity expression with and without like during the same observation session; and only 2 had not yet produced similarity comparisons with or without like at the time of our last observation.
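The two statistics in this paragraph can be reproduced with a short script; the sketch below assumes SciPy is available and uses only the counts and values reported above:

```python
from scipy.stats import binomtest

# Two-tailed sign test: 20 children showed the without-like pattern first,
# 1 child showed the reverse pattern (children without a clear order are excluded).
print(binomtest(k=1, n=21, p=0.5, alternative="two-sided").pvalue)  # ~2e-05, i.e., p < .001

# Effect size for the paired t-test on onset ages: eta-squared = t^2 / (t^2 + df)
t, df = 8.57, 20
print(t**2 / (t**2 + df))  # ~0.79
```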

Types of objects hearing children compare in similarity expressions with and without like

Another line of evidence suggesting that hearing children’s similarity expressions without like functioned to highlight similarity between objects comes from the fact that the utterances without like resembled those with like in terms of the kinds of objects compared. Table 2 displays the proportion of objects that hearing children conveyed in their similarity expressions, classified according to type of object. The top row displays the proportion of objects mentioned in the hearing children’s similarity expressions in speech with like. The second row presents the objects conveyed in the spoken part of the hearing children’s gesture+speech combinations without like, and the third row presents the objects conveyed in the gestured part of the hearing children’s gesture+speech combinations without like.

Table 2
Proportion of different kinds of entities hearing and deaf children compared in their similarity expressions

As in similarity expressions with like, in similarity expressions without like, the person, animal, food, and body part categories accounted for approximately 65% of the objects conveyed in speech, and 65% of the objects conveyed in gesture; activity toys, vehicles, clothing and places accounted for another 15–25% in speech and in gesture. There was a significant, positive correlation between the different types of objects conveyed in speech in similarity expressions with like and in speech in similarity expressions without like (rows 1 and 2 in Table 2, Spearman’s rho=.81, p<.01); and between the different types of objects conveyed in speech in similarity expressions with like and in gesture in similarity expressions without like (rows 1 and 3, Spearman’s rho=.71, p<.01).6
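A minimal sketch of this correlation analysis follows (the proportions below are placeholders for illustration; the actual values appear in Table 2):

```python
from scipy.stats import spearmanr

# Hypothetical proportions of each object type (person, animal, food, body part, ...)
with_like_speech    = [0.25, 0.20, 0.12, 0.08, 0.07, 0.06, 0.05, 0.04, 0.13]
without_like_speech = [0.22, 0.18, 0.14, 0.10, 0.06, 0.07, 0.04, 0.05, 0.14]

rho, p = spearmanr(with_like_speech, without_like_speech)
print(rho, p)
```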

The developmental timing of similarity expressions without like relative to similarity expressions with like, in conjunction with the comparable patterns in types of objects, suggests that the utterances we have been calling similarity expressions without like really do express a similarity relation. But perhaps the child is merely trying to label an object for which he does not have a word. For example, the child might point at a small hole and call it a balloon because he does not know the word hole, and balloon is his best substitute. We think this possibility unlikely simply because children did have words for 51% (SD=26.88) of the objects that they indicated in gesture in their similarity expressions. True errors (where there was no apparent similarity between objects, e.g., ball+POINT AT RIBBON) were infrequent in the hearing children (10% of all gesture-speech combinations conveying relations between objects; M=0.52 [SD=0.81]), and the rate of these errors did not systematically increase or decrease over time (see Footnote 7).

Types of similarity relations hearing children convey in gesture+speech without like

The findings thus far suggest that children can convey similarity relations between objects across gesture and speech several months before they begin to convey similarity relations explicitly marked with like. But are these early similarity comparisons without like as sophisticated as the later similarity comparisons with like? It is possible that learning the word like helps children express similarity relations that they might not have otherwise expressed.

Figure 3B shows the proportion of similarity comparisons that the hearing children produced in a gesture+speech combination without the word like before, at, and after 30 months, classified according to whether the objects compared belonged to either the same or different superordinate categories. At all three age points, the majority of similarity comparisons that the children produced in gesture+speech without like involved objects from the same superordinate category (77%, 77%, 85%, respectively).

The same pattern was true for the degree of feature overlap. Figure 4B displays the proportion of similarity comparisons without like that the hearing children produced in a gesture+speech combination before, at, and after 30 months, classified according to the degree of feature overlap (single feature vs. multiple feature). The majority of similarity comparisons that the children produced in gesture+speech without like were based on multiple features at all three of these early time points (95%, 89% and 89%, respectively).

Thus, the types of similarity comparisons the hearing children produced in gesture+speech without like resembled the comparisons that they produced in speech with like before 30 months: both involved objects from the same superordinate category and were typically comparisons based on multiple features, that is, overall comparisons that were global. With the onset and continued use of the comparison marker like, children’s comparisons changed; by 34 months, the majority (70%) of their similarity comparisons compared objects that were from different superordinate categories and that shared a single feature—that is, they produced highly focused comparisons. These focused similarity comparisons were extremely rare or nonexistent in the children’s gesture+speech combinations without like at any time point, suggesting that the routine use of an explicit word for comparison makes it easier for the child to comment on—and perhaps notice—more focused similarity comparisons.

Similarity expressions without like in deaf children

Having discovered that hearing children convey similarity relations without like, we are now ready to examine the similarity expressions that the deaf children produced, none of which contained a gesture for like.

Emergence of similarity relations in deaf children’s gesture+gesture combinations

All four deaf children produced similarity comparisons in their gesture+gesture combinations, but, for at least two of the deaf children, similarity comparisons were delayed compared to hearing children.8 Abe produced his first similarity comparison at 34 months and Marvin at 50 months (recall that the average onset of similarity expressions for hearing children was 21 months). David and Kathy produced similarity expressions during their first observation sessions at 34 months and 37 months, respectively; we therefore cannot pinpoint age of onset for these two children.

Table 3 presents the number of similarity expressions without like that each deaf child produced per hour (beginning when the child first produced similarity expressions). For comparison, the table also presents the mean number of similarity expressions with and without like that hearing children produced per hour (beginning when the child first produced similarity expressions). The numbers of similarity expressions that the deaf children produced clearly fall within the range for the hearing children. Note also that hearing and deaf children both exhibited wide individual variability in their overall production of similarity comparisons.

Table 3
Mean number of similarity relations in children’s early communications

Types of objects deaf children compare in similarity expressions without like

Do the deaf children’s similarity expressions resemble hearing children’s similarity expressions in terms of the types of objects being compared? The short answer is yes. Table 2 presents the data (the bottom rows of the table display individual data for the deaf children; the last row displays the mean for all four). The deaf children as a group produced at least a few similarity expressions of each object type. As in the hearing children’s similarity expressions, the person, animal, food, and body part categories accounted for 72% of the objects that the deaf children compared; activity toys, vehicles, clothing, and places accounted for another 19%. The biggest difference between groups was that the deaf children tended to highlight similarities between body parts, whereas the hearing children highlighted similarities most commonly between people and animals. Nonetheless, there were significant correlations between the different types of objects that the deaf children conveyed in their gestures and those that the hearing children conveyed (1) in gesture in gesture+speech combinations without like (rows 8 and 3 in Table 2, Spearman’s rho=.40, p<.01), (2) in speech in gesture+speech combinations without like (rows 8 and 2, Spearman’s rho=.37, p<.01), and (3) in speech in similarity expressions with like (rows 8 and 1, Spearman’s rho=.44, p<.01).

Types of similarity relations deaf children convey in gesture+gesture without like

Taken together, these findings show that the deaf children not only produced comparisons at rates comparable to hearing children, but also expressed similarity relations between comparable sets of objects. However, unlike hearing children, the deaf children did not have access to an explicit word for comparison—namely, a gesture for like. If learning and using like is instrumental in expressing focused similarity relations, then the deaf children ought not produce single-feature comparisons between objects from different categories, that is, the focused similarity comparisons found in the hearing children’s combinations with like. They should instead produce only the multiple-feature comparisons between objects from the same superordinate category, that is, the global and relatively simple similarity comparisons found in the hearing children’s gesture+speech combinations without like. If, on the other hand, access to an explicit word for comparison is not instrumental in producing the more focused similarity relations, then the deaf children should be able to produce the full range of similarity comparisons found in the hearing children, including the focused comparisons between objects that are from different categories and that share only one feature, the type of comparison found in the hearing children’s repertoires after 30 months.

Figure 3C shows the proportion of similarity comparisons that the deaf children produced in gesture across all observation sessions, classified according to whether the objects compared belonged to the same or different superordinate categories. Over 70% of the similarity comparisons involved objects from the same superordinate category and thus were comparable to the similarity comparisons produced by hearing children before 30 months, the age at which many of the children began to learn the word like (cf. Figures 3A and 3B).

Turning next to the degree of feature overlap, we see a similar pattern. Figure 4C displays the proportion of similarity comparisons that the deaf children produced in gesture+gesture across the observation sessions, classified according to the degree of feature overlap (single feature vs. multiple features). Comparisons based on multiple features accounted for 88% of the similarity comparisons that the deaf children produced. Comparisons based on a single feature were quite rare; indeed only two of the four deaf children (Abe and David) produced 16 instances of these targeted comparisons, and color was always the dimension on which the comparison was based (e.g., POINT AT RED FLOWER+POINT AT RED TRUCK). Again, this pattern resembles similarity comparisons produced by hearing children before 30 months, the age at which many of the children learned the word like (cf., Figures 4A and 4B). Thus, even though the deaf children were able to convey similarity relations in their spontaneous gestures, the majority of their comparisons were limited in scope, involving objects that were from the same superordinate category and that shared multiple features.

These findings are particularly interesting because the hearing parents of the deaf children did produce instances of focused similarity comparisons in the spontaneous gestures that they produced while interacting with their children. Many of the comparisons that the hearing parents produced in gesture highlighted similarities between objects from different superordinate categories and were based on a single feature (typically the color of the objects). Across all observation sessions, David’s mother produced a total of 15 gesture+gesture combinations conveying similarity; more than half of these comparisons were based on a single feature (i.e., color) and 75% involved objects that belonged to different superordinate categories (e.g., POINT TO BROWN RUG+POINT TO BROWN COOKIE). Abe’s and Marvin’s mothers each produced 6 similarity comparisons in their gesture+gesture combinations, and 50% of their comparisons involved objects from different categories and were based on a single feature (the color of the objects). Kathy’s mother was the exception; she produced no similarity comparisons at all in her gestures (see Footnote 9).

Thus, three of the four deaf children received adult models for focused similarity comparisons. Yet in spite of this input, only two of the three children expressed this type of comparison, and the frequency with which they did so was markedly lower than the frequency with which the hearing children produced focused similarity comparisons after they learned the word like. Thus, although not having a term for like does not preclude expressing focused similarity comparisons, it does seem to dramatically decrease their frequency.

DISCUSSION

Similarity plays a key role in conceptual development, as it constitutes the child’s first step in aligning two different representations within a unified frame (Gentner & Namy, 1999; Gentner & Rattermann, 1991). As such, the expression of relations between objects based on commonalities in their features (e.g., an orange is round like the sun) stands out as an important linguistic achievement—one that is likely to serve as the stepping-stone for the development of categorization (Landau, Smith, & Jones, 1988; Smith, 1983) and more complex metaphorical and analogical abilities (Gentner, 1988, 2003). Prior work (Gentner & Christie, 2008; Loewenstein & Gentner, 2005) has suggested a facilitating effect for language in learning to attend to relational commonalities between objects.

In this paper, we investigated whether language has an effect on children’s early similarity comparisons. A language model such as English offers children the lexical item, like, that can be used to mark an utterance as a similarity expression. Our findings suggest that this lexical item is not necessary for children to express similarity relationships—deaf children who are not exposed to usable linguistic input can produce similarity comparisons in their gesture sentences at rates comparable to those of hearing children exposed to spoken English.

However, having an explicit term for similarity may influence which types of similarities children express. In our findings, the kinds of similarity comparisons that the deaf children routinely produced were more limited in scope than the similarity comparisons produced by hearing children after learning the word like. In fact, the similarity comparisons that the deaf children produced in their gesture+gesture combinations showed striking parallels to the early similarity comparisons that the hearing children produced in their gesture+speech combinations without the word like: Both involved comparisons between the same types of objects (e.g., animals, people, food, body parts) and occurred at comparable rates. Moreover, consistent with earlier work (Gentner & Rattermann, 1991; Kemler, 1982; Smith, 1983), these early similarity comparisons were holistic and global, most often highlighting strong overall similarity between objects that belong to the same superordinate category (POINT TO CAT+POINT TO TIGER; cat+POINT TO TIGER).

However, the hearing children went on to learn the word like and incorporated it into their similarity expressions. At that point, the children’s similarity expressions became more subtle. After 30 months, a majority (70%) of the hearing children’s similarity comparisons were between objects that belonged to different superordinate categories and that focused on a single dimension (brown crayon is brown like my hair). Although the deaf children did produce instances of this more focused similarity comparison (that is, they compared objects that were from different categories and that shared only one feature in their gestures), only two of the four deaf children produced this type of comparison and they did so infrequently. Our data thus suggest that having a word such as like, which explicitly marks similarity, may make it easier for children to routinely produce similarity comparisons involving objects that share only a single feature.

In contrast to the deaf children who were creating a language with their hands to convey similarities, the hearing children were learning to convey similarity expressions from a language model provided by their caregivers. Nonetheless, they too produced gestures and those gestures seemed to serve as the supporting context for the children’s early ‘x is like y’ constructions. The hearing children initially expressed one term of a similarity comparison in speech and used gesture to convey the other term (e.g., like a sheep+POINT TO COW). Even when children expressed both domains in speech, they often used ambiguous language, relying on gesture to clarify the referent (e.g., This like earl grey+POINT TO COFFEE). Thus, in the early stages of language learning, hearing children convey the skeletal structure of the ‘x is like y’ construction in speech and use gesture to flesh out the skeleton.

Using gesture to flesh out linguistic constructions is not unique to early similarity comparisons. Recruiting gesture to clarify ambiguous speech has also been observed in early constructions involving thematic relations (Goldin-Meadow & Butcher, 2003; Iverson & Goldin-Meadow, 2005; Özçalışkan & Goldin-Meadow, 2005, 2009) and later metaphorical mappings (Özçalışkan, 2007). For example, when 3- to 4-year-old children are questioned about metaphorical mappings (e.g., How do ideas pass through the mind?), they produce referentially ambiguous constructions in speech and use gesture to clarify the domain of comparison (e.g., like this+CHILD JUMPS UP AND DOWN TO INDICATE IDEAS BOUNCING IN THE MIND). By age 5;0, children’s verbal explanations are more elaborate, but they still involve gesture, although the gestures are now semantically integrated into the response (e.g., Time drips by means it goes really slowly like that+CHILD MOVES FINGER DOWNWARD IN SMALL PAUSES LIKE DRIPPING WATER; Özçalışkan, 2005, 2007). Thus, gesture previews the child’s next step into a more complete linguistic construction in these later metaphorical mappings, just as it did in the early similarity expressions produced by the hearing children in our study.

Nonetheless, as noted earlier, the facilitating effect of gesture seems to be limited—the more focused comparisons highlighting similarities across objects that share a single feature became dominant in the hearing children’s speech only after they acquired the word like. Moreover, only two of the four deaf children in our study produced these more focused comparisons in their gestures, and the number of times they did so was small and the scope limited (typically involving only color). Thus, although having an explicit term for comparison is clearly not necessary for children to express similarity comparisons, it does seem to affect the rate at which certain types of similarity comparisons (comparisons between objects that are from different categories and share only a single feature) are expressed.

The current findings do not tell us about which similarities children notice—only which ones they choose to express. It is possible that not having a word such as like simply makes it harder to communicate about the more subtle types of similarity. But we suggest that even if the difference between children with and without an explicit term for similarity initially involves only how often they express focused similarity comparisons in their communications, eventually this difference in routine communication could come to influence how likely the children are to notice such similarities and use them in reasoning tasks. Lacking an easy way to convey nonobvious comparisons and to initiate conversation concerning such focused similarities, deaf children may not dwell on them as much in their own thoughts as hearing children do.

Evidence in support of this possibility comes from the finding that having an explicit same-different marker facilitates children’s attention to, and ability to reason about, relational commonalities between objects (Christie & Gentner, 2008; Gentner, 2003). For example, 3- to 5-year-old human children (Gentner & Christie, 2008; Gentner & Rattermann, 1991; Loewenstein & Gentner, 2005), as well as symbol-trained nonhuman primates (Premack, 1971; Thompson et al., 1997), solve tasks that involve noticing relational commonalities among objects more easily when given relational symbols than when not given these symbols. Thus, it is possible that having an explicit term for comparison affects the ease with which matches based purely on similar relational structure are made.

Given the observational nature of our data, we cannot attribute a causal role to language—in particular, to having an explicit term for comparison—in fostering children’s similarity comparisons. But our findings are suggestive and highlight the need for future work that manipulates children’s language for comparison and explores the effect of this manipulation on the similarity comparisons children express and use in reasoning tasks.

In addition to fostering similarity comparisons based on a single feature, having an explicit term for comparison may have long-term benefits. Even at 34 months of age, the hearing children’s ‘x is like y’ constructions were restricted to similarity comparisons based on shared perceptual features rather than comparisons based on analogy or metaphor (e.g., a stem is a straw for flowers), providing support for the hypothesis that featural similarity comparisons precede and perhaps are precursors to more abstract mapping abilities (Gentner, 1988, 2003). Will children who produce similarity comparisons using the word like at an early age be among the first to produce analogies or metaphors later on, thus providing support for the idea that similarity comparison bootstraps children into more abstract cognitive abilities? If so, then it is an open question as to whether the deaf children will ever be able to produce these more abstract types of similarity relations in their homemade gesture systems, unless they somehow can import or invent an explicit term for similarity.

In sum, children find the overall similarity between objects sufficiently noteworthy to express it in their spontaneous communications. If children are not exposed to a usable conventional language model, they express similarity relations using gesture, the only communicative vehicle available to them. Even if children are exposed to a conventional language, they manage to express similarity relations before they have acquired the linguistic tools to do so (i.e., before acquiring the word like) and they do it by integrating gesture into their utterances. These early similarity expressions without like precede and set the stage for similarity expressions with like. Perceiving and talking about similarity thus appears to be a robust aspect of early human cognition and communication. However, it is only after the acquisition of the word like that children routinely produce single-feature comparisons between objects from different superordinate categories, suggesting that conventional terms for comparison may make it easier for children to routinely express the full range of similarity comparisons. Language about similarity can thus play a role in how often children comment on, and perhaps notice, more abstract types of similarity.

Acknowledgments

We thank K. Schonwald and J. Voigt for their administrative and technical help; K. Brasky, E. Croft, K. Duboc, Becky Free, J. Griffin, S. Gripshover, C. Meanwell, E. Mellum, M. Nikolas, J. Oberholtzer, L. Rissman, L. Schneidman, B. Seibel, K. Uttich, and J. Wallman for help in data collection and transcription; and Roger Bakeman for his help in statistical analysis. This work was supported by R01DC00491 and P01HD40605 to Goldin-Meadow and SBE-0541957 to Gentner. We also thank Rebecca Gomez and the three anonymous reviewers for their helpful comments on an earlier version of the manuscript, which improved it in significant ways.

Footnotes

1. The deaf children in our sample were, on average, one year older than the hearing children when they entered the study. Our decision to use a younger group of hearing children as a comparative base grew out of work by Morford and Goldin-Meadow (1997) showing a year’s delay in the onset of communication about displaced objects and events in the same four deaf children. We guessed that the deaf children might also be delayed in other aspects of their communicative development and therefore chose to observe younger hearing children.

2. For the deaf children, a pause was defined as either a long temporal interruption between two gestures, or relaxation of the hand after a gesture or a series of gestures (see Goldin-Meadow & Mylander, 1984, for details).

3. The hearing children also produced a small number of gesture+speech combinations that appeared to be labeling errors (e.g., ball+POINT TO RIBBON; five+POINT TO NUMBER 3). These combinations accounted for 10% of the gesture+speech combinations that the hearing children produced and are not included in Figure 1.

4. We examined the skewness of the distribution separately for the deaf and hearing children. The ratio of skewness to standard error of skewness was less than 1.96, indicating no significant skewness in the data. We report only one t-value because the data for similarity and thematic relations were reciprocal and thus perfectly correlated.
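Written out, the criterion is the one below; approximating the standard error of skewness as the square root of 6/N is our shorthand, not a formula given in the footnote:

```latex
\left|\frac{\text{skewness}}{SE_{\text{skew}}}\right| < 1.96, \qquad SE_{\text{skew}} \approx \sqrt{6/N}
```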

5. The word like became polysemous at 26 months, functioning not only as a comparison term (e.g., ice-cream cone is like mushroom) but also as a verb (e.g., I like ice-cream). Beginning at 30 months, a few children used like as a discourse marker as well. Here we focus exclusively on the uses of like as a comparison term.

6. We examined children’s percent mention of different kinds of objects separately for similarity expressions in speech with like and similarity expressions in gesture+speech without like (in speech and in gesture), and found skewed distributions throughout (standard skews ranged between 2.25 and 4.11). We therefore used Spearman’s rho rather than Pearson’s r to assess correlations between variables.

7. Of the 40 children in our sample, 16 never produced errors of this type at any of the six observation sessions; 13 did not show either consistent decreases or increases in their production of these errors; 10 decreased their errors from M=0.6 (SD=1.58) at 14 months to none at 34 months; and one increased her errors from none at 14 months to 1 at 34 months.

8. A similar delay of about a year has been reported for the onset of displaced reference (i.e., information that is spatially and temporally displaced from the location of speaker and listener) in these deaf children’s homesign systems, compared to the onset of displaced reference in hearing children’s speech (Morford & Goldin-Meadow, 1997).

9. The deaf children in our study typically directed their attention to the hand movements of their communication partners, as do hearing children of language-learning age (Yoshida & Smith, 2008). As a result, the deaf children rarely attended to their parents’ lip movements unless explicitly instructed to do so (which did not happen often); the parents’ spontaneous gestures were therefore the most likely source of input for the deaf children’s gestures.

References

  • Billow RM. A cognitive developmental study of metaphor comprehension. Developmental Psychology. 1975;11(4):415–423.
  • Billow RM. Observing spontaneous metaphor in children. Journal of Experimental Child Psychology. 1981;31:430–445.
  • Chukovsky K. From two to five. Berkeley: University of California Press; 1968.
  • Clark EV. What is in a word? On the child’s acquisition of semantics in his first language. In: Moore TE, editor. Cognitive development and the acquisition of language. New York: Academic Press; 1973. pp. 65–110.
  • Elbers L. New names from old words: related aspects of children’s metaphors and word compounds. Journal of Child Language. 1988;15:591–617.
  • Epstein RL, Gamlin PJ. Young children’s comprehension of simple and complex metaphors presented in pictures and words. Metaphor and Symbolic Activity. 1994;9(3):179–191.
  • Feldman H, Goldin-Meadow S, Gleitman L. Beyond Herodotus: The creation of language by linguistically deprived deaf children. In: Lock A, editor. Action, symbol and gesture: The emergence of language. New York: Academic; 1978. pp. 351–414.
  • Gardner H, Kircher M, Winner E, Perkins D. Children’s metaphoric productions and preferences. Journal of Child Language. 1975;2:125–141.
  • Gentner D. Structure-mapping: A theoretical framework for analogy. Cognitive Science. 1983;7:155–170.
  • Gentner D. Metaphor as structure mapping: the relational shift. Child Development. 1988;59:47–59.
  • Gentner D. Why we’re so smart. In: Gentner D, Goldin-Meadow S, editors. Language in Mind: Advances in the study of language and thought. MIT Press; 2003. pp. 195–235.
  • Gentner D, Christie S. Relational language supports relational cognition in humans and apes. Behavioral and Brain Sciences. 2008;31:137–183.
  • Gentner D, Namy L. Comparison in the development of categories. Cognitive Development. 1999;14:487–513.
  • Gentner D, Rattermann MJ. Language and the career of similarity. In: Gelman SA, Byrnes JP, editors. Perspectives on language and thought: interrelations in development. New York: Cambridge University Press; 1991. pp. 225–277.
  • Gleitman LR, Gleitman H, Miller C, Ostrin R. Similar, and similar concepts. Cognition. 1996;58(3):321–376.
  • Goldin-Meadow S. The resilience of language. NY: Psychology Press; 2003.
  • Goldin-Meadow S, Butcher C. Pointing toward two-word speech in young children. In: Kita S, editor. Pointing: Where language, culture, and cognition meet. N.J.: Erlbaum Associates; 2003. pp. 85–107.
  • Goldin-Meadow S, Butcher C, Mylander C, Dodge M. Nouns and verbs in a self-styled gesture system: What’s in a name? Cognitive Psychology. 1994;27:259–319.
  • Goldin-Meadow S, Mylander C. Gestural communication in deaf children: noneffect of parental influence on language development. Science. 1983;221:372–374.
  • Goldin-Meadow S, Mylander C. Gestural communication in deaf children: the effects and noneffects of parental input on early language development. Monographs of the Society for Research in Child Development. 1984;49(3–4):207.
  • Goldin-Meadow S, Mylander C. Spontaneous sign systems created by deaf children in two cultures. Nature. 1998;391:279–281.
  • Gopnik A, Meltzoff AN. Categorization and naming: Basic level sorting in eighteen-month-olds and its relation to language. Child Development. 1992;63:1091–1103.
  • Herrnstein RJ, Loveland DH, Cable C. Natural concepts in pigeons. Journal of Experimental Psychology: Animal Behavior Processes. 1976;2(4):285–302.
  • Iverson JM, Goldin-Meadow S. Gesture paves the way for language development. Psychological Science. 2005;16:368–371.
  • Kemler DG. Classification in young and retarded children: The primacy of overall similarity relations. Child Development. 1982;53:768–779.
  • Landau B, Smith LB, Jones SS. The importance of shape in early lexical learning. Cognitive Development. 1988;3:299–321.
  • Loewenstein J, Gentner D. Relational language and the development of relational mapping. Cognitive Psychology. 2005;50:315–353.
  • Mendelsohn E, Robinson S, Gardner H, Winner E. Are preschoolers’ renamings intentional category violations? Developmental Psychology. 1984;20(2):187–192.
  • Morford JP, Goldin-Meadow S. From here and now to there and then: the development of displaced reference in homesign and English. Child Development. 1997;68(3):420–435.
  • Oakes LM, Madole KL. The future of infant categorization research: A process-oriented approach. Child Development. 2000;71(1):119–126.
  • Oden DL, Thompson RK, Premack D. Infant chimpanzees spontaneously perceive both concrete and abstract same/different relations. Child Development. 1990;61:621–631.
  • Ortony A. Beyond literal similarity. Psychological Review. 1979;86:161–180.
  • Özçalışkan S. On learning to draw the distinction between physical and metaphorical motion: Is metaphor an early emerging cognitive and linguistic capacity? Journal of Child Language. 2005;32(2):291–318.
  • Özçalışkan S. Metaphors we move by: Children’s developing understanding of metaphorical motion events in typologically contrastive languages. Metaphor and Symbol. 2007;22(2):147–168.
  • Özçalışkan S, Goldin-Meadow S. Gesture is at the cutting edge of early language development. Cognition. 2005;96:B101–B113.
  • Özçalışkan S, Goldin-Meadow S. X IS LIKE Y: The emergence of similarity mappings in children’s early speech and gesture. In: Kristianssen G, Achard M, Dirven R, Ruiz de Mendoza F, editors. Cognitive Linguistics: Foundations and fields of application. Mouton de Gruyter; 2006. pp. 229–262.
  • Özçalışkan S, Goldin-Meadow S. When gesture-speech combinations do and do not index linguistic change. Language and Cognitive Processes. 2009;24(2):190–217.
  • Özçalışkan S, Goldin-Meadow S, Gentner D. Do parents provide a helping hand for children’s early similarity comparisons? 2009. Manuscript in preparation.
  • Premack D. Language in chimpanzees? Science. 1971;172:808–822.
  • Samuelson LK, Smith LB. Grounding development in cognitive processes. Child Development. 2000;71:98–106.
  • Smith LB. Development of classification: The use of similarity and dimensional relations. Journal of Experimental Child Psychology. 1983;36:150–178.
  • Sugarman S. Children’s early thought: Developments in classification. Cambridge: Cambridge University Press; 1983.
  • Thompson RKR, Oden DL, Boysen ST. Language-naive chimpanzees (Pan troglodytes) judge relations between relations in a conceptual matching-to-sample task. Journal of Experimental Psychology: Animal Behavior Processes. 1997;23(1):31–43.
  • Tversky A. Features of similarity. Psychological Review. 1977;84(4):327–352.
  • Vosniadou S, Ortony A. The emergence of the literal-metaphorical-anomalous distinction in young children. Child Development. 1983;54:154–161.
  • Winner E. New names for old things: the emergence of metaphoric language. Journal of Child Language. 1979;6:469–491.
  • Winner E, McCarthy M, Gardner H. The ontogenesis of metaphor. In: Honeck RP, Hoffman R, editors. Cognition and figurative language. Hillsdale, New Jersey: Lawrence Erlbaum Associates; 1980. pp. 341–361.
  • Yoshida H, Smith LB. What’s in view for toddlers? Using a head camera to study visual experience. Infancy. 2008;13:229–248.