A lot can be learned about our social world from the faces of others. Faces provide information about age, race, gender, physical health, emotional state, and focus of attention, giving observers a window into the mental states of other human beings. During the first year after birth, infants begin to extract a large amount of information from faces: they begin to recognize identities (Pascalis, De Haan, Nelson, & De Schonen, 1998), recognize and prefer faces from their own race (Kelly et al., 2005), detect affect (Cohn & Tronick, 1983; Tronick, 1989), and follow gaze (Corkum & Moore, 1998; Scaife & Bruner, 1975). However, these sophisticated abilities are of little use if infants don’t look at faces to begin with. Put another way, to extract information from faces, infants must first attend to them.
Although there is a large literature on the origins and development of infants’ face representations during infancy, far less research has examined the behavior of infants outside of controlled laboratory settings. In particular, both the extent to which infants attend to faces when other objects are present—as in most real-world situations—and the extent to which this behavior changes across development are still largely unknown. The reasons for this gap in the literature may be both methodological and theoretical. Methodologically, the standard looking-time paradigms used in infant research typically produce only qualitative evidence and are not applicable to older populations; hence it is difficult to design experiments whose results can be compared across wide age ranges. Theoretically, many researchers have been interested primarily in the question of innateness: whether human infants are predisposed to treat human faces as “special” relative to other objects and whether the representations underlying these judgments are qualitatively similar to those used by adults.
Our goal in the current study is to address the resulting question—how does infants’ attention to faces change across development—by characterizing the development of infants’ attention to faces in complex, noisy settings. To motivate our work, we begin by briefly reviewing two literatures: first, face perception in infancy and second, the abilities of adults to detect faces in complex scenes.
A large body of evidence suggests that newborn infants have a generalized bias to attend to faces and face-like stimuli (Cassia, Turati, & Simion, 2004; Farroni et al., 2005; Johnson, Dziurawiec, Ellis, & Morton, 1991; Simion, Macchi Cassia, Turati, & Valenza, 2001). This bias may result from the application of specific face-recognition mechanisms (Farroni et al., 2005; Johnson et al., 1991; Morton & Johnson, 1991) or from more general perceptual preferences (Cassia et al., 2004; Nelson, 2001; Simion et al., 2001). However, experiments in this literature establish neither the degree of this preference—what portion of the time infants will typically look at faces—nor its robustness—to what extent infants prefer to look at faces in noisy, real-world situations.
Work on the later development of face processing has examined the selectivity of infants’ representations of faces. Much of this work has supported the perceptual narrowing view first proposed in studies of infants’ phonetic development (Kuhl, 2000; Kuhl, Tsao, & Liu, 2003). On this view, infants’ face representations become specific to the types of faces they see most often, as infants lose the ability to discriminate the faces of other races and other species during the period from 3 to 9 months (Kelly et al., 2007; Kelly et al., 2005; Pascalis, de Haan, & Nelson, 2002; Pascalis et al., 2005). For instance, 6-month-olds discriminated pairs of monkey faces as well as pairs of human faces, but 9-month-olds provided no evidence of discriminating pairs of monkey faces (Pascalis et al., 2002); the ability to make these discriminations was preserved via repeated exposure to monkey faces (Pascalis et al., 2005). Similarly, baby monkeys reared with no exposure to faces of any species maintained an ability to discriminate both monkey and human faces, but upon selective exposure to monkey faces, discrimination of human faces suffered, and vice versa (Sugita, 2008).
Other research has investigated whether infants’ face representations exhibit the same behavioral signatures as adult face processing. For instance, 3-month-olds (but not 1-month-olds) were found to show prototype effects, suggesting that by 3 months, infants represent faces within a “face-space” that shows some similarities to the perceptual space of adult face representations (de Haan, Johnson, Maurer, & Perrett, 2001). Event-related potential (ERP) research has examined differential responses to inverted as opposed to upright faces, a signature of face-specific processing in adults, and suggests that an inversion effect is present in some ERP components by 6 months (de Haan, Pascalis, & Johnson, 2002; Halit, Csibra, Volein, & Johnson, 2004; Halit, de Haan, & Johnson, 2003). All of these results shed light on the format of infants’ representations of faces, but they do not address whether infants choose to (or are able to) attend to faces in the real world.
In adults, the question of attention to faces has primarily been investigated via visual search tasks. This literature suggests that faces are relatively easy to identify, even in crowded displays (Hershler & Hochstein, 2005; Lewis & Edmonds, 2003; VanRullen, 2006). For example, Lewis and Edmonds (2003) found that search for faces in grids of scrambled non-face stimuli was efficient, with shallow search slopes (search latencies that did not increase much as the number of distractors increased), suggesting that this search relied on a “parallel,” pre-attentive component. In follow-up experiments, they found that inverting the luminance of faces (which makes face identification quite challenging) increased not only search latencies but also search slopes. However, VanRullen (2006) showed that pop-out effects could also be found for cars when distractor stimuli were properly controlled, suggesting that pop-out effects for faces may be driven by their lower-level features (such as phase information) rather than their social relevance. Regardless of whether the relevant cues are low-level perceptual cues or higher-level, face-specific cues, if infants represent faces in a scheme qualitatively similar to that of adults, infants should show rapid and effortless detection of faces even in displays with multiple distractors. However, because one cannot give explicit verbal instructions to infants, it is impossible to compare directly how infants and adults perform in explicit visual search tasks.
In the current study, we measured the extent to which faces within complex, dynamic scenes draw attention (as measured by eye movements) in a task with no explicit instructions. We presented 3-, 6-, and 9-month-old infants and adults with a series of 4-second clips from an animated children’s cartoon (A Charlie Brown Christmas; see Figure 1, top panels) and used a corneal-reflection eye-tracker to measure where observers were looking during the video clips.
Figure 1. Example stimuli and models (averaged across time) for three different 4-s clips from A Charlie Brown Christmas. The first row depicts time-averaged stimuli. The second row shows the assignment of predictive probability in the face model for each of these clips.
By using an implicit free-viewing task instead of an explicit search task, we could eliminate reliance on interpretive assumptions linking looking times after habituation/familiarization to preference (Haith, 1998) and directly compare the distribution of attention across a range of age groups, including adults. Nevertheless, comparing the viewing behavior of young infants to that of adults can be problematic: developments in visual acuity (Mayer & Dobson, 1982) might account for changes in infants’ tendencies to fixate on faces. To control for this possibility, we showed a separate group of adult participants a version of our stimuli that had been blurred to simulate the contrast sensitivity function of a 3-month-old.
The genesis of this study was a previous experiment (Johnson, Davidow, Hall-Haro, & Frank, 2008) in which the Charlie Brown cartoon was used as an engaging distractor stimulus. Because children were so drawn to the Charlie Brown stimulus, we were able to gather a large amount of data on their fixation patterns, and our anecdotal observation of the youngest infants’ fixations suggested that they were distributed far more broadly over the movie than the fixations of older observers. One important goal of the current study is to quantify this observation and examine its developmental time-course. Despite the schematic, cartoon nature of the faces in Charlie Brown, our stimulus provides a visual and linguistic environment that is rich in social content, enjoyable and motivating to our infant participants, and far more complex than those used in previous face-perception experiments. Thus, infants’ preferences in viewing this stimulus will give some insight into their attention to faces in real-world scenes. Of course, no laboratory task perfectly captures the structure of real experience, and we will discuss how the details of our stimulus (a cartoon movie on a small screen) limit our ability to generalize these results.