The present research investigated whether 13.5-month-old infants would attribute to an actor a disposition to perform a recurring action, and would then use this information to predict which of two new objects—one that could be used to perform the action and one that could not—the actor would grasp next. During familiarization, the infants watched an actor slide various objects forward and backward on an apparatus floor. During test, the infants saw two new identical objects placed side by side: one stood inside a short frame that left little room for sliding; the other stood inside a longer frame that left ample room for sliding. The infants who saw the actor grasp the object inside the short frame looked reliably longer than those who saw the actor grasp the object inside the long frame. These results, together with control results from a lifting condition, provide evidence that by 13.5 months, infants can attribute to an actor a disposition to perform a particular action.
Fourteen-month-old infants saw an object hidden inside a container and were removed from the disappearance locale for 24 hr. Upon their return, they searched correctly for the hidden object, demonstrating object permanence and long-term memory. Control infants who saw no disappearance did not search. In Experiment 2, infants returned to see the container either in the same or a different room. Performance by room-change infants dropped to baseline levels, suggesting that infant search for hidden objects is guided by numerical identity. Infants seek the individual object that disappeared, which exists in its original location, not in a different room. A new behavior, identity-verifying search, was discovered and quantified. Implications are drawn for memory, spatial understanding, object permanence, and object identity.
The present research examined whether 12.5-month-old infants take into account what objects an agent knows to be present in a scene when interpreting the agent’s actions. In two experiments, the infants watched a female human agent repeatedly reach for and grasp object-A as opposed to object-B on an apparatus floor. Object-B was either (1) visible to the agent through a transparent screen; (2) hidden from the agent (but not the infants) by an opaque screen; or (3) placed by the agent herself behind the opaque screen, so that even though she could no longer see object-B, she knew of its presence there. The infants interpreted the agent’s repeated actions toward object-A as revealing a preference for object-A over object-B only when she could see object-B (1) or was aware of its presence in the scene (3). These results indicate that, when watching an agent act on objects in a scene, 12.5-month-old infants keep track of the agent’s representation of the physical setting in which these actions occur. If the agent’s representation is incomplete, because the agent is ignorant about some aspect of the setting, infants use the agent’s representation, rather than their own more complete representation, to interpret the agent’s actions.
Do 18-month-olds understand that an agent’s false belief can be corrected by an appropriate, though not an inappropriate, communication? In Experiment 1, infants watched a series of events involving two agents, a ball, and two containers: a box and a cup. To start, agent1 played with the ball and then hid it in the box, while agent2 looked on. Next, in agent1’s absence, agent2 moved the ball from the box to the cup. When agent1 returned, agent2 told her “The ball is in the cup!” (informative-intervention condition) or “I like the cup!” (uninformative-intervention condition). During test, agent1 reached for either the box (box event) or the cup (cup event). In the informative-intervention condition, infants who saw the box event looked reliably longer than those who saw the cup event; in the uninformative-intervention condition, the reverse pattern was found. These results suggest that infants expected agent1’s false belief about the ball’s location to be corrected when she was told “The ball is in the cup!”, but not “I like the cup!”. In Experiment 2, agent2 simply pointed to the ball’s new location, and infants again expected agent1’s false belief to be corrected. These and control results provide additional evidence that infants in the second year of life can attribute false beliefs to agents. In addition, the results suggest that by 18 months of age infants expect agents’ false beliefs to be corrected by relevant communications involving words or gestures.
Twelve-month-old infants attribute goals to both familiar, human agents and unfamiliar, non-human agents. They also attribute goal-directedness to both familiar actions and unfamiliar ones. Four conditions examined the information 12-month-olds use to determine which actions of an unfamiliar agent are goal-directed. Infants who witnessed the agent interact contingently with a human confederate encoded the agent's actions as goal-directed; infants who saw a human confederate model an intentional stance toward the agent without the agent's participation did not. Infants who witnessed the agent align itself with one of two potential targets before approaching that target encoded the approach as goal-directed; infants who did not observe the self-alignment did not encode the approach as goal-directed. A possible common underpinning of these two seemingly independent sources of information is discussed.
Recent research has shown that infants as young as 13 months can attribute false beliefs to agents, suggesting that the psychological-reasoning subsystem necessary for attributing reality-incongruent informational states (SS2) is operational in infancy. The present research asked whether 18-month-olds’ false-belief reasoning extends to false beliefs about object identity. Infants watched events involving an agent and two toy penguins; one penguin could be disassembled (2-piece penguin) and the other could not (1-piece penguin). Infants realized that outdated contextual information could lead the agent to falsely believe she was facing the 1-piece rather than the 2-piece penguin, suggesting that 18-month-olds can attribute false beliefs about the identity of objects and providing new evidence for SS2 reasoning in the second year of life.
Two experiments investigated 18-month-olds’ understanding of the link between visual perception and emotion. Infants watched an adult perform actions on objects. An Emoter then expressed Anger or Neutral affect toward the adult in response to her actions. Subsequently, infants were given 20 s to interact with each object. In Experiment 1, the Emoter faced infants with a neutral expression during each 20-s response period, but looked either at a magazine or at the infant. In Experiment 2, the Emoter faced infants with a neutral expression and her eyes were either open or closed. When the Emoter visually monitored infants’ actions, they regulated their object-directed behavior based on their memory of her affect. However, if the previously angry Emoter read a magazine (Exp. 1) or closed her eyes (Exp. 2), infants were not governed by her prior emotion. Infants behaved as if they expected the Emoter to get angry only if she could see them performing the actions. These findings suggest that infants appreciate how people's visual experiences influence their emotions and use this information to regulate their own behavior.
social cognition; social referencing; gaze following; imitation; self-regulation
Some researchers have suggested that infants’ ability to reason about goals develops as a result of their experiences with human agents and is then gradually extended to other agents. Other researchers have proposed that goal attribution is rooted in a specialized system of reasoning that is activated whenever infants encounter entities with appropriate features (e.g., self-propulsion). The first view predicts that young infants should attribute goals to human but not other agents; the second view predicts that young infants should attribute goals to both human and nonhuman agents. The present research revealed that 5-month-old infants (the youngest found thus far to attribute goals to human agents) also attribute goals to nonhuman agents. In two experiments, infants interpreted the actions of a self-propelled box as goal-directed. These results provide support for the view that from an early age, infants attribute goals to any entity they identify as an agent.
Reports that infants in the second year of life can attribute false beliefs to others have all used a search paradigm in which an agent with a false belief about an object’s location searches for the object. The present research asked whether 18-month-olds would still demonstrate false-belief understanding when tested with a novel non-search paradigm. An experimenter shook an object, demonstrating that it rattled, and then asked an agent, “Can you do it?” In response to this prompt, the agent selected one of two test objects. Infants realized that the agent could be led through inference (Experiment 1) or memory (Experiment 2) to hold a false belief about which of the two test objects rattled. These results suggest that 18-month-olds can attribute false beliefs about non-obvious properties to others, and can do so in a non-search paradigm. These and additional results (Experiment 3) help address several alternative interpretations of false-belief findings with infants.
Two experiments examined whether 18-month-olds learn from emotions directed to a third party. Infants watched an adult perform actions on objects, and an Emoter expressed Anger or Neutral affect toward the adult in response to her actions. The Emoter then became neutral and infants were given access to the objects. Infants’ actions were influenced by their memory of the Emoter’s affect. Moreover, infants’ actions varied as a function of whether they were currently in the Emoter’s visual field. If the previously angry Emoter was absent (Experiment 1) or turned her back (Experiment 2), infants did not use the prior emotion to regulate their behavior. Infants learn from emotional eavesdropping, and their subsequent behavior depends on the Emoter’s orientation toward them.
Psychological scientists use statistical information to determine the workings of fellow humans. We argue that young children do so as well. In a few years, children progress from viewing human actions as intentional and goal-directed to reasoning about the psychological causes underlying such actions. Here we show that preschoolers and 20-month-old infants can use statistical information, namely a violation of random sampling, to infer that an agent is expressing a preference for one object over another. Children saw a person remove 5 items of one type from a container of objects. Preschoolers and infants inferred a preference for that type of object only when there was a mismatch between the sample and the population. Mere outcome consistency, time spent with the objects, and positive attention toward them did not lead children to infer a preference. The findings provide an important demonstration of how statistical learning could underpin the rapid acquisition of early psychological knowledge.
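The sampling logic described in this abstract can be illustrated with a toy hypergeometric calculation. This is a sketch only; the container sizes and proportions below are assumed values for illustration, not the study's actual stimuli. The point is that drawing 5 items of one type is extremely unlikely under random sampling when that type is rare in the population, so such a sample signals a preference:

```python
from math import comb

def prob_all_one_type(k, type_count, total):
    """P(all k randomly drawn items are of one type), sampling without replacement."""
    if type_count < k:
        return 0.0
    return comb(type_count, k) / comb(total, k)

# Illustrative assumption: a container of 100 toys, 18 of the target type.
p_minority = prob_all_one_type(5, 18, 100)  # sample mismatches the population
p_majority = prob_all_one_type(5, 82, 100)  # sample matches the population

# When the target type is rare, an all-target sample of 5 is far less likely
# under random drawing, licensing the inference of a preference.
```

On these assumed numbers, the all-minority sample has a probability of roughly 0.0001 under chance, versus about 0.36 for the all-majority sample, which mirrors the abstract's sample-population mismatch logic.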
Recent investigations of early psychological understanding have revealed three key findings. First, young infants attribute goals and dispositions to any entity they perceive as an agent, whether human or non-human. Second, when interpreting an agent’s actions in a scene, young infants take into account the agent’s representation of the scene, even if this representation is less complete than their own. Third, at least by the second year of life, infants recognize that agents can hold false beliefs about a scene. Together, these findings support a system-based, mentalistic account of early psychological reasoning.
infant cognition; psychological understanding; mentalistic reasoning
Toddlers readily learn predictive relations between events (e.g., that event A predicts event B). However, they intervene on A to try to cause B only in a few contexts: When a dispositional agent initiates the event or when the event is described with causal language. The current studies look at whether toddlers’ failures are due merely to the difficulty of initiating interventions or to more general constraints on the kinds of events they represent as causal. Toddlers saw a block slide towards a base, but an occluder prevented them from seeing whether the block contacted the base; after the block disappeared behind the occluder, a toy connected to the base did or did not activate. We hypothesized that if toddlers construed the events as causal, they would be sensitive to the contact relations between the participants in the predictive event. In Experiment 1, the block either moved spontaneously (no dispositional agent) or emerged already in motion (a dispositional agent was potentially present). Toddlers were sensitive to the contact relations only when a dispositional agent was potentially present. Experiment 2 confirmed that toddlers inferred a hidden agent was present when the block emerged in motion. In Experiment 3, the block moved spontaneously, but the events were described either with non-causal (“here’s my block”) or causal (“the block can make it go”) language. Toddlers were sensitive to the contact relations only when given causal language. These findings suggest that dispositional agency and causal language facilitate toddlers’ ability to represent causal relationships.
There has been some debate about whether infants 10 months and younger can use featural information to individuate objects. The present research tested the hypothesis that negative results obtained with younger infants reflect limitations in information-processing capacities rather than an inability to individuate objects based on featural differences. Infants aged 9.5 months saw one object (i.e. a ball) or two objects (i.e. a box and a ball) emerge successively to opposite sides of an opaque occluder. Infants then saw a single ball either behind a transparent occluder or without an occluder. Only the infants who saw the ball behind the transparent occluder correctly judged that the one-ball display was inconsistent with the box–ball sequence. These results (a) suggest that infants categorize events involving opaque and transparent occluders as the same kind of physical situation (i.e. occlusion) and (b) support the notion that infants are more likely to give evidence of object individuation when they need to reason about one kind of event (i.e. occlusion) than when they must retrieve and compare categorically distinct events (i.e. occlusion and no-occlusion).
Object individuation; Infant cognition; Cognitive development
The associative learning account of how infants identify human motion rests on the assumption that this knowledge is derived from statistical regularities seen in the world. Yet, no catalog exists of what visual input infants receive of human motion, and of causal and self-propelled motion in particular. In this manuscript, we demonstrate that the frequency with which causal agency and self-propelled motion appear in the visual environment predicts infants’ understanding of these motions. In an observational study, an infant wearing a head-mounted camera saw people act as agents in causal events three times more often than he saw people engaged in self-propelled motion. Subsequent experiments with the habituation paradigm revealed that infants begin to generalize self-propulsion to agents in causal events between 10 and 14 months of age. However, infants cannot generalize causal agency to a self-propelled object at 14 or 18 months unless the object exhibits additional cues to animacy. The results are discussed within a domain-general framework of learning about human action.
There are two fundamentally different ways to attribute intentional mental states to others upon observing their actions. Actions can be interpreted as goal-directed, which warrants ascribing to the agents intentions, desires and beliefs appropriate to the observed actions. Recent studies suggest that young infants also tend to interpret certain actions in terms of goals, and their reasoning about these actions is based on a sophisticated teleological representation. Several theorists have proposed that infants rely on motion cues, such as self-initiated movement, in selecting goal-directed agents. Our experiments revealed that, although infants are more likely to attribute goals to self-propelled than to non-self-propelled agents, they do not need direct evidence about the source of motion to interpret actions in teleological terms. The second mode of action-based mental-state attribution interprets actions as referential, and allows ascription of attentional states, referential intents, communicative messages, etc., to the agents. Young infants also display evidence of interpreting actions in referential terms (for example, when following others' gaze or pointing gestures) and are very sensitive to the communicative situations in which these actions occur. For example, young infants prefer faces with eye contact and objects that react to them contingently, and these are the very situations that later elicit gaze following. Whether or not these early abilities amount to a 'theory of mind' is a matter of debate among infant researchers. Nevertheless, they represent skills that are vital for understanding social agents and engaging in social interactions.
Across the first few years of life, infants readily extract many kinds of regularities from their environment, and this ability is thought to be central to development in a number of domains. Numerous studies have documented infants’ ability to recognize deterministic sequential patterns. However, little is known about the processes infants use to build and update representations of structure in time, and how infants represent patterns that are not completely predictable. The present study investigated how infants’ expectations for a simple structure develop over time, and how infants update their representations with new information. We measured 12-month-old infants’ anticipatory eye movements to targets that appeared in one of two possible locations. During the initial phase of the experiment, infants either saw targets that appeared consistently in the same location (Deterministic condition) or probabilistically in either location, with one side more frequent than the other (Probabilistic condition). After this initial divergent experience, both groups saw the same sequence of trials for the rest of the experiment. The results show that infants readily learn from both deterministic and probabilistic input, with infants in both conditions reliably predicting the most likely target location by the end of the experiment. Local context had a large influence on behavior: infants adjusted their predictions to reflect changes in the target location on the previous trial. This flexibility was particularly evident in infants with more variable prior experience (the Probabilistic condition). The results provide some of the first data showing how infants learn in real time.
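The trial-by-trial updating this abstract describes can be sketched as a toy recency-weighted estimator. This is an illustrative model, not the analysis used in the study; the learning rate and trial probabilities are assumed values. It shows how an estimate converges under deterministic input while remaining sensitive to the most recent trial under probabilistic input:

```python
def update(p_left, saw_left, lr=0.3):
    """Recency-weighted estimate of P(target appears on the left).

    Each trial nudges the estimate toward the observed outcome by a
    fixed fraction (lr), so recent trials carry the most weight.
    """
    target = 1.0 if saw_left else 0.0
    return p_left + lr * (target - p_left)

# Deterministic input: the target always appears on the left, so the
# estimate climbs from 0.5 toward 1.0.
p = 0.5
for _ in range(20):
    p = update(p, saw_left=True)

# A single right-side trial immediately pulls the prediction back toward
# the right, mirroring the local-context sensitivity reported above.
p_after_switch = update(p, saw_left=False)
```

Under this sketch, 20 consistent left-side trials push the estimate above 0.99, while one right-side trial drops it by roughly the learning rate, which is one simple way to capture both the convergence and the previous-trial flexibility the study reports.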
statistical learning; sequence learning; infant; eye-tracking; prediction
Infants’ ability to represent objects has received significant attention from the developmental research community. With the advent of eye-tracking technology, detailed analyses of infants’ looking patterns during object occlusion have revealed much about the nature of infants’ representations. The current study continues this research by analyzing infants’ looking patterns in a novel manner and by comparing infants’ looking at a simple display in which a single 3-dimensional (3-D) object moves along a continuous trajectory to a more complex display in which two 3-D objects undergo trajectories that are interrupted behind an occluder. Six-month-old infants saw an occlusion sequence in which a ball moved along a linear path, disappeared behind a rectangular screen, and then a ball (ball-ball event) or a box (ball-box event) emerged at the other edge. An eye-tracking system recorded infants’ eye movements during the event sequence. Results from examination of infants’ attention to the occluder indicate that during the occlusion interval infants looked longer to the side of the occluder behind which the moving occluded object was located, shifting gaze from one side of the occluder to the other as the object(s) moved behind the screen. Furthermore, when events included two objects, infants attended to the spatiotemporal coordinates of the objects longer than when a single object was involved. These results provide clear evidence that infants’ visual tracking is different in response to a one-object display than to a two-object display. Furthermore, this finding suggests that infants may require more focused attention to the hidden position of objects in more complex multiple-object displays and provides additional evidence that infants represent the spatial location of moving occluded objects.
Object Representation; Infants; Eye-tracking
By the end of the first year, infants are able to recognize both goal-directed and perceptually guided behavior in the actions of non-human agents, even faceless ones. How infants derive the relevant orientation of an unfamiliar agent in the absence of familiar markers such as eyes, ears, or face is unknown. The current studies tested the hypothesis that infants calculate an agent’s “front” from the geometry of its behavior in the spatial environment. In the first study, 14- to 15-month-old infants observed a symmetrical, faceless agent either interact contingently with a confederate or act randomly. It then turned toward one of two target objects. Infants were more likely to look in the direction the agent turned than in the opposite direction, but only in the contingent condition. In the second study, the location of the confederate and target objects was varied, which in turn influenced which end of the agent infants interpreted as the front. Finally, implications for infants’ early gaze-following behaviors with humans and for theory-of-mind development more broadly are discussed.
This paper reports the results of two sets of studies demonstrating 14-month-olds’ tendency to associate an object’s behavior with internal, rather than external, features. In Experiment 1 infants were familiarized to two animated cats that each exhibited a different style of self-generated motion. Infants then saw a novel individual that had an internal feature (stomach color) similar to one cat, but an external feature (hat color) similar to the other. Infants looked reliably longer when the individual’s motion was congruent with the hat than when it was congruent with the stomach. Using a converging method involving object choice, Experiment 2 found that infants prioritized the internal feature over the external feature only when the object’s behavior was self-generated. In the absence of self-generated behaviors, infants did not show a preference toward the internal feature.
Young children can be motivated to help adults by sympathetic concern based upon empathy, but the underlying mechanisms are unknown. One account of empathy-based sympathetic helping in adults states that it arises due to direct-matching mirror-system mechanisms which allow the observer to vicariously experience the situation of the individual in need of help. This mechanism could not account for helping of a geometric-shape agent lacking human-isomorphic body-parts. Here 17-month-olds observed a ball-shaped non-human agent trying to reach a goal but failing because it was blocked by a barrier. Infants helped the agent by lifting it over the barrier. They performed this action less frequently in a control condition in which the barrier could not be construed as blocking the agent. Direct matching is therefore not required for motivating helping in infants, indicating that at least some of our early helpful tendencies do not depend on human-specific mechanisms. Empathy-based mechanisms that do not require direct-matching provide one plausible basis for the observed helping. A second possibility is that rather than being based on empathy, the observed helping occurred as a result of a goal-contagion process in which the infants were primed with the unfulfilled goal.
Human infants readily interpret others’ actions as goal-directed and their understanding of previous goals shapes their expectations about an agent’s future goal-directed behavior in a changed situation. According to a recent proposal (Luo & Baillargeon, 2005), infants’ goal-attributions are not sufficient to support such expectations if the situational change involves broadening the set of choice-options available to the agent, and the agent’s preferences among this broadened set are not known. The present study falsifies this claim by showing that 9-month-olds expect the agent to continue acting towards the previous goal even if additional choice-options become available for which there is no preference-related evidence. We conclude that infants do not need to know about the agent’s preferences in order to form expectations about its goal-directed actions. Implications for the role of action persistency and action selectivity are discussed.
The current study investigated whether 18-month-olds attribute opaque mental states when they solve false-belief tests, or simply rely on behavioural cues available in the stimuli. Infants experienced either a trick blindfold that looked opaque but could be seen through, or a genuinely opaque blindfold. Then both groups of infants observed an actor wearing the same blindfold that they themselves had experienced, whilst a puppet removed an object from its location. Anticipatory eye movements revealed that infants who had experienced the opaque blindfold expected the actor to act in accord with her having a false belief about the object’s location, but infants who had experienced the trick blindfold did not. The results suggest that 18-month-olds used self-experience with the blindfold to assess the actor’s visual access, and updated the knowledge/belief state they attributed to her accordingly. These data constitute compelling evidence that 18-month-olds infer perceptual access and appreciate its causal role in altering the epistemic states of others.
Theory of mind; infants; eye-tracking; social cognition
Six-month-old infants’ ability to form an abstract category of containment was examined using a standard infant categorization task. Infants were habituated to 4 pairs of objects in a containment relation. Following habituation, infants were tested with a novel example of the familiar containment relation and an example of an unfamiliar relation. Infants looked reliably longer at the unfamiliar than at the familiar relation, indicating that they can form a categorical representation of containment. A second experiment demonstrated that infants do not rely on object occlusion to discriminate containment from a support or a behind spatial relation. Together, the results indicate that by 6 months, infants can recognize a containment relation from different angles and across different pairs of objects.
Action is a fundamental component of object representations. However, little is known about how infants represent actions performed on objects. Across four experiments, we tested the hypothesis that 10-month-old infants (N = 80) represent the general ability of actions to produce outcomes (sounds). Experiments 1A and 1B showed that infants encode actions and associate actions and object appearances in events in which actions produced no sound outcomes. Experiment 2 showed that infants associate the presence or absence of outcomes with actions. Experiment 3 showed, in contrast, that infants did not associate the presence or absence of outcomes with object appearances. Together, these studies suggest that infants encode the outcome potential of specific actions. We discuss the implications of these findings for our understanding of the development of action representations.