There has been a revolution in our understanding of infant and toddler cognition that promises to have far-reaching implications for our understanding of communicative and linguistic development. Four empirical findings that helped to prompt this change in theory are analyzed: (a) Intermodal coordination—newborns operate with multimodal information, recognizing equivalences in information across sensory modalities; (b) Imitation—newborns imitate the lip and tongue movements they see others perform; (c) Memory—young infants form long-lasting representations of perceived events and use these memories to generate motor productions after lengthy delays in novel contexts; (d) Theory of mind—by 18 months of age toddlers have adopted a theory of mind, reading below surface behavior to the goals and intentions in people's actions. This paper examines three views currently being offered in the literature to replace the classical framework of early cognitive development: modularity-nativism, connectionism, and theory-theory. Arguments are marshaled to support the “theory-theory” view. This view emphasizes a combination of innate structure and qualitative reorganization in children's thought based on input from the people and things in their culture. It is suggested that preverbal cognition forms a substrate for language acquisition and that analyzing cognition may enhance our understanding of certain disorders of communication.
Intermodal coordination; Imitation; Memory; Theory of mind; Representation; Language acquisition; Face perception
There is converging evidence that the observation of an action activates a corresponding motor representation in the observer through a ‘mirror-matching’ mechanism. However, research on such ‘shared representations’ of perception and action has widely neglected the question of how we can distinguish our own motor intentions from externally triggered motor representations. By investigating the inhibition of imitative response tendencies as an index of the control of shared representations, we show that self–other distinction plays a fundamental role in the control of shared representations. Furthermore, we demonstrate that overlapping brain activations can be found in the anterior fronto-median cortex (aFMC) and the temporo-parietal junction (TPJ) area for the control of shared representations and complex social-cognitive tasks, such as mental state attribution. In a functional magnetic resonance imaging experiment, we functionally dissociate the roles of TPJ and aFMC during the control of shared representations. Finally, we propose a hypothesis stating that the control of shared representations might be the missing link between functions of the mirror system and mental state attribution.
imitation; inhibition; prefrontal cortex; temporo-parietal junction; mentalizing
Infants represent the acts of others and their own acts in commensurate terms. They can recognize cross-modal equivalences between acts they see others perform and their own felt bodily movements. This recognition of self–other equivalences in action gives rise to interpreting others as having similar psychological states such as perceptions and emotions. The ‘like me’ nature of others is the starting point for social cognition, not its culmination.
Understanding the intentional relations in others' actions is critical to human social life. Origins of this knowledge exist in the first year and are a function of both acting as an intentional agent and observing movement cues in actions. We explore a new mechanism we believe plays an important role in infants' understanding of new actions: comparison. We examine how the opportunity to compare a familiar action with a novel, tool use action helps 7- and 10-month-old infants extract and imitate the goal of a tool use action. Infants given the chance to compare their own reach for a toy with an experimenter's reach using a claw later imitated the goal of an experimenter's tool use action. Infants who engaged with the claw, were familiarized with the claw's causal properties, or learned the associations between claw and toys (but did not align their reaches with the claw's) did not imitate. Further, for 10-month-olds, active participation in the familiar action to be compared was more beneficial than observing a familiar and a novel action aligned. Infants' ability to extract the goal-relation of a novel action through comparison with a familiar action could have a broad impact on the development of action knowledge and social learning more generally.
infancy; cognitive development; action understanding; analogical reasoning
How do human children come to understand the actions of other people? What neural systems are associated with the processing of others’ actions and how do these systems develop, starting in infancy? These questions span cognitive psychology and developmental cognitive neuroscience, and addressing them has important implications for the study of social cognition. A large amount of research has used behavioral measures to investigate infants’ imitation of the actions of other people; a related but smaller literature has begun to use neurobiological measures to study infants’ action representation. Here we focus on experiments employing electroencephalographic (EEG) techniques for assessing mu rhythm desynchronization in infancy, and analyze how this work illuminates the links between action perception and production prior to the onset of language.
A close coupling of perception and action processes is assumed to play an important role in basic capabilities of social interaction, such as guiding attention and observation of others’ behavior, coordinating the form and functions of behavior, or grounding the understanding of others’ behavior in one’s own experiences. In the attempt to endow artificial embodied agents with similar abilities, we present a probabilistic model for the integration of perception and generation of hand-arm gestures via a hierarchy of shared motor representations, allowing for combined bottom-up and top-down processing. Results from human-agent interactions are reported demonstrating the model’s performance in learning, observation, imitation, and generation of gestures.
Computational model; Interactive artificial agents; Nonverbal communication; Gestures; Perception-action links
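The following Python fragment is a minimal, illustrative sketch of the kind of probabilistic perception-generation coupling described in the abstract above; the gesture prototypes, noise parameters, and function names are hypothetical and are not taken from the reported model. It shows a single set of shared representations used both bottom-up (recognizing an observed trajectory) and top-down (generating a trajectory); a hierarchical version would stack several such levels, passing posteriors upward and predictions downward.

import numpy as np

# Hypothetical gesture prototypes (timesteps x 2 dims) standing in for
# shared motor representations used both to recognize and to generate.
PROTOTYPES = {
    "wave":  np.array([[0.0, 0.0], [0.5, 1.0], [1.0, 0.0]]),
    "point": np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]]),
}

def recognize(observed, noise_sd=0.3, prior=None):
    # Bottom-up: posterior over gestures given a noisy observed trajectory.
    prior = prior or {g: 1.0 / len(PROTOTYPES) for g in PROTOTYPES}
    log_post = {}
    for g, proto in PROTOTYPES.items():
        sq_err = float(np.sum((observed - proto) ** 2))
        log_post[g] = np.log(prior[g]) - sq_err / (2 * noise_sd ** 2)
    norm = np.logaddexp.reduce(list(log_post.values()))
    return {g: float(np.exp(lp - norm)) for g, lp in log_post.items()}

def generate(gesture, noise_sd=0.05, rng=np.random.default_rng(0)):
    # Top-down: sample a motor trajectory from the same shared prototype.
    proto = PROTOTYPES[gesture]
    return proto + rng.normal(0.0, noise_sd, proto.shape)

observed = generate("wave", noise_sd=0.2)   # stand-in for an observed gesture
print(recognize(observed))                  # posterior, e.g. favouring "wave"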
There are two fundamentally different ways to attribute intentional mental states to others upon observing their actions. Actions can be interpreted as goal-directed, which warrants ascribing intentions, desires and beliefs appropriate to the observed actions, to the agents. Recent studies suggest that young infants also tend to interpret certain actions in terms of goals, and their reasoning about these actions is based on a sophisticated teleological representation. Several theorists proposed that infants rely on motion cues, such as self-initiated movement, in selecting goal-directed agents. Our experiments revealed that, although infants are more likely to attribute goals to self-propelled than to non-self-propelled agents, they do not need direct evidence about the source of motion for interpreting actions in teleological terms. The second mode of action-based mental state attribution interprets actions as referential, and allows ascription of attentional states, referential intents, communicative messages, etc., to the agents. Young infants also display evidence of interpreting actions in referential terms (for example, when following others' gaze or pointing gesture) and are very sensitive to the communicative situations in which these actions occur. For example, young infants prefer faces with eye-contact and objects that react to them contingently, and these are the very situations that later elicit gaze following. Whether or not these early abilities amount to a 'theory of mind' is a matter of debate among infant researchers. Nevertheless, they represent skills that are vital for understanding social agents and engaging in social interactions.
Research has shown that the brain is constantly making predictions about future events. Theories of prediction in perception, action and learning suggest that the brain serves to reduce the discrepancies between expectation and actual experience, i.e., by reducing the prediction error. Forward models of action and perception propose the generation of a predictive internal representation of the expected sensory outcome, which is matched to the actual sensory feedback. Shared neural representations have been found when experiencing one's own and observing others' actions, rewards, errors, and emotions such as fear and pain. These general principles of the “predictive brain” are well established and have already begun to be applied to social aspects of cognition. The application and relevance of these predictive principles to social cognition are discussed in this article. Evidence is presented to argue that simple non-social cognitive processes can be extended to explain complex cognitive processes required for social interaction, with common neural activity seen for both social and non-social cognitions. A number of studies are included which demonstrate that bottom-up sensory input and top-down expectancies can be modulated by social information. The concept of competing social forward models and a partially distinct category of social prediction errors are introduced. The evolutionary implications of a “social predictive brain” are also mentioned, along with the implications for psychopathology. The review presents a number of testable hypotheses and novel comparisons that aim to stimulate further discussion and integration between currently disparate fields of research, with regard to computational models, behavioral and neurophysiological data. This promotes a relatively new platform for inquiry in social neuroscience with implications for social learning, theory of mind, empathy, the evolution of the social brain, and potential strategies for treating social cognitive deficits.
predictive coding; social interaction; forward models; prediction error; sensorimotor control; social learning; imitation; social decision-making
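As a toy illustration of the prediction-error principle summarized in the abstract above, the short Python sketch below (the learning rate and outcome values are arbitrary assumptions, not quantities from the article) corrects an internal prediction by a fraction of the discrepancy between expected and experienced outcomes, the same delta-rule logic that forward-model accounts generalize to sensory and social prediction.

# Illustrative only: a prediction is corrected in proportion to the
# prediction error (experienced outcome minus expected outcome).
def update_prediction(prediction, outcome, learning_rate=0.3):
    prediction_error = outcome - prediction
    return prediction + learning_rate * prediction_error, prediction_error

prediction = 0.0
for outcome in [1.0, 1.0, 1.0, 0.0, 1.0]:   # a short, made-up outcome series
    prediction, error = update_prediction(prediction, outcome)
    print(f"prediction={prediction:.2f}  error={error:+.2f}")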
Reaching is an important and early emerging motor skill that allows infants to interact with the physical and social world. However, few studies have considered how reaching experiences shape infants’ own motor development and their perception of actions performed by others. In the current study, two groups of infants received daily parent-guided play sessions over a two-week training period. Using “Sticky Mittens”, one group was enabled to pick up objects independently, whereas the other group only passively observed their parent’s actions on objects. Following training, infants’ manual and visual exploration of objects, agents, and actions in a live and a televised context was assessed. Our results showed that only infants who experienced independent object apprehension advanced in their reaching behavior, and showed changes in their visual exploration of agents and objects in a live setting. Passive observation was not sufficient to change infants’ behavior. To our surprise, the effects of the training did not seem to generalize to a televised observation context. Together, our results suggest that early motor training can jump-start infants’ transition into reaching and inform their perception of others’ actions.
Infant perception; motor development; perception-action; sticky mittens
Human beings are imitative generalists. We can immediately imitate a wide range of behaviors with great facility, whether they be vocal maneuvers, body postures, or actions on objects. The ontogeny of this skill has been an enduring question in developmental psychology. Classical theory holds that the ability to imitate facial gestures is a milestone that is passed at about one year. Before this time infants are thought to lack the perceptual-cognitive sophistication necessary to match a gesture they can see with one they cannot see themselves perform. A second developmental milestone is the capacity for deferred imitation, i.e. imitation of an absent model. This is said to emerge at about 18 months, in close synchrony with other higher-order activities such as object permanence and tool use, as part of a general cognitive shift from a purely sensory-motor level of functioning to one that allows language. Research suggests that the imitative capacity of young infants has been underestimated. Human infants are capable of imitating facial gestures at birth, with infants less than one day old manifesting this skill. Moreover recent experiments have established deferred imitation well before the predicted age of 18 months. Studies discussed here show that 9-month-olds can duplicate acts after a delay of 24 hours, and that 14-month-olds can retain and duplicate as many as five actions over a 1-week delay. These new findings re-raise questions about the relation between nonverbal cognitive development and language development: What aspects, if any, of these two domains are linked? A hypothesis is delineated that predicts certain very specific relations between particular cognitive and semantic achievements during the one-word stage, and data are reported supporting this hypothesis. Specifically, relations are reported between: (a) the development of object permanence and the use of words encoding disappearance, (b) means-ends understanding (as manifest in tool use) and words encoding success and failure, and (c) categorization behavior and the onset of the naming explosion. This research on human ontogeny suggests close and highly specific links between aspects of early language and thought.
imitation; language; object permanence; infants; cognitive development
Infants’ imitation of differently aged models has been predominantly investigated with object-related actions and so far has led to mixed evidence. Whereas some studies reported an increased likelihood of imitating peer models in contrast to adult models, other studies reported the opposite pattern of results. In the present study, 14-month-old infants were presented with four familiar gestures (e.g., clapping) that were demonstrated by differently aged televised models (peer, older child, adult). Results revealed that infants were more likely to imitate the peer model than the older child or the adult. This result is discussed with respect to a social function of imitation and the mechanism of imitating familiar behavior.
gestures; imitation; infancy; peers; model age
When we observe the actions performed by others, our motor system “resonates” along with that of the observed agent. Is a similar visuomotor resonant response observed in autism spectrum disorders (ASD)? Studies investigating action observation in ASD have yielded inconsistent findings. In this perspective article we examine behavioral and neuroscientific evidence in favor of visuomotor resonance in ASD, and consider the possible role of action-perception coupling in social cognition. We distinguish between different aspects of visuomotor resonance and conclude that while some aspects may be preserved in ASD, abnormalities exist in the way individuals with ASD convert visual information from observed actions into a program for motor execution. Such abnormalities, we surmise, may contribute to but also depend on the difficulties that individuals with ASD encounter during social interaction.
autism; visuomotor resonance; motor facilitation; mirror system; social cognition
Human neuroscience has seen a recent boom in studies on reflective, controlled, explicit social cognitive functions like imitation, perspective-taking, and empathy. The relationship of these higher-level functions to lower-level, reflexive, automatic, implicit functions is an area of current research. As the field continues to address this relationship, we suggest that an evolutionary, comparative approach will be useful, even essential. There is a large body of research on reflexive, automatic, implicit processes in animals. A growing perspective sees social cognitive processes as phylogenetically continuous, making findings in other species relevant for understanding our own. One of these phylogenetically continuous processes appears to be self-other matching or simulation. Mice are more sensitive to pain after watching other mice experience pain; geese experience heart rate increases when seeing their mate in conflict; and infant macaques, chimpanzees, and humans automatically mimic adult facial expressions. In this article, we review findings in different species that illustrate how such reflexive processes are related to (“higher order”) reflective processes, such as cognitive empathy, theory of mind, and learning by imitation. We do so in the context of self-other matching in three different domains—in the motor domain (somatomotor movements), in the perceptual domain (eye movements and cognition about visual perception), and in the autonomic/emotional domain. We also review research on the developmental origin of these processes and their neural bases across species. We highlight gaps in existing knowledge and point out some questions for future research. We conclude that our understanding of the psychological and neural mechanisms of self-other mapping and other functions in our own species can be informed by considering the layered complexity of these functions in other species.
reflective processing; reflexive processing; social cognition; empathy; comparative cognition; evolution; motor resonance
Recent findings in neuroscience suggest an overlap between brain regions involved in the execution of movement and perception of another’s movement. This so-called “action-perception coupling” is supposed to serve our ability to automatically infer the goals and intentions of others by internal simulation of their actions. A consequence of this coupling is motor interference (MI), the effect of movement observation on the trajectory of one’s own movement. Previous studies emphasized that various features of the observed agent determine the degree of MI, but could not clarify how human-like an agent has to be for its movements to elicit MI and, more importantly, what ‘human-like’ means in the context of MI. Thus, we investigated in several experiments how different aspects of appearance and motility of the observed agent influence MI. Participants performed arm movements in horizontal and vertical directions while observing videos of a human, a humanoid robot, or an industrial robot arm with either artificial (industrial) or human-like joint configurations. Our results show that, given a human-like joint configuration, MI was elicited by observing arm movements of both humanoid and industrial robots. However, if the joint configuration of the robot did not resemble that of the human arm, MI could no longer be demonstrated. Our findings present evidence for the importance of human-like joint configuration rather than other human-like features for perception-action coupling when observing inanimate agents.
Recent progress in cognitive neuroscience highlights the involvement of the prefrontal cortex (PFC) in social cognition. Accumulating evidence demonstrates that representations within the lateral PFC enable people to coordinate their thoughts and actions with their intentions to support goal-directed social behavior. Despite the importance of this region in guiding social interactions, remarkably little is known about the functional organization and forms of social inference processed by the lateral PFC. Here we introduce a cognitive neuroscience framework for understanding the inferential architecture of the lateral PFC, drawing upon recent theoretical developments in evolutionary psychology and emerging neuroscience evidence about how this region may orchestrate behavior on the basis of evolutionarily adaptive social norms for obligatory, prohibited, and permissible courses of action.
A growing consensus in social cognitive neuroscience holds that large portions of the primate visual brain are dedicated to the processing of social information, i.e., to those aspects of stimuli that are usually encountered in social interactions such as others' facial expressions, actions, and symbols. Yet, studies of social perception have mostly employed simple pictorial representations of conspecifics. These stimuli are social only in the restricted sense that they physically resemble objects with which the observer would typically interact. In an equally important sense, however, these stimuli might be regarded as “non-social”: the observer knows that they are viewing pictures and might therefore not attribute current mental states to the stimuli or might do so in a qualitatively different way than in a real social interaction. Recent studies have demonstrated the importance of such higher-order conceptualization of the stimulus for social perceptual processing. Here, we assess the similarity between the various types of stimuli used in the laboratory and object classes encountered in real social interactions. We distinguish two different levels at which experimental stimuli can match social stimuli as encountered in everyday social settings: (1) the extent to which a stimulus' physical properties resemble those typically encountered in social interactions and (2) the higher-level conceptualization of the stimulus as indicating another person's mental states. We illustrate the significance of this distinction for social perception research and report new empirical evidence further highlighting the importance of mental state attribution for perceptual processing. Finally, we discuss the potential of this approach to inform studies of clinical conditions such as autism.
social perception; social neuroscience; interaction; gaze perception; face perception; mental state attribution; theory of mind; autism
This study tested whether the Risk Perception Attitude Framework predicted nutrition-related cancer prevention cognitions and behavioral intentions. Data from the 2003 Health Information National Trends Survey (HINTS) were analyzed to assess respondents’ reported likelihood of developing cancer (risk) and perceptions of whether they could lower their chances of getting cancer (efficacy). Respondents with higher efficacy were more likely to report that good nutrition can prevent cancer and reported more preventive dietary changes compared to respondents with lower efficacy. Respondents with higher efficacy were more likely to report intentions to change their diets to prevent cancer and reported more preventive dietary changes to their own diets, but only at higher levels of risk. Results suggest that to improve cognitions about the role of nutrition in cancer prevention, interventions should target cancer prevention efficacy; however, to increase intentions to change nutrition behaviors, interventions should target efficacy and risk perceptions.
Risk Perception Attitude Framework; Health Information National Trends Survey; cancer prevention
Metacognition is usually construed as a conscious, intentional process whereby people reflect upon their own mental activity. Here, we instead suggest that metacognition is but an instance of a larger class of representational re-description processes that we assume occur unconsciously and automatically. From this perspective, the brain continuously and unconsciously learns to anticipate the consequences of action or activity on itself, on the world and on other people through three predictive loops: an inner loop, a perception–action loop and a self–other (social cognition) loop, which together form a tangled hierarchy. We ask what kinds of mechanisms may subtend this form of enactive metacognition. We extend previous neural network simulations and compare the model with signal detection theory, highlighting that while the latter approach assumes that both type I (objective) and type II (subjective, metacognition-based) decisions tap into the same signal at different hierarchical levels, our approach is closer to dual-route models in which it is assumed that the re-descriptions made possible by the emergence of meta-representations occur independently and outside of the first-order causal chain. We close by reviewing relevant neurological evidence for the idea that awareness, self-awareness and social cognition involve the same mechanisms.
consciousness; metacognition; blindsight; artificial grammar learning; neural networks; social cognition
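The signal-detection view that the article above argues against can be made concrete with a short, purely illustrative Python sketch (all parameter values are assumptions): a single internal signal supports both the type I decision (is a signal present?) and the type II, metacognitive judgement (here, confidence read off as the distance of the signal from the decision criterion). The dual-route alternative favoured in the abstract would instead let the type II judgement arise from a separate re-description outside this first-order chain.

import numpy as np

rng = np.random.default_rng(1)
d_prime, criterion, n_trials = 1.0, 0.5, 10_000   # illustrative values

signal_present = rng.random(n_trials) < 0.5
evidence = rng.normal(signal_present * d_prime, 1.0)   # one shared signal

type1_response = evidence > criterion                  # objective decision
type2_confidence = np.abs(evidence - criterion)        # metacognitive readout

correct = type1_response == signal_present
print("type I accuracy:", round(correct.mean(), 3))
print("confidence when correct:", round(type2_confidence[correct].mean(), 3))
print("confidence when wrong:  ", round(type2_confidence[~correct].mean(), 3))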
Imitation is an important component of human social learning throughout life. Theoretical models and empirical data from anthropology and psychology suggest that people tend to imitate self-similar individuals, and that such imitation biases increase the adaptive value (e.g., self-relevance) of learned information. It is unclear, however, what neural mechanisms underlie people's tendency to imitate those similar to themselves. We focused on the own-gender imitation bias, a pervasive bias thought to be important for gender identity development. While undergoing fMRI, participants imitated own- and other-gender actors performing novel, meaningless hand signs; as control conditions, they also simply observed such actions and viewed still portraits of the same actors. Only the ventral and dorsal striatum, orbitofrontal cortex and amygdala were more active when imitating own- compared to other-gender individuals. A Bayesian analysis of the BrainMap neuroimaging database demonstrated that the striatal region preferentially activated by own-gender imitation is selectively activated by classical reward tasks in the literature. Taken together, these findings reveal a neurobiological mechanism associated with the own-gender imitation bias and demonstrate a novel role of reward-processing neural structures in social behavior.
imitation; neuroimaging; reward; gender; cultural learning
Children with Autistic Spectrum Disorders (ASD) are frequently hampered by motor impairment, with difficulties ranging from imitation of actions to recognition of motor intentions. Such a widespread inefficiency of the motor system is likely to interfere with the ontogeny of both motor planning and understanding of the goals of actions, thus ultimately affecting the emergence of social cognition.
We investigate the organization of action representation in 15 high-functioning children with ASD (mean age: 8.11) and in two control samples of typically developing (TD) children: the first, from a primary school, was matched for chronological age (CA); the second, from a kindergarten, comprised children of a much younger age (CY). We used nine newly designed behavioural motor tasks aimed at exploring three domains of motor cognition: 1) imitation of actions, 2) production of pantomimes, and 3) comprehension of pantomimes. The findings reveal that ASD children fare significantly worse than the two control samples in each of the inspected components of the motor representation of actions, be it the imitation of gestures, the self-planning of pantomimes, or the (verbal) comprehension of observed pantomimes. In the latter task, owing to its cognitive complexity, ASD children come close to the younger TD children’s level of performance; yet they fare significantly worse than their age-mate controls. Overall, ASD children reveal profound damage to the mechanisms that control both production and pre-cognitive “comprehension” of the motor representation of actions.
Our findings suggest that many of the social cognitive impairments manifested by ASD individuals are likely rooted in their incapacity to assemble and directly grasp the intrinsic goal-related organization of motor behaviour. Such impairment of motor cognition might be partly due to early damage to the Mirror Neuron Mechanism (MNM).
The theory of event coding (TEC) is a general framework explaining how perceived and produced events (stimuli and responses) are cognitively represented and how their representations interact to generate perception and action. This article discusses the implications of TEC for understanding the control of voluntary action and makes an attempt to apply, specify, and concretize the basic theoretical ideas in the light of the available research on action control. In particular, it is argued that the major control operations may take place long before a stimulus is encountered (the prepared-reflex principle), that stimulus-response translation may be more automatic than commonly thought, that action selection and execution are more interwoven than most approaches allow, and that the acquisition of action-contingent events (action effects) is likely to subserve both the selection and the evaluation of actions.
We have suggested that the mirror-neuron system might be usefully understood as implementing Bayes-optimal perception of actions emitted by oneself or others. To substantiate this claim, we present neuronal simulations that show the same representations can prescribe motor behavior and encode motor intentions during action–observation. These simulations are based on the free-energy formulation of active inference, which is formally related to predictive coding. In this scheme, (generalised) states of the world are represented as trajectories. When these states include motor trajectories they implicitly entail intentions (future motor states). Optimizing the representation of these intentions enables predictive coding in a prospective sense. Crucially, the same generative models used to make predictions can be deployed to predict the actions of self or others by simply changing the bias or precision (i.e. attention) afforded to proprioceptive signals. We use simulations of handwriting to illustrate neuronally plausible generation and recognition of itinerant (wandering) motor trajectories. We then use the same simulations to produce synthetic electrophysiological responses to violations of intentional expectations. Our results affirm that a Bayes-optimal approach provides a principled framework, which accommodates current thinking about the mirror-neuron system. Furthermore, it endorses the general formulation of action as active inference.
Action–observation; Mirror-neuron system; Inference; Precision; Free-energy; Perception; Generative models; Predictive coding
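The role assigned to precision in the abstract above can be caricatured in a few lines of Python. This is only a schematic sketch under strong simplifying assumptions (a one-dimensional state and a fixed gain), not the handwriting simulations reported in the paper: one generative model predicts a trajectory, and the prediction error is discharged either by moving (high proprioceptive precision, self-generated action) or by revising the perceptual estimate (attenuated precision, action observation).

import numpy as np

predicted = np.linspace(0.0, 1.0, 50)   # trajectory the generative model expects

def simulate(high_proprioceptive_precision, gain=0.5):
    position, percept = 0.0, 0.0   # own limb state / inferred state of the other
    for pred in predicted:
        if high_proprioceptive_precision:
            position += gain * (pred - position)   # action fulfils the prediction
        else:
            percept += gain * (pred - percept)     # perception absorbs the error
    return round(position, 3), round(percept, 3)

print("acting (self):     ", simulate(high_proprioceptive_precision=True))
print("observing (other): ", simulate(high_proprioceptive_precision=False))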
A core assumption of how humans understand and infer the intentions and beliefs of others is the existence of a functional self-other distinction. At least two neural systems have been proposed to manage such a critical distinction. One system, part of the classic motor system, is specialized for the preparation and execution of motor actions that are self-realized and voluntary, while the other appears primarily involved in capturing and understanding the actions of non-self or others. The latter system, of which the mirror neuron system is part, is the canonical action 'resonance' system in the brain that has evolved to share many of the same circuits involved in motor control. Mirroring or 'shared circuit systems' are assumed to be involved in resonating, imitating, and/or simulating the actions of others. A number of researchers have proposed that shared representations of motor actions may form a foundational cornerstone for higher order social processes, such as motor learning, action understanding, imitation, perspective taking, understanding facial emotions, and empathy. However, mirroring systems that evolve from the classic motor system present at least three problems: a development, a correspondence, and a control problem. Developmentally, the question is how does a mirroring system arise? How do humans acquire the ability to simulate through mapping observed onto executed actions? Are mirror neurons innate and therefore genetically programmed? To what extent is learning necessary? In terms of the correspondence problem, the question is how does the observer agent know what the observed agent's resonance activation pattern is? How does the matching of motor activation patterns occur? Finally, in terms of the control problem, the issue is how to efficiently control a mirroring system that is turned on automatically through observation. Or, as others have stated the problem more succinctly: "Why don't we imitate all the time?" In this review, we argue from anatomical, physiological, modeling, and functional perspectives that a critical component of the human mirror neuron system is sensorimotor cortex. Not only are sensorimotor transformations necessary for computing the patterns of muscle activation and kinematics during action observation, but they also provide potential answers to the development, correspondence and control problems.
Many everyday tasks require the ability of two or more individuals to coordinate their actions with others to increase efficiency. Such an increase in efficiency can often be observed even after only very few trials. Previous work suggests that such behavioral adaptation can be explained within a probabilistic framework that integrates sensory input and prior experience. Even though higher cognitive abilities such as intention recognition have been described as probabilistic estimation depending on an internal model of the other agent, it is not clear whether much simpler daily interaction is consistent with a probabilistic framework. Here, we investigate whether the mechanisms underlying efficient coordination during manual interactions can be understood as probabilistic optimization. For this purpose, we studied a simple manual handover task in several experiments, concentrating on the action of the receiver. We found that the duration until the receiver reacts to the handover decreases over trials, but strongly depends on the position of the handover. We then replaced the human deliverer with different types of robots to further investigate the influence of the delivering movement on the reaction of the receiver. Durations were found to depend on movement kinematics and the robot’s joint configuration. Modeling the task was based on the assumption that the receiver’s decision to act is based on the accumulated evidence for a specific handover position. The evidence for this handover position is collected from observing the hand movement of the deliverer over time and, if appropriate, by integrating this sensory likelihood with a prior expectation that is updated over trials. The close match of model simulations and experimental results shows that the efficiency of handover coordination can be explained by an adaptive probabilistic fusion of a priori expectation and online estimation.
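A minimal Python sketch of this kind of adaptive probabilistic fusion is given below; the Gaussian fusion rule, the precision threshold, and all numerical values are illustrative assumptions rather than the study's fitted model. Evidence about the handover position accumulates from noisy observations of the deliverer's hand and is combined with a prior; a sharper prior, as would develop over repeated trials at the same position, lets the simulated receiver reach the response threshold after fewer observations.

import numpy as np

def reaction_step(observations, prior_mean, prior_var, obs_var=0.2,
                  precision_threshold=50.0):
    # Gaussian (Kalman-style) fusion of prior expectation and incoming evidence;
    # respond once the posterior precision exceeds an arbitrary threshold.
    post_mean, post_var = prior_mean, prior_var
    for step, obs in enumerate(observations, start=1):
        gain = post_var / (post_var + obs_var)
        post_mean += gain * (obs - post_mean)
        post_var *= (1.0 - gain)
        if 1.0 / post_var >= precision_threshold:
            return step, round(post_mean, 3)
    return len(observations), round(post_mean, 3)

rng = np.random.default_rng(0)
observations = 0.8 + rng.normal(0.0, 0.2, size=30)   # noisy hand positions

print("vague prior (first trial):   ", reaction_step(observations, 0.0, 1.0))
print("sharp prior (after learning):", reaction_step(observations, 0.8, 0.05))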
This article discusses four different scenarios to specify increasingly complex mechanisms that enable increasingly flexible social interactions. The key dimension on which these mechanisms differ is the extent to which organisms are able to process other organisms' intentions and to keep them apart from their own. Drawing on findings from ecological psychology, scenario 1 focuses on entrainment and simultaneous affordance in ‘intentionally blind’ individuals. Scenario 2 discusses how an interface between perception and action allows observers to simulate intentional action in others. Scenario 3 is concerned with shared perceptions, arising through joint attention and the ability to distinguish between self and other. Scenario 4 illustrates how people could form intentions to act together while simultaneously distinguishing between their own and the other's part of a joint action. The final part focuses on how combining the functionality of the four mechanisms can explain different forms of social interactions. It is proposed that basic interpersonal processes are put in the service of more advanced functions that support the type of intentionality required to engage in joint action, cultural learning, and communication.
joint action; intention; evolution of social interaction; tool use; communication; social cognitive neuroscience