Fear and disgust expressions are not arbitrary social cues. Expressing fear maximizes sensory exposure (e.g., increases visual and nasal input), whereas expressing disgust reduces sensory exposure (e.g., decreases visual and nasal input).1 A similar effect of these emotional expressions has recently been found to modify sensory exposure at the level of the central nervous system (attention) in people perceiving these expressions.2 At an attentional level, sensory exposure is increased when perceiving fear and reduced when perceiving disgust. These findings suggest that response preparations are transmitted from expressers to perceivers. However, the processes involved in the transmission of such emotional action tendencies remain unclear. We suggest that emotional contagion, as framed by grounded cognition theories, offers a simple, ecological and straightforward explanation of this effect: contagion through embodied simulation of others’ emotional states, via simple, efficient and very fast facial mimicry, may be the underlying process.
One of the acknowledged key functions of facial expressions in primates is to communicate the expresser’s internal affective state to peers quickly and efficiently. In other words, facial expressions offer specific evocative information to the perceiver3 and allow for important regulation strategies between the expresser and the perceiver.4 However, beyond the idea that facial expressions represent important social cues, recent physiological data indicate that facial expressions may also modify sensory acquisition. At a physiological level, a recent study1 has shown that fear and disgust are not arbitrary social signals. In expressers, fear was found to maximize sensory exposure (e.g., increased visual input) while disgust was found to reduce sensory exposure (e.g., decreased visual input); the same function seems to be present for anger and surprise as well.5
A similar relationship between emotional expressions and attention also occurs at the cognitive (i.e., attentional) level when people perceive emotional expressions.6,7 Likewise, using the attentional blink (AB) paradigm, we have shown that perceived fear and disgust expressions produce, in perceivers’ attentional systems, a similar pattern of closure (induced by disgust) versus exposure (induced by fear) to external perceptual cues (see Figure 1 for an example of the type of trial used).2 The AB refers to the impairment of second-target (T2) identification when T2 appears within 200–500 ms of a first target (T1). It was recently suggested that this effect involves two processes, described in the ‘boost and bounce’ theory.8 First, the detection of T1 triggers a transient attentional enhancement (boost), which can also enhance the processing of any distractors presented immediately after T1 but before T2. These distractors then elicit attentional suppression (bounce), which impairs the processing of the subsequent T2.
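The timing logic of an AB trial can be made concrete with a minimal sketch. Note that the stimulus onset asynchrony, the stream content, and the function names below are illustrative assumptions for exposition; only the 200–500 ms blink window comes from the text above, and the cited studies’ actual parameters may differ.

```python
# Minimal sketch of attentional-blink (AB) trial timing.
# SOA and stream content are assumed values, not those of the cited studies.

SOA_MS = 100                   # assumed: one RSVP item every 100 ms
BLINK_WINDOW_MS = (200, 500)   # T2 identification is typically impaired here

def t2_in_blink(t1_position, t2_position, soa_ms=SOA_MS):
    """Return True if T2 falls inside the 200-500 ms post-T1 blink window."""
    lag_ms = (t2_position - t1_position) * soa_ms
    low, high = BLINK_WINDOW_MS
    return low <= lag_ms <= high

# Example RSVP stream: distractor letters with two digit targets (T1, T2).
stream = ["K", "R", "3", "M", "7", "Q", "B"]  # T1 = '3', T2 = '7'
t1_idx = stream.index("3")
t2_idx = stream.index("7")

lag_ms = (t2_idx - t1_idx) * SOA_MS
print(f"T1-T2 lag: {lag_ms} ms; inside blink window: {t2_in_blink(t1_idx, t2_idx)}")
# -> T1-T2 lag: 200 ms; inside blink window: True
```

At this assumed SOA, a T2 presented one item after T1 (100 ms lag) precedes the blink window, whereas a T2 at lag two (200 ms) falls inside it, which is where the fear- versus disgust-induced modulation described above would be measured.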
In our study,2 we found that processing fearful faces impaired the detection of T2 to a greater extent than processing disgusted faces did. When compared (Fig. 2) with an analogous experimental design9 without a facial-expression prime, these findings seem to reflect both an increased blink following perceived fear and a decreased blink following perceived disgust.
The fact that perceiving certain facial emotions has effects (i.e., consequences) similar to expressing those emotions suggests that expressing and perceiving facial expressions of emotion may engage similar processes. The underlying question, then, is how the transmission (i.e., contagion) of these emotional properties from expressers to perceivers induces modifications of information intake in perceivers.
Thus, an interesting question that follows from our findings concerns the cognitive and emotional processes involved in the modification of perceivers’ behavioural efficiency. How can perceivers be cognitively influenced by an observed facial expression of emotion? In other words, how can a perceiver replicate the sensory and cognitive consequences displayed by an expresser? Facial expressions are known to spontaneously evoke (i.e., mirror) emotional responses in the perceiver, as shown by studies on facial mimicry. For instance, Dimberg and colleagues showed that angry expressions evoked similar facial expressions in the perceiver10 and that these reactions occur even when facial expressions are presented too quickly to allow conscious perception.11 Collectively, the literature in psychological science and cognitive neuroscience suggests that emotional expressions can “automatically” evoke prepared responses4 or action tendencies12 in others, and this information is likely to influence the perceiver’s subsequent behavioural responses. As such, besides their roles as social cues4 and as a biomechanical (sensory) interface,1,5 facial expressions can also play another role in affective life: serving as the grounding (i.e., a cognitive support) for the processing of emotional information.13 Indeed, research from the embodied or grounded cognition literature has demonstrated that individuals use simulations to represent knowledge.14–16 These simulations can occur in different sensory modalities9,17,18 as well as in affective systems.13,19–21 For instance, a series of studies found that participants expressed emotion on their faces when trying to represent discrete emotional contents from words.
As an example, when participants had to indicate whether the words slug or vomit were related to an emotion, they expressed disgust on their faces, as measured by contraction of the levator labii (used to wrinkle the nose).19 Taken together, these findings suggest that facial expressions also constitute a cognitive support used to reflect on or access the affective meaning of a given emotion, and this processing often involves the display of a facial expression of emotion.13,19 As grounded cognition models suggest, the same brain structures are active during both first- and third-person experiences of emotion.13,22–24 The activation of this “shared manifold space,”22 which reflects the automatic activation of grounded simulation routines, creates a bridge between others and ourselves.24 In other words, comprehension of others’ facial expressions depends on the re-activation of the neural structures normally devoted to our own personal experience of emotions.24
From this standpoint, it can be suggested that when humans perceive facial expressions of emotion, they automatically behave as if they were actually experiencing and expressing that particular emotion. As such, perceivers are likely to act on the environment as expressers do. Since the specific forms of facial expressions may have originated to maximize beneficial adaptation in expressers,1,5 perceivers will similarly maximize their adaptation if they can mimic the expresser’s communicated internal state. As we found in our attentional blink study,2 such mimicry would quickly prepare perceivers to deal with environmental changes (i.e., attentional modulation) and probably to behave in the most adaptive way. Like a domino effect, facial adaptation in expressers is transmitted to perceivers through mimicry and simulation processes, so that adapted response tendencies are primed in perceivers upon the mere perception of expressers’ internal affective states. Such an interpretation offers additional value to modal (i.e., grounded or embodied) models of cognition as compared with amodal models.24,25 Classic “amodal” models posit that information is encoded in an abstract form that is functionally independent of the sensory systems that processed it.25 A parallel effect of facial expression in expressers and perceivers might therefore not be predicted a priori by amodal models. The added value of modal representations of knowledge lies in the rapid and very efficient adaptive response (i.e., through mimicry and simulation) produced in perceivers as soon as a facial expression is processed, even when that expression is not consciously perceived.11 This might explain the specific role of basic perceptual information in emotional processing.26,27
To determine whether this effect is due to emotional contagion from expressers to perceivers, further studies that manipulate or measure facial responses in perceivers are needed. For instance, as has been done experimentally in recent work,19,21 blocking facial reactions in perceivers should decrease the emotional modulation of attention.2 Moreover, electromyographic measures of facial muscles such as the levator labii (for disgusted expressions) or the medial frontalis (for fearful expressions) could be used to confirm the role of facial simulation19 in the emotional modulation of the attentional blink. An implication of grounded cognition in this process would suggest that embodied emotion not only serves as the basis (i.e., an off-line cognitive support) for representing and understanding emotions13 but also plays a role in transmitting the internal states required for the survival of the species.
Previously published online: www.landesbioscience.com/journals/cib/article/10922