Faces are not simply blank canvases upon which facial expressions write their emotional messages. In fact, facial appearance and facial movement are both important social signalling systems in their own right. Here we provide multiple lines of evidence for the notion that the social signals derived from facial appearance on the one hand and facial movement on the other interact in a complex manner, sometimes reinforcing and sometimes contradicting one another. Faces provide information on who a person is. Sex, age, ethnicity, personality and other characteristics that can define a person and the social group the person belongs to can all be derived from the face alone. The present article argues that faces interact with the perception of emotion expressions because this information informs a decoder's expectations regarding an expresser's probable emotional reactions. Facial appearance also interacts more directly with the interpretation of facial movement because some of the features that are used to derive personality or sex information closely resemble certain emotional expressions, thereby enhancing or diluting the perceived strength of particular expressions.
Facial expressions of emotion have long been of interest to philosophers and psychologists. Darwin's (1872/1965) seminal work On the Expression of the Emotions in Man and Animals was a first attempt to systematically understand emotion expression and its meaning. In this book, he proposed a number of explanations as to why certain facial and bodily behaviours communicate certain emotions. In doing so, Darwin had no doubt that the expressive behaviour that he described was part of an underlying emotional state, that is, that emotion expressions have communicative value specifically because they are outward manifestations of an inner state. Darwin assumed clear parallels and antecedents to human emotions in the emotions of animals and our humanoid ancestors, and consequently considered emotion expressions to be universal across human cultures.
Yet, right from the beginning, Darwin's view of emotion expressions as the visible part of an underlying emotional state was disputed and rejected by those who considered facial expressions as learned social signals that varied across cultures. Indeed, research in the first half of the twentieth century produced inconsistent results regarding whether even members of the same culture could accurately interpret the facial expressions of their social group. This disparity in findings led Bruner and Tagiuri in their 1954 Handbook of social psychology review article to state that ‘ … the evidence for the recognizability of emotional expressions is unclear’ (p. 634). Research by Ekman and colleagues (e.g. Ekman et al. 1969, 1987; Ekman 1972) as well as Izard (e.g. Izard 1971) caused the pendulum to swing back towards Darwin's view by establishing that certain ‘basic’ emotion expressions are indeed universally recognized.
Yet, even if certain expressions are recognized as signalling certain emotions, this does not necessarily mean that the expressions are in fact the output of an underlying affective state. In this vein, Fridlund (1994) emphasized that for emotion expressions to be truly useful as a communicative signal they should be linked to the organism's social motives rather than to quasi-reflexive emotions, and concluded that emotion expressions should be considered as unrelated to an underlying emotional state. A series of studies published since Fridlund made this argument have shown that emotional facial expressions function both as symptoms of an underlying state and as communicative signals relevant to social motives (e.g. Hess et al. 1995; Jakobs et al. 1999a,b, 2001; Manstead et al. 1999).
The observation of—not always insubstantial—differences in emotion recognition rates between cultures led more recently to the notion of cultural dialects (Elfenbein & Ambady 2003; Elfenbein et al. 2007). Specifically, it has been argued that there is a universal language of emotion but with local dialects that differ subtly from each other. These dialect differences are small enough to allow high levels of recognizability across cultures, yet large enough so that judgements are faster and more accurate for perceivers who are familiar with these subtle variations.
In summary, to date there is little doubt that emotional facial expressions can be recognized across cultures. Also, observers consider them to be an honest signal of an underlying emotion (Fiske 2004). Yet, it is important to note that the stimuli used in research on the recognition of facial expressions of emotion generally have two features that set them apart from the expressions we see in the faces of the people we interact with on a daily basis. First, participants see the same expression on several faces and their success in recognition is averaged across these exemplars; and second, the expressions that are used are intense to a degree rarely encountered in everyday life.
As regards the first issue, Wiggers (1982) noted that the same combination of facial actions on two different models yields different recognition rates. These observations led to the methodological advice to use more than one model in emotion expression research but no effort was made to understand the source of these across-model differences.
As regards the second issue, full-blown facial expressions are by far the exception in everyday interactions and seldom take the canonical form employed as stimuli by emotion researchers (e.g. Ekman et al. 1969). Indeed, facial expressions can be partial, or they can be blends that convey different emotions at the same time (Ekman & O'Sullivan 1991). Hence, the facial expressions we commonly encounter are weak, elusive or blended, resulting in a signal that often is ambiguous and requires substantial interpretative work. Thus, the majority of the research referred to above neglects two pertinent elements of actual emotion recognition in interactions. First, the impact of the morphology of the face on how expressions appearing on any one face will be interpreted has not been explored. Second, the challenge posed by the more ambiguous expressions that most people show in most situations has not been adequately addressed. These will be discussed in more detail below.
There are two principal strategies for the decoding of emotion displays (Kirouac & Hess 1999). First, pattern matching can be used to draw inferences regarding an expresser's presumed emotional state using a strategy where specific features of the expression are associated with specific emotions (Buck 1984). Thus, upturned corners of the mouth or lowered brows are recognized as smiles or frowns and a perceiver can thus conclude that the individual is happy or angry, respectively. This approach breaks down when the features are either too weak to be classified or lead to contradictory conclusions—such as would be the case when a person both smiles and frowns at the same time. Pattern matching is the principal strategy tested in the studies referred to above, where participants typically rate the emotions from intense expressions shown by several unknown others.
The second decoding strategy depends upon the knowledge that the perceiver possesses regarding the sender and/or the social situation in which the interaction is taking place. This information permits the perceiver to take the perspective of the encoder and helps him or her to correctly infer the emotional state that the sender is most likely experiencing. Thus, learning that someone's car was vandalized would lead to the expectation that the person is probably angry. This in turn would influence expectations regarding the likely expression shown—depending also on knowledge about the temperament of the person. If the sender and the receiver know each other well, the receiver usually is aware of the sender's personality, beliefs, preferences and emotional style. This knowledge then permits the receiver to take the perspective of the sender and to deduce how the sender would most likely react in the given situation. Thus, we may expect more intense anger from a choleric person than from an easy-going person. But what happens when we do not know the other person well or at all?
Studies in which people are asked to judge the likely personality of complete strangers show that people can and do draw conclusions about a person's personality from no more information than what is provided by the face (e.g. Ambady et al. 1995). Yet more importantly, the face tells us a great deal about the social categories into which our interaction partners fit. That is, faces tell us the sex, age and race of the other person and this knowledge can be used by observers to predict the likely emotional reactions of the sender. Thus, for example, when imagining that an individual's car had been vandalized, participants predicted that a man would show anger but a woman sadness (Hess et al. 2005).
A further example of the impact on the decoding of such generalized beliefs based on social group membership inferred from facial appearance is provided by Matsumoto et al. (1999), who studied judgements of emotion expressions by American and Japanese participants. Americans are usually encouraged to show emotions, especially positive emotions, and tend to show an emotion more intensely than is warranted by the underlying feeling state. This is not the case in Japan. Consequently, Americans attribute less intense underlying emotional arousal to expressions of the same objective intensity than do the Japanese; that is, they ‘correct’ their estimate of a person's feeling state based on the belief that people are likely to exaggerate their expressions.
The social group context can impact not only on the knowledge used to interpret an expression but also on the specific nature of the face on which it is displayed. Thus, facial morphological differences between men and women or members of different racial groups may enhance or obscure some expressive elements and hence bias pattern matching. The facial morphology of women and younger individuals, for example, appears to enhance the cues associated with happiness, whereas that of men and older individuals enhances the cues associated with anger (Becker et al. 2007; Hess et al. 2009b). Further, certain facial configurations make neutral faces appear to show emotions. Thus, a shorter distance between the eyes and mouth, more typical for male faces, leads to the perception of an angrier face (Neth & Martinez 2009).
In summary, there are at least two important reasons as to why the same facial expressions shown by two individuals may not be interpreted in the same way. First, the beliefs that we have about the individuals may lead us to different conclusions regarding their likely underlying emotional state and second, facial features and facial expressions may interact such that pattern-matching errors are made.
As regards the latter, Darwin (1872/1965) first suggested that some emotion expressions may actually imitate morphological features. For example, he noted that piloerection and the utterance of harsh sounds by ‘angry’ animals are ‘voluntarily’ enacted to make the animal appear larger and hence a more threatening adversary (see pp. 95 and 104). This notion of a perceptual overlap between emotion expressions and certain trait markers, which then influences emotion communication, has been more recently taken up by Zebrowitz (Zebrowitz & Montepare 2006) as well as by Hess et al. (2007a,b, 2008, 2009a). Specifically, Hess et al. (2007a,b) proposed the notion that some aspects of facial expressive behaviour and morphological cues to dominance and affiliation are equivalent in their effects on emotional attributions, the functional equivalence hypothesis. In what follows, we will explore the impact of beliefs on the one hand and the impact of facial morphology on the other on perceptions of emotions in men and women.
People hold stereotypical beliefs about the emotions of others based on a number of characteristics, which are readily—albeit not always accurately (Ambady et al. 1995)—discernible from a person's face. In addition to beliefs about ethnicity, as in the Matsumoto et al. (1999) study described above, they also hold beliefs about the emotions of others based on their sex, age and personality.
Thus, for example, women are expected to smile more and in fact also do smile more. By contrast, men are expected to show more anger but do not in fact seem to do so (Fischer 1993; Brody & Hall 2000). These expectations are socialized early and can have dramatic consequences on the perception of emotion in others. Even children as young as 5 years tend to consider a crying baby as ‘mad’ when the baby is purported to be a boy but not when it is purported to be a girl (Haugh et al. 1980). Thus, the ‘knowledge’ that the baby was a boy or a girl biased the perception of the otherwise ambiguous emotion display. In line with these expectations, expressions from a standardized set of expressions—assuring equivalence across genders—were rated as more intense when anger was shown by a male actor and when happiness was shown by a female actor than vice versa (Hess et al. 1997). That is, the stereotypical view that associates anger more strongly with men and happiness more strongly with women biases the decoding of emotional behaviour.
People also have beliefs about age and emotionality. In a recent study, we showed participants photos of individuals from four different age groups (18–29, 30–49, 50–69 and 70+) and asked them to indicate how likely it was that the person shown in the photo would express each of four emotions (happiness, sadness, anger and fear) in everyday life. The responses differed with regard to both sex and age. Thus, as they got older, men were perceived to be less likely to show anger, whereas the reverse was the case for women. Men were also perceived as more likely to show sadness as they got older.
An individual's perceived social dispositions are another source of strong beliefs that can impact the decoding of emotional expressions. Certain relatively static facial features are strongly associated with dominance and affiliation. Specifically, a high forehead, a square jaw and thicker eyebrows have been linked to perceptions of dominance (e.g. Keating et al. 1981a,b; Zebrowitz 1997) whereas a rounded baby-face is both feminine and perceived as more approachable (Berry & Brownlow 1989) and warm (Berry & McArthur 1986), central aspects of an affiliative or nurturing orientation. These behavioural tendencies are also perceived as predictive of an individual's emotionality. Thus, dominant individuals are believed to be more likely to show anger than are submissive ones, whereas affiliative individuals are believed to be more likely to show happiness (Hess et al. 2005).
The beliefs about men's and women's emotionality and beliefs about the emotionality of dominant and affiliative individuals are not independent. Specifically, Hess et al. (2005) showed that some of the stereotypical beliefs about men's and women's emotions can in fact be traced to beliefs about dominant and affiliative individuals. They asked separate groups of participants to rate men's and women's neutral faces, either with regard to how dominant or affiliative they appeared, or with regard to the likelihood that the person in the photo would show various emotions in everyday life. Men's faces were perceived as more dominant in appearance and men were rated as more likely to show anger, disgust and contempt. By contrast, women's faces were rated as more affiliative in appearance and women were expected to be more likely to show happiness, surprise, sadness and fear.
Mediational analyses showed that the tendency to perceive women as more likely to show happiness, surprise, sadness and fear was in fact partially mediated by their higher level of perceived affiliation and lower level of perceived dominance. The tendency to perceive men as more prone to show anger, disgust and contempt was partially mediated by both their higher level of perceived dominance and their lower level of perceived affiliation.
These beliefs are also normative. For example, when presented with a vignette that describes a person who just learned that their car was vandalized, participants rated the person as very likely to be angry—regardless of whether the person was described as a man or a woman (Hess et al. 2005). However, even though a man was then expected to show his anger, a woman was expected to show sadness instead—but not if she was described as highly dominant. In this latter case, showing anger was expected for men and women equally. In a similar vein, men were expected to show less happiness unless they were described as high in affiliation, in which case they were expected to smile even more than women. Thus, the judgement of the appropriateness of showing anger or happiness was heavily dependent on the perceived dominance and affiliation of the protagonist, and not just the product of gender category membership per se.
In summary, some of the beliefs that people hold about men and women—and which influence the decoding of facial expressions shown by men and women—can in fact be traced to differences in facial appearance, specifically, to differences in perceived facial dominance and affiliation. These differences in turn are owing to the morphological variations that characterize the faces of men and women. Moreover, as we will show below, these differences between the facial structures of men and women have an even more direct impact on facial expression perception when it comes to pattern matching.
Specifically, the facial morphology that makes a face appear male or female and in turn dominant and affiliative interacts directly with the movement patterns that characterize specific emotional expressions. Importantly, some of the perceptual cues that mark anger expressions, such as lowered eyebrows and tight lips, mimic features also associated with dominance. On the other hand, high eyebrows and smiling in happiness expressions reinforce affiliative features. The perceptual overlap between facial expressions of anger and dominance on the one hand and facial expressions of happiness and affiliation on the other was recently demonstrated by Hess et al. (2009b). They used a double oddball task, in which participants had to identify neutral expressions of highly dominant- and highly affiliative-appearing individuals embedded in a series of angry, happy or fearful faces. In this type of task, participants are slower to identify faces as neutral if they share perceptual space (face space; Valentine 1991, 2001) with the emotional faces in which they are embedded. That is, the more anger and dominance look alike, the harder it is to identify a dominant neutral face within a series of angry faces. By contrast, the identification of the affiliative neutral faces in that series is comparatively easy. The converse is the case for affiliative faces embedded in a series of happy expressions.
As predicted, participants were slower to identify dominant faces embedded in angry expressions and affiliative faces embedded in happy expressions than vice versa. This finding supports the notion that the perceptual markers for anger and dominance, as well as happiness and affiliation, have some appearance characteristics in common. This implies that for all intents and purposes, a highly dominant face looks angry even when no actual facial movement is present. By contrast, highly affiliative neutral faces look happy. Put another way, the facial configurations that create impressions of dominance and affiliation are the same ones that make a face appear to show anger and happiness, in line with the findings of Neth & Martinez (2009). These perceptual similarities between dominance/anger and affiliation/happiness can then be expected to bias the perception of these emotions, especially when facial expressions are weak and ambiguous.
The perceptual overlap between dominant facial markers and expressive markers of anger on the one hand, and affiliative facial markers and expressive markers of happiness on the other, also implies that male and female faces will be reacted to differently. As mentioned above, men's faces are perceived as more dominant and women's as more affiliative. In fact, the high forehead, square jaw and thicker eyebrows that have been linked to perceptions of dominance (e.g. Keating et al. 1981a,b; Zebrowitz 1997) are also more typical for men's faces (Brown & Perrett 1993; Burton et al. 1993). On the other hand, a rounded baby-face with large eyes is more feminine (Brown & Perrett 1993; Burton et al. 1993), perceived as more approachable and warm (Berry & Brownlow 1989), and is more typical for women's faces.
Further, anger expressions signal dominance on the part of the expresser, whereas happiness expressions signal affiliation (Knutson 1996; Hess et al. 2000). In turn, perceptions of the dominance and affiliation tendencies of others are relevant to the approach/avoidance dimension. Specifically, in hierarchical primate societies such as ours, highly dominant alpha individuals pose a certain threat insofar as they can claim territory or possessions (e.g. food) from lower-status group members (Menzel 1973, 1974). Hence the presence of a perceived dominant other should lead to increased vigilance and preparedness for withdrawal (Coussi-Korbel 1994). By contrast, affiliation is related to nurturing behaviours and should lead to approach when the other is perceived to be high on this behavioural disposition.
Because anger, dominance and male sex markers on the one hand and happiness, affiliation and female sex markers on the other overlap perceptually in the face space and are functionally equivalent, anger shown by women and happiness shown by men can be expected to elicit different reactions from observers. Specifically, when anger is shown on a highly dominant face, the threat signal of the expression and the threat signal derived from facial morphology are congruent and reinforce each other. By contrast, when anger is expressed on a highly affiliative face, the two signals contradict each other and hence weaken the overall threat message. The converse is true for happiness expressions (Hess et al. 2007a,b). Following this line of argument, the female anger expression can be viewed as a combination of an appetitive face with a threatening expression. Male anger, on the other hand, represents a less ambiguous example of a threat stimulus. Conversely, female happiness is a clearer appetitive stimulus than male happiness.
Hess et al. (2007a,b) tested this hypothesis using a startle-reflex methodology. They assessed both the eye-blink reflex in response to an acoustic startle probe and the postauricular reflex in response to the same sound. The eye-blink reflex to a sudden acoustic probe is modulated by emotional state (e.g. Vrana et al. 1988; Lang et al. 1990; Lang 1995), such that when an individual is exposed to a threatening or withdrawal-inducing stimulus, the reflex is potentiated. Conversely, a pattern opposite to that of the eye-blink reflex is found for the postauricular reflex, the muscle response that serves to pull the ears back and up (Berzin & Fortinguerra 1993). Specifically, individuals show an augmented postauricular reaction to an acoustic startle probe when exposed to appetitive stimuli (Benning et al. 2004).
Congruent with the notion that dominance and affiliation signals from the face and facial expressions of anger and happiness interact perceptually, eye-blink startle was potentiated for male anger faces compared with neutral and happy faces, as well as compared with female anger faces. By contrast, the postauricular reflex was potentiated for female happiness faces and attenuated for male anger faces, compared with neutral faces as well as with male happiness faces. Thus, anger, even though this expression signals threat (e.g. Aronoff et al. 1988), potentiated eye-blink startle only when shown by a man, that is, shown on a face suggesting social dominance. Conversely, the postauricular reflex was potentiated preferentially for female happiness expressions.
In brief, facial features and facial expressions interact when it comes to the perception of emotion expressions. The studies reported above focused on male and female faces because these represent a natural category differing in facial dominance and affiliation. But obviously individuals within each sex differ on these dimensions and hence we would expect, for example, anger to be more threatening when shown on a highly dominant female face and conversely male anger to be less so when shown on a highly affiliative face. It is important to note that not only do men and women differ with regard to these dimensions, but other groups do so as well. Thus, for example, age changes faces such that men's faces are perceived as increasingly dominant until a very old age, when they appear as more affiliative. Women's faces also increase in apparent dominance as they age. The impact of these age-related changes in facial appearance on emotional attributions is currently being investigated in our laboratories.
In summary, both beliefs and facial morphology have an impact on the perception of the facial movement involved in emotional expressions. In everyday life, these two sources of influence will often be confounded. Thus, as we have seen, male faces appear as generally more dominant, masculine and mature than female faces and hence perceptually overlap with anger expressions. Conversely, social roles are such that women are expected to feel less anger and more fear and happiness. Yet, whereas beliefs based on social roles rest on relatively malleable factors such as the distribution of power and of nurturing versus agentic roles between the genders, facial appearance-based effects are owing to the relative distribution of facial appearance cues associated with perceived dominance and affiliation across genders. That is, these two factors, albeit confounded in our reality, actually represent conceptually different explanatory factors.
For all intents and purposes, it is impossible to disentangle the unique contribution of gender differences in power, status, social roles and facial appearance with regard to perceived emotionality in our society. In Western countries, men tend to occupy powerful social positions in politics and business, and in many countries of the world, they occupy these positions exclusively. Women not only exclusively bear children but are also overwhelmingly responsible for their upbringing, thereby assigning themselves a nurturing role.
However, it is not uncommon in science fiction to question gender roles and to imagine worlds where such roles are different from ours. This may include the addition of genders other than male and female or the redistribution of child rearing tasks (e.g. Cogenitors, in Star Trek Enterprise episode no. 48). Hess et al. (submitted) therefore created a science-fiction scenario in which a planet (Deluvia) is inhabited by members of a race that has three genders: male, female and carer. They manipulated social roles by describing the male and female as exactly equal in social dominance, whereas a submissive and nurturing role was assigned to the carer who was described as being entirely responsible for the bearing and upbringing of the young. Facial appearances of the members of each gender were manipulated to be high, medium or low in facial cues to dominance. Participants read a description of Deluvia and its inhabitants and then rated the likelihood that a Deluvian would experience various emotions. The results showed that social roles and facial appearances had varying but comparable impact on these perceptions. Sex per se, however, did not influence the ratings significantly. These findings suggest that the beliefs we have about men's and women's emotionality are indeed a composite of the emotions that are associated with nurturing versus agentic social roles on the one hand, and the conclusions we draw about a person's emotional behaviour based on the social signals that facial morphology transmits on the other.
The line of research presented here shows that both the face and facial expressions of emotion have social signal value and that these signals interact in complex ways. Importantly, this means that when we perceive and react to the emotional facial expressions of others, it really matters who shows what and in which context. The appearance of the sender, what we know—or think we know—about them and the situation, and the expressive movements themselves, all contribute to this process. In the present context, we have focused on expressive movements captured in stills, that is, the departure from the neutral expression. However, the dynamic aspects of facial movement, such as the speed of onset and offset or accompanying head movements, also add to the picture (Boker et al. in press).
The present article has focused on the behavioural tendencies of dominance and affiliation and their relation to sex on the one hand and the facial expressions of anger and happiness on the other. However, dominance and affiliation are not the only personality characteristics that can be deduced from the face and that could interact with our interpretations of emotion expressions. Thus, for example, Becker et al. (2007) found that perceived masculinity influences the decoding of anger expressions; also, facial maturity interacts with perceptions of fear (Marsh et al. 2005; Sacco & Hugenberg 2009). Specifically, Becker et al. (2007) proposed that anger has evolved to mimic masculinity, whereas happiness has evolved to mimic femininity. They showed that faces varying in masculinity cues (manipulated via the brow ridge) were rated as angrier to the degree that they were perceived as more masculine. That is, they found some physical overlap between masculine morphology and an angry appearance. Further, Marsh et al. (2005) found that fear shares perceptual overlap with baby-facedness, whereas anger shares overlap with a mature appearance, a finding more recently replicated by Sacco & Hugenberg (2009).
In a related vein, anger and happiness have also been associated with another evolutionarily important behavioural intention, trustworthiness (Todorov 2008). Thus, trustworthy faces that expressed happiness were perceived as happier than untrustworthy faces, and untrustworthy faces that expressed anger were perceived as angrier than trustworthy faces (Oosterhof & Todorov 2009).
This slowly increasing list of evolutionarily important behavioural dispositions that people infer from facial traits and that interact directly with the perceptions of facial emotion expressions underlines the importance of emotions for social signalling. Specifically, the interpretation of facial features as signalling behavioural intentions is an extension of the signal value of facial expressions (Zebrowitz et al. 2003; Hess et al. 2007a,b, 2008; Todorov 2008). What this means in the context of decoding facial expressions is that the face is not a blank canvas but is like a musical instrument that imbues the expression with its own timbre.
The importance of facial morphology for the interpretation of and reactions to emotional facial expressions has relevance for the computational modelling of human faces. Specifically, humans tend to treat computers in these interactions largely as they would treat humans (Reeves & Nass 1996). In this framework, computer agents and robots have been designed that not only can interpret human emotions but also can signal emotions via facial expressions (e.g. Koda & Maes 1996; Breazeal 2003; Pelachaud & Bilvi 2003). However, the research presented above suggests that human beings do not restrict themselves to facial expressive information when judging the emotions of others. Rather they use all the available information provided by the face, and these additional sources of information can reinforce or obscure the emotional message transmitted by the facial expressions. It can be expected that this applies also to the avatars and agents used in human computer interfaces. Thus, an agent with a very square masculine face may not be a good choice if warmth and care are to be transmitted.
Preparation of this manuscript was supported by grant LX0990031 from the Australian Research Council to U.H., as well as a grant from the Social Science and Humanities Research Council to U.H. and a National Science Foundation grant no. 0544533 to R.E.K., U.H. and R.B.A.
One contribution of 17 to a Discussion Meeting Issue ‘Computation of emotions in man and machines’.