This study examined whether dysphoria influences the identification of non-ambiguous and ambiguous facial expressions of emotion. Dysphoric and non-dysphoric college students viewed a series of human faces expressing sadness, happiness, anger, and fear that were morphed with each other to varying degrees. Dysphoric and non-dysphoric individuals identified prototypical emotional expressions similarly. However, when viewing ambiguous faces, dysphoric individuals were more likely than non-dysphoric individuals to identify sadness in expressions that mixed sadness with happiness. A similar but less robust pattern was observed for facial expressions that combined fear and happiness. No group differences in emotion identification were observed for faces that combined sadness and anger or fear and anger. Dysphoria appears to enhance the identification of negative emotion in others when positive emotion is also present. This tendency may contribute to some of the interpersonal difficulties often experienced by dysphoric individuals.
interpretation bias; depression; cognitive models; information processing
High levels of trait hostility are associated with wide-ranging interpersonal deficits and heightened physiological responses to social stressors. These deficits may be attributable in part to individual differences in the perception of social cues. The present study evaluated facial emotion recognition among 48 high-hostile (HH) and 48 low-hostile (LH) smokers and tested whether experimentally manipulated acute nicotine deprivation moderated relations between hostility and facial emotion recognition. A computer program presented a series of pictures of faces that morphed from a neutral expression into increasing intensities of happiness, sadness, fear, or anger, and participants were asked to identify the emotion displayed as quickly as possible. Results indicated that HH smokers, relative to LH smokers, required a significantly greater intensity of emotion expression to recognize happiness. No differences were found for other emotions across HH and LH individuals, nor did nicotine deprivation moderate relations between hostility and emotion recognition. This is the first study to show that HH individuals are slower to recognize happy facial expressions and that this occurs regardless of recent tobacco abstinence. Difficulty recognizing happiness in others may limit the degree to which HH individuals are able to identify social approach signals and receive social reinforcement.
hostility; facial emotion recognition; smoking; nicotine
Aim: This study investigated brain areas involved in the perception of dynamic facial expressions of emotion.
Methods: Thirty healthy subjects underwent fMRI while passively viewing prototypical facial expressions of fear, disgust, sadness and happiness. Morphing techniques were used to display each face both as a still image and dynamically, as a film clip in which the expression evolves from neutral to emotional.
Results: Irrespective of the specific emotion, dynamic stimuli selectively activated the bilateral superior temporal sulcus, visual area V5, the fusiform gyrus, and the thalamus, as well as frontal and parietal areas. Interaction effects of emotion and mode of presentation (static/dynamic) were found only for the expression of happiness, where static faces evoked greater activity in the medial prefrontal cortex.
Conclusions: Our results confirm previous findings on the neural correlates of perceiving dynamic facial expressions and are in line with studies showing the importance of the superior temporal sulcus and V5 in the perception of biological motion. The differential activation of the fusiform gyrus for dynamic stimuli stands in contrast to classical models of face perception but is consistent with newer findings arguing for a more general role of the fusiform gyrus in the processing of socially relevant stimuli.
facially expressed emotions; fMRI; dynamic facial expressions; biological motion; superior temporal sulcus
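The neutral-to-emotional morph clips described in the Methods above can be approximated, in the simplest case, by a per-frame linear cross-dissolve between two aligned photographs. The sketch below illustrates that idea under stated assumptions: the file paths are hypothetical, the images are taken to be pre-aligned and equally sized, and published morphing pipelines additionally warp corresponding facial landmarks, which a plain blend omits.

```python
import numpy as np
from PIL import Image

def morph_sequence(neutral_path, emotional_path, n_frames=20):
    """Linear cross-dissolve from a neutral to an emotional face.

    Assumes the two grayscale images are pre-aligned and equally sized;
    real morphing pipelines also warp facial landmarks between frames.
    """
    neutral = np.asarray(Image.open(neutral_path).convert("L"), dtype=float)
    emotional = np.asarray(Image.open(emotional_path).convert("L"), dtype=float)
    frames = []
    for i in range(n_frames):
        alpha = i / (n_frames - 1)               # 0.0 = neutral, 1.0 = emotional
        blend = (1.0 - alpha) * neutral + alpha * emotional
        frames.append(Image.fromarray(blend.astype(np.uint8)))
    return frames

# The frames can be shown one by one as a short clip, or written out, e.g.:
# frames = morph_sequence("neutral.png", "happy.png")
# frames[0].save("morph.gif", save_all=True, append_images=frames[1:], duration=40)
```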
Background: It has been suggested that depressed patients have a "negative bias" in recognising other people's emotions; however, the detailed structure of this negative bias is not fully understood.
Objectives: To examine the ability of depressed patients to recognise emotion, using moving facial and prosodic expressions of emotion.
Methods: Sixteen depressed patients and 20 matched (non-depressed) controls selected the basic emotion (happiness, sadness, anger, fear, surprise, or disgust) that best described the emotional state conveyed by moving facial and prosodic stimuli.
Results: There was no significant difference between depressed patients and controls in their recognition of facial expressions of emotion. However, the depressed patients were impaired relative to controls in recognizing surprise from emotional prosody, judging it to be more negative.
Conclusions: We suggest that depressed patients tend to interpret neutral emotions, such as surprise, as negative. Considering that the deficit was seen only for prosodic emotive stimuli, it would appear that stimulus clarity influences the recognition of emotion. These findings provide valuable information on how depressed patients behave in complicated emotional and social situations.
In a previous study, we demonstrated that amygdala reactivity to masked negative facial emotions predicts negative judgmental bias in healthy subjects. In the present study, we extended the paradigm to a sample of 35 inpatients suffering from depression to investigate the effect of amygdala reactivity on automatic negative judgmental bias and clinical characteristics in depression.
Amygdala activity was recorded in response to masked displays of angry, sad and happy facial expressions by means of functional magnetic resonance imaging at 3 T. In a subsequent experiment, the patients performed an affective priming task that characterizes automatic emotion processing by investigating the biasing effect of subliminally presented emotional faces on evaluative ratings to subsequently presented neutral stimuli.
Significant associations between (right) amygdala reactivity and automatic negative judgmental bias were replicated in our patient sample (r = –0.59, p < 0.001). Furthermore, negatively biased evaluative processing was associated with greater severity and a longer course of illness (r = –0.57, p = 0.001).
Amygdala hyperactivity is a neural substrate of negatively biased automatic emotion processing that could be a determinant for a more severe disease course.
depressive disorder, major; magnetic resonance imaging, functional; amygdala; emotion
In daily life, we perceive a person's facial reaction as part of the natural environment surrounding it. Because most studies have investigated the recognition of facial expressions using isolated faces, it is unclear what role context plays. Although the N170 to facial expressions has been observed to be modulated by the emotional context, it was not clear whether individuals use context information at this stage of processing to discriminate between facial expressions. The aim of the present study was to investigate how the early stages of face processing are affected by emotional scenes when explicit categorizations of fearful and happy facial expressions are made. Emotion effects were found for the N170, with larger amplitudes for faces in fearful scenes than for faces in happy and neutral scenes. Critically, N170 amplitudes were significantly increased for fearful faces in fearful scenes compared to fearful faces in happy scenes, an effect expressed as topography differences over left occipito-temporal scalp sites. Our results show that the information provided by the facial expression is combined with the scene context during the early stages of face processing.
context; face perception; emotion; event-related potentials; P1; N170
To examine the ontogeny of emotional face processing, event-related potentials (ERPs) were recorded from adults and 7-month-old infants while viewing pictures of fearful, happy, and neutral faces. Face-sensitive ERPs at occipital-temporal scalp regions differentiated between fearful and neutral/happy faces in both adults (N170 was larger for fear) and infants (P400 was larger for fear). Behavioral measures showed no overt attentional bias toward fearful faces in adults, but in infants, the duration of the first fixation was longer for fearful than happy faces. Together, these results suggest that the neural systems underlying the differential processing of fearful and happy/neutral faces are functional early in life, and that affective factors may play an important role in modulating infants’ face processing.
Brain Development; Electrophysiology; Emotion; Face Perception
Most of our social interactions involve perception of emotional information from the faces of other people. Furthermore, such emotional processes are thought to be aberrant in a range of clinical disorders, including psychosis and depression. However, the exact neurofunctional maps underlying emotional facial processing are not well defined.
Two independent researchers conducted separate comprehensive PubMed (1990 to May 2008) searches to find all functional magnetic resonance imaging (fMRI) studies using a variant of the emotional faces paradigm in healthy participants. The search terms were: “fMRI AND happy faces,” “fMRI AND sad faces,” “fMRI AND fearful faces,” “fMRI AND angry faces,” “fMRI AND disgusted faces” and “fMRI AND neutral faces.” We extracted spatial coordinates and inserted them in an electronic database. We performed activation likelihood estimation analysis for voxel-based meta-analyses.
Of the originally identified studies, 105 met our inclusion criteria. The overall database consisted of 1785 brain coordinates that yielded an overall sample of 1600 healthy participants. Quantitative voxel-based meta-analysis of brain activation provided neurofunctional maps for 1) main effect of human faces; 2) main effect of emotional valence; and 3) modulatory effect of age, sex, explicit versus implicit processing and magnetic field strength. Processing of emotional faces was associated with increased activation in a number of visual, limbic, temporoparietal and prefrontal areas; the putamen; and the cerebellum. Happy, fearful and sad faces specifically activated the amygdala, whereas angry or disgusted faces had no effect on this brain region. Furthermore, amygdala sensitivity was greater for fearful than for happy or sad faces. Insular activation was selectively reported during processing of disgusted and angry faces. However, insular sensitivity was greater for disgusted than for angry faces. Conversely, neural response in the visual cortex and cerebellum was observable across all emotional conditions.
Although the activation likelihood estimation approach is currently one of the most powerful and reliable meta-analytical methods in neuroimaging research, it is insensitive to effect sizes.
Our study has detailed neurofunctional maps to use as normative references in future fMRI studies of emotional facial processing in psychiatric populations. We found selective differences between neural networks underlying the basic emotions in limbic and insular brain regions.
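The activation likelihood estimation (ALE) step in the meta-analysis above has a simple computational core: each reported focus is modeled as a three-dimensional Gaussian probability blob, the blobs are combined within a study as a voxelwise probabilistic union, and the per-study maps are combined across studies the same way. The sketch below is a toy version under simplifying assumptions (a small unit-spaced grid, a fixed isotropic kernel, no sample-size weighting, and no permutation-based thresholding, all of which real implementations such as GingerALE handle).

```python
import numpy as np

def ale_map(experiments, shape=(30, 30, 30), sigma=2.0):
    """Toy activation likelihood estimation over a small voxel grid.

    `experiments` is a list of (n_foci, 3) arrays of voxel coordinates,
    one array per study. The grid size, kernel width, and absence of
    sample-size weighting are simplifications of the published method.
    """
    grid = np.indices(shape).reshape(3, -1).T.astype(float)  # all voxel coords
    ale = np.zeros(grid.shape[0])
    for foci in experiments:
        ma = np.zeros(grid.shape[0])                         # modeled activation
        for focus in np.asarray(foci, dtype=float):
            d2 = ((grid - focus) ** 2).sum(axis=1)
            p = np.exp(-d2 / (2.0 * sigma ** 2))             # Gaussian blob
            ma = 1.0 - (1.0 - ma) * (1.0 - p)                # union over foci
        ale = 1.0 - (1.0 - ale) * (1.0 - ma)                 # union over studies
    return ale.reshape(shape)

# Two toy studies that both report a focus near (15, 15, 15) give that
# neighborhood the highest ALE values:
# ale = ale_map([np.array([[15, 15, 15]]), np.array([[15, 15, 15], [5, 5, 5]])])
```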
We investigated whether categorical perception and dimensional perception can co-occur while decoding emotional facial expressions. In Experiment 1, facial continua with endpoints consisting of four basic emotions (i.e., happiness–fear and anger–disgust) were created by a morphing technique. Participants rated each facial stimulus using a categorical strategy and a dimensional strategy. The results show that the happiness–fear continuum was divided into two clusters based on valence, even when using the dimensional strategy. Moreover, the faces were arrayed in order of the physical changes within each cluster. In Experiment 2, we found a category boundary within other continua (i.e., surprise–sadness and excitement–disgust) with regard to the arousal and valence dimensions. These findings indicate that categorical perception and dimensional perception co-occurred when emotional facial expressions were rated using a dimensional strategy, suggesting a hybrid theory of categorical and dimensional accounts.
Hybrid theory of emotion; Dimension; Category; Facial expressions
There is some evidence that faces with a happy expression are recognized better than faces with other expressions. However, little is known about whether this happy-face advantage also applies to perceptual face matching, and whether similar differences exist among other expressions. Using a sequential matching paradigm, we systematically compared the effects of seven basic facial expressions on identity recognition. Identity matching was fastest when a pair of faces had an identical happy/sad/neutral expression, slower when they had a fearful/surprise/angry expression, and slowest when they had a disgust expression. Faces with a happy/sad/fear/surprise expression were matched faster than those with an anger/disgust expression when the second face in a pair had a neutral expression. These results demonstrate that effects of facial expression on identity recognition are not limited to happy faces when a learned face is immediately tested. The results suggest different influences of expression in perceptual matching and long-term recognition memory.
facial expression; identity recognition; face matching
Since the introduction of the associative network theory, mood-congruent biases in emotional information processing have been established in individuals in a sad or happy mood. Research has concentrated on memory and attentional biases. According to the network theory, mood-congruent behavioral tendencies would also be predicted; alternatively, a general avoidance pattern would also be in line with the theory. Because cognitive biases are assumed to operate most strongly for social stimuli, mood-induced biases in approach and avoidance behavior towards emotional facial expressions were studied. A total of 306 females watched a highly emotional fragment of a sad or a happy movie to induce a sad or a happy mood, respectively. An Approach-Avoidance Task was implemented, in which single pictures of faces (with an angry, sad, happy, or neutral expression) and non-social control pictures were presented. In contrast to our expectations, mood states did not produce differential behavioral biases. Mood-congruent and mood-incongruent behavioral tendencies were, however, present in a subgroup of participants with the highest depressive symptomatology scores. This suggests that behavioral approach-avoidance biases are not sensitive to mood state, but are more closely related to depressive characteristics.
Mood; Approach; Avoidance; Depression; Facial expressions
Children with narrow phenotype bipolar disorder (NP-BD; i.e., a history of at least one hypomanic or manic episode with euphoric mood) are deficient when labeling face emotions. It is unknown whether this deficit is specific to particular emotions, or whether it extends to children with severe mood dysregulation (SMD; i.e., chronic irritability and hyperarousal without episodes of mania). Thirty-nine NP-BD, 31 SMD, and 36 control subjects completed the emotional expression multimorph task, which presents gradations of facial emotions from 100% neutrality to 100% emotional expression (happiness, surprise, fear, sadness, anger, and disgust). Groups were compared in terms of the intensity of emotion required before identification occurred and in terms of accuracy. Both NP-BD and SMD youth required significantly more morphs than controls to correctly label disgusted, surprised, fearful, and happy faces. Impaired face labeling correlated with deficient social reciprocity skills in NP-BD youth and with dysfunctional family relationships in SMD youth. Compared to controls, patients with NP-BD or SMD require significantly more intense facial emotion before they are able to label the emotion correctly. These deficits are associated with psychosocial impairments. Understanding the neural circuitry associated with face-labeling deficits has the potential to clarify the pathophysiology of these disorders.
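For intensity-threshold tasks such as the multimorph above (and the morphed-intensity task in the hostility study earlier in this section), one natural score is the lowest morph level at and beyond which the participant labels the emotion correctly. The sketch below is a hedged illustration of that scoring rule; the trial layout is hypothetical, and published scoring may differ (for example, by requiring a run of consecutive correct frames).

```python
import numpy as np

def identification_threshold(morph_levels, correct):
    """Lowest morph intensity from which labeling stays correct.

    `morph_levels` is an ascending sequence of emotion intensities (e.g.,
    percent morph away from neutral); `correct` flags whether the label
    given at each level was correct. Returns NaN if the final level was
    still mislabeled.
    """
    morph_levels = np.asarray(morph_levels)
    correct = np.asarray(correct, dtype=bool)
    wrong = np.flatnonzero(~correct)
    if wrong.size == 0:
        return morph_levels[0]            # correct from the first frame on
    if wrong[-1] == len(correct) - 1:
        return np.nan                     # never settled on the right label
    return morph_levels[wrong[-1] + 1]    # first level after the last error

# Correct from 60% onward, despite one lucky guess at 40%:
levels = np.arange(10, 101, 10)
responses = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print(identification_threshold(levels, responses))   # -> 60
```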
Schizophrenia patients demonstrate impaired emotional processing that may be due, in part, to impaired facial emotion recognition. This study examined event-related potential (ERP) responses to emotional faces in schizophrenia patients and controls to determine when, in the temporal processing stream, patient abnormalities occur.
Sixteen patients and 16 healthy control participants performed a facial emotion recognition task. Very sad, somewhat sad, neutral, somewhat happy, and very happy faces were each presented for 100 ms. Subjects indicated whether each face was “Happy”, “Neutral”, or “Sad”. Evoked potential data were obtained using a 32-channel EEG system.
Controls performed better than patients in recognizing facial emotions. In patients, better recognition of happy faces correlated with less severe negative symptoms. Four ERP components corresponding to the P100, N170, N250, and P300 were identified. Group differences were noted for the N170 “face processing” component that underlies the structural encoding of facial features, but not for the subsequent N250 “affect modulation” component. Higher amplitude of the N170 response to sad faces was correlated with less severe delusional symptoms. Although P300 abnormalities were found, the variance of this component was explained by the earlier N170 response.
Patients with schizophrenia demonstrate abnormalities in early visual encoding of facial features that precedes the ERP response typically associated with facial affect recognition. This suggests that affect recognition deficits, at least for happy and sad discrimination, are secondary to faulty structural encoding of faces. The association of abnormal face encoding with delusions may denote the physiological basis for clinical misidentification syndromes.
Schizophrenia; Emotion; Face Recognition; Event-Related Potentials; N170
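ERP components such as the N170 reported above are typically scored from trial-averaged waveforms within a component-specific latency window. As a hedged illustration, the sketch below scores an N170 from a hypothetical single-channel epoch array as the most negative deflection between 130 and 200 ms; this is one common convention, not necessarily the exact procedure used in the study.

```python
import numpy as np

def n170_peak(epochs, times, window=(0.130, 0.200)):
    """Peak N170 amplitude and latency from one occipito-temporal channel.

    `epochs` is a hypothetical (n_trials, n_times) array of baseline-
    corrected EEG epochs; `times` holds the matching latencies in seconds.
    The ERP is the average across trials, and the N170 is scored as the
    most negative point within the search window.
    """
    times = np.asarray(times)
    erp = np.asarray(epochs).mean(axis=0)              # trial-averaged waveform
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmin(erp[mask])                         # most negative deflection
    return erp[mask][idx], times[mask][idx]            # amplitude, latency
```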
Laughter is an auditory stimulus that powerfully conveys positive emotion. We investigated how laughter influenced visual perception of facial expressions. We simultaneously presented laughter with a happy, neutral, or sad schematic face. The emotional face was briefly presented either alone or among a crowd of neutral faces. We used a matching method to determine how laughter influenced the perceived intensity of happy, neutral, and sad expressions. For a single face, laughter increased the perceived intensity of a happy expression. Surprisingly, for a crowd of faces laughter produced an opposite effect, increasing the perceived intensity of a sad expression in a crowd. A follow-up experiment revealed that this contrast effect may have occurred because laughter made the neutral distracter faces appear slightly happy, thereby making the deviant sad expression stand out in contrast. A control experiment ruled out semantic mediation of the laughter effects. Our demonstration of the strong context dependence of laughter effects on facial expression perception encourages a re-examination of the previously demonstrated effects of prosody, speech content, and mood on face perception, as they may similarly be context dependent.
Crossmodal interaction; emotion; facial expressions; laughter
With the emergence of new technologies, there has been an explosion of basic and clinical research on the affective and cognitive neuroscience of face processing and emotion perception. Adult emotional face stimuli are commonly used in these studies. For developmental research, there is a need for a validated set of child emotional faces. This paper describes the development of the NIMH Child Emotional Faces Picture Set (NIMH-ChEFS), a relatively large stimulus set of high-quality color images of children’s emotional faces. The set includes 482 photos of fearful, angry, happy, sad and neutral child faces with two gaze conditions: direct and averted gaze. In this paper we describe the development of the NIMH-ChEFS and data on the set’s validity based on ratings by 20 healthy adult raters. Agreement between the a priori emotion designations and the raters’ labels was high and comparable with values reported for commonly used adult picture sets. Intensity, representativeness, and composite “goodness” ratings are also presented to guide researchers in their choice of specific stimuli for their studies. These data should give researchers confidence in the NIMH-ChEFS’s validity for use in affective and social neuroscience research.
face processing; emotion perception; face stimuli sets; developmental psychopathology; methodology
We recently showed that, in healthy individuals, emotional expression influences memory for faces both in terms of accuracy and, critically, in terms of memory response bias (the tendency to classify stimuli as previously seen or not, regardless of whether this was the case). Although schizophrenia has been shown to be associated with deficits in episodic memory and emotional processing, the relation between these processes in this population remains unclear. Here, we used our previously validated paradigm to directly investigate the modulation of memory recognition by emotion. Twenty patients with schizophrenia and matched healthy controls completed a functional magnetic resonance imaging (fMRI) study of recognition memory for happy, sad, and neutral faces. Brain activity associated with the response bias was obtained by correlating this measure with the contrast subjective old (i.e., hits and false alarms) minus subjective new (i.e., misses and correct rejections) for sad and happy expressions. Although patients exhibited overall lower memory performance than controls, they showed the same effects of emotion on memory, both in terms of accuracy and bias. For sad faces, the similar behavioral pattern between groups was mirrored by a largely overlapping neural network, mostly involved in familiarity-based judgments (e.g., parahippocampal gyrus). In contrast, controls activated a much larger set of regions for happy faces, including areas thought to underlie recollection-based memory retrieval (e.g., superior frontal gyrus and hippocampus) and novelty detection (e.g., amygdala). This study demonstrates that, despite an overall lower memory accuracy, emotional memory is intact in schizophrenia, although emotion-specific differences in brain activation exist, possibly reflecting different strategies.
amygdala; faces; facial expression; functional neuroimaging; sad; happy; symptomatology; recollection; familiarity
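The response bias discussed in the preceding abstract, a tendency to call faces "old" regardless of prior exposure, corresponds to the signal detection criterion c computed from hit and false-alarm rates. The sketch below is a minimal version using a log-linear correction so that extreme rates of 0 or 1 stay finite; the study may have used a different correction or bias index.

```python
from scipy.stats import norm

def response_bias(hits, misses, false_alarms, correct_rejections):
    """Signal detection criterion c from recognition-memory counts.

    Negative c indicates a liberal bias (a tendency to call items 'old'),
    positive c a conservative bias. The log-linear correction keeps the
    z-transform finite when a rate would otherwise be 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))

# e.g. response_bias(hits=40, misses=10, false_alarms=20, correct_rejections=30)
# returns a negative value, i.e. a liberal ('old'-prone) bias.
```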
The ability to recognize emotions in facial expressions relies on an extensive neural network with the amygdala as the key node, as has typically been demonstrated for the processing of fearful stimuli. A sufficient characterization of the factors influencing and modulating amygdala function, however, has not yet been reached. Results on its involvement in recognizing all, or only certain, negative emotions are lacking or divergent, and the influence of gender and ethnicity is still under debate.
This high-resolution fMRI study addresses the influence of several relevant parameters, namely emotional valence, gender, and poser ethnicity, on amygdala activation during facial emotion recognition in 50 Caucasian subjects. Stimuli were color photographs of emotional Caucasian and African American faces.
Bilateral amygdala activation was obtained for all emotional expressions (anger, disgust, fear, happiness, and sadness) and neutral faces across all subjects. However, a significant correlation between amygdala activation and behavioral response to fearful stimuli was observed only in males, with higher amygdala responses accompanying better fear recognition, thus pointing to subtle gender differences. Poser ethnicity had no significant influence on amygdala activation, but analysis of recognition accuracy revealed a significant, emotion-dependent impact of poser ethnicity.
Applying high-resolution fMRI while subjects performed an explicit emotion recognition task revealed bilateral amygdala activation to all emotions presented and to neutral expressions. This mechanism seems to operate similarly in healthy females and males and for both in-group and out-group ethnicities. Our results support the assumption that an intact amygdala response is fundamental to the processing of these salient stimuli owing to its relevance-detection function.
Unconscious processing of stimuli with emotional content can bias affective judgments. Is this subliminal affective priming merely a transient phenomenon manifested in fleeting perceptual changes, or are long-lasting effects also induced? To address this question, we investigated memory for surprise faces 24 hours after they had been shown with 30-ms fearful, happy, or neutral faces. Surprise faces subliminally primed by happy faces were initially rated as more positive, and were later remembered better, than those primed by fearful or neutral faces. Participants likely to have processed primes supraliminally did not respond differentially as a function of expression. These results converge with findings showing memory advantages with happy expressions, though here the expressions were displayed on the face of a different person, perceived subliminally, and not present at test. We conclude that behavioral biases induced by masked emotional expressions are not ephemeral, but rather can last at least 24 hours.
subliminal priming; memory; emotion; facial expressions; consciousness; awareness; affect
In the aging literature it has been shown that even though emotion recognition performance decreases with age, the decrease is smaller for happiness than for other facial expressions. Studies in younger adults have also revealed that happy faces are more strongly attended to and better recognized than other emotional facial expressions. Thus, there might be a largely age-independent happy face advantage in facial expression recognition. Using a backward masking paradigm and varying stimulus onset asynchronies (17–267 ms), the temporal development of a happy face advantage, on a continuum from low to high levels of visibility, was examined in younger and older adults. Results showed that across age groups, recognition performance for happy faces was better than for neutral and fearful faces at durations longer than 50 ms. Importantly, the results showed a happy face advantage already during early processing of emotional faces in both younger and older adults. This advantage is discussed in terms of the processing of salient perceptual features and elaborative processing of the happy face. We also investigated the combined effect of age and neuroticism on emotional face processing. The rationale was prior evidence of age-related differences in physiological arousal to emotional pictures and of a relation between arousal and neuroticism. Across all durations, there was an interaction between age and neuroticism, showing that being high in neuroticism might be disadvantageous for younger, but not older, adults’ emotion recognition performance during arousal-enhancing tasks. These results indicate a relation between aging, neuroticism, and performance, potentially related to physiological arousal.
emotion; faces; aging; masking; happy; fearful; positivity bias; neuroticism
Computer-generated virtual faces are becoming increasingly realistic, including in their simulation of emotional expressions. Such faces can be used as well-controlled, realistic and dynamic stimuli in emotion research. However, the validity of virtual facial expressions relative to natural emotion displays still needs to be demonstrated for the different emotions and for different age groups.
Thirty-two healthy volunteers between the ages of 20 and 60 rated pictures of natural human faces and faces of virtual characters (avatars) with respect to the expressed emotion: happiness, sadness, anger, fear, disgust, or neutral. Results indicate that virtual emotions were recognized comparably to natural ones. Recognition differences between virtual and natural faces depended on the specific emotion: whereas disgust was difficult to convey with the current avatar technology, virtual sadness and fear achieved better recognition rates than natural faces. Furthermore, emotion recognition rates decreased for virtual but not natural faces in participants over the age of 40. This specific age effect suggests that media exposure influences emotion recognition.
Virtual and natural facial displays of emotion may be equally effective. Improved technology (e.g. better modelling of the naso-labial area) may yield even better results than those obtained with trained actors. Given the ease with which virtual human faces can be animated and manipulated, validated artificial emotional expressions will be of major relevance in future research and therapeutic applications.
Facial expression is widely used to evaluate emotional impairment in neuropsychiatric disorders. Ekman and Friesen’s Facial Action Coding System (FACS) encodes movements of individual facial muscles from distinct momentary changes in facial appearance. Unlike facial expression ratings based on the categorization of expressions into prototypical emotions (happiness, sadness, anger, fear, disgust, etc.), FACS can encode ambiguous and subtle expressions, and is therefore potentially more suitable for analyzing small differences in facial affect. However, FACS rating requires extensive training, is time-consuming, and is subjective and thus prone to bias. To overcome these limitations, we developed an automated FACS based on computer-vision techniques. The system automatically tracks faces in a video, extracts geometric and texture features, and produces temporal profiles of each facial muscle movement. These profiles are quantified to compute frequencies of single and combined Action Units (AUs) in videos, which can facilitate statistical studies of large populations in disorders affecting facial expression. We derived quantitative measures of flat and inappropriate facial affect automatically from the temporal AU profiles. The applicability of the automated FACS was illustrated in a pilot study, in which it was applied to videos of eight schizophrenia patients and controls. We created temporal AU profiles that provided rich information on the dynamics of facial muscle movements for each subject. The quantitative measures of flatness and inappropriateness showed clear differences between the patients and the controls, highlighting their potential for automatic and objective quantification of symptom severity.
Facial Expressions; Facial Action Coding System; action units; Computerized Method
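The quantification step described above, turning temporal Action Unit profiles into frequencies of single and combined AUs, can be sketched as follows. The per-frame intensity traces, the AU labels, and the fixed activation threshold are assumptions standing in for the face-tracking and feature-extraction stages of the actual system.

```python
import numpy as np

def au_frequencies(profiles, threshold=0.5):
    """Frequencies of single and combined Action Units in a video.

    `profiles` maps an AU label (e.g. 'AU12') to a per-frame intensity
    trace. An AU counts as active in a frame when its intensity exceeds
    `threshold`; frequencies are the proportion of frames in which each
    AU, and each co-occurring AU pair, is active.
    """
    active = {au: np.asarray(trace) > threshold for au, trace in profiles.items()}
    freqs = {au: a.mean() for au, a in active.items()}        # single AUs
    labels = sorted(active)
    for i, a in enumerate(labels):                            # combined AUs
        for b in labels[i + 1:]:
            freqs[a + "+" + b] = (active[a] & active[b]).mean()
    return freqs

# e.g. au_frequencies({"AU06": cheek_trace, "AU12": lip_corner_trace}) yields
# activation rates for AU06, AU12, and the smile-related pair AU06+AU12.
```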
The two halves of the brain are believed to play different roles in emotional processing, but the specific contribution of each hemisphere continues to be debated. The right-hemisphere hypothesis suggests that the right cerebrum is dominant for processing all emotions regardless of affective valence, whereas the valence-specific hypothesis posits that the left hemisphere is specialized for processing positive affect while the right hemisphere is specialized for negative affect. Here, healthy participants viewed two split visual-field facial affect perception tasks during functional magnetic resonance imaging, one presenting chimeric happy faces (i.e. half happy/half neutral) and the other presenting identically constructed sad chimeras (i.e. half sad/half neutral), each masked immediately by a neutral face. Results suggest that the posterior right hemisphere is generically activated during non-conscious emotional face perception regardless of affective valence, although greater activation is produced by negative facial cues. The posterior left hemisphere was generally less activated by emotional faces, but appeared to recruit bilateral anterior brain regions in a valence-specific manner. These findings point to the simultaneous operation of aspects of both hypotheses, suggesting that the two rival theories may not actually be in opposition, but may instead reflect different facets of a complex, distributed emotion-processing system.
FMRI; neuroimaging; faces; emotion; affect
The ability to accurately perceive emotions is crucial for effective social interaction. Many questions remain regarding how different sources of emotional cues in speech (e.g., prosody, semantic information) are processed during emotional communication. Using a cross-modal emotional priming paradigm (Facial affect decision task), we compared the relative contributions of processing utterances with single-channel (prosody-only) versus multi-channel (prosody and semantic) cues on the perception of happy, sad, and angry emotional expressions. Our data show that emotional speech cues produce robust congruency effects on decisions about an emotionally related face target, although no processing advantage occurred when prime stimuli contained multi-channel as opposed to single-channel speech cues. Our data suggest that utterances with prosodic cues alone and utterances with combined prosody and semantic cues both activate knowledge that leads to emotional congruency (priming) effects, but that the convergence of these two information sources does not always heighten access to this knowledge during emotional speech processing.
Emotion biases feature prominently in cognitive theories of depression and are a focus of psychological interventions. However, there is presently no stable neurocognitive marker of altered emotion–cognition interactions in depression. One reason may be the heterogeneity of major depressive disorder. Our aim in the present study was to find an emotional bias that differentiates patients with melancholic depression from controls, and patients with melancholic from those with non-melancholic depression. We used a working memory paradigm for emotional faces, in which two faces with angry, happy, neutral, sad or fearful expressions had to be retained over a one-second delay. Twenty patients with melancholic depression, 20 age-, education- and gender-matched control participants, and 20 patients with non-melancholic depression participated in the study. We analysed performance on the working memory task using signal detection measures. We found an interaction between group and emotion on working memory performance that was driven by higher performance for sad faces compared to other categories in the melancholic group. We computed a measure of “sad benefit”, which distinguished melancholic and non-melancholic patients with good sensitivity and specificity. However, replication studies and formal discriminant analysis will be needed to assess whether emotion bias in working memory may become a useful diagnostic tool for distinguishing these two syndromes.
Depression; Melancholia; Working memory; Facial expression; Emotion
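The signal detection scoring of the working memory task above, and the "sad benefit" contrast derived from it, can be sketched as follows. The composite definition used here (d' for sad faces minus the mean d' of the other expressions) and the log-linear correction are assumptions; the study may have operationalized the benefit score differently.

```python
import numpy as np
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' with a log-linear correction for extreme rates."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

def sad_benefit(counts_by_emotion):
    """d' for sad faces minus the mean d' of the other expressions.

    `counts_by_emotion` maps each expression (e.g. 'sad', 'happy') to its
    (hits, misses, false_alarms, correct_rejections) counts; the composite
    score is an assumption about how the benefit was computed.
    """
    d = {emotion: d_prime(*c) for emotion, c in counts_by_emotion.items()}
    others = [v for emotion, v in d.items() if emotion != "sad"]
    return d["sad"] - np.mean(others)
```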
Emotions are expressed more clearly on the left side of the face than the right: an asymmetry that probably stems from right hemisphere dominance for emotional expression (right hemisphere model). More controversially, it has been suggested that the left hemiface bias is stronger for negative emotions and weaker or reversed for positive emotions (valence model). We examined the veracity of the right hemisphere and valence models by measuring asymmetries in: (i) movement of the face; and (ii) observers' ratings of emotionality. The study uses a precise three-dimensional (3D) imaging technique to measure facial movement and to provide images that simultaneously capture the left or right hemiface. Models (n = 16) with happy, sad and neutral expressions were digitally captured and manipulated. Comparison of the neutral and happy or sad images revealed greater movement of the left hemiface, regardless of the valence of the emotion, supporting the right hemisphere model. There was a trend, however, for left-sided movement to be more pronounced for negative than positive emotions. Participants (n = 357) reported that portraits rotated so that the left hemiface was featured were more expressive of negative emotions, whereas right hemiface portraits were more expressive of positive emotions, supporting the valence model. The effect of valence was moderated when the images were mirror-reversed. The data demonstrate that relatively small rotations of the head have a dramatic effect on the expression of positive and negative emotions. The fact that the effect of valence was not captured by the movement analysis demonstrates that subtle movements can have a strong effect on the expression of emotion.