This study examined whether dysphoria influences the identification of unambiguous and ambiguous facial expressions of emotion. Dysphoric and non-dysphoric college students viewed a series of human faces expressing sadness, happiness, anger, and fear that were morphed with each other to varying degrees. Dysphoric and non-dysphoric individuals identified prototypical emotional expressions similarly. However, when viewing ambiguous faces, dysphoric individuals were more likely than non-dysphoric individuals to identify sadness in expressions that mixed sadness with happiness. A similar but less robust pattern was observed for facial expressions that combined fear and happiness. No group differences in emotion identification were observed for faces that combined sadness and anger or fear and anger. Dysphoria appears to enhance the identification of negative emotion in others when positive emotion is also present. This tendency may contribute to some of the interpersonal difficulties often experienced by dysphoric individuals.
interpretation bias; depression; cognitive models; information processing
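The morphing procedure described in the abstract above is typically carried out with dedicated landmark-warping software; as a rough illustration of how a graded continuum between two expressions can be produced, the sketch below simply cross-fades two pre-aligned face images pixel by pixel. The function and loader names are hypothetical, and true morphing also warps facial geometry.

```python
import numpy as np

def blend_continuum(face_a, face_b, steps=11):
    """Crude linear cross-fade between two aligned face images.

    face_a, face_b: float arrays of identical shape (H, W), pixel
    values in [0, 1]. Real morphing software also warps facial
    landmarks; this sketch only blends pixel intensities.
    """
    weights = np.linspace(0.0, 1.0, steps)  # 0% ... 100% face_b
    return [(1.0 - w) * face_a + w * face_b for w in weights]

# e.g., an 11-step sadness-to-happiness continuum from two aligned photos:
# sad, happy = load_aligned_faces(...)   # hypothetical loader
# continuum = blend_continuum(sad, happy)
```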
High levels of trait hostility are associated with wide-ranging interpersonal deficits and heightened physiological responses to social stressors. These deficits may be attributable in part to individual differences in the perception of social cues. The present study evaluated the ability to recognize facial emotion among 48 high hostile (HH) and 48 low hostile (LH) smokers and whether experimentally manipulated acute nicotine deprivation moderated relations between hostility and facial emotion recognition. A computer program presented a series of pictures of faces that morphed from a neutral expression into increasing intensities of happiness, sadness, fear, or anger, and participants were asked to identify the emotion displayed as quickly as possible. Results indicated that HH smokers, relative to LH smokers, required a significantly greater intensity of emotion expression to recognize happiness. No differences were found for other emotions across HH and LH individuals, nor did nicotine deprivation moderate relations between hostility and emotion recognition. This is the first study to show that HH individuals are slower to recognize happy facial expressions and that this occurs regardless of recent tobacco abstinence. Difficulty recognizing happiness in others may affect the degree to which HH individuals are able to identify social approach signals and to receive social reinforcement.
hostility; facial emotion recognition; smoking; nicotine
Aim: This study investigated brain areas involved in the perception of dynamic facial expressions of emotion.
Methods: A group of 30 healthy subjects was measured with fMRI while passively viewing prototypical facial expressions of fear, disgust, sadness, and happiness. All faces were displayed both as still images and, using morphing techniques, dynamically as film clips with the expressions evolving from neutral to emotional.
Results: Irrespective of the specific emotion, dynamic stimuli selectively activated the bilateral superior temporal sulcus, visual area V5, the fusiform gyrus, the thalamus, and frontal and parietal areas. Interaction effects of emotion and mode of presentation (static/dynamic) were found only for the expression of happiness, where static faces evoked greater activity in the medial prefrontal cortex.
Conclusions: Our results confirm previous findings on neural correlates of the perception of dynamic facial expressions and are in line with studies showing the importance of the superior temporal sulcus and V5 in the perception of biological motion. Differential activation in the fusiform gyrus for dynamic stimuli stands in contrast to classical models of face perception but is coherent with new findings arguing for a more general role of the fusiform gyrus in the processing of socially relevant stimuli.
facially expressed emotions; fMRI; dynamic facial expressions; biological motion; superior temporal sulcus
We used filtered low spatial frequency images of facial emotional expressions (angry, fearful, happy, sad, or neutral faces) that were blended with a high-frequency image of the same face bearing a neutral expression, so as to obtain a “hybrid” face image that “masked” the subjective perception of its emotional expression. Participants fell into three groups: healthy controls (N = 49), recovered previously depressed individuals (N = 79), and currently depressed individuals (N = 36). All participants were asked to rate how friendly the person in the picture looked. Simultaneously, we recorded their pupillary responses with an infrared eye-tracker. We expected that depressed individuals (either currently or previously depressed) would show a negative bias and therefore rate the negative emotional faces, even though the emotions were invisible, as more negative (i.e., less friendly) than the healthy controls would. Similarly, we expected that depressed individuals would overreact to the negative emotions and that this would result in greater dilations of pupil diameter than those shown by controls for the same emotions. Although we observed the expected pattern of effects of the hidden emotions on both ratings and pupillary changes, neither response differed significantly among the three groups of participants. The implications of this finding are discussed.
depression; pupillometry; subliminal perception; facial emotions; face hybrids
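The “hybrid” faces described above follow the general logic of low-pass/high-pass image blending. Below is a minimal sketch of that construction, assuming grayscale images and a sharp radial frequency cutoff; the cutoff value and filter shape are assumptions, and published hybrid-face studies often use smoother Gaussian filters.

```python
import numpy as np

def hybrid_face(emotional, neutral, cutoff=8):
    """Low spatial frequencies of an emotional face combined with the
    high spatial frequencies of the same person's neutral face.

    Both inputs: grayscale float arrays of identical shape (H, W).
    `cutoff` (in cycles per image, separating "low" from "high") is
    an assumed value, not one taken from the study.
    """
    h, w = emotional.shape
    fy = np.fft.fftfreq(h)[:, None] * h      # vertical frequency (cycles/image)
    fx = np.fft.fftfreq(w)[None, :] * w      # horizontal frequency
    radius = np.sqrt(fx ** 2 + fy ** 2)
    low_mask = radius <= cutoff              # True where frequency is "low"
    hybrid_f = np.where(low_mask,
                        np.fft.fft2(emotional),   # keep emotional low-SF content
                        np.fft.fft2(neutral))     # keep neutral high-SF content
    return np.real(np.fft.ifft2(hybrid_f))
```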
Background: It has been suggested that depressed patients have a "negative bias" in recognising other people's emotions; however, the detailed structure of this negative bias is not fully understood.
Objectives: To examine the ability of depressed patients to recognise emotion, using moving facial and prosodic expressions of emotion.
Methods: 16 depressed patients and 20 matched (non-depressed) controls selected the one basic emotion (happiness, sadness, anger, fear, surprise, or disgust) that best described the emotional state represented by the moving face and prosody.
Results: There was no significant difference between depressed patients and controls in their recognition of facial expressions of emotion. However, the depressed patients were impaired relative to controls in their recognition of surprise from emotional prosody, judging it to be more negative.
Conclusions: We suggest that depressed patients tend to interpret neutral emotions, such as surprise, as negative. Considering that the deficit was seen only for prosodic emotive stimuli, it would appear that stimulus clarity influences the recognition of emotion. These findings provide valuable information on how depressed patients behave in complicated emotional and social situations.
In a previous study, we demonstrated that amygdala reactivity to masked negative facial emotions predicts negative judgmental bias in healthy subjects. In the present study, we extended the paradigm to a sample of 35 inpatients suffering from depression to investigate the effect of amygdala reactivity on automatic negative judgmental bias and clinical characteristics in depression.
Amygdala activity was recorded in response to masked displays of angry, sad and happy facial expressions by means of functional magnetic resonance imaging at 3 T. In a subsequent experiment, the patients performed an affective priming task that characterizes automatic emotion processing by investigating the biasing effect of subliminally presented emotional faces on evaluative ratings to subsequently presented neutral stimuli.
Significant associations between (right) amygdala reactivity and automatic negative judgmental bias were replicated in our patient sample (r = –0.59, p < 0.001). Further, negatively biased evaluative processing was associated with severity and longer course of illness (r = –0.57, p = 0.001).
Amygdala hyperactivity is a neural substrate of negatively biased automatic emotion processing and could be a determinant of a more severe disease course.
depressive disorder, major; magnetic resonance imaging, functional; amygdala; emotion
Individuals with bipolar disorder demonstrate abnormal social function. Neuroimaging studies in bipolar disorder have shown functional abnormalities in neural circuitry supporting face emotion processing, but have not examined face identity processing, a key component of social function. We aimed to elucidate functional abnormalities in neural circuitry supporting face emotion and face identity processing in bipolar disorder.
Twenty-seven currently euthymic individuals with bipolar disorder I and 27 healthy controls participated in an implicit face processing, block-design paradigm. Participants labeled color flashes superimposed on dynamically changing background faces comprising morphs either from neutral to prototypical emotion (happy, sad, angry and fearful) or from one identity to another identity depicting a neutral face. Whole-brain and amygdala region-of-interest (ROI) activities were compared between groups.
Looking across both emerging face emotion and identity, there was no significant between-group difference. During processing of all emerging emotions, euthymic individuals with bipolar disorder showed significantly greater amygdala activity than controls. During face identity and happy face processing, euthymic individuals with bipolar disorder showed significantly greater amygdala and medial prefrontal cortical activity compared with controls.
This is the first study to examine neural circuitry supporting face identity and face emotion processing in bipolar disorder. Our findings of abnormally elevated activity in the amygdala and medial prefrontal cortex (mPFC) during face identity and happy face emotion processing suggest functional abnormalities in key regions previously implicated in social processing. This may prove important for future examination of the abnormal self-related processing, grandiosity, and social dysfunction seen in bipolar disorder.
Affective disorders; amygdala; bipolar disorder; face processing; functional magnetic resonance imaging; identity processing; medial prefrontal cortex; morph; self-processing; social processing
In daily life, we perceive a person's facial reaction as part of the natural environment surrounding it. Because most studies have investigated how facial expressions are recognized using isolated faces, it is unclear what role context plays. Although it has been observed that the N170 to facial expressions is modulated by the emotional context, it remained unclear whether individuals use context information at this stage of processing to discriminate between facial expressions. The aim of the present study was to investigate how the early stages of face processing are affected by emotional scenes when explicit categorizations of fearful and happy facial expressions are made. Emotion effects were found for the N170, with larger amplitudes for faces in fearful scenes than for faces in happy and neutral scenes. Critically, N170 amplitudes were significantly increased for fearful faces in fearful scenes as compared with fearful faces in happy scenes, and this was expressed in left occipito-temporal scalp topography differences. Our results show that the information provided by the facial expression is combined with the scene context during the early stages of face processing.
context; face perception; emotion; event-related potentials; P1; N170
To examine the ontogeny of emotional face processing, event-related potentials (ERPs) were recorded from adults and 7-month-old infants while viewing pictures of fearful, happy, and neutral faces. Face-sensitive ERPs at occipital-temporal scalp regions differentiated between fearful and neutral/happy faces in both adults (N170 was larger for fear) and infants (P400 was larger for fear). Behavioral measures showed no overt attentional bias toward fearful faces in adults, but in infants, the duration of the first fixation was longer for fearful than happy faces. Together, these results suggest that the neural systems underlying the differential processing of fearful and happy/neutral faces are functional early in life, and that affective factors may play an important role in modulating infants’ face processing.
Brain Development; Electrophysiology; Emotion; Face Perception
Most of our social interactions involve perception of emotional information from the faces of other people. Furthermore, such emotional processes are thought to be aberrant in a range of clinical disorders, including psychosis and depression. However, the exact neurofunctional maps underlying emotional facial processing are not well defined.
Two independent researchers conducted separate comprehensive PubMed (1990 to May 2008) searches to find all functional magnetic resonance imaging (fMRI) studies using a variant of the emotional faces paradigm in healthy participants. The search terms were: “fMRI AND happy faces,” “fMRI AND sad faces,” “fMRI AND fearful faces,” “fMRI AND angry faces,” “fMRI AND disgusted faces” and “fMRI AND neutral faces.” We extracted spatial coordinates and inserted them in an electronic database. We performed activation likelihood estimation analysis for voxel-based meta-analyses.
Of the originally identified studies, 105 met our inclusion criteria. The overall database consisted of 1785 brain coordinates drawn from an overall sample of 1600 healthy participants. Quantitative voxel-based meta-analysis of brain activation provided neurofunctional maps for 1) the main effect of human faces; 2) the main effect of emotional valence; and 3) the modulatory effects of age, sex, explicit versus implicit processing, and magnetic field strength. Processing of emotional faces was associated with increased activation in a number of visual, limbic, temporoparietal and prefrontal areas; the putamen; and the cerebellum. Happy, fearful and sad faces specifically activated the amygdala, whereas angry or disgusted faces had no effect on this brain region. Furthermore, amygdala sensitivity was greater for fearful than for happy or sad faces. Insular activation was selectively reported during processing of disgusted and angry faces. However, insular sensitivity was greater for disgusted than for angry faces. Conversely, neural response in the visual cortex and cerebellum was observable across all emotional conditions.
Although the activation likelihood estimation approach is currently one of the most powerful and reliable meta-analytical methods in neuroimaging research, it is insensitive to effect sizes.
Our study has detailed neurofunctional maps to use as normative references in future fMRI studies of emotional facial processing in psychiatric populations. We found selective differences between neural networks underlying the basic emotions in limbic and insular brain regions.
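For readers unfamiliar with activation likelihood estimation, the core computation can be summarized as modeling each reported activation focus as a 3-D Gaussian and combining per-study maps with a probabilistic union. The sketch below is a simplified illustration under stated assumptions (toy corner-origin coordinates, fixed smoothing kernel); real ALE maps mm coordinates through the image affine, scales the kernel by study sample size, and thresholds the map with permutation testing.

```python
import numpy as np

def ale_map(foci_by_study, shape=(91, 109, 91), voxel=2.0, fwhm=10.0):
    """Minimal activation likelihood estimation (ALE) sketch.

    foci_by_study: one list of (x, y, z) coordinates per study, assumed
    here to be in mm on a corner-origin grid (a simplification; real
    data would be mapped through the MNI affine).
    """
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel  # mm -> voxels
    grid = np.indices(shape).astype(float)
    ale = np.zeros(shape)
    for study in foci_by_study:
        ma = np.zeros(shape)                       # modeled-activation map
        for focus in study:
            center = np.asarray(focus) / voxel
            d2 = sum((grid[i] - center[i]) ** 2 for i in range(3))
            ma = np.maximum(ma, np.exp(-d2 / (2.0 * sigma ** 2)))
        ale = 1.0 - (1.0 - ale) * (1.0 - ma)       # probabilistic union over studies
    return ale
```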
We investigated whether categorical perception and dimensional perception can co-occur while decoding emotional facial expressions. In Experiment 1, facial continua with endpoints consisting of four basic emotions (i.e., happiness–fear and anger–disgust) were created by a morphing technique. Participants rated each facial stimulus using a categorical strategy and a dimensional strategy. The results show that the happiness–fear continuum was divided into two clusters based on valence, even when using the dimensional strategy. Moreover, the faces were arrayed in order of the physical changes within each cluster. In Experiment 2, we found a category boundary within other continua (i.e., surprise–sadness and excitement–disgust) with regard to the arousal and valence dimensions. These findings indicate that categorical perception and dimensional perception co-occurred when emotional facial expressions were rated using a dimensional strategy, suggesting a hybrid theory of categorical and dimensional accounts.
Hybrid theory of emotion; Dimension; Category; Facial expressions
In this paper, the role of self-reported anxiety and degree of conscious awareness as determinants of the selective processing of affective facial expressions is investigated. In two experiments, an attentional bias toward fearful facial expressions was observed, although this bias was apparent only for those reporting high levels of trait anxiety and only when the emotional face was presented in the left visual field. This pattern was especially strong when the participants were unaware of the presence of the facial stimuli. In Experiment 3, a patient with right-hemisphere brain damage and visual extinction was presented with photographs of faces and fruits on unilateral and bilateral trials. On bilateral trials, it was found that faces produced less extinction than did fruits. Moreover, faces portraying a fearful or a happy expression tended to produce less extinction than did neutral expressions. This suggests that emotional facial expressions may be less dependent on attention to achieve awareness. The implications of these results for understanding the relations between attention, emotion, and anxiety are discussed.
There is some evidence that faces with a happy expression are recognized better than faces with other expressions. However, little is known about whether this happy-face advantage also applies to perceptual face matching, and whether similar differences exist among other expressions. Using a sequential matching paradigm, we systematically compared the effects of seven basic facial expressions on identity recognition. Identity matching was quickest when a pair of faces had an identical happy/sad/neutral expression, poorer when they had a fearful/surprise/angry expression, and poorest when they had a disgust expression. Faces with a happy/sad/fear/surprise expression were matched faster than those with an anger/disgust expression when the second face in a pair had a neutral expression. These results demonstrate that effects of facial expression on identity recognition are not limited to happy faces when a learned face is immediately tested. The results suggest different influences of expression in perceptual matching and long-term recognition memory.
facial expression; identity recognition; face matching
Children with narrow phenotype bipolar disorder (NP-BD; i.e., history of at least one hypomanic or manic episode with euphoric mood) are deficient when labeling face emotions. It is unknown whether this deficit is specific to particular emotions, or whether it extends to children with severe mood dysregulation (SMD; i.e., chronic irritability and hyperarousal without episodes of mania). Thirty-nine NP-BD, 31 SMD, and 36 control subjects completed the emotional expression multimorph task, which presents gradations of facial emotions from 100% neutrality to 100% emotional expression (happiness, surprise, fear, sadness, anger, and disgust). Groups were compared on the intensity of emotion required before identification occurred and on accuracy. Both NP-BD and SMD youth required significantly more morphs than controls to correctly label disgusted, surprised, fearful, and happy faces. Impaired face labeling correlated with deficient social reciprocity skills in NP-BD youth and with dysfunctional family relationships in SMD youth. Compared with controls, patients with NP-BD or SMD require significantly more intense facial emotion before they are able to label the emotion correctly. These deficits are associated with psychosocial impairments. Understanding the neural circuitry associated with face-labeling deficits has the potential to clarify the pathophysiology of these disorders.
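Scoring the multimorph task reduces, in essence, to finding the lowest morph intensity at which a participant first labels the target emotion correctly. A minimal sketch of that scoring rule follows; the exact rule used by the task (e.g., whether correct labeling must persist across all later levels) is not specified in the abstract, so this is an assumption.

```python
def identification_threshold(responses, target, levels):
    """Lowest morph intensity at which the target emotion is labeled.

    responses: one label per ascending morph level, e.g. ["neutral", "fear", ...]
    target:    the emotion being morphed in, e.g. "fear"
    levels:    morph intensities in percent, e.g. [10, 20, ..., 100]
    Returns None if the emotion is never identified.
    """
    for level, label in zip(levels, responses):
        if label == target:
            return level
    return None
```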
Since the introduction of the associative network theory, mood-congruent biases in emotional information processing have been established in individuals in sad and happy moods. Research has concentrated on memory and attentional biases. According to the network theory, mood-congruent behavioral tendencies would also be predicted. Alternatively, a general avoidance pattern would also be in line with the theory. Because cognitive biases are assumed to operate most strongly on social stimuli, mood-induced biases in approach and avoidance behavior toward emotional facial expressions were studied. 306 females viewed a highly emotional fragment of a sad or a happy movie to induce either a sad or a happy mood. An Approach-Avoidance Task was implemented, in which single pictures of faces (with angry, sad, happy, or neutral expressions) and non-social control pictures were presented. In contrast to our expectations, mood states did not produce differential behavioral biases. Mood-congruent and mood-incongruent behavioral tendencies were, however, present in the subgroup of participants with the highest depressive symptomatology scores. This suggests that behavioral approach-avoidance biases are not sensitive to mood state but are more related to depressive characteristics.
Mood; Approach; Avoidance; Depression; Facial expressions
Electrophysiological correlates of the processing of facial expressions were investigated in subjects performing the rapid serial visual presentation (RSVP) task. The peak latencies of the event-related potential (ERP) components P1, vertex positive potential (VPP), and N170 were 165, 240, and 240 ms, respectively. The early anterior N100 and posterior P1 amplitudes elicited by fearful faces were larger than those elicited by happy or neutral faces, a finding consistent with the presence of a ‘negativity bias’. The amplitude of the anterior VPP was larger when subjects were processing fearful and happy faces than when they were processing neutral faces; it was similar in response to fearful and happy faces. The late N300 and P300 not only distinguished emotional faces from neutral faces but also differentiated between fearful and happy expressions at lag 2. The amplitudes of the N100, VPP, N170, N300, and P300 components and the latency of the P1 component were modulated by attentional resources: deficient attentional resources resulted in decreased amplitudes and increased latencies of ERP components. In light of these results, we present a hypothetical model involving three stages of facial expression processing.
faces; attentional blink; three stages of facial expressions processing
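Peak latencies such as those reported above (P1, VPP, N170) are conventionally read off as the extremum of the averaged waveform within a component-specific search window. A minimal sketch follows, with the search window left as a user-supplied assumption since windows vary across studies.

```python
import numpy as np

def peak_latency(erp, times, window, polarity):
    """Peak amplitude and latency of an ERP component.

    erp:      1-D voltage array (average over trials/electrodes)
    times:    matching array of time points in ms
    window:   (start_ms, end_ms) search window; component-specific
              windows are assumptions, e.g. roughly (140, 220) for N170
    polarity: +1 for positive components (P1, VPP), -1 for negative (N170)
    """
    mask = (times >= window[0]) & (times <= window[1])
    segment = polarity * erp[mask]
    idx = int(np.argmax(segment))          # extremum within the window
    return erp[mask][idx], times[mask][idx]
```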
Schizophrenia patients demonstrate impaired emotional processing that may be due, in part, to impaired facial emotion recognition. This study examined event-related potential (ERP) responses to emotional faces in schizophrenia patients and controls to determine when, in the temporal processing stream, patient abnormalities occur.
16 patients and 16 healthy control participants performed a facial emotion recognition task. Very sad, somewhat sad, neutral, somewhat happy, and very happy faces were each presented for 100 ms. Subjects indicated whether each face was “Happy”, “Neutral”, or “Sad”. Evoked potential data were obtained using a 32-channel EEG system.
Controls performed better than patients in recognizing facial emotions. In patients, better recognition of happy faces correlated with less severe negative symptoms. Four ERP components corresponding to the P100, N170, N250, and P300 were identified. Group differences were noted for the N170 “face processing” component that underlies the structural encoding of facial features, but not for the subsequent N250 “affect modulation” component. Higher amplitude of the N170 response to sad faces was correlated with less severe delusional symptoms. Although P300 abnormalities were found, the variance of this component was explained by the earlier N170 response.
Patients with schizophrenia demonstrate abnormalities in early visual encoding of facial features that precedes the ERP response typically associated with facial affect recognition. This suggests that affect recognition deficits, at least for happy and sad discrimination, are secondary to faulty structural encoding of faces. The association of abnormal face encoding with delusions may denote the physiological basis for clinical misidentification syndromes.
Schizophrenia; Emotion; Face Recognition; Event-Related Potentials; N170
Social anxiety is characterized by fear of evaluative interpersonal situations. Many studies have investigated the perception of emotional faces in socially anxious individuals and have reported biases in the processing of threatening faces. However, faces are not the only stimuli carrying an interpersonal evaluative load. The present study investigated the processing of emotional body postures in social anxiety. Participants with high and low social anxiety completed an attention-shifting paradigm using neutral, angry, and happy faces and postures as cues. We investigated early visual processing through the P100 component, attentional fixation through the P200, structural encoding as mirrored by the N170, and attentional orienting toward to-be-detected stimuli through the P100 time-locked to target occurrence. Results showed a global reduction of P100 and P200 responses to faces and postures in socially anxious participants as compared with non-anxious participants, with a direct correlation between self-reported social anxiety levels and P100 and P200 amplitudes. Structural encoding of cues and target processing were not modulated by social anxiety, but socially anxious participants were slower to detect the targets. These results suggest reduced processing of social postural and facial cues in social anxiety.
Expressions of emotion are often brief, providing only fleeting images from which to base important social judgments. We sought to characterize the sensitivity and mechanisms of emotion detection and expression categorization when exposure to faces is very brief, and to determine whether these processes dissociate. Observers viewed 2 backward-masked facial expressions in quick succession, 1 neutral and the other emotional (happy, fearful, or angry), in a 2-interval forced-choice task. On each trial, observers attempted to detect the emotional expression (emotion detection) and to classify the expression (expression categorization). Above-chance emotion detection was possible with extremely brief exposures of 10 ms and was most accurate for happy expressions. We compared categorization among expressions using a d′ analysis, and found that categorization was usually above chance for angry versus happy and fearful versus happy, but consistently poor for fearful versus angry expressions. Fearful versus angry categorization was poor even when only negative emotions (fearful, angry, or disgusted) were used, suggesting that this categorization is poor independent of decision context. Inverting faces impaired angry versus happy categorization, but not emotion detection, suggesting that information from facial features is used differently for emotion detection and expression categorizations. Emotion detection often occurred without expression categorization, and expression categorization sometimes occurred without emotion detection. These results are consistent with the notion that emotion detection and expression categorization involve separate mechanisms.
emotion detection; expression categorization; face-inversion effect; awareness; face processing
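The d′ analysis mentioned above comes from signal detection theory: categorization sensitivity is the difference between the z-transformed hit and false-alarm rates. Here is a minimal sketch with the standard 1/(2N) correction for rates of exactly 0 or 1; the numbers in the usage line are illustrative, not the study's.

```python
from scipy.stats import norm

def dprime(hit_rate, fa_rate, n_trials):
    """Signal-detection d' = z(hit rate) - z(false-alarm rate)."""
    lo, hi = 1.0 / (2 * n_trials), 1.0 - 1.0 / (2 * n_trials)
    clamp = lambda p: min(max(p, lo), hi)   # keep the z-transform finite
    return norm.ppf(clamp(hit_rate)) - norm.ppf(clamp(fa_rate))

# Illustrative numbers only: near-chance "fearful vs. angry" categorization
# d = dprime(hit_rate=0.60, fa_rate=0.45, n_trials=40)   # ~0.38
```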
Laughter is an auditory stimulus that powerfully conveys positive emotion. We investigated how laughter influenced visual perception of facial expressions. We simultaneously presented laughter with a happy, neutral, or sad schematic face. The emotional face was briefly presented either alone or among a crowd of neutral faces. We used a matching method to determine how laughter influenced the perceived intensity of happy, neutral, and sad expressions. For a single face, laughter increased the perceived intensity of a happy expression. Surprisingly, for a crowd of faces laughter produced an opposite effect, increasing the perceived intensity of a sad expression in a crowd. A follow-up experiment revealed that this contrast effect may have occurred because laughter made the neutral distracter faces appear slightly happy, thereby making the deviant sad expression stand out in contrast. A control experiment ruled out semantic mediation of the laughter effects. Our demonstration of the strong context dependence of laughter effects on facial expression perception encourages a re-examination of the previously demonstrated effects of prosody, speech content, and mood on face perception, as they may similarly be context dependent.
Crossmodal interaction; emotion; facial expressions; laughter
With the emergence of new technologies, there has been an explosion of basic and clinical research on the affective and cognitive neuroscience of face processing and emotion perception. Adult emotional face stimuli are commonly used in these studies. For developmental research, there is a need for a validated set of child emotional faces. This paper describes the development of the NIMH Child Emotional Faces Picture Set (NIMH-ChEFS), a relatively large stimulus set with high quality, color images of the emotional faces of children. The set includes 482 photos of fearful, angry, happy, sad and neutral child faces with two gaze conditions: direct and averted gaze. We also present data on the set’s validity based on ratings by 20 healthy adult raters. Agreement between the a priori emotion designation and the raters’ labels was high and comparable with values reported for commonly used adult picture sets. Intensity, representativeness, and composite “goodness” ratings are also presented to guide researchers in their choice of specific stimuli for their studies. These data should give researchers confidence in the NIMH-ChEFS’s validity for use in affective and social neuroscience research.
face processing; emotion perception; face stimuli sets; developmental psychopathology; methodology
We recently showed that, in healthy individuals, emotional expression influences memory for faces both in terms of accuracy and, critically, in terms of memory response bias (the tendency to classify stimuli as previously seen or not, regardless of whether this was the case). Although schizophrenia has been shown to be associated with deficits in episodic memory and emotional processing, the relation between these processes in this population remains unclear. Here, we used our previously validated paradigm to directly investigate the modulation of memory recognition by emotion. Twenty patients with schizophrenia and matched healthy controls completed a functional magnetic resonance imaging (fMRI) study of recognition memory for happy, sad, and neutral faces. Brain activity associated with the response bias was obtained by correlating this measure with the contrast subjective old (i.e., hits and false alarms) minus subjective new (i.e., misses and correct rejections) for sad and happy expressions. Although patients exhibited overall lower memory performance than controls, they showed the same effects of emotion on memory, both in terms of accuracy and bias. For sad faces, the similar behavioral pattern between groups was mirrored by a largely overlapping neural network, mostly involved in familiarity-based judgments (e.g., parahippocampal gyrus). In contrast, controls activated a much larger set of regions for happy faces, including areas thought to underlie recollection-based memory retrieval (e.g., superior frontal gyrus and hippocampus) and novelty detection (e.g., amygdala). This study demonstrates that, despite overall lower memory accuracy, emotional memory is intact in schizophrenia, although emotion-specific differences in brain activation exist, possibly reflecting different strategies.
amygdala; faces; facial expression; functional neuroimaging; sad; happy; symptomatology; recollection; familiarity
The ability to recognize emotions in facial expressions relies on an extensive neural network, with the amygdala as the key node, as has typically been demonstrated for the processing of fearful stimuli. A sufficient characterization of the factors influencing and modulating amygdala function, however, has not yet been reached. Because results on its involvement in recognizing all, or only certain, negative emotions are lacking or divergent, the influence of gender and ethnicity is still under debate.
This high-resolution fMRI study addresses the effects of some relevant parameters, such as emotional valence, gender, and poser ethnicity, on amygdala activation during facial emotion recognition in 50 Caucasian subjects. Stimuli were color photographs of emotional Caucasian and African American faces.
Bilateral amygdala activation was obtained for all emotional expressions (anger, disgust, fear, happy, and sad) and neutral faces across all subjects. However, only in males was a significant correlation between amygdala activation and behavioral response to fearful stimuli observed, indicating higher amygdala responses with better fear recognition and thus pointing to subtle gender differences. No significant influence of poser ethnicity on amygdala activation was observed, but analysis of recognition accuracy revealed a significant, emotion-dependent impact of poser ethnicity.
Applying high-resolution fMRI while subjects performed an explicit emotion recognition task revealed bilateral amygdala activation to all emotions presented and to neutral expressions. This mechanism seems to operate similarly in healthy females and males and for both in-group and out-group ethnicities. Our results support the assumption that an intact amygdala response is fundamental to the processing of these salient stimuli, owing to its relevance-detection function.
Unconscious processing of stimuli with emotional content can bias affective judgments. Is this subliminal affective priming merely a transient phenomenon manifested in fleeting perceptual changes, or are long-lasting effects also induced? To address this question, we investigated memory for surprise faces 24 hours after they had been shown with 30-ms fearful, happy, or neutral faces. Surprise faces subliminally primed by happy faces were initially rated as more positive, and were later remembered better, than those primed by fearful or neutral faces. Participants likely to have processed primes supraliminally did not respond differentially as a function of expression. These results converge with findings showing memory advantages with happy expressions, though here the expressions were displayed on the face of a different person, perceived subliminally, and not present at test. We conclude that behavioral biases induced by masked emotional expressions are not ephemeral, but rather can last at least 24 hours.
subliminal priming; memory; emotion; facial expressions; consciousness; awareness; affect
Previous research has suggested that children do not rely on prosody to infer a speaker's emotional state because of biases toward lexical content or situational context. We hypothesized that there are actually no such biases and that young children simply have trouble using emotional prosody. Sixty children from 5 to 13 years of age had to judge the emotional state of a happy or sad speaker and then verbally explain their judgment. Lexical content and situational context were devoid of emotional valence. Results showed that prosody alone did not enable the children to infer emotions at age 5 and was still not fully mastered at age 13. Instead, the children relied on contextual information even though this cue had no emotional valence. These results support the hypothesis that prosody is difficult for young children to interpret and that this cue plays only a subordinate role in inferring others’ emotions up until adolescence.