Numerous studies have demonstrated that Huntington's disease mutation-carriers have deficient explicit recognition of isolated facial expressions. There are no studies, however, which have investigated the recognition of facial expressions embedded within an emotional body and scene context. Real life facial expressions are typically embedded in contexts which may dramatically change the emotion recognized in the face. Moreover, a recent study showed that the magnitude of the contextual bias is modulated by the similarity between the actual expression of the presented face and the facial expression that would typically fit the context, e.g. disgust faces are more similar to anger than to sadness faces and, consequently, are more strongly influenced by contexts expressing anger than by contexts expressing sadness. Since context effects on facial expression perception are not explicitly controlled, their pattern serves as an implicit measure of the processing of facial expressions. In this study we took advantage of the face-in-context design to compare explicit recognition of face-expressions by Huntington's disease mutation-carriers, with evidence for processing the expressions deriving from implicit measures. In an initial experiment we presented a group of 21 Huntington's disease mutation-carriers with standard tests of face-expression recognition. Relative to controls, they displayed deficits in recognizing disgust and anger faces despite intact recognition of these emotions from non-facial images. In a subsequent experiment, we embedded the disgust faces on images of people conveying sadness and anger as expressed by body language and additional paraphernalia. In addition, sadness and anger faces were embedded on context images conveying disgust. In both cases participants were instructed to categorize the facial expressions, ignoring the context. 
Despite the deficient explicit recognition of isolated disgust and anger faces, the perception of the emotions expressed by the faces was affected by context in Huntington's disease mutation-carriers in a manner similar to control participants. Specifically, they displayed the same sensitivity to face–context pairings. These findings suggest that, despite their impaired explicit recognition of facial expressions, Huntington's disease mutation-carriers display relatively preserved processing of the same facial configurations when embedded in context. The results also show intact utilization of the information elicited by contextual cues about faces expressing disgust even when the actually presented face expresses a different emotion. Overall, our findings shed light on the nature of the deficit in facial expression recognition in Huntington's disease mutation-carriers and underscore the importance of context in emotion perception.
Over a decade ago, studies showed that individuals with Huntington's disease display impaired recognition of emotion from facial expressions (Jacobs et al., 1995). Subsequent research suggested that these individuals actually have a disproportionate deficit in the recognition of disgust facial expressions even at the pre-symptomatic stage (e.g. Sprengelmeyer et al., 1996, 1998, 2006, 2007; Gray et al., 1997; Wang et al., 2003; Montagne et al., 2006).
In addition, structural MRI studies of preclinical Huntington's disease mutation-carriers (HDMC) demonstrated a reduction of grey matter in the left striatum and bilateral insula (Thieben et al., 2002), regions which have been implicated in the processing of disgust stimuli in several studies (e.g. Phillips et al., 1997, 1998; Sprengelmeyer et al., 1998; Calder et al., 2000; Krolak-Salmon et al., 2003; Wicker et al., 2003). Furthermore, reduced fMRI activation to disgust stimuli in the striatum and insula has been noted in pre-clinical Huntington's disease (Hennenlotter et al., 2004). Taken together, these studies were viewed as supporting the notion of specific underlying neural systems and structures that mediate disgust processing (e.g. Calder et al., 2001; Calder and Young, 2005; Adolphs, 2006; Sprengelmeyer et al., 2006).
Other studies, however, extended the finding of a disgust-specific deficit in symptomatic and pre-symptomatic Huntington's disease patients, showing that recognition of all negative facial expressions was impaired, with disproportionate deficits for fear faces (Milders et al., 2003). Recently, a large-scale study of pre-clinical and manifest Huntington's disease found a generalized deficit in the recognition of all negative facial expressions, with no specific deficits for any emotion (Johnson et al., 2007).
Yet, regardless of whether the deficit is accentuated for disgust or more generalized, the studies reviewed above provide ample evidence of impaired recognition of facial expressions even at the pre-clinical stages of the disease. In this article, we present a novel approach in which we examine HDMC processing of facial expressions embedded in perceptual/emotional contexts. We hypothesized that this approach could shed additional light on the characteristics of the face expression recognition deficits of HDMC.
Revealing the characteristics of the face expression recognition deficits of HDMC may be of clinical interest for several reasons. Proficient recognition of emotion from facial expressions predicts successful social functioning in healthy individuals (e.g. Corden et al., 2006; Marsh et al., 2007). In the case of Huntington's disease, misrecognizing facial expressions of others (e.g. recognizing a worried face as an angry face) might lead to elevated aggression which is a frequent psychiatric manifestation of the disease (Naarding et al., 2001; Timman et al., 2004). Furthermore, deficits in facial expression recognition manifest as one of the very early clinical symptoms of Huntington's disease, with performance worsening as the disease progresses (Johnson et al., 2007). Hence, tracking the course of facial expression recognition might also serve as a sensitive indicator of the mutation-carrier's clinical status. It is not surprising, therefore, that the facial expression recognition deficits in Huntington's disease have raised much interest during recent years.
Notwithstanding the importance of the aforementioned studies, they were all limited by constraining their investigations to the recognition of isolated facial expressions devoid of any context. Real life facial expressions, however, are typically embedded in a rich and informative context. Nevertheless, the influence of context on the processing of facial expression perception in HDMC has not previously been investigated.
The fact that previous research on facial expression perception has relied mostly on isolated faces is, in itself, not surprising. This methodological choice has been guided by the notion that basic facial expressions are universal (Ekman, 1993) and categorically discrete signals of emotion (Etcoff and Magee, 1992; Young et al., 1997). Consequently, these facial signals were assumed to be directly mapped to specific emotional categories (Buck, 1994; Keltner et al., 2003). Specifically, it has been posited that the recognition of basic facial expressions and their assignment to emotion categories is relatively immune to context influence (e.g. Ekman and O'Sullivan, 1988; Nakamura et al., 1990; Buck, 1994). Although the attribution of specific emotions according to the dimensional view is assumed to depend on situational information (Carroll and Russell, 1996), the values of valence and arousal that are read-out from the face are assumed to be generally immune to contextual influence (Russell and Bullock, 1986; Russell, 1997).
In contrast to this prevalent view, recent investigations have shown that facial expressions are influenced by context more than had previously been assumed (e.g. Meeren et al., 2005; Van den Stock et al., 2007; Aviezer et al., 2008a, b). In fact, under certain conditions, context can dramatically shift the emotional category recognized in basic facial expressions. For example, Aviezer and colleagues ‘planted’ prototypical pictures of disgust faces on bodies of models conveying different emotions (such as anger and sadness). Their results showed that placing a face in context induces striking changes in the recognition of emotional categories from the facial expressions. Importantly, that study revealed that a given facial expression is not uniformly influenced by all emotional contexts. Rather, the magnitude of contextual influence is strongly correlated with the degree of similarity between the expression of the target face (i.e. the face being presented) and the facial expression that is typically associated with the emotional context. The more similar these two expressions are, the stronger the influence. For example, disgust faces are highly similar to anger faces, and less so to sadness faces (Susskind et al., 2007). And indeed, an anger context results in a striking contextual influence on disgust faces, whereas an equally powerful and recognizable sad context induces much weaker effects on the same disgust faces (Aviezer et al., 2008a). We coined this pattern of results the ‘similarity effect’. Interestingly, intention is not a prerequisite for this effect to occur and participants are not aware of the influence of the context. In fact, recent experiments in our lab have shown that participants cannot ignore the context even if they are explicitly instructed to do so, and even if they are motivated via a monetary reward (Aviezer et al., manuscript under review).
Hence, the similarity effect can serve as an implicit measure of the processing of the face, the context, and their integration.
In the present study we compared explicit measures of emotion recognition and the implicit measure described above. If HDMC patients show a normal similarity effect, in other words, if our implicit measure of expression processing shows that these patients are not impaired, then it would seem that the participants’ problem with facial expressions of emotions lies in the process of explicit recognition of the emotion from the face, and not in the early low level stages of face-expression and/or emotion perception.
In Experiment 1, we used a series of tasks in order to characterize the functioning of the HDMC group on standard visual tests of acuity and face perception, as well as on a self-report disgust questionnaire. In addition, we assessed emotion recognition from images of isolated facial expressions as well as from images of emotional (but faceless) body language and scenes. In Experiment 2 we investigated the influence of these emotional contexts on the perception of basic facial expressions.
Twenty-five HDMC (14 females) were recruited from the Huntington's disease Clinic at North York General Hospital, Ontario, Canada. The HDMC group inclusion was based on positive gene testing. Participants were included only if they had a CAG repeat size of 36 or greater in one of their Huntingtin alleles and if they were asymptomatic or had only mild Huntington's disease symptoms. Two participants were excluded in the visual acuity screening (i.e. acuity < 20/20) and two patients did not complete the study. Hence, the following report refers to the remaining 21 HDMC (12 females).
The motor subsection of the Unified Huntington's Disease Rating Scale (UHDRS: Huntington's Study Group, 1996) was used to provide an assessment of motor impairments. The UHDRS motor subsection contains 31 items, each ranging from 0 (no pathology) to 4 (severe pathology). Hence, a maximum score of 124 would indicate severe and unequivocal symptoms of Huntington's disease. Table 1 summarizes the full clinical characteristics of each of the participants in the HDMC group. The average sum score of the HDMC participants in the UHDRS was 9.18/124 (SD = 10.01), indicating relatively mild and non-specific motor symptoms, if any.
Previous work has established that pre-diagnosed HDMC vary significantly in a host of symptomatic signs including motor scores, cognitive measures, striatal volumes and facial expression recognition (e.g. Johnson et al., 2007; Paulsen et al., 2008). One factor which may explain this variance is the proximity to Huntington's disease diagnosis, which can be estimated based on the age and number of CAG repeats (Langbehn et al., 2004). Using the model developed by Langbehn and colleagues, we obtained the probability of clinical Huntington's disease diagnosis within the next 5 years, based on the patient's age and CAG repeat size for each HDMC (Table 1). It should be noted that the age of the pre-clinical Huntington's disease participants in our sample is higher than the age reported in several papers of pre-manifest Huntington's disease. This most probably reflects the fact that we were particularly interested in pre-symptomatic participants, and hence we selected our patients a priori, based on clinical status criteria and not by age, a strategy that should be taken into consideration when generalizing from our results to younger groups of pre-clinical HDMC.
The mean CAG repeat size of the HDMC was 42.8 (SD = 3.7) and the mean probability of Huntington's disease onset within 5 years was 0.37 (SD = 0.23).
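The conditional onset probability used here can be illustrated with a short sketch. The Langbehn et al. (2004) model treats onset age as logistically distributed with CAG-dependent mean and variance; the numerical constants below are those commonly quoted from that paper and should be verified against the original before any use beyond illustration — this is a minimal sketch, not a clinical tool.

```python
import math

def p_onset_within(age, cag, years=5.0):
    """Probability of clinical Huntington's disease onset within the next
    `years` years for a currently unaffected carrier, per the logistic
    onset-age model of Langbehn et al. (2004). Constants quoted from that
    paper; illustrative only."""
    # CAG-dependent mean and variance of the onset-age distribution
    mu = 21.54 + math.exp(9.556 - 0.1460 * cag)
    var = 35.55 + math.exp(17.72 - 0.3269 * cag)
    # Convert the SD to the scale parameter of a logistic distribution
    s = math.sqrt(var) * math.sqrt(3.0) / math.pi

    def cdf(x):  # logistic CDF of onset age
        return 1.0 / (1.0 + math.exp(-(x - mu) / s))

    # Conditional probability: onset in (age, age + years], given no onset by `age`
    return (cdf(age + years) - cdf(age)) / (1.0 - cdf(age))

# e.g. near the sample means reported above (age ~48, CAG ~43)
p = p_onset_within(48.3, 43)
```

Because the probability is a nonlinear function of age and CAG, the value computed at the sample means need not equal the mean of the individual probabilities reported in the text.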
The control group consisted of 27 participants (15 females), free of neurological or psychiatric disorders. They were recruited through the Baycrest community register of volunteers from the Toronto area. The control group was matched with the HDMC group in gender ratio and age (HDMC: M = 48.3, SD = 10.1; controls: M = 49.2, SD = 10, P > 0.1).
All participants signed a written informed consent, were debriefed after being tested, and were compensated for their time and travel expenses. The study was approved by the ethics boards of Baycrest and North York General Hospital.
Since our study focused on recognition of facial expressions, we wished to examine the performance of the HDMC on tasks of visual acuity and face perception ability. To this end, we used the Snellen visual acuity chart and the Benton Facial Recognition Test (BFRT) (Benton et al., 1983). In the BFRT, participants have to match an unfamiliar target face with simultaneously presented photographs of the same person shown in different lighting and orientation among a group of distracter faces. No time limit is imposed on this test.
Given the previously reviewed interest in the emotion of disgust, we examined the conceptual understanding and self-reported experience of disgust using the Disgust Scale questionnaire (Haidt et al., 1994). This scale consists of 32 items encompassing eight domains of disgust elicitors such as food, animals, body products and injuries. To illustrate, participants are required to read statements such as ‘You see maggots on a piece of meat in an outdoor garbage pail’, and indicate if they would find that situation ‘Not Disgusting’, ‘Slightly Disgusting’, or ‘Very Disgusting’. Higher scores indicate heightened sensitivity to the emotion of disgust.
Stimuli for this task included images of models (one male and one female) positioned in scenes conveying prototypical emotions via body language and additional paraphernalia. The displayed emotions were disgust, sadness and anger (Fig. 1). The disgust image portrayed a model in a revolted body pose handling a pair of dirty underwear. The sadness image portrayed a model standing near a grave in a heart-broken pose. The anger image portrayed a model in a threatening pose waving an angry fist. These images have been previously shown to be highly and equally recognizable indicators of their respective emotion categories (Aviezer et al., 2008a, b). Importantly, the faces were cut out from these images so that they were not available for deducing the emotion of the scene. In Experiment 2, these images served as the context in which the faces were embedded.
The images were shown one at a time on a computer monitor and participants were requested to choose from among six options listed below each image (anger, sadness, fear, disgust, surprise and happiness) the one that best described the emotion that the person would be feeling. No time limit was imposed.
An adapted version of the Ekman 60 Faces test (Young et al., 2002) was used to assess facial expression perception. In this version, participants viewed 10 different models from the Ekman and Friesen (1976) series, each displaying expressions corresponding to the basic emotions of anger, disgust, happiness and sadness. The 40 faces were shown one at a time, in random order, on a computer monitor. Viewers were requested to choose from the options appearing below each face (anger, sadness, fear, disgust, surprise, happiness) the emotion that best described the facial expression. No time limit was imposed.
Scores on the Benton Facial Recognition Test did not differ between the HDMC (M = 45.7, SD = 6.1) and controls (M = 45.8, SD = 5.3), (P > 0.5, Table 1). Thus, any differences in the subsequent tests of emotion perception did not directly result from poor visual acuity or impaired face identity processing.
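The group comparisons reported throughout are standard independent-samples t-tests, and the statistic can be recovered from summary statistics alone. A minimal sketch of the pooled-variance form, applied to the BFRT means and SDs above (group sizes of 21 and 27 from the Methods):

```python
import math

def pooled_t(m1, s1, n1, m2, s2, n2):
    """Independent-samples t statistic with pooled variance.
    Returns (t, degrees of freedom)."""
    df = n1 + n2 - 2
    # Pooled variance: weighted average of the two sample variances
    sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / df
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, df

# BFRT: HDMC (M = 45.7, SD = 6.1, n = 21) vs controls (M = 45.8, SD = 5.3, n = 27)
t, df = pooled_t(45.7, 6.1, 21, 45.8, 5.3, 27)
# |t| on 46 df is close to zero, consistent with the non-significant difference
```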
The average disgust scores of the HDMC (M = 16.4, SD = 6.5) and controls (M = 18.9, SD = 5.9) did not differ significantly, t(44) = 1.3, P > 0.15. Both groups fell within the average range (M = 14.95; SD = 5.11) for healthy participants (Johnson et al., 2007).
Recognition of the three different emotion categories (anger, sadness, disgust) was compared between the groups in a 2 (Group) × 3 (Emotion) mixed ANOVA. There was no significant effect of Emotion category and no interaction (both F values < 1.00). As can be observed in Fig. 2, the HDMC participants displayed intact recognition of the emotional body scenes and were no different than controls in recognizing the emotion portrayed by the images. Accordingly, the main effect of the Group was not significant [F(1, 43) = 1.8; P > 0.17]. Overall, our findings indicate that our sample of HDMC displayed intact recognition of emotion from non-facial visual stimuli.
Recognition of the four different facial expressions (anger, disgust, happiness and sadness) was compared in a 2 (Group) × 4 (Expression) mixed ANOVA (Fig. 3). The analysis revealed a significant effect of Expression category, [F(3,138) = 29.4; P < 0.0001], a significant effect of the Group, [F(1,46) = 5.9; P < 0.01], and no interaction [F(3,138) = 1.9; P > 0.1].
Although the Group × Expression interaction was not significant, planned comparisons were conducted. These revealed no differences in the recognition rates of sad and happy expressions [t(46) = 0.5, P > 0.5] and [t(46) = 0.3, P > 0.7], respectively. In contrast, the HDMC group was significantly worse than the control group at recognizing facial expressions of disgust, [t(46) = 2.1, P < 0.03], and anger, [t(46) = 2, P < 0.05].
The values of Huntington's disease diagnosis probability within the next 5 years were distributed in a symmetrical and non-skewed manner (mean = 0.37, median = 0.36, mode = 0.36, skewness = 0.05). Consequently, we split the sample of HDMC into two groups: high probability Huntington's disease (M = 0.56, SD = 0.14) and low probability (M = 0.19, SD = 0.13).
Participants in the high probability group (M = 42.8, SD = 6.6) were less accurate than the low probability group (M = 48.8, SD = 4.4) in the Benton facial recognition task, [t(17) = 2.3, P < 0.04], although both groups performed within normal limits. Similarly, a trend was observed suggesting that overall the high probability group was somewhat poorer at recognizing isolated facial expressions than the low probability group [F(1,18) = 3.5, P = 0.076], although performance in both HDMC subgroups was poorer than that of controls. In contrast, the groups did not differ in their recognition of isolated context images (F < 0.3), or their disgust sensitivity scores (t < 1).
In Experiment 1 we aimed at characterizing the performance of HDMC in a range of visual tests, as well as in tasks of emotion-recognition and emotion self-report. In accordance with other studies, HDMC were no different than healthy participants in tasks that assessed face identity perception and self-report of disgust (e.g. Gray et al., 1997; Sprengelmeyer et al., 2006; Johnson et al., 2007).
HDMC participants were also similar to controls in perceiving anger, sadness and disgust from emotion-portraying bodies and scenes. This outcome adds to similar findings of intact perception of emotion from visual scenes portraying happiness, sadness and fear in Huntington's disease patients (Hayes et al., 2007). While Hayes et al. have shown that Huntington's disease patients recognized images of body mutilation as fearful or saddening rather than disgusting, they were just as likely as controls to recognize images of cockroaches and human faeces as disgusting. Although a recent note by de Gelder et al. (2008) reported impaired recognition of anger body language in Huntington's disease, differences in the stimuli as well as in the stage of Huntington symptoms make the discrepancy in findings difficult to interpret. Interestingly, these authors did not test disgust, as it cannot be conveyed unequivocally using pure body language without additional paraphernalia.
When presented with a task requiring the recognition of isolated facial expressions, the performance of HDMC was only slightly below that of controls and, indeed, statistically comparable for sad and happy facial expressions. In contrast, the HDMC were significantly worse than controls at recognizing facial expressions of disgust and anger. Although the purpose of the current study was not to resolve the disgust-specific versus generalized deficit debate in HDMC and, indeed, the absence of a Group × Expression interaction prohibits strong conclusions in that direction, the current results are more in line with the notion that the deficit is not limited to disgust (Johnson et al., 2007).
As shown in previous studies, pre-manifest HDMC performance may vary as a function of the proximity to Huntington's disease diagnosis. HDMC participants with a higher probability of diagnosis within the next 5 years were less accurate in the Benton facial recognition task, and a trend suggested that, overall, they were poorer at recognizing isolated facial expressions. These results highlight the fact that a group of homogenously appearing pre-manifest Huntington's disease participants may in fact vary as a function of estimated proximity to diagnosis. In general, the findings are in line with prior reports concerning the relation between probability of Huntington's disease onset and facial expression recognition (Johnson et al., 2007).
In conclusion, the data in Experiment 1 are in agreement with previous findings, namely, that despite normal visual acuity, intact face identity processing and normal conceptual understanding of emotion, HDMC have difficulties in explicitly recognizing facial expressions of emotion, with particular difficulties in recognizing disgust and anger. However, like previous work in the field, the present findings focused on the explicit recognition of isolated facial expressions devoid of any context. Hence, it is of interest to examine the effects of context on the recognition of facial expressions in a group of HDMC.
The results of Experiment 1 suggest that Huntington's disease mutation-carriers are impaired at the explicit recognition of facial expressions from isolated faces. However, it remains unclear at what level of processing this impairment occurs. Does it originate at the low-level perceptual processing of the facial configuration, or, alternatively, at the higher levels of mapping the facial expression to the emotional category? As the processing of disgust faces in HDMC has previously raised much theoretical interest (e.g. Sprengelmeyer et al., 1996, 2007; Gray et al., 1997; Wang et al., 2003; Montagne et al., 2006), we focused the investigation in Experiment 2 on this emotion. Specifically, we explored the characteristics of the HDMC deficit in two ways: (i) by presenting disgust faces in non-disgust (anger and sadness) contexts and (ii) by presenting non-disgust (anger and sadness) faces in disgust contexts.
Our previous work has demonstrated that the recognition of disgust faces is differently influenced by various emotional contexts (i.e. the similarity effect). For example, the tendency to misrecognize a disgusted face as angry when embedded in an anger-expressing context is higher than the tendency to misrecognize it as sad in an equally strong sadness-expressing context (Aviezer et al., 2008a). This is probably because disgust faces are more similar to anger faces than to sadness faces.
We hypothesized that if the HDMC deficit in recognizing disgust faces results from poor low-level perceptual processing of the face expressions then the interaction between face and context should not reveal the similarity effect, simply because one of the inputs to this process (i.e. the presented face) is damaged. In contrast, if HDMC patients' low-level perceptual processing of the face is intact, and so is their ability to use this information in the integration process, then they should not differ from controls in the pattern of miscategorizing the disgust faces as portraying the emotions induced by the context.
A second question of interest has to do with the ability of Huntington's disease mutation-carriers to activate face-related aspects of the representation of disgust. To this end we embedded faces expressing anger and sadness in contexts expressing disgust. According to our model, the strength of the context effects is (at least partly) determined by the similarity between the target facial expression and the expression of the face typically associated with the emotional context. Note that the latter expressions never appear in our stimuli. This suggests that the representation of face-related aspects of the emotional context takes part in the process of integrating context and face. Hence, if Huntington's disease mutation-carriers suffer from deficiencies in the activation of face-related aspects of the representation of disgust, or from deficiencies in the representation itself, then their behaviour should not show the similarity effect. In contrast, if they do not suffer from these deficiencies, their recognition patterns should consistently resemble those of control participants.
The same 21 HDMC and 27 control participants who were tested and included in Experiment 1 were also tested in this experiment.
Images of 10 models (five females) posing facial expressions of disgust, anger and sadness were chosen from Ekman and Friesen's (1976) standardized database. These 30 faces were identical to those used in the test of isolated face expression recognition in Experiment 1.
For context images we used pictures of models that appeared in emotional situations portraying anger, sadness and disgust. These context images were identical to the sadness, anger and disgust images used previously in the test of emotional recognition of scenes and body language. To reiterate, these emotional context images were all accurately recognized with no differences between the groups or contextual emotion categories (see Experiment 1). Face-body composites were created by seamlessly combining the facial expressions and the bodies with Photoshop 7.0 in realistic head-body proportions. Seen from a distance of 60 cm, the images occupied 13° × 6° of the visual field.
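The stated stimulus size follows from simple viewing geometry: an extent subtending visual angle θ at distance d spans 2·d·tan(θ/2). A small sketch (the 60 cm distance and 13° × 6° extent are from the text; the resulting centimetre values are our own back-calculation):

```python
import math

def size_from_visual_angle(angle_deg, distance_cm=60.0):
    """Physical extent (cm) that subtends `angle_deg` degrees of visual
    angle at viewing distance `distance_cm`."""
    return 2.0 * distance_cm * math.tan(math.radians(angle_deg) / 2.0)

height = size_from_visual_angle(13.0)  # ~13.7 cm on screen
width = size_from_visual_angle(6.0)    # ~6.3 cm on screen
```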
Two groups of face–body compound stimuli were created: (i) facial expressions of disgust embedded in scene context including bodies conveying anger (Fig. 4A) and sadness (Fig. 4B) and (ii) facial expressions of anger and sadness embedded in disgust contexts (Fig. 4C and D, respectively). All stimuli were presented randomly in a within-participant design.
For disgust faces on non-disgust contexts, a 2 × 2 repeated measures design was used with Context expression (anger, sadness) as a within-subject factor, and Group (HDMC, controls) as a between-groups factor. The dependent variable was Contextual influence, that is, the percentage of times the faces were (mis)recognized as expressing the context's emotion (rather than any other emotion). This measure allows one to directly assess the tendency of participants to shift from the original, intended emotional facial category (i.e. the one proposed by Ekman and Friesen, 1976), to the new emotional facial category which would typically be expected by the context.
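The dependent measure can be made concrete with a short sketch: for each face–context condition, count the trials on which the chosen label matches the context's emotion. The trial records below are hypothetical, for illustration only.

```python
def contextual_influence(trials):
    """Percentage of trials on which the face was categorized with the
    emotion typically implied by the context (rather than any other label)."""
    hits = sum(1 for t in trials if t["response"] == t["context_emotion"])
    return 100.0 * hits / len(trials)

# Hypothetical trials: disgust faces embedded in anger-context images
trials = [
    {"face": "disgust", "context_emotion": "anger", "response": "anger"},
    {"face": "disgust", "context_emotion": "anger", "response": "disgust"},
    {"face": "disgust", "context_emotion": "anger", "response": "anger"},
    {"face": "disgust", "context_emotion": "anger", "response": "anger"},
]
influence = contextual_influence(trials)  # 75.0 for these made-up responses
```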
For non-disgust faces on disgust contexts, a 2 × 2 repeated measures design was used with Face expression (anger, sadness) as a within-subject factor, and Group (HDMC, controls) as a between-groups factor. The same dependent measure of Contextual influence used in the previous design, was also used here.
All face-context composites were shown on a computer monitor one at a time in random order and viewers were requested to choose from a list of basic emotions presented below each image the label that best described the facial expression (happiness, surprise, fear, sadness, disgust, or anger). No time limits were imposed on performance.
Both groups were far more likely to recognize the disgust faces as conveying anger in the anger context than as conveying sadness in the sadness context (Fig. 5 and Table 2). This pattern was corroborated by a significant main effect of Context [F(1,46) = 172.9, P < 0.0001]. Importantly, the two groups did not differ in their categorizations, [F(1,46) = 0.149, P > 0.7], and there was no Group by Context interaction, [F(1,46) = 0.4, P > 0.5].
Pair-wise comparisons between the HDMC and control groups for each context category confirmed that the groups did not differ for either the anger contexts [t(46) = 0.7, P > 0.4] or the sadness contexts [t(46) = 0.07, P > 0.9].
The HDMC and control participants displayed virtually identical patterns of contextual influence for both the anger and sadness facial expressions embedded in the disgust context. The effects of Face (F < 0.6), Group (F < 0.04), and their interaction (F < 0.05) were all non-significant (Fig. 6 and Table 3).
In Experiment 2 we examined the recognition of disgust faces in non-disgust contexts, as well as of non-disgust faces in disgust contexts.
In the first experimental session we compared the recognition of disgust faces embedded in anger versus sadness contexts. Both HDMC and controls were strongly influenced by the context. Most importantly, the groups were similarly affected by the subtle face–context interactions, as evidenced by their comparable similarity effect (i.e. both groups were far more likely to misrecognize disgust faces in an anger context as ‘anger’ than they were to misrecognize disgust faces in a sadness context as ‘sad’).
Recall that, like controls, HDMC displayed intact mapping of sadness, anger and disgust when these emotions were conveyed by non-facial contexts. The fact that these contexts exerted similar influence on emotion recognition in HDMC and controls suggests that the former were able to perceive disgust faces, and use them in the process of face–context integration, as well as healthy controls did.
It is important to note that in neither group could the findings be explained by demand characteristics or by a strategy of merely ignoring the faces. Either of these alternative explanations would have resulted in equally strong context effects for both contextual conditions. Both sadness and anger contexts are ‘incongruent’ with the emotion of the disgust face, and both emotional contexts were equally recognizable in themselves. However, the two conditions differed in the degree of similarity between the disgust face (target) and the facial expressions typically associated with each of the contexts. Consequently, the different context conditions differed in the extent to which they affected participants’ categorizations. Replicating our previous study (Aviezer et al., 2008a), the magnitude of the expected context effect was strongly affected by this similarity, and there were no differences between groups. This, we suggest, serves as evidence for intact low-level processing of disgust faces, as well as intact ability to use these facial expressions in the process of integrating face and context into an emotion judgment.
In the second experimental session we presented non-disgust faces in disgust contexts. Both the HDMC and the control groups displayed conspicuous and comparable contextual influence in both conditions. This pattern suggests that HDMC have an intact ability to activate the representation of disgust upon perceiving a disgust context. Similarly, it implies that HDMC function like controls in integrating disgust contexts with non-disgust faces.
The main purpose of the current study was to investigate the influence of context on the perception of facial expressions in HDMC. Specifically, based on our previous work on emotion perception from faces, our working hypothesis was that the patterns of contextual influence may serve as an implicit measure of the processing and utilization of facial expressions of emotion.
To this end, the current study used implicit measures to examine the processing of facial expressions of emotions in general—and of disgust in particular—among HDMC. The overall pattern of results indicated that, despite the deficient explicit recognition of isolated disgust and anger faces, the perception of the emotions expressed by the faces was affected by context in HDMC in a manner highly similar to that observed in control participants. Specifically, HDMC displayed the same sensitivity to face–context pairings as controls did.
These data yield a consistent picture, namely, our implicit measures suggest that HDMC are not impaired at the low-level processing of facial expressions. Neither are they impaired at processes that use this information during emotion categorization. Rather, the current results suggest that the impairment of HDMC has to do with the explicit recognition of facial expressions, more than it has to do with the perception or general processing of the facial configuration.
In Experiment 1, we replicated the classic finding in which Huntington's disease mutation-carriers, with otherwise normal vision, show impairments in the explicit recognition of emotions from isolated faces (e.g. Jacobs et al., 1995; Sprengelmeyer et al., 1996; Johnson et al., 2007). Although the goal of our study was not to address the disgust-specific versus generalized deficit debate, our data were less consistent with the disgust-specific view. Compared to controls, HDMC were significantly worse at recognizing isolated disgust and anger faces, but no significant differences were found between the groups for the recognition of sadness and happiness. Note, however, that the facial expressions of disgust and anger are similar, which is a caveat for the above interpretation. Furthermore, since investigating contextual influence was our primary interest, we diverged from the more traditional paradigm in which six or seven basic emotions are presented. Hence, it is possible that if more emotions had been presented, we would have noticed additional deficits in the HDMC group.
Did the Huntington's disease mutation-carriers perform worst at recognizing anger and disgust because of a general task-difficulty effect (e.g. Rapcsak et al., 2000; Milders et al., 2003)? Although possible, it should be noted that results from several studies suggest that disgust and anger facial expressions are actually easier to recognize than sadness expressions (e.g. Young et al., 2002; Johnson et al., 2007). Furthermore, Johnson et al. (2007) found no evidence for emotion-selective deficits in their large sample of Huntington's disease participants, as would be expected if some emotions were simply more difficult to recognize than others. Hence, the link between task difficulty and neuropsychological deficits is complex in this case and cannot easily account for the current pattern of results.
In addition to presenting isolated facial expressions, we also examined the recognition of emotions in scenes conveying sadness, anger and disgust. Overall, our findings indicated that our sample of HDMC displayed intact recognition of emotion from non-facial visual stimuli, suggesting that their observed impairments with faces may reflect a specific problem with the processing of emotional information expressed by faces.
Prior research on facial expressions in Huntington's disease has focused exclusively on isolated facial expressions, devoid of any context. Although the recognition of isolated facial expressions is of theoretical interest, this approach lacks the ecological validity of viewing the expression in the context in which it typically appears (Russell, 1997; Feldman-Barrett et al., 2007). Furthermore, recent investigations have highlighted the importance of the context by demonstrating that facial expressions are far less resilient to contextual influence than previously assumed (e.g. Meeren et al., 2005; Van den Stock et al., 2007; Aviezer et al., 2008a, b).
Therefore, in Experiment 2 we examined the influence of context (including bodily expressions and paraphernalia) on the perception of disgust facial expressions in HDMC, by embedding faces conveying disgust on the equally recognizable contexts of anger and sadness.
Although the HDMC were impaired relative to controls at explicitly recognizing disgust from isolated faces, the magnitude and pattern of the contextual influence were nearly identical in the two groups. Most importantly, HDMC were sensitive to the degree of similarity between the target facial expression (the face being judged) and the facial expression typically associated with the emotional context.
Although Experiment 1 showed that isolated disgust faces were poorly recognized by the HDMC, the emotional categorization of these same faces was not indiscriminately swayed by the context images. Rather, like controls, the HDMC were far more influenced by the angry context than by the sad context. Hence, our implicit measures suggest that while the HDMC were impaired at the explicit recognition of disgust, their low-level perception of facial expressions of disgust, as well as their ability to use this information in later processes, is intact.
Furthermore, we found evidence that contexts conveying disgust similarly affected the recognition of anger and sadness faces in both the HDMC and control groups. Specifically, the recognition of anger and sadness expressions was dramatically influenced by the disgust context in a manner which was highly comparable between groups. In both groups, this resulted in a striking inflation of mis-categorization errors that occur quite rarely when the faces are presented in isolation.
In concert, the present findings shed light on some of the underlying causes of the deficit in disgust-face recognition in HDMC. In particular, they suggest that this deficit is not due to a low-level impairment in the perceptual processing of the facial features or configuration, nor to loose associations between the physical features of the disgust facial configuration and its representation. Had the low-level perceptual processes been impaired, the facial expression of disgust would not have been consequential for the categorization of the disgust faces. Furthermore, had the association between the physical features and the representation of disgust been disrupted in HDMC, they would not have been able to activate that representation upon perceiving the disgust context.
This pattern suggests that the impaired recognition of emotion from disgust facial expressions in Huntington's disease mutation-carriers results from an impaired ability to explicitly map the intact perceptual representation of the face to the emotional category of disgust. Similar interpretations have been offered to explain the deficits associated with impaired face recognition in some patients with prosopagnosia (e.g. Sergent and Signoret, 1992; de Gelder et al., 2000), impaired object recognition in some patients with visual object agnosia (e.g. Feinberg et al., 1995; Aviezer et al., 2007) and impaired word recognition in some patients with acquired dyslexia (e.g. Feinberg et al., 1995; Buchanan et al., 2003).
Evidence that the difficulties observed in HDMC occur at a higher-level mapping stage, rather than at a low-level perception stage, might have interesting clinical implications. For example, HDMC participants might be able to benefit from training targeted at correctly linking the facial configuration to its respective emotional category. Learning to identify the explicit link between facial expressions and emotional scenes and body language might prove particularly fruitful as a method of alleviating the ramifications of such deficits in real-life situations.
This work was funded by a CIHR grant to Moscovitch, an NIMH grant R01 MH 64458 to Bentin, and an ISF grant 846/03 to Hassin. We thank Alice Kim, and Lucy McGarry for their skilful help in coordinating and running the experiments. Most importantly, we thank the participants from North York General Hospital for their helpful cooperation and interest in our study.