Studies investigating the ability to recognize emotional facial expressions in non-demented individuals with Parkinson’s disease (PD) have yielded equivocal findings. A possible reason for this variability may lie in the confounding of emotion recognition with cognitive task requirements, a confound arising from the lack of a control condition using non-emotional stimuli. The present study examined emotional facial expression recognition abilities in 20 non-demented patients with PD and 23 control participants relative to their performances on a non-emotional landscape categorization test with comparable task requirements. We found that PD participants were normal on the control task but exhibited selective impairments in the recognition of facial emotion, specifically for anger (driven by those with right hemisphere pathology) and surprise (driven by those with left hemisphere pathology), even when controlling for depression level. Male but not female PD participants further displayed specific deficits in the recognition of fearful expressions. We suggest that the neural substrates that may subserve these impairments include the ventral striatum, amygdala, and prefrontal cortices. Finally, we observed that in PD participants, deficiencies in facial emotion recognition correlated with higher levels of interpersonal distress, which calls attention to the significant psychosocial impact that facial emotion recognition impairments may have on individuals with PD.
One of the most basic elements of emotional functioning, and one of the components most critical to social behavior, is the recognition of the emotional states of others (Darwin, 1872/1965). In Parkinson’s disease (PD), facial emotion identification deficits have been reported in several studies (Dujardin et al., 2004; Jacobs, Shuren, Bowers, & Heilman, 1995; Kan, Kawamura, Hasegawa, Mochizuki, & Nakamura, 2002; Lawrence, Goerendt, & Brooks, 2007; Sprengelmeyer et al., 2003), though not in all (Adolphs, Schul, & Tranel, 1998; Pell & Leonard, 2005). Where deficits are found, there is as yet little consensus as to whether or not they apply to the recognition of specific emotions. Kan and colleagues (2002) noted deficits in the recognition of fear and disgust in medicated PD participants. Sprengelmeyer et al. (2003) found that the recognition of anger and fear was disrupted in medicated PD participants, and that the recognition of fear, sadness, disgust, and anger was impaired in unmedicated PD patients. Dujardin and colleagues (2004) observed that unmedicated PD participants were less accurate than healthy participants in perceiving facial expressions of anger, sadness, and disgust. More recently, Lawrence et al. (2007) reported that the recognition of anger was impaired in PD patients who had been temporarily removed from dopamine replacement therapy.
It is generally argued that abnormalities in facial emotion recognition in PD arise from the loss of dopaminergic neurons and the resulting dysfunction of fronto-subcortical systems (e.g., Dujardin et al., 2004; Lawrence et al., 2007; Sprengelmeyer et al., 2003). With growing evidence that dissociable neural substrates are involved in the recognition of different emotions (Adolphs, 2002; Posamentier & Abdi, 2003), gaining greater clarity on emotion recognition impairments in PD would help ascertain which neural substrates underlie these deficits. The inconsistency in the literature may be related to the variability in medication status of PD patients across studies; however, medication status may not explain these inconsistencies entirely. Notably, it has been suggested that the typical methods of investigating emotion recognition, particularly in neurologic patient populations, may result in artifactual findings that are related to task difficulty factors rather than to impairments in emotion recognition abilities arising from disruption of specific neuroanatomical structures (Rapcsak et al., 2000). Inconsistent findings in the PD literature may reflect differences in difficulty levels across emotions within a study. This effect may be compounded by differences in the difficulty level of the stimuli presented within each category of emotion, which likely vary among studies.
A second source of inconsistency may come from the confounding of emotion recognition abilities with unrelated task requirements. Executive function impairments, a hallmark of frontal dysfunction, are commonly noted in non-demented PD patients (e.g., Zgaljardic, Borod, Foldi, & Mattis, 2003). Frontally mediated impairments, mainly in decision making and categorization, may affect PD patients’ performance on measures of facial emotion recognition. PD patients have shown impairments on tasks of explicit decision making (e.g., the Game of Dice task), which correlated with decreases in executive functioning (Brand et al., 2004), and on tests of implicit decision making (e.g., the Iowa Gambling Task), in which PET imaging revealed decreased activation in the orbitofrontal cortex (Thiel et al., 2003). Several studies have indicated that the ability to learn categorization rules is impaired in PD (e.g., Ashby, Noble, Filoteo, Waldron, & Ell, 2003; Knowlton, Mangels, & Squire, 1996; Maddox, Aparicio, Marchant, & Ivry, 2005; Maddox & Filoteo, 2001; Price, 2006). Most recently, Filoteo and colleagues (2007) suggested that the presence of extraneous stimulus features may impair PD patients’ performance on categorization tasks by increasing demands on selective attention processes.
Taken together, these findings are highly relevant to studies of emotion identification that require participants to categorize facial expressions. The methods most often used to assess emotion recognition abilities require that the participant call upon several skills known to be disrupted in PD. Without a suitable control task, it is difficult to ascertain whether PD participants’ poor performance on tasks of emotion recognition is due to an inability to identify emotions, or whether their difficulties arise from deficits in decision making, categorization skills, or the ability to identify the most salient features demarcating category boundaries. Impairments in the recognition of specific emotions may be related to any of these three possibilities, as each could result in increased error rates.
To help resolve these issues, the present study examined the facial emotion recognition abilities of PD and healthy normal control participants relative to performance on a non-emotional categorization test with comparable task requirements. Landscapes were chosen because they provided a sufficient number of image categories, and, like faces, they are mono-oriented and are composed of several smaller elements that can be individually assessed and integrated when categorizing the image.
A further aim of this study was to examine the relation between emotion recognition and body side of motor onset in PD. Motor symptoms in PD usually have a unilateral onset, and this pattern generally persists throughout the progression of the disease (Lee et al., 1995). There is evidence to suggest that this asymmetry is associated with reduced dopamine levels and abnormalities in the dopamine receptors of the contralateral hemisphere (Innis et al., 1993; Kempster, Gibb, Stern, & Lees, 1989), which persist even after motor symptoms have progressed to a more bilateral presentation (Antonini et al., 1995). Because differences in side of motor onset are associated with dysfunction of hemisphere-specific cognitive abilities in PD (e.g., Amick, Grace, & Chou, 2006; Amick, Schendan, Ganis, & Cronin-Golomb, 2006), and because the right hemisphere is thought to be more active than the left in processing emotional material, we examined the performance of PD participants with right and left motor symptom onset separately. We also examined whether men and women with PD display differences in emotion recognition, as men and women with PD may experience different disease effects (e.g., Haaxma et al., 2007; Shulman, 2007), and numerous studies have shown male-female differences in emotion recognition abilities (e.g., Hall, Carter, Horgan, & Fischer, 2000; Hall & Matsumoto, 2004; Thayer & Johnsen, 2000).
A final aim was to assess whether impairments in emotion recognition, if identified, are associated with increased difficulties in interpersonal relationships. Research in other neuropsychiatric patient populations suggests that abnormalities in facial emotion recognition correlate with declines in interpersonal interactions (Kornreich et al., 2002; Shimokawa et al., 2001). We hypothesized that similar reductions in social interaction may occur in PD patients. Based on reports that female PD patients, compared to male PD patients, tend to endorse more difficulties on overall quality of life measures, which also address social functioning (Behari, Srivastava, & Pandey, 2005; Kuopio, Marttila, Helenius, Toivonen, & Rinne, 2000; Shulman, 2007), we hypothesized that female PD patients in our group would report more interpersonal difficulties than male PD patients.
Participants included 20 individuals with idiopathic PD (10 men, 10 women) and 23 healthy control (HC) participants (11 men, 12 women). The groups did not differ with respect to age or education and were not demented. All attained scores above the cutoff of 27 on the Mini-Mental Status Exam (Folstein, Folstein, & McHugh, 1975). PD participants additionally scored above our cutoff of 136 on the Dementia Rating Scale-2 (DRS-2) (Jurica, Leitten, & Mattis, 2001), which is well above the clinical cutoff for dementia. See Table 1 for details of participant characteristics. Participants with PD were recruited from the Parkinson’s Disease Clinic at the Boston Medical Center and through support groups; the HC group was recruited from the community. All participants were right-handed except for two PD and one HC participant. All were required to be native speakers of English. Exclusion criteria were history of uncorrected abnormal vision or hearing; psychiatric illness (including diagnosis of depression or anxiety); neurological illness other than PD; history of intracranial surgery, alcoholism or other drug abuse, or eye disease; and use of psychoactive medications, except for antidepressants and anxiolytics in the PD group, owing to their common usage. The research was approved by Boston University’s Institutional Review Board. All individuals gave their informed consent and were financially compensated for their time.
PD participants were staged by their neurologist according to a measure of motor disability (Hoehn & Yahr, 2001). At the time of testing, all PD participants were in stages II–III (mild to moderate bilateral disability); the average duration of disease was 7.3 years (SD = 4.2). Nine PD participants had right body side onset of motor symptoms (RPD: 4 men, 5 women) and 11 had left-side onset (LPD: 6 men, 5 women). RPD and LPD did not differ in age, education, mental status (MMSE or DRS-2), duration of disease diagnosis, or Hoehn and Yahr scores, nor did male and female PD participants differ on these variables. All PD participants were receiving daily dopamine replacement therapy and/or dopamine receptor agonists; all were tested while being administered their anti-parkinsonian medications (i.e., during their “on state”).
Current levels of depression and anxiety were estimated using the Beck Depression Inventory, 2nd ed. (BDI) (Beck, Steer, & Brown, 1996) and the Beck Anxiety Inventory (BAI) (Beck, Epstein, Brown, & Steer, 1988). The PD and HC groups differed significantly in reported depression (t = 3.03, p < .01) and anxiety symptoms (t = 3.18, p = .001). It is important to note that the mean BDI scores for both groups were in the clinically normal range (0–13), and their mean BAI scores were in the minimal to mild range (0–15). RPD and LPD participants did not differ in their ratings on the BDI or BAI, nor did male and female PD or HC participants show differences on these measures.
Each participant demonstrated binocular near acuity of 20/40 or better at a distance of 45.7 cm. Basic visual processing was further assessed using the near distance Functional Acuity Contrast Test (FACT) (Ginsburg, 1996). The FACT provides a measure of contrast sensitivity across five spatial frequencies (1.5, 3, 6, 12, and 18 cycles per degree), by assessing the degree of contrast at which an individual can correctly detect the orientation of sinusoidal gratings at each spatial frequency from a distance of 45.7 cm. PD and HC participants did not differ in their performance on acuity or contrast sensitivity measures; similarly, there was no difference between RPD and LPD participants’ performances on these measures (all p values > .05). Male and female PD participants did not differ on acuity (t = .97, p = .34).
The 16-item version of the Benton Test of Facial Recognition (Benton, Sivan, Hamsher, Varney, & Spreen, 1994) was administered to assess general face perception abilities. This task presents participants with target photographs of unfamiliar faces and requires that they identify images of the target in a selection of six photographs that have been taken from different angles or under different lighting conditions. The PD and HC groups’ raw scores did not differ (t = −0.42, p = .68). A one-way analysis of variance (ANOVA) was used to compare the RPD, LPD, and HC groups’ performances. No group differences were observed (F [2, 39] = .45, p = .64). Group means (and standard deviations) were as follows: PD = 22.5 (2.1); RPD = 22.0 (2.4); LPD = 22.8 (1.9); HC = 22.7 (2.0).
The landscape database was designed to be comparable in difficulty level and recognizability to the facial images used in the emotion recognition task (Ekman & Friesen, 1976), as judged by 28 healthy, non-depressed, non-anxious, independent observers (9 men, 19 women), who had a mean age of 19.5 years (range: 18–26) and a mean education of 14.1 years (range: 11–21). We presented 366 landscape images from an inventory of commercially available color images (Corel Stock Photography library), which we modified into black-and-white format. Human figures were digitally removed from all images in order to reduce the possibility that any aspect of the landscape images would activate neural pathways specifically associated with human figure or face perception. Along with the set of landscape images we presented 110 altered photos from the Ekman and Friesen Pictures of Facial Affect database. Facial images depicted the facial area without distracting elements such as hair, jewelry, and ears.
Order of presentation of the image sets was counterbalanced. Images were presented either on the computer screen, one at a time, or on 8 ½″ by 11″ paper, one image per page. The young observers were asked to categorize the facial emotion from seven labels (Angry, Disgust, Fear, Happy, Neutral, Sad, and Surprise) and to categorize the type of landscape presented from a list of landscape labels (Arctic, Canyon, City, Countryside, Desert, Forest, Lake, Mountain, Plain, Shore, Town, Tropical, Valley, and Other) displayed below the target stimulus.
We selected twelve facial images (6 male, 6 female) from each emotion category with the highest identification accuracy rates in our young observers. For the landscape images, we selected those that were judged as being representative of the following seven categories: Canyon, City, Forest, Mountain, Shore, Town, and Tropical. We calculated the percent correct recognition for each landscape image, with the correct category of classification defined as the modal response across observers. Each facial image was matched to a landscape image according to identification accuracy rates of the young observers. We noted that facial images of certain emotions were somewhat difficult to distinguish from each other (e.g., Fear and Surprise; Disgust and Anger). Pairs of emotions that were difficult to differentiate were matched to pairs of landscape categories that were similarly difficult to discern (e.g., Surprise and Fear: Shore and Tropical; Anger and Disgust: City and Town).
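The mode-based scoring rule described above can be sketched briefly; this is an illustrative reconstruction (the function name and example labels are hypothetical, not taken from the study materials):

```python
from collections import Counter

def score_landscape_image(observer_labels):
    """Score one landscape image from a list of observer category labels.

    The 'correct' category is defined as the modal (most frequent) label,
    and the image's accuracy rate is the proportion of observers who chose it.
    """
    mode_label, mode_count = Counter(observer_labels).most_common(1)[0]
    return mode_label, mode_count / len(observer_labels)

# Hypothetical responses from five observers for one image:
labels = ["Shore", "Shore", "Tropical", "Shore", "Lake"]
print(score_landscape_image(labels))  # ('Shore', 0.6)
```

Images scored this way could then be paired with facial images of comparable accuracy rates, as described in the matching procedure.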
Participants completed a word-definition matching task to ensure that all participants were able to adequately conceptualize the meaning of the terms used for responses in the emotion recognition and landscape categorization tasks. All participants were able to correctly match emotion and landscape words with their definitions. Presentation of the following tasks was counterbalanced across participants.
In the facial emotion recognition task, participants viewed 70 black-and-white photographs taken from the described database. Ten (5 male, 5 female) images were selected from each of the following six prototypic emotion categories: Angry, Disgust, Fear, Happy, Sad, and Surprise, plus Neutral. Participants were asked to select the emotion represented in the face from seven emotion labels (Angry, Disgust, Fear, Happy, Neutral, Sad, and Surprise), which were displayed below each face.
In the landscape categorization task, participants viewed 70 black-and-white photographs taken from the described database. The image set included 10 images from each of the following categories: Canyon, City, Forest, Mountain, Shore, Town, and Tropical. The set included images in portrait and landscape orientations. Participants were asked to select the landscape category that best described the image from the list of seven landscape categories (Canyon, City, Forest, Mountain, Shore, Town, and Tropical), which was provided below each image (see Figure 1).
Both tasks were performed in a darkened room after participants were dark adapted for 10 minutes to ensure maximal performance. Stimuli were viewed on a 17-in. (43.2 cm) monitor. A chin rest and padded forehead bar maintained a constant distance of 45 cm from the screen. Facial images subtended a vertical visual angle of 19.3 to 25.6° and a horizontal visual angle of 13.0 to 18.8°. Landscape images subtended a vertical visual angle of 17.2 to 25.6° and a horizontal visual angle of 17.2 to 26.1°. SuperLab was used to present the stimuli in a randomized order. There was no time limit on how long each image could be viewed. Responses were given orally and were entered into the computer by the experimenter. The percentage of images correctly identified was calculated for each image category.
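For readers reconstructing the stimulus geometry, the reported visual angles follow from the standard formula relating on-screen size to viewing distance; a minimal sketch (the 15.3-cm image height is a back-calculated illustration, not a value reported above):

```python
import math

def visual_angle_deg(stimulus_size_cm: float, viewing_distance_cm: float) -> float:
    """Visual angle subtended by a stimulus at a given viewing distance.

    Uses the standard formula: angle = 2 * atan(size / (2 * distance)).
    """
    return math.degrees(2 * math.atan(stimulus_size_cm / (2 * viewing_distance_cm)))

# At the 45-cm viewing distance, an image roughly 15.3 cm tall subtends
# approximately the 19.3-degree vertical angle reported for the smallest
# facial stimuli.
print(round(visual_angle_deg(15.3, 45.0), 1))
```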
We administered a test of prosodic emotion (i.e., auditory emotion) recognition to all participants. We utilized a modified version of the prosodic portion of the New York Emotion Battery (Borod et al., 1998; Borod, Welkowitz, & Obler, 1992). We administered only those sentences that expressed the emotions of Anger, Disgust, Fear, Happy, Sad, and Positive Surprise. All stimuli were presented at the highest volume setting on the audiotape player, which was set at a standard distance of 1 m away from the participant. All participants denied having any difficulty hearing the auditory stimuli.
For each of the six prosodic emotions, three tape-recorded sentences were presented in one of two pseudo-randomized orders (Borod et al., 1992). The content of each sentence was neutral, whereas the prosody of the sentence varied based on emotion. Each sentence was presented twice in succession, after which the participant identified the emotion expressed from a list of emotions (Anger, Disgust, Fear, Happy, Sad, and Positive Surprise) presented on an 8.5-in. by 11-in. card. Responses were given orally and recorded by the examiner. The overall accuracy of identification was calculated (total number correct out of 18), as well as the accuracy of identification for each emotion (total number correct out of 3).
We administered the 64-item version of the Inventory of Interpersonal Problems questionnaire (IIP), which is a self-report instrument that was designed to assess levels of distress in relation to interpersonal interactions (Horowitz, Alden, Wiggins, & Pincus, 2000; Horowitz, Rosenberg, Baer, Ureno, & Villasenor, 1988). This instrument has been successfully used to identify interpersonal difficulties in neuropsychiatric patients (e.g., Kornreich et al., 2002; Pelc, Kornreich, Foisy, & Dan, 2006). The IIP consists of 8 subscales (controlling, vindictive/angry, distant, socially avoidant, nonassertive, exploitable/overly accommodating, self-sacrificing, and needy), each measured by 8 items answered on a 5-point Likert scale, ranging from 0 (indicating that the activity or behavior is “not at all” difficult) to 4 (indicating that the activity is “extremely” difficult). Accordingly, higher scores indicate higher rates of distress. For each item, participants indicated how much difficulty they experienced with respect to any of their significant relationships. Raw scores for each of the 8 individual scales were used in the analyses.
The analyses that follow consist of mixed design ANOVAs, followed by posthoc analyses when appropriate, and Pearson correlations using a conservative alpha level of .01. For clarity, only those analyses revealing significant effects of interest are reported.
Our primary aim was to compare the PD and HC groups’ ability to identify facial emotions to their ability to classify non-emotional images of equal difficulty level. We used paired samples t-tests to compare the HC group’s accuracy scores on the matched facial and landscape image categories. Accuracy scores on matched categories were comparable, with the exception of accuracy scores for Fearful images, which were lower than the accuracy scores for the matched landscape category for this expression (i.e., Tropical) (t = −3.23, p < .01) (see Table 2). We determined that Town images were a better match for Fearful images, as the HC group’s accuracy scores did not differ significantly for these categories (t = −.45, p = .66).
Mixed design ANOVAs with factors of group (PD, HC) and stimuli (matched emotion and landscape categories) were conducted to compare the PD and HC groups’ ability to correctly identify images from each category of facial expression to its matched landscape category. For Angry faces and City landscapes, there was a significant interaction between group and stimuli (F [1, 41] = 4.54, p < .05). Posthoc tests indicated that PD participants were less accurate than HC participants at identifying Anger (t = −2.68, p < .05). For Surprise and Shore images, there was a significant group by stimuli interaction (F [1, 41] = 5.25, p < .05). Posthoc tests showed that PD participants were significantly less accurate than HC participants at identifying facial expressions of Surprise (t = −3.10, p < .01). The above findings were replicated when group performances on the emotion recognition task alone were analyzed using a more conventional method, by comparing their abilities to recognize emotions across all emotion types (see Figure 2).
To compare the two groups’ overall accuracy at identifying emotions versus landscapes, we conducted a mixed design ANOVA with factors of group (PD, HC) and stimuli (all emotions combined, all landscapes combined). The analysis showed a significant main effect of group (F [1, 41] = 4.91, p < .05). Posthoc tests revealed a trend for the PD participants to be less accurate than HC at identifying emotional expressions (t = −1.96, p = .057).
The RPD, LPD, and HC groups’ performances on the emotion recognition task were compared against their performances on the landscape categorization task by performing mixed design ANOVAs followed by posthoc analyses when appropriate. Significant findings revealed that compared to the HC group, the LPD group displayed specific impairments in the recognition of both facial emotion (Angry: F [2, 40] = 4.26, p < .05; posthoc, p < .05) and landscapes (Forest: F [2, 40] = 3.65, p < .05; posthoc, p < .05). Compared to the HC and LPD groups, the RPD group displayed a specific impairment in the recognition of facial emotion only (Surprise: F [2, 40] = 10.26, p < .001; posthoc, p < .01, p < .05, respectively).
We conducted mixed design ANOVAs with factors of group (PD, HC), gender (male, female), and stimuli (matched emotion and landscape categories). Posthoc analyses conducted on the significant interaction effect between group and gender (F [1, 39] = 15.61, p < .001) revealed that men with PD were significantly less accurate at identifying Fearful images than were women with PD (t = −3.22, p < .01) and HC men (t = −3.52, p < .01). By contrast, HC women were less accurate than HC men at identifying Fearful expressions (t = 2.22, p < .05) (see Figure 4).
In the PD group, there was a significant negative correlation between BDI and BAI scores and recognition of Sad expressions (r = −.68, p = .001; r = −.73, p < .001, respectively). BDI was also negatively correlated with emotion recognition across all facial expressions combined (r = −.592, p < .01). None of the correlations for BDI and BAI were significant in the HC group (all r’s < .35, p’s > .10). Two separate mixed design analyses of covariance were performed to examine whether participants’ depression and anxiety scores affected emotion recognition. Although a correlation existed between emotion recognition accuracy and BAI in the PD group, the covariate of BAI was not significant (p = .07). After controlling for depression ratings, the PD group was still less accurate at identifying Angry expressions (F [1, 40] = 3.65, p < .05) (one-tailed), and expressions of Surprise (F [1, 40] = 8.18, p < .01) (one-tailed).
There was no significant correlation between facial emotion recognition ability and any of the following for either the PD or HC group: age, education, acuity, contrast sensitivity, facial recognition ability (Benton Test of Facial Recognition), or performance on the test of emotional prosody; nor additionally for PD in regard to motor symptom severity (Hoehn and Yahr stage) or disease duration (all r’s < .33, p’s > .01).
Performance on the emotional prosody recognition test was assessed by conducting a mixed design ANOVA with factors of group (PD, HC) and prosodic emotions (Angry, Disgust, Fear, Happy, Sad, Pleasant Surprise). Significant group differences were not observed (main effect of group: F [1, 41] = 1.13, p = .29; group by prosodic emotion interaction: F [5, 205] = .19, p = .97). Results of the ANOVA comparing RPD, LPD, and HC on this measure showed no group differences (main effect of group: F [2, 40] = 1.24, p = .30; group by emotion interaction: F [10, 200] = .99, p = .45).
We hypothesized that PD participants would report higher rates of interpersonal difficulties than HC participants and that female PD participants would report higher rates of interpersonal distress than male PD participants. We conducted a mixed design ANOVA with factors of group (PD, HC), gender (male, female), and IIP scale. Results revealed a significant group by scale by gender interaction (F [3.2, 124.4] = 3.22, p < .05). Individual two-way ANOVAs, with group and gender as between subjects factors, identified a significant main effect of group (F [1, 39] = 4.34, p < .05, one-tailed) and a significant group by gender interaction (F [1, 39] = 3.2, p < .05, one-tailed) for scale 5 (difficulties with self-assertion), as well as a significant group by gender interaction for scale 6 (over-accommodating behavior; F [1, 39] = 3.34, p < .05, one-tailed). Because men and women are known to report different levels of difficulty on the IIP (Horowitz et al., 2000), we compared male PD and HC participants separately from female PD and HC participants in posthoc comparisons using independent groups t-tests. On scale 5, PD participants reported higher rates of distress than HC participants (t = 2.06, p < .05). This effect appeared to be driven by the women: on scales 5 and 6, women with PD reported significantly higher rates of distress than did HC women (t = 2.47, p < .05, and t = 2.16, p < .05, respectively). Mean group ratings on the IIP scales are presented in Table 3.
Correlations were computed to investigate the association between reports of interpersonal problems and facial emotion recognition abilities in the PD and HC groups. In HC, emotion recognition abilities were not significantly correlated with reports of interpersonal problems (p > .05 for all correlations). In PD participants, recognition of Fearful faces was negatively correlated with ratings of anger and frustration in social relationships (scale 2, r = −.685, p < .01) and ratings of difficulties with social connectedness (scale 3, r = −.603, p < .01). PD participants’ ability to recognize Sad faces was negatively correlated with levels of controlling behaviors (scale 1, r = −.593, p ≤ .01), difficulties related to social connections with others (scale 3, r = −.594, p ≤ .01), and ratings of the desire to engage or connect with others (scale 8, r = −.624, p < .01). PD participants’ ability to recognize emotions in general (i.e., their overall performance on the emotion recognition task across all emotions) was negatively correlated with rates of frustration in interpersonal relationships (scale 2, r = −.579, p < .01) and difficulties related to social connections with others (scale 3, r = −.725, p < .01).
In HC men, emotion recognition abilities did not correlate significantly with reports of interpersonal problems. In men with PD, the ability to recognize Fearful faces was negatively correlated with rates of discomfort in social situations (scale 4, r = −.801, p < .01). In women with PD, the ability to recognize Sad expressions was negatively correlated with ratings of difficulties with social connections (scale 3, r = −.829, p < .01). Recognition of Sad expressions was also negatively correlated with ratings of the desire to engage or connect with others (scale 8, r = −.747, p ≤ .01). The ability to recognize emotional faces, across all facial expressions combined, was negatively correlated with rates of frustration in interpersonal relationships (scale 2, r = −.810, p < .01).
The current study examined the ability to classify expressions of facial emotion in non-demented PD and HC participants relative to their ability to categorize a set of non-emotional stimuli with comparable task requirements. We controlled for two basic processes that underlie successful performance on tasks of facial emotion recognition that employ a forced-choice response: (1) differences in difficulty levels across emotions, and (2) decision making and categorization abilities. These are important steps in assessing emotion recognition abilities in PD patients in particular, considering reports that facial emotion recognition impairments in PD can be attributed to difficulty factors (Rapcsak et al., 2000) and evidence that suggests individuals with PD display impairments in decision making and categorization abilities (e.g., Brand et al., 2004; Filoteo et al., 2007).
We found that PD participants in this study displayed impairments in the recognition of anger and surprise facial expressions. Further, men with PD were less accurate than HC men and women with PD in the recognition of fearful expressions. We also observed a trend for the PD participants to be less accurate at identifying emotional expressions overall. These results are in line with previous studies that have reported impaired facial emotion recognition abilities in medicated PD patients (e.g., Blonder, Gur, & Gur, 1989; Sprengelmeyer et al., 2003). With one exception, PD participants did not display impairments on the non-emotional categorization task, contrary to what would be predicted from findings of studies assessing categorization abilities in PD (e.g., Ashby et al., 2003; Knowlton et al., 1996; Maddox et al., 2005; Maddox & Filoteo, 2001; Price, 2006). One reason for this may be that many of these studies assessed the ability to categorize stimuli using more abstract categorization rules, which are often determined by the researcher and must be learned by the participant during the study, whereas in the present study we tested participants’ abilities to identify stimuli using their own previously established categorization rules for what aspects of a stimulus best determine its category membership. Nevertheless, it is likely that difficulties in decision making and categorization abilities contribute to some of the variability in performance on these tasks, as we observed that LPD participants displayed minor difficulties in identifying non-emotional stimuli. This finding underscores the necessity of including a non-emotional control measure in studies of emotion recognition.
The impairments in facial emotion recognition displayed by our PD participants were unrelated to factors such as age, education level, motor symptom severity, and duration of disease. This is consistent with previous reports suggesting that emotion recognition is unrelated to motor impairment in early PD (e.g., Dujardin et al., 2004). We included participants who exhibited mild to moderate bilateral motor disability (Hoehn & Yahr stage II–III). More severely affected individuals, who may also have more extensive difficulties in mood and cognition (including decision making and categorization), may well show a different pattern of performance.
Another finding from our study is that difficulties in emotion recognition in PD appeared to be limited to the visual domain, as the PD and HC groups did not differ in their ability to recognize prosodic emotion. Though studies using methods different from ours are mixed as to whether emotional prosody is (Pell & Leonard, 2003) or is not affected in PD (Benke, Bosch, & Andree, 1998), our findings suggest that even when emotional prosody recognition is normal, impairments in visually-mediated facial emotion recognition can still be detected in the same individuals. Our PD sample was normal in visual acuity and contrast sensitivity as indexed by a non-threshold measure (FACT), and in basic face recognition (Benton Test of Facial Recognition). The latter is consistent with reports of impaired facial emotion recognition abilities in PD patients in the absence of observable difficulties in facial perception (Lawrence et al., 2007; Sprengelmeyer et al., 2003). One potential visually-based source of dysfunction may involve abnormalities in visual attention or scanning processes, which have been reported to interfere with the recognition of facial expressions in neuropsychiatric and neurologically normal older individuals (Adolphs et al., 2005; Wong, Cronin-Golomb, & Neargarder, 2005).
The finding that anger recognition was disrupted in individuals with PD in our sample is in line with previous studies that have reported impairments in anger recognition in this population (e.g., Dujardin et al., 2004; Lawrence et al., 2007; Sprengelmeyer et al., 2003). A likely source of this impairment is disruption of the ventral striatum, which is centrally located within the basal ganglia's limbic loop. Models describing the organization of this loop suggest that the orbitofrontal cortex, anterior cingulate, entorhinal cortex, amygdala, and hippocampus all project onto the ventral striatum, which in turn projects to the ventral pallidum. Output information is then relayed to dorsal medial thalamic nuclei, as well as the hypothalamus and amygdala (Alexander & Crutcher, 1990; Alexander, Crutcher, & DeLong, 1990; Alexander, DeLong, & Strick, 1986; Parent, 1990). Projections from the ventral pallidum and dorsal medial thalamic nucleus extend to prefrontal and orbitofrontal regions (Heimer, de Olmos, Alheid, & Zaborszky, 1991). Considering the placement of the ventral striatum within the limbic loop and the strong dopaminergic projections to the ventral striatum via the ventral tegmental area, which along with the substantia nigra is affected in PD (Braak et al., 2003), dysfunction in the ventral striatum may contribute to a breakdown of information integration that leads to impairments in anger recognition in PD (e.g., Lawrence et al., 2007; Sprengelmeyer et al., 2003). This idea is supported by findings from lesion and fMRI studies suggesting a role of the ventral striatum in processing angry expressions (Calder, Keane, Lawrence, & Manes, 2004; Phillips et al., 1999).
Results from imaging studies also point to the involvement of other regions in anger recognition, as it has been shown that the perception of angry faces is associated with increased activation in orbitofrontal regions (Blair, Morris, Frith, Perrett, & Dolan, 1999), the anterior cingulate (Blair et al., 1999; Sprengelmeyer, Rausch, Eysel, & Przuntek, 1998), and the amygdala (Adams, Gordon, Baird, Ambady, & Kleck, 2003; Britton, Taylor, Sudheimer, & Liberzon, 2006; Hariri, Tessitore, Mattay, Fera, & Weinberger, 2002; Whalen et al., 2001). Further, impairments in anger recognition have been observed in patients with lesions of the orbitofrontal cortex (Blair & Cipolotti, 2000) and the amygdala (e.g., Calder et al., 1996; Sato et al., 2002). In light of these findings, one might envision that dysfunction in regions other than the ventral striatum, such as in the orbitofrontal cortex or the amygdala, may also result in anger recognition impairments in PD. Support for this possibility is found in studies reporting reduced dopaminergic binding sites in the orbitofrontal cortex and amygdala in early PD (Ouchi et al., 1999), neuropathological changes in the amygdala of PD patients (Braak et al., 1994; Mattila, Rinne, Helenius, & Roytta, 1999) and reduced amygdala activity in persons with PD in response to viewing angry faces (Tessitore et al., 2002).
The present data indicate that body side of motor onset has a differential effect on PD patients’ emotion recognition abilities. Compared with Blonder and colleagues (1989), who reported no LPD/RPD differences, we presented a greater number of facial emotion stimuli, which may have allowed for increased sensitivity to group differences. Our finding that LPD participants were less accurate than HC at identifying angry facial expressions suggests a specific contribution of the right hemisphere to the recognition of angry expressions. This is consistent with studies implicating a greater right than left hemisphere contribution to the perception of negative emotions (e.g., Adolphs, Jansari, & Tranel, 2001), and observations of greater activation of limbic structures in the right than left hemisphere (i.e., right orbitofrontal cortex, cingulate gyrus, and amygdala) when viewing angry faces (Blair et al., 1999; Hariri et al., 2002; Sprengelmeyer et al., 1998; Yang et al., 2002).
We found that PD participants were less accurate than HC at identifying facial expressions of surprise, and that this was driven by the RPD group. To our knowledge, this is the first study to report a specific impairment in the recognition of surprise facial expressions in PD. Our controlled design allows us to rule out the possibility that this finding is secondary to difficulty factors. Ours is not the first study to report impairments in the recognition of surprised expressions in patients with fronto-subcortical dysfunction. Individuals with Huntington’s disease (Sprengelmeyer et al., 1996), which also affects the basal ganglia, as well as individuals with Korsakoff’s syndrome (Montagne, Kessels, Wester, & de Haan, 2006), which is characterized by frontolimbic pathology, display impairments in recognizing facial expressions of surprise.
There is substantial evidence that the left hemisphere is more important than the right in mediating positive emotions (Ahern & Schwartz, 1979; Davidson & Fox, 1982; Natale, Gur, & Gur, 1983; Reuter-Lorenz & Davidson, 1981; for review see Silberman & Weingartner, 1986). Disruption of left-hemisphere processes in RPD patients, possibly involving fronto-subcortical structures, may result in a more negative view of facial expressions of surprise. We suggest that surprise expressions would be more susceptible to this negative bias than happy expressions because of their ambiguous valence. A perceiver who views surprised expressions in a more negative light would be more prone to misinterpret them as fearful, which is the pattern that we observed in RPD participants.
We also observed that men with PD, compared to women with PD and HC men, displayed specific impairments in the recognition of fearful expressions. It is possible that male PD patients experience greater pathology in regions that support the recognition of fearful facial expressions, which relies heavily on the amygdala (Adolphs, Tranel, Damasio, & Damasio, 1994; Phillips et al., 1997; Sato et al., 2002; Thomas et al., 2001), but research is needed to investigate this idea. Taken together, these findings underscore the need to consider male-female differences and body side of motor-symptom onset when assessing emotion recognition abilities in individuals with PD.
Compared to the HC group, PD participants reported higher levels of depression and anxiety symptoms, which is in line with studies showing increased rates of depression and anxiety in PD (Cummings & Masterman, 1999; Richard et al., 2004; Richard, Schiffer, & Kurlan, 1996). Although our analyses revealed that PD participants’ impairments in the recognition of anger and surprise were unrelated to anxiety scores, and were evident even after controlling for depression, we found that higher rates of depression in PD participants were related to decreases in the ability to recognize fearful and sad expressions.
Many studies report that depression does not enhance the ability to identify negative (i.e., sad) expressions (e.g., Gur et al., 1992; Leppanen, Milders, Bell, Terriere, & Hietanen, 2004; Mikhailova, Vladimirova, Iznak, Tsusulkovskaya, & Sushko, 1996), and facial emotion recognition impairments have been identified in patients with depression (Feinberg, Rifkin, Schaffer, & Walker, 1986; Rubinow & Post, 1992). This research generally supports our current findings, which suggest that even low levels of depression symptoms may be associated with decreases in facial emotion recognition in PD. This finding is compelling, as it suggests that some of the emotion recognition difficulties experienced by patients with PD might be remediated with improved treatment of psychiatric symptoms.
Another new finding of our study is of group and gender differences in reports of interpersonal difficulties in PD. We found that PD participants reported greater difficulties with self-assertion than did HC participants. This effect appeared to be driven by women with PD, who reported greater difficulties with self-assertion as well as higher rates of distress related to engaging in overly accommodating behaviors in comparison to HC women. These findings are in line with studies showing that compared to men, women with PD report greater reductions in social functioning on quality of life measures (e.g., Behari et al., 2005; Kuopio et al., 2000). Notably, we observed that in PD participants, but not in HC participants, impairments in the ability to recognize facial emotions were associated with greater interpersonal difficulties, including higher levels of distress related to frustration in social relationships, feelings of social disconnection, desire to connect with others, feelings of social discomfort, and use of domineering or controlling behaviors.
This study demonstrates the utility of employing a simple control measure that allows the researcher to assess for the presence of emotion-specific impairments and confirm the absence of artifactual findings that could arise secondary to differences in task difficulty in studies of facial emotion recognition abilities. Using this method, we observed that non-demented individuals with PD displayed specific impairments in facial emotion recognition, mainly for anger (driven by individuals with LPD) and surprise (driven by individuals with RPD). We further found that men with PD demonstrated specific difficulties in the recognition of fearful facial expressions. We suggest that impairments in emotion recognition in individuals with PD may be related to the dysfunction of the ventral striatum, amygdala, and prefrontal cortices. Additionally, we report a relation between mood and emotion recognition abilities that suggests that improvements in the treatment of depression and anxiety symptoms may help to alleviate some, though probably not all, of the emotion recognition difficulties that individuals with PD experience. Such interventions may be of considerable value to these patients, as our data also indicate a relation between emotion recognition impairments and increased levels of distress in PD patients’ social relationships.
This work was supported by a Ruth L. Kirschstein National Research Service Award from the National Institute on Aging (grant F31 AG026166-02) (UC), by an American Psychological Association Minority Fellowship through the U.S. Department of Health and Human Services, Substance Abuse and Mental Health Services Administration (grant T06 SM13833) (UC), by a Clara Mayo Research Award from the Department of Psychology, Boston University (UC), and by National Institute of Neurological Disorders and Stroke grant R01 NS050446-01A2 (ACG). The study was presented in part at the annual meetings of the International Neuropsychological Society, 2007 and 2008. We thank all of the individuals who participated in this study. Marie Saint-Hilaire, MD, and the Neurology team at Boston University Medical Center Neurology Associates provided valuable aid in our recruitment efforts. We are also grateful to Cheryl Matthews, BS, Kelly O’Keefe, MA, Lena Tsui, MA, Roxanne Istrate, MA, Amanda Sacino, BS, Allison Applebaum, MA, Sigurros Davidsdottir, PhD, Tom Laudate, MA, and Bruce Reese, MA, for their assistance with this project.