Soc Cogn Affect Neurosci. 2017 August; 12(8): 1334–1341.
Published online 2017 April 28. doi: 10.1093/scan/nsx068
PMCID: PMC5597863

Mapping the impairment in decoding static facial expressions of emotion in prosopagnosia

Abstract

Acquired prosopagnosia is characterized by a deficit in face recognition due to diverse brain lesions, but interestingly most prosopagnosic patients suffering from posterior lesions use the mouth instead of the eyes for face identification. Whether this bias is present for the recognition of facial expressions of emotion has not yet been addressed. We tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions dedicated to facial expression recognition. PS relied mostly on the mouth to recognize facial expressions even when the eye area was the most diagnostic, and she directed most of her fixations towards the mouth. Her impairment was still largely present when she was instructed to look at the eyes, or when she was forced to look at them. Control participants showed a performance comparable to PS's when only the lower part of the face was available. These observations suggest that the deficits observed in PS with static images are not solely attentional, but are rooted at the level of facial information use. This study corroborates neuroimaging findings suggesting that the Occipital Face Area might play a critical role in extracting facial features that are integrated for both face identification and facial expression recognition in static images.

Keywords: prosopagnosia, facial expressions, bubbles, eye movements

Introduction

Acquired prosopagnosia is characterized by a deficit in recognizing familiar faces despite the absence of low-level visual impairment or severe cognitive deficits (e.g. Rossion, 2014). Prosopagnosia is generally observed after bilateral occipito-temporal lesions (e.g. Damasio et al., 1990) and less frequently following unilateral right damage (e.g. Landis et al., 1986; Pancaroglu et al., 2016; see Mattson et al., 2000 for a case with a unilateral left lesion). Although rare, pure prosopagnosia (i.e. a selective impairment in facial identification) offers rich, unique and valuable insight into the brain's face processing mechanisms.

In the last decade, three research groups have reported an impairment in information use from the eye region during face identification in acquired prosopagnosia (Bukach et al., 2008; Caldara et al., 2005; Pancaroglu et al., 2016). Using Bubbles (Gosselin and Schyns, 2001), a classification image (CI) technique that isolates the diagnostic information for visual tasks, Caldara and colleagues (2005) first reported that PS—one of the purest cases of prosopagnosia—relied almost exclusively on the mouth area rather than the eye region—the most diagnostic feature for face identification (Butler et al., 2010; Gosselin and Schyns, 2001; Sekuler et al., 2004). Along similar lines, Bukach and colleagues (2008) showed that two prosopagnosic patients—LR and LH—had normal sensitivity to the mouth but severe impairments in eye-based face discrimination. The systematic observation of a deficit in the visual processing of the eye region during face identification in these three patients suggests that this deficit might be central in prosopagnosia. Relatedly, Pancaroglu and colleagues (2016) reported on a cohort of 11 patients with acquired prosopagnosia and found that an impairment in eye processing was more typical of patients with occipitotemporal lesions than of those with anterior temporal lesions.

Importantly, however, the eye region is also highly diagnostic in other face processing tasks, such as gender categorization (Dupuis-Roy et al., 2009), dominance judgment (Dotsch and Todorov, 2012; Robinson et al., 2014) and recognition of specific facial expressions, e.g. fear and anger (Smith et al., 2005). Interestingly, an impairment in the recognition of static facial expressions has been observed in many cases of acquired prosopagnosia (e.g. Bowers et al., 1985; De Gelder et al., 2000; De Renzi and Di Pellegrino, 1998; Humphreys et al., 2007, 1993; see however Fox et al., 2011), raising the question of whether the deficit in processing information from the eye region observed in many patients is task-specific (i.e. restricted to face identification) or general. In the latter scenario, prosopagnosic patients should show a similar perceptual bias (i.e. reliance on the mouth) in other tasks, such as the recognition of facial expressions of emotion. Yet, this question remains to be addressed.

To this aim, we examined PS's visual information processing strategies in facial expression recognition using Bubbles and eye-tracking. PS's performance was also assessed with a homemade version of the facial expression megamix (Young et al., 1997) and compared with the performance of control participants while they categorized the same whole-face stimuli as well as stimuli in which only the lower half of the face was made available. Finally, we verified whether PS's impairment in recognizing facial expressions was related to her lack of spontaneous fixation on the eye region of faces.

Experiment 1—visual processing strategies measured with bubbles and eye-tracking

Materials and methods

Participants

PS was born in 1950. She sustained a closed head injury in 1992, causing lesions of the lateral part of the occipital and temporal lobes, bilaterally. On neuropsychological assessments, she shows highly impaired performance on face identification tasks (see Rossion et al., 2003 for details). Her performance with non-face objects is within the normal range (see Busigny et al., 2010). Interestingly, Richoz and colleagues (2015) recently reported an impairment in PS for categorizing many static facial expressions. However, she showed normal performance in decoding facial expressions from dynamic faces, raising the possibility that the face system relies on distinct representational systems for identifying static and dynamic expressions, or on dissociable cortical pathways to access them.

Two (one female; Mage = 60.5, SDage = 0.7) and twelve (nine females; Mage = 59.9, SDage = 2.3) control participants were tested in the Bubbles and eye-tracking tasks, respectively. Although two control participants may appear to be few for the Bubbles task, previous studies have revealed very similar results using Bubbles in expression recognition tasks despite methodological differences such as the sample size (e.g. 14 participants in Smith et al., 2005 vs 41 participants in Blais et al., 2012), the stimulus database (e.g. California Facial Expression database in Smith et al., 2005 vs STOIC database in Blais et al., 2012) and the stimulus duration (e.g. until participants' response in Smith et al., 2005 vs 500 ms in Blais et al., 2012). Most importantly, as will be shown below, the control results in the Bubbles task are nearly identical to already published data using the same categories of facial expressions (e.g. Smith and Merlusca, 2014). All the control participants were age- and education-level-matched with PS, were healthy, with normal or corrected-to-normal vision, and had no neurological or psychiatric history.

Stimuli

The stimuli were created using twelve unfamiliar faces (six women) from the Karolinska database (Lundqvist et al., 1998). The faces displayed happy, fear or neutral expressions. Happy and fear were chosen considering previous reports (see Smith et al., 2005) indicating that the most diagnostic information for these two emotions is clearly distinct, i.e. the mouth for happy, and the eyes for fear. Also, considering the saliency of the happy expression, neutral was selected to ensure that the task could not be achieved by using a happy/not-happy strategy, and because both features are diagnostic for neutral.

Bubbles procedure

The Bubbles technique consists of randomly sampling, on each trial, a subset of the visual information contained in the stimulus. Following the logic that the probability of a correct response will increase when the useful information is available and decrease when it is masked, the technique makes it possible to infer which visual information is useful for the task at hand. In the present experiment, the visual information was randomly sampled in the Cartesian space of the faces, as well as in the spatial frequency domain (see Figure 1 for details).

Fig. 1.
Creation of a bubblized stimulus using an exemplar from the Karolinska face database (Lundqvist et al., 1998). The original stimulus (A) was first bandpass filtered into five non-overlapping spatial frequency bands (B) using the Pyramid toolbox for Matlab ...
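
For readers unfamiliar with the technique, the sketch below illustrates, in Python, one way a bubblized stimulus of the kind shown in Figure 1 could be generated: the face is split into five spatial frequency bands, each band is multiplied by a mask made of randomly placed Gaussian apertures, and the partially revealed bands are summed. The filter bank (differences of Gaussian-blurred copies rather than the Matlab Pyramid toolbox), the bubble widths and the use of the same number of bubbles in every band are simplifying assumptions, not the exact parameters of the original study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_into_bands(img, n_bands=5):
    """Split a grayscale face into fine-to-coarse spatial frequency bands.
    Differences of increasingly blurred copies stand in for the Laplacian
    pyramid of the Matlab Pyramid toolbox; the residual low-pass is dropped."""
    blurred = [img] + [gaussian_filter(img, 2.0 ** i) for i in range(1, n_bands + 1)]
    return [blurred[i] - blurred[i + 1] for i in range(n_bands)]

def bubble_mask(shape, n_bubbles, sigma, rng):
    """Sum of Gaussian apertures ('bubbles') centred on random pixels, scaled to [0, 1]."""
    impulses = np.zeros(shape)
    ys = rng.integers(0, shape[0], n_bubbles)
    xs = rng.integers(0, shape[1], n_bubbles)
    np.add.at(impulses, (ys, xs), 1.0)
    mask = gaussian_filter(impulses, sigma)
    return mask / mask.max() if mask.max() > 0 else mask

def bubblize(img, n_bubbles, sigmas=(3, 6, 12, 24, 48), rng=None):
    """Reveal a random subset of each band and recombine the revealed parts.
    Bubble width (sigma) grows for coarser bands; values here are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    bands = split_into_bands(img, n_bands=len(sigmas))
    return sum(band * bubble_mask(img.shape, n_bubbles, sig, rng)
               for band, sig in zip(bands, sigmas))
```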

All participants were first given a period of 10 min to familiarize themselves with the stimuli. This was necessary to overcome PS's difficulty and to ensure that trial-by-trial accuracy was modulated by the bubbles' positions rather than by other factors. Accurate labeling was checked using a practice block with fully visible stimuli. PS completed 60 blocks of 180 trials with Bubbles, and control participants each completed 50 blocks of 180 trials. The image remained on screen until the participant responded. The number of bubbles was adjusted on a trial-by-trial basis using QUEST (Watson and Pelli, 1983) to reach an accuracy level of 70% for each emotion.
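
QUEST itself is a Bayesian adaptive procedure (Watson and Pelli, 1983); as a rough, hypothetical stand-in, a weighted up/down staircase achieves the same goal of holding accuracy near 70% by adding bubbles after errors and removing a smaller number after correct responses. The step size and bounds below are illustrative only, and one such staircase would be run separately for each emotion.

```python
def update_n_bubbles(n_bubbles, correct, target=0.70, step=2.0, n_min=1.0, n_max=300.0):
    """Weighted up/down staircase (a simplified stand-in for QUEST).

    A correct response removes (1 - target) * step bubbles and an error adds
    target * step bubbles, so the process converges where accuracy ~= target.
    """
    delta = -(1 - target) * step if correct else target * step
    return min(max(n_bubbles + delta, n_min), n_max)

# Illustrative usage: keep a fractional count per emotion, round when drawing the mask.
# n_per_emotion["fear"] = update_n_bubbles(n_per_emotion["fear"], correct=False)
```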

Eye-tracking procedure

The stimuli were the same as with Bubbles, although they were not bubblized. Each participant completed 180 trials (five repetitions of each stimulus). The participant had to fixate a dot presented in the center of the screen before the stimulus was displayed. A face was then shown pseudo-randomly in one of the four quadrants of the computer screen. The face remained on the screen until the participant responded. The dominant eye of each participant was tracked using the Eyelink II head-mounted eye-tracker sampling at 500 Hz. Calibration and validation were run at the beginning of the experiment, and every 30 trials thereafter.

Data analysis and results

Bubbles results

During the Bubbles task, PS needed on average 148, 25, and 48 bubbles for fear, happy and neutral, respectively. The controls needed an average of 30, 22 and 24 bubbles to maintain the same accuracy. These disparities between PS and the controls suggest a clear deficit for PS in the recognition of fear, and possibly also of neutral (see Royer et al., 2015 for evidence of a link between the number of bubbles and performance in whole-face tasks).

To pinpoint the visual information used by PS and the controls, a CI was computed for each participant, expression and spatial frequency band. A weighted sum of all the bubble masks presented was calculated, using the participant's accuracy transformed into Z-score values as weights. The CIs were transformed into Z-scores using the mean and the standard deviation of the null hypothesis, calculated using permutation. For the control participants, group CIs were computed by summing the individual CIs and dividing this sum by the square root of the number of participants. The Pixel test (Chauvin et al., 2005) was applied to the CIs to determine the critical Z-score value for statistical significance (P < 0.05). The patient's and the control participants' CIs as well as the significant differences between groups are displayed in Figure 2.
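
As a concrete illustration of this pipeline, the sketch below computes a Z-scored CI from per-trial bubble masks and accuracies using a permutation-based null distribution, and combines individual CIs into a group CI by summing and dividing by the square root of the number of participants. It is a minimal reconstruction of the steps described above, not the authors' code; the Pixel test threshold (Chauvin et al., 2005), which depends on the smoothness of the CI, is not implemented here.

```python
import numpy as np

def classification_image(masks, accuracies, n_permutations=1000, rng=None):
    """Z-scored classification image from per-trial bubble masks.

    masks: (n_trials, height, width) array of the bubble masks shown on each trial
           (one such CI would be computed per expression and per frequency band).
    accuracies: (n_trials,) array of 1 (correct) / 0 (incorrect) responses.
    """
    rng = np.random.default_rng() if rng is None else rng
    weights = (accuracies - accuracies.mean()) / accuracies.std()   # accuracy as Z-scores
    ci = np.tensordot(weights, masks, axes=1)                       # weighted sum of masks

    # Null distribution: recompute the CI after shuffling the accuracy labels,
    # which breaks any link between mask content and performance.
    null = np.empty((n_permutations,) + ci.shape)
    for i in range(n_permutations):
        null[i] = np.tensordot(rng.permutation(weights), masks, axes=1)
    return (ci - null.mean(axis=0)) / (null.std(axis=0) + 1e-12)    # avoid division by zero

def group_ci(individual_cis):
    """Group CI: the sum of N Z-scored CIs divided by sqrt(N) is again ~N(0, 1) under H0."""
    cis = np.asarray(individual_cis)
    return cis.sum(axis=0) / np.sqrt(cis.shape[0])
```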

Fig. 2.
Information used to discriminate facial expressions in control subjects (left panel) or PS (center panel). The right panel represents information that is significantly more important for controls than PS. No information reaches significance in the opposite ...

The results of the control group were almost identical to those obtained by Smith and Merlusca (2014) with 10 young adults and a different face database. The controls preferentially used the eye region (including the forehead) and, to a smaller extent, the mouth area, to recognize fear; they used the eye and the mouth areas to recognize neutral; and they used only the mouth area to recognize happy. PS used mostly the mouth to recognize all three facial expressions, with the exception that she used the eye region in the lowest spatial frequency band for neutral. The difference CIs are particularly informative: compared with PS, control participants made greater use of the eye area (including the forehead) for fear and neutral. Control participants also made greater use of the mouth for happy, suggesting that when a single visual feature conveys most of the diagnostic information, they focused on this feature and made better use of it.

A visual inspection of the CIs suggests that, compared with controls, PS made greater use of the mouth area for fear categorization. Although this does not reach statistical significance in this specific analysis, it is somewhat similar to the observation that PS's Z-scores in the mouth area were higher than those of controls in face identification (see Figure 5 in Caldara et al., 2005). To investigate this more thoroughly, we removed the spatial frequency dimension by collapsing the CIs across bands prior to smoothing. The CIs were then smoothed using a Gaussian kernel with a full-width-at-half-maximum (FWHM) of 28.3 pixels (see Royer et al., 2016 for the same approach). Apart from these details, this analysis was identical to the one described above. The resulting CIs (see Figure 3) reveal that PS indeed made greater use of the mouth for the categorization of fear. The fact that the difference CI reaches statistical significance suggests that PS's use of the mouth area is consistent across spatial frequency bands. Conversely, the fact that no pixels reach the threshold for PS for neutral is probably explained by variability in her visual extraction strategy across spatial frequency bands.
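
The collapsing and smoothing step can be summarized in a few lines: sum the CI over the spatial frequency dimension, convert the 28.3-pixel FWHM into the standard deviation of a Gaussian kernel (FWHM = 2·sqrt(2 ln 2)·sigma, i.e. sigma ≈ FWHM / 2.355) and filter. This is a sketch of the operation described in the text, assuming the CI is stored as a bands-by-height-by-width array.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def collapse_and_smooth(ci_per_band, fwhm=28.3):
    """Collapse a CI across spatial frequency bands, then smooth it with a
    Gaussian kernel of the given full width at half maximum (in pixels)."""
    collapsed = np.sum(ci_per_band, axis=0)            # ci_per_band: (n_bands, height, width)
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma (~ fwhm / 2.355)
    return gaussian_filter(collapsed, sigma)
```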

Fig. 3.
Visual information significantly linked to accuracy combined across all spatial frequency bands for all three facial expressions. The significant portions of the CIs (depicted as heat maps of Z-scores) are superimposed on one of the faces used in the ...
Fig. 5.
Mean accuracy rate in the facial expression megamix (70% and 90% trials) for PS, control participants in the ‘whole face condition’ and control participants in the ‘eyes only condition’.

Eye-tracking results

PS had longer reaction times than control participants (MPS = 1760.5 ms, MControls = 1067.5 ms, SDControls = 209.3 ms; t(11) = 3.46, P < 0.05), but accuracy was high for all participants and PS's accuracy did not significantly differ from that of controls (MPS = 97.2%, MControls = 97.4%, SDControls = 2.7%; t(11) = −0.05, ns). PS's high performance with these stimuli was not surprising, as she had been extensively exposed to them with Bubbles beforehand. On average, PS made more fixations on each trial than control participants (MPS = 5.02, MControls = 3.13, SDControls = 0.76; t(11) = 2.61, P < 0.05).

Since previous studies have not found an effect of facial expression on eye-tracking results (Jack et al., 2009; Roy et al., 2010; Vaidya et al., 2014), the maps were produced by summing across the three facial expressions. For display purposes, the maps were smoothed using a Gaussian window (FWHM of 28.3 pixels). Figure 4 displays (A) the 5% most fixated pixels, on average, by the twelve controls; (B) the 5% most fixated pixels by PS; and (C) the difference between the controls and PS. An ROI analysis was also performed on the non-smoothed maps to compare the relative amount of time spent by control participants and PS on the eyes and mouth (i.e. the proportion of time spent on the eyes minus the proportion of time spent on the mouth; see Figure 4D for the selected ROI). A modified t-test, more suitable for single-subject analyses (Crawford and Howell, 1998), was applied to this difference and showed that the proportion of time spent by PS fixating the eyes, relative to the mouth, was significantly lower than that of control participants [t(11) = −2.75, P < 0.05]. These observations both corroborate the presence of an abnormal perceptual bias in PS during facial expression recognition and rule out the possibility that our Bubbles data are solely the result of the use of degraded images.
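
The Crawford and Howell (1998) test used here and in the following experiments compares a single patient's score with a small control sample by inflating the control standard deviation by sqrt((n + 1) / n) and using n − 1 degrees of freedom. A minimal implementation is sketched below; the numbers in the commented usage line are hypothetical, not the values reported in the text.

```python
import numpy as np
from scipy import stats

def crawford_howell(patient_score, control_scores):
    """Crawford & Howell (1998) modified t-test for a single case vs a small control group.

    Returns (t, degrees of freedom, two-tailed p). The control SD is inflated by
    sqrt((n + 1) / n) to account for the uncertainty of the control estimates.
    """
    controls = np.asarray(control_scores, dtype=float)
    n = controls.size
    t = (patient_score - controls.mean()) / (controls.std(ddof=1) * np.sqrt((n + 1) / n))
    df = n - 1
    p = 2 * stats.t.sf(abs(t), df)
    return t, df, p

# Hypothetical usage (made-up numbers, not the values from the study):
# t, df, p = crawford_howell(-0.35, [0.10, 0.05, 0.22, 0.18, -0.02, 0.12,
#                                    0.08, 0.15, 0.20, 0.03, 0.11, 0.09])
```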

Fig. 4.
(A) The 5% most fixated pixels, on average, by the twelve control participants, (B) the 5% most fixated pixels by PS and (C) the significant difference between the controls and PS. Panel (D) Region of Interest (ROI) used for the analysis in the eye-tracking ...

Experiment 2—facial expression megamix

Method

Participants

PS and two separate groups of four (two females; Mage = 59.8, SDage = 3.6) and six (four females; Mage = 59.0, SDage = 3.4) age-matched control participants voluntarily took part in this experiment. All the control participants were healthy, with normal or corrected-to-normal vision, and had no neurological or psychiatric history.

Material and stimuli

The stimuli were created using two identities from the Pictures of Facial Affect database (POFA; Ekman and Friesen, 1976, 1978). The base faces expressed each of the six basic emotions (i.e. anger, disgust, fear, sad, surprise and happy), plus neutral. The stimuli were gray-scaled and spatially aligned on the location of the eyes, nose and mouth. For each identity, morphs of all the possible pairwise combinations (21) of expressions were created using Fantamorph (http://www.fantamorph.com). Each expression pair was blended in increments of 20%, resulting in proportions of 90-10, 70-30, 50-50, 30-70 and 10-90 percent for any pair of emotions (see Humphreys et al., 2007 for similar choices), totaling 210 stimuli. A second set was also created by removing the upper part of the face from each of these 210 morphs, using the tip of the nose as the cutting point. PS and the first group of control participants were presented with the ‘whole-face’ set, and the second group of control participants was presented with the ‘no eyes’ set.
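
The stimulus count follows directly from these numbers: 21 expression pairs (all pairwise combinations of seven expressions) times five blend levels gives 105 morphs per identity, hence 210 stimuli for two identities. The short snippet below simply enumerates this design and uses only the figures stated above.

```python
from itertools import combinations

# All numbers come directly from the text: seven expressions, 20% blend
# increments, two identities.
expressions = ["anger", "disgust", "fear", "sad", "surprise", "happy", "neutral"]
blend_levels = [0.9, 0.7, 0.5, 0.3, 0.1]   # proportion of the first expression in the pair
n_identities = 2

morphs = [(a, b, level)
          for a, b in combinations(expressions, 2)    # 21 expression pairs
          for level in blend_levels]                  # 5 blend levels per pair
print(len(morphs), len(morphs) * n_identities)        # 105 morphs per identity, 210 stimuli
```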

Procedure

Each participant performed 14 blocks of 75 trials. On each trial, one of the 210 morphs was randomly selected and displayed on the center of the screen until the participant responded. The task was to categorize the emotion expressed by the face as fast and as accurately as possible.

Data analysis and results

Comparison of PS’s performance with the ‘whole face condition’ control participants: The average accuracy of PS and of each control participant (see Figure 5) was calculated for each expression using only the 70% and 90% blends (i.e. the blends in which one expression was dominant). PS’s performance was compared with that of the control participants using a modified t-test for each facial expression. A Bonferroni correction was applied to the statistical threshold, which was set at alpha < 0.007 (i.e. 0.05 divided by the seven expressions). PS’s performance was impaired for all of the tested expressions (all P’s < 0.007), except for happy (P > 0.05, ns) and disgust. Notably, we observed a trend toward an impairment in recognizing disgust [t(3) = 3.74, P = 0.01], which did not survive the Bonferroni correction.

These results show that PS is impaired in facial expression recognition compared with age-matched controls. Interestingly, the only two facial expressions for which PS’s performance is within normal limits are those for which the mouth is the most diagnostic region (i.e. happy and disgust). Conversely, among the four facial expressions on which PS showed the strongest impairment (i.e. anger, fear, sad and surprise), three (i.e. fear, sad and anger) are those for which the eye area (and/or the forehead) is known to be the most important. The strong visual similarity between surprise and fear, together with the fact that the appearance of the mouth changes from one face database to the other, may explain PS’s impairment with surprise. Overall, this is consistent with the idea that PS’s facial expression recognition performance was driven by the mouth region, and that she does not efficiently extract information from the upper part of the face.

Comparison of PS with the ‘no eyes condition’ control group: Second, we compared the performance of PS, who had access to the whole face, with that of the control group, who only had access to the mouth area. Our results indicate that PS’s performance differed from that of control participants only for the sad expression (P < 0.007), for which her performance was actually higher than that of the control participants, and for the surprise expression (P < 0.007), for which her performance was lower (all other P’s > 0.28). These results further support the idea that PS’s facial expression recognition performance was driven by the mouth region, and that she neglected the upper part of the face during this task.

Experiment 3—looking into the eyes

Eye tracking results (Exp. 1) raise the possibility that PS’s facial expression recognition impairment results from a failure to direct her gaze to the eyes.

Method

Participants

PS and twelve (eight females; Mage = 60.7, SDage = 2.8) age-matched controls participated in this experiment.

Material and stimuli

For PS, eye movements and fixations were measured and recorded with the EyeLink 1000 oculomotor system (SR Research). Only the dominant eye was tracked, but viewing was binocular. Eye movements of controls were not recorded. The stimuli were posed facial expression images taken from the POFA database. Between 16 and 22 identities (half females) were selected for each of the six basic emotions plus neutral.

Procedure

Exp. 3a. PS was explicitly asked to look at the eyes while she attempted to recognize the expression (see Adolphs et al., 2005 for a similar procedure). Before the stimulus could be displayed, PS was asked to fixate a cross. A face was then presented on either the left or the right side of the screen and remained until the participant’s response. To ensure that PS could concentrate on the eye area and not on the keyboard, she was asked to name the emotion expressed by the face. Exp. 3b. The procedure was similar to Exp. 3a, but PS was instructed to look inside an imaginary rectangle that surrounded the eye area. The stimulus was visible while she looked within the area delimited by this rectangle, but disappeared whenever she looked outside the area. The red rectangle (Figure 6, right panel) appeared only when she was looking outside of the area and served to help her redirect her gaze towards the correct area. The rest of the procedure was identical to Exp. 3a. For control participants, viewing was completely free and no explicit instructions were given. The rest of the procedure was identical to Exp. 3a and 3b.
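
The gaze-contingent manipulation of Exp. 3b can be summarized as a display loop in which the face is shown only while the tracked gaze position falls inside the rectangle around the eyes, and is otherwise replaced by the red cueing rectangle. The sketch below conveys this logic only; every callable it takes (sample_gaze, show_face, hide_face, show_cue, get_response) is a hypothetical placeholder for the EyeLink and stimulus presentation routines actually used, whose names and interfaces are not described in the text.

```python
import time

def gaze_contingent_trial(sample_gaze, show_face, hide_face, show_cue,
                          eye_rect, get_response, timeout=30.0):
    """Sketch of the Exp. 3b logic: the face is visible only while gaze falls
    inside a rectangle around the eye area; otherwise it is hidden and a red
    rectangle cues where to look. All callables are hypothetical placeholders
    for the EyeLink / stimulus presentation routines actually used."""
    x0, y0, x1, y1 = eye_rect                      # screen coordinates of the eye-area rectangle
    start = time.time()
    while time.time() - start < timeout:
        gx, gy = sample_gaze()                     # current gaze position (pixels)
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            show_face()                            # stimulus visible while fixating the eye area
        else:
            hide_face()
            show_cue()                             # red rectangle redirects gaze to the eye area
        response = get_response()                  # naming response ends the trial
        if response is not None:
            return response
    return None
```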

Fig. 6.
Eye fixation pattern of PS when she was asked to look in the eyes (eyes instructions) and when she was forced to look within a red rectangle surrounding the eye area (gaze contingent). The rectangle only appeared when she was not looking within the targeted ...

Data analysis and results

As instructed (Exp. 3a), PS indeed looked at the eyes while she attempted to recognize the facial expressions (Figure 6, left panel). The proportion of correct responses was calculated for each expression (Table 1). Modified t-tests were conducted. A Bonferroni correction was applied to the statistical threshold, which was set at alpha < 0.007. Richoz et al. (2015) demonstrated that, in a free viewing condition, PS’s performance was impaired compared with age-matched control participants for fear, sad, anger and surprise, whereas her performance was within the normal range for neutral, happy and disgust. When PS was instructed to look at the eyes, she remained impaired with sad [t(11) = −4.4, P < 0.001] and surprise [t(11) = −7.7, P < 0.001], and showed a trend towards impairment with anger [t(11) = −3.04, P = 0.01]. She was also impaired with neutral [t(11) = −6.1, P < 0.001], which is consistent with our results in Exp. 2. Although her performance was not significantly lower than that of controls for fear, a closer inspection of her pattern of responses can explain this lack of impairment: she had a strong tendency to respond fear when the expression of surprise was presented. On 68% of the trials where the expression of surprise was displayed, PS responded fear. Taken together, these results suggest that she was impaired at discriminating both the expressions of surprise and fear.

Table 1.
Accuracy rate of PS in a facial expression categorization task when she was instructed to look at the eye area, and when she was forced to look at the eye area. Control participants had no specific instructions regarding where they should look

When forced to look within a red rectangle surrounding the eye area (Exp. 3b), PS looked between both eyes (Figure 6, right panel). PS was impaired with the expressions of surprise [t(11) = −6.5, P < 0.001] and neutral [t(11) = −6.9, P < 0.001], and she showed a trend towards impairment with the anger [t(11) = −3.04, P = 0.01] and sad [t(11) = −2.5, P = 0.02] expressions. Again, her performance was not significantly lower than that of controls for fear, but her pattern of responses suggests that she was impaired at discriminating both fear and surprise. Indeed, she responded fear on 59% of the trials displaying the expression of surprise.

General discussion

The present study aimed to verify whether the deficit in processing information from the eye region during face identification extends to the recognition of facial expressions of emotion in PS, a pure case of prosopagnosia. To achieve this goal, we conducted a single-case study on PS, an acquired prosopagnosic patient who shows a deficit in processing the eye area in the context of face identification. PS is also known to have a marked impairment in recognizing facial expressions of emotion from visual stimuli that do not contain dynamic information (Richoz et al., 2015). In this context, the observation of an eye processing deficit in facial expression recognition in PS would offer a parsimonious explanation of why this patient (and possibly most, if not all, prosopagnosic patients with similar posterior brain lesions) suffers from a deficit in static facial expression categorization. Indeed, we showed in Exp. 1 that PS preferentially used the mouth area when categorizing facial expressions, even when the eye area contained most of the diagnostic visual information for categorizing a specific expression (e.g. fear). Of course, using only the mouth does not impoverish facial expression recognition as much as face identification, since the mouth offers the most diagnostic information for facial expression categorization (Blais et al., 2012) but not for face identification. Using eye-tracking, we also demonstrated that PS fixated the eye area less than control participants and directed most of her fixations towards the mouth region, a pattern of results that is very similar to what is observed in PS during face identification (Orban de Xivry et al., 2008). In Exp. 2, we showed that when only the lower half of the face was shown to normal control participants, they obtained a performance pattern similar to that of PS when asked to categorize whole-face facial expressions. Lastly, we showed that PS’s performance for facial expression recognition remains impaired even when she is explicitly instructed to look at the eyes, or when she is forced to look at the eye area using a gaze-contingent paradigm.

This last observation suggests that PS’s impairment in categorizing facial expressions and processing the eye area cannot be explained by an attentional bias towards the mouth region (or away from the eye region), but has a more profound origin, likely linked to facial information extraction. A similar bias and pattern of performance have indeed been observed in SM, a patient with bilateral amygdala damage and a severe impairment in the recognition of fear (Adolphs et al., 2005). However, SM’s performance in recognizing facial expressions improved to a level comparable to normal controls when she was instructed to look at the eyes. Another difference is that SM’s performance is impaired only for facial expression recognition, whereas she is normal in other face processing tasks such as identification or gender categorization (Adolphs et al., 1994), which suggests that she is able to extract the information contained in the eyes for these tasks. Using Bubbles, it was indeed shown that SM processes the eyes in high spatial frequencies during a gender discrimination task. In strong contrast with what is observed in SM, PS’s eye-based impairment generalizes to both identification and facial expression recognition. This observation is in line with the proposition that the Occipital Face Area (OFA—a region lesioned in PS) is causally engaged in extracting facial features (e.g. Duchaine and Yovel, 2015) from static faces during identification (Pitcher et al., 2014) and recognition of expressions (Pitcher, 2014). Patients with occipitotemporal lesions indeed have strong deficits in discriminating changes in feature position or shape in the eye region (Pancaroglu et al., 2016). Note that PS suffers from a lesion in the OFA and shows a strong impairment in using information from the left eye of faces (from the observer’s viewpoint). This observation corroborates neuroimaging findings showing an involvement of the posterior right hemisphere in extracting the contralateral eye both in macaques (Issa and DiCarlo, 2012) and in humans (Rousselet et al., 2014) during face recognition. It is worth noting that the processing of dynamic stimuli has been related to the right posterior Superior Temporal Sulcus (pSTS) during face identification (Pitcher et al., 2014) and expression recognition (Pitcher, 2014) in healthy observers. The right pSTS is spared in PS and she shows normal performance in decoding dynamic facial expressions.

The question of whether most, if not all, prosopagnosic patients suffer from a generalized eye-based impairment similar to the one PS demonstrates, or whether only patients suffering from posterior lesions show this profile, nevertheless remains open. At the very least, our present work suggests the existence of a common perceptual basis, i.e. impaired processing of the eyes, to explain the coexistence of facial identification and facial expression recognition deficits in PS. Of course, since our study was conducted on a single case, we must remain cautious before generalizing our observations to all cases of prosopagnosia regardless of their lesion site. On this point, the results of Pancaroglu and colleagues (2016) offer interesting insights. The authors propose that an impairment in processing the eye region would be more frequently observed in patients with posterior lesions, while this deficit would be relatively uncommon in patients with more anterior lesions. Although interesting, it is important to note that the task used to draw this conclusion did not explicitly require the identification of a face, but only the discrimination of changes in feature position, shape or external contours. It remains possible that patients with more anterior lesions are able to discriminate information changes in the eye region, without being able to pair this information with their altered facial representation, thus preventing accurate face recognition. Within this framework, a memory deficit (either in the face identity representation itself or in its retrieval; see Ulrich et al., 2016) can be directly linked to an inability to effectively use eye information, but only when these representations are important to carry out a given task. Future work could evaluate whether PS’s impairment in processing the eye region generalizes to patients with temporal lobe lesions, and whether these same patients experience similar difficulties in facial expression recognition tasks. Interestingly, in contrast with what is observed in PS using eye-tracking, most of the prosopagnosic patients reported by Pancaroglu et al. (2016) spent more time scanning the eyes than the mouth, even without any explicit instructions to do so. This shows that a propensity to avoid looking at the eye area is not necessary for observing a deficit in processing the eyes; the deficit might rather be related to an inability to use information from this region. This is consistent with our observation (Exp. 3) that PS’s performance does not improve dramatically when her attention is directed towards the eye region and that her deficit in processing the eyes does not stem from her tendency to fixate the lower part of the face more.

Our study bridges the gap between an eye-based deficit and the frequently reported deficit in facial expression recognition in prosopagnosia. This suggests that, at least at the level of the extraction of information conveyed by the eyes, both tasks share some common perceptual bases. A recent study using EEG showed that normal participants are unable to selectively attend to identity or to facial expression even when required to do so (Fisher et al., 2016). More specifically, their data indicate that facial expressions interfere with the processing of identity, and that identity interferes with the processing of facial expression, even when the interfering dimension is unrelated to the task at hand. Similar to our proposition, these results suggest that these two tasks share some common perceptual mechanisms, at least when using static face images.

How can we reconcile the present results with the fact that PS’s dynamic mental models of facial expressions revealed an appropriate use of all facial features (Richoz et al., 2015)? In fact, PS reaches normal categorization performance for all basic expressions, including fear, when confronted with her ‘optimal’ dynamic representational models and those of healthy observers. This observation emphasizes that PS is probably able to use the temporal properties provided by dynamic cues to categorize facial expressions. The results of the current study suggest that she does not normally extract visual information from the eyes when facial dynamics are lacking from the visual stimulation. Dynamic visual cues might activate a distinct visual route through the right pSTS. Future work might clarify this hypothesis by measuring the extent to which the eye movement pattern and the feature utilization observed in PS are modulated by the presentation of dynamic, as compared with static, expressions.

Conclusion

The results of the present study suggest the existence of a generalized impairment in information use from the eye region in acquired prosopagnosic patients, at least in patients with posterior cerebral lesions such as PS. We suggest that the deficits observed in acquired prosopagnosia are rooted at the level of facial information use. The eye region contains a great amount of information—the eyes’ color and position, their shape, eyelashes, eyebrows, pupils, etc.—which may be difficult to integrate and process for patients with a fragile face processing system. Regardless of this potential explanation, along with other studies, our data strongly support the hypothesis that the role of the OFA (a region lesioned in PS) is to extract facial features that will be used for both face identification and facial expression recognition in static images. However, dynamic facial expressions remain recognizable for PS, as they rely on other visual information and functional routes.

Conflict of interest. None declared.

Acknowledgements

We are grateful to PS for her devoted participation in this research project as well as to all our healthy participants. This study was supported by a grant from the Natural Sciences and Engineering Research Council of Canada to D.F. (1607508).

References

  • Adolphs R., Gosselin F., Buchanan T.W., Tranel D., Schyns P.G., Damasio A.R. (2005). A mechanism for impaired fear recognition after amygdala damage. Nature, 433, 68–72. [PubMed]
  • Adolphs R., Tranel D., Damasio H., Damasio A. (1994). Impaired recognition of emotion in facial expressions following bilateral damage to the human amygdala. Nature, 372, 669–72. [PubMed]
  • Blais C., Roy C., Fiset D., Arguin M., Gosselin F. (2012). The eyes are not the window to basic emotions. Neuropsychologia, 50, 2830–8. [PubMed]
  • Bowers D., Bauer R.M., Coslett H.B., Heilman K.M. (1985). Processing of faces by patients with unilateral hemisphere lesions: I. Dissociation between judgments of facial affect and facial identity. Brain and Cognition, 4(3), 258–72. [PubMed]
  • Bukach C.M., Grand R., Kaiser M.D., Bub D.N., Tanaka J.W. (2008). Preservation of mouth region processing in two cases of prosopagnosia. Journal of Neuropsychology, 2(1), 227–44. [PubMed]
  • Busigny T., Graf M., Mayer E., Rossion B. (2010). Acquired prosopagnosia as a face-specific disorder: ruling out the general visual similarity account. Neuropsychologia, 48(7), 2051–67. [PubMed]
  • Butler S., Blais C., Gosselin F., Bub D., Fiset D. (2010). Recognizing famous people. Attention, Perception, & Psychophysics, 72(6), 1444–9. [PubMed]
  • Caldara R., Schyns P., Mayer E., Smith M.L., Gosselin F., Rossion B. (2005). Does prosopagnosia take the eyes out of face representations? Evidence for a defect in representing diagnostic facial information following brain damage. Journal of Cognitive Neuroscience, 17(10), 1652–66. [PubMed]
  • Chauvin A., Worsley K.J., Schyns P.G., Arguin M., Gosselin F. (2005). Accurate statistical tests for smooth classification images. Journal of Vision, 5(9), 1. [PubMed]
  • Crawford J.R., Howell D.C. (1998). Comparing an individual's test score against norms derived from small samples. The Clinical Neuropsychologist, 12(4), 482–6.
  • Damasio A.R., Tranel D., Damasio H. (1990). Face agnosia and the neural substrates of memory. Annual Review of Neuroscience, 13(1), 89–109. [PubMed]
  • De Gelder B., Pourtois G., Vroomen J., Bachoud-Lévi A.C. (2000). Covert processing of faces in prosopagnosia is restricted to facial expressions: evidence from cross-modal bias. Brain and Cognition, 44(3), 425–44. [PubMed]
  • De Renzi E., Di Pellegrino G. (1998). Prosopagnosia and alexia without object agnosia. Cortex, 34(3), 403–15. [PubMed]
  • Dotsch R., Todorov A. (2012). Reverse correlating social face perception. Social Psychological and Personality Science, 3(5), 562–71.
  • Duchaine B., Yovel G. (2015). A revised neural framework for face processing. Annual Review of Vision Science, 1, 393–416. [PubMed]
  • Dupuis-Roy N., Fortin I., Fiset D., Gosselin F. (2009). Uncovering gender discrimination cues in a realistic setting. Journal of Vision, 9(2), 8–1. [PubMed]
  • Ekman P., Friesen W.V. (1976). Pictures of Facial Affect. Palo Alto, CA: Consulting Psychologists Press.
  • Ekman P., Friesen W.V. (1978). Facial Action Coding System. Palo Alto, CA: Consulting Psychologists Press.
  • Fisher K., Towler J., Eimer M. (2016). Facial identity and facial expression are initially integrated at visual perceptual stages of face processing. Neuropsychologia, 80, 115–25. [PubMed]
  • Fox C.J., Hanif H.M., Iaria G., Duchaine B.C., Barton J.J. (2011). Perceptual and anatomic patterns of selective deficits in facial identity and expression processing. Neuropsychologia, 49(12), 3188–200. [PubMed]
  • Gosselin F., Schyns P.G. (2001). Bubbles: a technique to reveal the use of information in recognition. Vision Research, 41, 2261–71. [PubMed]
  • Humphreys K., Avidan G., Behrmann M. (2007). A detailed investigation of facial expression processing in congenital prosopagnosia as compared to acquired prosopagnosia. Experimental Brain Research, 176(2), 356–73. [PubMed]
  • Humphreys G.W., Donnelly N., Riddoch M.J. (1993). Expression is computed separately from facial identity, and it is computed separately for moving and static faces: neuropsychological evidence. Neuropsychologia, 31(2), 173–81. [PubMed]
  • Issa E.B., DiCarlo J.J. (2012). Precedence of the eye region in neural processing of faces. Journal of Neuroscience, 32(47), 16666–82. [PMC free article] [PubMed]
  • Jack R.E., Blais C., Scheepers C., Schyns P., Caldara R. (2009). Cultural confusions show that facial expressions are not universal. Current Biology, 19, 1543–8. [PubMed]
  • Landis T., Cummings J.L., Christen L., Bogen J.E., Imhof H.G. (1986). Are unilateral right posterior cerebral lesions sufficient to cause prosopagnosia? Clinical and radiological findings in six additional patients. Cortex, 22(2), 243–52. [PubMed]
  • Lundqvist D., Flykt A., Öhman A. (1998). The Karolinska directed emotional faces (KDEF). CD ROM from Department of Clinical Neuroscience, Psychology Section, Karolinska Institute, 91–630.
  • Mattson A.J., Levin H.S., Grafman J. (2000). A case of prosopagnosia following moderate closed head injury with left hemisphere focal lesion. Cortex, 36(1), 125–37. [PubMed]
  • Orban de Xivry J.J., Ramon M., Lefevre P., Rossion B. (2008). Reduced fixation on the upper area of personally familiar faces following acquired prosopagnosia. Journal of Neuropsychology, 2(1), 245–68. [PubMed]
  • Pancaroglu R., Hills C.S., Sekunova A., Viswanathan J., Duchaine B., Barton J.J. (2016). Seeing the eyes in acquired prosopagnosia. Cortex, 81, 251–65. [PubMed]
  • Pitcher D. (2014). Facial expression recognition takes longer in the posterior superior temporal sulcus than in the occipital face area. Journal of Neuroscience, 34(27), 9173–7. [PubMed]
  • Pitcher D., Duchaine B., Walsh V. (2014). Combined TMS and fMRI reveal dissociable cortical pathways for dynamic and static face perception. Current Biology, 24(17), 2066–70. [PubMed]
  • Richoz A.R., Jack R.E., Garrod O.G., Schyns P.G., Caldara R. (2015). Reconstructing dynamic mental models of facial expressions in prosopagnosia reveals distinct representations for identity and expression. Cortex, 65, 50–64. [PubMed]
  • Robinson K., Blais C., Duncan J., Forget H., Fiset D. (2014). The dual nature of the human face: there is a little of Jekyll and a little of Hyde in all of us. Frontiers in Psychology, 139–45. [PMC free article] [PubMed]
  • Rousselet G.A., Ince R.A., van Rijsbergen N.J., Schyns P.G. (2014). Eye coding mechanisms in early human face event-related potentials. Journal of Vision, 14(13), 7. [PubMed]
  • Rossion B. (2014). Understanding face perception by means of prosopagnosia and neuroimaging. Frontiers in Bioscience (Elite Edition) 6, 308–17. [PubMed]
  • Rossion B., Caldara R., Seghier M., Schuller A.M., Lazeyras F., Mayer E. (2003). A network of occipito‐temporal face‐sensitive areas besides the right middle fusiform gyrus is necessary for normal face processing. Brain, 126(11), 2381–95. [PubMed]
  • Roy C., Blais C., Fiset D., Gosselin F. (2010). Visual information extraction for static and dynamic facial expression of emotions: an eye-tracking experiment. Journal of Vision, 10(7), 531.
  • Royer J., Blais C., Gosselin F., Duncan J., Fiset D. (2015). When less is more: impact of face processing ability on recognition of visually degraded faces. Journal of Experimental Psychology: Human Perception and Performance, 41(5), 1179–83. [PubMed]
  • Royer J., Blais C., Barnabé-Lortie V., Carré M., Leclerc J., Fiset D. (2016). Efficient visual information for unfamiliar face matching despite viewpoint variations: it’s not in the eyes! Vision Research, 123, 33–40. [PubMed]
  • Sekuler A.B., Gaspar C.M., Gold J.M., Bennett P.J. (2004). Inversion leads to quantitative, not qualitative, changes in face processing. Current Biology, 14(5), 391–96. [PubMed]
  • Simoncelli E.P. (1999). Image and Multi-Scale Pyramid Tools [Computer Software]. New York: Author.
  • Smith M., Cottrell G., Gosselin F., Schyns P.G. (2005). Transmitting and decoding facial expressions of emotions. Psychological Science, 16, 184–89. [PubMed]
  • Smith M.L., Merlusca C. (2014). How task shapes the use of information during facial expression categorizations. Emotion, 14(3), 478–87. [PubMed]
  • Ulrich P.I., Wilkinson D.T., Ferguson H.J., et al. (2016). Perceptual and memorial contributions to developmental prosopagnosia. The Quarterly Journal of Experimental Psychology, 70(2) 1–18. [PubMed]
  • Vaidya A.R., Jin C., Fellows L.K. (2014). Eye spy: the predictive value of fixation patterns in detecting subtle and extreme emotions from faces. Cognition, 133(2), 443–56. [PubMed]
  • Watson A.B., Pelli D.G. (1983). QUEST: a Bayesian adaptive psychometric method. Perception and Psychophysics, 33(2), 113–20. [PubMed]
  • Young A.W., Rowland D., Calder A.J., Etcoff N.L., Seth A., Perrett D.I. (1997). Facial expression megamix: tests of dimensional and category accounts of emotion recognition. Cognition, 63(3), 271–313. [PubMed]
