We used a sensitive and data-driven technique, called “bubbles” [4], to assess how viewers make use of information from faces. The approach is conceptually related to reverse correlation and shows participants only small regions of a randomly sampled face space. Using this technique, it is possible to identify statistically the extent to which specific facial features contribute to judgments about the faces. In our prior studies with this method [3], we asked subjects to discriminate between two emotions, fear and happiness, because these emotions are distinguished by particular facial features [8], and because there is evidence that their recognition can be differentially impaired following brain pathology [6]. To provide comparisons with prior results, we used the identical task here.
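The sampling logic of the bubbles technique can be sketched in a few lines: on each trial the face is viewed through a small number of randomly placed Gaussian apertures, so only the regions falling under an aperture carry usable information. The image size, bubble count, and aperture width below are illustrative assumptions, not the parameters of the original studies (which additionally sampled each of several spatial-frequency bands separately).

```python
import numpy as np

def bubbles_mask(shape, n_bubbles, sigma, rng):
    """Sum of randomly placed Gaussian apertures, clipped to [0, 1]."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

rng = np.random.default_rng(0)
face = rng.random((128, 128))   # stand-in for a base face image
mask = bubbles_mask(face.shape, n_bubbles=10, sigma=8.0, rng=rng)
stimulus = face * mask          # only the "bubbled" regions remain visible
```

Because the apertures are placed uniformly at random, averaging many such masks converges to a flat field, which is why all participants see similar parts of the face on average across trials.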
Figure 1 Construction of the stimuli. On the far left is one of the base images we began with (an expression of happiness or of fear from the Pictures of Facial Affect). The base face was decomposed and randomly filtered so as to reveal only small parts of …
We contrasted three groups of participants: parents who had a child with autism and met criteria for the aloof component of the Broad Autism Phenotype (“BAP+”); parents who had a child with autism but were negative for the aloof component (“BAP−”); and parents of a neurotypical child (“Controls”). All three groups had similar mean accuracy on the Bubbles Task (82.2%, 82.2%, and 83.5% for Controls, BAP−, and BAP+, respectively), similar reaction times (2.0, 1.7, and 1.8 s), and a very similar mean number of bubbles (across all spatial-frequency bands); moreover, since the bubble locations were randomly and homogeneously distributed, all subjects saw, on average, similar parts of the face revealed across trials. There were no significant differences between groups on any of these measures (all ps > 0.1, uncorrected t-tests).
Means and S.D. of the participant groups for age, gender, and full-scale IQ.
Despite the essentially identical stimuli and overall performance, the three groups differed in their performance strategies. We calculated classification images that reveal the effective information a viewer uses to perform the task. Classification images from the Control group looked similar to those published previously for psychiatrically and neurologically healthy participants, showing substantial use of the eye region of the face. Comparisons of the classification images revealed statistically significant differences between the three groups (all images thresholded at P<0.001, corrected). Compared to the Control group, both the BAP+ group and the BAP− group made less use of the eyes, an effect most notable for the right eye region. When contrasting the two BAP groups, we found that the BAP+ group made more use of the mouth than the BAP− group, whereas the BAP− group made more use of the eye region than did the BAP+ group.
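The classification-image computation described above can be sketched as a reverse correlation: average the bubble masks from correct trials, subtract the average mask from incorrect trials, and the resulting difference image highlights the facial regions whose visibility drove correct responses. The masks and responses below are simulated stand-ins, and the z-thresholding shown is an uncorrected simplification of the corrected P<0.001 threshold reported in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, h, w = 500, 64, 64
masks = rng.random((n_trials, h, w))    # per-trial bubble masks (simulated)
correct = rng.random(n_trials) < 0.8    # ~80% accuracy, comparable to the task

# Reverse correlation: mean mask on correct trials minus mean on incorrect trials
ci = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)

# Z-score the classification image so it can be thresholded
z = (ci - ci.mean()) / ci.std()
diagnostic_region = z > 3.09            # one-tailed z for p ~ 0.001, uncorrected
```

With real data, pixels surviving the threshold in the Control group would cluster over the eye region; the group contrasts in the text are differences between such images.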
Figure 2 Classification images showing the use of facial information. A: Controls (parents of a child without autism). B: Difference between Controls and BAP+ (the image shows the region of the face used more by Controls). C: Difference between Controls and BAP−. …
Figure 3 Classification difference images between the BAP groups and between BAP and autism. A: Information used more by BAP+ than by BAP−. B: Information used more by BAP− than by BAP+. For comparison purposes, panels C and D reproduce prior published …
Overall, these findings indicate that BAP+ participants did not show the normal pattern of dependence upon the eyes for judgments of emotions. Instead, they relied more heavily on the mouth. The pattern of use of facial information we found here bears a striking similarity to what we have reported previously in individuals with autism [3] (reproduced in Figure 3C,D). Direct difference classification images between the present data and our previously published data from autism subjects [3] showed that both BAP groups still used the eyes more than did autism subjects, although, as expected, the BAP+ group less so than the BAP− group. Since our groups had different gender ratios (see Methods), we also repeated all the above analyses solely on male participants; the overall pattern of results in the classification images remained significant, verifying that the group differences in the use of facial information we report were not driven by the different gender ratios between the groups (see Supplementary Figure 1).
To quantify the results from the classification images further, we calculated and contrasted SSIM scores, which focus specifically on the relative use of information from the eyes or the mouth (see Methods). The use of information from either the right or the left eye region was highest in the Control group, somewhat lower in the BAP− group, and lowest in the BAP+ group (right eye region SSIM: 0.24, 0.22, 0.20; left eye region SSIM: 0.27, 0.26, 0.24), consistent with what would be expected from the classification images. Also consistent with the classification images was the SSIM for the mouth, which was lowest in Controls and higher in the two BAP groups (0.29, 0.33, 0.33). By contrast, the SSIM values for the whole face or the nose did not differ between groups (0.50, 0.49, 0.49; and 0.46, 0.46, 0.46, respectively). It is intriguing to note the consistent rank order of SSIM scores for the left and right eye regions (Figure 4) as one goes from Controls, to BAP− parents, to BAP+ parents. Such a pattern is consistent with the idea that there is a continuum of genetic liability for autism, expressed in non-autistic relatives in ways that are milder but qualitatively similar to those seen in autism.
Figure 4 Numerical quantification of the use of facial information. The three bar graph categories plot the use of information from the right eye, left eye, and mouth region of the face. The y-axis plots the structural similarity metric (SSIM; see Methods), a (more ...)
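The structural similarity metric itself has a compact closed form (Wang et al.'s SSIM); a minimal global implementation, omitting the sliding-window averaging commonly used in practice, looks as follows. Whether the original analysis used a windowed or global variant is not specified here, so this is an illustration of the metric rather than the exact pipeline.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Global SSIM between two images (Wang et al. 2004), without windowing."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(2)
a = rng.random((32, 32))
b = rng.random((32, 32))
# identical images score exactly 1; unrelated noise images score near 0
```

In the analysis above, a region of a group's classification image would be compared against a reference to yield the per-region scores reported in the text.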
To produce a summary measure capturing the pattern apparent in the classification images, we took the maximum of the SSIM for the two eye regions and divided it by the SSIM for the mouth region. We found statistically significant differences on this summary measure: the Control group had the highest value, followed by the BAP− group, followed by the BAP+ group. Non-parametric contrasts confirmed the reliability of this finding: Controls differed from BAP+ participants (P<0.05, one-tailed Mann-Whitney U test, corrected for multiple comparisons) and from BAP− participants (P<0.05); the contrast between the BAP− and BAP+ groups did not reach statistical significance.
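As an arithmetic check, applying this summary measure to the group-mean SSIM values quoted above reproduces the reported ordering (Controls > BAP− > BAP+). Note that the statistical tests were run on individual-subject values, which are not reproduced here; this only recomputes the group means.

```python
# Group-mean SSIM values quoted in the text: (right eye, left eye, mouth)
ssim_means = {
    "Controls": (0.24, 0.27, 0.29),
    "BAP-":     (0.22, 0.26, 0.33),
    "BAP+":     (0.20, 0.24, 0.33),
}

# Summary measure: max of the two eye-region SSIMs divided by the mouth SSIM
ratio = {group: max(r_eye, l_eye) / mouth
         for group, (r_eye, l_eye, mouth) in ssim_means.items()}

for group, value in ratio.items():
    print(f"{group}: {value:.2f}")   # Controls ~0.93, BAP- ~0.79, BAP+ ~0.73
```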
The present study complements findings in infant siblings of children with autism, who show disproportionate gaze onto the mouth when interacting with people, even in the absence of any later development of autism [10]. Eye-tracking studies of adolescents with autistic siblings have found reduced eye fixation compared to controls, and both the autistic individuals and their non-autistic siblings had abnormally reduced amygdala volumes [11]. Pathology of the amygdala has been one among several hypotheses for social dysfunction in autism [12], although its precise role remains debated [15]. We have previously reported that neurological subjects with focal amygdala lesions fail to make use of the eye region of faces, showing classification images on the same Bubbles Task that are notable for an absence of the eyes [16]. However, while subjects with amygdala lesions fail to make normal use of the eyes, they do not appear to make increased use of the mouth. The present findings in parents of autistic children, together with the sibling studies noted above [10] and findings in individuals with autism [3], are consistent with the hypothesis that there is active avoidance of the eyes in autism; but our findings also fit the hypothesis that there is reduced attraction to the eyes and/or increased attraction to the mouth. Of course, these hypotheses are not mutually exclusive; additional studies will be required to determine their relative importance.
This is the first study to quantify a specific cognitive endophenotype defining a distinct face-processing style in the parents of individuals with autism. We found a pattern of face processing (increased use of the mouth and diminished use of the eyes) similar to that seen in autistic individuals. Our finding emerged from a difficult and sensitive task, and it will be important to extend it to whole faces viewed in the real world, an issue that could be probed further with methods such as eye tracking. The face-processing style we found appears to segregate with a specific component of the Broad Autism Phenotype, aloof personality; further studies with larger samples will be needed to explore possible correlates with other dimensions of the BAP, such as rigid personality. Taken together, the findings provide further support for a Broad Autism Phenotype and suggest avenues for isolating the genes that influence social behavior in autism.