This study aimed to examine whether facial features that are diagnostic of the current emotional expression are processed automatically, irrespective of the task at hand and the position in the visual field. In Experiment 1, eye movements were recorded while participants performed an emotion classification task, a gender discrimination task, or a passive (oddball) task. To examine whether facial expressions trigger fast, potentially reflexive, eye movements, half of all stimuli were presented briefly (150 ms), whereas the other faces were shown for 2000 ms. The initial fixation was controlled by unpredictably shifting faces either downward or upward on each trial, such that participants initially fixated the eye or the mouth region, respectively. In Experiment 2, participants classified the emotional expressions of faces presented in the upper half, the middle, or the lower half of the display screen.
In both experiments, the time spent looking at the eye region clearly exceeded the time spent looking at the mouth. Moreover, participants in Experiment 1 made few saccades away from the eye region but frequently shifted their gaze from the mouth to the eyes. This gaze pattern occurred even when saccades did not allow further information to be extracted from the stimuli (i.e., at the short presentation duration). Experiment 2 indicates that these results do not reflect a general bias toward attending the upper visual field but rather suggests that initial saccades target the facial feature nearest to fixation. These findings substantiate the conclusion that the eyes of a conspecific – even independent of the depicted emotional expression – provide important information that needs to be assessed quickly.
Eye gaze is a crucial part of human non-verbal communication and interaction, as gaze cues signal the current focus of attention to the interaction partner and facilitate interactions. In line with this reasoning, healthy as well as autistic individuals begin exploring depicted faces significantly more often at the left or right eye than anywhere else. These data suggest that the eyes play a crucial role in exploring and recognizing faces as well as in analyzing the emotional content of a particular expression. Our study further indicates that the eye region of conspecifics is processed preferentially across different facial expressions, positions in the visual field, and experimental tasks.
Our second main finding was that the number of gaze shifts toward the eyes depended on the diagnostic relevance of this facial feature for decoding the emotional expression. This finding was stable across all tasks, presentation times, and positions in the visual field. Participants showed a stronger preference for the eyes when viewing fearful or neutral expressions but tended to shift their gaze more often toward the mouth when a happy face was presented. Similar effects were obtained for fixation durations in both experiments: participants spent more time looking at the eyes of neutral or fearful faces and fixated relatively longer on the mouth region when happy expressions were shown. This indicates that diagnostic emotional information is evaluated irrespective of experimental conditions. Since gaze shifting depended on the particular diagnostic facial feature even when faces were shown very briefly in Experiment 1, the processing of emotional expressions may take place preattentively. Such a preattentive emotion processing mechanism driving subsequent eye movements might be relevant for quickly and accurately determining another person’s emotional state. In human interactions, it is useful to be continuously aware of a partner’s feelings and intentions even if the current situation (i.e., “task”) does not require such information, because the interaction might quickly shift to a topic requiring a precise assessment of those feelings and intentions.
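To make the underlying analysis logic concrete, the following minimal sketch (Python) shows how fixation-based dwell-time preferences of this kind can be quantified. The rectangular regions of interest, the column names, and the toy data are hypothetical illustrations, not the actual stimuli or pipeline of this study.

```python
import pandas as pd

# Hypothetical rectangular regions of interest (x0, x1, y0, y1) in screen
# pixels; real ROIs would be defined per stimulus image.
ROIS = {
    "eyes":  (260, 540, 180, 280),
    "mouth": (320, 480, 420, 500),
}

def assign_roi(x, y):
    """Return the name of the ROI a fixation falls into, or None."""
    for name, (x0, x1, y0, y1) in ROIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

# Toy fixation data: one row per fixation (column names are assumptions).
fixations = pd.DataFrame({
    "emotion":     ["fear", "fear", "happy", "happy", "neutral"],
    "x":           [400, 400, 390, 350, 405],
    "y":           [220, 460, 460, 450, 210],
    "duration_ms": [310, 180, 260, 220, 290],
})
fixations["roi"] = [assign_roi(x, y) for x, y in zip(fixations.x, fixations.y)]

# Dwell time per emotion and ROI as a proportion of the total ROI dwell
# time within that emotion (fixations outside both ROIs are dropped).
dwell = (fixations.groupby(["emotion", "roi"])["duration_ms"].sum()
         .groupby(level="emotion").transform(lambda s: s / s.sum()))
print(dwell)
```

Computed this way, an eye-region preference shows up as a larger dwell proportion for "eyes" than for "mouth" within each emotion category.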
Additional analyses using a computational model of bottom-up visual attention indicate that neither of the above-mentioned results, the preferential scanning of the eye region and the enhanced processing of diagnostically relevant facial features, can be explained by low-level image properties. A reanalysis of the data of Experiment 1 restricted to stimuli with a comparable saliency ratio between the eye and the mouth region yielded results highly similar to the original analysis. Furthermore, comparable results were found with the specifically tailored stimulus set used in Experiment 2. Remarkably, participants fixated the eye region three to four times longer than the mouth region even for faces with similar saliency in both parts of the face (stimulus set 2).
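As an illustration of such a saliency-based stimulus control, the sketch below (Python) computes an eye-to-mouth saliency ratio. It uses a deliberately simplified intensity-contrast proxy rather than the full bottom-up attention model employed in the analyses, and the ROI coordinates are hypothetical.

```python
import numpy as np
from scipy import ndimage

def proxy_saliency(img):
    """Rough bottom-up saliency proxy: center-surround contrast of image
    intensity (a full model would also include color and orientation
    channels and multiple scales)."""
    img = img.astype(float)
    center = ndimage.gaussian_filter(img, sigma=2)
    surround = ndimage.gaussian_filter(img, sigma=8)
    return np.abs(center - surround)

def region_saliency(sal_map, box):
    """Mean saliency within a rectangular (x0, x1, y0, y1) box."""
    x0, x1, y0, y1 = box
    return sal_map[y0:y1, x0:x1].mean()

# Hypothetical eye/mouth boxes, as in the dwell-time sketch above.
EYES, MOUTH = (260, 540, 180, 280), (320, 480, 420, 500)

def eye_mouth_ratio(img):
    sal = proxy_saliency(img)
    return region_saliency(sal, EYES) / region_saliency(sal, MOUTH)

# Stimuli with eye_mouth_ratio(img) close to 1 have roughly balanced
# low-level salience in both regions and could be retained for reanalysis.
```

A reanalysis restricted to stimuli with a near-balanced ratio then tests whether the eye-region preference survives when low-level salience cannot account for it.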
Interestingly, participants performed at ceiling in the experimental tasks of both experiments. Thus, even when faces were shown for only 150 ms (Experiment 1) and participants were unable to move their gaze from the initial fixation to a different facial feature, they determined the gender or the emotional expression almost perfectly. This finding indicates that participants were able to perceive the face as a whole (which is also a necessary precondition for subsequent saccades toward specific diagnostic features) within this short duration. Since we additionally refrained from using backward masking, face processing could continue even after stimulus offset. The observed early recognition of emotional expressions is in line with studies showing high face recognition rates at presentation times well below 200 ms, as well as with larger amplitudes of early event-related brain potentials (N170) for facial displays containing diagnostic as compared to anti-diagnostic features. Taken together, emotional expressions seem to be detected very early, even without redirecting the gaze toward diagnostic features. However, the pronounced tendency to shift the gaze to facial features that are diagnostic for the depicted emotional expression, which was unaffected by task demands and partly occurred after stimulus offset, indicates that attention is automatically shifted to such diagnostic features even when these saccades do not contribute to explicit emotion recognition.
To further characterize to what degree the observed effects rely on featural as compared to configural or holistic processing, it might be worthwhile to conduct future experiments with upright and inverted faces. In general, holistic processing seems to contribute to face perception, since inverted faces are usually recognized more slowly than upright faces or other inverted objects. However, this so-called face-inversion effect was found to be substantially reduced when participants’ gaze was directed toward the eye region. Moreover, face inversion does not generally impair emotion recognition, which indicates that even for inverted faces a pattern of gaze changes similar to that of the current study might be obtained.
With respect to the neurobiological underpinnings of the present results, recent studies suggest that differential attention to facial features, and specifically to the eye region, may be mediated by the amygdala. For example, amygdala damage was shown to impair spontaneous gaze fixation on the eye region of facial stimuli presented in an experimental setting but also during free conversations. We recently demonstrated that amygdala activation in healthy individuals predicted the number of gaze changes toward the eye region of fearful faces. Furthermore, a strong positive correlation between fusiform gyrus as well as amygdala activation and the time spent fixating the eyes of happy, fearful, angry, and neutral faces was shown in autistic subjects. The amygdala can thus be robustly activated by a variety of facial expressions, but activations seem to be largest when the eye region is crucial for determining the emotional state of the counterpart (e.g., for fearful facial expressions). These findings indicate that the strong preference for processing the eye region of others that was also found in the current study might rely on an involvement of the amygdala in detecting salient facial features in the visual periphery and directing attention toward them. It remains an interesting question for future research whether the relatively enhanced attention to the mouth region of happy faces found in the current study also relies on amygdala functioning. Benuzzi and colleagues recently reported increased amygdala activity to whole faces as compared to parts of faces displaying neutral expressions and suggested that the amygdala might be involved in orienting attention toward socially relevant facial parts. Transferred to the current study, diagnostic emotional features might constitute such relevant parts, and it therefore remains an intriguing possibility that the amygdala also mediates directed attention toward these features. This hypothesis needs to be tested by future studies, for example using neuroimaging methods.
The current study revealed a very consistent pattern of preferentially scanning the eye region and attending to emotionally diagnostic facial features across several experimental conditions. The robustness of these findings in healthy individuals suggests that the current experimental paradigm might also be useful in patient groups. For example, it is still debated whether patients with autism spectrum disorders scan (emotional) faces differently from healthy observers. Reduced attention to the eye region in these patients has been linked to amygdala hypoactivation, and the variability in fixating the eye region within this group was found to be correlated with amygdala activity. Using an emotion classification task comparable to that of Experiment 1, it was recently demonstrated that individuals with autism spectrum disorders show enhanced avoidance of the eye region for briefly presented emotional expressions. In addition, the paradigm of Experiment 1 offers the unique possibility to examine the time course of social attention in such patients in more detail by differentiating fast, potentially reflexive, eye movements from the subsequent scanning behavior that is presumably under conscious control. Moreover, it remains an interesting question for future research whether the degree of bottom-up attentional capture in patients with autism spectrum disorders differs from that in healthy controls. Computing saliency maps and comparing eye movements to these low-level image statistics, as sketched below, might be a first step in this direction.
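One established way to quantify such correspondence between gaze and low-level image statistics is the normalized scanpath saliency (NSS): the mean z-scored saliency of the map at fixated locations. The minimal sketch below (Python) illustrates the computation; using it here to compare patient and control groups is our suggestion, not an analysis reported in the study. A saliency map such as the proxy above could serve as input.

```python
import numpy as np

def normalized_scanpath_saliency(sal_map, fix_x, fix_y):
    """Normalized scanpath saliency: mean z-scored saliency at fixated
    pixels. Values near 0 suggest fixations unrelated to low-level
    saliency; clearly positive values suggest bottom-up capture."""
    z = (sal_map - sal_map.mean()) / sal_map.std()
    xs = np.clip(np.asarray(fix_x, dtype=int), 0, sal_map.shape[1] - 1)
    ys = np.clip(np.asarray(fix_y, dtype=int), 0, sal_map.shape[0] - 1)
    return z[ys, xs].mean()
```

Group differences in NSS between patients with autism spectrum disorders and healthy controls would then indicate differing degrees of bottom-up attentional capture.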
A second clinical condition associated with abnormal face scanning is social anxiety disorder. Under free viewing conditions, these patients tend to avoid fixating the eye region of conspecifics. Instead, they show hyperscanning, as reflected by increased scanpath lengths and short fixation periods, also on non-diagnostic features such as hair or ears. Interestingly, these patients show hyperactivation of the amygdala. In line with the hypothesized functional role of the amygdala outlined above, one may speculate that patients with social phobia initially show enhanced reflexive gaze shifts toward the eye region but subsequently avoid scanning this feature to reduce an upcoming fear of being observed and evaluated by others. This hypothesis can be addressed with the paradigm of Experiment 1: patients with social phobia would be expected to show a large number of initial gaze shifts toward the eye region, whereas for longer viewing durations this pattern would be predicted to change into enhanced attention to non-diagnostic features and active avoidance of the eye region.
To sum up, our study clearly underlined the importance of a counterpart’s eye region in driving gaze behavior. Moreover, we showed that diagnostic facial features of emotional expressions are preferentially processed irrespective of task demands, position in the visual field, and low-level image statistics. These gaze preferences were evident very early after stimulus onset and occurred even when saccades did not allow further information to be extracted from the stimuli. Thus, they may result from an automatic detection of salient features in the visual field.