HHS Public Access: Author Manuscript (accepted for publication in a peer-reviewed journal)
J Exp Psychol Gen. Author manuscript; available in PMC 2018 January 1.
Published in final edited form as:
PMCID: PMC5289071

Try to Look on the Bright Side: Children and Adults Can (Sometimes) Override Their Tendency to Prioritize Negative Faces


We used eye tracking to examine 4- to 10-year-olds’ and adults’ (N = 173) visual attention to negative (anger, fear, sadness, disgust) and neutral faces when paired with happy faces in 2 experimental conditions: free-viewing (“look at the faces”) and directed (“look only at the happy faces”). Regardless of instruction, all age groups more often looked first to negative versus positive faces (no age differences), suggesting that initial orienting is driven by bottom-up processes. In contrast, biases in more sustained attention—last looks and looking duration—varied by age and could be modified by top-down instruction. On the free-viewing task, all age groups exhibited a negativity bias which attenuated with age and remained stable across trials. When told to look only at happy faces (directed task), all age groups shifted to a positivity bias, with linear age-related improvements. This ability to implement the “look only at the happy faces” instruction, however, fatigued over time, with the decrement stronger for children. Controlling for age, individual differences in executive function (working memory and inhibitory control) had no relation to the free-viewing task; however, these variables explained substantial variance on the directed task, with children and adults higher in executive function showing better skill at looking last and looking longer at happy faces. Greater anxiety predicted more first looks to angry faces on the directed task. These findings advance theory and research on normative development and individual differences in the bias to prioritize negative information, including contributions of bottom-up salience and top-down control.

Keywords: emotion, attention bias, development, executive function, anxiety

A growing number of studies reveal that not all emotions are equal: Children and adults exhibit heightened interest in negative versus positive emotional stimuli, a phenomenon known as the negativity bias (Baumeister, Bratslavsky, Finkenauer, & Vohs, 2001; Vaish, Grossmann, & Woodward, 2008). The current study adds to this research by investigating 4- to 10-year-olds’ and adults’ visual attention to negative emotion faces (anger, fear, sadness, and disgust) when paired with happy expressions. Of central interest was identifying normative age-related changes in where children and adults (a) look first, (b) look last, and (c) look longest in a free-viewing context, as well as their ability to attend only to positive faces in the presence of competing negative faces when given explicit top-down instruction. We further examined sources of individual differences during these two experimental conditions, including contributions of executive function (working memory and inhibitory control) and anxiety. The overarching goals were to determine the degree to which children and adults naturally exhibit a negativity bias and to assess their flexibility in overriding this tendency.

Converging data from diverse research literatures indicate that humans prioritize negative over positive information. For example, adults show increased response times and higher error rates for negative (compared with positive) words on Stroop tasks (Bradley, Mogg, White, Groom, & de Bono, 1999). Individuals over 7 years of age respond more rapidly to cues appearing on the side of negative versus positive stimuli (Bradley, Mogg, & Millar, 2000; Bradley et al., 1999; Pérez-Edgar et al., 2011). Moreover, children as young as 3 years and adults more quickly locate negative, especially threatening, stimuli compared with positive or neutral images on visual search tasks (Frischen, Eastwood, & Smilek, 2008; LoBue & DeLoache, 2008; Öhman, Lundqvist, & Esteves, 2001). Researchers have also shown that children and adults engage in more frequent causal talk and exhibit more sophisticated social–cognitive reasoning in negative (compared with positive) contexts, give greater weight to negative versus positive information when making decisions, and exhibit better memory for negative versus positive events (Baltazar, Shutts, & Kinzler, 2012; Cacioppo & Berntson, 1994; Dunn, Bretherton, & Munn, 1987; Fiske, 2000; Lagattuta, 2014; Lagattuta & Sayfan, 2013; Lagattuta & Wellman, 2002; Tversky & Kahneman, 1991; Ybarra, 2002).

Although the negativity bias occurs across multiple aspects of attention and reasoning, little is known about continuities and discontinuities in the strength of this bias across normative development. This empirical gap is largely rooted in methodology: Measures for adults cannot always be used with children and vice versa. The current study addressed this shortfall by utilizing eye tracking to measure 4- to 10-year-olds’ and adults’ attention to negative faces when shown next to happy faces. Because this approach does not require reading skills or specific behavioral responses (e.g., pressing a certain button), the same procedures can be administered across a very wide age range. To our knowledge, eye-tracking studies measuring prioritized attention to negative versus positive emotional faces using the same task with children and adults have not been done. Related research, however, shows that neural reactivity to negative visual information, as measured by the late positive potential (LPP), declines between 7 years and adolescence, between adolescence and adulthood, and between early and late adulthood (Kisley, Wood, & Burrows, 2007; Kujawa et al., 2012; MacNamara et al., 2016; MacNamara, Post, Kennedy, Rabinak, & Phan, 2013). In addition, amygdala activation when viewing fear faces dampens between 8 years of age and adulthood (Guyer et al., 2008; Hwang, White, Nolan, Sinclair, & Blair, 2014; but see Todd, Evans, Morris, Lewis, & Taylor, 2011). Thus, we hypothesized that there would be an age-related decline in the tendency to look more at negative versus positive emotional faces, with 4- to 5-year-olds exhibiting the strongest bias and adults the weakest.

We examined visual attention to happy faces versus four types of negative emotions (anger, fear, sadness, disgust). Prior research suggests that fear and anger expressions may be most salient due to their evolutionary significance, although systematic direct comparisons between “threat” and “nonthreat” negative emotions are rare (Bradley et al., 2000; Calvo & Nummenmaa, 2008; Georgiou et al., 2005; Hsing, Mohr, Stansfield, & Preston, 2013; LoBue, Matthews, Harvey, & Thrasher, 2014; Pratto & John, 1991; Vaish et al., 2008). Researchers have also proposed that individuals attend most to information that is motivationally significant or unusual (e.g., MacNamara et al., 2016; Mather & Carstensen, 2005). For example, older children and adults exhibit heightened amygdala and LPP responses to emotional (negative and positive) compared to nonemotional stimuli (Hajcak & Dennis, 2009; MacNamara et al., 2013), and adults look more at emotional versus neutral images (Calvo & Lang, 2004; Nummenmaa, Hyönä, & Calvo, 2006). To test this broader salience hypothesis, we included happy-neutral face pairs. Based on the extant research, we predicted that the negativity bias would be strongest for anger and fear. We further expected participants to prioritize happy over neutral faces.

An additional benefit of using eye tracking is that we can test for differences by age and emotion type in initial orienting (first looks) as well as in two metrics of sustained attention: last looks and looking duration (attention bias). Although there are significant debates in the literature (see Burnham, 2007; Corbetta & Shulman, 2002; Folk, Remington, & Wright, 1994), first looks are often presumed to be driven by bottom-up stimulus salience. In contrast, the latter two indices (last looks and attention bias) are argued to be under top-down control because they reflect more sustained, volitional attention occurring later in time (Mather & Carstensen, 2005; Posner, 1980; Theeuwes, Atchley, & Kramer, 2000). If first looks are driven by bottom-up salience, then the tendency to look first at negative emotion faces should not vary by age. Moreover, if last looks and attention bias involve more deliberate, top-down control, then we should expect significant developmental changes, an argument which we will expand on below.

We created both free-viewing and directed task variants. Children and adults first viewed paired emotional faces while told to “look at the faces.” This assessed how children and adults inspect faces without any prompt for how to regulate their attention. Participants next viewed the same pairs (different random order) after being told to “look only at the happy faces.” Primary interest was in how well children and adults could use the experimenter-provided strategy. We further explored whether participants would fatigue when using this top-down goal. There is growing evidence that adults’ top-down goals can trump attention to emotional or threatening images (Corbetta & Shulman, 2002; Folk, Remington, & Johnston, 1992; Moskowitz, 2002; Vogt, De Houwer, & Crombez, 2011; Vogt, De Houwer, Crombez, & Van Damme, 2013), although not all studies support this theory (see Nummenmaa et al., 2006; Theeuwes, 1992, 2010). Moreover, according to vigilance-decrement theory (Davies & Parasuraman, 1982) the ability to purposefully direct attention decays over time. Here, we tested these theories from a developmental perspective. Manipulating task instructions not only measured top-down flexibility in attention to emotional information, but also tested bottom-up salience theories. We hypothesized that instruction type should influence last looks and attention bias (i.e., a shift to a positivity bias on the directed task), but should not influence first looks (i.e., initial orienting will more often be toward negative faces). We further anticipated that a positive attention bias on the directed task would decline across trials, especially for younger participants.

One mechanism that may support developmental changes in attention to emotional faces, including flexibility in top-down control, is age-related improvement in executive function (EF), higher-order cognitive processes associated with the prefrontal cortex (e.g., Alvarez & Emory, 2006; Bunge & Wright, 2007). There are substantial gains in working memory (WM; the capacity to keep multiple pieces of information in mind) and inhibitory control (IC; the ability to regulate dominant responses) between the ages of 3 and 10 and between childhood and adulthood (Best & Miller, 2010; Conklin, Luciana, Hooper, & Yarger, 2007; Kramer, Lagattuta, & Sayfan, 2015; Lagattuta, Sayfan, & Monsour, 2011; Zelazo, Carlson, & Kesek, 2008). Higher WM and IC may enable participants to direct their attention more toward positive, pleasant images. Indeed, researchers have speculated that decreases in LPP and amygdala responses to negative emotional stimuli between childhood and adulthood may be caused by the increasing functional integrity of prefrontal brain regions involved in top-down control (see Hwang et al., 2014). Thus, assessing relations between EF and visual attention provides an additional test of bottom-up versus top-down theories. If first looks are driven by bottom-up salience, then EF should not predict first orienting to emotion faces regardless of task instruction. If last looks and attention bias are under top-down control, then individuals with higher (vs. lower) EF should show a weaker negativity bias during free-viewing and a stronger positivity bias on the directed task. Because looking only at happy faces demands greater control than just looking at faces, we further predicted that EF would explain more of the variance in the directed version.

We included an additional variable of theoretical interest, anxiety. Studies have shown that anxious adults orient more quickly to threatening pictures than nonanxious individuals, especially anger and fear faces (Bar-Haim, Lamy, Pergamin, Bakermans-Kranenburg, & van IJzendoorn, 2007; LoBue & DeLoache, 2008; Mogg & Bradley, 2002), and they have difficulty directing their attention away from negative pictures compared to nonanxious controls (Fox, Derakshan, & Standage, 2011; Georgiou et al., 2005; Ladouceur et al., 2009). These relations between anxiety and attention to emotional stimuli are presumed to hold for children, although the developmental literature is sparse (see In-Albon, Kossowsky, & Schneider, 2010; Pérez-Edgar et al., 2011; Pérez-Edgar, Taber-Thomas, Auday, & Morales, 2014). In a meta-analysis, Bar-Haim et al. (2007) concluded that negative attention biases are heightened in anxious children and adults compared to less anxious individuals, and that these attention biases may be implicated in the emergence and maintenance of anxiety disorders. Using a low-risk, nonclinical sample, we examined relations between individual differences in anxiety and children’s and adults’ attention to emotional faces in the free-viewing and directed tasks. Given mounting evidence that more anxious individuals have lower EF, at least in adults (Bishop, 2007; Cisler & Koster, 2010; Eysenck, Derakshan, Santos, & Calvo, 2007), a more stringent test of the theorized anxiety-negativity bias link requires accounting for variability in EF. Thus, we tested the hypothesis that once controlling for age and EF, anxiety would still contribute additional variance, particularly for anger and fear pairs.

In summary, the current study aimed to advance theory and research on developmental changes and individual differences in the bias to prioritize negative emotional information, including contributions of bottom-up salience and top-down control. We assessed 4- to 10-year-olds’ and adults’ visual attention to paired negative versus positive emotional faces in two experimental conditions: (1) a free-viewing task, and (2) a directed task, where participants were instructed to look only at happy faces. Consistent with theories that initial orienting relies on bottom-up processes, we expected that first looks would not change with age, task instructions, or individual differences in EF. Given arguments that sustained attention is under top-down control, we hypothesized that all age groups would look last and look longer at negative faces during free-viewing, but that this bias would attenuate with age and with more advanced EF. Similarly, we expected last looks and attention biases to shift to a positivity bias on the directed “look only at the happy faces” task, with successful implementation of this goal improving with age and with higher EF. In accord with evidence for a link between anxiety and the negativity bias, we further predicted that children and adults higher (vs. lower) in anxiety would more often look first, look last, and look longer at negative emotional faces, even when controlling for age and EF. Because of their threat relevance, we anticipated that participants would orient more quickly, look last, and look longer at faces expressing anger and fear (compared to sadness and disgust), and that this would be especially true for individuals higher in anxiety.



Child and adult participants (N = 173) were divided into four age groups: 34 4- to 5-year-olds (range = 4 years, 0 months, to 5 years, 11 months; M = 5 years, 0 months; SD = 7 months; 20 females); 45 6- to 7-year-olds (range = 6 years, 2 months, to 7 years, 11 months; M = 7 years, 1 month; SD = 6 months; 26 females); 51 8- to 10-year-olds (range = 8 years, 1 month, to 10 years, 11 months; M = 9 years, 2 months; SD = 10 months; 24 females); and 43 adults (range = 18 years, 1 month, to 24 years, 3 months; M = 20 years, 2 months; SD = 1 year, 8 months; 23 females). Participants were separated into these groups based on prior research showing developmental gains in control of attention (Anderson, 2002) and EF (Kramer et al., 2015; Lagattuta et al., 2011) between these ages. Child participants were recruited through a pool of past participants, research fliers, and participant referral from areas surrounding a university town. Adults were recruited from psychology courses. No participant had a diagnosed psychiatric disorder (assessed via parent report for children and self-report for adults).

The child sample consisted of 79.2% Caucasian (non-Hispanic), 3.1% Asian American, 0.8% African American, 6.9% Hispanic, 9.2% mixed racial or ethnic heritage, and 0.8% other. The adult sample consisted of 53.5% Caucasian (non-Hispanic), 14.0% Asian American, 16.3% Hispanic, 7.0% mixed racial or ethnic heritage, and 9.3% other. All participants were fluent in English. For the child participants, 82.3% of fathers and 87% of mothers held college degrees. For the adults, 46.6% of the fathers and 53.5% of the mothers held college degrees.

Forty-eight additional participants (7 4- to 5-year-olds, 17 6- to 7-year-olds, 11 8- to 10-year-olds, 13 adults) were also tested. We excluded three adults because they did not complete the anxiety measure. To minimize the number of trials with missing data, we excluded 45 participants (20% of the sample) who spent less than 60% of the time looking at the emotional faces during either the free-viewing or the directed task. We selected this conservative cutoff a priori before conducting any statistical tests; eye-tracking data for excluded participants were not exported. Opting for a lower threshold would have increased the number of missing data trials, and a higher threshold (70% or higher) would have excluded more than 40% of the sample. Excluded participants did not differ from those included: 96% fell within 2 SDs of the mean anxiety score for their respective age group (100% within 3 SDs), 93% for IC errors (98% within 3 SDs), 98% for IC response time (100% within 3 SDs), and 93% for WM (100% within 3 SDs).

Materials and Procedure

Attention to emotional faces

First, we describe the central eye-tracking measures.

Picture stimuli

Picture stimuli included 60 images from the NimStim collection, a set of validated facial emotion images portrayed by actors of varied racial and ethnic backgrounds (Tottenham et al., 2009). We selected 15 female (8 Caucasian, 4 African American, 2 Asian, 1 Hispanic) and 15 male targets (9 Caucasian, 5 African American, 1 Hispanic) to make 30 paired-image slides that portrayed the same actor posing two different emotions: 4.5-inch × 3.5-inch vertical photographs separated by 1.5 inches on a black screen. Twenty-four pairings included one negative image and one positive image, while the other six pairings included one neutral image and one positive image. Negative emotions came in four categories: anger, fear, sadness, and disgust, with each category including six exemplars (3 male, 3 female). The positive emotion was always happy. Images were balanced for teeth (e.g., if the negative picture showed teeth so did the paired positive expression; see Calvo & Nummenmaa, 2008). Happy expressions appeared equally on the left and right side of the screen for each emotion category. Images appearing on the left were located at 2.39 inches horizontally and 2.92 inches vertically on the screen, while images appearing on the right were located at 7.39 inches horizontally and 2.92 inches vertically.

During testing, 4- to 7-year-olds sat in a booster seat with a headrest to minimize head movement. Older children and adults sat in a regular chair. The Tobii T-60 eye tracker was placed 24 inches from the participant on an adjustable arm. Before the testing phase, participants’ eye movements were calibrated using the standard 9-point protocol. Participants had to calibrate accurately to all points before the testing could start. The eye tracker measured eye movements throughout each trial at a rate of 60 Hz with 0.5-degree accuracy.

Free-viewing task

After calibration, the initial unrestricted phase began. To achieve a baseline recording for the participants’ attention, they were told, “Now, I am going to show you some pictures of people’s faces on my computer. I’m not going to ask you any questions at all. Your job is just to sit quietly and look at my pictures. It will take about two minutes.” The presentation started with a 1/8-inch red circle centered on the screen for 3 s to focus the eyes to the center. Then, the 30 paired-emotion slides were shown in random order (4 s per slide), with the red circle appearing between each slide for 0.5 s to refocus attention to the center.

Directed task

Following the free-viewing task, participants saw the same stimuli in a new random order. To begin the directed task, the experimenter stated, “Now, I am going to show you the same pictures on my computer. This time your job is to look only at the happy faces. Don’t look at the other faces.” The experimenter checked that the participant understood this top-down goal by asking, “So what are the only faces you should look at?” Once the participant acknowledged they should only look at the happy faces, testing began.

Creating areas of interest

Areas of interest (AOIs) were placed directly on top of each photograph. There were 120 total AOIs (60 free viewing, 60 directed task). We utilized the Tobii I-VT fixation filter. This filter defined fixations according to the following criteria: minimum fixation duration of 60 ms, velocity threshold of 30 deg/s, maximum angle between fixations of 0.5 degrees, and maximum time between fixations of 75 ms. We exported data on time to first fixation and fixation duration (looking time) for each AOI.

Identifying first looks

Using the time to first fixation data, we determined the first face each participant looked at for each trial of each emotion pair type (i.e., whichever face had the shorter latency for first fixation). First looks to a happy face received a score of 1, and first looks to the paired negative or neutral face received a score of 0; scores were then averaged across the trials of each pair type to create proportion scores. Thus, proportion score values above .50 indicate a positivity bias, scores below .50 reflect a negativity bias, and scores no different from .50 reveal no bias. If the participant looked at neither face on a particular trial, it was treated as missing data.
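To make the scoring concrete, the first-look rule above can be sketched in a few lines of code. This is an illustrative reconstruction, not the authors' analysis script; the data layout (per-trial latency pairs, in ms) and all function names are assumptions.

```python
# Illustrative sketch of first-look scoring. Each trial is a pair of
# time-to-first-fixation latencies (happy face, other face), in ms;
# None means that face was never fixated. All names are hypothetical.

def score_first_look(t_happy, t_other):
    """Return 1 if the happy face was fixated first, 0 if the paired
    face was, and None (missing data) if neither face was fixated."""
    if t_happy is None and t_other is None:
        return None                       # looked at neither face
    if t_other is None:
        return 1
    if t_happy is None:
        return 0
    return 1 if t_happy < t_other else 0

def proportion_first_looks(trials):
    """Proportion of valid trials whose first look went to the happy face.
    Values above .50 indicate a positivity bias, below .50 a negativity bias."""
    scores = [s for s in (score_first_look(h, o) for h, o in trials)
              if s is not None]
    return sum(scores) / len(scores) if scores else None

# Example: 6 invented anger-happy trials
anger_trials = [(310, 250), (280, 300), (None, 220),
                (400, 180), (260, None), (None, None)]
print(proportion_first_looks(anger_trials))  # 2 of 5 valid trials -> 0.4
```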

Coding last looks

Video recordings of participants’ eye movements for the free-viewing and directed tasks were exported. For all 30 trials of each task, coders identified the last look—which face in the pair the participant was looking at before the trial ended (0 = negative or neutral face, 1 = happy face). If a participant never fixated on either picture within a trial, it was treated as missing data. Three undergraduate research assistants independently coded last looks for a random subset of 45 participants (26% of participants; balanced for age). Coders were blind to all participant information. The remaining participants were divided equally among the three coders (balanced for age). Interrater reliability (kappa) between coders ranged from .91 to .97. As with first looks, we calculated proportion scores for each emotion pair type.
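For the kappa statistic reported above, a minimal computation over two coders' binary last-look codes might look like the following; the codes shown are invented for illustration, not actual study data.

```python
# Cohen's kappa for two coders' binary last-look codes
# (0 = negative/neutral face, 1 = happy face). Illustrative sketch only.

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement for two equal-length lists of 0/1 codes."""
    n = len(coder_a)
    # Observed proportion of trials on which the coders agree
    p_obs = sum(x == y for x, y in zip(coder_a, coder_b)) / n
    # Expected agreement if the coders assigned codes independently
    p1_a, p1_b = sum(coder_a) / n, sum(coder_b) / n
    p_exp = p1_a * p1_b + (1 - p1_a) * (1 - p1_b)
    return (p_obs - p_exp) / (1 - p_exp)

coder1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
coder2 = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
print(round(cohens_kappa(coder1, coder2), 2))  # 9/10 raw agreement -> 0.8
```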

Calculating attention bias

Using fixation duration, we calculated the attention bias for each trial: (looking time to happy face minus looking time to paired negative or neutral face) divided by (total looking time to both faces). Thus, scores greater than zero indicate a positive attention bias, scores less than zero indicate a negative attention bias, and scores not differing from zero equal no bias. Trials were coded as missing if the participant failed to look at either face.
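The per-trial formula above translates directly into code. A minimal sketch with invented fixation durations; the function name is an assumption, not from the original analyses.

```python
# Per-trial attention bias: (happy - other) / (happy + other),
# where inputs are total fixation durations (seconds) on each face.
# Durations below are invented for illustration.

def attention_bias(look_happy, look_other):
    """Positive values indicate a positive attention bias, negative
    values a negative bias; a trial with no looking to either face
    is treated as missing data (None)."""
    total = look_happy + look_other
    if total == 0:
        return None  # participant looked at neither face
    return (look_happy - look_other) / total

print(attention_bias(3.0, 1.0))  # 0.5  -> positive attention bias
print(attention_bias(1.0, 3.0))  # -0.5 -> negative attention bias
print(attention_bias(2.0, 2.0))  # 0.0  -> no bias
```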

Executive function measures

Participants completed one measure of working memory and two inhibitory control tasks. We selected the following measures because they have adequate variability across these age groups as well as evidence of predictive validity (Kennedy, Lagattuta, & Sayfan, 2015; Lagattuta, Elrod, & Kramer, 2016; Lagattuta, Sayfan, & Blattman, 2010; Lagattuta, Sayfan, & Harvey, 2014).

Working memory

For the memory for sentences task (Stanford–Binet Intelligence Scale, 4th ed.), the experimenter read aloud a sentence, and the participant attempted to repeat it back verbatim. Statements became increasingly long and complex over trials. Scoring followed standard procedures (Thorndike, Hagen, & Sattler, 1986).

Inhibitory control

Participants responded to two Stroop-like card tasks: happy-sad (Kramer et al., 2015; Lagattuta et al., 2011) and day-night (Gerstadt, Hong, & Diamond, 1994). For each measure, participants were instructed to say the opposite label for each picture (i.e., for happy-sad, say “happy” for a sad face and “sad” for a happy face; for day-night, say “day” for a moon and “night” for a sun) across 20 trials in a fixed random order. Participants were scored for the number of errors (0–20) as well as for cumulative response time (total time to complete the 20-card set). A trial was considered correct if the participant gave the appropriate verbal response as his or her first and only response (see also Gerstadt et al., 1994). Percent agreement between two independent coders (based on a random 20% of the sample) for identifying the number of errors was 95% (100% for agreement within one error), and percentage agreement for identifying cumulative response time within 1 s was 100% (we used percentage agreement instead of Cohen’s kappa because these are ordinal, not nominal scores; see Cohen, 1968).

Measuring anxiety

Children’s anxiety symptoms were self-reported as well as evaluated by a parent. Adults self-reported their symptoms.

Child report of child anxiety

A modified version of the Spence Children’s Anxiety Scale (Spence, 1997, 1998) was used to measure children’s anxiety (Lagattuta et al., 2012, Study 2). The measure included 17 items: 6 statements regarding general anxiety, 4 about worry in uncertain situations, 4 concerning worry about animals, and 3 fillers. Children responded using a 5-point pictorial Likert scale: none of the time (empty rectangle), a little bit of the time (rectangle one-quarter filled with blue), some of the time (rectangle half filled), a lot of the time (rectangle three-quarters filled), and all of the time (completely filled blue rectangle). The experimenter read each statement out loud (e.g., “I feel worried”) and then asked the child to point to the rectangle that reflected how often he or she felt that way. Prior to the task, children received training and practice on how to interpret and use all of the scale points (see Lagattuta et al., 2012, for further details).

Parent report of child anxiety

Parents reported on their children’s anxiety using the same measure, except in written format. The stems of questions changed from “I…” to “My child….” Parents were instructed to answer the items to the best of their knowledge; they were not allowed to consult with their child or view their child’s responses.

Adult report of anxiety

Adult participants completed the Penn State Worry Questionnaire (PSWQ; Meyer, Miller, Metzger, & Borkovec, 1990). The PSWQ is a 16-item validated measure of worry and anxiety that uses a 5-point Likert scale ranging from not at all typical of me to very typical of me.

General procedure

Participants were tested individually by a female experimenter in a quiet room in a university research laboratory. Half of the children completed the measures of anxiety, working memory, and inhibitory control 1 week prior to the eye-tracking tasks (the other half in the reverse order). Although the adults completed all tasks in one visit to the laboratory, half of them responded to the individual difference measures prior to the eye-tracking tasks and the other half afterward. This study was part of a larger study investigating children’s and adults’ social cognition and future-oriented reasoning. Children were given small toys and a $15.00 gift card for participating. Adults received course credit.


Results are divided into four sections. We conducted preliminary analyses to confirm significant correlations between the two IC tasks (happy-sad and day-night) and to provide psychometric properties of the anxiety measures. Next, we examined how age, emotion type (anger, fear, sadness, disgust, and neutral), and instruction (free viewing vs. directed) affected eye movements, with age as a between-subjects variable and emotion type and instruction as within-subjects variables. Within this section, we explored first looks (the emotional face that children and adults looked at first) as an indicator of initial orienting. Measures of sustained attention included last looks (the emotional face that participants were fixating on when each trial ended) and attention bias (the emotional face that individuals spent relatively more time viewing). Third, we analyzed whether looking patterns remained stable versus changed across the 30 trials of each task. Finally, we investigated how individual differences in EF and anxiety related to looking patterns. For all analyses of variance, we used Tukey’s honest significant difference to follow up on main effects and simple effects tests to examine interactions. All reported results are significant at p < .05.

Preliminary Analyses

We confirmed that we could aggregate IC scores by combining happy-sad and day-night performance: Even when controlling for age, happy-sad and day-night were strongly correlated for both errors (r = .57, p < .001) and cumulative response time (r = .51, p < .001).

The internal reliability of child self-reported anxiety (α = .80) and parent-reported child anxiety (α = .82) was high. Children and parents utilized multiple scale points to respond to questions (Ms > 3.55 different scale points on the 5-point scale), indicating that no age group simply selected the “none of the time” and “all of the time” anchors (see Lagattuta et al., 2012, for additional details on psychometric properties). Because there was no relation between parent and child reports of children’s anxiety (r = −.05, p = .566), these scores were not combined. The internal reliability of the adult self-reported PSWQ (α = .92) was also high. Anxiety scores were z-scored by age group to normalize the data for primary analyses.
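Standardizing within age group, as described above, can be sketched as follows. The scores and group labels are invented for illustration; the original analyses used each participant's actual anxiety measure and the four age groups.

```python
# Hypothetical sketch of z-scoring scores within group: each score is
# standardized against the mean and SD of its own group, so scores
# from different measures (e.g., child scale vs. adult PSWQ) end up
# on a common scale. Data below are invented.
from collections import defaultdict
from statistics import mean, stdev

def zscore_by_group(scores, groups):
    """Return each score standardized within its own group."""
    by_group = defaultdict(list)
    for s, g in zip(scores, groups):
        by_group[g].append(s)
    stats = {g: (mean(v), stdev(v)) for g, v in by_group.items()}
    return [(s - stats[g][0]) / stats[g][1] for s, g in zip(scores, groups)]

scores = [10, 14, 12, 40, 50, 60]
groups = ["child", "child", "child", "adult", "adult", "adult"]
z = zscore_by_group(scores, groups)
print([round(x, 2) for x in z])  # [-1.0, 1.0, 0.0, -1.0, 0.0, 1.0]
```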

Primary Analyses

Age, task, and emotion

We ran a 4 (age: 4- to 5-year-olds, 6- to 7-year-olds, 8- to 10-year-olds, adults) × 5 (emotion: anger, fear, sadness, disgust, neutral) × 2 (instruction: free-viewing, directed) repeated-measures multivariate analysis of variance on proportion of first looks and last looks to the happy face and average attention bias. The multivariate analysis yielded main effects for age, F(9, 507) = 14.04, p < .001, ηp² = .20, emotion, F(12, 2028) = 3.95, p < .001, ηp² = .02, and instruction, F(3, 167) = 706.31, p < .001, ηp² = .93, qualified by Age × Instruction, F(9, 507) = 9.57, p < .001, ηp² = .15, and Emotion × Instruction interactions, F(12, 2028) = 2.08, p = .016, ηp² = .01. In the following sections, we describe these results by metric (first looks, last looks, and attention bias).

First looks

The univariate tests for first looks confirmed the main effect for emotion, F(4, 676) = 8.04, p < .001, ηp² = .05. Consistent with our hypotheses (recall that proportions less than .50 indicate a negativity bias), children and adults more frequently looked first at negative versus positive faces. There was no difference by specific negative emotion (ps > .227): anger (M = .48, 95% confidence interval [CI] [.46, .49]; compared to chance (.50): t(172) = −2.53, p = .012); fear (M = .47, CI [.45, .49]; t(172) = −3.55, p < .001); sadness (M = .47, CI [.46, .49]; t(172) = −3.57, p < .001); and disgust (M = .48, CI [.46, .50]; t(172) = −1.93, p = .055). Happy faces captured a greater proportion of first looks than neutral faces (M = .53, CI [.51, .55]; t(172) = 3.58, p < .001). As predicted, initial orienting toward emotional faces did not vary by age, F(3, 169) = 1.38, p = .251, ηp² = .02. In addition, giving children and adults the strategy to “look only at the happy faces” did not change initial orienting to negative stimuli, F(1, 169) = 0.55, p = .460, ηp² = .03.
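The one-sample t tests against chance (.50) reported above follow the standard formula t = (M − .50)/(SD/√n). A minimal sketch with invented proportion scores (the reported tests used all 173 participants' scores):

```python
# One-sample t statistic against chance (.50) for first-look
# proportion scores. Data below are invented for illustration.
from math import sqrt
from statistics import mean, stdev

def one_sample_t(xs, mu=0.50):
    """t = (sample mean - mu) / (sample SD / sqrt(n))."""
    n = len(xs)
    return (mean(xs) - mu) / (stdev(xs) / sqrt(n))

props = [0.45, 0.50, 0.40, 0.55, 0.42, 0.48]
print(round(one_sample_t(props), 2))  # negative t -> below-chance looking
```

A significantly negative t indicates first looks below the .50 chance level, i.e., a negativity bias in initial orienting.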

Last looks

The univariate tests for last looks confirmed the main effects for age, F(3, 169) = 40.17, p < .001, ηp² = .42, and instruction, F(1, 169) = 1042.36, p < .001, ηp² = .86, subsumed by an Age × Instruction interaction, F(3, 169) = 15.65, p < .001, ηp² = .22. As expected, in the free-viewing task (Figure 1), 4- to 10-year-olds more often ended trials by looking at negative or neutral faces than at happy faces (compared to chance: .50), ts < −2.59, ps < .012. Adults, in contrast, showed no last-look bias on the free-viewing task, t(42) = .78, p = .441, and they ended trials looking at the happy faces significantly more often than did children (ps < .013). During the directed task (Figure 1), participants tended to end trials fixated on happy faces, ts > 9.22, ps < .001. This positivity bias increased between every age group (ps < .001).

Figure 1
Proportion of last looks to happy faces by task and age. Error bars represent standard errors. Yrs = years. Values less than .50 indicate a negativity bias; values greater than .50 indicate a positivity bias. During the free-viewing task, only children ...

Attention bias

The univariate tests for attention bias confirmed the main effects for age, F(3, 169) = 77.33, p < .001, ηp² = .58, instruction, F(1, 169) = 2062.69, p < .001, ηp² = .92, and emotion, F(4, 676) = 2.64, p = .033, ηp² = .02, qualified by Age × Instruction, F(3, 169) = 35.49, p < .001, ηp² = .39, and Emotion × Instruction interactions, F(4, 676) = 5.72, p < .001, ηp² = .03. In the free-viewing task (Figure 2), all age groups exhibited the anticipated negative attention bias (compared to chance: 0.0), ts < −2.44, ps < .019. This bias, however, was weaker for adults than children (ps < .014), and for 8- to 10-year-olds compared to 4- to 5-year-olds (ps < .010). On the directed task (Figure 2), the valence of the attention bias reversed: Children and adults now exhibited a positive attention bias, ts > 9.23, ps < .001. As we hypothesized, this positive attention bias increased with age (all between-ages comparisons significant, ps < .001).
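The excerpt describes the attention-bias scale (0 = balanced looking; positive = positivity bias; negative = negativity bias) without giving its formula. One plausible per-trial operationalization is a normalized looking-time difference; the function below is an illustrative assumption, not the authors' stated metric:

```python
def attention_bias(dur_happy, dur_other):
    """Signed looking-time bias for one trial (hypothetical formula):
    0 = balanced looking, > 0 = positivity bias (more time on the
    happy face), < 0 = negativity bias (more time on the paired face)."""
    total = dur_happy + dur_other
    if total == 0:
        return 0.0  # no looks recorded on this trial
    return (dur_happy - dur_other) / total
```

Averaging this quantity across trials would yield a per-participant score on the same 0-centered scale as Figures 2 and 3.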

Figure 2
Average attention bias by task and age. Error bars represent standard errors. Yrs = years. Zero indicates balanced looking, scores higher than zero indicate a positive attention bias, and scores lower than 0 indicate a negative attention bias. During ...

Figure 3 shows the Emotion × Instruction interaction. In the free-viewing task, participants spent more time looking at negative or neutral faces compared to happy faces, ts < −6.69, ps < .001. In line with our predictions, this negativity bias was stronger when happy was paired with anger and fear than when happy faces appeared next to sad, disgust, or neutral faces (ps < .007). In the directed task, participants displayed a positive attention bias for every paired emotion, ts > 24.45, ps < .001; this bias was weaker for neutral than for anger (p = .019).

Figure 3
Average attention bias by task and emotion. Error bars represent standard errors. Zero indicates balanced looking to both faces, scores higher than zero indicate a positive attention bias, and scores lower than 0 indicate a negative attention bias. Participants ...

Trial order

To investigate if eye-movement patterns changed over time, we conducted two separate linear mixed models (LMMs), one for the free-viewing and one for the directed task. We used attention bias as our dependent variable (LMMs do not accommodate binary outcomes, which excluded the first-look and last-look metrics from these analyses). Both models included trial order (1 to 30), age (continuous), and emotion category (anger, fear, sadness, disgust, and neutral) as fixed effects with a random intercept and slope for trial order by participant (see Barr, Levy, Scheepers, & Tily, 2013). On the free-viewing task, there were no significant effects for trial, Fs < 1.03, ps > .392. Participants’ negative attention biases remained stable throughout the trials. In contrast, during the directed task, the analysis yielded main effects for age, F(1, 170.10) = 102.01, p < .001, and trial, F(1, 167.00) = 54.77, p < .001, qualified by an Age × Trial interaction, F(1, 165.47) = 8.70, p = .004; a finding consistent with the vigilance-decrement theory (Davies & Parasuraman, 1982). To examine the interaction, we conducted LMMs for each age group separately with trial as a fixed and random effect (intercept and slope). The effect of trial was significant for all age groups, Fs > 13.99, ps < .001, indicating that fatigue in implementing the top-down “look only at the happy faces” goal (opposite to the natural viewing tendency to look more at negative faces) occurred as the trials progressed. The slope of this drop was steeper for children (4- to 5-year-olds’ slope: −.007, CI [−.010, −.003]; 6- to 7-year-olds: −.008, CI [−.010, −.005]; 8- to 10-year-olds: −.006, CI [−.008, −.004]) than for adults (−.002, CI [−.004, −.001]).
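The reported analysis fit LMMs with random intercepts and slopes (typically done with statistical software such as lme4 or statsmodels). As a rough standard-library approximation of the vigilance-decrement check, one can estimate each participant's trial slope by ordinary least squares and average the slopes within an age group; a negative average indicates the positivity bias eroded over trials. A sketch, with hypothetical data shapes:

```python
def ols_slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def mean_trial_slope(participants):
    """participants: one list of per-trial attention-bias values per
    person, in trial order (1..k). Returns the average within-person
    slope; negative values suggest a vigilance decrement."""
    slopes = [ols_slope(range(1, len(y) + 1), y) for y in participants]
    return sum(slopes) / len(slopes)
```

Unlike a full LMM, this two-stage approach ignores shrinkage and unequal trial counts, but it conveys why steeper negative slopes for children than adults indicate faster fatigue in sustaining the top-down goal.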

Individual differences

We explored how individual differences in executive function and anxiety influenced attention to emotional faces (Table 1 displays the means for the individual difference measures by age group). We ran separate hierarchical linear regressions for the free-viewing and directed task (collapsing across trials within each task) for each of our metrics (first looks, last looks, and attention bias). Each regression included the following steps: (1) age (continuous), (2) executive function: IC errors, IC response time, and WM, (3) self-reported anxiety, (4) Age × IC Errors, (5) Age × Self-Reported Anxiety. The Age × IC Response Time and Age × WM interactions were excluded due to multicollinearity (variance inflation factors [VIFs] > 20.88; all other VIFs < 4.87). We verified that the same models reported below held when we excluded the neutral-happy trials. The only exception was that WM reduced to a trend (β = .14, p = .066) for the analysis on last looks (directed task); the EF step remained significant (R² = .48, F(4, 168) = 38.50; IC errors: β = −.22, p = .001; IC reaction time [RT]: β = −.19, p = .017). The amount of variance explained by IC also did not change if we substituted the composite score used in the primary analyses (aggregate of happy-sad and day-night) with either happy-sad or day-night alone.
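Hierarchical regression reduces to comparing R² across nested OLS models: each step adds predictors, and ΔR² is the resulting increment in explained variance. A self-contained sketch via the normal equations (illustrative only; the authors presumably used standard statistical software):

```python
def _solve(A, b):
    """Gaussian elimination with partial pivoting for Ax = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def r_squared(X, y):
    """OLS R^2; X is a list of predictor rows (intercept added here)."""
    Xa = [[1.0] + list(row) for row in X]
    k = len(Xa[0])
    # Normal equations: (X'X) beta = X'y
    XtX = [[sum(r[i] * r[j] for r in Xa) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(Xa, y)) for i in range(k)]
    beta = _solve(XtX, Xty)
    ybar = sum(y) / len(y)
    ss_res = sum((yi - sum(b * x for b, x in zip(beta, row))) ** 2
                 for yi, row in zip(y, Xa))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot
```

Under this scheme, the EF step's ΔR² would be `r_squared(X_age_plus_EF, y) - r_squared(X_age, y)`, with the predictor matrices matching steps 1 and 2 above.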

Table 1
Means (and Standard Deviations) for the Individual Difference Measures by Age

First looks

Neither age, nor EF, nor anxiety predicted participants’ first looks on the free-viewing (R²s < .03, Fs < 1.11, ps > .295) or directed task (R²s < .03, Fs < 2.38, ps > .125).

Last looks

On the free-viewing task, age explained a significant amount of variance (R² = .05, F(1, 171) = 9.34). With increasing age, individuals ended trials looking at the happy face more often (β = .23, p = .003). On the directed task, age (R² = .38, F(1, 171) = 104.79) and EF (R² = .49, F(4, 168) = 39.71, ΔR² = .11, p < .001) independently predicted last looks. Increasing age (β = .62, p < .001), fewer IC errors (β = −.21, p = .001), shorter IC response time (β = −.19, p = .020), and higher WM (β = .18, p = .021) each uniquely explained variance in children’s and adults’ ability to end trials looking at happy faces.

Attention bias

When we examined the effects of EF and anxiety on attention bias in the free-viewing task, only age was a significant predictor (R² = .13, F(1, 171) = 25.01, β = .36, p < .001). On the directed task (“look only at the happy faces”), age (R² = .51, F(1, 171) = 180.83) and EF (R² = .60, F(4, 168) = 64.11, ΔR² = .09, p < .001) explained significant variance in performance. As with last looks, increasing age (β = .72, p < .001), fewer IC errors (β = −.18, p = .001), shorter IC response time (β = −.19, p = .007), and higher WM (β = .17, p = .015) independently predicted greater success maintaining attention on the happy faces.

Further examining anxiety

Evidence suggests that individuals higher in anxiety may have lower EF (Bishop, 2007; Cisler & Koster, 2010; Eysenck et al., 2007), a relation shown in the current sample as well (correlation between self-reported anxiety and WM: r = −.16, p = .032). Thus, EF may have absorbed variance that overlapped with anxiety, reducing anxiety to a null effect. To check this, we reran the regression models but entered anxiety prior to EF. Anxiety was still unrelated to all attention metrics (ΔR²s < .01, −.10 < βs < .06, ps > .210). EF, however, remained a significant predictor of participants’ last looks and attention biases on the directed task even when controlling for anxiety (ΔR²s > .09, −.20 < βs < .21, ps < .020).

Because individuals with higher anxiety are theorized to be especially attuned to anger and fear (e.g., Bar-Haim et al., 2007), we conducted partial correlations controlling for age and EF between anxiety and the three eye-movement metrics (first looks, last looks, attention bias) separately for the anger and fear trials. Because we only included self-reported anxiety in the regression models, we also conducted correlations with parent-reported child anxiety. There was no relation between anxiety and attention to anger or fear on the free-viewing task (self-reported: −.07 < rs < .11, ps > .140; parent-reported: −.11 < rs < .15, ps > .103). In contrast, on the directed task, participants with higher self-reported anxiety (but not parent-reported child anxiety; r = .05, p = .620) more often looked at the anger faces first (r = −.22, p = .004).


Discussion

Consistent with our predictions, children and adults prioritized negative over positive emotional faces in the free-viewing condition, with heightened sustained attention to negative emotions (last looks and attention bias), but not first looks, decreasing with age and varying by emotion type. When provided the instruction to focus on happy faces (directed task), all age groups maintained the bias to orient first to negative emotion faces, indicating strong bottom-up salience of negative stimuli. Last looks and attention biases, however, shifted to a positivity bias, with significant age-related improvements in this top-down control. Strengthening these interpretations of bottom-up salience driving first looks and of top-down flexibility shaping sustained attention, individual differences in executive function (EF) had no relation to first looks in either experimental condition, whereas individuals higher in EF looked last and looked longer at happy faces during the directed task, independent of age. Controlling for age and EF, anxiety predicted initial orienting to anger on the directed task. These findings advance theory and research on the negativity bias by revealing children’s and adults’ natural tendency to attend more to negative information during free viewing, as well as by documenting which aspects of attention to emotional faces change with age, can be modified by top-down goals, and vary with EF and anxiety.

Initial Orienting and Bottom-Up Salience

Children and adults exhibited biased initial orienting to negative versus positive emotional faces and positive over neutral expressions regardless of task instruction (“look at the faces” vs. “only look at the happy faces”). There were no age-related changes, nor did individual differences in EF predict first looks. This developmental invariance and lack of relation to EF, combined with no differences between the free-viewing and directed tasks, align with interpretations that bottom-up salience drives first looks (Armstrong & Olatunji, 2012). That is, even when children and adults were exclusively searching for happy expressions, this intention did not alter their tendency to orient first to negative faces. Parallel to this, Nummenmaa et al. (2006) found that instructing adults to look at neutral objects did not prevent them from first orienting to negative (e.g., mutilation) or positive scenes (e.g., eroticism). We show that the initial salience of emotional faces (negative > positive; positive > neutral) operates similarly by 4 to 5 years of age. These data further fit with evidence from event-related potential studies that children and adults show a heightened LPP amplitude for negative versus positive and positive versus neutral stimuli (Kisley et al., 2007; MacNamara et al., 2013; MacNamara et al., 2016).

This negativity bias in initial orienting, a bias seemingly impervious to top-down instruction (but see Vogt et al., 2013) and unrelated to EF, aligns with theories proposing an evolutionary basis for quick detection of threatening or negative information (Baumeister et al., 2001; Cacioppo & Berntson, 1994; LoBue & DeLoache, 2008; LoBue, Matthews, Harvey, & Stark, 2014; LoBue, Matthews, Harvey, & Thrasher, 2014; Pratto & John, 1991; Rozin & Royzman, 2001; Vaish et al., 2008). This does not, however, preclude the possibility that faster attention to negative stimuli develops with age or is learned from experience. Preferential looking to fearful faces over happy or neutral expressions does not emerge until 7 months, coinciding with when infants first learn to crawl and parents start directing more negative affect toward them (see Hoehl, 2014; Leppänen, 2016). More frequent exposure to negative or hostile environments also increases perceptual sensitivity to anger (Cicchetti & Ng, 2014; Pollak & Tolley-Schell, 2003). Moreover, our data indicating that happy faces elicit more first looks than neutral faces support broader views that motivational salience or informativeness also drives first looks (Calvo & Lang, 2004). Thus, bottom-up salience is likely shaped by several variables (evolutionary significance, importance, novelty, experience, learning).

Sustained Attention and Top-Down Control

When asked to look at the faces, children and adults allocated more time to negative information. Although this negativity bias in sustained attention was unrelated to EF, it decreased with age (whether age was treated categorically or continuously); for adults, the bias appeared only in looking duration (attention bias), not in last looks. Emotion discrimination and labeling research helps to explain this across-age prioritization of negative information during free viewing as well as the developmental changes. Even when not asked to label emotions (as in the current study), adults automatically process the semantic meaning of emotional faces (Hofelich & Preston, 2012; Preston & Stansfield, 2008). On explicit labeling or matching tasks, happy faces elicit quicker and more accurate naming in both children and adults, with speed and correctness increasing with age (Gao & Maurer, 2009; Montirosso, Peverelli, Frigerio, Crespi, & Borgatti, 2010; Preston & Stansfield, 2008). More generally, there are substantial improvements in children’s ability to categorize, label, understand, and regulate emotions during childhood (Lagattuta, 2014; Saarni, 1999; Thompson & Lagattuta, 2006). Thus, longer sustained attention to negative faces during natural viewing may reflect that more time is needed to determine their discrete signal value. Strengthening this stance, sustained attention to neutral faces was higher than to happy faces (opposite to initial orienting): Neutral faces are ambiguous and may likewise require more time to decode (Cooney, Atlas, Joormann, Eugène, & Gotlib, 2006). The negativity bias in sustained attention during free viewing is likely also coinfluenced by factors driving initial orienting, including neural activity: Both LPP and amygdala responses to negative stimuli dampen over childhood and between childhood and adulthood (Guyer et al., 2008; Hwang et al., 2014; MacNamara et al., 2016).

When explicitly instructed to look only at happy faces, there was a linear age-related increase in looking longer and looking last at happy faces (with age as a categorical or continuous variable). The pronounced shift from a negativity bias to a positivity bias demonstrates that children as young as 4 to 5 years can execute top-down volitional control of sustained attention to emotional stimuli, a topic most heavily studied in adults (Corbetta & Shulman, 2002; Folk et al., 1992; Mather & Carstensen, 2005; Moskowitz, 2002; Posner, 1980; Theeuwes et al., 2000; Vogt et al., 2011, 2013). This ability to direct attention using a top-down goal, an aim opposite to natural viewing behavior, significantly improves with age. Consistent with the vigilance-decrement theory (Davies & Parasuraman, 1982), enacting a top-down goal that opposes prepotent tendencies requires cognitive effort and has limits: The positivity bias on the directed task weakened as trials progressed (declining more for children than for adults), and even adults fell short of perfect performance. These findings align with research showing advances between 3 and 9 years of age in attentional control on nonemotional tasks, for example, children’s ability to attend to color versus shape or to ignore peripheral information (Astle, Nobre, & Scerif, 2012; Iarocci, Enns, Randolph, & Burack, 2009).

Greater flexibility in implementing a top-down goal comes not only with age but also with advances in EF: Independent of age, individuals higher in EF (both WM and IC) more often looked last and looked longer at happy faces on the directed task. This supports hypotheses that higher functional integrity of prefrontal brain regions (known to be related to EF; Bunge & Wright, 2007; Casey et al., 1997) may improve top-down control of attention to emotional stimuli and dampen the amygdala and LPP response (see Hwang et al., 2014; MacNamara et al., 2016; Pessoa, 2009). Related studies further indicate that individual differences in EF are also associated with children’s and adults’ ability to regulate emotions, including their use of coping strategies such as reframing and distraction (Carlson & Wang, 2007; J. J. Gross, 1998; Hudson & Jacques, 2014; Mather & Knight, 2005; Riggs, Jahromi, Razza, Dillworth-Bart, & Mueller, 2006; Zelazo & Cunningham, 2007). Thus, EF may be implicated not just in how individuals experience, think about, and manage feelings, but also in regulating the visual input that elicits those feelings in the first place (see Isaacowitz, 2005).

Differences by Emotion Type

Adults have been found to exhibit a heightened negativity bias to anger and fear, often interpreted as due to the threat relevance of these expressions (Bradley et al., 1999, 2000; Georgiou et al., 2005; Leppänen & Nelson, 2012; Mogg & Bradley, 1999), with some evidence in children (LoBue & DeLoache, 2008; LoBue, Matthews, Harvey, & Stark, 2014; LoBue, Matthews, Harvey, & Thrasher, 2014; Pérez-Edgar et al., 2011). Consistent with these findings, children and adults spent more time looking at anger and fear expressions than at sad, disgust, or neutral faces when provided no instruction for how to visually inspect the faces. In contrast, emotion type did not matter for first looks or last looks during free viewing, nor did it impact initial orienting or sustained attention on the directed task. Thus, although our findings indicate a general bias to prioritize negative stimuli, when children and adults did treat some emotions as “special,” it was anger and fear that drew differential attention.

Anxiety and Attention to Emotional Faces

Anxiety failed to predict attention to emotional faces on either the free-viewing or the directed task when collapsed across trials; however, partial correlations (controlling for age and EF) revealed that participants with higher self-reported anxiety more often looked first at anger faces on the directed task than did those lower in self-reported anxiety. Parent reports of children’s anxiety had no relation to children’s attention. These largely null effects oppose some theoretical arguments (Bar-Haim et al., 2007; Bradley et al., 2000; Mogg & Bradley, 1998, 1999); however, they provide a critical reminder that links between anxiety and attention may be somewhat tenuous and context specific (Armstrong & Olatunji, 2012; Bar-Haim et al., 2007; Cheng et al., 2015; Dodd, Vogt, Turkileri, & Notebaert, 2016; Frischen et al., 2008; Vogt et al., 2013). Our data further indicate that anxiety-attention relations may be most robust for initial orienting to anger, a pattern that fits with theory and research suggesting that anxious individuals exhibit heightened vigilance to threat (C. Gross & Hen, 2004; Öhman, 1996; Öhman et al., 2001; Mogg & Bradley, 2002; Wells & Matthews, 1994), especially in contexts with higher cognitive demands (Dodd et al., 2016; Eysenck et al., 2007; but see Pessoa, McKenna, Gutierrez, & Ungerleider, 2002).

Implications for Clinical Practice and Intervention

There is rising empirical and clinical interest in the effectiveness of attention bias modification (ABM) training for reducing stress or anxiety, with the general premise being that learning to attend more to positive images may reduce maladaptive affective responses (Bar-Haim, Morag, & Glickman, 2011; Hakamata et al., 2010; Pérez-Edgar et al., 2014; Wadlinger & Isaacowitz, 2008). The current research informs these intervention efforts by providing needed information on developmental patterns and variability in nonclinical, low-risk populations. Foremost, these data highlight that heightened attention to negative versus positive emotions is normative; it is not specific to anxiety. Second, we document that individuals higher in EF showed greater proficiency in executing the top-down goal to look just at happy faces. These data suggest that ABM may decrease anxiety via improvements in cognitive control more generally (e.g., Pessoa, 2009), and they further indicate that efforts to improve EF outside of ABM may have downstream benefits for treating anxiety. Notably, performance on the nonemotional variant of the IC task (day-night), as well as WM, predicted participants’ attention bias during directed viewing as strongly as did the emotional IC task (happy-sad), suggesting that even nonemotional EF training may benefit those with anxiety. Indeed, training in EF has been found to transfer to improvements in several aspects of cognitive and emotional well-being across the life span (e.g., Basak, Boot, Voss, & Kramer, 2008; Dahlin, Nyberg, Bäckman, & Neely, 2008; Thorell, Lindqvist, Nutley, Bohlin, & Klingberg, 2009).


The current study advances theory and research on the negativity bias. Initial orienting was least mutable, indicative of bottom-up salience: The tendency to look first at negative versus positive emotional faces did not change with age, vary by instruction, or show any relation to EF. In contrast, indices of sustained attention provided evidence for top-down control: Negative last looks and attention biases attenuated with age, could be shifted to a positivity bias by explicit instruction, and showed strong connections to EF. Together, these data reveal that examining both biases and flexibility in attention as well as considering developmental changes and individual variability can provide informative new clues about how children and adults manage salient information in their environment.


No findings regarding the central eye-tracking tasks have been shared previously at a conference or online. Previous publications have included data from the individual difference measures: executive function (Kramer, Lagattuta, & Sayfan, 2015; Lagattuta, Sayfan, & Harvey, 2014; Lagattuta, Sayfan, & Monsour, 2011) and parent- and child-reported child anxiety (Lagattuta, Sayfan, & Bamford, 2012). This research was funded by a National Science Foundation grant to Kristin Hansen Lagattuta (0723375). Hannah J. Kramer was supported by the Predoctoral Training Consortium in Affective Science from the National Institute of Mental Health (201302291). We thank the children and adults who participated. We also thank Liat Sayfan, Samantha Frick, Amanda Blattman, Sandra Diaz, Christina Harvey, Karen Hjortsvang, Katie Kennedy, Elizabeth Lowen, and Alice Luu for assistance with this research.