Anxiety is a pervasive, impairing, and early appearing form of psychopathology. Even when anxiety remits, children remain at a two- to threefold increased risk for the later emergence of a mood disorder. Therefore, it is imperative to identify and examine underlying mechanisms that may shape early emerging patterns of behavior that are associated with anxiety. One of the strongest and first visible risk factors is childhood temperament. In particular, children who are behaviorally inhibited or temperamentally shy are more likely to exhibit signs of anxiety by adolescence. However, not all shy children do so, despite the early risk. We know that attention mechanisms, particularly the presence of attention biases toward or away from threat, can play a critical role in the emergence of anxiety. The current chapter will bring together these separate lines of research to examine the ways in which attention can modulate the documented link between early temperament and later anxiety. In doing so, the chapter will highlight multiple levels of analysis that focus on the behavioral, cognitive, and neural mechanisms in the temperament-attention-anxiety network. The chapter will help identify both markers and mechanisms of risk, supporting future work aimed at improving theory and intervention by focusing on attention biases to environmental threat.
Children can differ fundamentally in the ways in which they view and approach the world around them. While some children eagerly embrace the ambiguities and uncertainties of their environments as opportunities for discovery and surprise, other children retreat from the world, fleeing from these same uncertainties as markers of threat and risk. These patterns of behavior emerge from a complex equation incorporating in-born or biologically based emotional biases as well as learning processes deriving information from the environment. On the biological side of the equation, temperament-based patterns of approach and withdrawal have been linked to long-standing and stable profiles of socioemotional behavior [Fox, Henderson, Pérez-Edgar & White, 2008]. From an environmental perspective, we know that rearing environments, whether harsh and punitive or sensitive and nurturing, can also shape the ways in which children navigate their world [LoBue, 2013]. Adding to this complexity, the child’s own world view – as seen in patterns of attentional and interpretive biases – can step in to modify how he or she responds to surrounding events [White, Helfinstein & Fox, 2010].
Given the complex systems simultaneously at work shaping individual trajectories of development it is not surprising that there are a multitude of developmental pathways that emerge from seemingly equivalent starting points. For example, although infant temperament is one of the strongest early predictors of anxiety [Fox & Pine, 2012; Pérez-Edgar & Fox, 2005], the majority of temperamentally shy children do not go on to manifest an anxiety disorder [Degnan & Fox, 2007]. This pattern of early risk leading to relative normalcy may act as the developmental equivalent of the statistical construct of regression to the mean. Over the course of time, development appears to smooth away the jagged edges of early risk through naturally occurring maturational, experiential, and social processes [Degnan, Almas & Fox, 2010].
For a subset of children, however, the risks evident early in life persist, calcifying into a pattern of maladaptation throughout childhood and into adulthood. Children appear to be more open to prevention and intervention than adults [Pine, Helfinstein, Bar-Haim, Nelson & Fox, 2009]. As a result, it becomes increasingly difficult over time to redirect maladaptive trajectories. Thus, it is exceedingly important to identify and target the mechanisms at play early in life. These mechanisms – developmental tethers – bind children to specific trajectories and resist the normal ameliorative or ‘smoothing away’ process. From our lab’s perspective, developmental tethers grow out of the child’s individual early traits or biases. These biases provoke an environmental response. The child processes and interprets these responses and frames subsequent behaviors based on the conclusions drawn. This pattern of provocation and response can become cyclical, growing progressively more entrenched (and biased) with each successive iteration.
The current chapter will bring together separate lines of research in temperament and attention to examine the ways in which attention can modulate the documented link between early temperament and later anxiety. In doing so, the chapter will highlight multiple levels of analysis that focus on the behavioral, cognitive, and neural mechanisms in the temperament-attention-anxiety network. This includes observed social behavior, clinical assessments, computer-based attention tasks, psychophysiological techniques, and neuroimaging. The chapter will help identify the markers and mechanisms of risk, supporting future work aimed at improving both theory and intervention.
The psychological construct of temperament captures distinct patterns of neurochemistry, neuroanatomy, and gene expression which bias the ways in which individual children select, process, and respond to salient stimuli within their environments [Kagan, 2012; Rothbart, 2012]. Temperament-based differences are evident in the first months of life and may serve as the biological ‘seed’ for later personality [Rothbart, Ahadi & Evans, 2000]. Temperament-linked differences in outlook and behavior may also prove to be an important core mechanism for the later emergence of psychopathology [see also Hastings et al., this vol.]. In our laboratory, the focus has been on a specific temperamental type – behavioral inhibition. As infants, behaviorally inhibited children display signs of fear and wariness in response to unfamiliar stimuli [Schmidt et al., 1997] and this trait is marked by heightened vigilance, motor quieting, and withdrawal from novelty [Garcia Coll, Kagan & Reznick, 1984; Kagan, Reznick & Snidman, 1987]. By elementary school, many behaviorally inhibited children fear social circumstances, displaying poorly regulated social behavior and social reticence [Coplan, Rubin, Fox, Calkins & Stewart, 1994; Fox et al., 1995]. This, in turn, increases the likelihood of peer rejection, low self-esteem, and poor social competence [Rubin, Chen & Hymel, 1993; Schmidt, Fox, Schulkin & Gold, 1999]. Longitudinal studies of behavioral inhibition, and the broader construct of temperamental shyness, have found a marked increased risk for anxiety, particularly social anxiety, by mid-adolescence [Chronis-Tuscano et al., 2009; Kagan, Snidman, McManis & Woodward, 2001].
Despite this two- to threefold increase in risk for anxiety disorders, the majority of behaviorally inhibited children are not clinically anxious [Degnan & Fox, 2007]. Clearly, there must be a number of moderating influences that shape the trajectory from temperament to disorder. Past work suggests that parenting styles [Williams et al., 2009], parental anxiety levels [Biederman et al., 2001], and early schooling environment [Almas et al., 2011] all play a role in exacerbating or ameliorating early risk. Recently, a great deal of attention (pun intended) has focused on the role that systematic biases in early information processing patterns may play in shaping the emergence and course of anxiety. This will be the focus of the current chapter.
Cognitive models of anxiety suggest that attention biases toward threat may be causally implicated in the development of anxiety disorders [MacLeod & Mathews, 2012]. Early attention can be thought of as a gatekeeper, controlling which aspects of the environment are taken in for further processing while filtering irrelevant information out of awareness. Thus, attention mechanisms are central to our ability to carry out adaptive goal-directed behaviors [Crick & Dodge, 1994]. However, attention, as the gatekeeper to downstream information processing mechanisms, must also possess the flexibility to redirect resources to unexpected or ambiguous events in the environment, particularly if they are potentially threatening in nature.
LoBue suggests that humans have perceptual biases for threatening stimuli that are evident in infancy and that may set the stage for learning – drawing attention to important stimuli in the environment. Importantly, these biases precede the development of fear for these potential threats [Oldfield, 1971] and seem to be independent of exposure to the threat in the child’s environment [Penkunas & Coss, 2013]. While a perceptual sensitivity to threat may be a normative, evolutionary-based safety mechanism, there is growing evidence that a pronounced bias in this attention mechanism may lay the foundation for anxiety. Indeed, Todd et al. have argued that the predisposition to attend to specific emotion categories of the environment, ‘affect-biased attention,’ may act to shape broad patterns of socioemotional functioning by creating a habitual filtering process that privileges certain classes of information over other, less salient, classes.
Thus, hard-wired biases toward threat may, in some vulnerable populations, set the stage for later socioemotional difficulties. Indeed, many in the clinical literature make the clear declaration that attention plays a causal role in the emergence of anxiety and should therefore be a primary target of clinical intervention [Amir, Beard, Burns & Bomyea, 2009]. However, a number of critical open questions remain. For example, although general reviews and meta-analyses [Bar-Haim, Lamy, Pergamin, Bakermans-Kranenburg & van IJzendoorn, 2007] suggest a general pattern of attention bias toward threat, biases away from threat can emerge with manipulations of task parameters [Mogg, Bradley, De Bono & Painter, 1997; Mogg, Bradley, Miles & Dixon, 2004], specific diagnoses [Waters, Bradley & Mogg, in press], and exposure to stress prior to testing [Helfinstein, White, Bar-Haim & Fox, 2008]. Thus, any clinical utility may be limited until we better understand the parameters that shape the strength and directionality of any underlying bias to threat. In addition, questions regarding the early emergence of attention biases (as discussed below) will need to be addressed in order to better understand the mechanisms underlying any observed patterns of bias.
According to Wells and Matthews’ Self-Regulatory Executive Function model of emotional disorder, attentional processes are involved in the maintenance of emotional disorder because attention biases ‘diminish individuals’ ability to process information that is incompatible with their fears’ [Lonigan & Vasey, 2009]. Thus, a bias or vulnerability to threat may reduce one’s capacity to integrate information that would diminish fear, engendering a cycle of negative information processing that maintains anxiety. Individuals who are able to break this cycle by overriding the draw of negative information (and preventing biased attention and hypervigilance) are less likely to exhibit anxiety [Mathews & MacLeod, 1994]. For example, attention bias toward threat may predict self-reported anxiety only for children with low levels of attentional or effortful control [Lonigan & Vasey, 2009; Susa, Pitică, Benga & Miclea, 2012].
Most investigations and clinical manipulations of attention bias use a variant of the dot-probe task, originally developed by MacLeod et al. In this task, participants see two stimuli (one threatening and one nonthreatening) side by side, typically for 500 ms. The pair of stimuli is followed by a probe in one of the two stimulus locations. The participant is then required to respond as quickly and accurately as possible to the probe. Individuals display an attentional bias toward threat when they are faster to respond to probes that replace the threatening stimuli than to probes that replace the nonthreatening stimuli. This task has been modified on several occasions, changing the presentation time of the emotional stimuli, the position of the stimuli on screen (vertical vs. horizontal), and the stimuli themselves. For example, the original task used threat-related and neutral words [MacLeod et al., 1986]. Most of the current literature, however, uses threat-related pictures, particularly facial expressions, as the emotional stimuli. An important limitation of the dot-probe task is that, because the strength and directionality of bias are measured by comparing behavioral reaction times, it cannot determine whether an observed bias reflects a bias in initial orienting or difficulty disengaging from the threatening stimuli.
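To make the bias computation concrete, the following is a minimal illustrative sketch (hypothetical analysis code, not any published study’s pipeline) of how a dot-probe bias score is typically derived from reaction times:

```python
from statistics import mean

def dot_probe_bias(trials):
    """Compute an attention bias score from dot-probe trials.

    Each trial is a (condition, reaction_time_ms) pair, where condition is
    'congruent' (the probe replaced the threat stimulus) or 'incongruent'
    (the probe replaced the nonthreatening stimulus). A positive score means
    faster responses on congruent trials, i.e., a bias toward threat;
    a negative score indicates a bias away from threat.
    """
    congruent = [rt for cond, rt in trials if cond == "congruent"]
    incongruent = [rt for cond, rt in trials if cond == "incongruent"]
    return mean(incongruent) - mean(congruent)

# Example: responses 30 ms faster when the probe replaces the angry face
trials = [("congruent", 480), ("congruent", 500),
          ("incongruent", 510), ("incongruent", 530)]
print(dot_probe_bias(trials))  # 30 -> bias toward threat
```

Note that this difference score collapses across orienting and disengagement, which is exactly the interpretive limitation described above.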
Nevertheless, in a meta-analysis, Bar-Haim et al. demonstrated that regardless of these variations in the dot-probe task, and even when using other tasks (e.g., the emotional Stroop and Posner tasks), anxious individuals show a greater attentional bias toward threat-related stimuli than nonanxious individuals. In addition, evidence for the causal role of attention bias to threat in anxiety comes from experimental paradigms in which the attentional bias is modified toward or away from threat, producing changes in reported levels of anxiety and in anxiety displays during laboratory observations [Bar-Haim, 2010; Hakamata et al., 2010]. Emerging data suggest that healthy and clinically anxious children also show changes in stress reactivity and anxiety when exposed to experimental manipulation of attention bias [Eldar et al., 2012; Eldar, Ricon & Bar-Haim, 2008].
The evidence reviewed above supports the relation between attentional biases and anxiety. However, it does not address how these biases develop, the nature of their relation to anxiety symptoms, or whether they are a precursor or a symptom of anxiety. To answer these questions, longitudinal studies assessing attention biases over time and their relation to anxiety across development are required [Penkunas & Coss, 2013]. To our knowledge, such studies have yet to be published.
One way to begin answering such questions is to examine attention biases to threat early in development (i.e., in children) and in populations at risk for anxiety (e.g., behaviorally inhibited children). Far fewer studies have examined differences in attentional bias and anxiety disorders in children than in adults. However, the available evidence supports a pattern similar to the one observed in adults. Children and adolescents (ages 7–18) diagnosed with an anxiety disorder (generalized anxiety disorder (GAD), social phobia, or separation anxiety) by clinical interview exhibited greater threat bias on the dot-probe task using face stimuli relative to nonanxious comparisons [Roy et al., 2008]. Among children ages 8–12 clinically diagnosed as anxious (GAD, social phobia, separation anxiety, or specific phobia), those with higher levels of anxiety as assessed by self-report questionnaire exhibited threat bias on the faces dot-probe task [Waters, Henry, Mogg, Bradley & Pine, 2010]. Finally, among children with GAD, the magnitude of threat bias on the faces dot-probe task has been positively correlated with levels of anxiety symptoms [Waters, Mogg, Bradley & Pine, 2008].
Studies examining the relation between attentional bias and populations at risk for anxiety, such as behavioral inhibition, are also scarce. Nevertheless, results in the expected direction have been found; adolescents (mean age 15 years) characterized by laboratory observations and maternal reports as high in behavioral inhibition as toddlers (14 and 24 months) and early in childhood (4 and 7 years) displayed higher attention bias to threat (angry faces) on the dot-probe task compared to adolescents low in behavioral inhibition [Pérez-Edgar, Bar-Haim et al., 2010]. In addition, attention bias to threat moderated the relation between behavioral inhibition and withdrawn behaviors in adolescence as assessed by parent report, such that the relation between early behavioral inhibition and later social withdrawal was only evident in adolescents with an attention bias to threat. This relation between early inhibition and later withdrawal may emerge quite early, as attention bias patterns moderate this link at age 5 [Pérez-Edgar et al., 2011].
The impact of attention on patterns of social behavior may be felt even earlier in development. In our earliest examination of attention as a developmental tether, Pérez-Edgar et al. evaluated patterns of sustained attention (related to vigilance) in 9-month-olds. Infants watched fixation video clips as a distractor stimulus was presented intermittently in the periphery of the visual field. Vigilance was assessed by subtracting the time spent attending to the distractor from the time spent sustaining attention on the fixation. This study found that the group of infants who exhibited greater vigilance (less sustained attention) was more likely to show increases in behavioral inhibition from 14 months to 7 years. Moreover, initial behavioral inhibition levels predicted social difficulties, assessed during an observed dyadic interaction with an unfamiliar peer in adolescence, only for the group displaying high levels of vigilance.
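Schematically, the vigilance index described above is a simple difference of looking times. A hedged sketch (variable names are ours, not the study’s) might look like:

```python
def vigilance_index(sustained_fixation_s, distractor_look_s):
    """Difference score as described in the text: the time spent attending
    to the peripheral distractor is subtracted from the time spent
    sustaining attention on the central fixation. Lower scores reflect
    less sustained attention relative to distractor looking, i.e., the
    more vigilant group in the study described above.
    """
    return sustained_fixation_s - distractor_look_s

# An infant sustaining 12 s on the fixation clip and spending 8 s on
# peripheral distractors scores higher (less vigilant) than one with the
# reverse pattern.
print(vigilance_index(12.0, 8.0))  # 4.0
print(vigilance_index(8.0, 12.0))  # -4.0
```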
These findings illustrate that attentional biases relate to socioemotional functioning in a similar manner in adults, in pediatric anxiety, and in individuals simply at risk for the development of anxiety. Most importantly, these findings shed light on the possible role of attention bias in the etiology of anxiety disorders. Nevertheless, this area of research is in its earliest stages and much more remains to be explored. Questions remain regarding which aspects of attention are involved in the observed bias (e.g., orienting or disengagement) and how biases change across development, in order to determine possible sensitive periods for the development of these biases and their influences on anxiety. In addition, we are only now beginning to understand the biological conduits that may shape observed biases and reflect their effect on behavior across development. While there are early signs of a genetic component linked to variations in serotonin [Beevers, Gibb, McGeary & Miller, 2007; Fox, Ridgewell & Ashwin, 2009; Gibb, Benas, Grassia & McGeary, 2009; Pérez-Edgar, Bar-Haim, McDermott, Gorodetsky, et al., 2010], our focus here will be on the known neural correlates of attention bias.
In a pioneering examination of the biological correlates of attention bias to threat, Monk et al. investigated differences in brain activation between children and adolescents with GAD and healthy controls completing the standard dot-probe paradigm in the fMRI environment. The initial focus was on the limbic system, given the role of the amygdala in the circuitry of fear. Surprisingly, there were no differences in amygdala activity between groups. However, the GAD group, compared to the control group, showed higher levels of ventrolateral prefrontal cortex (vlPFC) activity on trials that contained an angry face. In addition, vlPFC activity was negatively associated with anxiety symptoms. Since the vlPFC has been widely implicated in regulatory processes, these results suggest a compensatory executive response in the GAD group, recruited to regulate other regions, such as the amygdala, that may be overactive. No significant differences in activation were found between the two groups in response to either happy or neutral faces relative to baseline, arguing for possible specificity in the response to angry faces.
A follow-up study by the same group [Monk et al., 2008] found support for this conclusion by presenting GAD adolescents with masked (17 ms presentation) emotional stimuli. Here, GAD youth, compared to the control group, showed greater right amygdala activity and a strong negative coupling between amygdala and vlPFC during exposure to masked angry faces. In addition, this amygdala activity was correlated with anxiety symptoms, linking both reactive and regulatory processes with the emergence of anxiety. Again, no group differences were found for masked happy faces. Indeed, trait anxiety has been positively associated with attention bias towards threat and activation in the vlPFC during the dot-probe task [Telzer et al., 2008], perhaps reflecting the use of cognitive control to disengage from threatening stimuli. Recent work in young adults with a history of behavioral inhibition [Hardee et al., 2013] contributes to the emerging picture of strong connections between regulatory and reactive responses to threat, manifest in the PFC and the limbic system. This study found stronger connections between the vlPFC and limbic sites for the participants with a history of behavioral inhibition, relative to non-inhibited peers. In addition, the level of connectivity was associated with concurrent levels of internalizing symptoms only in young adults with a history of behavioral inhibition.
Importantly, a recent study has indicated that activation patterns within the vlPFC and amygdala subserving threat bias on the dot-probe task are stable across time [Britton et al., 2013]. This study presented participants, ages 8–17 years, with the dot-probe task in an fMRI scanner on two separate occasions (an average of approximately 4 months apart), and found that activation was strongly correlated across the two time points in the vlPFC and amygdala, but in few other brain regions. This finding provides support for the proposition that attention biases to threat are linked to stable patterns of neural functioning. The data also suggest that any direct manipulations of attention bias (discussed in detail below) may be tracked by changes at the neural level.
While fMRI may help us localize the regions at play in attention biases to threat, the technology cannot help us with the chronometry – how biases in processing unfold over time. In an attempt to examine the timing of attention biases, Eldar et al. examined differences in event-related potential (ERP) components between anxious and nonanxious young adults during the dot-probe task. This study found differences during the presentation of the emotional stimuli in the C1 and P2 components, suggesting that processing differences emerge within the first 100 ms of the task and implicating early, automatic attention biases. This pattern is in line with the fundamental role of affect-biased attention in processing, as Todd et al. suggested. More recently, Shechner et al. measured attention bias in anxious and nonanxious youth during a 10-second exposure to angry, happy, and neutral faces, using eye-tracking methodology to record eye movements. They found that anxious youth displayed greater attention bias to threat. In addition, this bias occurred in the earliest phases of stimulus presentation: anxious youth made more initial fixations, and faster first fixations, to angry faces than to neutral faces.
Although our understanding of the psychological and biological mechanisms of attention bias to threat is relatively shallow, a newly thriving line of research is rapidly building on the available data [Britton et al., 2012; Hardee et al., 2013; Telzer et al., 2008]. Ongoing work [O’Toole & Dennis, 2013; Pérez-Edgar, Taber-Thomas, Thai, Morales & Danilo, 2013] is now beginning to examine the neural and psychophysiological correlates of attention bias before and after experimental intervention. These data will help us better understand the intriguing attention bias-anxiety relations first reported in the adult clinical literature.
Given the relationship between biases in attention and anxiety [Bar-Haim et al., 2007], as well as temperamental risk for anxiety [Pérez-Edgar, Bar-Haim et al., 2010; Pérez-Edgar & Fox, 2005; Pérez-Edgar et al., 2011], it has been argued that attention biases to threat may play a causal role (a) during development in the pathophysiology of the disorder [Bar-Haim, 2010], and (b) in the ongoing maintenance of the disorder [MacLeod et al., 1986]. One possibility is that a bias in the information attended to – for example, the selective favoring of negative information – alters affective information processing which, when compounded over time, results in affective disorder. This suggests that attention may be a fruitful target for preventive and therapeutic intervention for anxiety. In this vein, attention bias modification (ABM) training has been developed as a novel treatment that attempts to alter biases in attention and reduce the processing of negative information in the hopes of ameliorating or preventing internalizing problems [for reviews, see Bar-Haim, 2010; Hakamata et al., 2010].
Cognitive bias modification has been an important approach for alleviating and reducing vulnerability to anxiety [for review, see Britton et al., 2013]. Attention bias to threat presents an interesting target for treatment, given the causal role it appears to play in internalizing problems. The reasoning behind such a treatment is that if attention bias to threat causes anxiety, training attention away from threat may ameliorate existing symptoms and reduce the risk of anxiety (fig. 1) [Fox & Pine, 2012; Mathews & MacLeod, 2002]. Although the number of published studies is small, Hakamata et al. conducted a meta-analysis of the impact of ABM on anxiety using 12 published ABM randomized controlled trials and found that ABM significantly reduces anxiety relative to a control task.
MacLeod et al. made an early attempt to assess the impact of modifying attention bias on anxiety. The authors reported two separate studies, each with 64 undergraduate participants who completed a dot-probe ABM or control task. Similar to the original dot-probe task [MacLeod et al., 1986], the repeated trials of the attention training task present two stimuli differing in valence (negative/positive) followed immediately by a probe; the crucial difference is that in ABM the probe always (or usually) appears behind one of the stimulus types, thereby training attention toward that stimulus type. To demonstrate the viability of this paradigm, MacLeod et al. presented half of their participants with training away from threat and half with training toward threat. Results showed that attention training influenced attention biases as predicted, with the away group showing a bias away from threat and the toward group showing a bias toward threat. Moreover, although state emotions did not immediately differ between the groups, the ‘away from threat’ group showed significantly less negative emotion than the ‘toward threat’ group when later exposed to a puzzle-completion laboratory stressor. This exciting finding that training attention away from threat can reduce anxiety ignited interest in ABM as a potentially cost-effective, easy-to-administer, nonpharmacological tool for addressing anxiety.
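The structural difference between the assessment and training versions of the task can be made concrete with a small sketch (a hypothetical trial generator, not the authors’ software): in assessment, the probe location is independent of the threat location, whereas in training the probe location is made contingent on it.

```python
import random

def make_trials(n, mode, p_contingent=1.0, seed=0):
    """Generate simplified dot-probe trial specifications.

    mode: 'assess' - probe equally likely behind the threat or neutral
                     stimulus (standard bias assessment)
          'away'   - probe (usually) behind the neutral stimulus,
                     training attention away from threat
          'toward' - probe (usually) behind the threat stimulus
    p_contingent: proportion of training trials honoring the contingency
                  (1.0 = the probe always follows the trained location).
    Returns a list of (threat_side, probe_side) tuples.
    """
    rng = random.Random(seed)
    trials = []
    for _ in range(n):
        threat = rng.choice(["left", "right"])
        neutral = "right" if threat == "left" else "left"
        if mode == "assess":
            probe = rng.choice([threat, neutral])  # no contingency
        else:
            target = neutral if mode == "away" else threat
            other = threat if mode == "away" else neutral
            probe = target if rng.random() < p_contingent else other
        trials.append((threat, probe))
    return trials

# In 'away' training with full contingency, the probe never replaces threat
trials = make_trials(100, "away")
assert all(probe != threat for threat, probe in trials)
```

Over many such trials, responding quickly is implicitly rewarded for shifting attention to the trained location, which is the mechanism the training exploits.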
Several studies have followed-up on these initial results, further demonstrating the impact of ABM on internalizing problems in different populations and with varying outcome measures. Much as with initial attention bias studies, ABM studies have also employed a variety of stimuli, including affective words, IAPS pictures, and standard emotion-face stimuli such as the NimStim set [Tottenham et al., 2009]. Currently, the majority of ABM studies employ faces as they are considered ecologically valid and can be used across the lifespan [Fox & Pine, 2012].
One study tested 94 socially anxious undergraduates, who performed either a single session of dot-probe ABM with face (neutral/disgust) stimulus pairs, with the probe appearing behind the neutral stimulus to train attention away from threat, or a control dot-probe task in which the probe was equally likely to appear behind either stimulus type [Amir, Bomyea & Beard, 2010]. They found that the ABM group exhibited less attention bias to threat after training than the control group. Amir et al. then put all participants through a standard stress-induction public-speaking challenge. As expected, the ABM group performed significantly better in the speech, which the authors interpreted as suggesting that the ABM group had lower levels of anxiety during this behavioral performance. In young adults with GAD (n = 29), Amir et al. found that an eight-session dot-probe ABM training (with text stimuli) reduced attention bias to threat and decreased anxiety on self-report and interviewer measures for participants trained away from threat (relative to those who performed the control task).
The evidence suggests that ABM can effectively impact internalizing symptoms. The questions of how clinically significant the impact of ABM is, and whether it lasts beyond the immediate window of the training, were addressed in a study by Schmidt et al. The authors administered 8 sessions of ABM training (with face stimuli; neutral/disgust) or a control task to 36 patients with generalized social anxiety disorder, and found that 72% of patients in the ABM group, versus 11% of control patients, did not meet diagnostic criteria after training; these results were maintained at the 4-month follow-up. Thus, not only does ABM training have clinically significant effects on anxiety, but these effects also last at least on the order of months.
There is also evidence that computer-based ABM training can affect the experience of anxiety in real-world situations [See, MacLeod & Bridle, 2009]. In this clever study, 40 recent high school graduates were administered 15 daily sessions of internet-based word ABM or a control task during the month prior to a stressful life event (travelling abroad to begin college). Both the ABM and control groups exhibited increased state anxiety from pre-training to post-stressful event; however, relative to the control group, the ABM group exhibited a significantly smaller increase. That is, the impact of a real-world stressful event on participants’ state level of anxiety was reduced by ABM training.
The evidence discussed above shows that ABM training is a promising new treatment for reducing internalizing problems, particularly anxiety, in adults. However, internalizing problems typically emerge during childhood, and potentially fruitful new applications of ABM training are focused on the treatment of childhood anxiety and the prevention of later anxiety onset in children at risk. ABM may be particularly useful in children, who may not have fully developed the cognitive skills required to be successful in traditional cognitive behavioral therapy [O’Toole & Dennis, 2013; Pérez-Edgar et al., 2013]. While the currently available research on ABM in children is limited, it does suggest that this is a promising direction for attention training research and that more studies in pediatric populations are needed.
Eldar et al. conducted an early study of ABM in nonanxious youths and found that training toward threat increased vigilance to threat and stress-induced anxiety, but training away from threat did not have either effect. The lack of an effect of training away from threat might be explained by the fact that the sample (healthy, nonanxious youths) was not the intended or typical target for ABM treatment. Indeed, in recent years, studies of ABM in children with anxiety or subclinical internalizing symptomatology have yielded more promising results.
Rozenman et al. reported on the impact of ABM training in a case series of 16 children and adolescents diagnosed with anxiety disorders. All patients underwent active ABM training for 12 sessions of 160 trials (15–20 min) over 4 weeks, with standard face pairs displaying neutral and threatening (i.e., angry and disgusted) expressions, training attention away from threat. The study found that ABM was feasible in youths and may indeed be effective, as 12 of the 16 patients no longer met diagnostic criteria after treatment. However, the lack of a control group was a significant limitation.
Following this case series, studies have shifted to the gold standard of randomized controlled trials and have demonstrated the effectiveness of ABM in children. For example, in pediatric anxiety disorder, 4 weekly sessions of face (neutral/threat) ABM training away from threat were shown to be more effective in reducing anxiety symptoms than placebo tasks with no training contingency or with only neutral faces [Eldar et al., 2012]. Another randomized controlled trial by Bar-Haim et al. tested 34 high-anxious 10-year-olds who completed four sessions of ABM or a placebo task over two weeks. The authors found that, relative to the control group, the ABM group showed significantly faster disengagement from threat post-training and significantly less state anxiety in response to stress induction (being videotaped while attempting to solve difficult puzzles). Our lab is currently in the midst of a randomized controlled ABM trial with 9- to 12-year-olds high in behavioral inhibition. Although preliminary, our data also suggest that self-reported anxiety decreases after 4 weeks of ABM, relative to placebo controls.
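The logic shared by these dot-probe training and assessment tasks can be sketched in a few lines of Python. This is a hypothetical simplification for illustration only; the function names, the all-or-none probe contingency, and the example reaction times are assumptions, not the exact parameters of any study reviewed above.

```python
import random

# Minimal sketch of a dot-probe ABM trial. On each trial a threat face and a
# neutral face appear side by side, then a probe replaces one of them.
# Training away from threat places the probe behind the neutral face on every
# trial; a placebo task uses no contingency (probe location is 50/50).

def probe_location(threat_side, condition, rng=random):
    """Return the side ('left' or 'right') where the probe appears."""
    neutral_side = 'right' if threat_side == 'left' else 'left'
    if condition == 'train_away':     # active ABM: probe always at neutral face
        return neutral_side
    if condition == 'train_toward':   # comparison condition: probe at threat face
        return threat_side
    return rng.choice(['left', 'right'])  # placebo: no contingency

def bias_score(rt_incongruent, rt_congruent):
    """Attention bias index: mean reaction time when the probe appears
    opposite the threat face, minus mean reaction time when the probe
    replaces the threat face. Positive values indicate vigilance toward
    threat (faster responding at the threat location)."""
    return (sum(rt_incongruent) / len(rt_incongruent)
            - sum(rt_congruent) / len(rt_congruent))
```

For example, `bias_score([560, 580], [520, 540])` yields 40.0 ms, a threat-vigilant pattern; a score near zero after training away from threat would reflect the kind of bias reduction these trials report.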
The specific psychological mechanisms by which attention bias modification training tasks exert their effects on attention processes remain to be fully understood [Fox & Pine, 2012; Shechner et al., 2012]. For example, Heeren et al. tested variants of dot-probe-like ABM tasks to isolate which aspect of the task is the active ingredient. Participants were randomly assigned to receive one of four ABM tasks, which were altered to disentangle potential causal mechanisms by training only: (a) re-engagement to nonthreat, (b) disengagement from threat, (c) both disengagement from threat and re-engagement to nonthreat, or (d) neither (placebo). The study revealed that only training to disengage from threat reduced anxiety, while re-engagement to nonthreat had no effect by itself. These findings are in agreement with the available neuroimaging literature [Britton, Lissek, Grillon, Norcross & Pine, 2011; Telzer et al., 2008]. Thus, it seems that difficulty in disengaging from threat may be a crucial process in the maintenance of anxiety over time, and that ABM may act by altering this underlying developmental tether. Additional basic studies of this sort will be important going forward to home in on the specific mechanisms underlying anxiety-related attention biases and to better target attention bias modification treatments.
Building on what is known about the frontolimbic neural processes underlying attention bias and threat processing [Pine, 2007], researchers have begun to hypothesize about and explore the impacts of ABM training on neural functioning. According to Fox and Pine, ABM may work at the neural level by strengthening vlPFC inhibitory control over amygdala and limbic system functioning. Repeated training of attention toward neutral stimuli in an ABM task may build the neural disposition (encoded in vlPFC) to deploy attention toward nonthreatening information, so that in future trials (perhaps both experimental and real-world) the individual is more capable of inhibiting the bias to attend to threat, thereby stemming the cycle of negative information processing before it starts.
Research linking attention bias to threat with vlPFC and amygdala function [Monk et al., 2006, 2008; Pine, 2007], combined with evidence that ABM modifies attention bias to threat [Hakamata et al., 2010], supports the suggestion that ABM likely has impacts on this frontolimbic network. Furthermore, one fMRI study examined the impact of ABM training on neural functioning in 29 young, healthy adults [Browning, Holmes, Murphy, Goodwin & Harmer, 2009]. Following the procedures of MacLeod et al., Browning et al. trained half of the participants away from threat and half toward threat in one brief training session. Immediately after training, participants were placed in the scanner to perform an unrelated face-processing task. Those trained away from threat showed greater dorsolateral PFC activation to angry versus neutral faces than participants trained toward threat. Although the activation was dorsolateral rather than ventrolateral PFC, this difference is likely due to the task used in the scanner.
As with studies of attention bias, ERP studies can supplement our understanding of the potential functional mechanisms of ABM. As noted, it has been assumed that ABM impacts anxiety by improving attentional control [Fox & Pine, 2012], allowing individuals to break the early bias to attend to threat. If this is the case, then changes in anxiety-related behaviors fueled by changes in attentional control should be reflected not only in reported levels of anxiety, but also in neurobiological and physiological measures. The few studies examining this question have found support for this hypothesis.
An ERP study with anxious individuals [Eldar & Bar-Haim, 2010] found that ABM modified later attentional processes (N2, P2, and P3), reflecting more effortful or endogenous attentional mechanisms. However, an ERP study with nonanxious individuals [Dennis, O’Toole & DeCicco, 2013] found that ABM altered an earlier component of attention (P1). This implies that ABM may modify attention through different processes in anxious versus nonanxious individuals.
Other physiological measures have also shown change after ABM. For example, Heeren et al., in a modification of ABM, found that individuals with social phobia who trained attention toward nonthreatening stimuli (i.e., happy faces) showed a reduced skin conductance response to stress after training. Similarly, Dandeneau et al. found that individuals exhibited decreased cortisol release during a stressful task after ABM. Our preliminary data also suggest that ABM may decrease levels of right frontal EEG asymmetry in behaviorally inhibited children. Together, this corpus of evidence suggests that ABM modifies not only the implicit attentional bias toward threat, but also physiological and neurobiological aspects of attention, attentional control, and emotional responses to threatening stimuli.
Basic and clinical research on the modification of attention biases has yielded exciting results, with the promise of adding a low-cost, low-risk tool to the anxiety treatment toolbox. However, this research is in its infancy and pressing questions remain. While some work has begun to dig into the basic mechanisms underlying the seeming success of ABM [Rothbart et al., 2011; Shechner et al., 2012], it is crucial to further explore the psychological and biological mechanisms involved in order to fully understand the impacts, appropriate use cases, and potential side effects of ABM.
As it is, a long series of questions remain unanswered. They include the following: What are the mechanisms by which ABM may act to impact anxiety? How long do the effects of ABM persist? How broad a ‘reach’ does ABM have in impacting the deep-seated, problematic behavioral patterns that are characteristic of clinical anxiety? Can temperamentally at-risk youths be targeted for intervention, using ABM as an inoculation tool? Finally, how do the effects of ABM interact with the normative developmental trajectories of attention?
Our laboratory is one of many now working to address these questions, examining the impact of attention biases and ABM on the cognitive, behavioral, and neural correlates of anxiety and temperamental risk. The ability to deploy attention is one of the earliest emerging self-regulatory tools available to children [Posner, Rothbart, Sheese & Voelker, 2012]. The reviewed studies suggest that attention biases play an early developmental role in modulating the risk for anxiety. Thus, future studies will need to focus systematically on the ways in which attention mechanisms and socioemotional behavior develop hand-in-hand over the course of childhood. The information gained may increase our ability to intervene at the earliest signs of vulnerability by targeting one of the most pervasive functional mechanisms of risk.
Support for manuscript preparation was provided by grants from the National Institutes of Health (MH# 094633) and The Pennsylvania State University Social Science Research Institute (Level II Grant) to Koraly Pérez-Edgar.