Eur J Neurosci. Author manuscript; available in PMC 2017 April 1.
Published online 2016 February 22. doi: 10.1111/ejn.13180
PMCID: PMC4821707; NIHMSID: NIHMS753143

Extent and Time Course of Competition in Visual Cortex between Emotionally Arousing Distractors and a Concurrent Task

Abstract

Emotionally arousing cues automatically attract attentional resources, which may come at the cost of processing task-related information. Of central importance is how the visual system resolves competition for processing resources among stimuli differing in motivational salience. Here, we assessed the extent and time course of competition between emotionally arousing distractors and task-related stimuli in a frequency-tagging paradigm. Steady-state visual evoked potentials (ssVEPs) were evoked using random-dot kinematograms that consisted of rapidly flickering (8.57 Hz) dots, superimposed upon emotional or neutral distractor pictures flickering at 12 Hz. The time-varying amplitude of the ssVEP evoked by the task-relevant stream was significantly reduced while emotionally arousing pictures were presented as distractors. Competition between emotionally arousing pictures and moving dots began 450 ms post-picture onset and persisted for an additional 2600 ms. Competitive effects between the overlapping task and picture streams included cost effects for the motion detection task when unpleasant pictures were presented as distractors between 450 and 1650 ms post-picture onset, where an increase in ssVEP amplitude to the flickering picture stimulus came at the cost of ssVEP amplitude to the flickering dot stimulus. Cost effects generalized to all emotionally arousing contents between 1850 and 3050 ms post-picture onset, where the greatest amount of competition was evident for conditions in which emotionally arousing pictures, compared to neutral pictures, served as distractors. In sum, the processing capacity of the visual system as measured by ssVEPs is limited, resulting in prioritized processing of emotionally relevant cues.

Keywords: ssVEP, emotion, attention, competition, distraction

Introduction

Large-scale neural responses to visual scenes vary with the motivational relevance of the scene or of scene elements. Specifically, scenes evaluated as emotionally engaging prompt greater visual cortical responses as indexed by hemodynamic or electrophysiological measures (Bradley et al., 2003; Keil et al., 2003; Sabatinelli et al., 2007). Conversely, emotionally engaging scenes are powerful distractors, shown to diminish the visuo-cortical response magnitude elicited by a primary task stimulus when shown as a task-irrelevant background (Müller et al., 2008; Deweese et al., 2014). Together, these findings may suggest a pool of limited capacity, in which cortical facilitation of a motivationally relevant stimulus leads to an equivalent decrease in the response elicited by concurrent stimuli. Alternatively, facilitation of emotional distractors and concurrent suppression of task items may not draw from a fixed pool, and thus may not be linearly dependent.

The present study examined these competing hypotheses by quantifying the extent to which the suppressive effect of emotional distractors on a task-evoked visual response is accompanied by equivalent response enhancement for the distractors. Luminance-modulated steady-state visual evoked potentials (ssVEPs) were used because they offer a unique window into separately measuring the cortical engagement associated with concurrently presented stimuli (Norcia et al., 2015). For example, two fully overlapping stimuli flickering at different temporal rates evoke different electrophysiological response trains that can be separated as two distinct, narrow peaks in the frequency spectrum of the electroencephalogram (EEG) recordings (Appelbaum et al., 2006).

It is well established that the amplitude of the ssVEP is heightened when elicited by emotionally engaging stimuli (Keil et al., 2003) or by selective attention (Müller et al., 1998; Keil et al., 2005). It has also been shown that ssVEP amplitude is reduced for stimuli competing with such task-relevant (Müller & Hübner, 2002; Wang et al., 2007) or emotionally engaging stimuli (Müller et al., 2008; Hindi Attar et al., 2010; Hindi Attar & Müller, 2012). The extent to which this reduction reflects a trade-off between task and distractor stimuli has been debated, however: Chen et al. (2003) measured the electromagnetic equivalent of ssVEPs evoked by two superimposed images, and observed both trade-off and non-trade-off patterns, depending on the tagging frequency. Conversely, Keitel et al. (2010) used non-overlapping concentric rings flickering at different rates, and reported that evidence of competition between adjacent stimuli was not limited to a specific range or combination of frequencies. Arguing against a linear trade-off between emotional distractors and cognitive tasks, manipulation of foreground task load does not affect the competition effects exerted by affective pictures (Hindi Attar & Müller, 2012).

Here, we quantified the competition between emotionally arousing distractors and a fully overlapping foreground task using frequency-tagging of both distractors and task stimuli. This approach also allowed for an in-depth analysis of the time course of competition for visual processing resources. In line with the research reviewed above, we expected an amplitude reduction of the ssVEP evoked by the task stimulus when participants concurrently viewed emotionally arousing, compared to neutral, distractors. A trade-off account of emotional distraction would be supported if this amplitude decrease in the task-ssVEP were accompanied by a reciprocal increase in the distractor-ssVEP, selectively for emotionally engaging pictures.

Methods

Participants

Twenty-five female right-handed undergraduate students at the University of Florida provided written consent following the guidelines proposed by the University of Florida's Institutional Review Board and received either course credit or compensation (20 USD) for their participation. All procedures conformed to the World Medical Association Declaration of Helsinki. The data from twenty-three participants (age range: 18-27, mean age: 19.2) with normal or corrected-to-normal vision and no history of photic epilepsy were included in the final EEG analysis; two subjects were excluded because of poor data quality, attributed to excessive movements and/or unsatisfactory signal quality as determined by the circular T-square statistic (Mast & Victor, 1991; see EEG reduction and analyses). For inclusion in the final behavioral analysis, participants were required to perform the coherent motion detection task at a minimal level of 60% or better (average: 75.9% correct, range: 61.1-94.6% correct), consistent with accuracy thresholds used in similar paradigms (Müller et al., 2008; Hindi Attar et al., 2010). Thus, the data from 20 participants were included in the final behavioral analysis. Due to incomplete responses from four of the 23 participants, stimulus-rating data were included for 19 participants.

Stimuli

Pictures

Forty pleasant (erotic couples, kittens), 40 neutral (people at work, cows) and 40 unpleasant (mutilated human bodies, snakes) pictures were selected from the International Affective Picture System (IAPS; Lang et al., 1997)1, based on normative valence and arousal ratings collected with the Self-Assessment Manikin (SAM; Bradley & Lang, 1994) 9-point scale, and from the Emotional Picture Set (EmoPicS; Wessa et al., 2010)2. When necessary, additional images were selected from the public domain to complete balanced picture categories. For all IAPS images, mean valence and arousal ratings were 6.28 and 4.15 for neutral pictures, 7.01 and 5.84 for pleasant pictures, and 2.84 and 5.9 for unpleasant pictures, respectively. For all EmoPicS images, mean valence and arousal ratings were 5.19 and 3.03 for neutral pictures and 3.29 and 5.84 for unpleasant pictures, respectively. All 120 stimuli were used previously by our laboratory (Deweese et al., 2014), were controlled for visual complexity (measured as JPEG file size), and were matched for luminance using scripts from the MATLAB® Image Processing Toolbox. To control for brightness, RGB images were converted to grayscale by eliminating the hue and saturation information while retaining luminance. Mean luminance was calculated by averaging the luminance values of each grayscale image and was standardized across all stimuli using an in-house MATLAB® script. Luminance was then measured for the entire screen using a Gossen (Nürnberg, Germany) luminance meter and was 80.02 cd/m2 on average. To minimize any effects of physical differences between stimulus categories, all pictures were subjected to a spatial Fourier transform using MATLAB® scripts, resulting in a spectrum of contrast energy for each (contrast-normalized) picture. Picture spectra were then compared using paired t-tests, with pictures in each category as observations and category as the factor, separately for each spatial frequency bin. This procedure demonstrated that there was no systematic difference between picture categories (all ts < 1.8) after the luminance and contrast normalization.
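
A minimal sketch of this kind of grayscale conversion and mean-luminance matching is shown below; the original in-house scripts were not published, and the folder names and target gray level are assumptions.

```matlab
% Sketch: convert RGB stimuli to grayscale and shift each image to a common
% mean luminance. Folder names and the target level are illustrative only.
files      = dir(fullfile('stimuli', '*.jpg'));
targetMean = 0.5;                               % common mean gray level (0-1), assumed
for k = 1:numel(files)
    img  = imread(fullfile('stimuli', files(k).name));
    gray = im2double(rgb2gray(img));            % drop hue/saturation, keep luminance
    gray = gray - mean(gray(:)) + targetMean;   % shift to the common mean
    gray = min(max(gray, 0), 1);                % clip to the valid range
    imwrite(gray, fullfile('stimuli_norm', files(k).name));
end
```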

Picture stimuli were cropped and adjusted such that the defining element of each picture was presented in the center of a circular aperture, and surrounded by a solid black background (see Figure 1). To avoid contamination of the ssVEP with transient responses to the luminance gradient created by stimulus onset, a scrambled version of each image was generated in MATLAB® by randomly relocating pixel values. This procedure removed all content-related information while retaining the exact luminance value of each stimulus.
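
A minimal sketch of the pixel-scrambling step described above (the file name is hypothetical):

```matlab
% Sketch: create a scrambled version of an image by randomly relocating pixel
% values, destroying content while preserving the luminance distribution.
gray      = im2double(rgb2gray(imread('example_stimulus.jpg')));  % hypothetical file
idx       = randperm(numel(gray));               % random permutation of pixel positions
scrambled = reshape(gray(idx), size(gray));      % same pixels, random locations
% mean(scrambled(:)) equals mean(gray(:)), so overall luminance is unchanged.
```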

Figure 1
Time sequence for a single trial: intervals of coherent motion could occur between 2170-8000 ms (target window, solid black line). Mean target detection rates were calculated for early (580-3284 ms post-dot onset; 2580-5284 ms) and late (3290-6000 ms ...

The driving frequencies of the background stimulus (12 Hz) and flickering dots (8.57 Hz) were different to ensure distinct frequency tagging of each stimulus stream, and stimuli were overlapping to maximize competition effects (Desimone, 1998). Picture stimuli were “on” for 6 frames and “off” for 4 frames.
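
The two tagging rates follow directly from the on/off frame counts at the 120 Hz monitor refresh rate reported under Procedure (6 on + 4 off = 10 frames per picture cycle; 6 on + 8 off = 14 frames per dot cycle, as described under Dots):

```matlab
% Frame arithmetic behind the two tagging frequencies at a 120 Hz refresh rate.
refresh  = 120;                        % monitor refresh rate (frames per second)
picCycle = [ones(1,6), zeros(1,4)];    % picture: 6 frames on, 4 off -> 120/10 = 12 Hz
dotCycle = [ones(1,6), zeros(1,8)];    % dots:    6 frames on, 8 off -> 120/14 ~= 8.57 Hz
fprintf('picture tag: %.2f Hz\n', refresh / numel(picCycle));
fprintf('dot tag:     %.2f Hz\n',  refresh / numel(dotCycle));
```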

Dots

A total of 150 yellow dots (each 0.3 × 0.3 degrees of visual angle) were superimposed upon the grayscale pictures (see above), which subtended a viewing angle of 6.9° at a viewing distance of 170 cm. Each dot had a luminance of 104.6 cd/m2, as measured with a Gossen luminance meter. The overlapping flickering dots (8.57 Hz) were directly superimposed upon the background distractors in a homogeneous but random fashion, and remained inside the circular aperture described above (6.9° visual angle) at all times. The yellow dots were “on” for 6 frames and “off” for 8 frames. Thus, the ssVEP signals in response to the pictures and the dots were driven by luminance changes. Because we used an LED-backlit LCD monitor, we measured the regularity and timing of the luminance modulation (flicker) using two methods: First, the dynamics of the intended picture brightness levels were quantified as the total RGB energy of the texture passed to the graphics card (picture plus dots) for each retrace, and saved to disk. Second, we measured luminance changes in front of the monitor with a Thorlabs amplified light detector sampled at 500 Hz. These measurements indicated that luminance modulation was regular, with sharp onsets and offsets within 2 ms (half-luminance point), as stated in the specifications of the Samsung LED-backlit monitor used here. All dots remained in continuous motion throughout the trial, and each dot changed its position by 0.04 degrees in a random direction with every ssVEP cycle (i.e., 8.57 times/sec). In a random subset of 50% of the trials, 100% of the dots moved coherently in the same direction (target). Coherent motion of the targets occurred in one of four diagonal directions (45°, 135°, 225°, 315°) at random. To produce a difficult and demanding detection task, coherent motion lasted for only 4 successive cycles of 8.57 Hz (i.e., 466.64 ms). Targets occurred unpredictably once (in 58 of the 120 trials) or twice (in 4 of the 120 trials) in a given trial, with the remaining 58 trials consisting of random dot movement only. The four double-target trials were included to ensure that attention was directed to the task for the entire duration of the trial, and to reduce potential artifacts resulting from excessive eye movements following early presentation of a target event.
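
A simplified sketch of this dot-motion logic follows; the number of cycles, target timing, and variable names are illustrative, and clipping to the circular aperture is omitted.

```matlab
% Sketch: per-cycle dot updates with one 4-cycle coherent-motion target.
nDots       = 150;
step        = 0.04;                              % displacement per 8.57 Hz cycle (degrees)
nCycles     = 55;                                % cycles per trial, illustrative
pos         = (rand(nDots, 2) - 0.5) * 6.9;      % start positions within the aperture
targetDirs  = [45 135 225 315];                  % possible coherent directions (degrees)
targetStart = 20;                                % hypothetical cycle at which the target begins
targetTheta = deg2rad(targetDirs(randi(4)));     % coherent direction, chosen once
for c = 1:nCycles
    if c >= targetStart && c < targetStart + 4   % 4 consecutive cycles of coherent motion
        theta = repmat(targetTheta, nDots, 1);   % all dots share one direction
    else
        theta = rand(nDots, 1) * 2 * pi;         % independent random directions
    end
    pos = pos + step * [cos(theta), sin(theta)]; % update dot positions
end
```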

Procedure

Participants were seated in a sound-attenuated, dimly lit, electrically shielded chamber in which the sensor net (see EEG recording) was applied. Stimuli were presented centrally on a 23-inch Samsung SyncMaster SA950 LED monitor, set at a resolution of 1680 × 1050 with a refresh rate of 120 frames per second (i.e., 8.33 ms refresh interval).

Each trial began with a 1-second presentation of an image with individual pixel locations scrambled. Next, a total of 150 yellow dots (each 0.3 × 0.3 degrees of visual angle) flickering at 8.57 Hz were superimposed upon the scrambled image for 1750 ms. The scrambled background picture was then replaced by either a pleasant, neutral, or unpleasant picture (12 Hz), which remained on the screen for the duration of the trial (6416 ms; Figure 1). The first possible coherent motion event was at 2170 ms (i.e., 10 cycles) after onset of the flickering dots, and the last coherent motion event was at 8000 ms (i.e., 60 cycles). At the end of each trial, participants were instructed to report the number of coherent motion events occurring during that trial by typing the number “0”, “1”, or “2” on a standard keyboard. Each trial lasted for 10,166 ms, with inter-stimulus intervals randomly varying between 3000 and 5000 ms. Fixation was facilitated by presenting a white fixation dot at the center of the screen.

Prior to the experiment, all participants performed 15 practice trials to become familiar with the stimulation and task. In the training session, 8 of the trials contained a target (coherent motion of the flickering dots), with one of those targets being a double target. Following the experiment, participants rated each of the 120 affective picture stimuli used in the experiment in pseudo-randomized order on the dimensions of affective valence and arousal, using a paper and pencil version of the SAM.

EEG recording

Electrophysiological data were collected from the scalp using a 257-sensor net (EGI, Eugene, OR). Scalp impedance for each sensor was kept below 60 kΩ, as recommended for this high-input-impedance amplifier (200 MΩ input impedance; see Keil et al., 2014). The EEG was collected continuously at a sampling rate of 250 Hz (16-bit resolution) and was band-pass filtered online in the 0.1–90 Hz frequency range using a hardware elliptical filter. The vertex electrode (Cz) was used as the recording reference. Further processing and filtering were performed offline.

EEG reduction and analyses

Continuous data were low-pass filtered offline at a frequency (3 dB point) of 40 Hz (12th-order Butterworth filter with 24 dB/octave roll-off, implemented in MATLAB®) prior to segmenting. Single epochs of 10,500 ms in length (400 ms pre- and 10,100 ms post-dot onset) were then extracted from the continuous EEG signal. Using the artifact rejection procedure proposed by Junghöfer et al. (2000), trials with artifacts were identified based on the distribution of statistical parameters of the EEG epochs (absolute value, standard deviation, maximum of the differences), computed across time points and channels. Sensors contaminated with artifacts were replaced by statistically weighted, spherical-spline-interpolated values, and a maximum of 25 channels was set for interpolation. Trials with spatially concentrated bad sensors were excluded as well, as these would invalidate interpolation of the approximated sensors (see Junghöfer et al., 2000, for a more detailed description). As a result, each of the three picture conditions retained an average of 28 trials (SD = 0.25), which did not differ by condition (p > 0.36). To ensure satisfactory signal quality, we submitted each participant's data to the circular T-square statistic (Mast & Victor, 1991), which formally tests the temporal stability of the entrained brain signal at a given driving frequency. To this end, the entire ssVEP viewing epoch for each experimental condition was segmented into non-overlapping epochs containing 4 cycles each, which were then submitted to the circular T-square algorithm. This algorithm tests for the presence of an evoked signal at the frequency of interest, taking both phase and amplitude information into account. All participants included in the EEG analysis showed reliable evoked oscillations at the driving frequency (defined as p < 0.05 for the chi-square-distributed circular T-square at site Oz and its nearest neighbors). This suggests satisfactory signal-to-noise ratios with the trial counts available in this experiment.
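
A hedged sketch of this signal-quality check follows. Here `ozSignal` is an assumed 1 × N row vector holding the Oz time series for one condition; the exact scaling and reference distribution of the statistic should be taken from Mast & Victor (1991) rather than from this sketch.

```matlab
% Sketch: stability of the evoked response at the driving frequency across
% short epochs, in the spirit of the circular T-square statistic.
fs       = 250;                                   % EEG sampling rate (Hz)
f0       = 8.57;                                  % driving frequency of interest
epochLen = round(4 * fs / f0);                    % ~4 tagging cycles per epoch (samples)
nEpochs  = floor(numel(ozSignal) / epochLen);
z = zeros(nEpochs, 1);
for k = 1:nEpochs
    seg      = ozSignal((k-1)*epochLen + (1:epochLen));
    spec     = fft(seg);                          % Fourier transform of the epoch
    freqs    = (0:epochLen-1) * fs / epochLen;
    [~, bin] = min(abs(freqs - f0));
    z(k)     = spec(bin);                         % complex coefficient near f0
end
zbar    = mean(z);
tsqCirc = (nEpochs - 1) * abs(zbar)^2 / sum(abs(z - zbar).^2);  % Mast & Victor-style ratio
% Larger values indicate a phase-stable evoked signal at f0 relative to noise.
```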

Steady-state visual evoked potential analyses

Artifact-free epochs of the voltage data were averaged for each of the three picture distractor conditions. The time-varying amplitude at the stimulation frequencies of 8.57 Hz (dots) and 12 Hz (pictures) was extracted by means of a Hilbert transformation of the time-domain averaged data, using in-house MATLAB® scripts: Data were filtered with a 10th-order Butterworth band-pass filter with a width of 0.5 Hz around the center frequencies of 8.57 Hz and 12 Hz for the dot and picture streams, respectively. The time-varying amplitude was then extracted, at each time point, as the magnitude of the analytic signal formed by the band-pass filtered signal and its Hilbert transform. For visualization purposes, the data were then temporally smoothed by applying a linear moving average with a window length of 400 ms to the data between 2000-10,100 ms (see Figure 1). In this step, it was ensured that no sharp transients were present, as these would affect the Hilbert transform and thus the estimation of the ssVEP amplitude time course.
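
A minimal sketch of this amplitude-envelope extraction is given below. Variable names are assumptions, and a lower filter order than the 10th-order filter reported above is used here so that the very narrow-band IIR filter stays numerically well behaved in a simple example.

```matlab
% Sketch: band-pass the averaged signal around a tagging frequency, take the
% magnitude of the analytic signal, and smooth with a 400 ms moving average.
fs     = 250;                                        % sampling rate (Hz)
f0     = 8.57;                                       % tagging frequency (dot stream)
[b, a] = butter(4, [f0 - 0.25, f0 + 0.25] / (fs/2), 'bandpass');  % 0.5 Hz wide band
filtered  = filtfilt(b, a, avgERP);                  % avgERP: assumed 1 x N averaged signal
envelope  = abs(hilbert(filtered));                  % time-varying ssVEP amplitude
envSmooth = movmean(envelope, round(0.4 * fs));      % 400 ms moving average (visualization)
```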

Statistical Analyses: Behavioral data and SAM ratings

The percentage of correctly identified targets (hits and correct rejections) was calculated for each distractor condition and participant. Hits were defined as correctly reporting the number of coherent motion events in a given trial, whereas correct rejections were defined as correctly reporting that no coherent motion event had occurred. Mean target detection rates were calculated for early (580-3284 ms post-dot onset) and late (3290-6000 ms post-dot onset) time windows to assess changes in behavioral performance during early versus late periods of the picture-viewing epoch. Only trials in which a coherent motion event occurred during the simultaneous presentation of a background distractor were included in the early-late analysis; targets occurring during the scrambled period were therefore omitted (see Figure 1). Because presentation of the 4 double targets occurred at random (including during the scrambled picture) and varied by participant, these targets were excluded from the behavioral analysis. Behavioral performance was evaluated by means of an omnibus repeated-measures analysis of variance (ANOVA) with distractor (pleasant, neutral, unpleasant) and time (early, late) as factors. To determine the influence of early versus late presentation of coherent motion events on participant accuracy, percentile bootstrap confidence intervals were calculated for the difference in behavioral performance (i.e., percent correct) between early and late target events, separately for pleasant, neutral, and unpleasant distractors (e.g., PLEearly minus PLElate). SAM ratings were averaged across participants for each of the three picture conditions and were tested using a Friedman test, separately for valence and arousal. Follow-up pairwise comparisons were computed using the Wilcoxon test.
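
A minimal sketch of the percentile bootstrap used for the early-versus-late accuracy comparison, for one distractor condition at a time; variable names are hypothetical.

```matlab
% Sketch: 95% percentile bootstrap CI for the mean early-minus-late difference
% in percent correct, within one distractor condition.
diffs     = accEarly - accLate;          % per-participant difference (assumed vectors)
nBoot     = 1000;                        % number of bootstrap samples, as reported
bootMeans = zeros(nBoot, 1);
for b = 1:nBoot
    idx          = randi(numel(diffs), numel(diffs), 1);  % resample participants with replacement
    bootMeans(b) = mean(diffs(idx));
end
ci = prctile(bootMeans, [2.5 97.5]);     % percentile 95% confidence interval
```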

Statistical analysis: ssVEP time course

In line with previous work (Müller et al., 2008; Hindi Attar et al., 2010; Deweese et al., 2014), electrode Oz exhibited the maximum amplitude for the majority of subjects. Thus, ssVEP amplitudes were averaged within an occipital cluster of electrodes selected on the basis of individual amplitude maxima for the flickering dot (range = 0.55-0.60 μV) and picture (range = 0.40-0.45 μV) conditions. This criterion resulted in an occipital cluster comprising Oz and its 12 nearest neighbors (EGI sensors: 117, 118, 125, 126, 127, 136, 137, 138, 139, 147, 148, 149, 150), from which occipital regional mean amplitudes were computed (Figure 2).

Figure 2
Grand mean steady-state visual evoked potential averaged across all participants and conditions, recorded from electrode Oz. A reliable peak for each of the overlapping flickering stimulus conditions can be seen for the dots and picture stimuli at 8.57 ...

To evaluate modulation of the ssVEP signal across time, ssVEP amplitude was deviated from a baseline period (3000-3600 ms) in three time windows (T1: 4200-5400 ms, T2: 5600-6800 ms, T3: 7000-8800 ms; see Figure 1). The baseline period was selected as the 600 ms preceding picture onset (i.e., during the flickering dots superimposed on the scrambled image) in order to avoid the onset ERP elicited by the background distractor (Hindi Attar et al., 2010; Figure 1). The three time windows used for this analysis were selected based on visual inspection of the data (Figure 3) and to capture the temporal dynamics of ssVEP amplitude modulation across the viewing epoch. The first 1200 ms time window (T1) was selected to capture immediate effects related to picture onset (expected based on previous research, e.g., Müller et al., 2008; Deweese et al., 2014), and was followed by two additional time windows selected to sample any effects related to sustained picture viewing.

Figure 3
A) Grand-averaged Hilbert transformed data for all conditions (pleasant, neutral, unpleasant) for the flickering dot (left) and picture (right) stimuli, averaged over a cluster of occipito-parietal electrodes. Gray boxes indicate the time windows of interest, ...

Coherent motion detection (dot) task

Initial analyses were conducted for each of the flickering stimuli in isolation to assess ssVEP amplitude modulation of the task-relevant dot stimulus and the task-irrelevant picture stimulus with respect to the hedonic content of the distractor. For the flickering dot task, a repeated measures ANOVA was conducted with distractor (pleasant, neutral, unpleasant) and time (baseline, T1, T2, T3) as factors.

Picture distractors

A repeated measures ANOVA was conducted for the flickering picture stimulus with distractor (pleasant, neutral, unpleasant) and time (baseline, T1, T2, T3) as factors. Paired t-tests were used to follow up significant interactions.

Competition analysis

To quantify neural competition between the flickering dot and picture stimuli, a competition index was calculated that represented the difference in ssVEP power measured for the flickering dot task and picture distractor. The mean ssVEP amplitudes were averaged for each distractor content and each time point, and were deviated from the baseline separately for the driving frequency associated with the dot task and the distractor picture. For the coherent motion task, this measure was typically negative (i.e. less power compared to when no picture distractor was present); for the picture distractor, this measure was typically positive (i.e. more power than for the scrambled [baseline] picture). Thus, by adding the absolute values of these two measures, this competition index estimates the total difference in steady state power between the flickering dot task and picture distractors. Refer to Table 1 for the average ssVEP amplitude values used to calculate competition index.
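
A minimal sketch of the competition-index computation for one condition and one time window. Variable names are hypothetical: `dotAmp` and `picAmp` stand for the Hilbert amplitude time courses of the two tagged streams, and `baseWin` and `tWin` are sample-index ranges for the baseline and the time window of interest.

```matlab
% Sketch: baseline-referenced amplitude change for each tagged stream, then the
% summed magnitudes as the competition index.
dotChange = mean(dotAmp(tWin)) - mean(dotAmp(baseWin));   % typically negative (task suppression)
picChange = mean(picAmp(tWin)) - mean(picAmp(baseWin));   % typically positive (distractor gain)
competitionIndex = abs(dotChange) + abs(picChange);       % total divergence between the streams
```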

Table 1
Average ssVEP amplitude values used to calculate competition index.

These measures were analyzed in a repeated measures ANOVA with factors of distractor (pleasant, neutral, unpleasant) and time window (T1, T2, T3). Significant effects were followed by repeated measures ANOVAs retaining the factor of distractor, conducted separately for each time window. Effects in those models were followed by paired-samples t-tests.

For all effects involving repeated measures, the Huynh-Feldt procedure was used to correct for violations of sphericity.

Results

Behavioral data & SAM ratings

Interference by emotionally arousing background pictures with the coherent motion detection task was reflected in the behavioral performance data, illustrated in Figure 4. Behavioral performance was significantly worse for early versus late occurrences of coherent motion events (targets) within a given trial (Time, F(1,19) = 13.31, p = 0.002). To determine the influence of early versus late presentation of coherent motion events on participant accuracy with respect to the background distractor, we calculated a bootstrap confidence interval separately for pleasant, neutral, and unpleasant distractors (e.g., PLEearly minus PLElate). The results suggest that behavioral performance was worse for early versus late presentation of coherent motion events when pleasant and neutral images served as distractors (95% CIPLE [−24.69, −4.19], 95% CINEU [−22.92, −5.04]), whereas behavioral performance did not differ between early and late target events when unpleasant images served as distractors (95% CIUNP [−15.75, 1.37]). These findings suggest that unpleasant distractors consistently interfered with task performance throughout the duration of the trial. Bootstrap results are based on 1000 bootstrap samples.

Figure 4
A) Behavioral performance is represented as percent correct for early (2580-5284 ms) and late (5290-8000 ms) coherent motion events occurring during a given trial for pleasant, neutral and unpleasant distractors. Percent correct data are superimposed ...

A Friedman test was conducted to evaluate differences in self-reported valence and arousal (using the 9-point SAM scale) for pleasant (mean valence: 6.2, arousal: 3.6), neutral (mean valence: 5.1, arousal: 1.5), and unpleasant (mean valence: 3.4, arousal: 4.3) distractors. Both Friedman tests were significant, for valence (χ2(2, 19) = 24.56, p < 0.0001) and arousal (χ2(2, 19) = 35.04, p < 0.0001), indicating differences in self-reported stimulus ratings on the dimensions of valence and arousal. Follow-up pairwise comparisons were conducted using Wilcoxon tests, controlling Type I error across these comparisons at the 0.05 level using the LSD procedure. Participants rated unpleasant (p < 0.0001) and pleasant (p < 0.0001) distractors as significantly more arousing than neutral distractors (Figure 5). There was no significant difference in arousal ratings between pleasant and unpleasant distractors (p = 0.136). Valence ratings differed significantly between each pair of distractor contents (p < 0.001 [neutral vs. pleasant], p < 0.0001 [neutral vs. unpleasant], and p < 0.0001 [pleasant vs. unpleasant]; Figure 5).

Figure 5
Mean ratings of self-reported valence and arousal for pleasant, neutral, and unpleasant images. Mean valence and arousal ratings are superimposed on each picture category, by subject.

Steady-state visual evoked potentials

In all participants, the flickering dot task and picture distractors evoked steady-state responses at the expected frequencies of 8.57 Hz and 12 Hz, respectively, with the greatest overall ssVEP amplitudes across all experimental conditions occurring for sensor Oz and its nearest neighbors (Figure 2). The grand mean time-varying energy of the signal over occipital sensors as quantified by the Hilbert transform is shown in Figure 3 (A) for both the dot and the picture stimuli.

Coherent motion detection (dot) task

The task-relevant flickering dot stream was affected by the hedonic content of the distractor stimulus (Distractor Content, F(2,44) = 3.507, p = 0.039), with a significant quadratic trend indicating that emotionally arousing distractors (both pleasant and unpleasant) resulted in a reduction in ssVEP amplitude to the flickering dots (F(1,22) = 7.926, p = 0.01), compared to neutral distractors. The ssVEP amplitude of the flickering dot stimulus was also affected by time (F(3,66) = 5.373, p = 0.012), with an overall reduction in ssVEP amplitude during the picture-viewing interval, compared to the baseline segment. These results are illustrated in panel A of Figure 3 (left).

Picture distractors

The ssVEP amplitude of the task-irrelevant flickering picture stream increased reliably across time (F(3,66) = 6.263, p = 0.01) and was modulated by the content of the flickering distractor (Time × Distractor, F(6,132) = 3.233, p = 0.005). This interaction was driven by significantly greater ssVEP amplitude to unpleasant distractors compared to pleasant distractors (t22 = 3.312, p = 0.003, 95% CI [0.018, 0.078]) in the first time window (T1, 4200-5400 ms). No other comparison reached significance (ts ≤ 1.832, ps ≥ 0.081). These results are illustrated in panel A of Figure 3 (right).

Competition analysis

A repeated measures ANOVA assessed differences in ssVEP amplitude relative to baseline. Figure 3 illustrates that, when the distractor picture was present, ssVEP amplitude decreased for the flickering dot task (left panel) and increased for the flickering picture distractor (right panel; Tagged Stimulus, F(1,22) = 5.3, p = 0.031), demonstrating the expected suppression of ssVEP amplitude for the motion detection task and reciprocal enhancement of ssVEP amplitude to the flickering distractor, compared to baseline (scrambled image). A three-way interaction of tagged stimulus (dots, pictures), distractor content, and time (F(6,132) = 2.385, p = 0.032) indicated that ssVEP amplitude varied as a function of both distractor type and tagged stimulus across the presentation interval.

The three-way interaction was therefore followed by analysis of the competition index, which quantified the visual cortical activity evoked by each of the two concurrent flickering stimuli (see Table 1). The ANOVA confirmed main effects of distractor content (F(2,44) = 6.366, p = 0.004) and time (F(2,22) = 121.168, p < 0.0001). Follow-up ANOVAs revealed a main effect of distractor content in both the T1 (F(2,44) = 4.839, p = 0.016) and T2 (F(2,44) = 4.932, p = 0.012) time windows (Figure 6); there was no difference at T3 (F < 1.86, p = 0.169). In T1, the main effect of distractor was due to greater competition for unpleasant (t22 = 3.130, p = 0.005, 95% CI [0.031, 0.155]) compared to neutral distractors (pleasant did not differ from neutral, t < 1.5, p = 0.15). The greatest amount of competition was observed in T2, where both pleasant (t22 = 2.675, p = 0.014, 95% CI [−0.182, −0.144]) and unpleasant (t22 = 2.612, p = 0.016, 95% CI [0.015, 0.129]) distractors resulted in greater competition between the two flickering stimuli, compared to neutral distractors. As illustrated in Figure 6, these competition effects, which are largest when a reduction in ssVEP amplitude to the flickering dot stimulus is accompanied by an increase in ssVEP amplitude to the distracting picture stimulus, suggest a trade-off of attentional processing resources that changes across time and is modulated by emotional content.

Figure 6
Mean competition index values for the three distractor contents (pleasant, neutral, unpleasant), plotted separately for each time window (T1: 4200-5400 ms, T2: 5600-6800 ms, T3: 7000-8800 ms), by subject. The group mean for each distractor content is ...

Discussion

The present study was designed to investigate the extent to which a reduction in the task-evoked ssVEP signal when emotionally arousing stimuli serve as distractors is accompanied by a reciprocal electrocortical enhancement of the emotional stimulus, as afforded by frequency-tagging. Of central importance is how the visual system resolves competition for processing resources among stimuli differing in motivational salience, and in particular, the ways in which emotionally arousing stimuli may interfere with competing cues. Here, the term ‘processing resources’ refers to the engagement of macroscopic populations of visual neurons: thus, in discussing competition for attentional ‘resources’, a metaphorical currency is exchanged for a biological one (see also Franconeri et al., 2013). The pattern of ssVEP amplitude modulation observed here is consistent with a recent study from our laboratory (Deweese et al., 2014), among others (e.g., Müller et al., 2008; Hindi Attar et al., 2010; Hindi Attar & Müller, 2012), in which a reduction in ssVEP amplitude to the task-relevant stream was observed while emotionally arousing pictures were presented as distractors. The current paradigm employed the frequency-tagging technique, which allowed a unique frequency to be assigned to both the foreground motion detection task and the background distractor, and thus enabled the quantification of neural competition between the two stimuli in visual cortex. In line with our hypothesis, we observed competitive effects of attention related to the processing of two concurrently presented stimuli, although these effects were not linear.

In the literature, interference by emotionally arousing distractors has been observed to be relatively early but brief (500-1000 ms; Müller et al., 2008; Hindi Attar et al., 2010). The extended viewing epoch used in this study afforded the additional advantage of observing and quantifying not only early, but also sustained, differences in the temporal patterns associated with competition between concurrently presented stimuli. Initial competition effects between emotionally arousing distractors and the motion detection task began 450 ms post-picture onset and persisted for an additional 2600 ms. This temporal pattern is consistent with the findings of Wieser et al. (2012), who, in a similar paradigm, observed a sustained perceptual bias toward threatening faces in observers with high social anxiety. In line with work in healthy observers (Müller et al., 2008; Hindi Attar et al., 2010), interference by emotionally arousing distractors was also reflected here in poorer behavioral performance for detecting coherent motion events occurring earlier versus later in the trial sequence.

An interesting aspect of these data is that competition between the motion detection task and background distractors was not linear: Competition effects showed a different time course than either the ssVEP for the background distractors or the ssVEP for the task stimulus alone. Figure 3 illustrates the initial reduction in ssVEP amplitude to the flickering dot task, which began approximately 450 ms post-picture onset. The increase in ssVEP amplitude evoked by the background distractors was slower to develop, reaching a maximum and sustained amplitude in the second time window (Figure 3). Initial effects of competition were observed only with unpleasant distractors. This is consistent with a range of conceptual and empirical studies: Rapid and enhanced processing of threat cues, relative to pleasant and neutral cues, is a hallmark of evolutionary perspectives on the processing of survival-relevant stimuli (e.g., Lang et al., 1997; Öhman et al., 2001). In a recent study, Wieser and Keil (2013) used frequency-tagging to examine the interaction between facial expressions and surrounding contextual information in healthy, non-anxious observers. The authors found that the presence of a fearful face was associated with enhanced ssVEP amplitude to threatening versus pleasant peripheral background images, supporting the idea that threat-cue detection may result in increased arousal and heightened environmental awareness (Phelps & LeDoux, 2005). Thus, evolutionarily old fear circuits in the medial temporal lobe may contribute to amplification of threat-relevant signals in lower-tier visual cortex. Such amplification and prioritization of threat cues may come at the cost of task processing (Miskovic & Keil, 2013), but may also assist in detecting additional threat-related aspects of the visual scene (Wieser & Keil, 2013).

The initial competition between the foreground task and unpleasant background distractors was subsequently followed by competition for all emotionally arousing stimuli between 1850 and 3050 ms post-picture onset (T2). Importantly, although no longer modulated by affective content, competition between the overlapping stimuli remained evident in the final time window, between 3250 and 5050 ms post-picture onset (T3). In this vein, re-entrant modulation of the visual cortex by deep structures such as the amygdala (Sabatinelli et al., 2009) has been discussed as a potential mechanism for enhancing perception of emotionally arousing visual stimuli. Based on work in the macaque model, this mechanism has been hypothesized to rely on several serial processing steps along the ventral stream (Amaral et al., 2003), which may require several hundred milliseconds, consistent with human electrophysiology work (Keil et al., 2009, 2012). As the ssVEP is a measure of ongoing stimulus processing in visual cortex, it likely reflects the net effect of biasing signals from higher-order cortical regions (Kastner & Ungerleider, 2000), which in turn alter the sensitivity of lower-tier visual cortical neurons in favor of features associated with emotionally arousing content (e.g., Keil et al., 2012). Thus, the findings reported here are consistent with the literature suggesting that emotionally arousing stimuli receive prioritized processing (Bradley et al., 2003) at the cost of concurrently presented information (Keil et al., 2005; Ihssen et al., 2007; Hindi Attar & Müller, 2012; Deweese et al., 2014), and, in a frequency-tagging paradigm, confirm that emotionally arousing stimuli involuntarily capture and “withdraw” attentional resources from simultaneous task demands. This “trade-off” of attention is consistent with the biased competition model of attention, which holds that competition effects are greatest when competing stimuli activate overlapping neural populations (Desimone & Duncan, 1995).

In sum, these data support the view that the processing capacity of the visual system as measured by ssVEPs is limited. Given these capacity constraints in cortices low in the traditional visual hierarchy, it appears evolutionarily adaptive that motivationally and emotionally relevant cues are prioritized at the cost of concurrent stimuli. The fact that this trade-off pattern was not linear may reflect the resolution of local competition in visual cortex through distinct, widespread neural circuits. Evolutionarily old motive circuits in the medial temporal lobe may amplify the neural response to emotionally engaging cues (Sabatinelli et al., 2009), whereas task-based amplification may reflect bias signals originating in frontal cortices (Barcelo et al., 2000). Thus, future work may examine competition at the level of brain networks, using the strong ssVEP signal as a reference point for detecting and quantifying neural communication during emotional distraction.

Acknowledgements

This research was supported by the National Institute of Mental Health Grants R01 MH084932-02 and R01 MH097320, and by a grant from the Spanish Government (I+D+i PSI2009-07066), awarded to Andreas Keil. This research was also supported by the German Research Foundation Grant Mu 972/22-2, awarded to Matthias Müller. The authors declare no competing interests. Additionally, the authors would like to thank Christianne Biggane, Cintya Larios, and Tyler Jarrett for their help with data collection.

Abbreviations

ANOVA
Analysis of variance
EEG
Electroencephalogram
ERP
Event-related potential
SAM
Self-Assessment Manikin
ssVEP
Steady-state visual evoked potential

Footnotes

1. IAPS pictures used were: pleasant (1463, 4647, 4656, 4658, 4664, 4669, 4670, 4672, 4676, 4677, 4680, 4681, 4687, 4689, 4690, 4693, 4800, 4810, 4649, 4604, 4611), neutral (1670, 2411, 2383, 2393), unpleasant (1050, 1051, 1080, 1110, 1113, 1114, 1120, 3017, 3030, 3051, 3059, 3060, 3061, 3062, 3063, 3069, 3080, 3130, 3150, 3168, 3170, 3213, 3266, 9253)

2. EmoPicS pictures used were: neutral (120, 278, 279, 280, 281, 282), unpleasant (234, 239, 242, 322, 324)

References

  • Amaral DG, Behniea H, Kelly JL. Topographic organization of projections from the amygdala to the visual cortex in the macaque monkey. Neuroscience. 2003;118:1099–1120.
  • Appelbaum LG, Wade AR, Vildavski VY, Pettet MW, Norcia AM. Cue-invariant networks for figure and background processing in human visual cortex. J. Neurosci. 2006;26:11695–11708.
  • Barcelo F, Suwazono S, Knight RT. Prefrontal modulation of visual processing in humans. Nat. Neurosci. 2000;3:399–403.
  • Bradley MM, Lang PJ. Measuring emotion: the Self-Assessment Manikin and the Semantic Differential. J. Behav. Ther. Exp. Psychiatry. 1994;25:49–59.
  • Bradley MM, Sabatinelli D, Lang PJ, Fitzsimmons JR, King W, Desai P. Activation of the visual cortex in motivated attention. Behav. Neurosci. 2003;117:369–380.
  • Chen Y, Seth AK, Gally JA, Edelman GM. The power of human brain magnetoencephalographic signals can be modulated up or down by changes in an attentive visual task. Proc. Natl. Acad. Sci. U S A. 2003;100:3501–3506.
  • Desimone R. Visual attention mediated by biased competition in extrastriate visual cortex. Philos. Trans. R. Soc. Lond. B Biol. Sci. 1998;353:1245–1255.
  • Desimone R, Duncan J. Neural mechanisms of selective visual attention. Annu. Rev. Neurosci. 1995;18:193–222.
  • Deweese MM, Bradley MM, Lang PJ, Andersen SK, Müller MM, Keil A. Snake fearfulness is associated with sustained competitive biases to visual snake features: Hypervigilance without avoidance. Psychiatry Res. 2014;219:329–335.
  • Hindi Attar C, Andersen SK, Müller MM. Time course of affective bias in visual attention: convergent evidence from steady-state visual evoked potentials and behavioral data. Neuroimage. 2010;53:1326–1333.
  • Hindi Attar C, Müller MM. Selective attention to task-irrelevant emotional distractors is unaffected by the perceptual load associated with a foreground task. 2012.
  • Ihssen N, Heim S, Keil A. The costs of emotional attention: Affective processing inhibits subsequent lexico-semantic analysis. J. Cogn. Neurosci. 2007;19:1932–1949.
  • Junghöfer M, Elbert T, Tucker DM, Rockstroh B. Statistical control of artifacts in dense array EEG/MEG studies. Psychophysiology. 2000;37:523–532.
  • Kastner S, Ungerleider LG. Mechanisms of visual attention in the human cortex. Annu. Rev. Neurosci. 2000;23:315–341.
  • Keil A, Costa V, Smith JC, Sabatinelli D, McGinnis EM, Bradley MM, Lang PJ. Tagging cortical networks in emotion: a topographical analysis. Hum. Brain Mapp. 2012;33:2920–2931.
  • Keil A, Debener S, Gratton G, Junghöfer M, Kappenman ES, Luck SJ, Luu P, Miller GA, Yee CM. Committee report: Publication guidelines and recommendations for studies using electroencephalography and magnetoencephalography. Psychophysiology. 2014;51:1–21.
  • Keil A, Gruber T, Müller MM, Moratti S, Stolarova M, Bradley MM, Lang PJ. Early modulation of visual perception by emotional arousal: evidence from steady-state visual evoked brain potentials. Cogn. Affect. Behav. Neurosci. 2003;3:195–206.
  • Keil A, Moratti S, Sabatinelli D, Bradley MM, Lang PJ. Additive effects of emotional content and spatial selective attention on electrocortical facilitation. Cereb. Cortex. 2005;15:1187–1197.
  • Keil A, Sabatinelli D, Ding M, Lang PJ, Ihssen N, Heim S. Re-entrant projections modulate visual cortex in affective perception: Directional evidence from Granger causality analysis. Hum. Brain Mapp. 2009;30:532–540.
  • Keitel C, Andersen SK, Müller MM. Competitive effects on steady-state visual evoked potentials with frequencies in- and outside the alpha band. Exp. Brain Res. 2010;205:489–495.
  • Lang PJ, Bradley MM, Cuthbert BN. Motivated attention: Affect, activation, and action. In: Lang PJ, Simons RF, Balaban M, editors. Attention and Orienting: Sensory and Motivational Processes. Lawrence Erlbaum; Mahwah, NJ: 1997. pp. 97–136.
  • Mast J, Victor JD. Fluctuations of steady-state VEPs: interaction of driven evoked potentials and the EEG. Electroencephalogr. Clin. Neurophysiol. 1991;78:389–401.
  • Miskovic V, Keil A. Perceiving threat in the face of safety: Excitation and inhibition of conditioned fear in human visual cortex. J. Neurosci. 2013;33:72–78.
  • Müller MM, Andersen S, Keil A. Time course of competition for visual processing resources between emotional pictures and a foreground task. Cereb. Cortex. 2008;18:1892–1899.
  • Müller MM, Hübner R. Can the spotlight of attention be shaped like a doughnut? Evidence from steady-state visual evoked potentials. Psychol. Sci. 2002;13:119–124.
  • Müller MM, Picton TW, Valdes-Sosa P, Riera J, Teder-Salejarvi WA, Hillyard SA. Effects of spatial selective attention on the steady-state visual evoked potential in the 20-28 Hz range. Brain Res. Cogn. Brain Res. 1998;6:249–261.
  • Norcia AM, Appelbaum LG, Ales JM, Cottereau BR, Rossion B. The steady-state visual evoked potential in vision research: A review. J. Vis. 2015;15:4.
  • Öhman A, Mineka S. Fears, phobias, and preparedness: toward an evolved module of fear and fear learning. Psychol. Rev. 2001;108:483–522.
  • Phelps EA, LeDoux JE. Contributions of the amygdala to emotion processing: from animal models to human behavior. Neuron. 2005;48:175–187.
  • Sabatinelli D, Lang PJ, Bradley MM, Costa VD, Keil A. The timing of emotional discrimination in human amygdala and ventral visual cortex. J. Neurosci. 2009;29:14864–14868.
  • Sabatinelli D, Lang PJ, Keil A, Bradley MM. Emotional perception: Correlation of functional MRI and event-related potentials. Cereb. Cortex. 2007;17:1066–1073.
  • Wang J, Clementz B, Keil A. The neural correlates of feature-based selective attention when viewing spatially and temporally overlapping images. Neuropsychologia. 2007;45:1393–1399.
  • Wessa M, Kanske P, Neumeister P, Bode K, Heissler J, Schönfelder S. EmoPics: Subjektive und psychophysiologische Evaluation neuen Bildmaterials für die klinisch-bio-psychologische Forschung [EmoPics: Subjective and psychophysiological evaluation of new picture material for clinical-biopsychological research]. Z. Für Klin. Psychol. Psychother. 2010;39:77.
  • Wieser MJ, Keil A. Fearful faces heighten the cortical representation of contextual threat. Neuroimage. 2013.
  • Wieser MJ, McTeague LM, Keil A. Competition effects of threatening faces in social anxiety. Emotion. 2012;12:1050–1060.