Cognitive operations are thought to emerge from dynamic interactions between spatially distinct brain areas. Synchronization of oscillations has been proposed to regulate these interactions, but we do not know whether this large-scale synchronization can respond rapidly to changing cognitive demands. Here we show that as task demands change during a trial, multiple distinct networks are dynamically formed and reformed via oscillatory synchronization. Distinct frequency-coupled networks were rapidly formed to process reward value, maintain information in visual working memory, and deploy visual attention. Strong single-trial correlations showed that networks formed even before the presentation of imperative stimuli could predict the strength of subsequent networks, as well as the speed and accuracy of behavioral responses seconds later. These frequency-coupled networks better predicted single-trial behavior than either local oscillations or event-related potentials. Our findings demonstrate the rapid reorganization of networks formed by dynamic activity in response to changing task demands within a trial.
cross-frequency coupling; synchrony; reward; visual working memory; visual attention
Biased competition theory proposes that representations in working memory drive visual attention to select similar inputs. However, behavioral tests of this hypothesis have led to mixed results. These inconsistent findings could be due to the inability of behavioral measures to reliably detect the early, automatic effects on attentional deployment that the memory representations exert. Alternatively, executive mechanisms may govern how working memory representations influence attention based on higher-level goals. In the present study, we tested these hypotheses using the N2pc component of participants’ event-related potentials (ERPs) to directly measure the early deployments of covert attention. Participants searched for a target in an array that sometimes contained a memory-matching distractor. In Experiments 1–3, we manipulated the difficulty of the target discrimination and the proximity of distractors, but consistently observed that covert attention was deployed to the search targets and not the memory-matching distractors. In Experiment 4, we showed that when participants’ goal involved attending to memory-matching items, these items elicited a large and early N2pc. Our findings demonstrate that working memory representations alone are not sufficient to guide early deployments of visual attention to matching inputs and that goal-dependent executive control mediates the interactions between working memory representations and visual attention.
Event-related potentials; Attention: Visual; Memory: Working memory; Executive functions
Recent research using change-detection tasks has shown that a directed-forgetting cue, indicating that a subset of the information stored in memory can be forgotten, significantly benefits the other information stored in visual working memory. How do these directed-forgetting cues aid the memory representations that are retained? We addressed this question in the present study by using a recall paradigm to measure the nature of the retained memory representations. Our results demonstrate that a directed-forgetting cue leads to higher fidelity representations of the remaining items and a lower probability of dropping these representations from memory. Next, we show that this is possible because the to-be-forgotten item is expelled from visual working memory following the cue, allowing maintenance mechanisms to be focused on only the items that remain in visual working memory. Thus, the present findings show that cues to forget benefit the remaining representations in visual working memory by fundamentally improving their quality relative to conditions in which just as many items are encoded but no cue is provided.
How we find what we are looking for in complex visual scenes is a seemingly simple ability that has taken half a century to unravel. The first study to use the term visual search showed that as the number of objects in a complex scene increases, observers’ reaction times increase proportionally (Green and Anderson, 1956). This observation suggests that our ability to process the objects in the scenes is limited in capacity. However, if it is known that the target will have a certain feature attribute, for example, that it will be red, then only an increase in the number of red items increases reaction time. This observation suggests that we can control which visual inputs receive the benefit of our limited capacity to recognize the objects, such as those defined by the color red, as the items we seek. The nature of the mechanisms that underlie these basic phenomena in the literature on visual search has been more difficult to definitively determine. In this paper, I discuss how electrophysiological methods have provided us with the necessary tools to understand the nature of the mechanisms that give rise to the effects observed in the first visual search paper. I begin by describing how recordings of event-related potentials from humans and nonhuman primates have shown us how attention is deployed to possible target items in complex visual scenes. Then, I will discuss how event-related potential experiments have allowed us to directly measure the memory representations that are used to guide these deployments of attention to items with target-defining features.
visual attention; visual working memory; long-term memory; visual search; electrophysiology; event-related potentials
Research has shown that performing visual search while maintaining representations in visual working memory displaces up to one object’s worth of information from memory. This memory displacement has previously been attributed to a nonspecific disruption of the memory representation by the mere presentation of the visual search array, and the goal of the present study was to determine whether it instead reflects the use of visual working memory in the actual search process. The first hypothesis tested was that working memory displacement occurs because observers preemptively discard about an object’s worth of information from visual working memory in anticipation of performing visual search. Second, we tested the hypothesis that on target-absent trials no information is displaced from visual working memory because no target is entered into memory when search is completed. Finally, we tested whether visual working memory displacement is due to the need to select a response to the search array. The findings rule out these alternative explanations. The present study supports the hypothesis that change-detection performance is impaired when a search array appears during the retention interval due to nonspecific disruption or masking.
Theories of attention are compatible with the idea that we can bias attention to avoid selecting objects that have known nontarget features. Although this may underlie several existing phenomena, the explicit guidance of attention away from known nontargets has yet to be demonstrated. Here we show that observers can use feature cues (i.e., color) to bias attention away from nontarget items during visual search. These negative cues were used to quickly instantiate a template for rejection that reliably facilitated search across the cue-to-search stimulus onset asynchronies (SOAs), although negative cues were not as potent as cues that guide attention toward target features. Furthermore, by varying the search set size we show that a template for rejection is increasingly effective in facilitating search as scene complexity increases. Our findings demonstrate that knowing what not to look for can be used to configure attention to avoid certain features, complementing what is known about setting attention to select certain target features.
A defining characteristic of visual working memory is its limited capacity. This means that it is crucial to maintain only the most relevant information in visual working memory. However, empirical research is mixed as to whether it is possible to selectively maintain a subset of the information previously encoded into visual working memory. Here we examined the ability of subjects to use cues to either forget or remember a subset of the information already stored in visual working memory. In Experiment 1, participants were cued to either forget or remember one of two groups of colored squares during a change-detection task. We found that both types of cues aided performance in the visual working memory task, but that observers benefited more from a cue to remember than a cue to forget a subset of the objects. In Experiment 2, we show that the previous findings, which indicated that directed-forgetting cues are ineffective, were likely due to the presence of invalid cues that appear to cause observers to disregard such cues as unreliable. In Experiment 3, we recorded event-related potentials (ERPs) and show that an electrophysiological index of focused maintenance is elicited by cues that indicate which subset of information in visual working memory needs to be remembered, ruling out alternative explanations of the behavioral effects of retention-interval cues. The present findings demonstrate that observers can focus maintenance mechanisms on specific objects in visual working memory based on cues indicating future task relevance.
Due to the precise temporal resolution of electrophysiological recordings, the event-related potential (ERP) technique has proven particularly valuable for testing theories of perception and attention. Here, I provide a brief tutorial of the ERP technique for consumers of such research and those considering the use of human electrophysiology in their own work. My discussion begins with the basics regarding what brain activity ERPs measure and why they are well suited to reveal critical aspects of perceptual processing, attentional selection, and cognition that are unobservable with behavioral methods alone. I then review a number of important methodological issues and often forgotten facts that should be considered when evaluating or planning ERP experiments.
Although areas of frontal cortex are thought to be critical for maintaining information in visuospatial working memory, the event-related potential (ERP) index of maintenance is found over posterior cortex in humans. In the present study, we reconcile these seemingly contradictory findings. Here we show that macaque monkeys and humans exhibit the same posterior ERP signature of working memory maintenance that predicts the precision of the memory-based behavioral responses. In addition, we show that the specific pattern of rhythmic oscillations in the alpha band, recently demonstrated to underlie the human visual working memory ERP component, is also present in monkeys. Next, we concurrently recorded intracranial local field potentials from two prefrontal areas and an additional frontal cortical area to determine their contribution to the surface potential indexing maintenance. The local fields in the two prefrontal areas, but not the cortex immediately posterior, exhibited amplitude modulations, timing, and relationships to behavior indicating that they contribute to the generation of the surface ERP component measured from the distal posterior electrodes. Rhythmic neural activity in the theta and gamma bands during maintenance provided converging support for the engagement of the same brain regions. These findings demonstrate that nonhuman primates have homologous electrophysiological signatures of visuospatial working memory to those of humans and that a distributed neural network, including frontal areas, underlies the posterior ERP index of visuospatial working memory maintenance.
In many theories of cognition, researchers propose that working memory and perception operate interactively. For example, in previous studies researchers have suggested that sensory inputs matching the contents of working memory will have an automatic advantage in the competition for processing resources. The authors tested this hypothesis by requiring observers to perform a visual search task while concurrently maintaining object representations in visual working memory. The hypothesis that working memory activation produces a simple but uncontrollable bias signal leads to the prediction that items matching the contents of working memory will automatically capture attention. However, no evidence for automatic attentional capture was obtained; instead, the participants avoided attending to these items. Thus, the contents of working memory can be used in a flexible manner for facilitation or inhibition of processing.
attention; working memory; visual search; capture
Theories of visual attention suggest that working memory representations automatically guide attention toward memory-matching objects. Some empirical tests of this prediction have produced results consistent with working memory automatically guiding attention. However, others have shown that individuals can strategically control whether working memory representations guide visual attention. Previous studies have not independently measured automatic and strategic contributions to the interactions between working memory and attention. In this study, we used a classic manipulation of the probability of valid, neutral, and invalid cues to tease apart the nature of such interactions. This framework utilizes measures of reaction time (RT) to quantify the costs and benefits of attending to memory-matching items and infer the relative magnitudes of automatic and strategic effects. We found both costs and benefits even when the memory-matching item was no more likely to be the target than other items, indicating an automatic component of attentional guidance. However, the costs and benefits essentially doubled as the probability of a trial with a valid cue increased from 20% to 80%, demonstrating a potent strategic effect. We also show that the instructions given to participants led to a significant change in guidance distinct from the actual probability of events during the experiment. Together, these findings demonstrate that the influence of working memory representations on attention is driven by both automatic and strategic interactions.
attention; working memory; cuing; automaticity; strategic control; PsycINFO classification 2346
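The cost/benefit logic of the cuing framework described above can be sketched in a few lines of code. This is an illustrative sketch, not the authors’ analysis code: the reaction times are hypothetical values, and the computation simply contrasts the valid- and invalid-cue conditions against the neutral baseline, as in classic probability-cuing analyses.

```python
# Illustrative sketch of cost/benefit scoring in a cuing paradigm.
# The RT values below are hypothetical, chosen only to show the arithmetic:
#   benefit = RT(neutral) - RT(valid)    (speed-up from attending the cued item)
#   cost    = RT(invalid) - RT(neutral)  (slow-down from attending the wrong item)

def cue_effects(rt):
    """Return (benefit, cost) in ms of a cue relative to the neutral baseline.

    rt: dict with mean reaction times for 'valid', 'neutral', and 'invalid' trials.
    """
    benefit = rt["neutral"] - rt["valid"]
    cost = rt["invalid"] - rt["neutral"]
    return benefit, cost

# Hypothetical mean RTs (ms) for one participant.
mean_rt = {"valid": 520.0, "neutral": 560.0, "invalid": 610.0}
benefit, cost = cue_effects(mean_rt)
print(benefit, cost)  # 40.0 50.0
```

On this logic, an automatic component of guidance shows up as nonzero costs and benefits even when the cue is uninformative, while a strategic component shows up as costs and benefits that scale with cue validity.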
The error-related negativity (ERN) and positivity (Pe) are components of event-related potential (ERP) waveforms recorded from humans that are thought to reflect performance monitoring. Error-related signals have also been found in single-neuron responses and local-field potentials recorded in supplementary eye field and anterior cingulate cortex of macaque monkeys. However, the homology of these neural signals across species remains controversial. Here, we show that monkeys exhibit ERN and Pe components when they commit errors during a saccadic stop-signal task. The voltage distributions and current densities of these components were similar to those found in humans performing the same task. Subsequent analyses show that neither stimulus- nor response-related artifacts accounted for the error-ERPs. This demonstration of macaque homologues of the ERN and Pe forms a keystone in the bridge linking human and nonhuman primate studies on the neural basis of performance monitoring.
Many recent studies of visual working memory have used change-detection tasks in which subjects view sequential displays and are asked to report whether they are identical or if one object has changed. A key question is whether the memory system used to perform this task is sufficiently flexible to detect changes in object identity independent of spatial transformations, but previous research has yielded contradictory results. To address this issue, the present study compared standard change-detection tasks with tasks in which the objects varied in size or position between successive arrays. Performance was nearly identical across the standard and transformed tasks unless the task implicitly encouraged spatial encoding. These results resolve the discrepancies in prior studies and demonstrate that the visual working memory system can detect changes in object identity across spatial transformations.
Can we entirely erase a temporary memory representation from mind? This question has been addressed in several recent studies that tested the specific hypothesis that a representation can be erased from visual working memory based on a cue that indicated that the representation was no longer necessary for the task. In addition to behavioral results that are consistent with the idea that we can throw information out of visual working memory, recent neurophysiological recordings support this proposal. However, given the infinite capacity of long-term memory, it is unclear whether throwing a representation out of visual working memory really removes its effects on memory entirely. In this paper, we advocate for an approach that examines our ability to erase memory representations from working memory, as well as possible traces that those erased representations leave in long-term memory.
purging; process purity; directed-forgetting; visual working memory; long-term memory
During the last decade one of the most contentious and heavily studied topics in the attention literature has been the role that working memory representations play in controlling perceptual selection. The hypothesis has been advanced that to have attention select a certain perceptual input from the environment, we only need to represent that item in working memory. Here we summarize the work indicating that the relationship between what representations are maintained in working memory and what perceptual inputs are selected is not so simple. First, it appears that attentional selection is also determined by high-level task goals that mediate the relationship between working memory storage and attentional selection. Second, much of the recent work from our laboratory has focused on the role of long-term memory in controlling attentional selection. We review recent evidence supporting the proposal that working memory representations are critical during the initial configuration of attentional control settings, but that after those settings are established long-term memory representations play an important role in controlling which perceptual inputs are selected by mechanisms of attention.
visual attention; visual working memory; long-term memory; event-related potentials
Previous studies have proposed that attention is not necessary for detecting simple features but is necessary for binding them to spatial locations. The present study tested this hypothesis, using the N2pc component of the event-related potential waveform as a measure of the allocation of attention. A simple feature detection condition, in which observers reported whether a target color was present or not, was compared with feature-location binding conditions, in which observers reported the location of the target color. A larger N2pc component was observed in the binding conditions than in the detection condition, indicating that additional attentional resources are needed to bind a feature to a location than to detect the feature independently of its location. This finding supports theories of attention in which attention plays a special role in binding features.
Indirect evidence suggests that the contents of visual working memory may be maintained within sensory areas early in the visual hierarchy. We tested this possibility using a well-studied motion repulsion phenomenon in which perception of one direction of motion is distorted when another direction of motion is viewed simultaneously. We found that observers misperceived the actual direction of motion of a single motion stimulus if, while viewing that stimulus, they were holding a different motion direction in visual working memory. Control experiments showed that none of a variety of alternative explanations could account for this repulsion effect induced by working memory. Our findings provide compelling evidence that visual working memory representations directly interact with the same neural mechanisms as those involved in processing basic sensory events.
Visual working memory; Visual perception
It has been intensely debated whether visual stimuli are processed to the point of semantic analysis in the absence of awareness. In the present study, we measured the extent to which the meaning of a stimulus was registered using the N400 component of human event-related potentials (ERPs), a highly sensitive index of the semantic mismatch between a stimulus and the context in which it is presented. Observers judged the semantic relatedness of a context and target word while ERPs were recorded under continuous flash suppression (Experiments 1 and 2) and binocular rivalry (Experiment 3). Finally, we parametrically manipulated the visibility of the target word by increasing the contrast between the target word and the suppressive stimulus presented to the other eye (Experiment 4). We found that the amplitude of the N400 was attenuated with increasing suppression depth and was absent whenever the observers could not discriminate the meaning of suppressed words. We discuss these findings in the context of single-process models of consciousness, which can account for a large body of empirical evidence obtained from visual masking, attentional manipulations and, now, interocular suppression paradigms.
The role of spike rate versus timing codes in visual target selection is unclear. We simultaneously recorded activity from multiple frontal eye field neurons and asked whether they interacted to select targets from distractors during visual search. When both neurons in a pair selected the target and had overlapping receptive fields (RFs), they cooperated more than when one or neither neuron in the pair selected the target, measured by positive spike timing correlations using joint peristimulus time histogram analysis. The amount of cooperation depended on the location of the search target: it was higher when the target was inside both neurons’ RFs than when it was inside one RF but not the other, or outside both RFs. Elevated spike timing coincidences occurred at the time of attentional selection of the target as measured by average modulation of discharge rates. We observed competition among neurons with spatially non-overlapping RFs, measured by negative spike timing correlations. Thus, we provide evidence for dynamic and task-dependent cooperation and competition among frontal eye field neurons during visual target selection.
Attention; Saccade; Vision; Visual; Decision; Receptive Field; Redundancy; Macaque
The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. This study tests the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory-percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments that examined manual reaction times, saccadic reaction times, and event-related potential latencies. However, these experiments also demonstrated that a slow, limited-capacity process must occur before the observer can make a manual change-detection response.
Previous research suggests that target templates are stored in visual working memory and used to guide attention during visual search. However, observers can search efficiently even if working memory is filled to capacity by a concurrent task. The idea that target templates are stored in working memory receives support primarily from studies of nonhuman primates in which the target varies from trial to trial, and it is possible that working memory templates are not necessary when target identity remains constant, as in most studies of visual search in humans. To test this hypothesis, we asked subjects to perform a visual search task during the delay interval of a visual working memory task. The two tasks were found to interfere with each other when the search targets changed from trial to trial, but not when target identity remained constant. Thus, a search template is stored in visual working memory only when the target varies from trial to trial. These findings suggest that the network of brain areas involved in shifting attention during visual search tasks may be able to operate essentially independently of the anatomical areas that perform visual working memory maintenance of objects, but only if the identity of the visual search target is stable across time.