Markers of preparatory visual–spatial attention in sensory cortex have been described both as lateralized, slow-wave event-related potential (ERP) components and as lateralized changes in oscillatory electroencephalographic alpha power, but the roles of these markers and their functional relationship are still unclear. Here, three versions of a visual–spatial cueing paradigm, differing in perceptual task difficulty and/or response instructions, were used to investigate the functional relationships between posterior oscillatory-alpha changes and our previously reported posterior, slow-wave biasing-related negativity (swBRN) ERP activity. The results indicate that the swBRN reflects spatially specific, pretarget preparatory activity sensitive to the expected perceptual difficulty of the target detection task, correlating in both location and strength with the early sensory-processing N1 ERP to the target, consistent with reflecting a preparatory baseline-shift mechanism. In contrast, contralateral event-related decreases in alpha-band power were relatively insensitive to perceptual difficulty and differed topographically from both the swBRN and target N1. Moreover, when response instructions emphasized making immediate responses to targets, compared with prescribing delayed responses, contralateral alpha event-related desynchronization activity was particularly strong and correlated with the longer-latency target-P3b activity. Thus, in contrast to the apparent perceptual-biasing role of swBRN activity, contralateral posterior alpha activity may represent an attentionally maintained task set linking stimulus-specific information and task-specific response requirements.
attention; biasing; control; EEG; ERP
The integration of multisensory information has been shown to be guided by spatial and temporal proximity, as well as to be influenced by attention. Here we used neural measures of the multisensory spread of attention to investigate the spatial and temporal linking of synchronous versus near-synchronous auditory and visual events. Human participants attended selectively to one of two lateralized visual-stimulus streams while task-irrelevant tones were presented centrally. Electrophysiological measures of brain activity showed that tones occurring simultaneously or delayed by 100 ms were temporally linked to an attended visual stimulus, as reflected by robust cross-modal spreading-of-attention activity, but not when delayed by 300 ms. The neural data also indicated a ventriloquist-like spatial linking of the auditory to the attended visual stimuli, but only when occurring simultaneously. These neurophysiological results thus provide unique insight into the temporal and spatial principles of multisensory feature integration and the fundamental role attention plays in such integration.
Multisensory; Attention; Temporal; Spatial; ERP; Ventriloquism
Reward has been shown to promote human performance in multiple task domains. However, an important debate has developed about the uniqueness of reward-related neural signatures associated with such facilitation, as similar neural patterns can be triggered by increased attentional focus independent of reward. Here, we used functional magnetic resonance imaging to directly investigate the neural commonalities and interactions between the anticipation of both reward and task difficulty, by independently manipulating these factors in a cued-attention paradigm. In preparation for the target stimulus, both factors increased activity within the midbrain, dorsal striatum, and fronto-parietal areas, while inducing deactivations in default-mode regions. Additionally, reward engaged the ventral striatum, posterior cingulate, and occipital cortex, while difficulty engaged medial and dorsolateral frontal regions. Importantly, a network comprising the midbrain, caudate nucleus, thalamus, and anterior midcingulate cortex exhibited an interaction between reward and difficulty, presumably reflecting additional resource recruitment for demanding tasks with a profitable outcome. This notion was consistent with a negative correlation between cue-related midbrain activity and difficulty-induced performance detriments in reward-predictive trials. Together, the data demonstrate that expected value and attentional demands are integrated in cortico-striatal-thalamic circuits in coordination with the dopaminergic midbrain to flexibly modulate resource allocation for an effective pursuit of behavioral goals.
attention; fMRI; midbrain; reward; task demands
The electrophysiological correlates of conflict processing and cognitive control have been well characterized for the visual modality in paradigms such as the Stroop task. Much less is known about corresponding processes in the auditory modality. Here, electroencephalographic recordings of brain activity were measured during an auditory Stroop task, using three different forms of behavioral response (Overt verbal, Covert verbal, and Manual), that closely paralleled our previous visual-Stroop study. As expected, behavioral responses were slower and less accurate for incongruent compared to congruent trials. Neurally, incongruent trials showed an enhanced fronto-central negative-polarity wave (Ninc), similar to the N450 in visual-Stroop tasks, with similar variations as a function of behavioral response mode, but peaking ~150 ms earlier, followed by an enhanced positive posterior wave. In addition, sequential behavioral and neural effects were observed that supported the conflict-monitoring and cognitive-adjustment hypothesis. Thus, while some aspects of the conflict detection processes, such as timing, may be modality-dependent, the general mechanisms would appear to be supramodal.
Auditory; Stroop; Conflict; EEG; Incongruency
Associating stimuli with the prospect of reward typically facilitates responses to those stimuli due to an enhancement of attentional and cognitive-control processes. Such reward-induced facilitation might be especially helpful when cognitive-control mechanisms are challenged, as when one must overcome interference from irrelevant inputs. Here, we investigated the neural dynamics of reward effects in a color-naming Stroop task by employing event-related potentials (ERPs). We found that behavioral facilitation in potential-reward trials, as compared to no-reward trials, was paralleled by early ERP modulations likely indexing increased attention to the reward-predictive stimulus. Moreover, reward changed the temporal dynamics of conflict-related ERP components, which may be a consequence of an early access to the various stimulus features and their relationships. Finally, although word meanings referring to potential-reward colors were always task-irrelevant, they caused greater interference compared to words referring to no-reward colors, an effect that was accompanied by a relatively early fronto-central ERP modulation. This latter observation suggests that task-irrelevant reward information can undermine goal-directed behavior at an early processing stage, presumably reflecting priming of a goal-incompatible response. Yet, these detrimental effects of incongruent reward-related words were absent in potential-reward trials, apparently due to the prioritized processing of task-relevant reward information. Taken together, the present data demonstrate that reward associations can influence conflict processing by changing the temporal dynamics of stimulus processing and subsequent cognitive-control mechanisms.
The observation of cueing effects (faster responses for cued than uncued targets) rapidly following centrally presented arrows has led to the suggestion that arrows trigger rapid, automatic shifts of spatial attention. However, these effects have primarily been observed during easy target-detection tasks when both cue and target remain on the screen until the behavioral response. We manipulated stimulus duration and task difficulty in an attention-cueing experiment to explore non-attentional explanations for rapid cueing effects. Contrary to attention-based predictions, short-interval cueing effects were observed only for long-duration cue and target stimuli, occurred even when the cue and target were presented simultaneously, and were driven by slowing of the uncued-target responses, rather than any facilitation for cued targets. We propose that, under these long-duration, short-interval conditions, the processing of the cue and target interact more extensively in the brain, and that when the cue and target convey incongruent spatial information (i.e., on invalidly cued trials) it leads to conflict-related slowing of responses.
Spatial cueing; Arrow cues; Attention; Automatic orienting; Conflict
The specific contribution of different parietal regions to episodic retrieval is a topic of intense debate. According to the Attention to Memory (AtoM) model, dorsal parietal cortex (DPC) mediates top–down attention processes guided by retrieval goals, whereas ventral parietal cortex (VPC) mediates bottom–up attention processes captured by the retrieval output or the retrieval cue. This model also hypothesizes that the attentional functions of DPC and VPC are similar for memory and perception. To investigate this last hypothesis, we scanned participants with event-related fMRI while they performed memory and perception tasks, each comprising an orienting phase (top–down attention) and a detection phase (bottom–up attention). The study yielded two main findings. First, consistent with the AtoM model, orienting-related activity for memory and perception overlapped in DPC, whereas detection-related activity for memory and perception overlapped in VPC. The DPC overlap was greater in the left intraparietal sulcus, and the VPC overlap in the left TPJ. Around overlapping areas, there were differences in the spatial distribution of memory and perception activations, which were consistent with trends reported in the literature. Second, both DPC and VPC showed stronger connectivity with the medial-temporal lobe during the memory task and with visual cortex during the perception task. These findings suggest that, during memory tasks, some parietal regions mediate attentional control processes similar to those involved in perception tasks (orienting in DPC vs. detection in VPC), although on different types of information (mnemonic vs. sensory).
It is generally agreed that considerable amounts of low-level sensory processing of visual stimuli can occur without conscious awareness. On the other hand, the degree of higher-level visual processing that occurs in the absence of awareness is as yet unclear. Here, event-related potential (ERP) measures of brain activity were recorded during a sandwich-masking paradigm, a commonly used approach for attenuating conscious awareness of visual stimulus content. In particular, the present study used a combination of ERP activation contrasts to track both early sensory-processing ERP components and face-specific N170 ERP activations, in trials with versus without awareness. The electrophysiological measures revealed that the sandwich masking abolished the early face-specific N170 neural response (onset at ~170 ms post-stimulus), an effect that paralleled the abolition of awareness of face stimuli. Moreover, the masking appeared to produce a strong attenuation of earlier feed-forward visual sensory-processing signals. This early attenuation presumably resulted in insufficient information being fed into the higher-level visual system pathways specific to object category processing, thus leading to unawareness of the visual object content. These results support a coupling of visual awareness and neural indices of face processing, while also demonstrating an early low-level mechanism of interference in sandwich masking.
Several major cognitive neuroscience models have posited that focal spatial attention is required to integrate different features of an object to form a coherent perception of it within a complex visual scene. Although many behavioral studies have supported this view, some have suggested that complex perceptual discrimination can be performed even with substantially reduced focal spatial attention, calling into question the complexity of object representation that can be achieved without focused spatial attention. In the present study, we took a cognitive neuroscience approach to this problem by recording cognition-related brain activity both to help resolve the questions about the role of focal spatial attention in object-categorization processes and to investigate the underlying neural mechanisms, focusing particularly on the temporal cascade of these attentional and perceptual processes in visual cortex. More specifically, we recorded electrical brain activity in humans engaged in a specially designed cued-visual-search paradigm to probe the object-related visual processing before and during the transition from distributed to focal spatial attention. The onset time of the color-popout cueing information, indicating where within an object array the subject was to shift attention, was parametrically varied relative to the presentation of the array (i.e., either occurring simultaneously or being delayed by 50 or 100 ms). The electrophysiological results demonstrate that some level of object-specific representation can be formed in parallel for multiple items across the visual field under spatially distributed attention, before focal spatial attention is allocated to any of them. The object-discrimination process appears to be subsequently amplified as soon as focal spatial attention is directed to a specific location and object.
This set of novel neurophysiological findings thus provides important new insights on fundamental issues that have been long-debated in cognitive neuroscience concerning both object-related processing and the role of attention.
Recent research has demonstrated enhanced visual attention and visual perception in individuals with extensive experience playing action video games. These benefits manifest in several realms, but much remains unknown about the ways in which video game experience alters perception and cognition. The current study examined whether video game players’ benefits generalize beyond vision to multisensory processing by presenting video game players and non-video game players auditory and visual stimuli within a short temporal window. Participants performed two discrimination tasks, both of which revealed benefits for video game players: In a simultaneity judgment task, video game players were better able to distinguish whether simple visual and auditory stimuli occurred at the same moment or slightly offset in time, and in a temporal-order judgment task, they revealed an enhanced ability to determine the temporal sequence of multisensory stimuli. These results suggest that people with extensive experience playing video games display benefits that extend beyond the visual modality to also impact multisensory processing.
Multisensory integration has often been characterized as an automatic process. Recent findings suggest that multisensory integration can occur across various stages of stimulus processing that are linked to, and can be modulated by, attention. Stimulus-driven, bottom-up mechanisms induced by cross-modal interactions can automatically capture attention towards multisensory events, particularly when competition to focus elsewhere is relatively low. Conversely, top-down attention can facilitate the integration of multisensory inputs and lead to a spread of attention across sensory modalities. These findings point to a more intimate and multifaceted interplay between attention and multisensory integration than was previously thought. We review developments in our understanding of the interactions between attention and multisensory processing, and propose a framework that unifies previous, apparently discordant findings.
While it is commonly accepted that reward is an effective motivator of behavior, little is known about potential costs resulting from reward associations. Here, we used functional magnetic resonance imaging (fMRI) to investigate the neural underpinnings of such reward-related performance-disrupting effects in a reward-modulated Stroop task in humans. While reward associations in the task-relevant dimension (i.e., ink color) facilitated performance, behavioral detriments were found when the task-irrelevant dimension (i.e., word meaning) implicitly referred to reward-predictive ink colors. Neurally, only relevant reward associations invoked a typical reward-anticipation response in the nucleus accumbens (NAcc), which was in turn predictive of behavioral facilitation. In contrast, irrelevant reward associations increased activity in a medial prefrontal motor-control-related region, namely the pre-supplementary motor area (pre-SMA), which likely reflects the preemption and inhibition of automatic response tendencies that are amplified by irrelevant reward-related words. This view was further supported by a positive relationship between pre-SMA activity and pronounced response slowing in trials containing reward-related as compared to reward-unrelated incongruent words. Importantly, the distinct neural processes related to the beneficial and detrimental behavioral effects of reward associations appeared to arise from preferential-coding mechanisms in visual-processing areas that were shared by the two stimulus dimensions, suggesting a transfer of reward-related saliency to the irrelevant dimension, but with highly differential behavioral and neural ramifications. More generally, the data demonstrate that even entirely irrelevant reward associations can influence stimulus-processing and response-selection pathways relatively automatically, thereby representing an important flip-side of reward-driven performance enhancements.
reward; Stroop interference; fMRI; nucleus accumbens; pre-supplementary motor area; attention
Performance in a behavioral task can be facilitated by associating stimulus properties with reward. In contrast, conflicting information is known to impede task performance. Here we investigated how reward associations influence the within-trial processing of conflicting information using a color-naming Stroop task in which a subset of ink colors (task-relevant dimension) was associated with monetary incentives. We found that color-naming performance was enhanced on trials with potential reward versus those without. Moreover, in potential-reward trials, typical conflict-induced performance decrements were attenuated if the incongruent word (task-irrelevant dimension) was unrelated to reward. In contrast, incongruent words that were semantically related to reward-predicting ink colors interfered with performance in potential-reward trials and even more so in no-reward trials, despite the semantic meaning being entirely task-irrelevant. These observations imply that the prospect of reward enhances the processing of task-relevant stimulus information, whereas incongruent reward-related information in a task-irrelevant dimension can impede task performance.
Reward; Conflict; Stroop; Interference
The current study investigated the neural activity patterns associated with numerical sensitivity in adults. Event-related potentials (ERPs) were recorded while adults observed sequentially presented display arrays (S1 and S2) of non-symbolic numerical stimuli (dots) and made same/different judgments of these stimuli by pressing a button only when numerosities were the same (target trials). The main goals were to contrast the effects of numerical distance (close, medium, and far) and change direction (increasing, decreasing) between S1 and S2, both in terms of behavior and brain activity, and to examine the influence of individual differences in numeracy on the effects of these manipulations. Neural effects of distance were found to be significant between 360 and 600 ms after the onset of S2 (greater negativity-wave activity for closer numerical distances), while direction effects were found between 320 and 440 ms (greater negativity for decreasing direction). ERP change-direction effects did not interact with numerical distance, suggesting that the two types of information are processed independently. Importantly, subjects’ behavioral Weber fractions (w) for the same/different discrimination task correlated with distance-related ERP-activity amplitudes. Moreover, w also correlated with a separate objective measure of mathematical ability. Results thus draw a clear link between brain and behavior measures of number discrimination, while also providing support for the relationship between non-verbal magnitude discrimination and symbolic numerical processing.
Spatial attention to a visual stimulus that occurs synchronously with a task-irrelevant sound from a different location can lead to increased activity not only in visual cortex, but also auditory cortex, apparently reflecting the object-related spreading of attention across both space and modality (Busse et al., 2005). The processing of stimulus conflict, including multisensory stimulus conflict, is known to activate the anterior cingulate cortex (ACC), but the interactive influence on the sensory cortices remains relatively unexamined. Here we used fMRI to examine whether the multisensory spread of visual attention across the sensory cortices previously observed will be modulated by whether there is conceptual or object-related conflict between the relevant visual and irrelevant auditory inputs. Subjects visually attended to one of two lateralized visual letter streams while synchronously occurring, task-irrelevant, letter sounds were presented centrally, which could be either congruent or incongruent with the visual letters. We observed significant enhancements for incongruent versus congruent letter-sound combinations in the ACC and in the contralateral visual cortex when the visual component was attended, presumably reflecting the conflict detection and the need for boosted attention to the visual stimulus during incongruent trials. In the auditory cortices, activity increased bilaterally if the spatially discordant auditory stimulation was incongruent, but only in the left, language-dominant side when congruent. We conclude that a conflicting incongruent sound, even when task-irrelevant, distracts more strongly than a congruent one, leading to greater capture of attention. This greater capture of attention in turn results in increased activity in the auditory cortex.
The superior colliculus (SC) has been shown to play a crucial role in the initiation and coordination of eye and head movements. Knowledge about the function of this structure is based mainly on single-unit recordings in animals, with relatively few neuroimaging studies having investigated eye-movement-related brain activity in humans.
The present study employed high-field (7 Tesla) functional magnetic resonance imaging (fMRI) to investigate SC responses during endogenously cued saccades in humans. In response to centrally presented instructional cues, subjects either performed saccades away from (centrifugal) or towards (centripetal) the center of straight gaze or maintained fixation at the center position. Compared to central fixation, the execution of saccades elicited hemodynamic activity within a network of cortical and subcortical areas that included the SC, lateral geniculate nucleus (LGN), occipital cortex, striatum, and the pulvinar.
Activity in the SC was enhanced contralateral to the direction of the saccade (i.e., greater activity in the right as compared to left SC during leftward saccades and vice versa) during both centrifugal and centripetal saccades, thereby demonstrating that the contralateral predominance for saccade execution that has been shown to exist in animals is also present in the human SC. In addition, centrifugal saccades elicited greater activity in the SC than did centripetal saccades, while also being accompanied by an enhanced deactivation within the prefrontal default-mode network. This pattern of brain activity might reflect the reduced processing effort required to move the eyes toward as compared to away from the center of straight gaze, a position that might serve as a spatial baseline in which the retinotopic and craniotopic reference frames are aligned.
Being able to effectively explore our visual world is of fundamental importance, and it has been suggested that the straight-ahead gaze (primary position) might play a special role in this context. We employed fMRI in humans to investigate how neural activity might be modulated for saccades relative to this putative default position. Using an endogenous cueing paradigm, saccade direction and orbital starting position were systematically manipulated, resulting in saccades toward primary position (centripetal) and away from primary position (centrifugal) that were matched in amplitude, directional predictability, as well as orbital starting position. In accord with earlier research, we found that fMRI activity in the superior colliculus (SC), as well as in the frontal eye fields and the intraparietal sulcus, was enhanced contralateral to saccade direction across all saccade conditions. Furthermore, the SC exhibited a relative activity decrease during re-centering relative to centrifugal saccades, a pattern that was paralleled by faster saccadic reaction times. In contrast, activity within the cortical eye fields was not significantly modulated during re-centering saccades as compared to other saccade types, suggesting that the re-centering bias is predominantly implemented at a subcortical rather than cortical processing stage. Such a modulation might reflect a special coding bias facilitating the return of gaze to a default position in the gaze space in which retinotopic and egocentric reference frames are aligned and from which the visual world can be effectively explored.
superior colliculus; fMRI; eye movement; re-centering bias; cortical eye fields
The temporal asynchrony between inputs to different sensory modalities has been shown to be a critical factor influencing the interaction between such inputs. We used scalp-recorded event-related potentials (ERPs) to investigate the effects of attention on the processing of audiovisual multisensory stimuli as the temporal asynchrony between the auditory and visual inputs varied across the audiovisual integration window (i.e., up to 125 ms). Randomized streams of unisensory auditory stimuli, unisensory visual stimuli, and audiovisual stimuli (consisting of the temporally proximal presentation of the visual and auditory stimulus components) were presented centrally while participants attended to either the auditory or the visual modality to detect occasional target stimuli in that modality. ERPs elicited by each of the contributing sensory modalities were extracted by signal processing techniques from the combined ERP waveforms elicited by the multisensory stimuli. This was done for each of the five different 50-ms subranges of stimulus onset asynchrony (SOA: e.g., V precedes A by 125–75 ms, by 75–25 ms, etc.). The extracted ERPs for the visual inputs of the multisensory stimuli were compared with each other and with the ERPs to the unisensory visual control stimuli, separately when attention was directed to the visual or to the auditory modality. The results showed that the attention effect on the right-hemisphere visual P1 was largest when auditory and visual stimuli were temporally aligned. In contrast, the N1 attention effect was smallest at this latency, suggesting that attention may play a role in the processing of the relative temporal alignment of the constituent parts of multisensory stimuli.
At longer latencies an occipital selection negativity for the attended versus unattended visual stimuli was also observed, but this effect did not vary as a function of SOA, suggesting that by that latency a stable representation of the auditory and visual stimulus components had been established.
Electrophysiology; EEG; ERP; Multisensory; SOA
The decoding of visually presented line segments into letters, and letters into words, is critical to fluent reading abilities. Here we investigate the temporal dynamics of visual orthographic processes, focusing specifically on right hemisphere contributions and interactions between the hemispheres involved in the implicit processing of visually presented words, consonants, false fonts, and symbolic strings. High-density EEG was recorded while participants detected infrequent, simple, perceptual targets (dot strings) embedded amongst a stream of character strings. Beginning at 130 ms, orthographic and non-orthographic stimuli were distinguished by a sequence of ERP effects over occipital recording sites. These early latency occipital effects were dominated by enhanced right-sided negative-polarity activation for non-orthographic stimuli that peaked at around 180 ms (the ROcc180). This right-sided effect was followed by bilateral positive occipital activity for false-fonts, but not symbol strings. Moreover, the size of components of this later positive occipital wave was inversely correlated with the right-sided ROcc180 wave, suggesting that subjects who had larger early right-sided activation for non-orthographic stimuli had less need for more extended bilateral (e.g., interhemispheric) processing of those stimuli shortly later. Additional early (130–150 ms) negative-polarity activity over left occipital cortex and longer-latency centrally distributed responses (>300 ms) were present, likely reflecting implicit activation of the previously reported ‘visual-word-form’ area and N400-related responses, respectively. Collectively, these results provide a close look at some relatively unexplored portions of the temporal flow of information processing in the brain related to the implicit processing of potentially linguistic information and provide valuable information about the interactions between hemispheres supporting visual orthographic processing.
word reading; ERPs; visual cortex; visual orthography
Varying degrees of plasticity in different subsystems of language have been demonstrated by studies showing that some aspects of language are processed similarly by native speakers and late-learners whereas other aspects are processed differently by the two groups. The study of speech segmentation provides a means by which the ability to process different types of linguistic information can be measured within the same task, because lexical, syntactic, and stress-pattern information can all indicate where one word ends and the next begins in continuous speech. In this study, native Japanese and native Spanish late-learners of English (as well as near-monolingual Japanese and Spanish speakers) were asked to determine whether specific sounds fell at the beginning or in the middle of words in English sentences. Similar to native English speakers, late-learners employed lexical information to perform the segmentation task. However, non-native speakers did not use syntactic information to the same extent as native English speakers. Although both groups of late-learners of English used stress pattern as a segmentation cue, the extent to which this cue was relied upon depended on the stress-pattern characteristics of their native language. These findings support the hypothesis that learning a second language later in life has differential effects on subsystems within language.
bilingual; speech segmentation; lexical; syntax; stress pattern
A dominant view in numerical cognition is that numerical comparisons operate on a notation-independent representation (Dehaene, 1992). Although previous human neurophysiological studies using scalp-recorded event-related potentials (ERPs) on the numerical distance effect have been interpreted as supporting this idea, differences in the electrophysiological correlates of the numerical distance effect in symbolic notations (e.g. Arabic numerals) and non-symbolic notations (e.g. a set of visually presented dots of a certain number) are not entirely consistent with this view.
Two experiments were conducted to resolve these discrepancies. In Experiment 1, participants performed a symbolic and a non-symbolic numerical comparison task ("smaller or larger than 5?") with numerical values 1–4 and 6–9 while ERPs were recorded. Consistent with a previous report (Temple & Posner, 1998), in the symbolic condition the amplitude of the P2p ERP component (210–250 ms post-stimulus) was larger for values near the standard than for values far from it, whereas this pattern was reversed in the non-symbolic condition. However, closer analysis indicated that the reversal in polarity was likely due to a confounding stimulus effect on the early sensory ERP components for small versus large numerical values in the non-symbolic condition. In Experiment 2, exclusively large numerosities (8–30) were used, thereby rendering sensory differences negligible, and with this control in place the numerical distance effect in the non-symbolic condition mirrored that in the symbolic condition of Experiment 1.
Collectively, the results support the claim of an abstract semantic processing stage for numerical comparisons that is independent of input notation.
Human perception of faces is widely believed to rely on automatic processing by a domain-specific, modular component of the visual system. Scalp-recorded event-related potential (ERP) recordings indicate that faces receive special stimulus processing at around 170 ms poststimulus onset, in that faces evoke an enhanced occipital negative wave, known as the N170, relative to the activity elicited by other visual objects. As predicted by modular accounts of face processing, this early face-specific N170 enhancement has been reported to be largely immune to the influence of endogenous processes such as task strategy or attention. However, most studies examining the influence of attention on face processing have focused on non-spatial attention, such as object-based attention, which tends to have longer-latency effects. In contrast, numerous studies have demonstrated that visual spatial attention can modulate the processing of visual stimuli as early as 80 ms poststimulus – substantially earlier than the N170. These temporal characteristics raise the question of whether this initial face-specific processing is immune to the influence of spatial attention. This question was addressed in a dual-visual-stream ERP study in which the influence of spatial attention on the face-specific N170 could be directly examined. As expected, early visual sensory responses to all stimuli presented in an attended location were larger than responses evoked by those same stimuli when presented in an unattended location. More importantly, a significant face-specific N170 effect was elicited by faces that appeared in an attended location, but not in an unattended one. In summary, early face-specific processing is not automatic, but rather, like the processing of other objects, depends strongly on endogenous factors such as the allocation of spatial attention.
Moreover, these findings underscore the extensive influence that top-down attention exerts over the processing of visual stimuli, including those of high natural salience.
N170; ERPs; FFA; STS; event-related potentials; visual attention
Recent brain imaging studies using functional magnetic resonance imaging (fMRI) have implicated a frontal-parietal network in the top-down control of attention. However, little is known about the timing and sequence of activations within this network. To investigate these timing questions, we used event-related electrical brain potentials (ERPs) and a specially designed visual-spatial attentional-cueing paradigm, applied as part of a multi-methodological approach that included a closely corresponding event-related fMRI study using an identical paradigm. In the first 400 ms post cue, attention-directing and control cues elicited similar general cue-processing activity, corresponding to the more lateral subregions of the frontal-parietal network identified with the fMRI. Following this, the attention-directing cues elicited a sustained negative-polarity brain wave that was absent for control cues. This activity could be linked to the more medial frontal–parietal subregions similarly identified in the fMRI as specifically involved in attentional orienting. Critically, both the scalp ERPs and the fMRI-seeded source modeling for this orienting-related activity indicated an earlier onset of the frontal versus the parietal contribution (∼400 versus ∼700 ms). This was then followed (∼800–900 ms) by pretarget biasing activity in region-specific visual-sensory occipital cortex. These results indicate an activation sequence of key components of the attentional-control brain network, providing insight into their functional roles. More specifically, these results suggest that voluntary attentional orienting is initiated by medial portions of frontal cortex, which then recruit medial parietal areas. Together, these areas then implement biasing of region-specific visual-sensory cortex to facilitate the processing of upcoming visual stimuli.
Attention is a fundamental cognitive function that allows us to focus neural resources on events or information in our environment that are most important or interesting to us at any given moment. Recent functional neuroimaging studies have indicated that a network of brain areas in frontal and parietal cortex is involved in directing our attention to specific locations in our visual field. However, little is known about the timing and sequence of activations within the various parts of this attentional control network, thus limiting our understanding of their functional roles. We extracted a more precise picture of the neural mechanisms of attentional control by combining two complementary methods of measuring cognitive brain activity: functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). fMRI offers information on a millimeter scale about the locations of brain activity, whereas EEG offers temporal information on a scale of milliseconds. Our results indicate that visual-spatial attentional control is initiated in frontal brain areas, joined shortly afterwards by parietal involvement. Together, these brain areas then prepare relevant areas in the visual cortex for performing enhanced processing of visual input in the attended region of space.
A combination of methods reveals the timing and sequence of neural activation within the frontal-parietal network and provides a more precise picture of the mechanisms controlling visual attention in humans.
In a previous functional magnetic resonance imaging (fMRI) study, a subdivision of the human auditory cortex into four distinct territories was achieved. One territory (T1a) exhibited functional specialization in a foreground-background decomposition task involving matching-to-sample monitoring of tone sequences. The present study determined more specifically whether memory-guided analysis of tone sequences is part of the T1a specialization. During the encoding periods, an arbitrary and unfamiliar four-tone sequence (melody) played by one instrument was presented. The melody–instrument combination was different in each period. During subsequent retrieval periods, learned and additional combinations were presented, and the task was to detect either the target melodies (Experiment I) or the target instruments (Experiment II). T1a showed larger activation during melody retrieval. The results generally suggest (1) that activation of T1a during retrieval is determined less by the sound material than by the executed task, and (2) more specifically, that memory-guided sequential analysis in T1a is dominant over recognition of characteristic complex sounds.
Inhibitory motor control is a core function of cognitive control. Evidence from diverse experimental approaches has linked this function to a mostly right-lateralized network of cortical and subcortical areas, wherein a signal from the frontal cortex to the basal ganglia is believed to trigger motor-response cancellation. Recently, however, it has been recognized that in the context of typical motor-control paradigms those processes related to actual response inhibition and those related to the attentional processing of the relevant stimuli are highly interrelated and thus difficult to distinguish. Here, we used fMRI and a modified Stop-signal task to specifically examine the role of perceptual and attentional processes triggered by the different stimuli in such tasks, thus seeking to further distinguish other cognitive processes that may precede or otherwise accompany the implementation of response inhibition. In order to establish which brain areas respond to the sensory stimulation differences introduced by rare Stop-stimuli, as well as to the associated attentional capture that these may trigger irrespective of their task-relevance, we compared brain activity evoked by Stop-trials to that evoked by Go-trials in task blocks where Stop-stimuli were to be ignored. In addition, region-of-interest analyses comparing the responses to these task-irrelevant Stop-trials with those to typical relevant Stop-trials identified separable activity profiles as a function of the task-relevance of the Stop-signal. While occipital areas were mostly blind to the task-relevance of Stop-stimuli, activity in temporo-parietal areas dissociated between task-irrelevant and task-relevant ones. Frontal areas, in turn, were activated mainly by task-relevant Stop-trials, presumably reflecting a combination of triggered top-down attentional influences and inhibitory motor-control processes.