The temporal asynchrony between inputs to different sensory modalities has been shown to be a critical factor influencing the interaction between such inputs. We used scalp-recorded event-related potentials (ERPs) to investigate the effects of attention on the processing of audiovisual multisensory stimuli as the temporal asynchrony between the auditory and visual inputs varied across the audiovisual integration window (i.e., up to 125 ms). Randomized streams of unisensory auditory stimuli, unisensory visual stimuli, and audiovisual stimuli (consisting of the temporally proximal presentation of the visual and auditory stimulus components) were presented centrally while participants attended to either the auditory or the visual modality to detect occasional target stimuli in that modality. ERPs elicited by each of the contributing sensory modalities were extracted by signal processing techniques from the combined ERP waveforms elicited by the multisensory stimuli. This was done for each of the five different 50-ms subranges of stimulus onset asynchrony (SOA: e.g., V precedes A by 125–75 ms, by 75–25 ms, etc.). The extracted ERPs for the visual inputs of the multisensory stimuli were compared with one another and with the ERPs to the unisensory visual control stimuli, separately when attention was directed to the visual or to the auditory modality. The results showed that the attention effect on the right-hemisphere visual P1 was largest when the auditory and visual stimuli were temporally aligned. In contrast, the N1 attention effect was smallest at this latency, suggesting that attention may play a role in the processing of the relative temporal alignment of the constituent parts of multisensory stimuli.
At longer latencies an occipital selection negativity for attended versus unattended visual stimuli was also observed, but this effect did not vary as a function of SOA, suggesting that by that latency a stable representation of the auditory and visual stimulus components had been established.
Electrophysiology; EEG; ERP; Multisensory; SOA
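The extraction of the modality-specific contributions from the combined multisensory waveform can be illustrated with an additive-model subtraction. The abstract does not specify its exact signal-processing technique, so the Python sketch below (synthetic data, assumed variable names) only illustrates the general idea: if the multisensory ERP is approximately the sum of the unisensory responses, subtracting the auditory ERP isolates the visual contribution.

```python
import numpy as np

# Illustrative additive-model subtraction with synthetic data; the sampling
# rate and Gaussian-shaped "ERPs" are assumptions for the demo only.
fs = 500                        # sampling rate (Hz), assumed
t = np.arange(0, 0.5, 1 / fs)   # 500-ms epoch

erp_a = 2.0 * np.exp(-((t - 0.10) ** 2) / 0.001)   # synthetic auditory ERP
erp_v = 1.5 * np.exp(-((t - 0.15) ** 2) / 0.002)   # synthetic visual ERP
erp_av = erp_a + erp_v                             # multisensory ERP, assuming additivity

# Subtracting the auditory response from the multisensory response
# recovers the visual contribution under the additive model.
extracted_v = erp_av - erp_a
```

In real data the subtraction is complicated by non-additive interaction terms and by overlap from adjacent stimuli, which is one reason the SOA subranges are analyzed separately.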
N400, an event-related brain potential (ERP) waveform elicited by meaningful stimuli, is normally reduced by stimulus repetition (N400 repetition priming) and by relatedness between the eliciting stimulus and preceding ones (relatedness priming). Schizophrenia patients' N400 relatedness priming deficits suggest impairment in using meaningful prime stimuli to facilitate processing of related concepts in semantic memory. To examine whether this deficiency arises from difficulty activating the prime concept per se, as indexed by reduced N400 repetition priming, or from impaired functional connections among concepts in semantic memory, as reflected by reduced relatedness priming but normal repetition priming, we recorded ERPs from 16 schizophrenia patients and 16 controls who viewed prime words each followed at a 300- or 750-ms stimulus-onset asynchrony (SOA) by an unrelated, related, or repeated target word, or a nonword, in a lexical-decision task. In both groups, N400s were largest (most negative) for unrelated, intermediate for related, and smallest for repeated targets. Schizophrenia patients exhibited subnormal N400 relatedness priming at the 300-ms SOA but normal repetition priming at both SOAs, suggesting that their impairment in using prime words to activate related concepts results from abnormal functional connections among concepts within semantic memory, rather than from an inability to activate the prime concept itself.
Cognition; language; semantics; event-related potentials; N400
The present study investigated the neural mechanisms that contribute to the detection of visual feature changes between stimulus displays by means of event-related lateralizations of the electroencephalogram (EEG). Participants were instructed to respond to a luminance change in either of two lateralized stimuli that could randomly occur alone or together with an irrelevant orientation change of the same or the contralateral stimulus. Task performance, in terms of response times and accuracy, was worse when relevant and irrelevant feature changes were presented contralateral to each other (contralateral distractor condition) than in the remaining stimulus conditions. The sensory response to the feature changes was reflected in a posterior contralateral positivity at around 100 ms after change presentation and a posterior contralateral negativity in the N1 time window (N1pc). The N2pc reflected a subsequent attentional bias in favor of the relevant luminance change. The continuation of the sustained posterior contralateral negativity (SPCN) following the N2pc covaried with response times within feature change conditions and revealed a posterior topography comparable to the earlier components associated with sensory and attentional mechanisms. This component might therefore reflect the re-processing of information based on sustained short-term memory representations in the visual system until a stable target percept is created that can serve as the perceptual basis for response selection and the initiation of goal-directed behavior.
attention; perception; short-term memory; N2pc; SPCN; re-entrant processing
Statistical regularities in the environment guide perceptual processing; however, some predictions are bound to be more important than others. In this electroencephalogram (EEG) study, we test how task relevance influences the way predictions are learned from the statistics of visual input, and exploited for behavior. We developed a novel task in which participants are simply instructed to respond to a designated target stimulus embedded in a serial stream of non-target stimuli. Presentation probabilities were manipulated such that a designated target cue stimulus predicted the target onset with 70% validity. We also included a corresponding control contingency: a pre-designated control cue predicted a specific non-target stimulus with 70% validity. Participants were not informed about these contingencies. This design allowed us to examine the neural response to task-relevant predictive (cue) and predicted stimuli (target), relative to task-irrelevant predictive (control cue) and predicted stimuli (control non-target). The behavioral results confirmed that participants learned and exploited task-relevant predictions even when not explicitly defined. The EEG results further showed that target-relevant predictions are coded more strongly than statistically equivalent regularities between non-target stimuli. There was a robust modulation of the response for predicted targets associated with learning, enhancing the response to cued stimuli just after 200 ms post-stimulus in central and posterior electrodes, but no corresponding effects for predicted non-target stimuli. These effects of target prediction were preceded by a sustained frontal negativity following presentation of the predictive cue stimulus. These results show that task relevance critically influences how the brain extracts predictive structure from the environment, and exploits these regularities for optimized behavior.
prediction; expectation; task-relevance; EEG; event-related potential
Previous studies have shown that emotion can have 2-fold effects on perception. At the object level, emotional stimuli benefit from a stimulus-specific boost in visual attention at the relative expense of competing stimuli. At the visual feature level, recent findings indicate that emotion may inhibit the processing of small visual details and facilitate the processing of coarse visual features. In the present study, we investigated whether emotion can boost the activation and inhibition of automatic motor responses that are generated prior to overt perception. To investigate this, we tested whether an emotional cue affects covert motor responses in a masked priming paradigm, in which participants responded to target arrows that were preceded by invisible congruent or incongruent prime arrows. In the standard paradigm, participants respond faster, and commit fewer errors, to the directionality of target arrows when these are preceded by congruent rather than incongruent masked prime arrows (positive congruency effect, PCE). However, as prime-target SOAs increase, this effect reverses (negative congruency effect, NCE). These findings have been explained as evidence for an initial activation and a subsequent inhibition of a partial response elicited by the masked prime arrow. Our results show that the presentation of fearful face cues, compared to neutral face cues, increased the size of both the PCE and the NCE, despite the fact that the primes were invisible. This is the first demonstration that emotion prepares an individual's visuomotor system for automatic activation and inhibition of motor responses in the absence of visual awareness.
emotion; masking; motor priming; fearful faces; activation; inhibition
In spite of the excellent temporal resolution of event-related EEG potentials (ERPs), the overlapping potentials evoked by masked and masking stimuli are hard to disentangle. However, when both masked and masking stimuli consist of pairs of relevant and irrelevant elements, one left and one right of fixation, with the side of the relevant element varying between pairs, the effects of masked and masking stimuli can be distinguished by means of the contralateral preponderance of the potentials evoked by the relevant elements, because the relevant elements may independently change sides in masked and masking stimuli. Based on a reanalysis of data from which only selected contralateral-ipsilateral effects had previously been published, the present contribution provides a more complete picture of the ERP effects in a masked-priming task. Indeed, effects evoked by masked primes and masking targets heavily overlapped in conventional ERPs and could be disentangled to a certain degree by contralateral-ipsilateral differences. Their major component, the N2pc, is interpreted as indicating preferential processing of stimuli matching the target template, a process that can be identified neither with conscious perception nor with shifts of spatial attention. The measurements showed that the triggering of response preparation by the masked stimuli did not depend on their discriminability, and that their priming effects on the processing of the following target stimuli were qualitatively different for stimulus identification and for response preparation. These results provide another piece of evidence for the independence of motor-related and perception-related effects of masked stimuli.
event-related potentials; masking; masked priming; N2pc; LRP; N2cc
Functional clusters of neurons in the monkey prefrontal and anterior cingulate cortex are involved in guiding attention to the most valuable objects in a scene.
Attentional control ensures that neuronal processes prioritize the most relevant stimulus in a given environment. Controlling which stimulus is attended thus originates from neurons encoding the relevance of stimuli, i.e., their expected value, together with neurons encoding contextual information about stimulus locations, features, and rules that guide the conditional allocation of attention. Here, we examined how these distinct processes are encoded and integrated in macaque prefrontal cortex (PFC) by mapping their functional topographies at the time of attentional stimulus selection. We found confined clusters of neurons in ventromedial PFC (vmPFC) that predominantly conveyed stimulus valuation information during attention shifts. These valuation signals were topographically largely separated from neurons predicting the stimulus location to which attention covertly shifted, which were evident across the complete medial-to-lateral extent of the PFC, encompassing anterior cingulate cortex (ACC) and lateral PFC (LPFC). LPFC responses showed particularly early-onset selectivity and primarily facilitated attention shifts to contralateral targets. Spatial selectivity within the ACC was delayed and heterogeneous, with similar proportions of facilitated and suppressed responses during contralateral attention shifts. The integration of spatial and valuation signals about attentional target stimuli was observed in a confined cluster of neurons at the intersection of vmPFC, ACC, and LPFC. These results suggest that valuation processes reflecting stimulus-specific outcome predictions are recruited during covert attentional control. Value predictions and the spatial identification of attentional targets were conveyed by largely separate neuronal populations, but were integrated locally at the intersection of three major prefrontal areas, which may constitute a functional hub within the larger attentional control network.
To navigate within an environment filled with sensory stimuli, the brain must selectively process only the most relevant sensory information. Identifying and shifting attention to the most relevant sensory stimulus requires integrating information about its sensory features as well as its relative value, that is, whether it is worth noticing. In this study, we describe groups of neurons in the monkey prefrontal cortex that convey signals relating to the value of a stimulus and its defining feature and location at the very moment when attention is shifted to the stimulus. We found that signals conveying information about value were clustered in a ventromedial prefrontal region, and were separated from sensory signals within the anterior cingulate cortex and the lateral prefrontal cortex. The integration of valuation and other “top-down” processes, however, was achieved by neurons clustered at the intersection of ventromedial, anterior cingulate, and lateral prefrontal cortex. We conclude that valuation processes are recruited when attention is shifted, independent of any overt behavior. Moreover, our analysis suggests that valuation processes can bias the initiation of attention shifts, as well as ensure sustained attentional focusing.
Humans perceive a harmonic series as a single auditory object with a pitch equivalent to the fundamental frequency (F0) of the series. When harmonics are presented to alternate ears, the repetition rate of the waveform at each ear doubles. If the harmonics are resolved, the pitch perceived is still equivalent to F0, suggesting that the stimulus is binaurally integrated before pitch is processed. However, unresolved harmonics give rise to the doubling of pitch that would be expected from monaural processing (Bernstein and Oxenham, J. Acoust. Soc. Am., 113:3323–3334, 2003). We used similar stimuli to record responses of multi-unit clusters in the central nucleus of the inferior colliculus (IC) of anesthetized guinea pigs (urethane supplemented by fentanyl/fluanisone) to determine the nature of the representation of harmonic stimuli and the extent of any binaural integration. We examined both the temporal responses and the rate tuning of IC clusters and found no evidence for binaural integration. Stimuli comprised all harmonics below 10 kHz, with F0s from 50 to 400 Hz in half-octave steps. In diotic conditions, all harmonics were presented to both ears. In dichotic conditions, odd harmonics were presented to one ear and even harmonics to the other. Neural characteristic frequencies (CFs, n = 85) ranged from 0.2 to 14.7 kHz; 29 clusters had CFs below 1 kHz. The majority of clusters responded predominantly to the contralateral ear, with the dominance of the contralateral ear increasing with CF. With diotic stimuli, over half of the clusters (58%) had peaked firing rate versus F0 functions. The most common peak F0 was 141 Hz. Almost all (98%) clusters phase locked diotically to an F0 of 50 Hz, and approximately 40% of clusters still phase locked significantly (Rayleigh coefficient > 13.8) at the highest F0 tested (400 Hz). These results are consistent with previous reports of responses to amplitude-modulated stimuli.
Clusters phase locked significantly at a frequency equal to F0 for contralateral and diotic stimuli but at 2F0 for dichotic stimuli. We interpret these data as responses following the envelope periodicity in monaural channels rather than as a binaurally integrated representation.
pitch; unresolved harmonics; binaural; integration; alternating phase
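The phase-locking criterion quoted above (Rayleigh coefficient > 13.8) is the Rayleigh statistic 2nR², where n is the spike count and R is the length of the mean resultant vector of spike phases relative to the F0 cycle; a value above 13.8 corresponds to roughly p < 0.001 under the Rayleigh test. The following Python sketch (function name and toy spike trains are illustrative, not from the paper) shows how the statistic is computed from spike times:

```python
import numpy as np

def rayleigh_statistic(spike_times, f0):
    """Rayleigh statistic 2nR^2 for phase locking of spikes to frequency f0 (Hz).

    Each spike time is mapped to a phase within the F0 cycle; R is the
    length of the mean resultant vector of those phases (vector strength).
    """
    phases = 2 * np.pi * np.mod(np.asarray(spike_times) * f0, 1.0)
    n = len(phases)
    r = np.abs(np.mean(np.exp(1j * phases)))   # vector strength, 0..1
    return 2 * n * r ** 2

# Perfectly locked spikes (one spike per 50-Hz cycle, fixed phase) give a
# statistic of ~2n, far above the 13.8 criterion used in the abstract.
locked = np.arange(100) / 50.0 + 0.002
print(rayleigh_statistic(locked, 50.0))   # ~200 for n = 100, R = 1
```

Spikes spread uniformly over the cycle give values near 2 on average, so the 13.8 cutoff cleanly separates locked from unlocked responses.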
Prefrontal neurons code many kinds of behaviourally relevant visual information. In behaving monkeys, we used a cued target detection task to address coding of objects, behavioural categories and spatial locations, examining the temporal evolution of neural activity across dorsal and ventral regions of the lateral prefrontal cortex (encompassing parts of areas 9, 46, 45A and 8A), and across the two cerebral hemispheres. Within each hemisphere there was little evidence for regional specialisation, with neurons in dorsal and ventral regions showing closely similar patterns of selectivity for objects, categories and locations. For a stimulus in either visual field, however, there was a strong and temporally specific difference in response in the two cerebral hemispheres. In the first part of the visual response (50–250 ms from stimulus onset), processing in each hemisphere was largely restricted to contralateral stimuli, with strong responses to such stimuli, and selectivity for both object and category. Later (300–500 ms), responses to ipsilateral stimuli also appeared, many cells now responding more strongly to ipsilateral than to contralateral stimuli, and many showing selectivity for category. Activity on error trials showed that late activity in both hemispheres reflected the animal's final decision. As information is processed towards a behavioural decision, its encoding spreads to encompass large, bilateral regions of prefrontal cortex.
behaving monkey; dynamic coding; frontal specialisation; prefrontal cortex
Despite the lack of ipsilateral receptive fields (RFs) for neurons in the hand representation of area 3b of primary somatosensory cortex, interhemispheric interactions have been reported to varying degrees. We investigated spatiotemporal properties of these interactions to determine: response types; timing between stimuli to evoke the strongest bimanual interactions; topographical distribution of effects; and their dependence on similarity of stimulus locations on the two hands. We analyzed response magnitudes and latencies of single neurons and multi-neuron clusters recorded from 100-electrode arrays implanted in one hemisphere of each of two anesthetized owl monkeys. Skin indentations were delivered to the two hands simultaneously and asynchronously at mirror locations (matched sites on each hand) and non-mirror locations. Since multiple neurons were recorded simultaneously, stimuli on the contralateral hand could be within or outside of the classical RFs of any given neuron. For most neurons, stimulation on the ipsilateral hand suppressed responses to stimuli on the contralateral hand. Maximum suppression occurred when the ipsilateral stimulus was presented 100 ms before the contralateral stimulus onset (P < 0.0005). The longest stimulus onset delay tested (500 ms) allowed contralateral responses to recover to control levels (P = 0.428). Stimulation on mirror digits did not differ from stimulation on non-mirror locations (P = 1.000). These results indicate that interhemispheric interactions are common in area 3b, somewhat topographically diffuse, and maximal when the suppressing ipsilateral stimulus precedes the contralateral stimulus. Our findings point to a neurophysiological basis for “interference” effects found in human psychophysical studies of bimanual stimulation.
area 3b; bimanual; interhemispheric; multielectrode; primate; Utah array
When two targets are presented in close temporal proximity amongst a rapid serial visual stream of distractors, a period of disrupted attention and attenuated awareness lasting 200–500 ms follows identification of the first target (T1). This phenomenon is known as the “attentional blink” (AB) and is generally attributed to a failure to consolidate information in visual short-term memory due to depleted or disrupted attentional resources. Previous research has shown that items presented during the AB that fail to reach conscious awareness are still processed to relatively high levels, including the level of meaning. For example, missed word stimuli have been shown to prime later targets that are closely associated words. Although these findings have been interpreted as evidence for semantic processing during the AB, closely associated words (e.g., day-night) may also rely on specific, well-worn, lexical associative links which enhance attention to the relevant target.
We used a measure of semantic distance to create prime-target pairs that are conceptually close, but have low word associations (e.g., wagon and van) and investigated priming from a distractor stimulus presented during the AB to a subsequent target (T2). The stimuli were words (concrete nouns) in Experiment 1 and the corresponding pictures of objects in Experiment 2. In both experiments, report of T2 was facilitated when this item was preceded by a semantically-related distractor.
This study is the first to show conclusively that conceptual information is extracted from distractor stimuli presented during a period of attenuated awareness and that this information spreads to neighbouring concepts within a semantic network.
According to the traditional two-stage model of face processing, the face-specific N170 event-related potential (ERP) is linked to structural encoding of face stimuli, whereas later ERP components are thought to reflect processing of facial affect. This view has recently been challenged by reports of N170 modulations by emotional facial expression. This study examines the time-course and topography of the influence of emotional expression on the N170 response to faces.
Dense-array ERPs were recorded in response to a set (n = 16) of fear and neutral faces. Stimuli were normalized on dimensions of shape, size and luminance contrast distribution. To minimize task effects related to facial or emotional processing, facial stimuli were irrelevant to a primary task of learning associative pairings between a subsequently presented visual character and a spoken word.
The N170 to faces showed a strong modulation by emotional facial expression. A split-half analysis demonstrated that this effect was significant both early and late in the experiment, and was therefore not driven solely by the initial exposures to these stimuli, indicating robustness against habituation. The emotional modulation of the N170 to faces did not interact significantly with the gender of the face stimulus or with the hemisphere of the recording sites. The fear-minus-neutral difference topography was itself highly similar to that of the face N170.
The face N170 response can be influenced by emotional expressions contained within facial stimuli. The topography of this effect is consistent with the notion that fearful expressions exaggerate the N170 response itself. This finding stands in contrast to previous models suggesting that N170 processes linked to the structural analysis of faces precede the analysis of emotional expression, and may instead reflect early top-down modulation from neural systems involved in rapid emotional processing.
The relationship between the firing of single cells and local field potentials (LFPs) has received increasing attention, with studies in animals [1–11] and humans [12–14]. Recordings in the human medial temporal lobe (MTL) have demonstrated the existence of neurons with selective and invariant responses, with a relatively late but precise response onset around 300 ms after stimulus presentation [16–18] and firing only upon conscious recognition of the stimulus. This represents a much later onset than expected from direct projections from inferotemporal cortex [16, 18]. The neural mechanisms underlying this onset remain unclear. To address this issue, we performed a joint analysis of single-cell and LFP responses during a visual recognition task. Single-neuron responses were preceded by a global LFP deflection in the theta range. In addition, there was a local and stimulus-specific increase in the single-trial gamma power. These LFP responses correlated with conscious recognition. The timing of the neurons’ firing was phase locked to these LFP responses. We propose that whereas the gamma phase locking reflects the activation of local networks encoding particular recognized stimuli, the theta phase locking reflects a global activation that provides a temporal window for processing consciously perceived stimuli in the MTL.
• Global theta LFP increases immediately precede MTL single-cell responses
• Gamma power reflects activations of local networks encoding specific stimuli
• The timing of the neurons’ firing is phase locked to LFP responses
• LFP responses give a temporal window for processing consciously perceived stimuli
Rey et al. show that, in human medial temporal lobe (MTL), single-cell responses triggered by consciously perceived stimuli are locked to global theta and local gamma LFP responses; the latter reflects local activations, but the former shortly precedes the spike responses and may provide a window for stimulus processing in the MTL.
When a flashed stimulus is followed by a backward mask, subjects fail to perceive it unless the target-mask interval exceeds a threshold duration of about 50 ms. Models of conscious access postulate that this threshold is associated with the time needed to establish sustained activity in recurrent cortical loops, but the brain areas involved and their timing remain debated. We used high-density recordings of event-related potentials (ERPs) and cortical source reconstruction to assess the time course of human brain activity evoked by masked stimuli and to determine neural events during which brain activity correlates with conscious reports. Target-mask stimulus onset asynchrony (SOA) was varied in small steps, allowing us to ask which ERP events show the characteristic nonlinear dependence with SOA seen in subjective and objective reports. The results separate distinct stages in mask-target interactions, indicating that a considerable amount of subliminal processing can occur early on in the occipito-temporal pathway (<250 ms) and pointing to a late (>270 ms) and highly distributed fronto-parieto-temporal activation as a correlate of conscious reportability.
Understanding the neural mechanisms that distinguish between conscious and nonconscious processes is a crucial issue in cognitive neuroscience. In this study, we focused on the transition that causes a visual stimulus to cross the threshold to consciousness, i.e., visibility. We used a backward masking paradigm in which the visibility of a briefly presented stimulus (the “target”) is reduced by a second stimulus (the “mask”) presented shortly after this first stimulus. (Human participants report the visibility of the target.) When the delay between target and mask stimuli exceeds a threshold value, the masked stimulus becomes visible. Below this threshold, it remains nonvisible. During the task, we recorded electric brain activity from the scalp and reconstructed the cortical sources corresponding to this activity. Conscious perception of masked stimuli corresponded to activity in a broadly distributed fronto-parieto-temporal network, occurring from about 300 ms after stimulus presentation. We conclude that this late stage, which could be clearly separated from earlier neural events associated with subliminal processing and mask-target interactions, can be regarded as a marker of consciousness.
The sequence of neural events associated with the unfolding of conscious awareness is revealed by comparing electrical brain responses to visual stimuli above and below the behavioral threshold for perception.
Neuroimaging work on multisensory conflict suggests that the relevant modality receives enhanced processing in the face of incongruency. However, the degree of stimulus processing in the irrelevant modality and the temporal cascade of the attentional modulations in either the relevant or irrelevant modalities are unknown. Here, we employed an audiovisual conflict paradigm with a sensory probe in the task-irrelevant modality (vision) to gauge the attentional allocation to that modality. Event-related potentials (ERPs) were recorded as subjects attended to and discriminated spoken auditory letters while ignoring simultaneous bilateral visual letter stimuli that were either fully congruent, fully incongruent, or partially incongruent (one side incongruent, one congruent) with the auditory stimulation. Half of the audiovisual letter stimuli were followed 500–700 ms later by a bilateral visual probe stimulus. As expected, ERPs to the audiovisual stimuli showed an incongruency ERP effect (fully incongruent versus fully congruent) of an enhanced, centrally distributed, negative-polarity wave starting at ~250 ms. More critically here, the sensory ERP components to the visual probes were larger when they followed fully incongruent versus fully congruent multisensory stimuli, with these enhancements greatest on fully incongruent trials with the slowest response times. In addition, on the slowest-response partially incongruent trials, the P2 sensory component to the visual probes was larger contralateral to the preceding incongruent visual stimulus. These data suggest that, in response to conflicting multisensory stimulus input, the initial cognitive effect is a capture of attention by the incongruent irrelevant-modality input, pulling neural processing resources toward that modality, resulting in rapid enhancement, rather than rapid suppression, of that input.
This study investigated neural processing interactions during Stroop interference by varying the temporal separation of relevant and irrelevant features of congruent, neutral, and incongruent colored-bar/color-word stimulus components. High-density event-related potentials (ERPs) and behavioral performance were measured as participants reported the bar color as quickly as possible, while ignoring the color words. The task-irrelevant color words could appear at 1 of 5 stimulus onset asynchronies (SOAs) relative to the task-relevant bar-color occurrence: −200 or −100 ms before, +100 or +200 ms after, or simultaneously. Incongruent relative to congruent presentations elicited slower reaction times and higher error rates (with neutral in between), and ERP difference waves containing both an early, negative-polarity, central-parietal deflection, and a later, more left-sided, positive-polarity component. These congruency-related differences interacted with SOA, showing the greatest behavioral and electrophysiological effects when irrelevant stimulus information preceded the task-relevant target and reduced effects when the irrelevant information followed the relevant target. We interpret these data as reflecting 2 separate processes: 1) a ‘priming influence’ that enhances the magnitude of conflict-related facilitation and conflict-related interference when a task-relevant target is preceded by an irrelevant distractor; and 2) a reduced ‘backward influence’ of stimulus conflict when the irrelevant distractor information follows the task-relevant target.
conflict processing; event-related potentials (ERPs); incongruency; stimulus onset asynchrony (SOA); Stroop task
Debate currently exists regarding the interplay between multisensory processes and bottom-up and top-down influences. However, few studies have looked at neural responses to newly paired audiovisual stimuli that differ in their prescribed relevance. We found that, for such newly associated audiovisual stimuli, optimal facilitation of motor actions occurred only when both components of the audiovisual stimuli were targets. Relevant auditory stimuli significantly increased the amplitudes of the event-related potentials at the occipital pole during the first 100 ms post-stimulus onset, though this early integration was not predictive of multisensory facilitation. Activity related to multisensory behavioral facilitation was observed approximately 166 ms post-stimulus, at left central and occipital sites. Furthermore, optimal multisensory facilitation was associated with a latency shift of induced oscillations in the beta range (14–30 Hz) over right-hemisphere parietal scalp regions. These findings demonstrate the importance of stimulus relevance to multisensory processing by providing the first evidence that the neural processes underlying multisensory integration are modulated by the relevance of the stimuli being combined. We also provide evidence that such facilitation may be mediated by changes in neural synchronization in occipital and centro-parietal neural populations at early and late stages of neural processing that coincide with stimulus selection and with the preparation and initiation of motor action.
Spatial visual attention modulates the first negative-going deflection in the human averaged event-related potential (ERP) in response to visual target and non-target stimuli (the N1 complex). Here we demonstrate a decomposition of N1 into functionally independent subcomponents with distinct relations to task and stimulus conditions. ERPs were collected from 20 subjects in response to visual target and non-target stimuli presented at five attended and non-attended screen locations. Independent component analysis, a new method for blind source separation, was trained simultaneously on 500 ms grand average responses from all 25 stimulus-attention conditions and decomposed the non-target N1 complexes into five spatially fixed, temporally independent and physiologically plausible components. Activity of an early, laterally symmetrical component pair (N1aR and N1aL) was evoked by the left and right visual field stimuli, respectively. Component N1aR peaked ca. 9 ms earlier than N1aL. Central stimuli evoked both components with the same peak latency difference, producing a bilateral scalp distribution. The amplitudes of these components were not reliably augmented by spatial attention. Stimuli in the right visual field evoked activity in a spatio-temporally overlapping bilateral component (N1b) that peaked at ca. 180 ms and was strongly enhanced by attention. Stimuli presented at unattended locations evoked a fourth component (P2a) peaking near 240 ms. A fifth component (P3f) was evoked only by targets presented in either visual field. The distinct response patterns of these components across the array of stimulus and attention conditions suggest that they reflect activity in functionally independent brain systems involved in processing attended and unattended visuospatial events.
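The blind source separation described above recovers spatially fixed, temporally independent component time courses from linearly mixed scalp recordings. A minimal sketch of the idea, using a FastICA-style algorithm on simulated two-channel mixtures (the signals, mixing matrix, and iteration count here are hypothetical illustrations, not the authors' actual Infomax ICA pipeline or ERP data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two temporally independent "component" time courses (stand-ins for
# ERP subcomponents), mixed linearly as scalp electrodes mix sources.
t = np.linspace(0.0, 0.5, 500)              # a 500-ms epoch
s1 = np.sin(2 * np.pi * 10 * t)             # 10 Hz oscillation
s2 = np.sign(np.sin(2 * np.pi * 3 * t))     # square wave
S = np.vstack([s1, s2])                     # sources x time
A = np.array([[1.0, 0.5],                   # hypothetical mixing matrix
              [0.4, 1.0]])                  # (scalp projection weights)
X = A @ S                                   # simulated "electrode" data

# Whiten the mixtures (zero mean, identity covariance).
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / X.shape[1])
Xw = np.diag(d ** -0.5) @ E.T @ X

# FastICA with a tanh nonlinearity and symmetric orthogonalization:
# each iteration applies w <- E[x g(w'x)] - E[g'(w'x)] w per row.
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(W @ Xw)
    Wn = G @ Xw.T / Xw.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
    u, _, vt = np.linalg.svd(Wn)
    W = u @ vt                              # keep rows orthonormal

S_hat = W @ Xw   # recovered component time courses (up to sign/order)
```

The recovered rows of `S_hat` match the original sources up to permutation and sign, which is why ICA components must be identified post hoc by their scalp maps and latencies, as done for N1aR/N1aL, N1b, P2a, and P3f above.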
Subjects with Attention-Deficit Hyperactivity Disorder (ADHD) are overly distractible by stimuli outside the intended focus of attention. This control deficit could be due primarily to reduced attentional capacities or, e.g., to overshooting orienting to unexpected events. Here, we aimed at identifying disease-related abnormalities of novelty processing and, therefore, studied event-related potentials (ERP) to respective stimuli in adult ADHD patients compared to healthy subjects.
Fifteen unmedicated subjects with ADHD and fifteen matched controls engaged in a visual oddball task (OT) under simultaneous EEG recordings. A target stimulus, upon which a motor response was required, and non-target stimuli, which did not demand a specific reaction, were presented in random order. Target and most non-target stimuli were presented repeatedly, but some non-target stimuli occurred only once (‘novels’). These unique stimuli were either ‘relative novels’ with which a meaning could be associated, or ‘complete novels’, if no association was available.
In frontal recordings, a positive component with a peak latency of approximately 400 ms became maximal after novels. In healthy subjects, this novelty-P3 (or ‘orienting response’) was of higher magnitude after complete than after relative novels, in contrast to the patients, who showed an undifferentiated, uniformly high frontal responsivity. In addition, ADHD patients tended to show smaller centro-parietal P3 responses after target signals and, on a behavioural level, responded more slowly than controls.
The results demonstrate abnormal novelty processing in adult subjects with ADHD. In controls, the ERP pattern indicates that allocation of meaning modulates the processing of new stimuli. In ADHD, however, such a modulation was not prevalent. Instead, even familiar stimuli that were new only with respect to their context were treated as complete novels. We propose that disturbed semantic processing of new stimuli represents a mechanism for excessive orienting to commonly negligible stimuli in ADHD.
Visual stimuli can be classified so rapidly that their analysis may be based on a single sweep of feedforward processing through the visuomotor system. Behavioral criteria for feedforward processing can be evaluated in response priming tasks where speeded pointing or keypress responses are performed toward target stimuli which are preceded by prime stimuli. We apply this method to several classes of complex stimuli. (1) When participants classify natural images into animals or non-animals, the time course of their pointing responses indicates that prime and target signals remain strictly sequential throughout all processing stages, meeting stringent behavioral criteria for feedforward processing (rapid-chase criteria). (2) Such priming effects are boosted by selective visual attention for positions, shapes, and colors, in a way consistent with bottom-up enhancement of visuomotor processing, even when primes cannot be consciously identified. (3) Speeded processing of phobic images is observed in participants specifically fearful of spiders or snakes, suggesting enhancement of feedforward processing by long-term perceptual learning. (4) When the perceived brightness of primes in complex displays is altered by means of illumination or transparency illusions, priming effects in speeded keypress responses can systematically contradict subjective brightness judgments, such that one prime appears brighter than the other but activates motor responses as if it was darker. We propose that response priming captures the output of the first feedforward pass of visual signals through the visuomotor system, and that this output lacks some characteristic features of more elaborate, recurrent processing. This way, visuomotor measures may become dissociated from several aspects of conscious vision. We argue that “fast” visuomotor measures predominantly driven by feedforward processing should supplement “slow” psychophysical measures predominantly based on visual awareness.
visual perception; visual awareness; response priming; rapid-chase; feature-based attention; objects; phobias; lightness
This article aims to investigate whether auditory stimuli in the horizontal plane, particularly those originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in front of or behind the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP response over the right temporal area and right occipital area at approximately 160–200 milliseconds; second, auditory stimuli from the back produced a response over the parietal and occipital areas at approximately 360–400 milliseconds. Our results confirmed that audiovisual integration was elicited even when auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side.
The ability to react rapidly and efficiently to adverse stimuli is crucial for survival. Neuroscience and behavioral studies have converged to show that visual information associated with aversive content is processed quickly and accurately and is associated with rapid amplification of the neural responses. In particular, unpleasant visual information has repeatedly been shown to evoke increased cortical activity during early visual processing between 60 and 120 milliseconds following the onset of a stimulus. However, the nature of these early responses is not well understood. Using neutral versus unpleasant colored pictures, the current report examines the time course of short-term changes in the human visual cortex when a subject is repeatedly exposed to simple grating stimuli in a classical conditioning paradigm. We analyzed changes in amplitude and synchrony of large-scale oscillatory activity across two days of testing, which included baseline measurements, 2 conditioning sessions, and a final extinction session. We found a gradual increase in amplitude and synchrony of very early cortical oscillations in the 20–35 Hz range across conditioning sessions, specifically for conditioned stimuli predicting aversive visual events. This increase for conditioned stimuli affected stimulus-locked cortical oscillations at a latency of around 60–90 ms and disappeared during extinction. Our findings suggest that reorganization of neural connectivity on the level of the visual cortex acts to optimize early perception of specific features indicative of emotional relevance.
Electroencephalography; Classical Conditioning; Emotion; Perception; Large-Scale Brain Oscillations; Adaptation
The impact of task relevance on event-related potential amplitudes of early visual processing was previously demonstrated. Study designs, however, differ greatly, preventing simultaneous investigation of how both the degree of distraction and task relevance influence processing variations. In our study, we combined different features of previous tasks. We used a modified 1-back task in which task relevant and task irrelevant stimuli were alternately presented. The task irrelevant stimuli could be from the same or from a different category as the task relevant stimuli, thereby producing high and low distracting task irrelevant stimuli. In addition, the paradigm comprised a passive viewing condition. Thus, our paradigm enabled us to compare the processing of task relevant stimuli, task irrelevant stimuli with differing degrees of distraction, and passively viewed stimuli. EEG data from twenty participants were collected and mean P100 and N170 amplitudes were analyzed. Furthermore, a potential connection between stimulus processing and symptoms of attention deficit hyperactivity disorder (ADHD) was investigated.
Our results show a modulation of peak N170 amplitudes by task relevance. N170 amplitudes to task relevant stimuli were significantly higher than to high distracting task irrelevant or passively viewed stimuli. In addition, amplitudes to low distracting task irrelevant stimuli were significantly higher than to high distracting stimuli. N170 amplitudes to passively viewed stimuli were not significantly different from either kind of task irrelevant stimuli. Participants with more symptoms of hyperactivity and impulsivity showed decreased N170 amplitudes across all task conditions. On a behavioral level, lower N170 enhancement efficiency was significantly correlated with false alarm responses.
Our results point to a processing enhancement of task relevant stimuli. Unlike P100 amplitudes, N170 amplitudes were strongly influenced by enhancement and enhancement efficiency seemed to have direct behavioral consequences. These findings have potential implications for models of clinical disorders affecting selective attention, especially ADHD.
Selective attention; Working memory; Cognitive control; P100; N170; ADHD
The visual half-field procedure was used to examine hemispheric asymmetries in meaning selection. Event-related potentials were recorded as participants decided if a lateralized ambiguous or unambiguous prime was related in meaning to a centrally-presented target. Prime-target pairs were preceded by a related or unrelated centrally-presented context word. To separate the effects of meaning frequency and associative strength, unambiguous words were paired with concordant weakly-related context words and strongly-related targets (e.g., taste-sweet-candy) that were similar in associative strength to discordant subordinate-related context words and dominant-related targets (e.g., river-bank-deposit) in the ambiguous condition. Context words and targets were reversed in a second experiment. In an unrelated (neutral) context, N400 responses were more positive than baseline (facilitated) in all ambiguous conditions except when subordinate targets were presented on left visual field-right hemisphere (LVF-RH) trials. Thus, in the absence of biasing context information, the hemispheres seem to be differentially affected by meaning frequency, with the left maintaining multiple meanings and the right selecting the dominant meaning. In the presence of discordant context information, N400 facilitation was absent in both visual fields, indicating that the contextually-consistent meaning of the ambiguous word had been selected. In contrast, N400 facilitation occurred in all of the unambiguous conditions; however, the left hemisphere (LH) showed less facilitation for the weakly-related target when a strongly-related context was presented. These findings indicate that both hemispheres use context to guide meaning selection, but that the LH is more likely to focus activation on a single, contextually-relevant sense.
Lexical Ambiguity; Cerebral Hemispheres; Context Effects; ERP; N400; LPC
Our ability to focus attention on task-relevant stimuli and ignore irrelevant distractions is reflected by differential enhancement and suppression of neural activity in sensory cortices. Previous research has shown that older adults exhibit a deficit in suppressing task-irrelevant information, the magnitude of which is associated with a decline in working memory performance. However, it remains unclear if a failure to suppress reflects an inability of older adults to rapidly assess the relevance of information upon stimulus presentation when they are not aware of the relevance beforehand. To address this, we recorded the electroencephalogram (EEG) in healthy older participants (aged 60–80 years) while they performed two different versions of a selective face/scene working memory task, both with and without prior knowledge as to when relevant and irrelevant stimuli would appear. Each trial contained two faces and two scenes presented sequentially, followed by a nine-second delay and a probe stimulus. Participants were given the following instructions: remember faces (ignore scenes), remember scenes (ignore faces), remember the xth and yth stimuli (where x and y could be 1st, 2nd, 3rd or 4th), or passively view all stimuli. Working memory performance remained consistent regardless of task instructions. Enhanced neural activity was observed at posterior electrodes to attended stimuli, while neural responses reflecting the suppression of irrelevant stimuli were absent for both tasks. The lack of significant suppression at early stages of visual processing was revealed by P1 amplitude and N1 latency modulation indices. These results reveal that prior knowledge of stimulus relevance does not modify early neural processing during stimulus encoding and does not improve working memory performance in older adults.
These results suggest that the inability to suppress irrelevant information early in the visual processing stream by older adults is related to mechanisms specific to top-down suppression.
suppression; EEG; aging; working memory; selective attention