In spite of the excellent temporal resolution of event-related EEG potentials
(ERPs), the overlapping potentials evoked by masked and masking stimuli are hard
to disentangle. However, when both masked and masking stimuli consist of pairs
of relevant and irrelevant stimuli, one to the left and one to the right of fixation, with
the side of the relevant element varying between pairs, effects of masked and
masking stimuli can be distinguished by means of the contralateral preponderance
of the potentials evoked by the relevant elements, because the relevant elements
may independently change sides in masked and masking stimuli. Based on a
reanalysis of data from which only selected contralateral-ipsilateral effects
had been previously published, the present contribution will provide a more
complete picture of the ERP effects in a masked-priming task. Indeed, effects
evoked by masked primes and masking targets heavily overlapped in conventional
ERPs and could be disentangled to a certain degree by contralateral-ipsilateral
differences. Their major component, the N2pc, is interpreted as indicating
preferential processing of stimuli matching the target template, a process that can be identified neither with conscious perception nor with shifts of spatial attention. The measurements showed that the triggering of response preparation
by the masked stimuli did not depend on their discriminability, and their
priming effects on the processing of the following target stimuli were
qualitatively different for stimulus identification and for response
preparation. These results provide another piece of evidence for the
independence of motor-related and perception-related effects of masked stimuli.
event-related potentials; masking; masked priming; N2pc; LRP; N2cc
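The contralateral-minus-ipsilateral difference logic described in this abstract can be sketched in a few lines. This is an illustrative toy only: the channel names (PO7/PO8), trial counts, and random data are assumptions standing in for real epoched recordings.

```python
import numpy as np

# Toy sketch of a contralateral-minus-ipsilateral difference wave, the measure
# used to isolate lateralized components such as the N2pc. Channel names
# (PO7/PO8) and the random data are hypothetical.
rng = np.random.default_rng(0)
n_trials, n_times = 40, 200
po7 = rng.normal(size=(n_trials, n_times))   # left-hemisphere channel
po8 = rng.normal(size=(n_trials, n_times))   # right-hemisphere channel
relevant_side = rng.choice(["left", "right"], size=n_trials)

is_left = relevant_side == "left"

# For left-side relevant elements the contralateral channel is PO8;
# for right-side relevant elements it is PO7.
contra = np.concatenate([po8[is_left], po7[~is_left]]).mean(axis=0)
ipsi = np.concatenate([po7[is_left], po8[~is_left]]).mean(axis=0)
diff_wave = contra - ipsi   # an N2pc would appear as a negativity ~200-300 ms
```

Because the relevant elements change sides independently for masked and masking stimuli, separate difference waves can be formed for each, which is what lets the overlapping responses be pulled apart.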
The present study used event-related potentials (ERPs) to examine the time course of orthographic and phonological priming in the masked priming paradigm. Participants monitored visual target words for occasional animal names, and ERPs to nonanimal critical items were recorded. These critical items were preceded by different types of primes: Orthographic priming was examined using transposed-letter (TL) primes (e.g., barin-BRAIN) and their controls (e.g., bosin-BRAIN); phonological priming was examined using pseudohomophone primes (e.g., brane-BRAIN) and their controls (e.g., brant-BRAIN). Both manipulations modulated the N250 ERP component, which is hypothesized to reflect sublexical processing during visual word recognition. Orthographic (TL) priming and phonological (pseudohomophone) priming were found to have distinct topographical distributions and different timing, with orthographic effects arising earlier than phonological effects.
It is known that neural responses become less dependent on stimulus size and location as one ascends the visual pathway. This study aimed to use this property to find evidence of neural feedback in visual evoked potentials (VEPs). High-density VEPs evoked by a contrast-reversing checkerboard were collected from 15 normal observers using a 128-channel EEG system. The surface Laplacian method was used to calculate skull-scalp currents corresponding to the measured scalp potentials. This allowed us to identify several distinct foci of skull-scalp currents and to analyse their individual time-courses. Response nonlinearity as a function of stimulus size increased markedly from the occipital to the temporal loci. Similarly, the nonlinearity of reactivations (late evoked response peaks) over the occipital, lateral-occipital, and frontal scalp regions increased with peak latency. Response laterality (contralateral vs. ipsilateral) was analysed in the lateral-occipital and temporal loci. Early lateral-occipital responses were strongly contralateral, but the response laterality decreased and then disappeared for later peaks. Responses in temporal loci did not differ significantly between contralateral and ipsilateral stimulation. Overall, the results suggest that feedback from higher-tier visual areas, e.g., those in temporal cortices, may contribute significantly to reactivations in early visual areas.
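The surface Laplacian step can be illustrated with the simple nearest-neighbour (Hjorth) approximation, in which each channel's potential is referenced to the mean of its neighbours. The study itself almost certainly used a finer spline-based estimator, so this is only an illustrative sketch with a made-up three-channel montage fragment.

```python
import numpy as np

def hjorth_laplacian(potentials, neighbors):
    """Nearest-neighbour (Hjorth) approximation of the surface Laplacian.

    potentials: dict of channel name -> time-course array
    neighbors:  dict of channel name -> list of neighbouring channel names
    Returns a dict of channel name -> Laplacian-transformed time-course.
    """
    lap = {}
    for ch, v in potentials.items():
        lap[ch] = v - np.mean([potentials[n] for n in neighbors[ch]], axis=0)
    return lap

# Toy example: a hypothetical occipital montage fragment.
t = np.linspace(0, 0.5, 100)
pots = {"Oz": np.sin(2 * np.pi * 8 * t),
        "O1": 0.5 * np.sin(2 * np.pi * 8 * t),
        "O2": 0.5 * np.sin(2 * np.pi * 8 * t)}
nbrs = {"Oz": ["O1", "O2"], "O1": ["Oz"], "O2": ["Oz"]}
lap = hjorth_laplacian(pots, nbrs)
```

The Laplacian sharpens spatially focal activity and suppresses broadly distributed potentials, which is what makes the distinct foci of skull-scalp currents separable.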
The underlying specificity of visual object categorization and discrimination can be elucidated by studying different types of repetition priming. Here we focused on this issue in face processing. We investigated category priming (i.e. the prime and target stimuli represent different exemplars of the same object category) and item priming (i.e. the prime and target stimuli are exactly the same image), using an immediate repetition paradigm. Twenty-three subjects were asked to categorize as quickly and accurately as possible whether the target stimulus was a face or a building image, while ignoring the prime stimulus. We recorded event-related potentials (ERPs) and reaction times (RTs) simultaneously. The RT data showed significant effects of category priming in both face trials and building trials, as well as a significant effect of item priming in face trials. With respect to the ERPs, in face trials, no priming effect was observed at the P100 stage, whereas a category priming effect emerged at the N170 stage, and an item priming effect at the P200 stage. In contrast, in building trials, priming effects occurred already at the P100 stage. Our results indicate that distinct neural mechanisms underlie separable kinds of immediate repetition priming in face processing.
Category priming; Item priming; P100; N170; P200
We studied visual representation in the parietal cortex by recording whole-scalp neuromagnetic responses to luminance stimuli of varying eccentricities. The stimuli were semicircles (5.5 degrees in radius) presented at horizontal eccentricities from 0 degrees to 16 degrees, separately in the right and left hemifields. All stimuli evoked responses in the contralateral occipital and medial parietal areas. The waveforms and distributions of the occipital responses varied with stimulus side (left, right) and eccentricity, whereas the parietal responses were remarkably similar to all stimuli. The equivalent sources of the parietal signals clustered within 1 cm³ in the medial parieto-occipital sulcus and did not differ significantly between the stimuli. The strength of the parietal activation remained practically constant with increasing stimulus eccentricity, suggesting that the visual areas in the parieto-occipital sulcus lack the enhanced foveal representation typical of most other visual areas. This result strengthens our previous suggestion that the medial parieto-occipital sulcus is the human homologue of the monkey V6 complex, characterized by, for example, lack of retinotopy and the absence of relative foveal magnification.
Visual input from the left and right visual fields is processed predominantly in the contralateral hemisphere. Here we investigated whether this preference for contralateral over ipsilateral stimuli is also found in high-level visual areas that are important for the recognition of objects and faces. Human subjects were scanned with functional magnetic resonance imaging (fMRI) while they viewed and attended faces, objects, scenes, and scrambled images in the left or right visual field. With our stimulation protocol, primary visual cortex responded only to contralateral stimuli. The contralateral preference was smaller in object- and face-selective regions, and it was smallest in the fusiform gyrus. Nevertheless, each region showed a significant preference for contralateral stimuli. These results indicate that sensitivity to stimulus position is present even in high-level ventral visual cortex.
The temporal asynchrony between inputs to different sensory modalities has been shown to be a critical factor influencing the interaction between such inputs. We used scalp-recorded event-related potentials (ERPs) to investigate the effects of attention on the processing of audiovisual multisensory stimuli as the temporal asynchrony between the auditory and visual inputs varied across the audiovisual integration window (i.e., up to 125 ms). Randomized streams of unisensory auditory stimuli, unisensory visual stimuli, and audiovisual stimuli (consisting of the temporally proximal presentation of the visual and auditory stimulus components) were presented centrally while participants attended to either the auditory or the visual modality to detect occasional target stimuli in that modality. ERPs elicited by each of the contributing sensory modalities were extracted by signal processing techniques from the combined ERP waveforms elicited by the multisensory stimuli. This was done for each of the five different 50-ms subranges of stimulus onset asynchrony (SOA: e.g., V precedes A by 125–75 ms, by 75–25 ms, etc.). The extracted ERPs for the visual inputs of the multisensory stimuli were compared among each other and with the ERPs to the unisensory visual control stimuli, separately when attention was directed to the visual or to the auditory modality. The results showed that the attention effect on the right-hemisphere visual P1 was largest when auditory and visual stimuli were temporally aligned. In contrast, the N1 attention effect was smallest at this latency, suggesting that attention may play a role in the processing of the relative temporal alignment of the constituent parts of multisensory stimuli.
At longer latencies an occipital selection negativity for the attended versus unattended visual stimuli was also observed, but this effect did not vary as a function of SOA, suggesting that by that latency a stable representation of the auditory and visual stimulus components has been established.
Electrophysiology; EEG; ERP; Multisensory; SOA
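The abstract does not name the signal-processing technique used to extract the modality-specific ERPs; one common approach rests on the additive model, in which ERP_AV ≈ ERP_A + ERP_V, so that subtracting a unisensory ERP from the multisensory one estimates the other modality's contribution. A minimal sketch under that assumption, with simulated data:

```python
import numpy as np

# Additive-model sketch: assuming ERP_AV ~ ERP_A + ERP_V, the visual
# contribution to the multisensory response can be estimated by subtracting
# the unisensory auditory ERP from the audiovisual ERP. All data simulated.
rng = np.random.default_rng(1)
n_channels, n_samples = 32, 300
erp_a = rng.normal(size=(n_channels, n_samples))    # unisensory auditory ERP
erp_v = rng.normal(size=(n_channels, n_samples))    # unisensory visual ERP
residual_noise = 0.05 * rng.normal(size=(n_channels, n_samples))
erp_av = erp_a + erp_v + residual_noise             # measured multisensory ERP

v_extracted = erp_av - erp_a                        # estimated visual component
```

The additive assumption is exactly what makes super- or sub-additive interactions detectable: any systematic difference between the extracted component and its unisensory control indexes multisensory interaction or, as here, attentional modulation.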
Previous studies have shown that emotion can have 2-fold effects on perception. At the object-level, emotional stimuli benefit from a stimulus-specific boost in visual attention at the relative expense of competing stimuli. At the visual feature-level, recent findings indicate that emotion may inhibit the processing of small visual details and facilitate the processing of coarse visual features. In the present study, we investigated whether emotion can boost the activation and inhibition of automatic motor responses that are generated prior to overt perception. To investigate this, we tested whether an emotional cue affects covert motor responses in a masked priming task. We used a masked priming paradigm in which participants responded to target arrows that were preceded by invisible congruent or incongruent prime arrows. In the standard paradigm, participants react faster, and commit fewer errors responding to the directionality of target arrows, when they are preceded by congruent vs. incongruent masked prime arrows (positive congruency effect, PCE). However, as prime-target SOAs increase, this effect reverses (negative congruency effect, NCE). These findings have been explained as evidence for an initial activation and a subsequent inhibition of a partial response elicited by the masked prime arrow. Our results show that the presentation of fearful face cues, compared to neutral face cues, increased the size of both the PCE and NCE, despite the fact that the primes were invisible. This is the first demonstration that emotion prepares an individual's visuomotor system for automatic activation and inhibition of motor responses in the absence of visual awareness.
emotion; masking; motor priming; fearful faces; activation; inhibition
The time course of cross-script translation priming and repetition priming was examined in two different scripts using a combination of the masked priming paradigm with the recording of event-related potentials (ERPs). Japanese-English bilinguals performed a semantic categorization task in their second language (L2) English and in their first language (L1) Japanese. Targets were preceded by a visually presented related (translation equivalent/repeated) or unrelated prime. The results showed that the amplitudes of the N250 and N400 ERP components were significantly modulated for L2-L2 repetition priming, L1-L2 translation priming, and L1-L1 repetition priming, but not for L2-L1 translation priming. There was also evidence for priming effects in an earlier 100-200 ms time window for L1-L1 repetition priming and L1-L2 translation priming. We argue that a change in script across primes and targets provides optimal conditions for prime word processing, hence generating very fast-acting translation priming effects when primes are in L1.
Translation Priming; Cross-Script Priming; ERPs
There have been conflicting findings as to whether the P3 brain potential to targets in oddball tasks is reduced in depressed patients. The P3 to novel distracter stimuli in a three-stimulus oddball task has a more frontocentral topography than P3 to targets and is associated with different cognitive operations and neural generators. The novelty P3 potential was predicted to be reduced in depressed patients. EEG was recorded from 30 scalp electrodes (nose reference) in 20 unmedicated depressed patients and 20 matched healthy controls during a novelty oddball task with three stimuli: infrequent target tones (12%), frequent standard tones (76%) and nontarget novel stimuli, e.g., animal or environment sounds (12%). Novel stimuli evoked a P3 potential with shorter peak latency and more frontocentral topography than the parietal-maximum P3b to target stimuli. The novelty P3 was markedly reduced in depressed patients compared to controls. Although there was a trend for patients to also have smaller parietal P3b to targets, this group difference was not statistically significant. Nor was there a group difference in the earlier N1 or N2 potentials. The novelty P3 reduction in depressed patients is indicative of a deficit in orienting of attention and evaluation of novel environmental sounds.
Depression; ERP; P3; Novelty; Attention
One's own name constitutes a unique part of conscious awareness – but does this also hold true for unconscious processing? The present study shows that the own name has the power to bias a person's actions unconsciously even in conditions that render any other name ineffective. Participants judged whether a letter string on the screen was a name or a non-word while this target stimulus was preceded by a masked prime stimulus. Crucially, the participant's own name was among these prime stimuli and facilitated reactions to following name targets whereas the name of another, yoked participant did not. Signal detection results confirmed that participants were not aware of any of the prime stimuli, including their own name. These results extend traditional findings on “breakthrough” phenomena of personally relevant stimuli to the domain of unconscious processing. Thus, the brain seems to possess adroit mechanisms to identify and process such stimuli even in the absence of conscious awareness.
Selective visual attention is the process by which the visual system enhances behaviorally relevant stimuli and filters out others. Visual attention is thought to operate through a cortical mechanism known as biased competition. Representations of stimuli within cortical visual areas compete such that they mutually suppress each others' neural response. Competition increases with stimulus proximity and can be biased in favor of one stimulus (over another) as a function of stimulus significance, salience, or expectancy. Though there is considerable evidence of biased competition within the human visual system, the dynamics of the process remain unknown.
Here, we used scalp-recorded electroencephalography (EEG) to examine neural correlates of biased competition in the human visual system. In two experiments, subjects performed a task requiring them to either simultaneously identify two targets (Experiment 1) or discriminate one target while ignoring a decoy (Experiment 2). Competition was manipulated by altering the spatial separation between target(s) and/or decoy. Both experimental tasks should induce competition between stimuli. However, only the task of Experiment 2 should invoke a strong bias in favor of the target (over the decoy). The amplitude of two lateralized components of the event-related potential, the N2pc and Ptc, mirrored these predictions. N2pc amplitude increased with increasing stimulus separation in Experiments 1 and 2. However, Ptc amplitude varied only in Experiment 2, becoming more positive with decreased spatial separation.
These results suggest that N2pc and Ptc components may index distinct processes of biased competition—N2pc reflecting visual competitive interactions and Ptc reflecting a bias in processing necessary to individuate task-relevant stimuli.
This study investigates the effects of profound acquired unilateral deafness on the adult human central auditory system by analyzing long-latency auditory evoked potentials (AEPs) with dipole source modeling methods. AEPs, elicited by clicks presented to the intact ear in 19 adult subjects with profound unilateral deafness and monaurally to each ear in eight adult normal-hearing controls, were recorded with a 31-channel system. The responses in the 70–210 ms time window, encompassing the N1b/P2 and Ta/Tb components of the AEPs, were modeled by a vertically and a laterally oriented dipole source in each hemisphere. Peak latencies and amplitudes of the major components of the dipole waveforms were measured in the hemispheres ipsilateral and contralateral to the stimulated ear. The normal-hearing subjects showed significant ipsilateral–contralateral latency and amplitude differences, with contralateral source activities that were typically larger and peaked earlier than the ipsilateral activities. In addition, the ipsilateral–contralateral amplitude differences from monaural presentation were similar for left and for right ear stimulation. For unilaterally deaf subjects, the previously reported reduction in ipsilateral–contralateral amplitude differences based on scalp waveforms was also observed in the dipole source waveforms. However, analysis of the source dipole activity demonstrated that the reduced inter-hemispheric amplitude differences were ear dependent. Specifically, these changes were found only in those subjects affected by profound left ear unilateral deafness.
Auditory evoked potentials (AEPs); dipoles; unilateral deafness; human; plasticity
We report three experiments that combine the masked priming paradigm with the recording of event-related potentials in order to examine the time-course of cross-modal interactions during word recognition. Visually presented masked primes preceded either visually or auditorily presented targets that were or were not the same word as the prime. Experiment 1 used the lexical decision task, and in Experiments 2 and 3 participants monitored target words for animal names. The results show a strong modulation of the N400 and an earlier ERP component (the N250) in within-modality (visual-visual) repetition priming, and a much weaker and later N400-like effect (400–700 ms) in the cross-modal (visual-auditory) condition with prime exposures of 50 ms (Experiments 1 and 2). With a prime duration of 67 ms (Experiment 3), cross-modal ERP priming effects arose earlier, during the traditional N400 epoch (300–500 ms), and were also larger overall than at the shorter prime duration.
word recognition; cross-modal priming; event-related potentials
Visual stimuli can be classified so rapidly that their analysis may be based on a single sweep of feedforward processing through the visuomotor system. Behavioral criteria for feedforward processing can be evaluated in response priming tasks where speeded pointing or keypress responses are performed toward target stimuli which are preceded by prime stimuli. We apply this method to several classes of complex stimuli. (1) When participants classify natural images into animals or non-animals, the time course of their pointing responses indicates that prime and target signals remain strictly sequential throughout all processing stages, meeting stringent behavioral criteria for feedforward processing (rapid-chase criteria). (2) Such priming effects are boosted by selective visual attention for positions, shapes, and colors, in a way consistent with bottom-up enhancement of visuomotor processing, even when primes cannot be consciously identified. (3) Speeded processing of phobic images is observed in participants specifically fearful of spiders or snakes, suggesting enhancement of feedforward processing by long-term perceptual learning. (4) When the perceived brightness of primes in complex displays is altered by means of illumination or transparency illusions, priming effects in speeded keypress responses can systematically contradict subjective brightness judgments, such that one prime appears brighter than the other but activates motor responses as if it were darker. We propose that response priming captures the output of the first feedforward pass of visual signals through the visuomotor system, and that this output lacks some characteristic features of more elaborate, recurrent processing. This way, visuomotor measures may become dissociated from several aspects of conscious vision. We argue that “fast” visuomotor measures predominantly driven by feedforward processing should supplement “slow” psychophysical measures predominantly based on visual awareness.
visual perception; visual awareness; response priming; rapid-chase; feature-based attention; objects; phobias; lightness
The present study used event-related potentials (ERPs) to examine the time-course of visual word recognition using a masked repetition priming paradigm. In two experiments participants monitored a stream of words for occasional animal names, and ERPs were recorded to non-animal critical target items that were either repetitions of or were unrelated to the immediately preceding masked prime word. In Experiment 1 the onset interval between the prime and target (stimulus-onset asynchrony, SOA) was manipulated across four levels (60, 180, 300, and 420 ms) and the duration of primes was held constant at 40 ms. In Experiment 2 the SOA between the prime and target was held constant at 60 ms and the prime duration was manipulated across four levels (10, 20, 30, and 40 ms). Both manipulations were found to have distinct effects on the N250 and N400 ERP components. The results provide converging evidence that the N250 reflects processing at the level of form representations (orthography and phonology) while the N400 reflects processing at the level of meaning.
Visual word processing; word recognition; N400; N250; masked priming
Participants performed a priming task during which emotional faces served as prime stimuli and emotional words served as targets. Prime-target pairs were congruent or incongruent and two levels of prime visibility were obtained by varying the duration of the masked primes. To probe a neural signature of the impact of the masked primes, lateralized readiness potentials (LRPs) were recorded over motor cortex. During the high-visibility condition, responses to word targets were faster when the prime-target pairs were congruent than when they were incongruent, providing evidence of priming effects. In line with the behavioral results, the electrophysiological data showed that high-visibility face primes resulted in LRP differences between congruent and incongruent trials, suggesting that prime stimuli initiated motor preparation. Contrary to the above pattern, no evidence for reaction time or LRP differences was observed during the low-visibility condition, revealing that the depth of facial expression processing is dependent upon stimulus visibility.
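The LRP used above is conventionally derived by a double subtraction over the motor electrodes C3 and C4: activity contralateral to the responding hand minus ipsilateral activity, averaged over both hands. A toy sketch with simulated waveforms (the electrodes, epoch length, and signal shape here are assumptions, not the study's actual parameters):

```python
import numpy as np

# Double-subtraction sketch of the lateralized readiness potential (LRP).
# Simulated trial-averaged waveforms at C3 (left) and C4 (right) for
# left-hand and right-hand responses, 500 ms before movement onset.
rng = np.random.default_rng(2)
t = np.linspace(-0.5, 0.0, 250)
prep = -0.8 * np.exp(t / 0.15)        # ramping motor preparation (negative)

def noise():
    return rng.normal(scale=0.01, size=t.size)

c3_right = prep + noise()   # C3 is contralateral to the right hand
c4_right = noise()
c4_left = prep + noise()    # C4 is contralateral to the left hand
c3_left = noise()

# Averaging the contra-minus-ipsi difference over both hands cancels any
# lateralized activity unrelated to the response hand.
lrp = 0.5 * ((c3_right - c4_right) + (c4_left - c3_left))
```

A reliable LRP deflection on incongruent trials before target-driven preparation is what licenses the inference that the masked prime itself initiated motor preparation.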
The effect of multi-modal vs uni-modal prior stimuli on the subsequent processing of a simple flash stimulus was studied in the context of the audio-visual ‘flash-beep’ illusion, in which the number of flashes a person sees is influenced by accompanying beep stimuli. EEG recordings were made while combinations of simple visual and audio-visual stimuli were presented. The experiments found that the electric field strength related to a flash stimulus was stronger when it was preceded by a multi-modal flash/beep stimulus, compared to when it was preceded by another uni-modal flash stimulus. This difference was found to be significant in two distinct timeframes – an early timeframe, from 130–160 ms, and a late timeframe, from 300–320 ms. Source localisation analysis found that the increased activity in the early interval was localised to an area centred on the inferior and superior parietal lobes, whereas the later increase was associated with stronger activity in an area centred on primary and secondary visual cortex, in the occipital lobe. The results suggest that processing of a visual stimulus can be affected by the presence of an immediately prior multisensory event. Relatively long-lasting interactions generated by the initial auditory and visual stimuli altered the processing of a subsequent visual stimulus.
Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, the present study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2, 3, and 4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of co-localized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task.
Visual evoked potentials (VEPs) to lateralised light flashes were recorded from two acallosal patients. In one patient, these recordings were made while he performed a choice-reaction time task, and in the other patient the VEPs were obtained during a simple reaction time task. In both cases the patient's VEPs from electrode sites contralateral to the visual field of stimulus delivery resembled those of controls. Their VEPs from ipsilateral sites were aberrant, however, in that while control subjects showed a smaller and slightly delayed ipsilateral N160 component, this was not discernible in the patients' data. It is concluded that the ipsilateral N160 relies for its generation on the transcallosal transfer of information processed initially by the contralateral hemisphere.
Most visual stimuli we experience on a day-to-day basis are continuous sequences, with spatial structure highly correlated in time. During rapid serial visual presentation (RSVP), this correlation is absent. Here we study how subjects' target detection responses, both behavioral and electrophysiological, differ between continuous serial visual sequences (CSVP), flashed serial visual presentation (FSVP) and RSVP. Behavioral results show longer reaction times for CSVP compared to the FSVP and RSVP conditions, as well as a difference in miss rate between RSVP and the other two conditions. Using mutual information, we measure electrophysiological differences in the electroencephalography (EEG) for these three conditions. We find two peaks in the mutual information between EEG and stimulus class (target vs. distractor), with the second peak occurring 30–40 ms earlier for the FSVP and RSVP conditions. In addition, we find differences in the persistence of the peak mutual information between FSVP and RSVP conditions. We further investigate these differences using a mutual information based functional connectivity analysis and find significant fronto-parietal functional coupling for RSVP and FSVP but no significant coupling for the CSVP condition. We discuss these findings within the context of attentional engagement, evidence accumulation and short-term visual memory.
target detection; visual presentation; electroencephalography; mutual information
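The mutual information between EEG and stimulus class used above can be approximated with a simple joint-histogram estimator. The binning scheme and toy data below are illustrative assumptions, not the authors' actual estimator:

```python
import numpy as np

def mutual_information(x, y, n_bins=8):
    """Joint-histogram estimate of the mutual information (in bits) between
    a continuous variable x and discrete class labels y."""
    x = np.asarray(x, dtype=float)
    edges = np.histogram_bin_edges(x, bins=n_bins)
    x_bin = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
    classes = sorted(set(y))
    joint = np.zeros((n_bins, len(classes)))
    for xb, yi in zip(x_bin, y):
        joint[xb, classes.index(yi)] += 1
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

# Toy demo: amplitudes at one time point that differ between target and
# distractor trials carry information about the stimulus class.
rng = np.random.default_rng(3)
amps = np.concatenate([rng.normal(2.0, 1.0, 500),    # target trials
                       rng.normal(0.0, 1.0, 500)])   # distractor trials
labels = ["target"] * 500 + ["distractor"] * 500
mi = mutual_information(amps, labels)

shuffled = rng.permutation(labels)                   # chance-level baseline
mi_shuffled = mutual_information(amps, shuffled)
```

Computing this quantity at each time sample yields the mutual-information time course in which the two peaks and their latency differences between conditions were measured; shuffled labels give the chance baseline against which such peaks are assessed.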
In a post-cued letter identification task, participants were presented with 7-letter nonword target stimuli that were formed of a random string of consonants (DCMFPLR) or a pronounceable sequence of consonants and vowels (DAMOPUR). Targets were preceded by briefly presented pattern-masked primes that could be the same sequence of letters as the target, a sequence of seven different letters, or a sequence sharing either the first or the last five letters of the target. There was some evidence for repetition priming effects that were independent of target type in an early component, the N/P150, thought to reflect the mapping of visual features onto letter representations, and that is insensitive to orthographic structure. Following this, pronounceable nonwords showed significantly greater repetition priming effects than consonant strings, in line with the behavioral results. Initial versus final overlap only started to influence target processing at around 200–250 ms post-target onset, at about the same time as the effects of target type emerged. The results are in line with a model where the initial parallel mapping of visual features onto a location-specific orthographic code is followed by the subsequent activation of location-invariant orthographic and phonological codes.
Pseudoword superiority effect; ERPs; Nonword processing; Masked priming
We employed an EEG paradigm manipulating predictive context to dissociate the neural dynamics of anticipatory mechanisms. Subjects either detected random targets or targets preceded by a predictive sequence of three distinct stimuli. The last stimulus in the 3-stimulus sequence (decisive stimulus) did not require any motor response but 100% predicted a subsequent target event. We show that predictive context optimizes target processing via the deployment of distinct anticipatory mechanisms at different times of the predictive sequence. Prior to the occurrence of the decisive stimulus, enhanced attentional preparation was manifested by reductions in the alpha oscillatory activities over visual cortices, resulting in facilitation of processing of the decisive stimulus. Conversely, the subsequent 100% predictable target event did not reveal deployment of attentional preparation in the visual cortices, but elicited enhanced motor preparation mechanisms, indexed by an increased contingent negative variation (CNV) and reduced mu oscillatory activities over motor cortices before movement onset. The present results provide evidence that anticipation operates via different attentional and motor preparation mechanisms by selectively pre-activating task-dependent brain areas as predictability gradually increases.
attention; motor preparation; alpha; mu; beta
Attending to a conversation in a crowded scene requires selection of relevant information, while ignoring other distracting sensory input, such as speech signals from surrounding people. The neural mechanisms of how distracting stimuli influence the processing of attended speech are not well understood. In this high-density electroencephalography (EEG) study, we investigated how different types of speech and non-speech stimuli influence the processing of attended audiovisual speech. Participants were presented with three horizontally aligned speakers who produced syllables. The faces of the three speakers flickered at specific frequencies (19 Hz for flanking speakers and 25 Hz for the center speaker), which induced steady-state visual evoked potentials (SSVEP) in the EEG that served as a measure of visual attention. The participants' task was to detect an occasional audiovisual target syllable produced by the center speaker, while ignoring distracting signals originating from the two flanking speakers. In all experimental conditions the center speaker produced a bimodal audiovisual syllable. In three distraction conditions, which were contrasted with a no-distraction control condition, the flanking speakers either produced audiovisual speech, moved their lips while producing acoustic noise, or moved their lips without producing an auditory signal. We observed behavioral interference in the reaction times (RTs) in particular when the flanking speakers produced naturalistic audiovisual speech. These effects were paralleled by enhanced 19 Hz SSVEP, indicative of a stimulus-driven capture of attention toward the interfering speakers. Our study provides evidence that non-relevant audiovisual speech signals serve as highly salient distractors, which capture attention in a stimulus-driven fashion.
crossmodal; EEG; bimodal; SSVEP; oscillatory
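The frequency-tagging logic above (19 Hz flankers, 25 Hz center) comes down to reading out spectral amplitude at the tagged frequencies. A minimal sketch on a simulated single-channel signal; the sampling rate, epoch length, and amplitudes are made up for illustration:

```python
import numpy as np

# Frequency-tagging sketch: estimate SSVEP amplitude at the tagged
# frequencies (19 Hz flankers, 25 Hz center) from one simulated channel.
fs = 500                          # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)       # 4 s of data -> 0.25 Hz resolution
rng = np.random.default_rng(4)
eeg = (1.5 * np.sin(2 * np.pi * 19 * t)      # response driven by flankers
       + 0.8 * np.sin(2 * np.pi * 25 * t)    # response driven by center speaker
       + rng.normal(scale=0.5, size=t.size)) # background noise

spectrum = np.abs(np.fft.rfft(eeg)) / (t.size / 2)  # single-sided amplitude
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

amp_19 = spectrum[np.argmin(np.abs(freqs - 19.0))]
amp_25 = spectrum[np.argmin(np.abs(freqs - 25.0))]
```

Because the tagged frequencies fall on exact FFT bins for this epoch length, amplitude at each bin cleanly separates the attention allocated to flankers versus the center speaker, which is how the enhanced 19 Hz SSVEP under distraction would be quantified.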
This study reports a new approach to studying the time-course of the perceptual processing of objects by combining for the first time the masked repetition priming technique with the recording of event-related potentials (ERPs). In a semantic categorization task ERPs were recorded to repeated and unrelated target pictures of common objects that were immediately preceded by briefly presented pattern masked prime objects. Three sequential ERP effects were found between 100 and 650 ms post-target onset. These effects included an early posterior positivity/anterior negativity (N/P190) that was suggested to reflect early feature processing in visual cortex. This early effect was followed by an anterior negativity (N300) that was suggested to reflect processing of object-specific representations and finally by a widely distributed negativity (N400) that was argued to reflect more domain general semantic processing.
ERP; N400; N300; Masked priming; Object recognition