In spite of the excellent temporal resolution of event-related EEG potentials
(ERPs), the overlapping potentials evoked by masked and masking stimuli are hard
to disentangle. However, when both masked and masking stimuli consist of pairs
of relevant and irrelevant stimuli, one left and one right from fixation, with
the side of the relevant element varying between pairs, effects of masked and
masking stimuli can be distinguished by means of the contralateral preponderance
of the potentials evoked by the relevant elements, because the relevant elements
may independently change sides in masked and masking stimuli. Based on a
reanalysis of data from which only selected contralateral-ipsilateral effects
had been previously published, the present contribution will provide a more
complete picture of the ERP effects in a masked-priming task. Indeed, effects
evoked by masked primes and masking targets heavily overlapped in conventional
ERPs and could be disentangled to a certain degree by contralateral-ipsilateral
differences. Their major component, the N2pc, is interpreted as indicating
preferential processing of stimuli matching the target template, a process that
can be identified neither with conscious perception nor with shifts of spatial
attention. The measurements showed that the triggering of response preparation
by the masked stimuli did not depend on their discriminability, and their
priming effects on the processing of the following target stimuli were
qualitatively different for stimulus identification and for response
preparation. These results provide another piece of evidence for the
independence of motor-related and perception-related effects of masked stimuli.
event-related potentials; masking; masked priming; N2pc; LRP; N2cc
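The contralateral-ipsilateral logic described above can be sketched in code. A minimal illustration, assuming single-trial ERP time courses from a left/right posterior electrode pair (the PO7/PO8 names and all numbers are hypothetical, not taken from the study):

```python
import numpy as np

def contra_ipsi_difference(left_chan, right_chan, target_side):
    """Contralateral-minus-ipsilateral difference wave (the basis of the
    N2pc/N2cc measures). left_chan/right_chan are ERP time courses from a
    left/right posterior electrode pair (e.g., PO7/PO8 -- assumed names);
    target_side is the side of the relevant element ('left' or 'right')."""
    if target_side == 'left':
        contra, ipsi = right_chan, left_chan   # opposite hemisphere is contralateral
    else:
        contra, ipsi = left_chan, right_chan
    return contra - ipsi

# Toy data: a negative-going contralateral deflection peaking near 250 ms.
t = np.arange(0, 0.5, 0.002)                                   # 0-500 ms at 500 Hz
contra_wave = -2e-6 * np.exp(-((t - 0.25) ** 2) / (2 * 0.03 ** 2))
ipsi_wave = np.zeros_like(t)
diff = contra_ipsi_difference(ipsi_wave, contra_wave, 'left')  # relevant element on the left
print(round(diff.min() * 1e6, 2))                              # peak amplitude in microvolts: -2.0
```

Because the relevant element changes sides independently in primes and targets, the same subtraction can be computed separately for each, which is what allows the overlapping potentials to be disentangled.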
We studied visual representation in the parietal cortex by recording whole-scalp neuromagnetic responses to luminance stimuli of varying eccentricities. The stimuli were semicircles (5.5 degrees in radius) presented at horizontal eccentricities from 0 to 16 degrees, separately in the right and left hemifields. All stimuli evoked responses in the contralateral occipital and medial parietal areas. The waveforms and distributions of the occipital responses varied with stimulus side (left, right) and eccentricity, whereas the parietal responses were remarkably similar to all stimuli. The equivalent sources of the parietal signals clustered within 1 cm³ in the medial parieto-occipital sulcus and did not differ significantly between the stimuli. The strength of the parietal activation remained practically constant with increasing stimulus eccentricity, suggesting that the visual areas in the parieto-occipital sulcus lack the enhanced foveal representation typical of most other visual areas. This result strengthens our previous suggestion that the medial parieto-occipital sulcus is the human homologue of the monkey V6 complex, characterized by, for example, lack of retinotopy and the absence of relative foveal magnification.
Visual input from the left and right visual fields is processed predominantly in the contralateral hemisphere. Here we investigated whether this preference for contralateral over ipsilateral stimuli is also found in high-level visual areas that are important for the recognition of objects and faces. Human subjects were scanned with functional magnetic resonance imaging (fMRI) while they viewed and attended faces, objects, scenes, and scrambled images in the left or right visual field. With our stimulation protocol, primary visual cortex responded only to contralateral stimuli. The contralateral preference was smaller in object- and face-selective regions, and it was smallest in the fusiform gyrus. Nevertheless, each region showed a significant preference for contralateral stimuli. These results indicate that sensitivity to stimulus position is present even in high-level ventral visual cortex.
The temporal asynchrony between inputs to different sensory modalities has been shown to be a critical factor influencing the interaction between such inputs. We used scalp-recorded event-related potentials (ERPs) to investigate the effects of attention on the processing of audiovisual multisensory stimuli as the temporal asynchrony between the auditory and visual inputs varied across the audiovisual integration window (i.e., up to 125 ms). Randomized streams of unisensory auditory stimuli, unisensory visual stimuli, and audiovisual stimuli (consisting of the temporally proximal presentation of the visual and auditory stimulus components) were presented centrally while participants attended to either the auditory or the visual modality to detect occasional target stimuli in that modality. ERPs elicited by each of the contributing sensory modalities were extracted by signal processing techniques from the combined ERP waveforms elicited by the multisensory stimuli. This was done for each of the five different 50-ms subranges of stimulus onset asynchrony (SOA: e.g., V precedes A by 125–75 ms, by 75–25 ms, etc.). The extracted ERPs for the visual inputs of the multisensory stimuli were compared with each other and with the ERPs to the unisensory visual control stimuli, separately when attention was directed to the visual or to the auditory modality. The results showed that the attention effect on the right-hemisphere visual P1 was largest when auditory and visual stimuli were temporally aligned. In contrast, the N1 attention effect was smallest at this latency, suggesting that attention may play a role in the processing of the relative temporal alignment of the constituent parts of multisensory stimuli.
At longer latencies an occipital selection negativity for the attended versus unattended visual stimuli was also observed, but this effect did not vary as a function of SOA, suggesting that by that latency a stable representation of the auditory and visual stimulus components has been established.
Electrophysiology; EEG; ERP; Multisensory; SOA
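The extraction step described above can be illustrated under a common additivity assumption (the multisensory waveform is modeled as the sum of the unisensory responses, so the visual contribution is estimated as AV − A). This is only a schematic for a single SOA bin; the abstract does not specify the exact signal-processing technique, and all waveforms below are synthetic:

```python
import numpy as np

# Toy single-bin illustration: assume unisensory responses add linearly, so the
# visual contribution to the multisensory ERP is estimated as AV - A.
t = np.arange(0, 0.4, 0.002)                  # 0-400 ms at 500 Hz
erp_a = 1.5e-6 * np.sin(2 * np.pi * 5 * t)    # stand-in auditory ERP
erp_v = -1.0e-6 * np.sin(2 * np.pi * 8 * t)   # stand-in visual ERP
erp_av = erp_a + erp_v                        # multisensory waveform (additive toy)

extracted_v = erp_av - erp_a                  # recovered visual component
print(np.allclose(extracted_v, erp_v))        # True for this additive toy
```

In the actual study this extraction would be repeated for each 50-ms SOA subrange before comparing the resulting visual ERPs across attention conditions.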
Previous studies have shown that emotion can have 2-fold effects on perception. At the object-level, emotional stimuli benefit from a stimulus-specific boost in visual attention at the relative expense of competing stimuli. At the visual feature-level, recent findings indicate that emotion may inhibit the processing of small visual details and facilitate the processing of coarse visual features. In the present study, we investigated whether emotion can boost the activation and inhibition of automatic motor responses that are generated prior to overt perception. To investigate this, we tested whether an emotional cue affects covert motor responses in a masked priming task. We used a masked priming paradigm in which participants responded to target arrows that were preceded by invisible congruent or incongruent prime arrows. In the standard paradigm, participants react faster and commit fewer errors when responding to the directionality of target arrows that are preceded by congruent vs. incongruent masked prime arrows (positive congruency effect, PCE). However, as prime-target SOAs increase, this effect reverses (negative congruency effect, NCE). These findings have been explained as evidence for an initial activation and a subsequent inhibition of a partial response elicited by the masked prime arrow. Our results show that the presentation of fearful face cues, compared to neutral face cues, increased the size of both the PCE and NCE, despite the fact that the primes were invisible. This is the first demonstration that emotion prepares an individual's visuomotor system for automatic activation and inhibition of motor responses in the absence of visual awareness.
emotion; masking; motor priming; fearful faces; activation; inhibition
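The PCE and NCE measures described above reduce to a signed mean reaction-time difference. A sketch with hypothetical RTs (none of these numbers come from the study):

```python
import numpy as np

def congruency_effect(rt_congruent, rt_incongruent):
    """Mean RT difference (incongruent - congruent) in ms: positive values
    indicate a PCE, negative values an NCE."""
    return float(np.mean(rt_incongruent) - np.mean(rt_congruent))

# Hypothetical RTs (ms) at a short and a long prime-target SOA.
short_soa = congruency_effect([420, 435, 410], [455, 470, 445])  # prime response still active
long_soa = congruency_effect([460, 475, 450], [430, 445, 425])   # prime response inhibited
print(short_soa, long_soa)  # positive (PCE) at the short SOA, negative (NCE) at the long SOA
```

The emotion manipulation in the study amounts to comparing these effect sizes between fearful-cue and neutral-cue trials.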
It is known that neural responses become less dependent on the stimulus size and location along the visual pathway. This study aimed to use this property to find evidence of neural feedback in visually evoked potentials (VEP). High-density VEPs evoked by a contrast reversing checkerboard were collected from 15 normal observers using a 128-channel EEG system. The surface Laplacian method was used to calculate skull-scalp currents corresponding to the measured scalp potentials. This allowed us to identify several distinct foci of skull-scalp currents and to analyse their individual time-courses. Response nonlinearity as a function of the stimulus size increased markedly from the occipital to temporal loci. Similarly, the nonlinearity of reactivations (late evoked response peaks) over the occipital, lateral-occipital, and frontal scalp regions increased with the peak latency. Response laterality (contralateral vs. ipsilateral) was analysed in lateral-occipital and temporal loci. Early lateral-occipital responses were strongly contralateral but the response laterality decreased and then disappeared for later peaks. Responses in temporal loci did not differ significantly between contralateral and ipsilateral stimulation. Overall, the results suggest that feedback from higher-tier visual areas, e.g., those in temporal cortices, may significantly contribute to reactivations in early visual areas.
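The surface Laplacian sharpens scalp topographies by differentiating the potential field; the study presumably used a spline-based estimator, but the underlying idea can be illustrated with the classic nearest-neighbour (Hjorth) approximation on a toy montage (channel names and values are invented):

```python
def hjorth_laplacian(potentials, neighbors):
    """Nearest-neighbour (Hjorth) approximation to the surface Laplacian:
    each channel's potential minus the mean of its neighbours. A crude
    stand-in for spline-based Laplacian estimators.

    potentials : dict channel name -> potential (toy scalars here; in
                 practice each value would be a time series).
    neighbors  : dict channel name -> list of neighbouring channel names.
    """
    return {ch: potentials[ch] - sum(potentials[n] for n in nbrs) / len(nbrs)
            for ch, nbrs in neighbors.items()}

# Toy montage: a focal source at Oz surrounded by flat neighbours.
pots = {'Oz': 5.0, 'O1': 1.0, 'O2': 1.0, 'POz': 1.0}
nbrs = {'Oz': ['O1', 'O2', 'POz']}
lap = hjorth_laplacian(pots, nbrs)
print(lap)  # {'Oz': 4.0} -- focal activity is emphasized relative to the surround
```

This spatial sharpening is what makes it possible to separate distinct current foci (occipital, lateral-occipital, temporal) and track their individual time courses.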
The present study used event-related potentials (ERPs) to examine the time course of orthographic and phonological priming in the masked priming paradigm. Participants monitored visual target words for occasional animal names, and ERPs to nonanimal critical items were recorded. These critical items were preceded by different types of primes: Orthographic priming was examined using transposed-letter (TL) primes (e.g., barin-BRAIN) and their controls (e.g., bosin-BRAIN); phonological priming was examined using pseudohomophone primes (e.g., brane-BRAIN) and their controls (e.g., brant-BRAIN). Both manipulations modulated the N250 ERP component, which is hypothesized to reflect sublexical processing during visual word recognition. Orthographic (TL) priming and phonological (pseudohomophone) priming were found to have distinct topographical distributions and different timing, with orthographic effects arising earlier than phonological effects.
There have been conflicting findings as to whether the P3 brain potential to targets in oddball tasks is reduced in depressed patients. The P3 to novel distracter stimuli in a three-stimulus oddball task has a more frontocentral topography than P3 to targets and is associated with different cognitive operations and neural generators. The novelty P3 potential was predicted to be reduced in depressed patients. EEG was recorded from 30 scalp electrodes (nose reference) in 20 unmedicated depressed patients and 20 matched healthy controls during a novelty oddball task with three stimuli: infrequent target tones (12%), frequent standard tones (76%) and nontarget novel stimuli, e.g., animal or environment sounds (12%). Novel stimuli evoked a P3 potential with shorter peak latency and more frontocentral topography than the parietal-maximum P3b to target stimuli. The novelty P3 was markedly reduced in depressed patients compared to controls. Although there was a trend for patients to also have smaller parietal P3b to targets, this group difference was not statistically significant. Nor was there a group difference in the earlier N1 or N2 potentials. The novelty P3 reduction in depressed patients is indicative of a deficit in orienting of attention and evaluation of novel environmental sounds.
Depression; ERP; P3; Novelty; Attention
The underlying specificity of visual object categorization and discrimination can be elucidated by studying different types of repetition priming. Here we focused on this issue in face processing. We investigated category priming (i.e. the prime and target stimuli represent different exemplars of the same object category) and item priming (i.e. the prime and target stimuli are exactly the same image), using an immediate repetition paradigm. Twenty-three subjects were asked to respond as fast and accurately as possible to categorize whether the target stimulus was a face or a building image, but to ignore the prime stimulus. We recorded event-related potentials (ERPs) and reaction times (RTs) simultaneously. The RT data showed significant effects of category priming in both face trials and building trials, as well as a significant effect of item priming in face trials. With respect to the ERPs, in face trials, no priming effect was observed at the P100 stage, whereas a category priming effect emerged at the N170 stage, and an item priming effect at the P200 stage. In contrast, in building trials, priming effects occurred already at the P100 stage. Our results indicated that distinct neural mechanisms underlie separable kinds of immediate repetition priming in face processing.
Category priming; Item priming; P100; N170; P200
Selective visual attention is the process by which the visual system enhances behaviorally relevant stimuli and filters out others. Visual attention is thought to operate through a cortical mechanism known as biased competition. Representations of stimuli within cortical visual areas compete such that they mutually suppress each other's neural responses. Competition increases with stimulus proximity and can be biased in favor of one stimulus (over another) as a function of stimulus significance, salience, or expectancy. Though there is considerable evidence of biased competition within the human visual system, the dynamics of the process remain unknown.
Here, we used scalp-recorded electroencephalography (EEG) to examine neural correlates of biased competition in the human visual system. In two experiments, subjects performed a task requiring them to either simultaneously identify two targets (Experiment 1) or discriminate one target while ignoring a decoy (Experiment 2). Competition was manipulated by altering the spatial separation between target(s) and/or decoy. Both experimental tasks should induce competition between stimuli. However, only the task of Experiment 2 should invoke a strong bias in favor of the target (over the decoy). The amplitude of two lateralized components of the event-related potential, the N2pc and Ptc, mirrored these predictions. N2pc amplitude increased with increasing stimulus separation in Experiments 1 and 2. However, Ptc amplitude varied only in Experiment 2, becoming more positive with decreased spatial separation.
These results suggest that N2pc and Ptc components may index distinct processes of biased competition—N2pc reflecting visual competitive interactions and Ptc reflecting a bias in processing necessary to individuate task-relevant stimuli.
One's own name constitutes a unique part of conscious awareness – but does this also hold true for unconscious processing? The present study shows that the own name has the power to bias a person's actions unconsciously even in conditions that render any other name ineffective. Participants judged whether a letter string on the screen was a name or a non-word while this target stimulus was preceded by a masked prime stimulus. Crucially, the participant's own name was among these prime stimuli and facilitated reactions to following name targets whereas the name of another, yoked participant did not. Signal detection results confirmed that participants were not aware of any of the prime stimuli, including their own name. These results extend traditional findings on “breakthrough” phenomena of personally relevant stimuli to the domain of unconscious processing. Thus, the brain seems to possess adroit mechanisms to identify and process such stimuli even in the absence of conscious awareness.
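The signal-detection check on prime awareness mentioned above typically amounts to computing d′ from hit and false-alarm rates in a prime-visibility test; a minimal sketch with hypothetical rates near chance (not the study's numbers):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical prime-detection rates close to chance: a d' near zero is the
# kind of result that supports the claim that primes were not consciously seen.
print(round(d_prime(0.52, 0.49), 2))  # small value, close to zero
```

In practice, extreme rates of 0 or 1 must be corrected (e.g., with a log-linear adjustment) before applying the inverse normal transform, since z(0) and z(1) are undefined.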
The time course of cross-script translation priming and repetition priming was examined in two different scripts using a combination of the masked priming paradigm with the recording of event-related potentials (ERPs). Japanese-English bilinguals performed a semantic categorization task in their second language (L2) English and in their first language (L1) Japanese. Targets were preceded by a visually presented related (translation equivalent/repeated) or unrelated prime. The results showed that the amplitudes of the N250 and N400 ERP components were significantly modulated for L2-L2 repetition priming, L1-L2 translation priming, and L1-L1 repetition priming, but not for L2-L1 translation priming. There was also evidence for priming effects in an earlier 100-200 ms time window for L1-L1 repetition priming and L1-L2 translation priming. We argue that a change in script across primes and targets provides optimal conditions for prime word processing, hence generating very fast-acting translation priming effects when primes are in L1.
Translation Priming; Cross-Script Priming; ERPs
Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, the present study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2, 3, and 4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of co-localized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task.
Attending to a conversation in a crowded scene requires selection of relevant information, while ignoring other distracting sensory input, such as speech signals from surrounding people. The neural mechanisms of how distracting stimuli influence the processing of attended speech are not well understood. In this high-density electroencephalography (EEG) study, we investigated how different types of speech and non-speech stimuli influence the processing of attended audiovisual speech. Participants were presented with three horizontally aligned speakers who produced syllables. The faces of the three speakers flickered at specific frequencies (19 Hz for flanking speakers and 25 Hz for the center speaker), which induced steady-state visual evoked potentials (SSVEP) in the EEG that served as a measure of visual attention. The participants' task was to detect an occasional audiovisual target syllable produced by the center speaker, while ignoring distracting signals originating from the two flanking speakers. In all experimental conditions the center speaker produced a bimodal audiovisual syllable. In three distraction conditions, which were contrasted with a no-distraction control condition, the flanking speakers either produced audiovisual speech, moved their lips while producing acoustic noise, or moved their lips without producing an auditory signal. We observed behavioral interference in the reaction times (RTs), in particular when the flanking speakers produced naturalistic audiovisual speech. These effects were paralleled by enhanced 19 Hz SSVEP, indicative of a stimulus-driven capture of attention toward the interfering speakers. Our study provides evidence that non-relevant audiovisual speech signals serve as highly salient distractors, which capture attention in a stimulus-driven fashion.
crossmodal; EEG; bimodal; SSVEP; oscillatory
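Frequency tagging quantifies attention as the spectral amplitude at each tag frequency. A minimal FFT-based sketch with a synthetic two-tag signal (19 Hz and 25 Hz as in the study; the amplitudes and recording parameters are invented):

```python
import numpy as np

def ssvep_amplitude(signal, fs, freq):
    """One-sided spectral amplitude at `freq` (Hz) via the FFT. Assumes the
    signal spans an integer number of cycles of `freq` (no leakage)."""
    spectrum = np.fft.rfft(signal) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    idx = int(np.argmin(np.abs(freqs - freq)))
    return 2 * np.abs(spectrum[idx])

# Toy EEG: a 19 Hz flanker tag plus a 25 Hz center tag, 2 s at 500 Hz.
fs, dur = 500, 2.0
t = np.arange(0, dur, 1.0 / fs)
eeg = 1.2 * np.sin(2 * np.pi * 19 * t) + 0.8 * np.sin(2 * np.pi * 25 * t)
print(round(ssvep_amplitude(eeg, fs, 19), 2),
      round(ssvep_amplitude(eeg, fs, 25), 2))  # 1.2 0.8
```

An enhanced 19 Hz amplitude relative to a control condition would then index a shift of visual attention toward the flanking speakers.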
Patients with unilateral hemispheric lesions were given visual target cancellation tasks. As expected, marked contralateral and less severe ipsilateral visual inattention were observed in patients with right-sided cerebral lesions whereas those with left-sided lesions showed only mild contralateral neglect. Stimulus material (shapes vs letters) and array (random vs structured) interacted in a complex manner to influence target detection only in patients with right-sided lesions. Furthermore, the search strategy of these patients tended to be erratic, particularly when the stimuli were in an unstructured array. A structured array prompted a more systematic and efficient search. It appears, therefore, that stimulus content and spatial array affect neglect behaviour in patients with right-sided lesions and that a lack of systematic visual exploration within the extrapersonal space is one factor that contributes to visual hemispatial inattention.
In a post-cued letter identification task, participants were presented with 7-letter nonword target stimuli that were formed of a random string of consonants (DCMFPLR) or a pronounceable sequence of consonants and vowels (DAMOPUR). Targets were preceded by briefly presented pattern-masked primes that could be the same sequence of letters as the target, composed of seven different letters, or sharing either the first or last five letters of the target. There was some evidence for repetition priming effects that were independent of target type in an early component, the N/P150, thought to reflect the mapping of visual features onto letter representations, and that is insensitive to orthographic structure. Following this, pronounceable nonwords showed significantly greater repetition priming effects than consonant strings, in line with the behavioral results. Initial versus final overlap only started to influence target processing at around 200–250 ms post-target onset, at about the same time as the effects of target type emerged. The results are in line with a model where the initial parallel mapping of visual features onto a location-specific orthographic code is followed by the subsequent activation of location-invariant orthographic and phonological codes.
Pseudoword superiority effect; ERPs; Nonword processing; Masked priming
The impact of task relevance on event-related potential amplitudes of early visual processing was previously demonstrated. Study designs, however, differ greatly, not allowing simultaneous investigation of how both degree of distraction and task relevance influence processing variations. In our study, we combined different features of previous tasks. We used a modified 1-back task in which task relevant and task irrelevant stimuli were alternately presented. The task irrelevant stimuli could be from the same or from a different category as the task relevant stimuli, thereby producing high and low distracting task irrelevant stimuli. In addition, the paradigm comprised a passive viewing condition. Thus, our paradigm enabled us to compare the processing of task relevant stimuli, task irrelevant stimuli with differing degrees of distraction, and passively viewed stimuli. EEG data from twenty participants were collected and mean P100 and N170 amplitudes were analyzed. Furthermore, a potential connection between stimulus processing and symptoms of attention deficit hyperactivity disorder (ADHD) was investigated.
Our results show a modulation of peak N170 amplitudes by task relevance. N170 amplitudes to task relevant stimuli were significantly higher than to high distracting task irrelevant or passively viewed stimuli. In addition, amplitudes to low distracting task irrelevant stimuli were significantly higher than to high distracting stimuli. N170 amplitudes to passively viewed stimuli were not significantly different from either kind of task irrelevant stimuli. Participants with more symptoms of hyperactivity and impulsivity showed decreased N170 amplitudes across all task conditions. On a behavioral level, lower N170 enhancement efficiency was significantly correlated with false alarm responses.
Our results point to a processing enhancement of task relevant stimuli. Unlike P100 amplitudes, N170 amplitudes were strongly influenced by enhancement and enhancement efficiency seemed to have direct behavioral consequences. These findings have potential implications for models of clinical disorders affecting selective attention, especially ADHD.
Selective attention; Working memory; Cognitive control; P100; N170; ADHD
This study investigates the effects of profound acquired unilateral deafness on the adult human central auditory system by analyzing long-latency auditory evoked potentials (AEPs) with dipole source modeling methods. AEPs, elicited by clicks presented to the intact ear in 19 adult subjects with profound unilateral deafness and monaurally to each ear in eight adult normal-hearing controls, were recorded with a 31-channel system. The responses in the 70–210 ms time window, encompassing the N1b/P2 and Ta/Tb components of the AEPs, were modeled by a vertically and a laterally oriented dipole source in each hemisphere. Peak latencies and amplitudes of the major components of the dipole waveforms were measured in the hemispheres ipsilateral and contralateral to the stimulated ear. The normal-hearing subjects showed significant ipsilateral–contralateral latency and amplitude differences, with contralateral source activities that were typically larger and peaked earlier than the ipsilateral activities. In addition, the ipsilateral–contralateral amplitude differences from monaural presentation were similar for left and for right ear stimulation. For unilaterally deaf subjects, the previously reported reduction in ipsilateral–contralateral amplitude differences based on scalp waveforms was also observed in the dipole source waveforms. However, analysis of the source dipole activity demonstrated that the reduced inter-hemispheric amplitude differences were ear dependent. Specifically, these changes were found only in those subjects affected by profound left ear unilateral deafness.
Auditory evoked potentials (AEPs); dipoles; unilateral deafness; human; plasticity
Stimulus visibility can be reduced by other stimuli that overlap the same region of visual space, a process known as masking. Here we studied the neural mechanisms of masking in humans using source-imaged Steady State Visual Evoked Potentials (SSVEPs) and frequency-domain analysis over a wide range of relative stimulus strengths of test and mask stimuli. Test and mask stimuli were tagged with distinct temporal frequencies and we quantified spectral response components associated with the individual stimuli (self terms) and responses due to interaction between stimuli (intermodulation terms). In early visual cortex, masking alters the self-terms in a manner consistent with a reduction of input contrast. We also identify a novel signature of masking: a robust intermodulation term that peaks when the test and mask stimuli have equal contrast and disappears when they are widely different. We fit all of our data simultaneously with a family of divisive gain control models that differed only in their dynamics. Models with either very short or very long temporal integration constants for the gain pool performed worse than a model with an integration time of approximately 30 ms. Finally, the absolute magnitudes of the response were controlled by the ratio of the stimulus contrasts, not their absolute values. This “contrast-contrast” invariance suggests that many neurons in early visual cortex code relative rather than absolute contrast. Together, these results provide a more complete description of masking within the normalization framework of contrast gain control and suggest that contrast normalization accomplishes multiple functional goals.
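The divisive gain control idea and the contrast-ratio ("contrast-contrast") invariance can be illustrated with a static textbook normalization equation; this is not the paper's fitted dynamic model, and the semisaturation constant and exponent below are arbitrary choices:

```python
def normalized_response(c_test, c_mask, sigma=0.02, n=2.0):
    """Static divisive gain control (textbook normalization form):
        R = c_test**n / (c_test**n + c_mask**n + sigma**n)
    sigma and n are illustrative values, not fitted parameters."""
    return c_test ** n / (c_test ** n + c_mask ** n + sigma ** n)

# Contrast-contrast invariance: well above sigma, scaling test and mask
# contrasts by a common factor leaves the response nearly unchanged.
r1 = normalized_response(0.20, 0.20)
r2 = normalized_response(0.40, 0.40)
print(round(r1, 3), round(r2, 3))  # both close to 0.5
```

Because the mask contrast enters the denominator, raising it suppresses the test response much as a reduction of input contrast would, which is the self-term behavior the abstract describes.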
Masked priming is used in psycholinguistic studies to assess questions about lexical access and representation. We present two masked priming experiments using MEG. If the MEG signal elicited by words reflects specific aspects of lexical retrieval, then one expects to identify specific neural correlates of retrieval that are sensitive to priming. To date, the electrophysiological evidence has been equivocal. We report findings from two experiments. Both employed identity priming, where the prime and target are the same lexical item but differ in case (NEWS-news). The first experiment used only forward masking, while the prime in the second experiment was both preceded and followed by a mask (backward masking). In both studies, we find a significant behavioral effect of priming. Using MEG, we identified a component peaking approximately 225 ms post-onset of the target, whose latency was sensitive to repetition. These findings support the notion that properties of the MEG response index specific lexical processes and demonstrate that masked priming can be effectively combined with MEG to investigate the nature of lexical processing.
Magnetoencephalography; MEG; Masked Priming; Backward Masking; Identity Priming; Immediate Repetition Priming
We report three experiments that combine the masked priming paradigm with the recording of event-related potentials in order to examine the time-course of cross-modal interactions during word recognition. Visually presented masked primes preceded either visually or auditorily presented targets that were or were not the same word as the prime. Experiment 1 used the lexical decision task, and in Experiments 2 and 3 participants monitored target words for animal names. The results show a strong modulation of the N400 and an earlier ERP component (the N250) in within-modality (visual-visual) repetition priming, and a much weaker and later N400-like effect (400–700 ms) in the cross-modal (visual-auditory) condition with prime exposures of 50 ms (Experiments 1 & 2). With a prime duration of 67 ms (Experiment 3), cross-modal ERP priming effects arose earlier during the traditional N400 epoch (300–500 ms) and were also larger overall than at the shorter prime duration.
word recognition; cross-modal priming; event-related potentials
The decoding of visually presented line segments into letters, and letters into words, is critical to fluent reading abilities. Here we investigate the temporal dynamics of visual orthographic processes, focusing specifically on right hemisphere contributions and interactions between the hemispheres involved in the implicit processing of visually presented words, consonants, false fonts, and symbolic strings. High-density EEG was recorded while participants detected infrequent, simple, perceptual targets (dot strings) embedded amongst character strings. Beginning at 130 ms, orthographic and non-orthographic stimuli were distinguished by a sequence of ERP effects over occipital recording sites. These early latency occipital effects were dominated by enhanced right-sided negative-polarity activation for non-orthographic stimuli that peaked at around 180 ms. This right-sided effect was followed by bilateral positive occipital activity for false-fonts, but not symbol strings. Moreover, the size of components of this later positive occipital wave was inversely correlated with the right-sided ROcc180 wave, suggesting that subjects who had larger early right-sided activation for non-orthographic stimuli had less need for more extended bilateral (e.g., interhemispheric) processing of those stimuli shortly later. Additional early (130–150 ms) negative-polarity activity over left occipital cortex and longer-latency centrally distributed responses (>300 ms) were present, likely reflecting implicit activation of the previously reported ‘visual-word-form’ area and N400-related responses, respectively. Collectively, these results provide a close look at some relatively unexplored portions of the temporal flow of information processing in the brain related to the implicit processing of potentially linguistic information and provide valuable information about the interactions between hemispheres supporting visual orthographic processing.
word reading; ERPs; visual cortex; visual orthography
Information about object-associated manipulations is lateralized to left parietal regions, while information about the visual form of tools is represented bilaterally in ventral occipito-temporal cortex. It is unknown how lateralization of motor-relevant information in left hemisphere dorsal regions may affect the visual processing of manipulable objects. We used a lateralized masked priming paradigm to test for a Right Visual Field (RVF) advantage in tool processing. Target stimuli were tools and animals, and briefly presented primes were identical to, or scrambled versions of the targets. In Experiment 1, primes were presented either to the left or right of the centrally presented target, while in Experiment 2 primes were presented in one of 8 locations arranged radially around the target. In both experiments there was a RVF advantage in priming effects for tool but not for animal targets. Control experiments showed that participants were at chance for matching the identity of the lateralized primes in a picture-word matching experiment, and also ruled out a general RVF speed-of-processing advantage for tool images. These results indicate that the overrepresentation of tool knowledge in the left hemisphere affects visual object recognition, and suggest that interaction between the dorsal and ventral streams occurs during object categorization.
The present study used event-related potentials (ERPs) to examine the time-course of visual word recognition using a masked repetition priming paradigm. In two experiments participants monitored a stream of words for occasional animal names, and ERPs were recorded to non-animal critical target items that were either repetitions of or were unrelated to the immediately preceding masked prime word. In Experiment 1 the onset interval between the prime and target (stimulus-onset asynchrony, SOA) was manipulated across four levels (60, 180, 300 and 420 ms) and the duration of primes was held constant at 40 ms. In Experiment 2 the SOA between the prime and target was held constant at 60 ms and the prime duration was manipulated across four levels (10, 20, 30 and 40 ms). Both manipulations were found to have distinct effects on the N250 and N400 ERP components. The results provide converging evidence that the N250 reflects processing at the level of form representations (orthography and phonology) while the N400 reflects processing at the level of meaning.
Visual word processing; word recognition; N400; N250; masked priming
Visual stimuli can be classified so rapidly that their analysis may be based on a single sweep of feedforward processing through the visuomotor system. Behavioral criteria for feedforward processing can be evaluated in response priming tasks where speeded pointing or keypress responses are performed toward target stimuli which are preceded by prime stimuli. We apply this method to several classes of complex stimuli. (1) When participants classify natural images into animals or non-animals, the time course of their pointing responses indicates that prime and target signals remain strictly sequential throughout all processing stages, meeting stringent behavioral criteria for feedforward processing (rapid-chase criteria). (2) Such priming effects are boosted by selective visual attention for positions, shapes, and colors, in a way consistent with bottom-up enhancement of visuomotor processing, even when primes cannot be consciously identified. (3) Speeded processing of phobic images is observed in participants specifically fearful of spiders or snakes, suggesting enhancement of feedforward processing by long-term perceptual learning. (4) When the perceived brightness of primes in complex displays is altered by means of illumination or transparency illusions, priming effects in speeded keypress responses can systematically contradict subjective brightness judgments, such that one prime appears brighter than the other but activates motor responses as if it were darker. We propose that response priming captures the output of the first feedforward pass of visual signals through the visuomotor system, and that this output lacks some characteristic features of more elaborate, recurrent processing. In this way, visuomotor measures may become dissociated from several aspects of conscious vision. We argue that “fast” visuomotor measures predominantly driven by feedforward processing should supplement “slow” psychophysical measures predominantly based on visual awareness.
visual perception; visual awareness; response priming; rapid-chase; feature-based attention; objects; phobias; lightness