In spite of the excellent temporal resolution of event-related EEG potentials
(ERPs), the overlapping potentials evoked by masked and masking stimuli are hard
to disentangle. However, when both masked and masking stimuli consist of pairs
of relevant and irrelevant stimuli, one to the left and one to the right of
fixation, with the side of the relevant element varying between pairs, the
effects of masked and masking stimuli can be distinguished by means of the
contralateral preponderance of the potentials evoked by the relevant elements,
because the relevant elements may change sides independently in the masked and
masking stimuli. Based on a
reanalysis of data from which only selected contralateral-ipsilateral effects
had been previously published, the present contribution will provide a more
complete picture of the ERP effects in a masked-priming task. Indeed, effects
evoked by masked primes and masking targets heavily overlapped in conventional
ERPs and could be disentangled to a certain degree by contralateral-ipsilateral
differences. Their major component, the N2pc, is interpreted as indicating
preferential processing of stimuli that match the target template, a process
that can be identified neither with conscious perception nor with shifts of
spatial attention. The measurements showed that the triggering of response preparation
by the masked stimuli did not depend on their discriminability, and their
priming effects on the processing of the following target stimuli were
qualitatively different for stimulus identification and for response
preparation. These results provide another piece of evidence for the
independence of motor-related and perception-related effects of masked stimuli.
event-related potentials; masking; masked priming; N2pc; LRP; N2cc
Visual input from the left and right visual fields is processed predominantly in the contralateral hemisphere. Here we investigated whether this preference for contralateral over ipsilateral stimuli is also found in high-level visual areas that are important for the recognition of objects and faces. Human subjects were scanned with functional magnetic resonance imaging (fMRI) while they viewed and attended faces, objects, scenes, and scrambled images in the left or right visual field. With our stimulation protocol, primary visual cortex responded only to contralateral stimuli. The contralateral preference was smaller in object- and face-selective regions, and it was smallest in the fusiform gyrus. Nevertheless, each region showed a significant preference for contralateral stimuli. These results indicate that sensitivity to stimulus position is present even in high-level ventral visual cortex.
We studied visual representation in the parietal cortex by recording whole-scalp neuromagnetic responses to luminance stimuli of varying eccentricities. The stimuli were semicircles (5.5 degrees in radius) presented at horizontal eccentricities from 0 to 16 degrees, separately in the right and left hemifields. All stimuli evoked responses in the contralateral occipital and medial parietal areas. The waveforms and distributions of the occipital responses varied with stimulus side (left, right) and eccentricity, whereas the parietal responses were remarkably similar to all stimuli. The equivalent sources of the parietal signals clustered within 1 cm³ in the medial parieto-occipital sulcus and did not differ significantly between the stimuli. The strength of the parietal activation remained practically constant with increasing stimulus eccentricity, suggesting that the visual areas in the parieto-occipital sulcus lack the enhanced foveal representation typical of most other visual areas. This result strengthens our previous suggestion that the medial parieto-occipital sulcus is the human homologue of the monkey V6 complex, characterized by, for example, lack of retinotopy and the absence of relative foveal magnification.
The temporal asynchrony between inputs to different sensory modalities has been shown to be a critical factor influencing the interaction between such inputs. We used scalp-recorded event-related potentials (ERPs) to investigate the effects of attention on the processing of audiovisual multisensory stimuli as the temporal asynchrony between the auditory and visual inputs varied across the audiovisual integration window (i.e., up to 125 ms). Randomized streams of unisensory auditory stimuli, unisensory visual stimuli, and audiovisual stimuli (consisting of the temporally proximal presentation of the visual and auditory stimulus components) were presented centrally while participants attended to either the auditory or the visual modality to detect occasional target stimuli in that modality. ERPs elicited by each of the contributing sensory modalities were extracted by signal processing techniques from the combined ERP waveforms elicited by the multisensory stimuli. This was done for each of the five different 50-ms subranges of stimulus onset asynchrony (SOA: e.g., V precedes A by 125–75 ms, by 75–25 ms, etc.). The extracted ERPs for the visual inputs of the multisensory stimuli were compared among each other and with the ERPs to the unisensory visual control stimuli, separately when attention was directed to the visual or to the auditory modality. The results showed that the attention effect on the right-hemisphere visual P1 was largest when auditory and visual stimuli were temporally aligned. In contrast, the N1 attention effect was smallest at this latency, suggesting that attention may play a role in the processing of the relative temporal alignment of the constituent parts of multisensory stimuli.
At longer latencies an occipital selection negativity for the attended versus unattended visual stimuli was also observed, but this effect did not vary as a function of SOA, suggesting that by that latency a stable representation of the auditory and visual stimulus components has been established.
Electrophysiology; EEG; ERP; Multisensory; SOA
It is known that neural responses become less dependent on stimulus size and location along the visual pathway. This study aimed to use this property to find evidence of neural feedback in visual evoked potentials (VEPs). High-density VEPs evoked by a contrast-reversing checkerboard were collected from 15 normal observers using a 128-channel EEG system. The surface Laplacian method was used to calculate skull-scalp currents corresponding to the measured scalp potentials. This allowed us to identify several distinct foci of skull-scalp currents and to analyse their individual time courses. Response nonlinearity as a function of stimulus size increased markedly from the occipital to temporal loci. Similarly, the nonlinearity of reactivations (late evoked response peaks) over the occipital, lateral-occipital, and frontal scalp regions increased with peak latency. Response laterality (contralateral vs. ipsilateral) was analysed in the lateral-occipital and temporal loci. Early lateral-occipital responses were strongly contralateral, but the response laterality decreased and then disappeared for later peaks. Responses in temporal loci did not differ significantly between contralateral and ipsilateral stimulation. Overall, the results suggest that feedback from higher-tier visual areas, e.g., those in temporal cortices, may significantly contribute to reactivations in early visual areas.
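The surface-Laplacian step described above can be illustrated with a minimal sketch. The study computed Laplacian estimates across a 128-channel montage; the simpler nearest-neighbour (Hjorth) approximation below conveys the idea only — the electrode names, toy montage, and potential values are hypothetical, not the study's actual pipeline.

```python
import numpy as np

def hjorth_laplacian(potentials, neighbors):
    """Nearest-neighbour (Hjorth) approximation of the surface Laplacian:
    each electrode's potential minus the mean of its neighbours, which
    emphasizes local radial current flow and sharpens spatial foci."""
    return {ch: potentials[ch] - np.mean([potentials[n] for n in neighbors[ch]])
            for ch in potentials}

# Hypothetical toy montage: a focal response at Oz amid quieter neighbours.
pots = {"Oz": 5.0, "O1": 1.0, "O2": 1.0, "POz": 1.0, "Iz": 1.0}
nbrs = {"Oz": ["O1", "O2", "POz", "Iz"],
        "O1": ["Oz"], "O2": ["Oz"], "POz": ["Oz"], "Iz": ["Oz"]}
lap = hjorth_laplacian(pots, nbrs)
```

The Laplacian of the focal Oz site stands out strongly against its neighbourhood average, which is how distinct foci of skull-scalp current become separable in the first place.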
Previous studies have shown that emotion can have 2-fold effects on perception. At the object level, emotional stimuli benefit from a stimulus-specific boost in visual attention at the relative expense of competing stimuli. At the visual feature level, recent findings indicate that emotion may inhibit the processing of small visual details and facilitate the processing of coarse visual features. In the present study, we investigated whether emotion can boost the activation and inhibition of automatic motor responses that are generated prior to overt perception. To investigate this, we tested whether an emotional cue affects covert motor responses in a masked priming paradigm in which participants responded to target arrows that were preceded by invisible congruent or incongruent prime arrows. In the standard paradigm, participants react faster, and commit fewer errors, responding to the directionality of target arrows when they are preceded by congruent vs. incongruent masked prime arrows (positive congruency effect, PCE). However, as prime-target SOAs increase, this effect reverses (negative congruency effect, NCE). These findings have been explained as evidence for an initial activation and a subsequent inhibition of a partial response elicited by the masked prime arrow. Our results show that the presentation of fearful face cues, compared to neutral face cues, increased the size of both the PCE and the NCE, despite the fact that the primes were invisible. This is the first demonstration that emotion prepares an individual's visuomotor system for automatic activation and inhibition of motor responses in the absence of visual awareness.
emotion; masking; motor priming; fearful faces; activation; inhibition
The present study used event-related potentials (ERPs) to examine the time course of orthographic and phonological priming in the masked priming paradigm. Participants monitored visual target words for occasional animal names, and ERPs to nonanimal critical items were recorded. These critical items were preceded by different types of primes: Orthographic priming was examined using transposed-letter (TL) primes (e.g., barin-BRAIN) and their controls (e.g., bosin-BRAIN); phonological priming was examined using pseudohomophone primes (e.g., brane-BRAIN) and their controls (e.g., brant-BRAIN). Both manipulations modulated the N250 ERP component, which is hypothesized to reflect sublexical processing during visual word recognition. Orthographic (TL) priming and phonological (pseudohomophone) priming were found to have distinct topographical distributions and different timing, with orthographic effects arising earlier than phonological effects.
There have been conflicting findings as to whether the P3 brain potential to targets in oddball tasks is reduced in depressed patients. The P3 to novel distracter stimuli in a three-stimulus oddball task has a more frontocentral topography than P3 to targets and is associated with different cognitive operations and neural generators. The novelty P3 potential was predicted to be reduced in depressed patients. EEG was recorded from 30 scalp electrodes (nose reference) in 20 unmedicated depressed patients and 20 matched healthy controls during a novelty oddball task with three stimuli: infrequent target tones (12%), frequent standard tones (76%) and nontarget novel stimuli, e.g., animal or environment sounds (12%). Novel stimuli evoked a P3 potential with shorter peak latency and more frontocentral topography than the parietal-maximum P3b to target stimuli. The novelty P3 was markedly reduced in depressed patients compared to controls. Although there was a trend for patients to also have smaller parietal P3b to targets, this group difference was not statistically significant. Nor was there a group difference in the earlier N1 or N2 potentials. The novelty P3 reduction in depressed patients is indicative of a deficit in orienting of attention and evaluation of novel environmental sounds.
Depression; ERP; P3; Novelty; Attention
Selective visual attention is the process by which the visual system enhances behaviorally relevant stimuli and filters out others. Visual attention is thought to operate through a cortical mechanism known as biased competition. Representations of stimuli within cortical visual areas compete such that they mutually suppress each other's neural response. Competition increases with stimulus proximity and can be biased in favor of one stimulus over another as a function of stimulus significance, salience, or expectancy. Though there is considerable evidence of biased competition within the human visual system, the dynamics of the process remain unknown.
Here, we used scalp-recorded electroencephalography (EEG) to examine neural correlates of biased competition in the human visual system. In two experiments, subjects performed a task requiring them to either simultaneously identify two targets (Experiment 1) or discriminate one target while ignoring a decoy (Experiment 2). Competition was manipulated by altering the spatial separation between target(s) and/or decoy. Both experimental tasks should induce competition between stimuli. However, only the task of Experiment 2 should invoke a strong bias in favor of the target (over the decoy). The amplitude of two lateralized components of the event-related potential, the N2pc and Ptc, mirrored these predictions. N2pc amplitude increased with increasing stimulus separation in Experiments 1 and 2. However, Ptc amplitude varied only in Experiment 2, becoming more positive with decreased spatial separation.
These results suggest that N2pc and Ptc components may index distinct processes of biased competition—N2pc reflecting visual competitive interactions and Ptc reflecting a bias in processing necessary to individuate task-relevant stimuli.
One's own name constitutes a unique part of conscious awareness – but does this also hold true for unconscious processing? The present study shows that the own name has the power to bias a person's actions unconsciously even in conditions that render any other name ineffective. Participants judged whether a letter string on the screen was a name or a non-word while this target stimulus was preceded by a masked prime stimulus. Crucially, the participant's own name was among these prime stimuli and facilitated reactions to following name targets whereas the name of another, yoked participant did not. Signal detection results confirmed that participants were not aware of any of the prime stimuli, including their own name. These results extend traditional findings on “breakthrough” phenomena of personally relevant stimuli to the domain of unconscious processing. Thus, the brain seems to possess adroit mechanisms to identify and process such stimuli even in the absence of conscious awareness.
The time course of cross-script translation priming and repetition priming was examined in two different scripts using a combination of the masked priming paradigm with the recording of event-related potentials (ERPs). Japanese-English bilinguals performed a semantic categorization task in their second language (L2) English and in their first language (L1) Japanese. Targets were preceded by a visually presented related (translation equivalent/repeated) or unrelated prime. The results showed that the amplitudes of the N250 and N400 ERP components were significantly modulated for L2-L2 repetition priming, L1-L2 translation priming, and L1-L1 repetition priming, but not for L2-L1 translation priming. There was also evidence for priming effects in an earlier 100-200 ms time window for L1-L1 repetition priming and L1-L2 translation priming. We argue that a change in script across primes and targets provides optimal conditions for prime word processing, hence generating very fast-acting translation priming effects when primes are in L1.
Translation Priming; Cross-Script Priming; ERPs
Attending to a conversation in a crowded scene requires selection of relevant information, while ignoring other distracting sensory input, such as speech signals from surrounding people. The neural mechanisms of how distracting stimuli influence the processing of attended speech are not well understood. In this high-density electroencephalography (EEG) study, we investigated how different types of speech and non-speech stimuli influence the processing of attended audiovisual speech. Participants were presented with three horizontally aligned speakers who produced syllables. The faces of the three speakers flickered at specific frequencies (19 Hz for flanking speakers and 25 Hz for the center speaker), which induced steady-state visual evoked potentials (SSVEP) in the EEG that served as a measure of visual attention. The participants' task was to detect an occasional audiovisual target syllable produced by the center speaker, while ignoring distracting signals originating from the two flanking speakers. In all experimental conditions the center speaker produced a bimodal audiovisual syllable. In three distraction conditions, which were contrasted with a no-distraction control condition, the flanking speakers either produced audiovisual speech, moved their lips while acoustic noise was presented, or moved their lips without producing an auditory signal. We observed behavioral interference in the reaction times (RTs), in particular when the flanking speakers produced naturalistic audiovisual speech. These effects were paralleled by enhanced 19 Hz SSVEP, indicative of a stimulus-driven capture of attention toward the interfering speakers. Our study provides evidence that non-relevant audiovisual speech signals serve as highly salient distractors, which capture attention in a stimulus-driven fashion.
crossmodal; EEG; bimodal; SSVEP; oscillatory
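The frequency-tagging logic used in the study above can be sketched briefly: attention to each speaker is indexed by the amplitude of the EEG spectrum at that speaker's flicker frequency. The 19 and 25 Hz tagging frequencies come from the abstract; the sampling rate, epoch length, and synthetic signal below are illustrative assumptions, not the study's actual analysis.

```python
import numpy as np

def ssvep_amplitude(eeg, fs, freq):
    """Single-sided amplitude-spectrum value at a tagged frequency.

    eeg:  1-D array for one channel; an integer number of cycles/seconds
          long so that the tagged frequency falls on an exact FFT bin.
    fs:   sampling rate in Hz.
    freq: tagging frequency in Hz (e.g. 19 or 25).
    """
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg)) * 2 / n   # scale to signal amplitude
    return spectrum[int(round(freq * n / fs))]

# Synthetic 2-s epoch containing 19 Hz and 25 Hz components of known size.
fs, dur = 500, 2.0
t = np.arange(int(fs * dur)) / fs
sig = 3.0 * np.sin(2 * np.pi * 19 * t) + 1.5 * np.sin(2 * np.pi * 25 * t)
a19 = ssvep_amplitude(sig, fs, 19)
a25 = ssvep_amplitude(sig, fs, 25)
```

A relative increase of the 19 Hz value over conditions would correspond to the enhanced flanker-driven SSVEP reported in the abstract.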
Patients with unilateral hemispheric lesions were given visual target cancellation tasks. As expected, marked contralateral and less severe ipsilateral visual inattention were observed in patients with right-sided cerebral lesions whereas those with left-sided lesions showed only mild contralateral neglect. Stimulus material (shapes vs letters) and array (random vs structured) interacted in a complex manner to influence target detection only in patients with right-sided lesions. Furthermore, the search strategy of these patients tended to be erratic, particularly when the stimuli were in an unstructured array. A structured array prompted a more systematic and efficient search. It appears, therefore, that stimulus content and spatial array affect neglect behaviour in patients with right-sided lesions and that a lack of systematic visual exploration within the extrapersonal space is one factor that contributes to visual hemispatial inattention.
This study investigates the effects of profound acquired unilateral deafness on the adult human central auditory system by analyzing long-latency auditory evoked potentials (AEPs) with dipole source modeling methods. AEPs, elicited by clicks presented to the intact ear in 19 adult subjects with profound unilateral deafness and monaurally to each ear in eight adult normal-hearing controls, were recorded with a 31-channel system. The responses in the 70–210 ms time window, encompassing the N1b/P2 and Ta/Tb components of the AEPs, were modeled by a vertically and a laterally oriented dipole source in each hemisphere. Peak latencies and amplitudes of the major components of the dipole waveforms were measured in the hemispheres ipsilateral and contralateral to the stimulated ear. The normal-hearing subjects showed significant ipsilateral–contralateral latency and amplitude differences, with contralateral source activities that were typically larger and peaked earlier than the ipsilateral activities. In addition, the ipsilateral–contralateral amplitude differences from monaural presentation were similar for left and for right ear stimulation. For unilaterally deaf subjects, the previously reported reduction in ipsilateral–contralateral amplitude differences based on scalp waveforms was also observed in the dipole source waveforms. However, analysis of the source dipole activity demonstrated that the reduced inter-hemispheric amplitude differences were ear dependent. Specifically, these changes were found only in those subjects affected by profound left ear unilateral deafness.
Auditory evoked potentials (AEPs); dipoles; unilateral deafness; human; plasticity
Stimulus visibility can be reduced by other stimuli that overlap the same region of visual space, a process known as masking. Here we studied the neural mechanisms of masking in humans using source-imaged steady-state visual evoked potentials (SSVEPs) and frequency-domain analysis over a wide range of relative stimulus strengths of test and mask stimuli. Test and mask stimuli were tagged with distinct temporal frequencies, and we quantified spectral response components associated with the individual stimuli (self terms) and responses due to interaction between stimuli (intermodulation terms). In early visual cortex, masking alters the self terms in a manner consistent with a reduction of input contrast. We also identify a novel signature of masking: a robust intermodulation term that peaks when the test and mask stimuli have equal contrast and disappears when they are widely different. We fit all of our data simultaneously with a family of divisive gain control models that differed only in their dynamics. Models with either very short or very long temporal integration constants for the gain pool performed worse than a model with an integration time of approximately 30 ms. Finally, the absolute magnitudes of the response were controlled by the ratio of the stimulus contrasts, not their absolute values. This “contrast-contrast” invariance suggests that many neurons in early visual cortex code relative rather than absolute contrast. Together, these results provide a more complete description of masking within the normalization framework of contrast gain control and suggest that contrast normalization accomplishes multiple functional goals.
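A static divisive gain control of the kind invoked above can be sketched in a few lines. This is a sketch under stated assumptions: the exponent and semi-saturation constant are arbitrary illustrative values, and the fitted ~30 ms dynamics of the gain pool are omitted entirely.

```python
def divisive_response(c_test, c_mask, n=2.0, sigma=0.02):
    """Steady-state response to the test stimulus when the mask contributes
    only to the divisive gain pool: c_t^n / (c_t^n + c_m^n + sigma^n).
    Static sketch; the study's models additionally specify pool dynamics."""
    return c_test ** n / (c_test ** n + c_mask ** n + sigma ** n)

# Masking reduces the test response, as if its input contrast were lowered.
r_alone = divisive_response(0.2, 0.0)
r_masked = divisive_response(0.2, 0.2)

# "Contrast-contrast" invariance: well above sigma, only the test/mask
# contrast *ratio* matters, not the absolute contrast values.
r_low = divisive_response(0.1, 0.2)
r_high = divisive_response(0.2, 0.4)
```

Doubling both contrasts leaves the response essentially unchanged because numerator and denominator scale together once sigma is negligible, which is the normalization account of relative-contrast coding.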
Masked priming is used in psycholinguistic studies to assess questions about lexical access and representation. We present two masked priming experiments using MEG. If the MEG signal elicited by words reflects specific aspects of lexical retrieval, then one expects to identify specific neural correlates of retrieval that are sensitive to priming. To date, the electrophysiological evidence has been equivocal. We report findings from two experiments. Both employed identity priming, where the prime and target are the same lexical item but differ in case (NEWS-news). The first experiment used only forward masking, while the prime in the second experiment was both preceded and followed by a mask (backward masking). In both studies, we find a significant behavioral effect of priming. Using MEG, we identified a component peaking approximately 225 ms post-onset of the target, whose latency was sensitive to repetition. These findings support the notion that properties of the MEG response index specific lexical processes and demonstrate that masked priming can be effectively combined with MEG to investigate the nature of lexical processing.
Magnetoencephalography; MEG; Masked Priming; Backward Masking; Identity Priming; Immediate Repetition Priming
We report three experiments that combine the masked priming paradigm with the recording of event-related potentials in order to examine the time-course of cross-modal interactions during word recognition. Visually presented masked primes preceded either visually or auditorily presented targets that were or were not the same word as the prime. Experiment 1 used the lexical decision task, and in Experiments 2 and 3 participants monitored target words for animal names. The results show a strong modulation of the N400 and of an earlier ERP component (the N250) in within-modality (visual-visual) repetition priming, and a much weaker and later N400-like effect (400–700 ms) in the cross-modal (visual-auditory) condition with prime exposures of 50 ms (Experiments 1 & 2). With a prime duration of 67 ms (Experiment 3), cross-modal ERP priming effects arose earlier during the traditional N400 epoch (300–500 ms) and were also larger overall than at the shorter prime duration.
word recognition; cross-modal priming; event-related potentials
The decoding of visually presented line segments into letters, and letters into words, is critical to fluent reading abilities. Here we investigate the temporal dynamics of visual orthographic processes, focusing specifically on right hemisphere contributions and interactions between the hemispheres involved in the implicit processing of visually presented words, consonants, false fonts, and symbolic strings. High-density EEG was recorded while participants detected infrequent, simple, perceptual targets (dot strings) embedded amongst a stream of character strings. Beginning at 130 ms, orthographic and non-orthographic stimuli were distinguished by a sequence of ERP effects over occipital recording sites. These early latency occipital effects were dominated by enhanced right-sided negative-polarity activation for non-orthographic stimuli that peaked at around 180 ms. This right-sided effect was followed by bilateral positive occipital activity for false fonts, but not symbol strings. Moreover, the size of components of this later positive occipital wave was inversely correlated with the right-sided ROcc180 wave, suggesting that subjects who had larger early right-sided activation for non-orthographic stimuli had less need for more extended bilateral (e.g., interhemispheric) processing of those stimuli shortly later. Additional early (130–150 ms) negative-polarity activity over left occipital cortex and longer-latency centrally distributed responses (>300 ms) were present, likely reflecting implicit activation of the previously reported ‘visual-word-form’ area and N400-related responses, respectively. Collectively, these results provide a close look at some relatively unexplored portions of the temporal flow of information processing in the brain related to the implicit processing of potentially linguistic information, and provide valuable information about the interactions between hemispheres supporting visual orthographic processing.
word reading; ERPs; visual cortex; visual orthography
The present study used event-related potentials (ERPs) to examine the time-course of visual word recognition using a masked repetition priming paradigm. In two experiments participants monitored a stream of words for occasional animal names, and ERPs were recorded to non-animal critical target items that were either repetitions of, or unrelated to, the immediately preceding masked prime word. In Experiment 1 the onset interval between the prime and target (stimulus-onset asynchrony – SOA) was manipulated across four levels (60, 180, 300 and 420 ms) and the duration of primes was held constant at 40 ms. In Experiment 2 the SOA between the prime and target was held constant at 60 ms and the prime duration was manipulated across four levels (10, 20, 30 and 40 ms). Both manipulations were found to have distinct effects on the N250 and N400 ERP components. The results provide converging evidence that the N250 reflects processing at the level of form representations (orthography and phonology) while the N400 reflects processing at the level of meaning.
Visual word processing; word recognition; N400; N250; masked priming
Visual stimuli can be classified so rapidly that their analysis may be based on a single sweep of feedforward processing through the visuomotor system. Behavioral criteria for feedforward processing can be evaluated in response priming tasks where speeded pointing or keypress responses are performed toward target stimuli which are preceded by prime stimuli. We apply this method to several classes of complex stimuli. (1) When participants classify natural images into animals or non-animals, the time course of their pointing responses indicates that prime and target signals remain strictly sequential throughout all processing stages, meeting stringent behavioral criteria for feedforward processing (rapid-chase criteria). (2) Such priming effects are boosted by selective visual attention for positions, shapes, and colors, in a way consistent with bottom-up enhancement of visuomotor processing, even when primes cannot be consciously identified. (3) Speeded processing of phobic images is observed in participants specifically fearful of spiders or snakes, suggesting enhancement of feedforward processing by long-term perceptual learning. (4) When the perceived brightness of primes in complex displays is altered by means of illumination or transparency illusions, priming effects in speeded keypress responses can systematically contradict subjective brightness judgments, such that one prime appears brighter than the other but activates motor responses as if it was darker. We propose that response priming captures the output of the first feedforward pass of visual signals through the visuomotor system, and that this output lacks some characteristic features of more elaborate, recurrent processing. This way, visuomotor measures may become dissociated from several aspects of conscious vision. We argue that “fast” visuomotor measures predominantly driven by feedforward processing should supplement “slow” psychophysical measures predominantly based on visual awareness.
visual perception; visual awareness; response priming; rapid-chase; feature-based attention; objects; phobias; lightness
In an experiment combining masked repetition priming and the recording of event-related potentials (ERPs) the location of prime stimuli relative to centrally located target words was manipulated. Prime words could appear at the same location as targets or shifted one letter position to the right or to the left. Repetition priming effects (amplitude differences across the repeat vs. unrelated prime conditions) were found in a series of ERP components starting at around 100 msec posttarget onset. The earliest of these, the N/P150 component, was found to be sensitive to prime location. Repetition priming was only apparent with centrally located primes in this component. Repetition priming effects in later components (N250 and N400), on the other hand, were not affected by prime location. The results are interpreted in terms of location-specific letter detectors that map onto a higher level, location-invariant orthographic code for printed words.
Participants performed a priming task during which emotional faces served as prime stimuli and emotional words served as targets. Prime-target pairs were congruent or incongruent and two levels of prime visibility were obtained by varying the duration of the masked primes. To probe a neural signature of the impact of the masked primes, lateralized readiness potentials (LRPs) were recorded over motor cortex. During the high-visibility condition, responses to word targets were faster when the prime-target pairs were congruent than when they were incongruent, providing evidence of priming effects. In line with the behavioral results, the electrophysiological data showed that high-visibility face primes resulted in LRP differences between congruent and incongruent trials, suggesting that prime stimuli initiated motor preparation. Contrary to the above pattern, no evidence for reaction time or LRP differences was observed during the low-visibility condition, revealing that the depth of facial expression processing is dependent upon stimulus visibility.
Visual evoked potentials (VEPs) to lateralised light flashes were recorded from two acallosal patients. In one patient, these recordings were made while he performed a choice-reaction time task, and in the other patient the VEPs were obtained during a simple reaction time task. In both cases the patient's VEPs from electrode sites contralateral to the visual field of stimulus delivery resembled those of controls. Their VEPs from ipsilateral sites were aberrant, however, in that while control subjects showed a smaller and slightly delayed ipsilateral N160 component, this was not discernible in the patients' data. It is concluded that the ipsilateral N160 relies for its generation on the transcallosal transfer of information processed initially by the contralateral hemisphere.
Most visual stimuli we experience on a day-to-day basis are continuous sequences, with spatial structure highly correlated in time. During rapid serial visual presentation (RSVP), this correlation is absent. Here we study how subjects' target detection responses, both behavioral and electrophysiological, differ between continuous serial visual presentation (CSVP), flashed serial visual presentation (FSVP) and RSVP. Behavioral results show longer reaction times for CSVP compared to the FSVP and RSVP conditions, as well as a difference in miss rate between RSVP and the other two conditions. Using mutual information, we measure electrophysiological differences in the electroencephalography (EEG) for these three conditions. We find two peaks in the mutual information between EEG and stimulus class (target vs. distractor), with the second peak occurring 30–40 ms earlier for the FSVP and RSVP conditions. In addition, we find differences in the persistence of the peak mutual information between the FSVP and RSVP conditions. We further investigate these differences using a mutual information based functional connectivity analysis and find significant fronto-parietal functional coupling for RSVP and FSVP but no significant coupling for the CSVP condition. We discuss these findings within the context of attentional engagement, evidence accumulation and short-term visual memory.
target detection; visual presentation; electroencephalography; mutual information
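The mutual-information measure in the study above can be sketched with a simple plug-in estimator: bin the EEG amplitude at a given time point, then compute the information shared between the binned amplitude and the binary stimulus class. The bin count, sample sizes, and synthetic amplitudes below are illustrative assumptions, not the study's estimator.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in estimate (in bits) of the mutual information between a
    continuous amplitude x (discretized into equal-width bins) and a
    binary class label y. Note: biased upward for small samples."""
    edges = np.histogram_bin_edges(x, bins=bins)[1:-1]   # interior edges
    x_bin = np.digitize(x, edges)                        # values 0..bins-1
    joint = np.zeros((bins, 2))
    np.add.at(joint, (x_bin, np.asarray(y)), 1)          # joint histogram
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

# Synthetic "EEG amplitudes": target trials shifted relative to distractors.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 500)                # 0 = distractor, 1 = target
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(3, 1, 500)])
mi = mutual_information(x, y)             # clearly above zero: decodable
mi_null = mutual_information(rng.normal(size=1000), y)   # near zero
```

Repeating this at every latency yields the information time course in which the two peaks described in the abstract would appear.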
We examined whether individual differences in hemispheric utilization can interact with the intrinsic attentional biases of the cerebral hemispheres. Evidence suggests that the hemispheres have competing biases to direct attention contralaterally, with the left hemisphere (LH) having a stronger bias than the right hemisphere. There is also evidence that individuals have characteristic biases to utilize one hemisphere more than the other for processing information, which can induce a bias to direct attention to contralateral space. We predicted that LH-biased individuals would display a strong rightward attentional bias, which would create difficulty in selectively attending to target stimuli in the left visual field (LVF) as compared to the right during performance of a bilateral flanker task.
Consistent with our hypothesis, flanker interference effects were found on the N2c event-related brain potential and error rate for LH-biased individuals in the Attend-LVF condition. The error rate effect was correlated with the degree of hemispheric utilization bias for the LH-Bias group.
We conclude that hemispheric utilization bias can enhance a hemisphere's contralateral attentional bias, at least for individuals with a LH utilization bias. Hemispheric utilization bias may play an important and largely unrecognized role in visuospatial attention.