The extent to which age-related differences in neural markers of visual processing are influenced by changes in visual acuity has not been systematically investigated. Studies often indicate that their subjects had normal or corrected-to-normal vision, but this assessment most often appears to be based only on self-report. Consistent with prior research, subjects had to report normal or corrected-to-normal vision to be included in the current study. Additionally, visual acuity was formally tested using a Snellen eye chart. Event-related potentials (ERPs) were studied in young adults (18–32 years old), young-old adults (65–79 years old), and old-old adults (80+ years old) while they performed a visual processing task involving selective attention to color. Age-related differences in the latency and amplitude of ERP markers of early visual processing, the posterior P1 and N1 components, were examined. All results were then re-analyzed after controlling for visual acuity. We found that visual acuity declined as a function of age. Accounting for visual acuity had an impact on whether older and younger adults differed significantly in the size and latency of the posterior P1 and N1 components. After controlling for visual acuity, age-related increases in P1 and N1 latency did not remain significant, and older adults were found to have a larger P1 amplitude than young adults. Our results suggest that until the relationship between age-associated differences in visual acuity and early ERPs is clearly established, investigators should be cautious when interpreting the meaning of their findings. Self-reports about visual acuity may be inaccurate, necessitating formal measures. Additional investigation is needed to help establish guidelines for future research, especially of very old adults.
Aging; Visual Processing; Visual Acuity; ERPs
Few studies have focused on language processing across modalities. Two experiments examined between-modality interactions across three prime–target intervals (0, 200, and 800 ms) in a cross-modal repetition priming paradigm. Event-related potentials were recorded to auditory targets following visual primes (Experiment 1) or visual targets following auditory primes (Experiment 2). In Experiment 1, robust repetition effects for auditory targets emerged as early as 100 ms and continued through the N400 epoch. Moreover, these visual–auditory repetition effects were large across all three prime–target intervals, although they onset 200 ms later at the shortest interval. In Experiment 2, repetition effects to visual targets started later (at 200 ms), but also offset relatively later (∼1000 ms). These auditory–visual repetition effects were both smaller overall and absent for the two shortest prime–target intervals during the typical N400 window.
ERPs; N400; Repetition priming; Word recognition
The present study explored when and how the top-down intention to speak influences the language production process. We did so by comparing the brain’s electrical response for a variable known to affect lexical access, namely word frequency, during overt object naming and non-verbal object categorization. We found that during naming, the event-related brain potentials elicited for objects with low frequency names started to diverge from those with high frequency names as early as 152 ms after stimulus onset, while during non-verbal categorization the same frequency comparison appeared 200 ms later, eliciting a qualitatively different brain response. Thus, only when participants had the conscious intention to name an object did the brain rapidly engage in lexical access. The data offer evidence that top-down intention to speak proactively facilitates the activation of words related to perceived objects.
Language production; Lexical access; Task intention; ERPs; Top-down processing
In a post-cued letter identification task, participants were presented with 7-letter nonword target stimuli that were formed of a random string of consonants (DCMFPLR) or a pronounceable sequence of consonants and vowels (DAMOPUR). Targets were preceded by briefly presented pattern-masked primes that could be the same sequence of letters as the target, composed of seven different letters, or sharing either the first or last five letters of the target. There was some evidence for repetition priming effects that were independent of target type in an early component, the N/P150, which is thought to reflect the mapping of visual features onto letter representations and to be insensitive to orthographic structure. Following this, pronounceable nonwords showed significantly greater repetition priming effects than consonant strings, in line with the behavioral results. Initial versus final overlap only started to influence target processing at around 200–250 ms post-target onset, at about the same time as the effects of target type emerged. The results are in line with a model where the initial parallel mapping of visual features onto a location-specific orthographic code is followed by the subsequent activation of location-invariant orthographic and phonological codes.
Pseudoword superiority effect; ERPs; Nonword processing; Masked priming
Using event-related potentials, we investigated how the brain extracts information from another’s face and translates it into relevant action in real-time. In Study 1, participants made between-hand sex categorizations of sex-typical and sex-atypical faces. Sex-atypical faces evoked negativity between 250–550 ms (N300/N400 effects), reflecting the integration of accumulating sex-category knowledge into a coherent sex-category interpretation. Additionally, the lateralized readiness potential (LRP) revealed that the motor cortex began preparing for a correct hand response while social category knowledge was still gradually evolving in parallel. In Study 2, participants made between-hand eye-color categorizations as part of go/no-go trials that were contingent on a target’s sex. On no-go trials, although the hand did not actually move, information about eye color partially prepared the motor cortex to move the hand before perception of sex had finalized. Together, these findings demonstrate the dynamic continuity between person perception and action, such that ongoing results from face processing are immediately and continuously cascaded into the motor system over time. The preparation of action begins based on tentative perceptions of another’s face before perceivers have finished interpreting what they just saw.
person perception; action; face perception; social categorization; motor processes; ERPs
Just as syntax differentiates coherent sentences from scrambled word strings, the comprehension of sequential images must also use a cognitive system to distinguish coherent narrative sequences from random strings of images. We conducted experiments analogous to two classic studies of language processing to examine the contributions of narrative structure and semantic relatedness to processing sequential images. We compared four types of comic strips: 1) Normal sequences with both structure and meaning, 2) Semantic Only sequences (in which the panels were related to a common semantic theme, but had no narrative structure), 3) Structural Only sequences (narrative structure but no semantic relatedness), and 4) Scrambled sequences of randomly-ordered panels. In Experiment 1, participants monitored for target panels in sequences presented panel-by-panel. Reaction times were slowest to panels in Scrambled sequences, intermediate in both Structural Only and Semantic Only sequences, and fastest in Normal sequences. This suggests that both semantic relatedness and narrative structure offer advantages to processing. Experiment 2 measured ERPs to all panels across the whole sequence. The N300/N400 was largest to panels in both the Scrambled and Structural Only sequences, intermediate in Semantic Only sequences and smallest in the Normal sequences. This implies that a combination of narrative structure and semantic relatedness can facilitate semantic processing of upcoming panels (as reflected by the N300/N400). Also, panels in the Scrambled sequences evoked a larger left-lateralized anterior negativity than panels in the Structural Only sequences. This localized effect was distinct from the N300/N400, and appeared despite the fact that these two sequence types were matched on local semantic relatedness between individual panels. These findings suggest that sequential image comprehension uses a narrative structure that may be independent of semantic relatedness. Altogether, we argue that the comprehension of visual narrative is guided by an interaction between structure and meaning.
Coherence; Comics; Discourse; ERPs; Event-related Potentials; Film; Image Sequence; LAN; N300; N400; Narrative; Pictures; Structure; Visual Language
We describe the results of a study that combines ERP recordings and sandwich priming, a variant of masked priming that provides a brief preview of the target prior to prime presentation (Lupker & Davis, 2009). This technique has been shown to increase the size of masked priming effects seen in behavioral responses. We found a similar increase in the sensitivity of ERP priming effects in an orthographic priming experiment manipulating the position of overlap of letters shared by primes and targets. Targets were 6-letter words and primes were formed of the 1st, 3rd, 4th, and 6th letters of targets in the related condition. Primes could be concatenated or hyphenated and could be centered on fixation or displaced by two letter spaces to the left or right. Priming effects with concatenated and/or displaced primes only started to emerge at 250 ms post-target onset, whereas priming effects from centrally located hyphenated primes emerged about 100 ms earlier.
sandwich priming; ERPs; orthographic priming; relative-position priming; location invariance
When a word is preceded by a supportive context such as a semantically associated word or a strongly constraining sentence frame, the N400 component of the ERP is reduced in amplitude. An ongoing debate is the degree to which this reduction reflects a passive spread of activation across long-term semantic memory representations as opposed to specific predictions about upcoming input. We addressed this question by embedding semantically associated prime-target pairs within an experimental context that encouraged prediction to a greater or lesser degree. The proportion of related items was used to manipulate the predictive validity of the prime for the target while holding semantic association constant. A semantic category probe detection task was used to encourage semantic processing and to preclude the need for a motor response on the trials of interest. A larger N400 reduction to associated targets was observed in the high than the low relatedness proportion condition, consistent with the hypothesis that predictions about upcoming stimuli make a substantial contribution to the N400 effect. We also observed an earlier priming effect (205–240 ms) in the high proportion condition, which may reflect facilitation due to form-based prediction. In sum, the results suggest that predictability modulates N400 amplitude to a greater degree than the semantic content of the context.
The present study used ERPs to provide precise temporal information about the modulation of masked repetition priming effects by word frequency during the course of target word recognition. Contrary to the pattern seen with behavioral response times in prior research, we predicted that high-frequency words should generate larger and earlier peaking repetition priming effects than low-frequency words in the N400 time window. This prediction was supported by the results of two experiments. Furthermore, repetition priming effects in the N250 time window were found for low-frequency words in both experiments, whereas for high-frequency words these effects were only seen at the shorter (50 ms) SOA used in Experiment 2, and not in Experiment 1 (70 ms SOA). We explain this pattern as resulting from reset mechanisms operating on the form representations activated by prime stimuli when primes and targets are processed as separate perceptual events.
masked priming; ERPs; word frequency; repetition; sigmoid activation function
We measured Event-Related Potentials (ERPs) and naming times to picture targets preceded by masked words (stimulus onset asynchrony: 80 ms) that shared one of three different types of relationship with the names of the pictures: (1) Identity related, in which the prime was the name of the picture (“socks” –
Semantic interference; Lexical selection; Response selection; Speech production; ERP; N400
In two experiments, the effect of the duration (40, 80 and 120 ms) of pattern masked prime words on subsequent target word processing was measured using event-related potentials. In Experiment 1, target words were either repetitions of the prior masked prime (car-CAR) or were another unrelated word (job-CAR). In Experiment 2, primes and targets were either semantically related (cap-HAT) or were unrelated (car-HAT). Unrelated target words produced larger N400s than did repeated (Exp 1) or semantically related (Exp 2) words across the different prime durations, and these N400 priming effects tended to be smaller overall for semantic than for repetition priming. Moreover, there was only a modest decline in the size of N400 repetition priming at the shortest prime durations, and there was no relationship between this N400 effect and a measure of prime categorization performance. However, the size of semantic priming at the shortest durations was relatively smaller than at longer durations and was correlated with prime categorization performance. The findings are discussed in the context of the functional significance of the N400 as well as a model that argues for different mechanisms during masked repetition and semantic priming.
ERPs; N400; Masked priming; Semantic priming; Repetition priming
In two experiments, the effects of word repetition, synonymy, and coreference on event-related brain potentials during text processing were studied. Participants read one-sentence (Experiment 1) or two-sentence (Experiment 2) texts in which critical nouns were preceded by the definite (the) or indefinite (a) articles. Experiment 1 was run as a control to verify that differences in article processing in the second sentences of Experiment 2 would not contaminate the ERPs to critical noun items. They did not. In Experiment 2, an initial sentence was used to set up a context and contained either a first presentation or synonym of the critical word from the second sentence. N400 (but not Late Positive Component; LPC) priming effects were found for repetitions and synonyms (larger for repetitions) in second sentences. This extends observations of priming in word lists and single sentences to two-sentence texts. There was also a greater left anterior negativity or “LAN” for coreferential critical nouns (those following the article “The”) compared to non-coreferential critical nouns (those following the article “A”) suggesting that ERPs are sensitive to working memory processes engaged during referential assignment. In response to the articles themselves, there was a greater N400-700 elicited by the article “A” vs. “The.” Finally, there was a greater N400-like negativity to the final words of non-coreferential sentences implying that the meanings of these sentences were difficult to integrate with the discourse level representation established by the prior sentence.
N400; LAN; ERPs; Coreference; Anaphoric processing; Sentence processing
Prominent models of face perception posit that the encoding of social categories begins after initial structural encoding is complete. In contrast, we hypothesized that social category encoding may occur simultaneously with structural encoding. While event-related potentials were recorded, participants categorized the sex of sex-typical and sex-atypical faces. Results indicated that the face-sensitive right N170, a component involved in structural encoding, was larger for sex-typical relative to sex-atypical faces. Moreover, its amplitude predicted the efficiency of sex-category judgments. The right P1 component also peaked earlier for sex-typical faces. These findings show that social category encoding and the extraction of lower-level face information operate in parallel, suggesting that they may be accomplished by a single dynamic process rather than two separate mechanisms.
face perception; N170; P1; person perception; social categorization; visual encoding
Numerous studies have demonstrated that selective attention to color is associated with a larger neural response under attend than ignore conditions, but have not addressed whether this difference reflects enhanced activity under attend or suppressed activity under ignore. In this study, a color-neutral condition was included, which presented stimuli physically identical to those under attend and ignore conditions, but in which color was not task relevant. Attention to color did not modulate the early sensory-evoked P1 and N1 components. Traditional ERP markers of early selection (the anterior Selection Positivity and posterior Selection Negativity) did not differ between the attend and neutral conditions, arguing against a mechanism of enhanced activity. However, there were markedly reduced responses under the ignore relative to the neutral condition, consistent with the view that early selection mechanisms reflect suppression of neural activity under the ignore condition.
The present study used event-related potentials (ERPs) to examine the time-course of visual word recognition using a masked repetition priming paradigm. In two experiments, participants monitored a stream of words for occasional animal names, and ERPs were recorded to non-animal critical target items that were either repetitions of or were unrelated to the immediately preceding masked prime word. In Experiment 1 the onset interval between the prime and target (stimulus onset asynchrony, SOA) was manipulated across four levels (60, 180, 300 and 420 ms) and the duration of primes was held constant at 40 ms. In Experiment 2 the SOA between the prime and target was held constant at 60 ms and the prime duration was manipulated across four levels (10, 20, 30 and 40 ms). Both manipulations were found to have distinct effects on the N250 and N400 ERP components. The results provide converging evidence that the N250 reflects processing at the level of form representations (orthography and phonology) while the N400 reflects processing at the level of meaning.
Visual word processing; word recognition; N400; N250; masked priming
The current experiment examined invariance to pictures of objects rotated in depth using event-related potentials (ERPs) and masked repetition priming. Specifically, we rotated objects 30°, 60° or 150° from their canonical view and, across two experiments, varied the prime duration (50 or 90 ms). We examined three ERP components, the N/P190, N300 and N400. In Experiment 1, only the 30° rotation condition produced repetition priming effects on the N/P190, N300 and N400. The other rotation conditions only showed repetition priming effects on the early perceptual component, the N/P190. Experiment 2 extended the prime duration to 90 ms to determine whether additional exposure to the prime may produce invariance on the N300 and N400 for the 60° and 150° rotation conditions. Repetition priming effects were found for all rotation conditions across the N/P190, N300 and N400 components. We interpret these results to suggest that whether or not view-invariant priming effects are found depends partly on the extent to which the representation of an object has been activated.
masked priming; object recognition; event-related potentials; view-point invariance; view-point dependence
This study reports a new approach to studying the time-course of the perceptual processing of objects by combining for the first time the masked repetition priming technique with the recording of event-related potentials (ERPs). In a semantic categorization task ERPs were recorded to repeated and unrelated target pictures of common objects that were immediately preceded by briefly presented pattern masked prime objects. Three sequential ERP effects were found between 100 and 650 ms post-target onset. These effects included an early posterior positivity/anterior negativity (N/P190) that was suggested to reflect early feature processing in visual cortex. This early effect was followed by an anterior negativity (N300) that was suggested to reflect processing of object-specific representations and finally by a widely distributed negativity (N400) that was argued to reflect more domain general semantic processing.
ERP; N400; N300; Masked priming; Object recognition
The present study used event-related potentials (ERPs) to examine the time course of visual word recognition using a masked repetition priming paradigm. Participants monitored target words for occasional animal names, and ERPs were recorded to nonanimal critical items that were full repetitions, partial repetitions, or unrelated to the immediately preceding masked prime word. The results showed a strong modulation of the N400 and three earlier ERP components (P150, N250, and P325) that we propose reflect sequential overlapping steps in the processing of printed words.
Event-related potentials were recorded during the visual presentation of words in the three languages of French-English-Spanish trilinguals. Participants monitored a mixed list of unrelated non-cognate words in the three languages while performing a semantic categorization task. N400s to L1 words peaked earlier than those to L2 and L3 words, which peaked together. L2 and L3 words did, however, differ significantly in N400 amplitude, with L3 words generating greater mean amplitudes than L2 words. We interpret the peak N400 latency effect as reflecting the special status of the L1 relative to later acquired languages, rather than proficiency in that language per se. The mean amplitude difference between L2 and L3, in contrast, is thought to reflect different levels of fluency in these two languages.
language effects; trilingualism; visual word recognition; N400
This study took advantage of the subsecond temporal resolution of ERPs to investigate mechanisms underlying age- and performance-related differences in working memory. Young and old subjects participated in a verbal n-back task with three levels of difficulty. Each group was divided into high and low performers based on accuracy under the 2-back condition. Both old subjects and low-performing young subjects exhibited impairments in preliminary mismatch/match detection operations (indexed by the anterior N2 component). This may have undermined the quality of information available for the subsequent decision-making process (indexed by the P3 component), necessitating the appropriation of more resources. Additional anterior and right hemisphere activity was recruited by old subjects. Neural efficiency and the capacity to allocate more resources to decision-making differed between high and low performers in both age groups. Under low demand conditions, high performers executed the task utilizing fewer resources than low performers (indexed by the P3 amplitude). As task requirements increased, high-performing young and old subjects were able to appropriate additional resources to decision-making, whereas their low-performing counterparts allocated fewer resources. Higher task demands increased utilization of processing capacity for operations other than decision-making (e.g., sustained attention) that depend upon a shared pool of limited resources. As demands increased, all groups allocated additional resources to the process of sustaining attention (indexed by the posterior slow wave). Demands appeared to have exceeded capacity in low performers, leading to a reduction of resources available to the decision-making process, which likely contributed to a decline in performance.
Comparisons of word and picture processing using Event-Related Potentials (ERPs) are contaminated by gross physical differences between the two types of stimuli. In the present study, we tackle this problem by comparing picture processing with word processing in an alphabetic and a logographic script, which are also characterized by gross physical differences. Native Mandarin Chinese speakers viewed pictures (line drawings) and Chinese characters (Experiment 1), native English speakers viewed pictures and English words (Experiment 2), and naïve Chinese readers (native English speakers) viewed pictures and Chinese characters (Experiment 3) in a semantic categorization task. The varying pattern of differences in the ERPs elicited by pictures and words across the three experiments provided evidence for i) script-specific processing arising between 150–200 ms post-stimulus onset, ii) domain-specific but script-independent processing arising between 200–300 ms post-stimulus onset, and iii) processing that depended on stimulus meaningfulness in the N400 time window. The results are interpreted in terms of differences in the way visual features are mapped onto higher-level representations for pictures and words in alphabetic and logographic writing systems.
ERP; N170; N400; word processing; picture processing; Chinese character
This study investigated the influence of direction of attention on the early detection of visual novelty, as indexed by the anterior N2. The anterior N2 was measured in young subjects (n=32) under Attend and Ignore conditions. Subjects were presented standard, target/rare, and perceptually novel visual stimuli under both conditions, but under the Ignore condition, attention was directed towards an auditory n-back task. The size of the anterior N2 to novel stimuli did not differ between conditions and was significantly larger than the anterior N2 to all other stimulus types. Furthermore, under the Ignore condition, the anterior N2 to visual novel stimuli was not affected by the level of difficulty of the auditory n-back task (3-back vs. 2-back). Our findings suggest that the early processing of visual novelty, as measured by the size of the anterior N2, is not strongly modulated by direction of attention.
In a previous study of native-English-speaking university learners of a second language (Spanish), we observed an asymmetric pattern of ERP translation priming effects in L1 and L2 (Alvarez et al., 2003, Brain & Language, 87, 290–304), with larger and earlier priming on the N400 component in the L2-to-L1 compared with the L1-to-L2 direction. In the current study, 20 native-Russian speakers who were also highly proficient in English participated in a mixed-language lexical decision task in which critical words were presented in Russian (L1) and English (L2) and repetitions of these words (within and between languages) were presented on subsequent trials. ERPs were recorded to all items, allowing for comparisons of repetition effects within and between (translation) languages. The results revealed a symmetrical pattern of within-language repetition and between-language translation ERP priming effects, which, in conjunction with Alvarez et al. (2003), supports the hypothesis that L2 proficiency level, rather than age or order of language acquisition, is responsible for the observed patterns of translation priming. The ramifications of these results for models of bilingual word processing are discussed.
ERPs were used to explore the different patterns of processing of cognate and noncognate words in the first (L1) and second (L2) language of a population of second language learners. L1 English students of French were presented with blocked lists of L1 and L2 words, and ERPs to cognates and noncognates were compared within each language block. For both languages, cognates elicited smaller N400 amplitudes than noncognates. For L1 items, these cognate–noncognate differences emerged early in the N400 epoch, whereas for L2 items they emerged later. The results are discussed in terms of how cognate status affects word recognition in second language learners.
In this study, English–French bilinguals performed a lexical decision task while reaction times (RTs) and event-related potentials (ERPs) were measured to L2 targets preceded by noncognate L1 translation primes versus L1 unrelated primes (Experiment 1a) and vice versa (Experiment 1b). The prime–target stimulus onset asynchrony was 120 ms. Significant masked translation priming was observed, indicated by faster reaction times and a decreased N400 for translation pairs as opposed to unrelated pairs, both from L1 to L2 (Experiment 1a) and from L2 to L1 (Experiment 1b), with the latter effect being weaker (RTs) and shorter-lasting (ERPs). A translation priming effect was also found in the N250 ERP component, and this effect was stronger and earlier in the L2-to-L1 priming direction than in the reverse. The results are discussed with respect to possible mechanisms at the basis of asymmetric translation priming effects in bilinguals.
N250; N400; Bilingualism; Visual word recognition; Masked translation priming