Bilinguals and musicians exhibit behavioral advantages on tasks with high demands on executive functioning, particularly inhibitory control, but the brain mechanisms supporting these differences are unclear. Of key interest is whether these forms of experience influence cognition through similar or distinct information processing mechanisms. Here, we recorded event-related potentials (ERPs) in three groups – bilinguals, musicians, and controls – who completed a visual go-nogo task that involved the withholding of key presses to rare targets. Participants in each group achieved similar accuracy rates and response times, but analysis of cortical responses revealed significant differences in ERP waveforms. Success in withholding a prepotent response was associated with enhanced stimulus-locked N2 and P3 wave amplitude relative to go trials. For nogo trials, the neural responses showed timing-specific ERP differences and graded amplitude differences across groups. Specifically, musicians showed an enhanced early P2 response accompanied by reduced N2 amplitude, whereas bilinguals showed increased N2 amplitude coupled with an increased late positivity wave relative to controls. These findings demonstrate that bilingualism and music training have differential effects on the brain networks supporting executive control over behavior.
Humans are better at recognizing human faces than faces of other species. However, it is unclear whether this species sensitivity can be seen at early perceptual stages of face processing and whether it involves species sensitivity for important facial features like the eyes. These questions were addressed by comparing the modulations of the N170 ERP component to faces, eyes and eyeless faces of humans, apes, cats and dogs, presented upright and inverted. Although all faces and isolated eyes yielded larger responses than the control object category (houses), the N170 was earlier and smaller to human than animal faces, and larger to human than animal eyes. Most importantly, while the classic inversion effect was found for human faces, animal faces yielded no inversion effect or an opposite inversion effect, as seen for objects, suggesting that a different neural process is involved for human faces compared to faces of other species. Thus, in addition to its general face and eye categorical sensitivity, the N170 appears particularly sensitive to the human species for both faces and eyes. The results are discussed in the context of a recent model of the N170 response involving face and eye sensitive neurons (Itier et al., 2007) where the eyes play a central role in face perception. The data support the intuitive idea that eyes are what make animal head fronts look face-like and that proficiency for the human species involves visual expertise for the human eyes.
PMID: 20650321 CAMSID: cams3880
Face; Eyes; Species; Perception; N170; Inversion
Aging is often accompanied by hearing loss, which impacts how sounds are processed and represented along the ascending auditory pathways and within the auditory cortices. Here, we assess the impact of mild binaural hearing loss on older adults’ ability both to process complex sounds embedded in noise and to segregate a mistuned harmonic in an otherwise periodic stimulus. We measured auditory evoked fields (AEFs) using magnetoencephalography while participants were presented with complex tones that had either all harmonics in tune or the third harmonic mistuned by 4 or 16% of its original value. The tones (75 dB sound pressure level, SPL) were presented without noise, or with low (45 dBA SPL) or moderate (65 dBA SPL) Gaussian noise. For each participant, we modeled the AEFs with a pair of dipoles in the superior temporal plane. We then examined the effects of hearing loss and noise on the amplitude and latency of the resulting source waveforms. Results revealed that similar noise-induced increases in N1m amplitude were present in older adults with and without hearing loss. Our results also showed that the P1m amplitude was larger in the hearing impaired than in the normal-hearing adults. In addition, the object-related negativity (ORN) elicited by the mistuned harmonic was larger in hearing impaired listeners. The enhanced P1m and ORN amplitudes in the hearing impaired older adults suggest that hearing loss increased neural excitability in auditory cortices, which could be related to deficits in inhibitory control.
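The amplitude and latency measurements described above amount to finding the largest deflection of a source waveform within a component-specific time window. A minimal sketch of that step, with made-up window boundaries and a synthetic waveform rather than the authors' actual MEG pipeline:

```python
import numpy as np

def peak_in_window(waveform, times, t_min, t_max):
    """Return (latency, amplitude) of the largest-magnitude deflection
    of a source waveform within a time window (in seconds)."""
    mask = (times >= t_min) & (times <= t_max)
    segment = waveform[mask]
    idx = np.argmax(np.abs(segment))
    return times[mask][idx], segment[idx]

# Toy source waveform: a Gaussian-shaped positive peak near 100 ms
times = np.linspace(0.0, 0.4, 401)          # 0-400 ms, 1-ms steps
waveform = np.exp(-((times - 0.1) ** 2) / (2 * 0.01 ** 2))

# Search a hypothetical 50-150 ms window for the peak
lat, amp = peak_in_window(waveform, times, 0.05, 0.15)
```

In practice the windows would be chosen per component (e.g., P1m vs. N1m) and the sign of the deflection constrained accordingly; this sketch simply takes the largest absolute deflection.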
aging; MEG; hearing loss; auditory cortex; inhibition (psychology)
Much of what we know regarding the effect of stimulus repetition on neuroelectric adaptation comes from studies using artificially produced pure tones or harmonic complex sounds. Little is known about the neural processes associated with the representation of everyday sounds and how these may be affected by aging. In this study, we used real-life, meaningful sounds presented at various azimuth positions and found that auditory evoked responses peaking at about 100 and 180 ms after sound onset decreased in amplitude with stimulus repetition. This neural adaptation was greater in young than in older adults and was more pronounced when the same sound was repeated at the same location. Moreover, the P2 waves showed differential patterns of domain-specific adaptation when location and identity were repeated among young adults. Background noise decreased ERP amplitudes and modulated the magnitude of repetition effects on both the N1 and P2 amplitudes, and these effects were comparable in young and older adults. These findings reveal an age-related difference in the neural processes associated with adaptation to meaningful sounds, which may relate to older adults’ difficulty in ignoring task-irrelevant stimuli.
Behavioral studies of spoken word memory have shown that context congruency facilitates both word and source recognition, though the level at which context exerts its influence remains equivocal. We measured event-related potentials (ERPs) while participants performed both types of recognition task with words spoken in four voices. Two voice parameters (i.e., gender and accent) varied between speakers, such that none, one, or both of these parameters could be congruent between study and test. Results indicated that reinstating the study voice at test facilitated both word and source recognition, compared to partial or no context congruency at test. Behavioral effects were paralleled by two ERP modulations. First, in the word recognition test, the left parietal old/new effect showed a positive deflection reflective of context congruency between study and test words; the same-speaker condition yielded the most positive deflection of all correctly identified old words. In the source recognition test, a right frontal positivity was found for the same-speaker condition compared to the different-speaker conditions, regardless of response success. Taken together, the results of this study suggest that the benefit of context congruency is reflected behaviorally and in ERP modulations traditionally associated with recognition memory.
The attentional blink (AB) describes a phenomenon whereby correct identification of a first target impairs the processing of a second target (i.e., probe) nearby in time. Evidence suggests that explicit attention orienting in the time domain can attenuate the AB. Here, we used scalp-recorded, event-related potentials to examine whether auditory AB is also sensitive to implicit temporal attention orienting. Expectations were set up implicitly by varying the probability (i.e., 80% or 20%) that the probe would occur at the +2 or +8 position following target presentation. Participants showed a significant AB, which was reduced with increased probe probability at the +2 position. The probe probability effect was paralleled by an increase in P3b amplitude elicited by the probe. The results suggest that implicit temporal attention orienting can facilitate short-term consolidation of the probe and attenuate auditory AB.
Although many types of learning require associations to be formed, little is known about the brain mechanisms engaged in association formation. In the present study, we measured event-related potentials (ERPs) while participants studied pairs of semantically related words, with each word of a pair presented sequentially. To isolate the associative component of the signal, the ERP difference between the first and second words of a pair (Word2-Word1) was derived separately for subsequently recalled and subsequently not-recalled pairs. When the resulting difference waveforms were contrasted, a parietal positivity was observed for subsequently recalled pairs around 460 ms after word presentation onset, followed by a positive slow wave that lasted until around 845 ms. Together these results suggest that associations formed between semantically related words are correlated with a specific neural signature that is reflected in scalp recordings over the parietal region.
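The subsequent-memory contrast described above is a simple two-step subtraction: a within-pair difference (Word2 minus Word1), then a between-condition difference (recalled minus not-recalled). A minimal NumPy sketch, assuming hypothetical epoch arrays of shape (pairs × channels × timepoints) rather than the authors' actual data format:

```python
import numpy as np

def associative_difference_contrast(word1, word2, recalled):
    """Compute the Word2-Word1 difference waveform separately for
    subsequently recalled and not-recalled pairs, then contrast them.

    word1, word2 : arrays of shape (n_pairs, n_channels, n_times)
    recalled     : boolean array of shape (n_pairs,)
    """
    diff = word2 - word1                       # per-pair associative difference
    recalled_avg = diff[recalled].mean(axis=0)  # average over recalled pairs
    forgotten_avg = diff[~recalled].mean(axis=0)
    return recalled_avg - forgotten_avg         # subsequent-memory contrast

# Toy example: 20 pairs, 4 channels, 100 timepoints
rng = np.random.default_rng(0)
w1 = rng.normal(size=(20, 4, 100))
w2 = rng.normal(size=(20, 4, 100))
recalled = np.arange(20) < 10                  # first 10 pairs "recalled"
contrast = associative_difference_contrast(w1, w2, recalled)
```

The resulting contrast has shape (channels × timepoints), and the parietal positivity reported above would appear as a sustained positive deflection at parietal channels in the 460-845 ms range.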
Auditory perception and cognition entail both low-level and high-level processes, which are likely to interact with each other to create our rich conscious experience of soundscapes. Recent research that we review has revealed numerous influences of high-level factors, such as attention, intention, and prior experience, on conscious auditory perception. Recent studies have also shown that auditory scene analysis tasks can exhibit multistability in a manner very similar to ambiguous visual stimuli, presenting a unique opportunity to study neural correlates of auditory awareness and the extent to which mechanisms of perception are shared across sensory modalities. Research has also led to a growing number of techniques through which auditory perception can be manipulated and even completely suppressed. Such findings have important consequences for our understanding of the mechanisms of perception and should allow scientists to precisely distinguish the contributions of different higher-level factors.
auditory scene analysis; multistability; change deafness; informational masking; priming; attentional blink
The present study pursues findings from earlier behavioral research with children showing the superior ability of bilinguals to make grammaticality judgments in the context of misleading semantic information. The advantage in this task was attributed to the greater executive control of bilinguals, but this impact on linguistic processing has not been demonstrated in adults. Here, we recorded event-related potentials in young adults who were either English monolinguals or bilinguals as they performed two different language judgment tasks. In the acceptability task, participants indicated whether or not the sentence contained an error in either grammar or meaning; in the grammaticality task, participants indicated only whether the sentence contained an error in grammar, in spite of possible conflicting information from meaning. In both groups, sentence violations generated N400 and P600 waves. In the acceptability task, bilinguals were less accurate than monolinguals, but in the grammaticality task, which requires more executive control, bilingual and monolingual groups showed a comparable level of accuracy. Importantly, bilinguals generated a smaller P600 amplitude and a more bilateral distribution of activation than monolinguals in the grammaticality task requiring more executive control. Our results show that bilinguals use their enhanced executive control for linguistic processing involving conflict, despite showing no apparent advantage in linguistic processing under simpler conditions.
ERP; language; bilingualism; control; executive functions; syntax; semantics; N400; P600
We explored age differences in auditory perception by measuring fMRI adaptation of brain activity to repetitions of sound identity (what) and location (where), using meaningful environmental sounds. In one condition, both sound identity and location were repeated, allowing us to assess non-specific adaptation. In other conditions, only one feature was repeated (identity or location) to assess domain-specific adaptation. Both young and older adults showed comparable non-specific adaptation (identity and location) in bilateral temporal lobes, medial parietal cortex, and subcortical regions. However, older adults showed reduced domain-specific adaptation to location repetitions in a distributed set of regions, including frontal and parietal areas, and to identity repetitions in anterior temporal cortex. We also re-analyzed data from a previously published 1-back fMRI study, in which participants responded to infrequent repetition of the identity or location of meaningful sounds. This analysis revealed age differences in domain-specific adaptation in a set of brain regions that overlapped substantially with those identified in the adaptation experiment. This converging evidence of reductions in the degree of auditory fMRI adaptation in older adults suggests that the processing of specific auditory “what” and “where” information is altered with age, which may influence cognitive functions that depend on this processing.
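Adaptation effects like those above are commonly summarized as the fractional reduction of the response to repeated relative to novel stimuli. A generic sketch of such an index, with hypothetical regional response estimates in arbitrary units (not the authors' data or analysis code):

```python
import numpy as np

def adaptation_index(novel, repeated):
    """Fractional response reduction for repeated versus novel stimuli.

    novel, repeated : per-region (or per-voxel) response estimates;
    an index of 0 means no adaptation, 1 means complete suppression.
    """
    novel = np.asarray(novel, dtype=float)
    repeated = np.asarray(repeated, dtype=float)
    return (novel - repeated) / novel

# Toy estimates for three hypothetical regions
novel_resp = np.array([2.0, 1.5, 1.0])
repeat_resp = np.array([1.0, 1.2, 0.9])
idx = adaptation_index(novel_resp, repeat_resp)  # ≈ [0.5, 0.2, 0.1]
```

Comparing such indices between conditions (identity repeated vs. location repeated) and between age groups is one simple way to quantify the domain-specific reductions reported above.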
adaptation; auditory system; aging; fMRI; spatial localization
Attending and responding to sound location generates increased activity in parietal cortex, which may index auditory spatial working memory and/or goal-directed action. Here, we used an n-back task (Experiment 1) and an adaptation paradigm (Experiment 2) to distinguish memory-related activity from that associated with goal-directed action. In Experiment 1, participants indicated, in separate blocks of trials, whether the incoming stimulus was presented at the same location as in the previous trial (1-back) or two trials ago (2-back). Prior to a block of trials, participants were told to use their left or right index finger. Accuracy was lower and reaction times were longer for the 2-back than for the 1-back condition. The analysis of functional magnetic resonance imaging data revealed greater sustained task-related activity in the inferior parietal lobule (IPL) and superior frontal sulcus during 2-back than 1-back after accounting for response-related activity elicited by the targets. Target detection and response execution were also associated with enhanced activity in the IPL bilaterally, though the activation was anterior to that associated with sustained task-related activity. In Experiment 2, we used an event-related design in which participants listened (no response required) to trials that comprised four sounds presented either at the same location or at four different locations. We found larger IPL activation for changes in sound location than for sounds presented at the same location. The IPL activation overlapped with that observed during the auditory spatial working memory task. Together, these results provide converging evidence supporting the role of parietal cortex in auditory spatial working memory, which can be dissociated from response selection and execution.
attention; auditory; fMRI; parietal cortex; spatial; dorsal stream; motor response; goal-directed action
When presented with alternating low and high tones, listeners are more likely to perceive 2 separate streams of tones (“streaming”), rather than a single coherent stream, when the frequency separation (Δf) between tones is larger and as more tones are presented (“buildup”). However, the same large-Δf sequence reduces streaming for subsequent patterns presented after a gap of up to several seconds. Buildup occurs at a level of neural representation with sharp frequency tuning, supporting the theory that streaming is a peripheral phenomenon. Here, we used adaptation to demonstrate that the contextual effect of prior Δf arose from a representation with broad frequency tuning, unlike buildup. Separate adaptation did not occur in a representation of Δf independent of frequency range, suggesting that any frequency-shift detectors undergoing adaptation are also frequency specific. A separate effect of prior perception was observed, dissociating stimulus-related (i.e., Δf) and perception-related (i.e., 1 stream vs. 2 streams) adaptation. Viewing a visual analogue to auditory streaming had no effect on subsequent perception of streaming, suggesting adaptation in auditory-specific brain circuits. These results, along with previous findings on buildup, suggest that processing in at least three levels of auditory neural representation underlies segregation and formation of auditory streams.
auditory scene analysis; adaptation; buildup; frequency shift detector; cross-modal
The ability to process rapid fluctuations in the temporal envelope of sound declines with age, and this contributes to older adults' difficulties in understanding speech. Although changes in central auditory processing during aging have been proposed as a cause of communication deficits, it remains an open question which stage of processing is most affected by age-related changes. We investigated auditory temporal resolution in young, middle-aged, and older listeners with neuromagnetic evoked responses to gap stimuli with different leading marker and gap durations. Signal components specific to processing the physical details of sound stimuli, as well as the auditory objects as a whole, were derived from the evoked activity and served as biological markers for temporal processing at different cortical levels. Early oscillatory 40-Hz responses were elicited by the onsets of leading and lagging markers and indicated central registration of the gap with similar amplitude in all three age groups. High-gamma responses were predominantly related to the duration of no-gap stimuli or to the duration of gaps when present, and decreased in amplitude and phase locking with increasing age. Correspondingly, low-frequency activity around 200 ms and later was reduced in middle-aged and older participants. High-gamma band and long-latency low-frequency responses were interpreted as reflecting higher-order processes related to the grouping of sound items into auditory objects and the updating of memory for these objects. The observed effects indicate that age-related changes in auditory acuity have more to do with higher-order brain functions than previously thought.