The brain basis of bilinguals’ ability to use two languages at the same time has been a hotly debated topic. On the one hand, behavioral research has suggested that bilingual dual language use involves complex and highly principled linguistic processes. On the other hand, brain-imaging research has revealed that bilingual language switching involves neural activations in brain areas dedicated to general executive functions not specific to language processing, such as general task maintenance. Here we address the involvement of language-specific versus cognitive-general brain mechanisms in bilingual language processing by studying a unique population, bimodal bilinguals proficient in signed and spoken languages, with an innovative brain-imaging technology, functional Near-Infrared Spectroscopy (fNIRS; Hitachi ETG-4000), which, like fMRI, measures hemodynamic change but, unlike fMRI, permits the movement needed for unconstrained speech and sign production. Participant groups included (i) hearing ASL-English bilinguals, (ii) ASL monolinguals, and (iii) English monolinguals. Imaging tasks included picture naming in “Monolingual mode” (using one language at a time) and in “Bilingual mode” (using both languages either simultaneously or in rapid alternation). Behavioral results revealed that accuracy was similar among groups and conditions. By contrast, neuroimaging results revealed that bilinguals in Bilingual mode showed greater signal intensity within posterior temporal regions (“Wernicke’s area”) than in Monolingual mode. Significance: Bilinguals’ ability to use two languages effortlessly and without confusion involves the use of language-specific posterior temporal brain regions. This research with both fNIRS and bimodal bilinguals sheds new light on the extent and variability of brain tissue that underlies language processing, and addresses the tantalizing question of how language modality, sign versus speech, impacts language representation in the brain.
Many recent studies demonstrate that both languages are active when bilinguals and second language (L2) learners are reading, listening, or speaking one language only. The parallel activity of the two languages has been hypothesized to create competition that must be resolved. Models of bilingual lexical access have proposed an inhibitory control mechanism to effectively limit attention to the intended language (e.g., Green, 1998). Critically, other recent research suggests that a lifetime of experience as a bilingual negotiating the competition across the two languages confers a set of benefits to cognitive control processes more generally (e.g., Bialystok, Craik, Klein, & Viswanathan, 2004). However, few studies have examined the consequences of individual differences in inhibitory control for performance on language processing tasks. The goal of the present work was to determine whether there is a relation between enhanced executive function and performance for L2 learners and bilinguals on lexical comprehension and production tasks. Data were analyzed from two studies involving a range of language processing tasks, a working memory measure, and also the Simon task, a nonlinguistic measure of inhibitory control. The results demonstrate that greater working memory resources and enhanced inhibitory control are related to a reduction in cross-language activation in a sentence context word naming task and a picture naming task, respectively. Other factors that may be related to inhibitory control are identified. The implications of these results for models of bilingual lexical comprehension and production are discussed.
Contemporary research on bilingualism has been framed by two major discoveries. In the realm of language processing, studies of comprehension and production show that bilinguals activate information about both languages when using one language alone. Parallel activation of the two languages has been demonstrated for highly proficient bilinguals as well as second language learners and appears to be present even when distinct properties of the languages themselves might be sufficient to bias attention towards the language in use. In the realm of cognitive processing, studies of executive function have demonstrated a bilingual advantage, with bilinguals outperforming their monolingual counterparts on tasks that require ignoring irrelevant information, task switching, and resolving conflict. Our claim is that these outcomes are related and have the overall effect of changing the way that both cognitive and linguistic processing are carried out for bilinguals. In this article we consider each of these domains of bilingual performance and consider the kinds of evidence needed to support this view. We argue that the tendency to consider bilingualism as a unitary phenomenon explained in terms of simple component processes has created a set of apparent controversies that masks the richness of the central finding in this work: the adult mind and brain are open to experience in ways that create profound consequences for both language and cognition.
Decades of research have shown that, from an early age, proficient bilinguals can speak each of their two languages separately (similar to monolinguals) or rapidly switch between them (dissimilar to monolinguals). Thus we ask: do monolingual and bilingual brains process language similarly or dissimilarly, and is this affected by the language context? Using an innovative brain imaging technology, functional Near Infrared Spectroscopy (fNIRS), we investigated how adult bilinguals process semantic information, both in speech and in print, in a monolingual language context (one language at a time) or in a bilingual language context (two languages in rapid alternation). While undergoing fNIRS recording, ten early-exposed, highly-proficient Spanish-English bilinguals completed a Semantic Judgment task in monolingual and bilingual contexts, and were compared to ten English monolingual controls. Two hypotheses were tested: the Signature Hypothesis predicts that early, highly proficient bilinguals will recruit neural tissue to process language differently from monolinguals across all language contexts. The Switching Hypothesis predicts that bilinguals will recruit neural tissue to process language similarly to monolinguals, when using one language at a time. Supporting the Signature Hypothesis, in the monolingual context, bilinguals and monolinguals showed differences in both hemispheres in the recruitment of DLPFC (BA 46/9) and IFC (BA 47/11), but similar recruitment of Broca’s area (BA 44/45). In particular, in the monolingual context, bilinguals showed greater signal intensity in channels maximally overlaying DLPFC and IFC regions as compared to monolinguals. In the bilingual context, bilinguals demonstrated a more robust recruitment of right DLPFC and right IFC.
These findings reveal how extensive early bilingual exposure modifies language organization in the brain—thus imparting a possible “bilingual signature.” They further shed fascinating new light on how the bilingual brain may reveal the biological extent of the neural architecture underlying all human language and the language processing potential not fully recruited in the monolingual brain.
Findings from recent psycholinguistic studies of bilingual processing support the hypothesis that both languages of a bilingual are always active and that bilinguals continually engage in processes of language selection. This view aligns with the convergence hypothesis of bilingual language representation (Abutalebi & Green, 2008). Furthermore, it is hypothesized that when bilinguals perform a task in one language they need to inhibit their other, non-target language(s) (e.g., Costa, Miozzo, & Caramazza, 1999) and that stronger inhibition is required when the task is performed in the weaker language than in the stronger one (e.g., Costa & Santesteban, 2004). The study of multilingual individuals who acquire aphasia resulting from a focal brain lesion offers a unique opportunity to test the convergence hypothesis and the inhibition asymmetry. We report on a trilingual person with chronic non-fluent aphasia who at the time of testing demonstrated greater impairment in her first acquired language (Persian) than in her third, later-learned language (English). She received treatment in English followed by treatment in Persian. An examination of her connected language production revealed improvement in her grammatical skills in each language following intervention in that language, but decreased grammatical accuracy in English following treatment in Persian. The increased error rate was evident in structures that are not shared by the two languages (e.g., use of auxiliary verbs). The results support the prediction that greater inhibition is applied to the stronger language than to the weaker language, regardless of their age of acquisition. We interpret the findings as consistent with convergence theories that posit overlapping neuronal representation and simultaneous activation of multiple languages, and with proficiency-dependent asymmetric inhibition in multilinguals.
Behavioral and event-related potential (ERP) measures are reported for a study in which relatively proficient Chinese-English bilinguals named identical pictures in each of their two languages. Production occurred only in Chinese (the first language, L1) or only in English (the second language, L2) in a given block with the order counterbalanced across participants. The repetition of pictures across blocks was expected to produce facilitation in the form of faster responses and more positive ERPs. However, we hypothesized that if both languages are activated when naming one language alone, there might be evidence of inhibition of the stronger L1 to enable naming in the weaker L2. Behavioral data revealed the dominance of Chinese relative to English, with overall faster and more accurate naming performance in L1 than L2. However, reaction times for naming in L1 after naming in L2 showed no repetition advantage and the ERP data showed greater negativity when pictures were named in L1 following L2. This greater negativity for repeated items suggests the presence of inhibition rather than facilitation alone. Critically, the asymmetric negativity associated with the L1 when it followed the L2 endured beyond the immediate switch of language, implying long-lasting inhibition of the L1. In contrast, when L2 naming followed L1, both behavioral and ERP evidence produced a facilitatory pattern, consistent with repetition priming. Taken together, the results support a model of bilingual lexical production in which candidates in both languages compete for selection, with inhibition of the more dominant L1 when planning speech in the less dominant L2. We discuss the implications for modeling the scope and time course of inhibitory processes.
lexical selection; language production; inhibition; bilingualism
During speech comprehension, bilinguals co-activate both of their languages, resulting in cross-linguistic interaction at various levels of processing. This interaction has important consequences for both the structure of the language system and the mechanisms by which the system processes spoken language. Using computational modeling, we can examine how cross-linguistic interaction affects language processing in a controlled, simulated environment. Here we present a connectionist model of bilingual language processing, the Bilingual Language Interaction Network for Comprehension of Speech (BLINCS), wherein interconnected levels of processing are created using dynamic, self-organizing maps. BLINCS can account for a variety of psycholinguistic phenomena, including cross-linguistic interaction at and across multiple levels of processing, cognate facilitation effects, and audio-visual integration during speech comprehension. The model also provides a way to separate two languages without requiring a global language-identification system. We conclude that BLINCS serves as a promising new model of bilingual spoken language comprehension.
spoken language comprehension; modeling speech processing; connectionist models; self-organizing maps; language interaction
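The self-organizing maps at the heart of BLINCS-style architectures can be sketched compactly. The toy Python/NumPy code below is a hypothetical illustration of a single SOM layer, not the actual BLINCS implementation: after training, inputs with similar feature vectors (e.g., cross-language cognates) come to activate nearby map units, without any explicit language-identification tag.

```python
# Minimal self-organizing map (SOM) sketch. All names and parameters are
# illustrative assumptions, not taken from the BLINCS model itself.
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(6, 6), epochs=200, lr0=0.5, sigma0=2.0):
    """Train a 2-D SOM; returns a weight array of shape (rows, cols, dim)."""
    rows, cols = grid
    dim = data.shape[1]
    weights = rng.random((rows, cols, dim))
    # Grid coordinates, used to compute neighborhood distances on the map
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5  # shrinking neighborhood
        for x in data[rng.permutation(len(data))]:
            # Best-matching unit: the node whose weights are closest to x
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Pull the BMU and its Gaussian neighborhood toward the input
            g = np.exp(-np.sum((coords - bmu) ** 2, axis=-1)
                       / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
    return weights

def bmu_of(weights, x):
    """Map an input vector to its best-matching unit on the grid."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)
```

In a full model, several such maps (phonological, lexical, semantic) would be interconnected, so that co-activation can flow within and across levels while language membership remains an emergent property of map topography.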
It has been debated how bilinguals select the intended language and prevent interference from the unintended language when speaking. Here, we studied the nature of the mental representations accessed by late fluent bilinguals during a rhyming judgment task relying on covert speech production. We recorded event-related brain potentials in Chinese–English bilinguals and monolingual speakers of English while they indicated whether the names of pictures presented on a screen rhymed. Regardless of whether bilingual participants focused on rhyming selectively in English or in Chinese, we found a significant priming effect of language-specific sound repetition. Surprisingly, however, sound repetitions in Chinese elicited significant priming effects even when the rhyming task was performed in English. This cross-language priming effect was delayed by ∼200 ms as compared to the within-language effect and was asymmetric, since there was no priming effect of sound repetitions in English when participants were asked to make rhyming judgments in Chinese. These results demonstrate that second language production hinders, but does not seal off, activation of the first language, whereas native language production appears immune to competition from the second language.
ERP; bilingualism; language production; cognitive control; inhibition
To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H₂¹⁵O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.
American Sign Language; audio-visual English; bimodal bilinguals; PET; fMRI
In a quantitative meta-analysis, using the activation likelihood estimation method, we examined the neural regions involved in bilingual cognitive control, particularly when engaging in switching between languages. The purpose of this study was to evaluate the bilingual cognitive control model based on a qualitative analysis [Abutalebi, J., & Green, D. W. (2008). Control mechanisms in bilingual language production: Neural evidence from language switching studies. Language and Cognitive Processes, 23, 557–582.]. After reviewing 128 peer-reviewed articles, ten neuroimaging studies met our inclusion criteria and in each study, bilinguals switched between languages in response to cues. We isolated regions involved in voluntary language switching, by including reported contrasts between the switching conditions and high level baseline conditions involving similar tasks but requiring the use of only one language. Eight brain regions showed significant and reliable activation: left inferior frontal gyrus, left middle temporal gyrus, left middle frontal gyrus, right precentral gyrus, right superior temporal gyrus, midline pre-SMA and bilateral caudate nuclei. This quantitative result is consistent with bilingual aphasia studies that report switching deficits associated with lesions to the caudate nuclei or prefrontal cortex. It also extends the previously reported qualitative model. We discuss the implications of the findings for accounts of bilingual cognitive control.
bilingualism; meta-analysis; functional neuroimaging; cognitive control; language switching
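The core computation behind activation likelihood estimation can be illustrated in a few lines: each reported focus is modeled as a Gaussian probability blob, blobs are combined into a per-study map, and the voxelwise ALE value is the probabilistic union across studies. The grid size, Gaussian width, and foci below are illustrative assumptions, and this sketch omits the permutation-based significance testing used in real analyses.

```python
# Toy ALE sketch; not the actual GingerALE pipeline or MNI-space grid.
import numpy as np

GRID = (20, 20, 20)  # small voxel grid for illustration
SIGMA = 2.0          # Gaussian width in voxels (hypothetical)

def focus_map(focus, grid=GRID, sigma=SIGMA):
    """Probability map for a single reported activation focus."""
    gx, gy, gz = np.indices(grid)
    d2 = (gx - focus[0])**2 + (gy - focus[1])**2 + (gz - focus[2])**2
    p = np.exp(-d2 / (2 * sigma**2))
    return p / p.max()   # peak probability of 1 at the focus voxel

def study_map(foci):
    """Modeled activation map for one study: union of its focus maps."""
    m = np.zeros(GRID)
    for f in foci:
        m = 1 - (1 - m) * (1 - focus_map(f))
    return m

def ale(study_maps):
    """Voxelwise ALE: probability that at least one study activates."""
    a = np.zeros(GRID)
    for m in study_maps:
        a = 1 - (1 - a) * (1 - m)
    return a

# Two hypothetical studies reporting nearby foci plus one distant focus
studies = [[(5, 8, 10)], [(6, 8, 10), (14, 14, 4)]]
ale_map = ale([study_map(s) for s in studies])
```

Voxels where foci from multiple studies overlap accumulate high ALE values, which is why regions such as the caudate nuclei, reported across several switching studies, emerge as reliable in the quantitative analysis.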
Spoken word recognition and production require fast transformations between acoustic, phonological, and conceptual neural representations. Bilinguals perform these transformations in native and non-native languages, deriving unified semantic concepts from equivalent, but acoustically different words. Here we exploit this capacity of bilinguals to investigate input invariant semantic representations in the brain. We acquired EEG data while Dutch subjects, highly proficient in English, listened to four monosyllabic and acoustically distinct animal words in both languages (e.g., “paard”–“horse”). Multivariate pattern analysis (MVPA) was applied to identify EEG response patterns that discriminate between individual words within one language (within-language discrimination) and generalize meaning across two languages (across-language generalization). Furthermore, employing two EEG feature selection approaches, we assessed the contribution of temporal and oscillatory EEG features to our classification results. MVPA revealed that within-language discrimination was possible in a broad time-window (~50–620 ms) after word onset, probably reflecting acoustic-phonetic and semantic-conceptual differences between the words. Most interestingly, significant across-language generalization was possible around 550–600 ms, suggesting the activation of common semantic-conceptual representations from the Dutch and English nouns. Both types of classification showed a strong contribution of oscillations below 12 Hz, indicating the importance of low frequency oscillations in the neural representation of individual words and concepts. This study demonstrates the feasibility of MVPA to decode individual spoken words from EEG responses and to assess the spectro-temporal dynamics of their language invariant semantic-conceptual representations. We discuss how this method and results could be relevant to track the neural mechanisms underlying conceptual encoding in comprehension and production.
EEG decoding; EEG oscillations; speech perception; spoken word recognition; bilinguals; semantic representations; conceptual representation
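The across-language generalization logic can be demonstrated with a small simulation: train a linear classifier to discriminate word identity from response patterns in one language, then test it on patterns evoked by the translation equivalents. The data below are simulated (a shared semantic pattern per concept plus language-specific components and trial noise); this is a hypothetical sketch of the analysis logic, not the study's EEG pipeline or feature set.

```python
# Simulated across-language MVPA generalization sketch.
# All dimensions and patterns are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_concepts, n_trials, n_features = 4, 40, 64  # e.g., 4 animal words

# Each concept has a shared semantic pattern; each language adds its
# own acoustic-phonetic pattern, plus independent trial noise.
semantic = rng.normal(0, 1, (n_concepts, n_features))
acoustic = {lang: rng.normal(0, 1, (n_concepts, n_features))
            for lang in ("dutch", "english")}

def simulate(lang):
    """Generate trial patterns and concept labels for one language."""
    X, y = [], []
    for c in range(n_concepts):
        base = semantic[c] + acoustic[lang][c]
        X.append(base + rng.normal(0, 1.0, (n_trials, n_features)))
        y.extend([c] * n_trials)
    return np.vstack(X), np.array(y)

X_nl, y_nl = simulate("dutch")
X_en, y_en = simulate("english")

clf = LogisticRegression(max_iter=1000).fit(X_nl, y_nl)
within = clf.score(X_nl, y_nl)  # within-language (training-set) accuracy
across = clf.score(X_en, y_en)  # across-language generalization
chance = 1 / n_concepts
```

Because only the semantic component is shared between the two simulated languages, any above-chance `across` accuracy must be carried by language-invariant structure, which is the inference the across-language generalization analysis licenses.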
Bilingual language production requires that speakers recruit inhibitory control (IC) to optimally balance the activation of more than one linguistic system when they produce speech. Moreover, the amount of IC necessary to maintain an optimal balance is likely to vary across individuals as a function of second language (L2) proficiency and inhibitory capacity, as well as the demands of a particular communicative situation. Here, we investigate how these factors relate to bilingual language production across monologue and dialogue spontaneous speech. In these tasks, 42 English–French and French–English bilinguals produced spontaneous speech in their first language (L1) and their L2, with and without a conversational partner. Participants also completed a separate battery that assessed L2 proficiency and inhibitory capacity. The results showed that L2 vs. L1 production was generally more effortful, as was dialogue vs. monologue speech production, although the clarity of what was produced was higher for dialogues vs. monologues. As well, language production effort significantly varied as a function of individual differences in L2 proficiency and inhibitory capacity. Taken together, the overall pattern of findings suggests that both increased L2 proficiency and inhibitory capacity relate to efficient language production during spontaneous monologue and dialogue speech.
bilingualism; dialogue; monologue; inhibition; proficiency
Bilingual children develop sensitivity to the language used by their interlocutors at an early age, reflected in differential use of each language by the child depending on their interlocutor. Factors such as discourse context and relative language dominance in the community may mediate the degree of language differentiation in preschool age children. Bimodal bilingual children, acquiring both a sign language and a spoken language, have an even more complex situation. Their Deaf parents vary considerably in access to the spoken language. Furthermore, in addition to code-mixing and code-switching, they use code-blending—expressions in both speech and sign simultaneously—an option uniquely available to bimodal bilinguals. Code-blending is analogous to code-switching sociolinguistically, but is also a way to communicate without suppressing one language. For adult bimodal bilinguals, complete suppression of the non-selected language is cognitively demanding. We expect that bimodal bilingual children also find suppression difficult, and use blending rather than suppression in some contexts. We also expect relative community language dominance to be a factor in children's language choices. This study analyzes longitudinal spontaneous production data from four bimodal bilingual children and their Deaf and hearing interlocutors. Even at the earliest observations, the children produced more signed utterances with Deaf interlocutors and more speech with hearing interlocutors. However, while three of the four children produced >75% speech alone in speech target sessions, they produced <25% sign alone in sign target sessions. All four produced bimodal utterances in both, but more frequently in the sign sessions, potentially because they find suppression of the dominant language more difficult. Our results indicate that these children are sensitive to the language used by their interlocutors, while showing considerable influence from the dominant community language.
bimodal bilingualism; bilingual development; code-blending; language mixing; interlocutor sensitivity
We report two experiments that investigate the effects of sentence context on bilingual lexical access in Spanish and English. Highly proficient Spanish-English bilinguals read sentences in Spanish and English that included a marked word to be named. The word was either a cognate with similar orthography and/or phonology in the two languages, or a matched non-cognate control. Sentences appeared in one language alone (i.e., Spanish or English) and target words were not predictable on the basis of the preceding semantic context. In Experiment 1, we mixed the language of the sentence within a block such that sentences appeared in an alternating run in Spanish or in English. These conditions partly resemble normally occurring inter-sentential code-switching. In these mixed-language sequences, cognates were named faster than non-cognates in both languages. There were no effects of switching the language of the sentence. In Experiment 2, with Spanish-English bilinguals matched closely to those who participated in the first experiment, we blocked the language of the sentences to encourage language-specific processes. The results were virtually identical to those of the mixed-language experiment. In both cases, target cognates were named faster than non-cognates, and the magnitude of the effect did not change according to the broader context. Taken together, the results support the predictions of the Bilingual Interactive Activation + Model (Dijkstra and van Heuven, 2002) in demonstrating that bilingual lexical access is language non-selective even under conditions in which language-specific cues should enable selective processing. They also demonstrate that, in contrast to lexical switching from one language to the other, inter-sentential code-switching of the sort in which bilinguals frequently engage, imposes no significant costs to lexical processing.
bilingualism; language switching; switch costs; lexical access; sentence context; cognates
Upon being presented with a familiar image whose name is known, monolingual infants and adults implicitly generate the image's label (Meyer et al., 2007; Mani and Plunkett, 2010, 2011; Mani et al., 2012a). Although the cross-linguistic influences on overt bilingual production are well studied (for a summary see Colomé and Miozzo, 2010), evidence that bilinguals implicitly generate the label for familiar objects in both languages remains mixed. For example, bilinguals implicitly generate picture labels in both of their languages, but only when tested in L2 and not L1 (Wu and Thierry, 2011) or when immersed in their L2 (Spivey and Marian, 1999; Marian and Spivey, 2003a,b) but not when immersed in their L1 (Weber and Cutler, 2004). The current study tests whether bilinguals implicitly generate picture labels in both of their languages when tested in their L1 with a cross-modal ERP priming paradigm. The results extend previous findings by showing that not only do bilinguals implicitly generate the labels for visually fixated images in both of their languages when immersed in their L1, but also that these implicitly generated labels in one language can prime recognition of subsequently presented auditory targets across languages (i.e., L2–L1). The current study provides support for cascaded models of lexical access during speech production, as well as a new priming paradigm for the study of bilingual language processing.
bilingualism; implicit naming; phonological priming; lexical access; ERP
Permanent childhood hearing loss affects 1 to 3 per 1000 children and frequently disrupts typical spoken language acquisition. Early identification of hearing loss through universal newborn hearing screening and the use of new hearing technologies including cochlear implants make spoken language an option for most children. However, there is no consensus on what constitutes optimal interventions for children when spoken language is the desired outcome. Intervention and educational approaches ranging from oral language only to oral language combined with various forms of sign language have evolved. Parents are therefore faced with important decisions in the first months of their child’s life.
This article presents the protocol for a systematic review of the effects of using sign language in combination with oral language intervention on spoken language acquisition. Studies addressing early intervention will be selected in which therapy involving oral language intervention and any form of sign language or sign support is used. Comparison groups will include children in early oral language intervention programs without sign support. The primary outcomes of interest to be examined include all measures of auditory, vocabulary, language, speech production, and speech intelligibility skills. We will include randomized controlled trials, controlled clinical trials, and other quasi-experimental designs that include comparator groups as well as prospective and retrospective cohort studies. Case-control, cross-sectional, case series, and case studies will be excluded. Several electronic databases will be searched (for example, MEDLINE, EMBASE, CINAHL, PsycINFO) as well as grey literature and key websites. We anticipate that a narrative synthesis of the evidence will be required. We will carry out meta-analysis for outcomes if clinical similarity, quantity and quality permit quantitative pooling of data. We will conduct subgroup analyses if possible according to severity/type of hearing disorder, age of identification, and type of hearing technology.
This review will provide evidence on the effectiveness of using sign language in combination with oral language therapies for developing spoken language in children with hearing loss who are identified at a young age. The information from this review can provide guidance to parents and intervention specialists, inform policy decisions and provide directions for future research.
PROSPERO registration number
Children; Hearing loss; Spoken language; Sign language; Outcomes; Systematic review
How do the two languages of bilingual individuals interact in everyday communication? Numerous behavioral- and event-related brain potential studies have suggested that information from the non-target language is spontaneously accessed when bilinguals read, listen, or speak in a given language. While this finding is consistent with predictions of current models of bilingual processing, most paradigms used so far have mixed the two languages by using language ambiguous stimuli (e.g., cognates or interlingual homographs) or explicitly engaging the two languages because of experimental task requirements (e.g., word translation or language selection). These paradigms will have yielded different language processing contexts, the effect of which has seldom been taken into consideration. We propose that future studies should test the effect of language context on cross-language interactions in a systematic way, by controlling and manipulating the extent to which the experiment implicitly or explicitly prompts activation of the two languages.
bilingual comprehension and production; language context; language selection and inhibition; cognate; inter-lingual homograph
To examine how age of immersion and proficiency in a 2nd language influence speech movement variability and speaking rate in both a 1st language and a 2nd language.
A group of 21 Bengali–English bilingual speakers participated. Lip and jaw movements were recorded. For all 21 speakers, lip movement variability was assessed based on productions of Bengali (L1; 1st language) and English (L2; 2nd language) sentences. For analyses related to the influence of L2 proficiency on speech production processes, participants were sorted into low- (n = 7) and high-proficiency (n = 7) groups. Lip movement variability and speech rate were evaluated for both of these groups across L1 and L2 sentences.
Surprisingly, adult bilingual speakers produced equally consistent speech movement patterns in their production of L1 and L2. When groups were sorted according to proficiency, highly proficient speakers were marginally more variable in their L1. In addition, there were some phoneme-specific effects, most markedly that segments not shared by both languages were treated differently in production. Consistent with previous studies, movement durations were longer for less proficient speakers in both L1 and L2.
In contrast to those of child learners, the speech motor systems of adult L2 speakers show a high degree of consistency. Such lack of variability presumably contributes to protracted difficulties with acquiring nativelike pronunciation in L2. The proficiency results suggest bidirectional interactions across L1 and L2, which is consistent with hypotheses regarding interference and the sharing of phonological space. A slower speech rate in less proficient speakers implies that there are increased task demands on speech production processes.
bilingual; speech motor control; variability; Bengali
The need for executive control (EC) during bilingual language processing is thought to enhance these abilities, conferring a “bilingual advantage” on EC tasks. Recently, the reliability and robustness of the bilingual advantage has been questioned, with many variables reportedly affecting the size and presence of the bilingual advantage. This study investigates one further variable that may affect bilingual EC abilities: the similarity of a bilingual's two languages. We hypothesize that bilinguals whose two languages have a larger degree of orthographic overlap will require greater EC to manage their languages compared to bilinguals who use two languages with less overlap. We tested three groups of bilinguals with language pairs ranging from high- to low-similarity (German-English (GE), Polish-English (PE), and Arabic-English (AE), respectively) and a group of English monolinguals on a Stroop and Simon task. Two components of the bilingual advantage were investigated: an interference advantage, such that bilinguals have smaller interference effects than monolinguals; and a global RT advantage, such that bilinguals are faster overall than monolinguals. Between bilingual groups, these effects were expected to be modulated by script similarity. AE bilinguals showed the smallest Stroop interference effects, but the longest overall RTs in both tasks. These seemingly contradictory results are explained by the presence of cross-linguistic influences in the Stroop task. We conclude that similar-script bilinguals demonstrated more effective domain-general EC than different-script bilinguals, since high orthographic overlap creates more cross-linguistic activation and increases the daily demands on cognitive control. The role of individual variation is also discussed. These results suggest that script similarity is an important variable to consider in investigations of bilingual executive control abilities.
bilingualism; executive control; script; Stroop task; Simon task
This study asks whether early bilingual speakers, who have already developed a language control mechanism to handle two languages, control a dominant and a late-acquired language in the same way as late bilingual speakers do. We therefore compared event-related potentials in a language switching task in two groups of participants switching between a dominant language (L1) and a weak, late-acquired language (L3). Early bilingual late learners of an L3 showed a different ERP pattern (larger N2 mean amplitude) than late bilingual late learners of an L3. Even though the relative strength of the languages was similar in both groups (a dominant and a weak, late-acquired language), the groups controlled their language output in different manners. Moreover, the N2 was similar in two groups of early bilinguals tested in languages of different strength. We conclude that early bilingual learners of an L3 do not control languages in the same way as late bilingual L3 learners, who have not achieved native-like proficiency in their L2. This difference might explain some of the advantages early bilinguals have when learning new languages.
bilingual proficiency; language control; switch cost; N2 ERP component; LPC
Whether lexical selection is by competition is the subject of current debate in studies of monolingual language production. Here, I consider whether extant data from bilinguals can inform this debate. In bilinguals, theories that accept the notion of lexical selection by competition are divided between those positing competition among all lexical nodes vs. those that restrict competition to nodes in the target language only. An alternative view rejects selection by competition altogether, putting the locus of selection in a phonological output buffer, where some potential responses are easier to exclude than others. These theories make contrasting predictions about how quickly bilinguals should name pictures when non-target responses are activated. In Part 1, I establish the empirical facts for which any successful theory must account. In Part 2, I evaluate how well each theory accounts for the data. I argue that the data do not support theories that reject lexical selection by competition, and that although theories where competition for selection is restricted to the target language can be altered to fit the data, doing so would fundamentally undermine the distinctness of their position. Theories where selection is by competition throughout both target and non-target language lexicons must also be modified to account for the data, but these modifications are relatively peripheral to the theoretical impetus of the model. Throughout, I identify areas where our empirical facts are sparse, weak, or absent, and propose additional experiments that should help to further establish how lexical selection works, in both monolinguals and bilinguals.
bilingual; lexical selection; selection by competition; response exclusion; picture–word interference; multilingual processing model; language-specific selection model; response exclusion hypothesis
Does the brain of a bilingual process language differently from that of a monolingual? We compared how bilinguals and monolinguals recruit classic language brain areas in response to a language task and asked whether there is a “neural signature” of bilingualism. Highly proficient and early-exposed adult Spanish-English bilinguals and English monolinguals participated. During functional magnetic resonance imaging (fMRI), participants completed a syntactic “sentence judgment task” [Caplan, D., Alpert, N., & Waters, G. Effects of syntactic structure and propositional number on patterns of regional cerebral blood flow. Journal of Cognitive Neuroscience, 10, 541-552, 1998]. The sentences exploited differences between Spanish and English linguistic properties, allowing us to explore similarities and differences in behavioral and neural responses between bilinguals and monolinguals, and between a bilingual's two languages. If bilinguals' neural processing differs across their two languages, then differential behavioral and neural patterns should be observed in Spanish and English. Results show that behaviorally, in English, bilinguals and monolinguals had the same speed and accuracy, yet, as predicted from the Spanish-English structural differences, bilinguals had a different pattern of performance in Spanish. fMRI analyses revealed that both monolinguals (in one language) and bilinguals (in each language) showed predicted increases in activation in classic language areas (e.g., left inferior frontal cortex, LIFC), with any neural differences between the bilingual's two languages being principled and predictable based on the morphosyntactic differences between Spanish and English. However, an important difference was that bilinguals had a significantly greater increase in the blood oxygenation level-dependent signal in the LIFC (BA 45) when processing English than the English monolinguals. 
The results provide insight into the decades-old question about the degree of separation of bilinguals' dual-language representation. The differential activation for bilinguals and monolinguals raises the question of whether there may be a “neural signature” of bilingualism. Differential activation may further provide a fascinating window into the language processing potential not recruited in monolingual brains and reveal the biological extent of the neural architecture underlying all human language.
Alice, a deaf girl who received a cochlear implant after the age of three, was exposed to four weeks of storybook sessions conducted in American Sign Language (ASL) and speech (English). Two research questions were addressed: (1) how did she use her sign bimodal/bilingualism, codeswitching, and codemixing during reading activities, and (2) what sign bilingual codeswitching and codemixing strategies did she use while attending to stories delivered under two treatments: ASL only and speech only. Retelling scores were collected to determine the type and frequency of her codeswitching/codemixing strategies between the two languages after Alice was read a story in ASL and in spoken English. Qualitative descriptive methods were utilized. Teacher, clinician, and student transcripts of the reading and retelling sessions were recorded. Results showed that Alice frequently used codeswitching and codemixing strategies while retelling the stories under both treatments. Her speech production increased in retellings of the stories under both the ASL storyreading and the spoken English-only reading of the story. The ASL storyreading did not decrease Alice's retelling scores in spoken English. Professionals are encouraged to consider the benefits of early sign bimodal/bilingualism to enhance the overall speech, language, and reading proficiency of deaf children with cochlear implants.
Speech–sign or “bimodal” bilingualism is exceptional because distinct modalities allow for simultaneous production of two languages. We investigated the ramifications of this phenomenon for models of language production by eliciting language mixing from eleven hearing native users of American Sign Language (ASL) and English. Instead of switching between languages, bilinguals frequently produced code-blends (simultaneously produced English words and ASL signs). Code-blends resembled co-speech gesture with respect to synchronous vocal–manual timing and semantic equivalence. When ASL was the Matrix Language, no single-word code-blends were observed, suggesting stronger inhibition of English than ASL for these proficient bilinguals. We propose a model that accounts for similarities between co-speech gesture and code-blending and assumes interactions between ASL and English Formulators. The findings constrain language production models by demonstrating the possibility of simultaneously selecting two lexical representations (but not two propositions) for linguistic expression and by suggesting that lexical suppression is computationally more costly than lexical selection.
This article provides an overview of bilingualism research on visual word recognition in isolation and in sentence context. Many studies investigating the processing of words out-of-context have shown that lexical representations from both languages are activated when reading in one language (language-non-selective lexical access). A newly developed research line asks whether language-non-selective access generalizes to word recognition in sentence contexts, providing a language cue and/or semantic constraint information for upcoming words. Recent studies suggest that the language of the preceding words is insufficient to restrict lexical access to words of the target language, even when reading in the native language. Eye tracking studies revealing the time course of word activation further showed that semantic constraint does not restrict language-non-selective access at early reading stages, but there is evidence that it has a relatively late effect. The theoretical implications for theories of bilingual word recognition are discussed in light of the Bilingual Interactive Activation+ model (Dijkstra and van Heuven, 2002).
bilingualism; visual word recognition; sentence processing; eye tracking