Cogn Sci. Author manuscript; available in PMC 2017 May 1.
Published online 2015 July 22. doi: 10.1111/cogs.12257
PMCID: PMC4723295
NIHMSID: NIHMS690625

The Relationship between Artificial and Second Language Learning

INTRODUCTION

Research on language has seen a remarkable rise in the use of artificial language learning experiments since their introduction nearly 90 years ago (Esper, 1925). The term Artificial Language Learning generally refers to an experimental paradigm where participants learn a language, or language-like system, in a lab setting and are then tested on what they learned. The reasons for using artificial languages are diverse and include allowing for a controlled exploration of the principles of universal grammar (Culbertson, Smolensky, & Legendre, 2012; Ettlinger, Bradlow, & Wong, 2013; Finley & Badecker, 2009), of how domain-general cognitive mechanisms might support language and language learning (Saffran, Aslin, & Newport, 1996), of the principles of language change (Esper, 1925), of the relationship between first and second language learning and between adult and child language learning (Finn & Hudson Kam, 2008), and of the processes involved in second language learning (Friederici, Steinhauer, & Pfeifer, 2002; Morgan-Short, Faretta-Stutenberg, Brill-Schuetz, Carpenter, & Wong, 2014; Morgan-Short, Sanz, Steinhauer, & Ullman, 2010).

Despite the recent ubiquity of this research, there has been little work that has established a clear relationship between performance in lab-based artificial language learning experiments and learning a natural language in an ecologically valid environment, which is the primary question addressed by the current study. In considering this question, two related questions also arise: Given interest in a possible dissociation between language learning and general cognitive capabilities (Hauser, Chomsky, & Fitch, 2002; Pinker & Jackendoff, 2009), does a relationship between natural and artificial language learning still hold after controlling for general intelligence? Does this relationship differ for different measures of artificial language learning?

In the present study, we seek to bridge this gap between artificial and natural language learning research by examining the performance of adult learners in both second language (L2) and artificial language learning environments. For the artificial language learning study, we recruited a cohort of students enrolled in a Spanish language class. We obtained a number of different measures of their classroom performance, their Spanish ability and their general intelligence. The artificial language learning task included a number of different measures in a single task to represent some of the different types of artificial languages that have been used in other studies. Our analysis also examined the relationship between the artificial and second language learning measures and a measure of general intelligence, IQ.

Artificial Language Learning

Researchers generally use the term Artificial Language Learning to refer to an experimental paradigm where participants learn a language, or language-like system, in a lab setting and are then tested on what they learned (e.g., Friederici et al., 2002; Gomez & Gerken, 2000). Artificial language learning studies, however, take many different forms and go by many different names. The original terminology of “artificial linguistic system” (Esper, 1925, p. 1) has been expanded to include artificial grammar learning, which generally focuses on the combinatoric aspects of language, often in the absence of meaning (Reber, 1967; Saffran et al., 1996); miniature language learning, which generally refers to learning aspects of an invented language or of part of a natural language, e.g., a determiner system with semantics (Hudson Kam & Newport, 2005) or the case and classifier system of Japanese (Mueller, Hahne, Fujii, & Friederici, 2005); miniature artificial language, which refers to made-up grammatical categories and word-order rules (Braine, 1963); and semi-artificial language learning, which generally refers to a portion of a natural language, often modified for experimental purposes (Williams, 2005).

The paradigm of the first artificial language learning experiment (Esper, 1925) would be quite familiar to experimenters today. The study examined biases in language learning by exposing participants to pairings of words with pictures of abstract shapes of different colors. After training, participants were presented with just the abstract pictures, which they then had to name. In this, and many of the original artificial language learning experiments, the experimenters looked at the mistakes participants made as indicative of learning biases. For example, in one experiment in Esper (1925), the words presented were bi-morphemic with the first consonant-vowel-consonant-consonant (CVCC) morpheme representing color and the second VC morpheme representing shape. The most common error was that participants would often re-segment the stimuli into two consonant-vowel-consonant (CVC) morphemes, suggesting that a principle of linguistic change involved the alignment of morpheme breaks with syllable breaks. This finding has also been interpreted to reflect a bias against morphemes with complex codas and without onsets.

The 1960s and 1970s saw a significant shift in the nature of artificial language learning experiments. Exemplified by Reber (1967), which is often cited as the first modern artificial language learning experiment, studies started to focus on the combinatoric elements of language, reflecting a shift in the focus of research on language from the study of language change to syntax and generative grammar (Chomsky, 1957, 1965). In Reber’s study, participants were trained on sequences of letters generated by a finite-state grammar without any corresponding meaning associated with the sequences. Participants successfully learned which novel letter sequences were valid with respect to the finite state grammar and crucially reported no explicit awareness of the rules, suggesting an implicit process enabling the extraction of regularities from an input. A similar method of artificial grammar learning has been used with syllables and transitional probabilities (Saffran et al., 1996), words and different types of grammars (Opitz & Friederici, 2002, 2003, 2004), musical notes and finite state grammars (Tillmann, Bharucha, & Bigand, 2000) and various other permutations of these paradigms and stimuli.

The subsequent decades saw a dramatic rise, not only in the frequency, but also in the variety of artificial language learning studies. One recent advance is the use of different participant populations, including children and infants (Gomez & Gerken, 2000; Newport & Aslin, 2004; Saffran et al., 1996), primates (Fitch & Hauser, 2004) and songbirds (Gentner, Fenn, Margoliash, & Nusbaum, 2006). Artificial language learning paradigms are also used in brain imaging studies (Friederici et al., 2002; McNealy, 2006; Morgan-Short, Steinhauer, Sanz, & Ullman, 2012) to explore the neural bases of language learning. In addition to continued research on the precise syntactic properties that facilitate language learning (Friederici, Bahlmann, Heim, Schubotz, & Anwander, 2006; Knowlton & Squire, 1996), artificial language learning studies have explored all levels of linguistic structure from phonetics (Wong et al., 2008) and phonology (Ettlinger et al., 2013; Finley & Badecker, 2009; Moreton, 2008; Wilson, 2006) to semantics (Mirkovic, Forrest, & Gaskell, 2011; Mori & Moeser, 1983; Petersson, Folia, & Hagoort, 2010) and pragmatics (Galantucci & Garrod, 2011; Nagata, 1987). Artificial language learning studies have also been used to explore the degree to which language learning may be explained by domain-general learning mechanisms (Aslin & Newport, 2008; Newport, Hauser, Spaepen, & Aslin, 2004; Saffran et al., 1996). Finally, artificial language learning experiments have recently been incorporated into iterated learning studies where findings come not from seeing whether the languages are learned, but rather, seeing what happens to the artificial languages after several cycles of learning and transmission as a method of exploring principles of language evolution and change (Galantucci, 2005; Kirby, Cornish, & Smith, 2008; Rafferty, Griffiths, & Ettlinger, 2013; Reali & Griffiths, 2009; Smith & Wonnacott, 2010). In addressing these and other issues, artificial languages can be viewed as serving as test-tube models of natural language that allow researchers to examine precise issues about language that are not readily testable with natural language (e.g., Morgan-Short et al., 2012).

As a Predictor of Second Language Acquisition

Despite the use of the artificial language learning paradigm as a way to explore the human language faculty, very little research has explored the relationship between artificial and natural language learning, particularly with respect to second language learning. Indeed, many papers acknowledge the important caveats associated with their findings. For example, Braine (1963) acknowledges that, “[a]lthough experiments with artificial languages provide a vehicle for studying learning and generalization processes hypothetically involved in learning the natural language, they cannot, of course, yield any direct information about how the natural language is actually learned [emphasis added]” (p. 324). Similarly, Ferman, Olshtain, Schechtman, and Karni (2009) note that, “[…] one may argue that the simplified language and laboratory conditions afforded in artificial language paradigms may not express the complexity of natural language or of real-life learning conditions. These arguments, however, express a classic dilemma that inevitably arises as the price of experimental control in laboratory research” (p. 387).

A few studies have explored the relationship between artificial and natural language learning indirectly. Some have shown that people with language-related impairments including aphasia (Christiansen, Kelly, Shillcock, & Greenfield, 2010; Dominey, Hoen, Blanc, & Lelekov-Boissard, 2003; Goschke, Friederici, Kotz, & van Kampen, 2001), specific language impairment (Evans, Saffran, & Robe-Torres, 2009) and developmental dyslexia (Pothos & Kirk, 2004) perform worse on artificial language learning tasks than healthy controls. For example, Evans et al. (2009) showed that children with specific language impairment performed worse in an artificial language learning experiment involving the acquisition of transitional probabilities for both syllables and notes. Furthermore, for individuals in both groups, native language receptive vocabulary correlated with their performance in the artificial language learning task. Similarly, Misyak and Christiansen (2012) found a correlation between artificial language learning and some aspects of native language ability, including vocabulary and the comprehension of complex sentences, and Misyak, Christiansen, and Tomblin (2010a) found a correlation between learning an artificial language with non-adjacent dependencies and the processing of long distance dependencies in natural language.

An indirect relationship between artificial and natural language learning may also be inferred by virtue of the fact that both artificial and second language learning correlate with a third variable, verbal working memory. Examples of the relationship between language learning and working memory are well documented (Michael & Gollan, 2005; Robinson, 2005a, 2002; Williams, 2012) and include both first and second language skill. Examples of research showing a relationship between artificial language learning and working memory include Misyak and Christiansen (2012), where a measure of verbal working memory correlated significantly with the statistical learning of adjacent (r = .46) and non-adjacent (r = .53) transitional probabilities. Thus, artificial and second language learning may be related to each other by virtue of being supported by working memory. Were this the case, it would raise the question of whether artificial language learning studies tap into language-specific learning abilities or whether they assess participants’ general learning abilities or general intelligence, which in turn play a role in second language learning (Genesee, 1976).

Finally, Robinson (2005b, 2010) directly explored the relationship between artificial and second language learning by comparing performance on two artificial grammar learning tasks with a brief second language learning task. The artificial language component of the study included standard explicit and implicit artificial grammar tasks, which required participants to view and later judge strings of letters generated by an artificial grammar (implicit) or to view a series of letters and choose the letter that best completed the series (explicit). The second language learning task involved exposing the participants to sentences reflecting three different grammatical rules from a natural language (Samoan), and then testing learners on the grammatical rules of the language. In addition, Robinson assessed participants’ language learning aptitude, working memory and intelligence.

No relationship was found between either of the artificial language learning tasks and the natural language learning tasks. In addition, artificial and second language learning tasks correlated with different cognitive abilities: (a) the implicit artificial grammar learning task correlated negatively with IQ, (b) the explicit artificial language learning task correlated positively with aptitude, and (c) the natural language learning task correlated positively with working memory. Robinson suggests that the lack of a relationship between the two learning tasks may be attributed to the fact that the artificial language learning task relied primarily on implicit learning mechanisms and lacked the semantics that are crucially involved in natural second language learning. Similarly, Brooks and Kempe (2013) explored the relationship between learning a small portion of Russian grammar and learning an artificial syntactic grammar using pseudo-words presented auditorily. The results showed that there was no relationship between the auditory sequence learning and L2 learning after controlling for metalinguistic awareness as assessed in a post-hoc interview.

The results of these two studies may be specific to the particular artificial grammar learning paradigm, which did not include semantics, and to the limited nature of the L2 learning assessment. The findings may not necessarily generalize to the relationship between other types of artificial and second language grammar learning or between artificial grammar learning and other aspects of language, such as vocabulary, word segmentation, phonology and pronunciation, literacy or any other measures of second language aptitude. Crucially, in these previous studies, the second language learning took place over the course of less than a week, whereas typical second language learning generally occurs over the course of weeks, months and years.

Thus, previous research on the relationship between artificial and natural language has primarily focused on language disorders or on first language ability, whereas the studies that focused on the relationship between artificial and second language learning showed null results, perhaps due to the limited nature of the experiments used. Indeed, the majority of research in this area has focused on one specific type of artificial language learning, that of artificial grammar learning, where no meaning is assigned to the artificial structures being acquired. Limiting research to artificial grammar learning studies also fails to provide comparisons of different artificial language learning paradigms.

In the present study, we explored (a) the relationship between artificial and second language learning, (b) whether such a relationship still holds after controlling for general intelligence, and finally (c) whether this relationship differs for different measures of artificial language learning. We addressed these issues in the following manner: We used an artificial language learning paradigm that included semantics; we measured participants’ IQ to examine the relationship between artificial language learning and L2 learning while controlling for general intelligence; we included a number of different artificial language learning measures, including recall, a simple grammar and a complex grammar; and we included a more comprehensive assessment of L2 ability. This allowed us to consider which aspects of L2 learning are tapped into by artificial language learning experiments, rather than treating L2 learning as a monolithic cognitive function. Given the diversity of metrics that are used to quantify L2 ability, different measures of artificial language learning may reflect different facets of L2 learning.

Because artificial language learning studies are attempts at simplifying language learning for a lab setting, we predicted that the complex artificial language learning measure would most closely correlate with objective measures of L2 ability. By the same token, because classroom performance incorporates a number of different skills, including language learning, homework completion, memorization, test preparation, etc., we predicted that the composite measure of artificial language learning, which includes recall, simple grammar learning and complex grammar learning, would most closely correlate with overall measures of classroom performance. We also predicted that IQ would mediate the relationship between the composite measure of artificial language learning and classroom performance, as both incorporate general cognitive capabilities beyond language learning, including skills associated with IQ. Conversely, we predicted that IQ would not mediate the relationship between complex artificial language learning and L2 ability, as we hypothesized that these measures index language learning more exclusively.

METHODS

Participants

Participants were 44 adults (23 female) enrolled in a fourth semester Spanish language class at a university in Chicago, Illinois that focused on learning and using vocabulary, grammar and culture for communicative purposes. Participants were recruited over two separate semesters and received monetary compensation. Participants’ average age was 21.7 years (SD = 2.9) and the mean age of initial exposure to Spanish was 13.5 years old (SD = 5). None of the participants had more than five years of classroom experience with Spanish, though 8 participants indicated ages of acquisition of less than 11 years based on general exposure to the language. Thirty-two of the 44 participants were monolingual native English speakers aside from their experience with Spanish and none of them were heritage speakers of the language. The 12 bilingual participants had experience with languages other than Spanish (i.e., 6 Gujarati, 2 Tagalog, 1 ASL, 1 Haitian Creole, 1 Hindi, 1 Tamil). None of the languages participants knew shared the properties critical to the morphophonological system of the artificial language that participants learned in the study.

Instruments

We evaluated participants using measures of artificial language learning skill, measures of Spanish learning skill and measures of general intelligence. The artificial language learning test made use of a morphophonological grammar learning paradigm that included a semantic component. Participants were tested on both recall of the artificial language and on generalization for two morphophonological processes – simple and complex – to assess artificial language learning ability. The evaluation of Spanish classroom learning incorporated separate measures of classroom performance, subjective teacher assessments, and objective measures of Spanish interpretation and production. The general intelligence assessment included standardized measures of both verbal and non-verbal IQ.

Artificial language learning

The artificial language in this study has previously been used to explore the relationship between language learning and domain-general cognitive abilities (Ettlinger et al., 2013; Wong, Ettlinger, & Zheng, 2013). In this paradigm, participants were trained on a morphophonological system for combining affixes with words to form new words. Participants were tested on the words they were trained on and then tested on their ability to extend the grammar to another set of withheld words.

Artificial Language Stimulus

The language consisted of 30 noun stems and two affixes: a prefix, [ka-], marking the diminutive (e.g., as in English doggy) and a suffix, [-il], marking the plural (e.g., dogs). The nouns represented 30 different animals and freely combined with the affixes to produce 120 different words.

The phonological inventory consisted of American English consonants and three American English vowels, [i, e, a] each used within a CVC structure to produce ten unique nouns for each vowel. No English words or Spanish words were used. Given the diversity of other languages known by bilingual participants, words from other languages were not overtly avoided.

The grammar of the language had two word formation rules as depicted in Figure 1. The SIMPLE type, applicable to i-stems and a-stems, consisted of concatenating the stems with the suffix [-il] and/or prefix [ka-] without changing any vowels. The COMPLEX type, applicable to e-stems, consisted of concatenation plus changing vowels in the stem and affix. More specifically, the changes reflected two processes absent from English or Spanish. First, vowel harmony changed vowels in the suffix so they had the same (jaw) height as the stem vowel (e.g., the plural of [mez], ‘cat’, became [mez-el] ‘cats’ (compare [vab-il] ‘cows’)); second, vowel harmony was also triggered by the prefix [ka-], which changed stem vowels to low (e.g., [ka-maz], ‘little cat’). When combined, they yielded complex e-stem words [ka-maz-el] as contrasted with simple i-stem words [ka-bis-il]. Vowel harmony is a relatively common phonological phenomenon and is estimated to occur in hundreds of languages (out of ~6,500) around the world (van der Hulst & van de Weijer, 1995). The particular vowel harmony grammatical system used in the current study was based on the language Shimakonde (Ettlinger, 2008).

Figure 1
Example words from the artificial grammar. Arrows point from the trigger to the target of a pattern. In the plural, the [e] in the stem [mez] changes the suffix to [el]. In the diminutive, the [a] in the prefix [ka-] changes the stem to [maz]. In the ...
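To make the two word-formation rules concrete, the following short sketch in R (the software used for the analyses reported in the Results) applies them to the stems mentioned above. It is an illustration of the pattern only, not the stimulus-generation code used in the study.

# Illustrative sketch of the simple and complex word-formation rules.
# The stems [mez], [vab] and [bis] appear in the text; everything else is generic.
inflect <- function(stem, plural = FALSE, diminutive = FALSE) {
  stem_vowel <- regmatches(stem, regexpr("[iea]", stem))
  out <- stem
  if (diminutive) {
    if (stem_vowel == "e") out <- sub("e", "a", out)  # prefix [ka-] lowers an e-stem vowel
    out <- paste0("ka", out)                          # diminutive prefix [ka-]
  }
  if (plural) {
    suffix <- if (stem_vowel == "e") "el" else "il"   # suffix harmonizes with e-stems
    out <- paste0(out, suffix)                        # plural suffix
  }
  out
}
inflect("vab", plural = TRUE)                      # "vabil"   (simple)
inflect("mez", plural = TRUE)                      # "mezel"   (complex: suffix harmony)
inflect("mez", diminutive = TRUE)                  # "kamaz"   (complex: stem lowering)
inflect("mez", plural = TRUE, diminutive = TRUE)   # "kamazel"
inflect("bis", plural = TRUE, diminutive = TRUE)   # "kabisil" (simple)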

Using Praat (Boersma & Weenink, 2005), a native English speaker was recorded saying each of the words at a normal rate with English prosody and phonology so as to sound natural and fluent. Each word had a corresponding picture of an easily recognizable animal/set of animals, with the small animal picture being a shrunken, diminutive version of the standard sized picture. All stimulus and test items are shown in Appendix I.

Artificial Language Learning Procedure

Participants were only told that they would be exposed to a language and then tested on what they learned. They were given no instruction on the rules of the language, nor were they told that there were any rules to learn. Auditory stimuli were presented over headphones. Visual stimuli (pictures of the words’ meanings) were presented on a computer monitor and participants recorded responses on a button box.

Training consisted of passive exposure to word-picture pairings, with no feedback. During the 20 minutes of exposure, each participant was exposed to four repetitions of 12 nouns in all four forms for a total of 192 tokens in random order. Four nouns were complex (/e/), four were simple with /i/-stems and four were simple with /a/-stems. Each picture was displayed for three seconds with a one second ISI. Each audio clip was about one second long and started 500 ms after the picture appeared.

At the end of training, participants were tested on their recollection of the 48 words for which they had received training. During the testing of trained items, participants saw a picture and heard two words in a two-alternative forced-choice task. The foil reflected the incorrect form of the suffix (e.g., kagadel vs. kagadil) or stem (e.g., kagad vs. kaged) – foils for each item are detailed in Appendix I. Each trained item was tested once, in random order. The first alternative was heard 500 milliseconds after the picture appeared, the second word 1500 milliseconds after the first (with an ISI of around 500 milliseconds, depending on word length). Order of presentation for the answer and foil was randomized and balanced across the study. Participants had 3 seconds after the beginning of the second word to respond.

After the testing of trained items, participants were tested on their ability to apply the grammar they learned to new words in a version of a wug-test (Berko, 1958). Wug-tests, which are used to assess grammatical knowledge, particularly in children, involve prompting a participant with a new word (e.g., wug) and then asking them to produce the word with a modified meaning (e.g., wugs), ensuring that participants are displaying knowledge of the grammar, rather than recall of an inflected word. Here, participants saw a picture (from the group of 18 withheld nouns) for 1500 ms and heard the corresponding new word. After seeing this word-picture pairing, participants saw a blank screen (one second), then another picture of the same animal but either as a plural, diminutive or diminutive-plural (e.g., first a lion, then many small lions). After seeing the second picture, participants heard two alternative words and chose, via button press, the word that named the second picture. The trials, which included both simple and complex untrained test items, were presented in random order with no feedback provided. The foils for this two-alternative forced choice wug-test are shown in Appendix I.

General Intelligence

After the artificial language learning test, participants were administered the Kaufman Brief Intelligence Test, Second Edition (KBIT; A. S. Kaufman & Kaufman, 2004). This test measures IQ, including verbal and non-verbal subcomponents. The test takes approximately 20 minutes and was administered in English.

Spanish Ability

There were several components of the Spanish language assessment used to measure participants’ success in learning Spanish in the classroom. One component was the participants’ final classroom grades. Instructors reported students’ overall final grade in the course, which was comprised of students’ grades on (a) chapter exams (55%), which included assessment of vocabulary, grammatical concepts, cultural readings and videos from particular chapters in the instructional text (VanPatten, Leeser, Keating, & Roman-Mendoza, 2005); (b) pop quizzes (15%), comprised of ten quizzes administered over the course of the semester that assessed any area that the class instructor wanted to test; (c) online homework (15%), which reflected participants’ performance on weekly online homework assignments on vocabulary, grammar and culture throughout the semester (students could attempt each homework activity up to three times, with scores reflecting the best attempt only) and (d) participation in class (15%), which was the average of a daily score based on attendance, preparedness, and participation in the Spanish classroom. The instructors also reported the students’ grades specific to each of the four graded elements included in the final classroom grade. In addition to reporting student grades, we also asked each instructor to rate participating students on a 1-5 scale based on their reading, writing, speaking and comprehension abilities.

We also used two other more objective assessment instruments. The first was the Elicited Imitation Task (Vinther, 2002). Used for decades as a measure of implicit knowledge of a second language (Naiman, 1974), the Spanish version of the Elicited Imitation Task involves listening to sentences in Spanish, then repeating the same sentence in Spanish within a limited time-span. Because people are limited in what they can repeat based on what they can process (Gray & Ryan, 1973), the Elicited Imitation Task serves as a useful tool for rapid language assessment, and correlates significantly with more time-intensive measures (Erlam, 2006). The specific Elicited Imitation Task used here was adapted from Ortega (2000) to reflect Mexican Spanish, which is the dialect reflected in the course textbook. The Elicited Imitation Task is comprised of 30 sentences that vary in their grammatical complexity as well as syllable count (with a range of 7 – 17 syllables). Participants were instructed to listen to the recorded sentences in Spanish, which were presented one at a time, and to repeat each sentence out loud after hearing a beep that sounded after each sentence. Participants’ responses were digitally recorded. The digital recordings were transcribed by two independent raters and then scored following the protocol from Ortega (2000). Each sentence could earn a score of 0, 1, 2, 3, or 4 based on the accuracy of the repetition, yielding a maximum possible score of 120. Any discrepancies between raters one and two were resolved by a third rater who listened to the recordings independently and provided a final score.
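As a minimal sketch of the scoring arithmetic just described (item scores of 0–4 summed over 30 sentences, with disagreements settled by the third rater), the R fragment below uses simulated placeholder score vectors, not the study's data; the actual protocol follows Ortega (2000).

# Hypothetical per-sentence scores (0-4) from three raters for 30 sentences.
set.seed(7)
rater1 <- sample(0:4, 30, replace = TRUE)
rater2 <- rater1; rater2[c(3, 12)] <- 2           # two illustrative disagreements
rater3 <- rater1                                   # third rater's resolutions
item_scores <- ifelse(rater1 == rater2, rater1, rater3)
eit_total <- sum(item_scores)                      # maximum possible score: 30 * 4 = 120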

The second objective measure was a brief test of a specific aspect of grammatical knowledge in Spanish, the subjunctive of doubt construction, which is taught explicitly in Spanish language classrooms. It had been originally taught during the previous semester of Spanish and was targeted for review in the present semester, ensuring adequate opportunity to use it. The test was adopted from Farley (2001) and was comprised of two sections: a comprehension portion and a production portion, with the order counterbalanced across participants. For the 24 interpretation items, participants were required to choose between two possible main clauses to begin a sentence whose ending was provided. The critical items were designed to assess knowledge of clauses that require the use of the subjunctive such that participants had to interpret the subjunctive form of the verb in the sentence ending in order to decide which of the two possible main clauses could begin the sentence. Eleven of the test items were distractors and the thirteen critical items were scored as correct or incorrect resulting in a percent correct for each participant.

The production portion of the language assessment was a fill-in-the-blank task where participants were required to complete a provided sentence with the correct form of a verb (provided in infinitive form). This portion of the task included 18 items with ten critical items that were designed to elicit a choice between the subjunctive and indicative moods.

Example questions for both tests are provided in Appendix II.

Procedure

Participants came into the lab over a two-week span in the sixth and seventh weeks of the semester. This served to control for the amount of instruction in this class that they had received prior to the study and to minimize the differences in skill level between participants, though some differences remained. Participants began with the Elicited Imitation Task, then filled out the LEAP-Q questionnaire regarding their language background (Marian, Blumenfeld, & Kaushanskaya, 2007). Participants then took the artificial language learning test followed by the KBIT and the Spanish test of the subjunctive construction. Information about the participants that was provided by the instructors (i.e., classroom grades, teacher assessments) was obtained the week after the last day of the semester.

RESULTS

The means, standard deviations and ranges for all measures obtained are provided in Table 1.

Table 1
Averages, standard deviations, and ranges for all measures obtained for the 44 participants in our study.

For the artificial language learning test, participants performed significantly above chance on the recall of the trained items (t(43) = 7.6, p < .001) (Figure 2). Participants also performed significantly above chance on the simple untrained items (t(43) = 4.3, p < .001), but were not above chance on complex untrained items (t(43) = .11, p = .91) (Figure 2). However, fourteen of the 44 participants successfully learned the complex grammar and performed significantly above chance for the complex measure (at p < .05 for binomial probability, proportion correct > .66), reflecting a substantial range in learning success across participants for complex items. As highlighted in Ettlinger et al. (2013), below-chance performance on complex items may indicate an interesting aspect of learning and performance. Therefore, we conducted additional statistical analyses taking that into consideration and included these analyses in Appendix III.

Figure 2
Performance on artificial language learning tests. Wide marker indicates average. Dotted line indicates significantly above/below chance; dashed line is chance.
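The above-chance criteria reported here can be illustrated with a short R sketch. The number of complex test trials per participant is not stated in this excerpt, so n = 24 below is purely an assumption for illustration, and the scores in the group-level test are simulated rather than actual data.

# Per-participant criterion: smallest number correct on n two-alternative
# trials whose one-tailed binomial p-value (chance = .5) falls below .05.
n <- 24                                    # assumed trial count, illustration only
p_vals <- sapply(0:n, function(k)
  binom.test(k, n, p = 0.5, alternative = "greater")$p.value)
k_min <- min(which(p_vals < .05)) - 1      # indices start at k = 0
k_min / n                                  # proportion-correct threshold

# Group-level test against chance, analogous to the t-tests reported above.
set.seed(1)
scores <- rbinom(44, size = n, prob = 0.55) / n   # simulated proportions correct
t.test(scores, mu = 0.5)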

With respect to second language learning, there were significant positive correlations amongst all measures of Spanish ability (Table 2), suggesting that there is internal consistency amongst the different measures. The Elicited Imitation Task, in particular, correlates significantly with almost all of the other measures of Spanish ability. The two measures that do not correlate with the Elicited Imitation Task are homework and class participation grades, which arguably measure effort rather than acumen. All measures of teachers’ subjective evaluations of the students in reading, writing, speaking and comprehension were highly correlated with each other, suggesting minimal distinctiveness among the measures. Also, only final exam score correlates with IQ among the Spanish ability measures, corroborating previous research suggesting that general intelligence or IQ only explains a small portion of the variance observed in language learning (Robinson, 2005a). There were also positive, but not significant, correlations between the three artificial language learning measures (recall vs. simple: r(43) = .14, p = .34; recall vs. complex: r(43) = .24, p = .11; simple vs. complex: r(43) = .03, p = .87).

Table 2
Correlations across all measures of artificial and second language learning.

Our primary interest is in the relationship between the artificial language learning and natural language learning measures. The three main questions that are being addressed are: Is there a relationship between artificial language learning and natural language learning? Does this relationship still hold after correcting for IQ? Does this relationship differ for different measures of artificial language learning?

As a preliminary exploratory analysis to address the first question, we consider the overall correlations shown in Table 2. There were a number of positive correlations between the three artificial language learning measures and the different measures of Spanish learning ability. These reach r = .49 for class grades and r = .58 for teacher-evaluated performance, which compares favorably to previous studies on the relationship between natural language learning ability and measures of working memory and artificial grammar learning, which show correlations around r = .40 (Misyak & Christiansen, 2012; Robinson, 2005b).

To address the second question, on the relationship between artificial language learning and second language learning independent of the effects of general intelligence, we performed a first-level analysis using correlations amongst key measures controlling for IQ (Table 3). Importantly, there was still a significant correlation between overall final class grade and composite artificial language learning performance (r(41) = .44, uncorrected p = .001).

Table 3
Correlations across artificial language learning measures and classroom performance, correcting for IQ. Abbreviations as in Table 2.
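The IQ-controlled correlations in Table 3 can, in principle, be reproduced with a simple residualization approach. The sketch below is a generic first-order partial correlation in R; the column names (dat$final_grade, dat$all_composite, dat$iq) are placeholders rather than the study's actual variable names.

# First-order partial correlation: residualize both measures on the control
# variable, correlate the residuals, and test with df = n - 3.
partial_cor <- function(x, y, z) {
  rx <- resid(lm(x ~ z))
  ry <- resid(lm(y ~ z))
  r  <- cor(rx, ry)
  df <- length(x) - 3
  t  <- r * sqrt(df / (1 - r^2))
  c(r = r, p = 2 * pt(-abs(t), df))
}
# e.g., partial_cor(dat$final_grade, dat$all_composite, dat$iq)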

A more conservative analysis utilizes Bonferroni correction, given the large number of comparisons involved in this study. For the 50 comparisons (10 classroom measures x [4 artificial language learning measures + IQ]), a very conservative threshold of p < .001 may be used. After this correction, and when controlling for IQ, Composite artificial language learning was still significantly correlated with Exam Score, Quiz Score, and Reading, Writing and Comprehension assessment score, and Complex artificial language learning was still significantly correlated with the Elicited Imitation Task.

The relatively low number of participants for this individual differences study motivates additional significance testing. First, a Monte Carlo simulation can be used to estimate the likelihood of obtaining the correlations reported in Table 3 simply by chance. For each of 10,000 simulation iterations, scores were generated for 44 participants on 17 performance measures using R (R Development Core Team, 2010). The scores were generated by randomly sampling, with replacement, a value from the actual scores for each performance measure. Thus, the Recall scores for each of the 44 simulated participants were generated by randomly selecting from one of the 44 actual Recall ALL scores; then the Complex ALL scores were randomly selected from the actual Complex ALL scores, and so on for all 17 measures. This ensures the distributional properties of the scores are retained, even if they are not normal, and simulates what the results of our study would be had there been no relationship between artificial language learning, natural language learning and IQ for each participant. The correlations between these randomly generated scores were calculated in the same manner as the results in Table 3. A histogram of the correlation coefficients is shown in Figure 3. This histogram shows the correlation values one would obtain by conducting these analyses on random data.

Figure 3
Histogram of correlation coefficients obtained from a Monte Carlo simulation replicating this study.
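A sketch of this resampling procedure is shown below, reusing the partial_cor helper sketched earlier and shown for a single pair of measures (the study recomputed the full set of Table 3 correlations on each iteration); dat is again a placeholder data frame with one column per measure.

# Independently resample every measure (with replacement) so that any true
# association between measures is destroyed, then recompute the
# IQ-partialled correlation; repeat 10,000 times.
set.seed(42)
null_r <- replicate(10000, {
  shuffled <- as.data.frame(lapply(dat, function(col)
    sample(col, length(col), replace = TRUE)))
  partial_cor(shuffled$final_grade, shuffled$all_composite, shuffled$iq)["r"]
})
quantile(abs(null_r), c(.95, .999))   # chance thresholds comparable to Figure 3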

As expected, most of the correlations are close to zero. Correlations above .27 occurred in 5% of the simulations, correlations above .39 occurred in only 0.1% of the simulations, and there were no correlations as large as .45 in any of the 10,000 simulations of 48 correlations. This suggests that the correlations observed in Table 3, particularly those above .39, are extremely unlikely to be due to chance.

Second, the possibility of any individual participant significantly biasing the findings can be mitigated using a leave-one-out analysis. The correlations between composite artificial language learning score and overall classroom grade and Elicited Imitation Task performance were calculated, controlling for IQ, 44 times, each time leaving out one participant. The correlation between composite artificial language learning score and overall classroom performance ranged from .40 to .51, with a mean of .44 and standard deviation of .020, and the correlation between composite artificial language learning score and Elicited Imitation Task performance ranged from .35 to .52, with a mean of .39 and standard deviation of .025. While leaving out certain participants changed the significance value for these correlations, the results were still significant and there was no evidence that a small number of participants drove the correlations found.
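The leave-one-out check can likewise be sketched in a few lines of R, with the same placeholder names as above:

# Drop each of the 44 participants in turn and recompute the IQ-partialled
# correlation between composite artificial language learning and final grade.
loo_r <- sapply(seq_len(nrow(dat)), function(i) {
  d <- dat[-i, ]
  partial_cor(d$final_grade, d$all_composite, d$iq)["r"]
})
c(min = min(loo_r), max = max(loo_r), mean = mean(loo_r), sd = sd(loo_r))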

Considering the third question, on the differing relationships between the different measures of artificial language learning and classroom learning, there were a number of positive correlations to consider for each of the individual artificial language learning measures.

After controlling for IQ, Complex artificial language learning was found to be correlated with exam grade (r(41) = .42, p = .005), overall class score (r(41) = .32, p = .03), teacher rating of comprehension ability (r(41) = .34, p = .025), production on the Spanish test of the subjunctive (r(41) = .30, p = .048), and the Elicited Imitation Task (r(41) = .49, p < .001).

Simple artificial language learning performance was correlated with exam grade (r(41) = .32, p = .041) and teacher ratings for reading and writing (r(41) = .56, p < .001, and r(41) = .58, p < .001, respectively).

Recall of trained items was related to homework grade (r(41) = .32, p = .04), quiz grade (r(41) = .33, p = .035) and teacher ratings of reading, speaking and comprehension ability (r(41) = .34, p = .030, r(41) = .40, p = .001, and r(41) = .52, p < .001, respectively).

This different set of relationships for complex and simple artificial language learning (e.g., both show a relationship with Exam grade but only complex artificial language learning shows a relationship with Final Grade, see Figure 4) suggests that second language learning is not a unitary process; it involves a number of different skills and abilities including understanding, speaking, written communication and explicit knowledge of the language (i.e., what is tested on exams).

Figure 4
Correlation between the three artificial language learning measures (recall of training items, generalization for simple items and generalization of complex items) and (A) Final Grade and (B) Total Exam Grade.

Bivariate correlations do not take collinearity into consideration, and our data had a large number of collinearities, particularly since some measures are embedded in others by design (e.g., aggregate and component artificial language learning scores). Furthermore, there was variability in the linguistic background of the participants: Some were bilingual and they all had different amounts of prior exposure to Spanish. Therefore, we used a step-wise regression to look for unique variance explained. We also included number of years of Spanish and whether the participant was bilingual as co-variates. This also allowed us to address the question of how well artificial language learning tests predict second language learning and the reverse question of what aspects of second language learning are tapped into when conducting an artificial language learning test.
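One way to implement a step-wise regression with both forward selection and backward elimination is base R's step(), which selects terms by AIC; the study does not state its exact selection criterion, so the sketch below is only an illustration of the setup, again with placeholder column names rather than the variables reported in Tables 4–6.

# Start from a full model containing the Spanish measures plus IQ, years of
# Spanish and bilingualism, then let step() add and drop terms.
full_model <- lm(all_composite ~ exam + quiz + homework + participation +
                   eit + subjunctive_interp + subjunctive_prod +
                   iq + years_spanish + bilingual, data = dat)
final_model <- step(full_model, direction = "both", trace = FALSE)
summary(final_model)   # retained predictors and R-squared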

We conducted three step-wise multiple regressions with different dependent variables that incorporated both forward selection and backward elimination. The first regression had composite artificial language learning score as the dependent variable, and the initial model included all of the measures of Spanish ability plus IQ, number of years of exposure to Spanish and whether the participant was bilingual as independent measures. After the regression, the final model included Quiz score, Exam score and IQ (Table 4) and accounted for a significant amount of variance in artificial language learning performance (R2 = .41, p = .0001). Crucially, none of the language experience measures – years of Spanish and bilingualism – factored into performance, possibly due to the narrow standard deviation of years of exposure to Spanish and the low number of bilinguals.

Table 4
Step-wise multiple regressions showing the unique variance in composite artificial language learning accounted for by IQ, classroom quiz and exam scores.

The second regression included Final Grade as the dependent variable, with the different artificial language learning scores, plus IQ, bilingualism and previous Spanish exposure as the independent measures in the initial model. After the step-wise regression, the final model for this second regression (Table 5) accounted for a significant amount of variance in classroom performance (R2 = .23, p= .01). As shown, IQ does play a role in Final Grade, as expected, but artificial language learning explained variance beyond IQ. In our study, the Complex and Simple AGL scores were included in the model as explaining this variance, while the recall score did not explain any additional variance in performance. As above, years of experience with Spanish and bilingualism did not account for any additional variance.

Table 5
Step-wise multiple regressions showing the unique variance in composite Final Grade accounted for by IQ, Complex artificial language learning and Simple artificial language learning.

Finally, the third step-wise multiple regression included Elicited Imitation Task score as the dependent variable, also with the different artificial language learning scores, plus IQ, bilingualism and previous Spanish exposure as the independent measures in the initial model. The final model for this third step-wise multiple regression (Table 6) accounted for significant variance in Spanish ability (R2 = .24, p < .001). The final model included only Complex AGL as predicting performance on the Elicited Imitation Task, suggesting that this Complex AGL measure is a useful innovation over previous artificial language learning experiments. The fact that IQ and the other artificial language learning measures did not explain additional variance in Elicited Imitation Task performance suggests that this language-learning ability (as contrasted with classroom ability) is distinct from other measures of intelligence such as IQ and recall ability.

Table 6
Step-wise multiple regressions showing the unique variance in Elicited Imitation Task accounted for by Complex AGL.

Although bilingualism and previous Spanish exposure accounted for no additional variance in any of the multiple regression analyses, we compared performance between bilinguals and monolinguals and correlated performance to years of Spanish exposure to further ensure that these are not factors. There was no evidence of a difference between bilinguals and monolinguals for overall artificial language learning (unpaired t-test t(42) = .29, p = .78), classroom performance (t(42) = .62, p =.53) or Elicited Imitation Task (t(42) = 1.3, p = .20) and there was also no evidence for a relationship between previous Spanish exposure and overall artificial language learning (r(41)=.11, p = .47), overall classroom performance (r(41) = -.08, p = .60) or Elicited Imitation Task (r(41) = -.15, p = .31).

Thus, these results show a strong relationship between performance on an artificial language learning task and L2 learning. Artificial language learning performance correlated with a broad range of L2 measures, including classroom performance, teacher evaluation of language ability, and more objective measures of language ability. The Complex artificial language learning task showed the strongest relationship with more objective measures like the Elicited Imitation Task. This is in accordance with the idea that artificial language learning paradigms are simplified versions of language learning, and thus the most complex artificial language learning will most closely resemble L2 learning, though correlations were found for Simple and Recall aspects of artificial language learning, as well. Conversely, measures of overall classroom performance, which are based on a number of non-language learning related skills, correlated most closely with Composite measures of artificial language learning. Crucially, these relationships still hold when controlling for general intelligence, or IQ. Finally, we considered which different aspects of L2 learning are captured by artificial language learning overall. The results of a multiple regression with artificial language learning as the dependent variable suggested that artificial language learning taps primarily into IQ and classroom performance on quizzes and exams.

DISCUSSION

In the present study, we examined the relationship between artificial language learning and natural language learning, how it may differ for different measures of artificial language learning, and how the relationship may be mediated by IQ. Our primary finding—a positive correlation between performance on an artificial language learning task and second language learning in an ecologically valid environment—provides a key link between studies that use artificial language learning experiments and our understanding of second language learning in the classroom.

By virtue of using an artificial language learning task with several measures and meaning, we were also able to show that the more complex grammar tracked most closely with classroom performance and Spanish ability. This suggests that artificial language learning studies that incorporate a semantic component and involve more complicated grammatical systems may closely resemble second language learning. On the other hand, the composite measure of artificial language learning, which included recall and simple and complex grammatical generalization, is most closely related to overall classroom performance, which includes study skills, motivation, and so on. Because different aspects of artificial language learning were related to different aspects of second language learning, we may conclude that not all artificial language learning paradigms would be expected to approximate language learning.

Further research can explore the generalizability of these results to other artificial language learning paradigms. A more comprehensive study would be longitudinal and follow students over the course of a number of semesters, to observe changes in proficiency, which is more reflective of learning, and would include a larger sample size, as is important in individual differences studies (e.g., Desmond & Glover, 2002).

Juxtaposing the differences between the present artificial language learning paradigm and other studies, which (1) found no relationship between artificial and second language learning (Robinson, 2005b), (2) found a relationship mediated by other factors (Brooks & Kempe, 2013), or (3) found an indirect relationship (Evans et al., 2009; Misyak et al., 2010a; Misyak, Christiansen, & Tomblin, 2010b) suggests that the specific methods used in an artificial language learning paradigm matter in terms of engaging natural language learning processes. The current paradigm differs from previous studies by having a semantic component, by being multimodal with auditory and visual-picture referents, and by focusing on morphophonology. Future research manipulating modality, semantics and the parts of artificial language acquired can provide further clarity on what is necessary to best approximate natural language learning.

Finally, the results address our question of whether a relationship between artificial language learning and classroom learning still holds when factoring out general intelligence. The correlations are still significant after controlling for IQ, suggesting that the ability being tapped into by artificial language learning and L2 learning is distinct from general intelligence.

Further characterizing the nature of this language learning ability remains an interesting challenge. The results of the present study could mean that there is a distinct language-learning skill underpinning second language learning ability and that artificial language learning studies are a useful method of exploring and evaluating that ability. This is generally the assumption made by authors using artificial language learning studies to explore language function, including those showing an overlap between neural mechanisms associated with language processing and neural mechanisms associated with artificial language learning (e.g., Friederici et al., 2002).

Alternatively, the results could be interpreted to mean that there is some third skill or capability that is crucial for both artificial language learning and second language learning distinct from IQ. This skill may be related to perceptual learning in the auditory system (Maye, Werker, & Gerken, 2002), general pattern matching, or different memory subsystems.

Indeed, auditory working memory has been argued to be involved in both first and second language learning (Baddeley, 1992; Baddeley, Gathercole, & Papagno, 1998; Ellis & Sinclair, 1996) as well as in artificial language learning success (Amato & MacDonald, 2010; Misyak & Christiansen, 2012).

The procedural and declarative memory systems have also been suggested to play a role in both artificial and second language learning (Conway, Bauernschmidt, Huang, & Pisoni, 2010; Ettlinger et al., 2013; Morgan-Short et al., 2014; Ullman, 2004, 2005). In particular, previous research has suggested that L2 learning is supported by procedural memory (Ettlinger, 2008; Morgan-Short et al., 2014); that procedural memory is an important component of artificial language learning (Reber, 1967); and that procedural memory is distinct from general intelligence (Cohen & Squire, 1980). Thus, procedural memory may be the common substrate for artificial language learning and L2 learning that is distinct from IQ. However, the fact that the inclusion of semantics may be an important part of a predictive artificial language learning paradigm suggests that there may be more than procedural learning involved.

Implicit statistical learning may also play an important role (Conway et al., 2010; Misyak et al., 2010b) as it has also been shown to be distinct from IQ (S. B. Kaufman et al., 2010). This is further supported by evidence showing a role for dopamine in second language learning (Wong, Morgan-Short, Ettlinger, & Zheng, 2012) and in more general implicit learning processes (Jocham et al., 2009).

Ultimately, there may be some unique learning mechanism (Hauser et al., 2002) or unique combination of general mechanisms (Pinker & Jackendoff, 2009) that is specific to acquiring linguistic systems. The present study provides no evidence to distinguish these possibilities, but understanding the interaction between the general cognitive capabilities underlying artificial language learning and second language learning will provide insight into whether human language learning is a unique ability (Hauser et al., 2002) or an ability primarily shaped by domain-general cognitive function (Elman, Bates, Johnson, & Karmiloff-Smith, 1996).

CONCLUSION

The results of the current study provide evidence for a relationship between artificial language learning and second language learning. This suggests that previous research using artificial language learning experiments may provide insight into naturalistic language learning, particularly when the experiments include a semantic component and grammatical complexity. However, the full theoretical implications of this finding remain unclear given that the nature of this relationship is still unknown. Future research and larger, longitudinal studies can provide more insight into the specific cognitive components involved in artificial and natural language learning. These future studies can then address the question of whether artificial language learning experiments provide insight into language-specific learning abilities or whether performance is more a function of motivation, different memory subsystems, perceptual abilities, some other cognitive ability or, as is likely, some combination of these abilities.

Table 7
Correlations across artificial language learning measures and classroom performance, correcting for IQ, with participants with below-chance performance on complex items (less than 33% correct) removed. Abbreviations as in Table 2.

Supplementary Material

Supp AppendixS1-S3

Acknowledgments

Statement of Support:

This work was supported by National Science Foundation Grant BCS-1125144, the Liu CheWoo Institute of Innovative Medicine at The Chinese University of Hong Kong, the US National Institutes of Health grants R01DC008333 and R01DC013315, the Research Grants Council of Hong Kong grants 477513 and 14117514, the Health and Medical Research Fund of Hong Kong grant 01120616, and the Global Parent Child Resource Centre Limited to PCMW.

REFERENCES

  • Amato MS, MacDonald MC. Sentence processing in an artificial language: Learning and using combinatorial constraints. Cognition. 2010;116(1):143–148. doi: 10.1016/j.cognition.2010.04.001.
  • Aslin RN, Newport EL. What statistical learning can and can’t tell us about language acquisition. In: Colombo PMJ, Freund L, editors. Infant Pathways to Language: Methods, Models, and Research Directions. Lawrence Erlbaum Associates; Mahwah, NJ: 2008.
  • Baddeley AD. Working memory. Science. 1992;255(5044):556–559.
  • Baddeley AD, Gathercole S, Papagno C. The phonological loop as a language learning device. Psychological Review. 1998;105(1):158–173.
  • Berko J. The Child's Learning of English Morphology. Word: Journal of the International Linguistic Association. 1958;14(2-3):150–177.
  • Boersma P, Weenink D. Praat: doing phonetics by computer (Version 4.3.19). 2005. Retrieved from http://www.praat.org.
  • Braine MD. On learning the grammatical order of words. Psychological Review. 1963;70:323–348.
  • Brooks PJ, Kempe V. Individual differences in adult foreign language learning: The mediating effect of metalinguistic awareness. Memory and Cognition. 2013;41:281–296.
  • Chomsky N. Syntactic Structures. Mouton: 1957.
  • Chomsky N. Aspects of the Theory of Syntax. MIT Press; Cambridge: 1965.
  • Christiansen MH, Kelly ML, Shillcock RC, Greenfield K. Impaired artificial grammar learning in agrammatism. Cognition. 2010;116(3):382–393. doi: 10.1016/j.cognition.2010.05.015.
  • Cohen NJ, Squire LR. Preserved learning and retention of pattern-analyzing skill in amnesia: dissociation of knowing how and knowing that. Science. 1980;210(4466):207–210.
  • Conway CM, Bauernschmidt A, Huang SS, Pisoni DB. Implicit statistical learning in language processing: word predictability is the key. Cognition. 2010;114(3):356–371. doi: 10.1016/j.cognition.2009.10.009.
  • Culbertson J, Smolensky P, Legendre G. Learning biases predict a word order universal. Cognition. 2012;122(3):306–329. doi: 10.1016/j.cognition.2011.10.017.
  • Desmond JE, Glover GH. Estimating sample size in functional MRI (fMRI) neuroimaging studies: statistical power analyses. J Neurosci Methods. 2002;118(2):115–128.
  • Dominey PF, Hoen M, Blanc JM, Lelekov-Boissard T. Neurological basis of language and sequential cognition: evidence from simulation, aphasia, and ERP studies. Brain Lang. 2003;86(2):207–225.
  • Ellis NC, Sinclair SG. Working Memory in the Acquisition of Vocabulary and Syntax: Putting Language in Good Order. Quarterly Journal of Experimental Psychology. 1996;49A(1):234–250.
  • Elman JL, Bates EA, Johnson MH, Karmiloff-Smith A. Rethinking innateness: A connectionist perspective on development. The MIT Press; Cambridge, MA: 1996.
  • Erlam R. Elicited imitation as a measure of L2 implicit knowledge: An empirical validation study. Applied Linguistics. 2006;27(3):464–491. doi: 10.1093/applin/aml001.
  • Esper EA. A Technique for the Experimental Investigation of Associative Interference in Artificial Language Material. Language Monographs. 1925;1:1–47.
  • Ettlinger M. Input-Driven Opacity. Ph.D. thesis. University of California; Berkeley: 2008.
  • Ettlinger M, Bradlow AR, Wong PCM. Variability in the Acquisition of Complex Morphophonology. Applied Psycholinguistics, FirstView. 2013:1–25.
  • Evans JL, Saffran JR, Robe-Torres K. Statistical Learning in Children With Specific Language Impairment. Journal of Speech Language and Hearing Research. 2009;52(2):321–335. doi: 10.1044/1092-4388(2009/07-0189).
  • Farley AP. Authentic processing instruction and the Spanish subjunctive. Hispania. 2001;84:289–299.
  • Ferman S, Olshtain E, Schechtman E, Karni A. The acquisition of a linguistic skill by adults: Procedural and declarative memory interact in the learning of an artificial morphological rule. Journal of Neurolinguistics. 2009;22(4):384–412. doi: 10.1016/j.jneuroling.2008.12.002.
  • Finley S, Badecker W. Artificial grammar learning and feature-based generalization. Journal of Memory and Language. 2009;61:423–437.
  • Finn AS, Hudson Kam CL. The curse of knowledge: first language knowledge impairs adult learners’ use of novel statistics for word segmentation. Cognition. 2008;108(2):477–499. doi: 10.1016/j.cognition.2008.04.002.
  • Fitch WT, Hauser MD. Computational constraints on syntactic processing in a nonhuman primate. Science. 2004;303(5656):377–380. doi: 10.1126/science.1089401.
  • Friederici AD, Bahlmann J, Heim S, Schubotz RI, Anwander A. The brain differentiates human and non-human grammars: functional localization and structural connectivity. Proc Natl Acad Sci U S A. 2006;103(7):2458–2463. doi: 10.1073/pnas.0509389103.
  • Friederici AD, Steinhauer K, Pfeifer E. Brain signatures of artificial language processing: Evidence challenging the critical period hypothesis. Proceedings of the National Academy of Sciences of the United States of America. 2002;99(1):529–534.
  • Galantucci B. An experimental study of the emergence of human communication systems. Cogn Sci. 2005;29(5):737–767. doi: 10.1207/s15516709cog0000_34.
  • Galantucci B, Garrod S. Experimental semiotics: a review. Front Hum Neurosci. 2011;5:11. doi: 10.3389/fnhum.2011.00011.
  • Genesee F. The role of intelligence in second language learning. Language Learning. 1976;26:267–280.
  • Gentner TQ, Fenn KM, Margoliash D, Nusbaum HC. Recursive syntactic pattern learning by songbirds. Nature. 2006;440(7088):1204–1207. doi: 10.1038/nature04675.
  • Gomez RL, Gerken L. Infant artificial language learning and language acquisition. Trends in Cognitive Science. 2000;4(5):178–186.
  • Goschke T, Friederici AD, Kotz SA, van Kampen A. Procedural learning in Broca’s aphasia: dissociation between the implicit acquisition of spatio-motor and phoneme sequences. J Cogn Neurosci. 2001;13(3):370–388.
  • Gray B, Ryan B. A language program for the nonlanguage child. Research Press; Champaign, IL: 1973.
  • Hauser MD, Chomsky N, Fitch WT. The faculty of language: what is it, who has it, and how did it evolve? Science. 2002;298(5598):1569–1579.
  • Hudson Kam CL, Newport E. Regularizing unpredictable variation: The roles of adult and child learner in language formation and change. Language Learning and Development. 2005;1:151–195.
  • Jocham G, Klein TA, Neumann J, von Cramon DY, Reuter M, Ullsperger M. Dopamine DRD2 polymorphism alters reversal learning and associated neural activity. Journal of Neuroscience. 2009;29(12):3695–3704. doi: 10.1523/JNEUROSCI.5195-08.2009.
  • Kaufman AS, Kaufman NL. Kaufman Brief Intelligence Test. 2nd ed. Pearson, Inc; Bloomington, MN: 2004.
  • Kaufman SB, Deyoung CG, Gray JR, Jimenez L, Brown J, Mackintosh N. Implicit learning as an ability. Cognition. 2010;116(3):321–340. doi: 10.1016/j.cognition.2010.05.011.
  • Kirby S, Cornish H, Smith K. Cumulative cultural evolution in the laboratory: An experimental approach to the origins of structure in human language. PNAS. 2008;105(31):10681–10686.
  • Knowlton BJ, Squire LR. Artificial grammar learning depends on implicit acquisition of both abstract and exemplar-specific information. J Exp Psychol Learn Mem Cogn. 1996;22(1):169–181.
  • Marian V, Blumenfeld HK, Kaushanskaya M. The Language Experience and Proficiency Questionnaire (LEAP-Q): assessing language profiles in bilinguals and multilinguals. J Speech Lang Hear Res. 2007;50(4):940–967. doi: 10.1044/1092-4388(2007/067).
  • Maye J, Werker J, Gerken L. Infant sensitivity to distributional information can affect phonetic discrimination. Cognition. 2002;82:B101–B111.
  • McNealy K. Cracking the Language Code: Neural Mechanisms Underlying Speech Parsing. Journal of Neuroscience. 2006;26(29):7629–7639. doi: 10.1523/jneurosci.5501-05.2006.
  • Michael EB, Gollan TH. Being and becoming bilingual: Individual differences and consequences for language production. In: Kroll JF, de Groot AMB, editors. Handbook of bilingualism: Psycholinguistic approaches. Oxford University Press; New York: 2005.
  • Mirkovic J, Forrest S, Gaskell MG. Semantic Regularities in Grammatical Categories: Learning Grammatical Gender in an Artificial Language. In: Carlson L, Holscher C, Shipley T, editors. Proceedings of the 33rd Annual Conference of the Cognitive Science Society; Austin, TX. Cognitive Science Society; 2011.
  • Misyak JB, Christiansen MH. Statistical learning and language: An individual differences study. Language Learning. 2012;62:302–331.
  • Misyak JB, Christiansen MH, Tomblin JB. On-line individual differences in statistical learning predict language processing. Frontiers in Psychology. 2010a;1(31).
  • Misyak JB, Christiansen MH, Tomblin JB. Sequential Expectations: The Role of Prediction-Based Learning in Language. Topics in Cognitive Science. 2010b;2:138–153.
  • Moreton E. Analytic bias and phonological typology. Phonology. 2008;25:83–127.
  • Morgan-Short K, Faretta-Stutenberg M, Brill-Schuetz K, Carpenter H, Wong PCM. Declarative and procedural memory as individual differences in second language acquisition. Bilingualism: Language and Cognition. 2014;17(1):56–72.
  • Morgan-Short K, Sanz C, Steinhauer K, Ullman MT. Second language acquisition of gender agreement in explicit and implicit training conditions: An event-related potential study. Language Learning. 2010;60(1):154–193.
  • Morgan-Short K, Steinhauer K, Sanz C, Ullman MT. Explicit and implicit second language training differentially affect the achievement of native-like brain activation patterns. Journal of Cognitive Neuroscience. 2012;24(4):933–947.
  • Mori K, Moeser SD. The Role of Syntax Markers and Semantic Referents in Learning an Artificial Language. Journal of Verbal Learning and Verbal Behavior. 1983;22(6):701–718.
  • Mueller JL, Hahne A, Fujii Y, Friederici AD. Native and nonnative speakers’ processing of a miniature version of Japanese as revealed by ERPs. J Cogn Neurosci. 2005;17(8):1229–1244. doi: 10.1162/0898929055002463.
  • Nagata H. Extraction of linguistically relevant pragmatic contrast through language learning. Journal of Psycholinguistic Research. 1987;16(1):43–61.
  • Naiman N. The use of elicited imitation in second language acquisition research. Working Papers on Bilingualism. 1974;2:1–37.
  • Newport EL, Aslin RN. Learning at a distance I. Statistical learning of non-adjacent dependencies. Cognitive Psychology. 2004;48:127–162.
  • Newport EL, Hauser MD, Spaepen G, Aslin RN. Learning at a distance II. Statistical learning of non-adjacent dependencies in a non-human primate. Cogn Psychol. 2004;49(2):85–117. doi: 10.1016/j.cogpsych.2003.12.002.
  • Opitz B, Friederici AD. Artificial language acquisition: Changes in brain activity during the course of learning. Journal of Cognitive Neuroscience. 2002:35–35.
  • Opitz B, Friederici AD. Interactions of the hippocampal system and the prefrontal cortex in learning language-like rules. NeuroImage. 2003;19(4):1730–1737.
  • Opitz B, Friederici AD. Brain correlates of language learning: the neuronal dissociation of rule-based versus similarity-based learning. Journal of Neuroscience. 2004;24(39):8436–8440. doi: 10.1523/JNEUROSCI.2220-04.2004.
  • Ortega L. Understanding Syntactic Complexity: The Measurement of Change in the Syntax of Instructed L2 Spanish Learners. Ph.D. dissertation. University of Hawaii; 2000.
  • Petersson KM, Folia V, Hagoort P. What artificial grammar learning reveals about the neurobiology of syntax. Brain Lang. 2010. doi: 10.1016/j.bandl.2010.08.003.
  • Pinker S, Jackendoff R. The reality of a universal language faculty. Behavioral and Brain Sciences. 2009;32(5):465–466. doi: 10.1017/s0140525x09990720.
  • Pothos EM, Kirk J. Investigating learning deficits associated with dyslexia. Dyslexia. 2004;10(1):61–76. doi: 10.1002/dys.266.
  • R Development Core Team. R: A language and environment for statistical computing. Vienna, Austria: 2010. Retrieved from http://www.R-project.org/
  • Rafferty AN, Griffiths TL, Ettlinger M. Greater learnability is not sufficient to produce cultural universals. Cognition. 2013;129(1):70–87. doi: 10.1016/j.cognition.2013.05.003.
  • Reali F, Griffiths TL. The evolution of frequency distributions: relating regularization to inductive biases through iterated learning. Cognition. 2009;111(3):317–328. doi: 10.1016/j.cognition.2009.02.012.
  • Reber AS. Implicit learning of artificial grammars. Journal of Verbal Learning and Verbal Behavior. 1967;6:855–863.
  • Robinson P. Aptitude and second language acquisition. Annual Review of Applied Linguistics. 2005a;25:46–73.
  • Robinson P. Cognitive abilities, chunk-strength and frequency effects during implicit Artificial Grammar, and incidental second language learning: Replications of Reber, Walkenfeld and Hernstadt (1991) and Knowlton and Squire (1996) and their relevance to SLA. Studies in Second Language Acquisition. 2005b;27:235–268. doi: 10.1017/S0272263105050126.
  • Robinson P. Implicit Artificial Grammar and incidental natural second language learning: How comparable are they? Language Learning. 2010;60(Supplement 2):245–263.
  • Robinson P, editor. Individual differences and instructed language learning. Benjamins; Amsterdam: 2002.
  • Saffran JR, Aslin RN, Newport EL. Statistical learning by 8-month-old infants. Science. 1996;274(5294):1926–1928.
  • Smith K, Wonnacott E. Eliminating unpredictable variation through iterated learning. Cognition. 2010;116(3):444–449. doi: 10.1016/j.cognition.2010.06.004.
  • Tillmann B, Bharucha JJ, Bigand E. Implicit learning of tonality: a self-organizing approach. Psychological Review. 2000;107(4):885–913.
  • Ullman MT. Contributions of memory circuits to language: the declarative/procedural model. Cognition. 2004;92(1-2):231–270.
  • Ullman MT. A cognitive neuroscience perspective on second language acquisition: The declarative/procedural model. In: Sanz C, editor. Mind and Context in Adult Second Language Acquisition: Methods, Theory, and Practice. Georgetown University Press; Washington, DC: 2005. pp. 141–178.
  • van der Hulst H, van de Weijer J. Vowel harmony. In: Goldsmith JA, editor. The Handbook of Phonological Theory. Blackwell: 1995. pp. 495–534.
  • VanPatten B, Leeser MJ, Keating GD, Roman-Mendoza E. Sol y viento: Beginning Spanish. McGraw-Hill: 2005.
  • Vinther T. Elicited Imitation: A Brief Overview. International Journal of Applied Linguistics (INJAL). 2002.
  • Williams JN. Learning without awareness. Studies in Second Language Acquisition. 2005;27(2):269–304. doi: 10.1017/S0272263105050138.
  • Williams JN. Working memory and SLA. In: Gass SM, Mackey A, editors. The Handbook of Second Language Acquisition. Routledge; New York: 2012. pp. 427–442.
  • Wilson C. Learning phonology with substantive bias: An experimental and computational study of velar palatalization. Cognitive Science. 2006;30:945–982.
  • Wong PC, Ettlinger M, Zheng J. Linguistic Grammar Learning and DRD2-TAQ-IA Polymorphism. PLoS ONE. 2013;8(5):e64983. doi: 10.1371/journal.pone.0064983.
  • Wong PC, Morgan-Short K, Ettlinger M, Zheng J. Linking neurogenetics and individual differences in language learning: The dopamine hypothesis. Cortex. 2012. doi: 10.1016/j.cortex.2012.03.017.
  • Wong PC, Warrier CM, Penhune VB, Roy AK, Sadehh A, Parrish TB, Zatorre RJ. Volume of left Heschl’s Gyrus and linguistic pitch learning. Cereb Cortex. 2008;18(4):828–836. doi: 10.1093/cercor/bhm115.