In prior work with adults, women were found to outperform men on a paired-associates word-learning task, but only when learning phonologically-familiar novel words. The goal of the present work was to examine whether similar gender differences in word learning would be observed in children. In addition to manipulating phonological familiarity, referent familiarity was also manipulated. Children between the ages of 5 and 7 learned phonologically-familiar or phonologically-unfamiliar novel words in association with pictures of familiar referents (animals) or unfamiliar referents (aliens). Retention was tested via a forced-choice recognition measure administered immediately after the learning phase. Analyses of retention data revealed stronger phonological and referent familiarity effects in girls than in boys. Moreover, girls outperformed boys only when learning phonologically-familiar novel words and when learning novel words in association with familiar referents. These findings are interpreted to suggest that females are more likely than males to recruit native-language phonological and semantic knowledge during novel word learning.
word learning; gender differences; phonology; semantics
The goal of this research was to examine whether phonological familiarity exerts different effects on novel word learning for familiar vs. unfamiliar referents, and whether successful word-learning is associated with increased second-language experience.
Eighty-one adult native English speakers with various levels of Spanish knowledge learned phonologically-familiar novel words (constructed using English sounds) or phonologically-unfamiliar novel words (constructed using non-English and non-Spanish sounds) in association with either familiar or unfamiliar referents. Retention was tested via a forced-choice recognition task. A median-split procedure identified high-ability and low-ability word-learners in each condition, and the two groups were compared on measures of second-language experience.
Findings suggest that the ability to accurately match newly-learned novel names to their appropriate referents is facilitated by phonological familiarity for familiar referents but not for unfamiliar referents. Moreover, more extensive second-language learning experience characterized superior learners primarily in one word-learning condition: where phonologically-unfamiliar novel words were paired with familiar referents.
Together, these findings indicate that phonological familiarity facilitates novel word learning only for familiar referents, and that experience with learning a second language may have a specific impact on novel vocabulary learning in adults.
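The median-split procedure described above can be sketched in a few lines; the participants and retention scores here are hypothetical placeholders, used only to illustrate how learners are divided into high- and low-ability groups.

```python
import statistics

def median_split(scores):
    """Split participants into high- and low-ability groups around the median.

    Participants scoring above the median score are classified 'high';
    those at or below the median are classified 'low'.
    """
    med = statistics.median(scores.values())
    high = {p for p, s in scores.items() if s > med}
    low = {p for p, s in scores.items() if s <= med}
    return high, low

# Hypothetical retention scores (proportion correct on the recognition task)
scores = {"p1": 0.95, "p2": 0.60, "p3": 0.80, "p4": 0.55}
high, low = median_split(scores)
```

Note that with an even number of participants the median falls between two scores, so no participant sits exactly on the boundary; with an odd number, the tie-handling rule (here, median scores go to the low group) must be stated explicitly.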
Mastering multiple languages is an increasingly important ability in the modern world; furthermore, multilingualism may affect human learning abilities. Here, we test how the brain’s capacity to rapidly form new representations for spoken words is affected by prior individual experience in non-native language acquisition. Formation of new word memory traces is reflected in a neurophysiological response increase during a short exposure to a novel lexicon. Therefore, we recorded changes in electrophysiological responses to phonologically native and non-native novel word-forms during a perceptual learning session, in which novel stimuli were repetitively presented to healthy adults in either ignore or attend conditions. We found that a larger number of previously acquired languages and an earlier average age of acquisition (AoA) predicted a greater response increase to novel non-native word-forms. This suggests that early and extensive language experience is associated with greater neural flexibility for acquiring novel words with unfamiliar phonology. Conversely, later AoA was associated with a stronger response increase for phonologically native novel word-forms, indicating better tuning of neural linguistic circuits to native phonology. The results suggest that individual language experience has a strong effect on the neural mechanisms of word learning, and that it interacts with the phonological familiarity of the novel lexicon.
Verbal working memory is an essential component of many language functions, including sentence comprehension and word learning. As such, working memory has emerged as a domain of intense research interest both in aphasiology and in the broader field of cognitive neuroscience. The integrity of verbal working memory encoding relies on a fluid interaction between semantic and phonological processes. That is, we encode verbal detail using many cues related to both the sound and meaning of words. Lesion models can provide an effective means of parsing the contributions of phonological or semantic impairment to recall performance.
Methods and Procedures
We employed the lesion model approach here by contrasting the nature of lexicality errors incurred during recall of word and nonword sequences by 3 individuals with progressive nonfluent aphasia (a predominantly phonological impairment) compared to that of 2 individuals with semantic dementia (a predominantly semantic impairment). We focused on psycholinguistic attributes of correctly recalled stimuli relative to those that elicited a lexicality error (i.e., nonword → word or word → nonword).
Outcomes and Results
Patients with semantic dementia showed greater sensitivity to phonological attributes (e.g., phoneme length, wordlikeness) of the target items relative to semantic attributes (e.g., familiarity). Patients with PNFA showed the opposite pattern, marked by sensitivity to word frequency, age of acquisition, familiarity, and imageability.
We interpret these results in favor of a processing strategy such that in the context of a focal phonological impairment patients revert to an over-reliance on preserved semantic processing abilities. In contrast, a focal semantic impairment forces both reliance upon and hypersensitivity to phonological attributes of target words. We relate this interpretation to previous hypotheses about the nature of verbal short-term memory in progressive aphasia.
Working Memory; Recall; Semantic Dementia; Aphasia; Progressive Nonfluent Aphasia
In alphabetic languages, emerging evidence from behavioral and neuroimaging studies shows the rapid and automatic activation of phonological information in visual word recognition. Whereas most alphabetic languages have a natural correspondence between visual and phonological forms, in logographic Chinese the mapping between the two is rather arbitrary and depends on learning and experience. Whether the brain rapidly and automatically extracts phonological information from Chinese characters has not yet been thoroughly addressed.
We continuously presented Chinese characters differing in orthography and meaning to adult native Mandarin Chinese speakers to construct a constantly varying visual stream. In the stream, most stimuli were homophonous Chinese characters: the phonological features embedded in these visual characters were the same, including consonants, vowels and the lexical tone. Occasionally, this phonological regularity was violated at random by characters whose phonological features differed in the lexical tone.
We showed that the violation of the lexical tone phonology evoked an early, robust visual response, as revealed by whole-head electrical recordings of the visual mismatch negativity (vMMN), indicating the rapid extraction of phonological information embedded in Chinese characters. Source analysis revealed that the vMMN reflected neural activation of the visual cortex, suggesting that the visual sensory memory is sensitive to phonological information embedded in visual words at an early processing stage.
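The oddball-style design described above can be sketched as a stimulus-stream generator: frequent tone-matched "standard" characters with occasional tone-deviant characters inserted at random. The stimulus labels and the deviant probability here are hypothetical placeholders, not the study's actual materials.

```python
import random

rng = random.Random(1)

# Hypothetical stimulus sets: standards share consonants, vowels, and tone;
# deviants share the segmental features but differ in lexical tone.
standards = ["char_T1_a", "char_T1_b", "char_T1_c"]
deviants = ["char_T4_a", "char_T4_b"]

def oddball_stream(n_trials, p_deviant=0.1):
    """Build a visual oddball sequence: mostly tone-matched homophonous
    characters, with tone-deviant characters inserted at random positions."""
    stream = []
    for _ in range(n_trials):
        if rng.random() < p_deviant:
            stream.append(("deviant", rng.choice(deviants)))
        else:
            stream.append(("standard", rng.choice(standards)))
    return stream

stream = oddball_stream(200)
```

In a real vMMN experiment the deviant probability is typically low (on the order of 10–20%) so that the standards establish the regularity that the deviants violate.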
Current theories and models of the structural organization of verbal short-term memory are primarily based on evidence obtained from manipulations of features inherent in the short-term traces of the presented stimuli, such as phonological similarity. In the present study, we investigated whether properties of the stimuli that are not inherent in the short-term traces of spoken words would affect performance in an immediate memory span task. We studied the lexical neighbourhood properties of the stimulus items, which are based on the structure and organization of words in the mental lexicon. The experiments manipulated lexical competition by varying the phonological neighbourhood structure (i.e., neighbourhood density and neighbourhood frequency) of the words on a test list while controlling for word frequency and intra-set phonological similarity (family size). Immediate memory span for spoken words was measured under repeated and nonrepeated sampling procedures. The results demonstrated that lexical competition only emerged when a nonrepeated sampling procedure was used and the participants had to access new words from their lexicons. These findings were not dependent on individual differences in short-term memory capacity. Additional results showed that the lexical competition effects did not interact with proactive interference. Analyses of error patterns indicated that item-type errors, but not positional errors, were influenced by the lexical attributes of the stimulus items. These results complement and extend previous findings that have argued for separate contributions of long-term knowledge and short-term memory rehearsal processes in immediate verbal serial recall tasks.
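The contrast between repeated and nonrepeated sampling procedures can be illustrated with a short sketch. Under repeated sampling, lists are drawn again and again from the same small pool, so the same words recur across trials; under nonrepeated sampling, each word appears at most once in the session, forcing access to new lexical items on every list. The word pool and list parameters below are hypothetical.

```python
import random

rng = random.Random(0)
lexicon = ["cat", "dog", "pit", "sun", "map", "log", "tin", "bed", "cup", "fan"]

def make_lists(pool, list_len, n_lists, repeated):
    """Build memory-span test lists under one of two sampling procedures.

    repeated=True:  every list is drawn independently from the full pool,
                    so words recur across lists.
    repeated=False: one draw without replacement covers the whole session,
                    so no word is ever reused.
    """
    if repeated:
        return [rng.sample(pool, list_len) for _ in range(n_lists)]
    order = rng.sample(pool, list_len * n_lists)  # session-wide draw, no reuse
    return [order[i * list_len:(i + 1) * list_len] for i in range(n_lists)]

rep = make_lists(lexicon, 3, 3, repeated=True)
nonrep = make_lists(lexicon, 3, 3, repeated=False)
```

Note that the nonrepeated procedure caps the number of lists at `len(pool) // list_len`, which is why such designs need a large stimulus set.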
To construct their first lexicon, infants must determine the relationship between native phonological variation and the meanings of words. This process is arguably more complex for bilingual learners who are often confronted with phonological conflict: phonological variation that is lexically relevant in one language may be lexically irrelevant in the other. In a series of four experiments, the present study investigated English–Mandarin bilingual infants’ abilities to negotiate phonological conflict introduced by learning both a tone and a non-tone language. In a novel word learning task, bilingual children were tested on their sensitivity to tone variation in English and Mandarin contexts. Their abilities to interpret tone variation in a language-dependent manner were compared to those of monolingual Mandarin learning infants. Results demonstrated that at 12–13 months, bilingual infants demonstrated the ability to bind tone to word meanings in Mandarin, but to disregard tone variation when learning new words in English. In contrast, monolingual learners of Mandarin did not show evidence of integrating tones into word meanings in Mandarin at the same age even though they were learning a tone language. However, a tone discrimination paradigm confirmed that monolingual Mandarin learning infants were able to tell these tones apart at 12–13 months under a different set of conditions. Later, at 17–18 months, monolingual Mandarin learners were able to bind tone variation to word meanings when learning new words. Our findings are discussed in terms of cognitive adaptations associated with bilingualism that may ease the negotiation of phonological conflict and facilitate precocious uptake of certain properties of each language.
lexical tone; phoneme discrimination; infant speech perception; Mandarin Chinese; word learning
Performance on three verbal measures (story recall, paired-associate learning, category fluency) designed to assess the integration of long-term semantic and linguistic knowledge, phonological working memory and executive resources within the proposed ‘episodic buffer’ of working memory (Baddeley, 2007) was assessed in children with intellectual disabilities (ID). It was hypothesised that children with ID would show equivalent performance to typically developing children of the same mental age. This prediction was based on the hypothesis that, despite poorer phonological short-term memory than mental age matched peers, those with ID may benefit from more elaborate long-term memory representations, because of greater life experience. Children with ID were as able as mental age matched peers to remember stories, associate pairs of words together and generate appropriate items in a category fluency task. Performance did not, however, reach chronological age level on any of the tasks. The results suggest children with ID perform at mental age level on verbal ‘episodic buffer’ tasks, which require integration of information from different sources, supporting a ‘delayed’ rather than ‘different’ view of their development.
Episodic buffer; Children; Intellectual disabilities; Working memory
The aim of the present study was to dissociate the neural correlates of semantic and phonological processes during word reading and picture naming. Previous studies have addressed this issue by contrasting tasks involving semantic and phonological decisions. However, these tasks engage verbal short-term memory and executive functions that are not required for reading and naming. Here, 20 subjects were instructed to overtly name written words and pictures of objects while their neuronal responses were measured using functional magnetic resonance imaging (fMRI). Each trial consisted of a pair of successive stimuli that were either semantically related (e.g., “ROBIN-nest”), phonologically related (e.g., “BELL-belt”), unrelated (e.g., “KITE-lobster”), or semantically and phonologically identical (e.g., “FRIDGE-fridge”). In addition, a pair of stimuli could be presented in either the same modality (word-word or picture-picture) or a different modality (word-picture or picture-word). We report that semantically related pairs modulate neuronal responses in a left-lateralized network, including the pars orbitalis of the inferior frontal gyrus, the middle temporal gyrus, the angular gyrus, and the superior frontal gyrus. We propose that these areas are involved in stimulus-driven semantic processes. In contrast, phonologically related pairs modulate neuronal responses in bilateral insula. This region is therefore implicated in the discrimination of similar, competing phonological and articulatory codes. The above effects were detected with both words and pictures and did not differ between the two modalities even with a less conservative statistical threshold. In conclusion, this study dissociates the effects of semantic and phonological relatedness between successive items during reading and naming aloud. Hum Brain Mapp, 2007.
fMRI; language; phonology; semantics; reading; naming
Temporal and frontal activations have been implicated in learning of novel word forms, but their specific roles remain poorly understood. The present magnetoencephalography (MEG) study examines the roles of these areas in processing newly-established word form representations. The cortical effects related to acquiring new phonological word forms during incidental learning were localized. Participants listened to and repeated back new word form stimuli that adhered to native phonology (Finnish pseudowords) or were foreign (Korean words), with a subset of the stimuli recurring four times. Subsequently, a modified 1-back task and a recognition task addressed whether the activations modulated by learning were related to planning for overt articulation, while parametrically added noise probed reliance on developing memory representations during effortful perception. Learning resulted in decreased left superior temporal and increased bilateral frontal premotor activation for familiar compared to new items. The left temporal learning effect persisted in all tasks and was strongest when stimuli were embedded in intermediate noise. In the noisy conditions, native phonotactics evoked overall enhanced left temporal activation. In contrast, the frontal learning effects were present only in conditions requiring overt repetition and were more pronounced for the foreign language. The results indicate a functional dissociation between temporal and frontal activations in learning new phonological word forms: the left superior temporal responses reflect activation of newly-established word-form representations, also during degraded sensory input, whereas the frontal premotor effects are related to planning for articulation and are not preserved in noise.
Individuals learn to read by gradually recognizing repeated letter combinations. However, it is unclear how or when neural mechanisms associated with repetition of basic stimuli (i.e., strings of letters) shift to involvement of higher-order language networks. The present study investigated this question by repeatedly presenting unfamiliar letter strings in a one-back matching task during an hour-long period. Activation patterns indicated that only brain areas associated with visual processing were activated during the early period, but additional regions that are usually associated with semantic and phonological processing in inferior frontal gyrus were recruited after stimuli became more familiar. Changes in activation were also observed in bilateral superior temporal cortex, also suggestive of a shift toward a more language-based processing strategy. Connectivity analyses revealed two distinct networks that correspond to phonological and visual processing, which may reflect the indirect and direct routes of reading. The phonological route maintained a similar degree of connectivity throughout the experiment, whereas visual areas increased connectivity with language areas as stimuli became more familiar, suggesting early recruitment of the direct route. This study provides insight about plasticity of the brain as individuals become familiar with unfamiliar combinations of letters (i.e., words in a new language, new acronyms) and has implications for engaging these linguistic networks during development of language remediation therapies.
letter strings; fMRI; connectivity; reading; learning; plasticity
Prior research has put forth at least four possible contributors to the verbal short-term memory (VSTM) deficit in children with developmental reading disabilities (RD): poor phonological awareness which affects phonological coding into VSTM, a less effective phonological store, slow articulation rate, and fewer/poorer quality long-term memory (LTM) representations. This project is among the first to test the four suppositions in one study. Participants included 18 children with RD and 18 controls. VSTM was assessed using Baddeley’s model of the phonological loop. Findings suggest all four suppositions are correct, depending upon the type of material utilized. Children with RD performed comparably to controls in VSTM for common words but worse for less frequent words and nonwords. Furthermore, only articulation rate predicted VSTM for common words, whereas Verbal IQ and articulation rate predicted VSTM for less frequent words, and phonological awareness and articulation rate predicted VSTM for nonwords. Overall, findings suggest that the mechanism(s) used to code and store items by their meaning is intact in RD, and the deficit in VSTM for less frequent words may be a result of fewer/poorer quality LTM representations for these words. In contrast, phonological awareness and the phonological store are impaired, affecting VSTM for items that are coded phonetically. Slow articulation rate likely affects VSTM for most material when present. When assessing reading performance, VSTM predicted decoding skill but not word identification after controlling for Verbal IQ and phonological awareness. Thus, VSTM likely contributes to reading ability when words are novel and must be decoded.
learning disabilities; reading disabilities; dyslexia; phonological awareness; short-term memory; verbal learning; children; adolescents
In order to explore verbal–nonverbal integration, we investigated the influence of cognitive and linguistic ability on gaze behavior during spoken-language conversation between children with mild-to-moderate hearing impairment (HI) and normal-hearing (NH) peers. Ten HI–NH and 10 NH–NH dyads performed a referential communication task requiring description of faces. During task performance, eye movements and speech were tracked. Cox proportional hazards regression was used to model associations between performance on cognitive and linguistic tasks and the probability of gaze to the conversational partner’s face. Analyses compare the listeners in each dyad (HI: n = 10, mean age = 12;6 years, SD = 2;0, mean better-ear pure-tone average 33.0 dB HL, SD = 7.8; NH: n = 10, mean age = 13;7 years, SD = 1;11). Group differences in gaze behavior – with HI gazing more at the conversational partner than NH – remained significant despite adjustment for ability on receptive grammar, expressive vocabulary, and complex working memory. Adjustment for phonological short-term memory, as measured by non-word repetition, removed the group differences, revealing an interaction between group membership and non-word repetition ability. Stratified analysis showed a twofold increase in the probability of gaze-to-partner for HI participants with low phonological short-term memory capacity, and a decreased probability for HI participants with high capacity, as compared to NH peers. The results revealed differences in gaze behavior attributable to performance on a phonological short-term memory task: participants with HI and low phonological short-term memory capacity showed a doubled probability of gaze to the conversational partner, indicative of a visual bias. The results stress the need to look beyond the hearing impairment in diagnostics and intervention, and suggest that clinical assessment of children with HI should be supported by tasks tapping phonological processing.
child hearing impairment; gaze behavior; referential communication; eye tracking; non-word repetition; phonological short term memory; Cox regression
Developmental dyslexia is commonly thought to arise from specific phonological impairments. However, recent evidence is consistent with the possibility that phonological impairments arise as symptoms of an underlying dysfunction of procedural learning. The nature of the link between impaired procedural learning and phonological dysfunction is unresolved. Motivated by the observation that speech processing involves the acquisition of procedural category knowledge, the present study investigates the possibility that procedural learning impairment may affect phonological processing by interfering with the typical course of phonetic category learning. The present study tests this hypothesis while controlling for linguistic experience and possible speech-specific deficits by comparing auditory category learning across artificial, nonlinguistic sounds among dyslexic adults and matched controls in a specialized first-person shooter videogame that has been shown to engage procedural learning. Nonspeech auditory category learning was assessed online via within-game measures and also with a post-training task involving overt categorization of familiar and novel sound exemplars. Each measure reveals that dyslexic participants do not acquire procedural category knowledge as effectively as age- and cognitive-ability matched controls. This difference cannot be explained by differences in perceptual acuity for the sounds. Moreover, poor nonspeech category learning is associated with slower phonological processing. Whereas phonological processing impairments have been emphasized as the cause of dyslexia, the current results suggest that impaired auditory category learning, general in nature and not specific to speech signals, could contribute to phonological deficits in dyslexia with subsequent negative effects on language acquisition and reading. Implications for the neuro-cognitive mechanisms of developmental dyslexia are discussed.
Developmental dyslexia; category acquisition; procedural learning; speech; videogame training
Individuals with primary progressive aphasia (PPA) show selective breakdown in regions within the proposed dorsal (articulatory-phonological) and ventral (lexical-semantic) pathways involved in language processing. Phonological short-term memory impairment, which has been attributed to selective damage to dorsal pathway structures, is considered to be a distinctive feature of the logopenic variant of PPA. By contrast, phonological abilities are considered to be relatively spared in the semantic variant and are largely unexplored in the nonfluent/agrammatic variant. Comprehensive assessment of phonological ability in the three variants of PPA has not been undertaken. We investigated phonological processing skills in a group of participants with PPA as well as healthy controls, with the goal of identifying whether patterns of performance support the dorsal versus ventral functional-anatomical framework and to discern whether phonological ability differs amongst PPA subtypes. We also explored the neural bases of phonological performance using voxel-based morphometry (VBM). Phonological performance was impaired in patients with damage to dorsal pathway structures (nonfluent/agrammatic and logopenic variants), with logopenic participants demonstrating particular difficulty on tasks involving nonwords. Binary logistic regression revealed that select phonological tasks predicted diagnostic group membership in the less fluent variants of PPA with a high degree of accuracy, particularly in conjunction with a motor speech measure. Brain-behavior correlations indicated a significant association between the integrity of gray matter in frontal and temporoparietal regions of the left hemisphere and phonological skill. Findings confirm the critical role of dorsal stream structures in phonological processing and demonstrate unique patterns of impaired phonological processing in logopenic and nonfluent/agrammatic variants of PPA.
We used fMRI in 85 healthy participants to investigate whether different parts of the left supramarginal gyrus (SMG) are involved in processing phonological inputs and outputs. The experiment involved 2 tasks (speech production (SP) and one-back (OB) matching) on 8 different types of stimuli that systematically varied the demands on sensory processing (visual vs. auditory), sublexical phonological input (words and pseudowords vs. nonverbal stimuli), and semantic content (words and objects vs. pseudowords and meaningless baseline stimuli). In ventral SMG, we found an anterior subregion associated with articulatory sequencing (for SP > OB matching) and a posterior subregion associated with auditory short-term memory (for all auditory > visual stimuli and written words and pseudowords > objects). In dorsal SMG, a posterior subregion was most highly activated by words, indicating a role in the integration of sublexical and lexical cues. In anterior dorsal SMG, activation was higher for both pseudoword reading and object naming compared with word reading, which is more consistent with executive demands than phonological processing. The dissociation of these four “functionally-distinct” regions, all within left SMG, has implications for differentiating between different types of phonological processing, understanding the functional anatomy of language and predicting the effect of brain damage.
functional MRI; language; parietal lobe; phonological processing; speech production
Background & Aims
The present study examined how phonological and lexical knowledge influence memory in children with specific language impairment (SLI). Previous work showed recall advantages for typical adults and children due to word frequency and phonotactic pattern frequency and a recall disadvantage due to phonological similarity among words. While children with SLI have well-documented memory difficulties, it is not clear whether these language knowledge factors also influence recall in this population.
Methods & Procedures
16 children with SLI (mean age 10;2) and CAM controls recalled lists of words differing in phonological similarity, word frequency, and phonotactic pattern frequency. While previous studies used a small set of words appearing in multiple word lists, the current study used a larger set of words, without replacement, so that children could not gain practice with individual test items.
Outcomes & Results
All main effects were significant. Interactions revealed that children with SLI were affected by similarity, but less so than their peers, comparably affected by word frequency, and unaffected by phonotactic pattern frequency.
The phonological similarity results suggest that children with SLI use less efficient encoding, while the word frequency and phonotactic pattern frequency results were mixed. Children with SLI used coarse-grained language knowledge (word frequency) comparably to peers, but were less able to use fine-grained knowledge (phonotactic pattern frequency). Paired with the phonological similarity results, this suggests that children with SLI have difficulty establishing robust phonological knowledge for use in language tasks.
verbal memory; specific language impairment; phonological similarity; word frequency; phonotactic pattern frequency
Listeners use their knowledge of how language is structured to aid speech recognition in everyday communication. When it comes to children with congenital hearing loss severe enough to warrant cochlear implants (CIs), the question arises of whether these children can acquire the language knowledge needed to aid speech recognition, in spite of only having spectrally degraded signals available to them. That question was addressed in the current study. Specifically, there were three goals: (1) to compare the language structures used by children with CIs to those of children with normal hearing (NH); (2) to assess the amount of variance in the language measures explained by phonological awareness and lexical knowledge; and (3) to assess the amount of variance in the language measures explained by factors related to the hearing loss itself and subsequent treatment.
Language samples were obtained and transcribed for 40 children who had just completed kindergarten: 19 with NH and 21 with CIs. Five measures were derived from Systematic Analysis of Language Transcripts (SALT): (1) mean length of utterance in morphemes, (2) number of conjunctions, excluding and, (3) number of personal pronouns, (4) number of bound morphemes, and (5) number of different words. Measures were also collected on phonological awareness and lexical knowledge. Statistics examined group differences, as well as the amount of variance in the language measures explained by phonological awareness, lexical knowledge, and factors related to hearing loss and its treatment for children with CIs.
Mean scores of children with CIs were roughly one standard deviation below those of children with NH on all language measures, including lexical knowledge, matching outcomes of other studies. Mean scores of children with CIs were closer to two standard deviations below those of children with NH on two out of three measures of phonological awareness (specifically those related to phonemic structure). Lexical knowledge explained significant amounts of variance on three language measures, but only one measure of phonological awareness (sensitivity to word-final phonemic structure) explained any significant amount of unique variance beyond that, and on only one language measure (number of bound morphemes). Age at first implant, but no other factors related to hearing loss or its treatment, explained significant amounts of variance on the language measures, as well.
In spite of early intervention and advances in implant technology, children with CIs are still delayed in learning language, but grammatical knowledge is less affected than phonological awareness. Because there was little contribution to language development measured for phonological awareness independent of lexical knowledge, it was concluded that children with CIs could benefit from intervention focused specifically on helping them learn language structures, in spite of the likely phonological deficits they experience as a consequence of having degraded inputs.
language; children; cochlear implants
The role of written input in second language (L2) phonological and lexical acquisition has received increased attention in recent years. Here we investigated two factors that may moderate the influence of orthography on L2 word-form learning: (i) whether the writing system is shared by the native language and the L2, and (ii) if the writing system is shared, whether the relevant grapheme-phoneme correspondences are also shared. The acquisition of Mandarin via the Pinyin and Zhuyin writing systems provides an ecologically valid opportunity to explore these factors. We first asked whether there is a difference in native English speakers' ability to learn Pinyin and Zhuyin grapheme-phoneme correspondences. In Experiment 1, native English speakers assigned to either Pinyin or Zhuyin groups were exposed to Mandarin words belonging to one of two conditions: in the “congruent” condition, the Pinyin forms are possible English spellings for the auditory words (e.g., <nai> for [nai]); in the “incongruent” condition, the Pinyin forms involve a familiar grapheme representing a novel phoneme (e.g., <xiu> for [ɕiou]). At test, participants were asked to indicate whether auditory and written forms matched; in the crucial trials, the written forms from training (e.g., <xiu>) were paired with possible English pronunciations of the Pinyin written forms (e.g., [ziou]). Experiment 2 was identical to Experiment 1 except that participants additionally saw pictures depicting word meanings during the exposure phase, and at test were asked to match auditory forms with the pictures. In both experiments the Zhuyin group outperformed the Pinyin group, owing to the Pinyin group's difficulty with “incongruent” items.
A third experiment confirmed that the groups did not differ in their ability to perceptually distinguish the relevant Mandarin consonants (e.g., [ɕ]) from the foils (e.g., [z]), suggesting that the findings of Experiments 1 and 2 can be attributed to the effects of orthographic input. We thus conclude that despite the familiarity of Pinyin graphemes to native English speakers, the need to suppress native language grapheme-phoneme correspondences in favor of new ones can lead to less target-like knowledge of newly learned words' forms than does learning Zhuyin's entirely novel graphemes.
second language acquisition (SLA); Mandarin; Pinyin; Zhuyin; orthographic input; second language phonology; second language word learning
While reading is challenging for many deaf individuals, some become proficient readers. Little is known about the component processes that support reading comprehension in these individuals. Speech-based phonological knowledge is one of the strongest predictors of reading comprehension in hearing individuals, yet its role in deaf readers is controversial. This could reflect the highly varied language backgrounds among deaf readers, as well as the difficulty of disentangling the relative contributions of phonological versus orthographic knowledge of the spoken language (in this case, English) in this population. Here we assessed the impact of language experience on reading comprehension in deaf readers by recruiting oral deaf individuals, who use spoken English as their primary mode of communication, and deaf native signers of American Sign Language. First, to address the contribution of spoken English phonological knowledge in deaf readers, we present novel tasks that dissociate phonological from orthographic knowledge. Second, we evaluated how this knowledge, together with memory measures that rely differentially on phonological (serial recall) and semantic (free recall) processing, predicts reading comprehension. The best predictor of reading comprehension differed as a function of language experience: free recall was a better predictor in deaf native signers than in oral deaf individuals, whereas measures of English phonological knowledge, independent of orthographic knowledge, best predicted reading comprehension in oral deaf individuals. These results suggest that successful reading strategies differ across deaf readers as a function of their language experience, and they highlight a possible alternative route to literacy in deaf native signers.
1. Deaf individuals vary in their orthographic and phonological knowledge of English as a function of their language experience.
2. Reading comprehension was best predicted by different factors in oral deaf and deaf native signers.
3. Free recall memory (primacy effect) better predicted reading comprehension in deaf native signers as compared to oral deaf or hearing individuals.
4. Language experience should be taken into account when considering cognitive processes that mediate reading in deaf individuals.
deafness; reading; sign language; orally-trained; short-term memory; phonological awareness; semantic-based memory
Children with developmental dyslexia are characterized by phonological difficulties across languages. Classically, this ‘phonological deficit’ in dyslexia has been investigated with tasks using single-syllable words. Recently, however, several studies have demonstrated difficulties in prosodic awareness in dyslexia, yet potential prosodic effects in short-term memory have not been investigated. Here we create a new instrument based on three-syllable words that vary in stress patterns to investigate whether prosodic similarity (the same prosodic pattern of stressed and unstressed syllables) exerts systematic effects on short-term memory. We studied participants with dyslexia alongside age-matched and younger reading-level-matched typically developing controls. All participants, including those with dyslexia, showed prosodic similarity effects in short-term memory: words that differed in prosodic structure were retained better than prosodically similar words, although participants with dyslexia recalled fewer words accurately overall than age-matched controls. Individual differences in prosodic memory were predicted by earlier vocabulary abilities, earlier sensitivity to syllable stress, and earlier phonological awareness. To our knowledge, this is the first demonstration of prosodic similarity effects in short-term memory. The implications of a prosodic similarity effect for theories of lexical representation and of dyslexia are discussed. © 2016 The Authors. Dyslexia published by John Wiley & Sons Ltd.
• Children find it more difficult to recall three-syllable words which have the same prosodic pattern of strong and weak syllables.
• This implies that phonological remediation in dyslexia should include a focus on the stress patterning of strong and weak syllables in words.
prosody; phonology; serial recall
Short-term memory (STM) and long-term memory (LTM) have traditionally been considered cognitively distinct. However, it is known that STM can improve when to-be-remembered information appears in contexts that make contact with prior knowledge, suggesting a more interactive relationship between STM and LTM. The current study investigated whether the ability to leverage LTM in support of STM critically depends on the integrity of the hippocampus. Specifically, we investigated whether the hippocampus differentially supports between-domain versus within-domain STM–LTM integration, given prior evidence that the representational domain of the elements being integrated in memory is a critical determinant of whether memory performance depends on the hippocampus. In Experiment 1, we investigated hippocampal contributions to within-domain STM–LTM integration by testing whether immediate verbal recall of words improves in medial temporal lobe (MTL) amnesic patients when words are presented in familiar verbal contexts (meaningful sentences) compared to unfamiliar verbal contexts (random word lists). Patients demonstrated a robust sentence superiority effect, whereby verbal STM performance improved in familiar compared to unfamiliar verbal contexts, and the magnitude of this effect did not differ from that in controls. In Experiment 2, we investigated hippocampal contributions to between-domain STM–LTM integration by testing whether immediate verbal recall of digits improves in MTL amnesic patients when digits are presented in a familiar visuospatial context (a typical keypad layout) compared to an unfamiliar visuospatial context (a random keypad layout). Immediate verbal recall improved in both patients and controls when digits were presented in the familiar compared to the unfamiliar keypad array, indicating a preserved ability to integrate activated verbal information with stored visuospatial knowledge.
Together, these results demonstrate that immediate verbal recall in amnesia can benefit from two distinct types of semantic support, verbal and visuospatial, and that the hippocampus is not critical for leveraging stored semantic knowledge to improve memory performance.
Amnesia; Hippocampus; Medial temporal lobe; Schema; Long-term memory
Previous studies have examined whether the difficulties in short-term memory for verbal information that can accompany dyslexia are driven by problems in retaining information about the to-be-remembered items themselves or about the order in which those items were presented. However, such studies have not used process-pure measures of short-term memory for item or order information. In this work we adapt a process dissociation procedure to properly distinguish the contributions of item and order processes to verbal short-term memory in a group of 28 adults with a self-reported diagnosis of dyslexia and a comparison sample of 29 adults without a dyslexia diagnosis. Previous work has suggested that individuals with dyslexia experience item deficits resulting from inefficient phonological representations, along with language-independent deficits in order memory. In contrast, our results showed no evidence of specific problems in short-term retention of either item or order information among the individuals with a self-reported diagnosis of dyslexia, despite this group showing the expected difficulties on separate measures of word and non-word reading. However, there was some suggestive evidence of a link between order memory for verbal material and individual differences in non-word reading, consistent with other claims for a role of order memory in phonologically mediated reading. The data from the current study therefore provide empirical grounds to question the extent to which item and order short-term memory are necessarily impaired in dyslexia.
dyslexia; item memory; order memory; short-term memory; process dissociation
The goals of this project were threefold: to determine the nature of the memory deficit in children and adolescents with dyslexia, to use clinical memory measures in this endeavor, and to determine the extent to which semantic short-term memory (STM) is related to basic reading performance. Two studies were conducted using different samples, one incorporating the Wide Range Assessment of Memory and Learning and the other incorporating the California Verbal Learning Test–Children's Version. Results suggest that phonological STM is deficient in children with dyslexia, whereas semantic STM and visual–spatial STM are intact. Long-term memory (LTM) for both visual and verbal material is also intact. Regarding reading performance, semantic STM showed only small correlations with word identification and pseudoword decoding across studies, whereas phonological STM was moderately to strongly related to both basic reading skills. Overall, the results are consistent with the phonological core deficit model of dyslexia, as only phonological STM was affected in dyslexia and related to basic reading skill.
Dyslexia; Reading disabilities; Child; Adolescent; Short-term memory; Long-term memory
This study investigated whether phonological or semantic encoding cues improved the fast mapping or word learning performance of preschoolers with specific language impairment (SLI) or typical development (TD) and whether performance varied for words containing high- or low-frequency sublexical sequences that named familiar or unfamiliar objects.
Forty-two preschoolers with SLI, 42 preschoolers with TD matched for age and gender to children with SLI, and 41 preschoolers with TD matched for expressive vocabulary and gender to children with SLI learned words in a supported learning context. Fast mapping, word learning, and post-task performance were assessed.
Encoding cues had no effect on fast mapping performance for any group, nor on the number of words children learned to comprehend. Encoding cues appeared to be detrimental to word production for children with TD. Across groups, a clear learning advantage was observed for words with low-frequency sequences and, to a lesser extent, for words associated with an unfamiliar object.
Results suggest that phonotactic probability and previous lexical knowledge affect word learning in similar ways for children with TD and SLI and that encoding cues were not beneficial for any group.