This study was designed to test the word learning abilities of adults with typical language abilities, those with a history of disorders of spoken or written language (hDSWL), and hDSWL plus attention deficit hyperactivity disorder (+ADHD).
Sixty-eight adults were required to associate a novel object with a novel label, and then recognize semantic features of the object and phonological features of the label. Participants were tested for overt ability (accuracy) and covert processing (reaction time).
The +ADHD group was less accurate at mapping semantic features and slower to respond to lexical labels than both other groups. Different factors correlated with word learning performance for each group.
Adults with language and attention deficits are more impaired at word learning than adults with language deficits only. Despite behavioral profiles similar to those of typical peers, adults with hDSWL may use different processing strategies than their peers.
This study was designed to investigate the nature of the word learning deficits that may be present in adults with a history of disorders of spoken and written language (hDSWL) (cf. Grigorenko, 2007). A fast-mapping technique was used in order to examine the initial skills an adult learner brings to a word-learning task. A word learning task was chosen for two reasons. The first reason is functional; word learning is a task that continues throughout adulthood. As young adults enter the workforce, they must often learn the jargon associated with their new careers or their expanding social opportunities. The second reason is that aspects of word learning (e.g., semantics, phonology, working memory) are central deficits in the most common disorders of spoken and written language and may be present in adults with hDSWL. A task that combines semantics, phonology, and working memory has the potential to tap into the deficits most linked with those who have written language deficits (dyslexia and poor comprehenders) as well as those with oral language deficits (specific language impairment).
Few published studies address auditory word learning in adults. Of those that exist, most address word learning in typical adults. For instance, Storkel, Armbruster, and Hogan (2006) focused on the lexical and phonological components of word learning. Specifically, they orthogonally varied nonwords with high and low phonotactic probability and neighborhood density to examine the effects of each of these components on auditory word learning. Phonotactic probability refers to the frequency of the sound sequences or individual sounds in a word. Neighborhood density refers to the number of neighbors a word has if one adds, deletes, or changes a phoneme (Vitevitch & Luce, 1999). Storkel et al. (2006) found a high-probability disadvantage and a high-density advantage for typical adults in a word learning task that used single syllable nonwords with a production task as a measure of word learning. The authors suggested that phonotactic probability triggered new word learning whereas neighborhood density may have influenced the integration of new lexical representations into the lexicon.
Gupta (2003) also studied word-learning in typical adults. Specifically, he examined the relationship between nonword repetition, word-learning, and immediate serial recall in normal adults to see if the associations between these tasks occurred in adults, as they do in children. Results indicated that performance on these tasks was associated for adults. If the skills underlying these tasks (i.e., phonological awareness, short term memory, and integration of auditory and verbal information) coalesce in adults with normal language to facilitate word-learning, continuing deficits in any of these areas might hinder word-learning in adults with DSWL.
At present, it remains unclear whether these findings apply to adults with hDSWL, a population known to have difficulty with phonological processing and short term memory. Di Betta and Romani (2006) found that word-learning deficits in people with written language deficits persist into adulthood. Participants in this study were asked to learn foreign words and nonwords that were associated with pictures and presented in both written and spoken modalities. When the pictures were presented in a cued-recall task, participants had to write or say the foreign word/nonword that was associated with the picture. Their participants exhibited deficits in learning written and spoken words compared to unimpaired peers. Participants were also tested on measures of phonological working memory, visual learning abilities, and written language abilities, among others. Di Betta and Romani were not able to explain the word-learning deficits based solely on phonological deficits. For example, they found that scores on phonological awareness and short term memory tasks were correlated with spoken word-learning, but not written word-learning. They suggested that lexical learning abilities may be “…less dependent on phonological abilities in adults than in children.” (p. 393). The authors proposed that some of the other deficits may be related to resources that are directly related to spelling, but not phonology. It is important to note that the participants in this study were primarily individuals with spelling difficulties.
Other findings in the same study indicated that the adults with written language disorders needed more trials than unimpaired peers to learn new words, and even with the benefit of additional exposure, did not retain newly-learned words as well as unimpaired peers after a week’s delay. The authors tested word-learning across modalities (auditory, visual) as well as different aspects of word-learning (semantic and lexical learning). They concluded that their subjects did not have a generalized deficit in that they were able to perform as well as subjects with NL on several tasks including memory for visuospatial patterns.
Although harnessing semantic information is essential for word learning/knowledge, relatively little is known about fast mapping of semantic features in adults. Most of the work related to semantic organization for adults comes from studies of impaired individuals and the types of semantic categories that are spared and impaired (cf. Caramazza & Mahon, 2003). This body of work relates to how semantic information is organized in the brain and what might explain particular deficits in patients with frank neurological injuries. The motivation for examining semantic learning in adults without frank neurological injuries is based on the evidence for semantic word learning deficits in children with Specific Language Impairment (Alt, Plante, & Creusere, 2004; Alt & Plante, 2006; McGregor, Newman, Reilly, & Capone, 2002). In these studies, children with language impairment were found to either fast-map fewer semantic features than unimpaired peers, or to draw less detailed representations of concepts than unimpaired peers. The semantic component of word learning has not been thoroughly examined for adults with DSWL, despite evidence for semantic impairment in word learning for children with DSWL.
Functional word-learning requires that a person learn not only the lexical label for an object, action, or idea, but also the associated semantic features. That an interaction between learning lexical labels and semantic features might create difficulty for people with language learning difficulties would echo Storkel’s (2001) finding of relations between phonotactic probability and semantic and lexical representations in children. Difficulty with learning lexical labels and semantic features for people with language learning difficulties would be consistent with a “limited capacity” framework. The limited capacity theoretical framework proposes that there is a finite pool of processing resources available for cognitive tasks. As any aspect of a task becomes more difficult, demand on processing resources increases and task performance may suffer globally. This framework has been used previously to explain deficits that occur only under conditions of high processing demands (e.g., Ellis-Weismer & Evans, 2002; Ellis-Weismer & Hesketh, 1996, 1998; Leonard, 1998; Montgomery, 1995a). This framework is particularly germane to successful word-learning, in which multiple sources of information must be encoded simultaneously.
An additional consideration with adults with DSWL is attention. Clinical populations are more likely to have co-morbidity not only of written and spoken language (cf. Grigorenko, 2007; Smith, 2007), but also to have co-morbidity of attention deficit hyperactivity disorder (ADHD). Co-morbidity of ADHD and DSWL has been documented in numerous studies (cf. Cohen, Vallance, Barwick et al., 2000; Faraone, Biederman, Spencer et al., 2006). While the individual contributions of written and spoken language could conceivably be partialled out, deficits in attention could affect both domains of language. The effect of attention on language has been studied in children, but not, to our knowledge, in adults. Both Mathers (2006) and Redmond (2004) found unique profiles of language deficits for children with ADHD (who did not have frank language impairment) as compared to typically developing children. Spaulding, Plante, and Vance (2008) found that sustained selective attention was impaired in children with SLI when compared to peers, but only for an auditory task under high attentional demands. A significant portion of our adult participants reported a diagnosis of attention problems. Knowing that attention may have unique contributions to language learning in children, we decided to analyze our data for adult word learners with comorbid ADHD and DSWL separately from those with DSWL alone.
Disorders of spoken and written language, whether they have roots in oral language (specific language impairment) or written language (dyslexia and poor comprehenders), are thought to be genetic or neurobiological disorders. Both language and reading disorders have been found to have at least a familial, and in some cases a genetic, link (Grigorenko, 2007; Lane, Foundas, & Leonard, 2001; Smith, 2007). As such, they are not likely to be outgrown. A still unanswered question is the degree to which a history of DSWL affects adult functioning. Although it seems clear that deficits persist, there is currently a dearth of information about the specific nature of the effect of a DSWL in adults. For example, if they need help with written language, is it due to problems with orthography, phonology, prosody, morphosyntax, semantics, or lexical learning? Therefore, it is important to clarify the linguistic performance of adults with DSWL in order to better understand this population and thereby increase our ability to contribute to improved diagnostic and intervention techniques.
The current study proposes to add to the findings of the three studies noted above by expanding the range of participants included, the components of word learning addressed, and the modality of response. In terms of the participants, none of the three adult word learning studies noted above directly address the issue of attention. As such, this study will compare adults with a history of DSWL (hDSWL) to adults with a history of DSWL plus ADHD (+ADHD) to adults with normal language (NL). Our task will focus on the semantic aspect of word learning, in addition to tapping phonological and lexical components. We will specifically test the limited capacity hypothesis by varying the encoding conditions so that some semantic features should be more difficult to learn than others. Our task will be a nonproduction measure of recognition. Finally, we will also try to replicate Gupta’s findings by measuring the correlation between performance on nonword repetition tasks and word learning for all three groups.
Based on the evidence of persistent word-learning deficits in adults with DSWL (Di Betta & Romani, 2006), this study’s main hypothesis is that all adults with hDSWL will continue to have problems fast mapping semantic features and lexical labels compared to peers with normal language skills. We predict that the +ADHD group may have greater deficits than the hDSWL group. The differences should be apparent in accuracy data, reaction time, or both. Another prediction (related to between-group differences) is that the impaired and unimpaired groups will demonstrate different patterns of factors that contribute to word learning tasks. Specifically:
Participants in the study were placed into one of three groups: those with a history of disorders of spoken or written language (hDSWL, n=24), those with a hDSWL plus Attention Deficit Hyperactivity Disorder (+ADHD, n=20), or those with normal language skills (NL, n=24). One reason people use the term DSWL is that, even though both behavioral and genetic research suggest that spoken language and written language disorders are distinct (cf. Catts, Adlof, Hogan, & Ellis-Weismer, 2005; Smith, 2007), there is significant comorbidity between these disorders (Grigorenko, 2007). Due to problems with retrospective reporting, it is difficult to accurately classify adults with hDSWL. Some adult subjects tend to underreport their impairments (Plante, Shenkman, & Clark, 1996). Others, particularly those with spoken language impairments, may confuse language problems with speech problems, articulation problems, or both. Therefore, the hDSWL group included participants who self-identified as having dyslexia, learning disabilities, speech or language impairment, a history of speech language services, a history of special education services, or a combination of these. Only individuals who reported no history of DSWL, no history of ADHD, and no family history of any of these disorders were considered for the NL group (see Table 1).
Participants were recruited via a questionnaire that was distributed to all 100-level sections of psychology and sociology at the University of Arizona. These courses include students without disabilities and students who are registered for special support through the campus Disability Resource Center or the Strategic Alternatives Learning Techniques center, a unique and nationally recognized on-campus resource for students with learning disabilities. All participants were native English speakers. Participants received course credit or were paid for participation in the study. Human subjects’ protection protocols were followed throughout the course of the study.
Potential participants were pre-screened using a questionnaire that temporarily placed them in a group (hDSWL or NL). They filled out a second questionnaire to confirm the original information and to rule out participants who reported a history of head injury, seizure disorder, or history of neurological or psychiatric disorders. All participants were also required to pass a hearing screening that required consistent responses to pure tones at 25 dB HL for 1000, 2000, and 4000 Hz (ASHA, 1997).
All participants were tested in a 1½ hour session in a quiet room. All testing was conducted by a certified speech-language pathologist or a trained research assistant. Double-scorers were present for ten percent of all sessions. Participants were given a battery of tests in random order, but always finished the session with the experimental task, which was administered via computer.
Problems inherent to self-reporting include inaccurate reporting by impaired individuals (Plante, Shenkman, & Clark, 1996) and underdiagnosis of impaired individuals (Tomblin et al., 1997). To circumvent these issues, the ultimate determination of inclusion was correct group identification (hDSWL or NL) on a discriminant analysis based on the work of Tomblin, Freese, and Records (1992).
The tasks included in the discriminant analysis procedure were: Peabody Picture Vocabulary Test – Revised (PPVT-R, Dunn & Dunn, 1981), a modified version of the Token Test (Morice & McNichol, 1985), a speaking rate task, and a portion of the written spelling subtest of the Multilingual Aphasia Battery (Benton & Hamsher, 1989). Tomblin et al.’s procedure was modified slightly by the addition of more low-frequency spelling words (NEW), which were also taken from the Multilingual Aphasia Battery, in order to add more weight to this item. (See Table 2.) Additionally, the Test of Nonverbal Intelligence-III (TONI-III) (Brown, Sherbenou, & Johnsen, 1997) was administered to all participants to ensure normal nonverbal intelligence (70 + 1 SEM), although it was not part of the discriminant analysis. According to the results of the discriminant analysis, 23 of 49 NL participants and 15 of 59 impaired participants were misclassified, and therefore excluded from the study. An additional 2 NL participants were excluded in order to match the number of hDSWL participants. Although this average hit rate of 65% is not ideal, it is evidence-based and conservative.
There were two broad categories of descriptive measures: literacy measures and nonword repetition measures. To characterize literacy, the Broad Reading subtests (i.e., passage comprehension, word attack, reading fluency, and letter-word identification) of the Woodcock Johnson Reading Battery were administered (Woodcock, McGrew, & Mather, 2001). Two tasks were administered to characterize nonword repetition skill: (1) the Nonsense Word Repetition Task, which consists of 15 three- and four-syllable nonwords that are patterned after real words (Kamhi & Catts, 1986) and are scored on a whole-word basis (correct or incorrect), and (2) the Dollaghan and Campbell (1998) task, a series of 16 nonwords, four per group in each of one-, two-, three-, and four-syllable categories. This task was scored for correct responses for overall number correct and number correct in each category. All nonword repetition tasks were recorded digitally to allow for reliability testing. Inter-rater reliability on the literacy tests averaged 98.41% and point-by-point reliability for the nonword repetition tasks averaged 90.20%.
Data were entered into a forward, stepwise, discriminant analysis to determine which participants were impaired and which were not. Participants whose posterior probabilities did not match their original group classification were excluded from the study. The impaired group was further subdivided into those with and without ADHD. The NL group was significantly different from each of the impaired groups on measures related to vocabulary, spelling, reading, and following oral directions. The impaired groups were not significantly different from each other.
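The exclusion logic described above can be sketched in code. The following is an illustrative two-group linear discriminant classification with fabricated scores; the group means, standard deviations, and the idea that the four columns stand for the vocabulary, Token Test, speaking rate, and spelling measures are all assumptions for demonstration, not the study's data or exact procedure.

```python
import numpy as np

# Fabricated scores: 24 "NL" and 24 "hDSWL" participants on 4 measures
# (hypothetically vocabulary, Token Test, speaking rate, spelling).
rng = np.random.default_rng(0)
nl = rng.normal(loc=[100, 100, 100, 100], scale=10, size=(24, 4))
dswl = rng.normal(loc=[85, 85, 85, 85], scale=10, size=(24, 4))
X = np.vstack([nl, dswl])
y = np.array([0] * 24 + [1] * 24)  # 0 = NL, 1 = hDSWL (self-reported)

# Pooled-covariance linear discriminant analysis with equal priors.
means = np.array([X[y == g].mean(axis=0) for g in (0, 1)])
pooled = (np.cov(X[y == 0], rowvar=False) + np.cov(X[y == 1], rowvar=False)) / 2.0
inv = np.linalg.inv(pooled)

def posteriors(x):
    # Gaussian discriminant scores, normalized to posterior probabilities.
    d = np.array([-(x - m) @ inv @ (x - m) / 2.0 for m in means])
    p = np.exp(d - d.max())
    return p / p.sum()

# Retain only participants whose most probable group matches self-report,
# mirroring the exclusion rule described in the text.
keep = [i for i, x in enumerate(X) if posteriors(x).argmax() == y[i]]
print(f"{len(keep)} of {len(X)} participants retained")
```

Participants falling in the overlap between the two score distributions are the ones whose posterior probabilities contradict their self-reported group, and they are dropped.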
The experimental task was a word learning task that was presented via computer using DirectRT software (Jarvis, 2002). Participants were asked to recognize semantic features and lexical labels of novel objects. Participants pressed a key on a computer keyboard to indicate their decisions, and responses were recorded for accuracy and reaction time. In the training phase, participants were exposed to two familiar objects (a ball and a heart) paired with real words and were trained to recognize the lexical label and semantic features of the objects. Participants learned to respond to visual cues, so as to avoid any linguistic interference during the task. For example, when a picture of eyes appeared onscreen, the participants knew they had to decide whether the object they had just seen had eyes or not. All participants were required to respond correctly to several objects/labels during a training session in order to continue with the task. All participants passed the training component.
The novel objects used in the experiment were fashioned out of clay, digitally photographed, and had eyes added via computer editing software (Adobe Photoshop). The 24 objects varied along the following dimensions: absence/presence of eyes, color, pattern, and shape (See Table 3 and Figure 1). These objects were used previously in studies with children (omitted for reviewing purposes).
Each novel object was presented in one of four conditions: in silence, with noise, or with a nonword of high or low phonotactic probability. Noises were selected so that they were not easily associated with a specific event (doorbell ringing, siren, etc.) and were created by distorting sounds from a computer’s sound catalogue. In analysis, the noise and silence conditions were combined to create the nonlinguistic encoding condition. The linguistic encoding condition consisted of six high phonotactic probability (High PP) nonwords and six low phonotactic probability (Low PP) nonwords. These stimuli were used in (omitted for reviewing purposes). Each of the nonwords consisted of a CVCCVC or a CVCVC structure and was produced with trochaic stress. All words had low neighborhood density: all but two of the nonwords had no neighbors, and two of the High PP nonwords had two neighbors. Mean phonotactic probability was 5.32e−6 (range 5.2e−7 – 3.9e−6) for the High PP nonwords and 1.15e−8 (range 0 – 4.8e−8) for the Low PP nonwords. Frequency was calculated by multiplying the frequency of the onset by that of the rime (Hammond, 2000).
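The onset-by-rime calculation described above can be sketched as follows. The frequency values here are hypothetical placeholders chosen to land in the reported High PP range, not the actual stimulus counts from Hammond (2000).

```python
# Sketch of the onset-by-rime frequency calculation described above.
# The frequency tables are hypothetical placeholders, not real counts.
onset_freq = {"b": 2.6e-3, "w": 1.1e-3}   # hypothetical onset probabilities
rime_freq = {"ab": 2.0e-3, "iz": 4.0e-4}  # hypothetical rime probabilities

def phonotactic_probability(onset, rime):
    """Estimate the probability of a syllable by multiplying the
    frequency of its onset by the frequency of its rime."""
    return onset_freq[onset] * rime_freq[rime]

# A more frequent onset and rime yield a higher-probability syllable.
print(phonotactic_probability("b", "ab"))  # on the order of 5e-6 (High PP)
print(phonotactic_probability("w", "iz"))  # on the order of 4e-7 (lower)
```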
Two versions of the task were constructed to vary both the match of object to encoding condition and the order of presentation of the stimuli. For each version of the task, the “Yes” and “No” key placements on the keyboard were alternated such that the “Yes” key was on the left side of the keyboard for half the participants and was on the right side of the keyboard for the other participants.
Objects were presented to participants for 5 seconds. After participants saw an object, it was removed from view. Participants then had a semantic feature presented to them, and had to make a yes/no decision about whether or not the object they had just seen had that feature. Participants were given auditory and visual feedback via computer for correct accepts, correct rejects, and incorrect decisions.
When participants were presented with an object in the linguistic condition, they heard its label one time, in the context of a sentence (“See the XXX.”). After making decisions about the object’s semantic features, participants were then presented with the real label, a phonologically related 2-syllable foil, a phonologically related 1-syllable foil, and an unrelated foil (e.g., label: babbin; foils: binbab, wab, reez).
Participants had to make a yes/no decision about each label choice. They were not given feedback so as to not influence their decision about subsequent choices. For each lexical label the choices for the label were presented in random order, so as not to have an order effect.
In addition to testing the main hypotheses statistically, several t-tests were run to ensure findings were not due to artifacts associated with the different versions of the experiment. The two versions differed in terms of the order of items presented, and the link of particular visual stimuli and auditory stimuli. There were no significant differences found between versions. There was also no difference in performance when participants had the ‘yes’ key on the left side of the keyboard versus when the ‘yes’ key was on the right side of the keyboard. In addition, a t-test was run to assess the impact of sex on outcome, and no significant difference was found. Therefore, all subsequent analyses collapsed performance across these factors.
One hypothesis was that adults with hDSWL and +ADHD would fast-map fewer semantic features than peers with NL. Additionally, we predicted the +ADHD group might fare worse than the hDSWL group. To test this hypothesis, the data were analyzed using a fully-crossed mixed analysis of variance (ANOVA) design. Group (hDSWL, +ADHD, NL) served as the between-group factor and Semantic Feature (color, presence/absence of eyes, pattern, and shape) and Encoding Condition (silence, sound, High PP, Low PP) served as the within-group factors. Between-group differences were subjected to post-hoc comparisons (unequal Tukey HSD) and within-group differences were measured using least-squares tests.
There was a significant main effect for Group on overall mapping of nonverbal semantic features (F(2,65)= 7.01, p<.0017; η2 =.1774). The total number of features tested was 96 (4 features × 24 objects). The means for each group were 89.95 (SD= 3.35) for the hDSWL group, 85.95 (SD=7.26) for the +ADHD group, and 91.37 (SD=3.64) for the NL group. These results suggest that only the second part of the main hypothesis was supported, i.e. that adults with +ADHD mapped fewer semantic features than their peers with hDSWL only or NL.
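The partial eta squared values reported throughout this section can be recovered from each F statistic and its degrees of freedom; a minimal sketch, using the Group main effect just reported:

```python
# Partial eta squared from an F statistic and its degrees of freedom:
# eta^2 = (F * df_effect) / (F * df_effect + df_error).
def eta_squared(f_value, df_effect, df_error):
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Main effect of Group reported above: F(2, 65) = 7.01, eta^2 = .1774
print(round(eta_squared(7.01, 2, 65), 4))  # 0.1774
```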
There was also a main effect for Semantic Feature (F(3,195)=37.42, p=.0000; η2 =.3653). A least squares difference test (LSD) with an alpha=.05 revealed that performance on any semantic feature was significantly different from performance on the remaining three semantic features for all groups. The order of performance on semantic features, from best to worst was: eyes, color, shape, then pattern. (See Figure 2.) There were two significant Group by Semantic Feature interactions: the +ADHD group performed worse than the NL group on the most difficult features (shape and pattern). Recall that the highest score for this portion of the task was 24. The mean for the +ADHD group for shape and pattern were 21.1 (SD=2.42) and 20.3 (SD=2.71) and the means for the NL group were 22.00 (SD=1.31) and 22.79 (SD=1.14).
There was also a main effect for Encoding Condition (F(3, 195)=4.01, p=.0084; η2 =.0581). A least squares difference test (LSD) with an alpha=.05 revealed that for all groups, performance on both the High PP and the Low PP conditions was significantly worse than performance on the noise and silence conditions. A planned comparison was used to determine if there were differences between performance on the linguistic and non-linguistic conditions. There was a statistically significant difference in that all participants did significantly better on the nonlinguistic conditions than on the linguistic conditions (F(1, 65)=10.51, p=.0018; η2 =.1617). The mean for the nonlinguistic condition was 45.16 (SD=3.14) and the mean for the linguistic condition was 44.11 (SD=2.77). There was a single significant interaction between Encoding Condition and Group, such that the +ADHD group performed worse than the NL group in the silence condition. The mean for the +ADHD group was 21.30 (SD=2.29) and the mean for the NL group was 23.12 (SD=1.26).
There was an interaction between Semantic Feature and Encoding Condition (F(9, 585)=2.53, p=.0075; η2 =.0374). The difficulties of each condition had a cumulative effect, so that in the noise condition, there were no differences between semantic features. In the silence condition, people mapped presence/absence of eyes more accurately than any other semantic feature. In the High PP condition, people mapped colors better than patterns and eyes better than pattern and shape. In the Low PP condition, people mapped colors and eyes better than patterns or shapes. There was no Group × Encoding Condition × Semantic Feature interaction.
To test the hypothesis that adults with hDSWL and +ADHD would fast-map semantic features more slowly than peers with NL, an ANOVA was conducted using reaction times in place of accuracy data. There was a significant main effect for Group on overall reaction time for semantic features (F(2,64)= 6.22, p<.0033; η2 =.1628). The means for each group were 1514.84 ms (SD=296.76 ms) for the hDSWL group, 1748.29 ms (SD=506.13 ms) for the +ADHD group, and 1361.36 ms (SD=257.48 ms) for the NL group. Adults with +ADHD had significantly slower reaction times than the NL group.
There was also a main effect for Semantic Feature (F(3,192)=11.53, p=.0000; η2 =.1526). A least squares difference test (LSD) with an alpha=.05 revealed that all participants responded more quickly to the eyes and color features than to the shape and pattern features. The performance of the groups on each of the four semantic features is displayed in Figure 4. There were two significant Group by Semantic Feature interactions. The hDSWL group was the only group to show within-group differences on the semantic features. They responded more quickly to color than pattern, and more quickly to eyes than to pattern or shape. The +ADHD group was significantly slower than the NL group when deciding on the semantic feature “color”. The mean for the +ADHD group was 1756.31 ms (SD=854.66 ms) and the mean for the NL group was 1295.27 ms (SD=322.67 ms).
There was no main effect for Encoding Condition (F(3, 192)=1.21, p=.3054; η2 =.0186). However, there was an interaction between Semantic Feature and Encoding Condition (F(9,576)=5.152, p=.0000; η2 =.0745). Post-hoc testing revealed that there were no differences processing presence/absence of eyes in any condition. However, participants responded more quickly to color in the Low PP condition than in the silence conditions. They also responded more quickly to color in the High PP condition than in the noise or silence conditions. By contrast, participants responded more slowly to shapes in the High PP condition than in the noise condition. Finally, responses for patterns in the High PP condition were slower than those in all other encoding conditions. There were no three-way interactions (Group × Semantic Feature × Encoding Condition). Please see Table 4.
We hypothesized that there would be a correlation between performance on a nonword repetition task and performance on word learning for all groups. A significant correlation between nonword repetition and semantic mapping was found for both the hDSWL and the +ADHD groups, although each group had slightly different manifestations. Results indicated a significant correlation between semantic fast mapping and performance on the Nonsense Word repetition task, but only for the +ADHD group (r=.58, p<.01; r2=.33). There was a significant correlation on the Dollaghan nonword repetition task for the hDSWL and +ADHD groups only, such that the number of 4-syllable nonwords correct on the Dollaghan task correlated with performance on semantic feature fast-mapping for the hDSWL group (r=.51, p<.01; r2=.27) and the number of 3-syllable nonwords correct on the Dollaghan task correlated with performance on semantic feature fast-mapping for the +ADHD group (r=.55, p<.01; r2=.30).
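The correlation analyses reported here pair each participant's nonword repetition score with their semantic fast-mapping accuracy and report r along with r² as the proportion of shared variance. A minimal sketch, using fabricated placeholder scores rather than the study's data:

```python
import numpy as np

# Fabricated placeholder scores for eight hypothetical participants:
# nonword repetition (items correct) and semantic fast-mapping accuracy.
nonword_rep = np.array([12, 9, 14, 8, 11, 13, 10, 15])
fast_map = np.array([88, 84, 92, 80, 86, 90, 85, 94])

# Pearson r from the correlation matrix; r^2 is the shared variance.
r = np.corrcoef(nonword_rep, fast_map)[0, 1]
print(f"r = {r:.2f}, r^2 = {r**2:.2f}")
```

With real data, the sign and size of r, together with r², indicate how much of the variability in fast-mapping performance tracks nonword repetition skill.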
Our additional hypotheses related to group differences were that factors that load on DSWL would be related to word learning in the impaired, but not the unimpaired, groups and that there would be a unique profile for word learning for the +ADHD group. The data bore these hypotheses out. For the hDSWL group, spelling and reading (broad reading and reading fluency), in addition to the nonword repetition task, were all associated with semantic fast mapping. (See Table 5). For the + ADHD group, the only other correlated factor was performance on the lexical labeling task (r=.66, p<.00; r2=.43). For the NL group only vocabulary and performance on the Token task were also associated with semantic fast mapping. (See Table 5.)
We hypothesized that hDSWL and +ADHD participants would fast map fewer lexical labels than NL participants. Recall that the lexical label task required the participant to correctly identify the target label and reject a completely unrelated foil and two different, but related, foils. An analysis of variance (ANOVA) with Group (hDSWL, +ADHD, NL) as the between-subjects factor and Phonotactic Probability (High PP, Low PP) and Item Type (target label, phonologically related 2-syllable foil, phonologically related 1-syllable foil, unrelated foil) as within-group factors was used to test this hypothesis. Given that all groups performed near ceiling, limiting the variance, the planned ANOVA could not be run. Instead, the data were examined using the nonparametric Kruskal-Wallis ANOVA to compare the overall number of items correct (hDSWL mean=47.25, SD=1.07; +ADHD mean=46.35, SD=1.53; NL mean=47.58, SD=.65). This test indicated that the +ADHD group was less accurate than the other two groups on this task (H=11.88, p<.0026).
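The fallback to a rank-based test when ceiling effects remove the variance a parametric ANOVA needs can be sketched as follows; the near-ceiling scores below are fabricated for illustration, not the study's raw data.

```python
from scipy.stats import kruskal

# Fabricated near-ceiling accuracy scores (out of 48 items) for three
# hypothetical groups of eight participants each.
hdswl = [47, 48, 47, 46, 48, 47, 48, 47]
adhd = [46, 45, 47, 44, 46, 45, 47, 46]
nl = [48, 47, 48, 48, 47, 48, 47, 48]

# Kruskal-Wallis compares groups on ranks, so it tolerates the skewed,
# compressed distributions that ceiling effects produce.
h, p = kruskal(hdswl, adhd, nl)
print(f"H = {h:.2f}, p = {p:.4f}")
```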
Given findings that indicate people with hDSWL may often show covert processing differences even when they reach ceiling on a task (Plante, Van Petten, & Senkfor, 2000), we analyzed reaction time from the lexical labeling task. Using a parametric ANOVA, no significant main effects of Group or Phonotactic Probability were found, although there was a significant main effect of Item Type (F(3, 1957)=17.81, p<.0001; η2=.2151). A post-hoc analysis (least squares test) revealed that participants took significantly longer to react to the real label and the phonologically-related 2-syllable foil than they did to the unrelated foil and the phonologically-related 1-syllable foil. (See Figure 5.) Post-hoc testing also revealed a significant Group by Foil interaction. Both the NL and the hDSWL group, but not the +ADHD group, took longer to respond to the 2-syllable related foil than to the unrelated foil.
We hypothesized that there would be a significant correlation between fast-mapping of lexical labels and performance on the nonword repetition tasks. There was a significant correlation only for the +ADHD group, on the Nonsense Word Repetition task (r=.60, p<.00; r2=.36). Performance on the semantic features mapping task was also correlated with lexical labeling performance for the +ADHD group (r=.66, p<.00; r2=.43). No factors correlated with performance on the lexical labeling task for the hDSWL or NL groups.
Instead of the predicted across-the-board hDSWL deficit, only the +ADHD group consistently showed a deficit compared to the NL group. The +ADHD group responded more slowly and less accurately than both the hDSWL and NL groups on the semantic features task (reaction time and accuracy) and the lexical label task (accuracy only). These results suggest that, for college-bound adults, a history of ADHD has more persistent deleterious effects on skills associated with word learning than does a history of DSWL alone.
It is necessary to exercise caution in interpreting findings regarding the role of attention in performance on the experimental tasks. The experimental protocol did not include explicit measures of attention; therefore, the role of attention in the performance of the +ADHD group must be inferred. However, what is clear is that the addition of a history of ADHD produces a profile distinct from that of people with hDSWL only. Examination of the patterns of performance on the experimental tasks provides some insight into where the difference may lie.
The factors that correlated with the word learning tasks seemed to be related to the language processing profiles of each group. For NL participants, vocabulary and auditory language processing correlated with semantic mapping performance. In other words, skills typically associated with learning lexical labels were also associated with the semantic component of word learning.
In contrast, the hDSWL group patterned much like people with dyslexia: spelling and reading, both phonologically based skills, were related to their performance. The hDSWL group had lower scores than the NL group on these tasks. The limited capacity hypothesis, which is often used as an explanation for the language problems observed in people with hDSWL, does not apply as easily to the hDSWL group in this experiment. For example, their performance on the 4-syllable nonwords was the best of all three groups (although the difference was not statistically significant). The hDSWL group could have covert processing differences in nonword repetition that were not measured. An alternative explanation is that phonological skills continue to make a more active contribution to word learning for the hDSWL group than for the NL group, perhaps because of their efforts to overcome their impairment.
The +ADHD group showed a unique pattern such that only the nonword repetition task and the lexical label fast mapping task were correlated with their performance on the semantic fast mapping task. However, they did not share any correlating factors with their singly-impaired peers. There was no statistically significant difference between the hDSWL group and the +ADHD group on any of the descriptive measures. That the profiles of correlation do not overlap suggests that any differences are not due to inherent differences in spelling or reading ability. Rather, it may be the case that the +ADHD group relies on different strategies during the word learning process.
The limited capacity hypothesis may explain some of the performance of the +ADHD group. Their performance on the Nonsense Word Repetition task was correlated with their performance on both the lexical labeling portion of the task and the semantic mapping portion of the task. Whereas the nonwords in the Dollaghan task are true nonwords, the nonwords in the Nonsense Word Repetition task are based on real words. It is possible that the enhanced wordlikeness of the nonwords on the latter task presented a particular challenge only to the +ADHD group, whose dual impairment limits their processing capacity.
The specific nature of the +ADHD processing profile remains unclear. For example, one might have predicted that, due to impulsivity, the +ADHD group would have faster reaction times that might account for some of their lack of accuracy. However, the actual difference was in the opposite direction: the +ADHD group was slower than the other groups. Even though the +ADHD group was less accurate on both tasks (semantic and lexical), they were only slower on the semantic features task, and not on the lexical labeling task. This finding may reflect their dual impairment status.
Although there is the possibility of overlap between groups (hDSWL and +ADHD in terms of diagnosis, and hDSWL and NL in terms of performance), each group is likely bringing different processing skills to the task of word learning. The +ADHD group had particular difficulty with online processing tasks, such as fast mapping and nonword repetition. These two tasks, in particular, have high attentional demands. Given that these tasks involve novelty and brief presentation times, a lapse in attention could lead to diminished encoding of the new information. The processing components that are correlated with performance for the hDSWL group do not significantly factor into word learning for the +ADHD group. This suggests that the attentional component of their disability contributes most prominently to their learning profile, although without a direct measure of attention, it is possible that as-yet-unidentified variables were associated with their poorer performance. Similarly, even though the hDSWL group performed as well as the NL group on the task, they evidenced a different profile for the factors that correlated with their performance. This suggests that each group brings a unique processing profile to the task.
Research support for the relation between phonological working memory and word learning in adults is somewhat mixed. Gupta (2003) found that phonological processing was related to lexical word learning in typical adults, whereas Di Betta and Romani (2006) found that it was related to lexical learning for impaired adults, but perhaps less so for adults than for children. Our findings suggest that phonological processing is related to semantic word learning, but only for adults with hDSWL. Task differences could account for the discrepancy in findings between this study and Gupta’s. The lexical learning portion of this task proved relatively easy for most participants. Phonological processing may be a non-issue for typical adults when task demands are low. However, even though there were no significant overt differences in accuracy between the typical and hDSWL group, there appeared to be covert differences related to phonology.
Reaction time data add to these findings and speak to the second hypothesis about encoding condition. Based on Storkel et al.’s (2006) work, we predicted a high phonotactic probability disadvantage. In terms of semantic encoding, there was no evidence of significant difference between high and low phonotactic probability encoding conditions based on accuracy data.
There was, however, a semantic feature by encoding condition interaction for reaction time, although its nature differed somewhat from the predicted disadvantage. Speed of responding seemed to be mediated by the difficulty of the semantic attribute being encoded. For the easiest feature, eyes, there was no difference at all. For the feature of color, adults showed a speed advantage for the linguistic conditions when compared to the nonlinguistic conditions. However, for the more difficult features (shape and pattern) there was a slower reaction time for the high phonotactic probability condition than for noise. In fact, for pattern, the high phonotactic probability condition was slower than all other conditions. Therefore, it may be that phonotactic probability requires additional processing time in that learners must compare the new label to other, existing words in the lexicon that share phonotactic sequences. The extra time it takes to compare the sequences could be interpreted as a high phonotactic probability disadvantage, as per Storkel et al. (2006). However, recall that reaction times were only recorded for correct responses. This seems to be indicative of a speed/accuracy trade-off, in which adults take additional time in order to achieve higher accuracy. The increased time does not seem to have a significant impact on cognitive resources, at least those needed for semantic mapping.
Reaction time differences for lexical labels related to item type further corroborate a high phonotactic probability disadvantage. Recall that participants were asked to make decisions about 4 possible lexical labels: the real label, a phonologically-related 2-syllable foil, a phonologically-related 1-syllable foil, and a phonologically unrelated foil. Participants took longer to respond to the real label and the 2-syllable related foil than to the 1-syllable related and unrelated foils. The real, newly-learned label shared the most phonotactically with the related 2-syllable foil. It is likely that participants needed more time to make a decision about these choices than they did about foils that were clearly unrelated. In this case, the phonotactic probability effect refers to the likeness of each of the choices for a given label (a within-label effect), and not to the overall phonotactic probability as compared to the rest of the lexicon. Again, this disadvantage was only evident in terms of reaction time, and not accuracy.
The value of the contribution of recognition of semantic features to word learning is still open for debate. Most studies of word-learning that pair auditory-visual stimuli treat the picture or symbol and its lexical label as a single integrated unit. The current study extended the exploration of word-learning by asking participants to identify details (semantic features) of the visual stimuli, making this task mirror word-learning challenges in real life situations. For example, simply recognizing a whole object and its associated label (e.g., scissors) will not provide any clues about an object’s function. Semantic features may serve as indices to more in-depth information about an object. For instance, awareness of even simple visual semantic features of an object (e.g., scissors have sharp points and loops) can provide insights about purpose (e.g., used for cutting, have handles). Certainly, this type of inference would not be valid in all cases, for example, when a patient has semantic dementia. However, if we do learn from semantic features, knowledge about the ability to perceive and remember semantic features may provide insight into word learning.
In terms of semantic deficits, the +ADHD group showed a deficit compared to the other groups on the semantic fast mapping task in terms of both accuracy and reaction time. These results contrast with those of Di Betta and Romani (2006), who did not find semantic or visuospatial deficits in their impaired adult subjects, although their experimental tasks were different from those used in the present study. Whether this particular deficit is driven by limited capacity for cognitive/linguistic processing or a more circumscribed attentional deficit remains an open question.
In terms of recalling semantic features, all participants consistently demonstrated the same general processing patterns. Whereas the +ADHD group was quantitatively less accurate than the other groups in recalling semantic features, their profile was, for the most part, qualitatively the same. The findings from the reaction time data mirror the pattern of difficulty found in the semantic accuracy data. We had not predicted that adults would map one feature more accurately than another. In this task, there was no benefit to recognizing a particular semantic feature more accurately than another. However, all adults showed a preferential pattern for recognizing semantic features. Their order of accuracy was highest for recognizing the presence or absence of eyes in a novel object, followed by accuracy in recognizing colors, shapes, and finally, patterns.
These findings might be explained in part by differences in chance probability. When deciding about the presence or absence of eyes, participants had to make a dichotomous choice (present/absent). For the other semantic features, choices were not dichotomous, and participants had a 25% (color, shape) or 20% (pattern) chance of being correct. Another possibility for the relatively poor performance on patterns is that humans simply are not as good at recognizing patterns as they are at recognizing other semantic features. There are arguments that humans are far more likely to recognize global properties of objects than local ones (Navon, 1977). This preference for the global over the local (or whole versus part) is not necessarily due to poorer perception of local features. Another possibility is that impaired and unimpaired adults use similar strategies to prioritize semantic features of novel objects, a strategy advantageous in real-world situations. Given that we only measured participants' performance on extracting semantic features from whole objects, we cannot say how they might associate or apply that information. However, consider the following example: pattern might be important in determining whether a snake is poisonous or not. However, you need not make that distinction unless you have already determined that the pattern is part of something animate (and presence or absence of eyes is one strong indicator of animacy). Noticing the pattern on a carpet is less compelling than noticing the pattern on a potentially poisonous snake. This explanation is supported by the evidence that human beings tend to organize semantic information into large base categories of animate/inanimate (e.g., Warrington & Shallice, 1984).
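The chance-probability argument can be made concrete. Using the option counts implied in the text (two for eyes, four each for color and shape, five for pattern), guessing alone yields an accuracy ordering that largely mirrors the observed one; a trivial illustration:

```python
# Number of response options per semantic feature, as described in the text.
options = {"eyes": 2, "color": 4, "shape": 4, "pattern": 5}

# Probability of a correct response by pure guessing (1 / number of options).
chance = {feature: 1 / k for feature, k in options.items()}
# eyes: 0.50, color: 0.25, shape: 0.25, pattern: 0.20
```

Note that color and shape are tied at chance level, so the observed difference between them must come from something other than guessing odds.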
Certainly there are other theories of semantic organization including the sensory/functional theory and the conceptual/structural theory that acknowledge the animate/inanimate finding, but explain it based on saliency of stimuli or overlap of concepts, rather than assuming that animacy is an inherent category in and of itself (c.f. Caramazza & Mahon, 2003). Additional work is required to determine the reason for the animacy advantage found in this study.
However, if weighting is involved in how humans perceive semantic information, it appears, based on accuracy data, that impaired and unimpaired adults weigh semantic features similarly. Despite similar profiles for weighting semantic features, differences in processing semantic features may still be an area that can explain word learning differences. The +ADHD group had lower accuracy than both other groups. In children, sparser semantic representation has been linked to increased word finding problems (McGregor, Newman, Riley, & Capone, 2002). The same pattern may hold true for adults. Only the hDSWL group showed significant within-group differences in their reaction times to different semantic features, suggesting that they adopted a different approach to the semantic mapping task. It is possible that a more difficult task would point to a different strategy of fast mapping for this group.
The hypothesis specific to the encoding of semantic features was that it would be more taxing to encode semantic features in a linguistic than in a nonlinguistic condition, due to the additional processing demands inherent in linguistic information. Recall that there were four encoding conditions. Novel objects were presented in the presence of: silence, noise, a high phonotactic probability nonword, or a low phonotactic probability nonword. The results supported our hypothesis for all participants.
A finding of note is that the silence condition was not the condition with the highest accuracy; rather, participants performed with the highest accuracy in the noise condition. One might predict that silence would offer the lightest processing load, and thus result in the highest accuracy. When a similar paradigm was run with children, the silence condition was unexpectedly difficult because it broke the pattern of stimulus presentation; all other stimuli were auditory + visual (Alt & Plante, 2006). The lack of auditory stimuli confused the children, who commented on it. Although the adults did not make explicit comments about the difference, it may be the case that this break in pattern was also noticed by the adults and that the act of noticing the difference squandered any cognitive advantage that would otherwise have been present. However, all subjects coded semantic features more accurately in the silence condition than in the linguistic conditions.
There was a single group by encoding condition interaction related to the silence condition. The +ADHD group performed significantly worse than the NL group in this condition. It may be the case that the break in pattern was most jarring to this group with dual deficits. This finding is in line with Norrix, Plante, and Vance's (2006) conclusion that adults with a history of DSWL did not integrate incongruent auditory and visual cues as well as adults with normal language. The stimuli in this study were not nearly as incongruent as those presented in Norrix et al.'s McGurk paradigm. This may be why only the dual-deficit group showed a difference and may be further evidence of limited processing capacity for the +ADHD group.
There was a clear interaction between semantic features and encoding conditions, such that the most difficult features and conditions (pattern + linguistic), when combined, were significantly more difficult than all other conditions. All adults were affected by this increased difficulty.
To further the findings from this study, experiments that include both task modifications and different study populations are warranted. Future studies might include groups of people with hDSWL who are not in college, as well as a group of people with ADHD who do not have hDSWL. The rationale for recruiting people with hDSWL who are not in college is that college students may not be the most representative group. For example, contrary to expectations, the hDSWL group had a significantly higher score than the NL group on the PPVT, a measure of vocabulary. This higher performance may reflect the fact that this is a group of people with mild impairments, particularly efficient compensatory skills, or both. This group of high achievers is contrasted with students from a state university who are more likely to represent the general population. For example, in academic year 2007, this university admitted 77% of all applicants. This participant pool may have muted the results, which might be amplified if a more representative hDSWL group can be found. Although our goal in this study was not to study attention directly, in future studies, direct measurement of participants' attention skills would allow for more detailed analysis of the effects of attention on word learning tasks. Recent research suggests that intact frontal executive skills are crucial for rehabilitation (Fillingham, Sage, & Lambon Ralph, 2006; Robertson & Murre, 1999). Although language in dissolution is not the same as language in development, acknowledging the potential role of executive skills such as memory, problem-solving, the ability to direct attention, and the ability to monitor progress and feedback can guide further research in the word-learning domain.
Task modifications might include production as well as nonproduction measures of word learning in order to examine the effect of output modality on performance. In this vein, future tasks could include written as well as oral responses. Likewise, design considerations such as varying semantic features and equalizing their difficulty might shed light on whether differences in performance across features were a result of differences in raw perceptual ability versus semantic feature weighting. Task difficulty should be increased as well, particularly for the lexical label portion of the task.
The findings from this study suggest that some patterns of word learning apply to adults both with and without impairment. Adults may process semantic features in a weighted manner, with animacy weighted the most heavily and pattern weighted the least heavily (among the four semantic features tested). Linguistic processing seems to demand more cognitive processing than nonlinguistic processing. Likewise, it takes longer to process difficult semantic features in the presence of high phonotactic probability nonwords than in the presence of low phonotactic probability nonwords.
In this particular college setting, adults in the +ADHD group showed word learning deficits compared to peers with and without hDSWL. The dual deficits, and the unique challenges of an attention disorder, appear to have a deleterious effect on word learning and suggest a limited processing capacity compared to peers. Although the +ADHD group showed deficits in recognizing lexical labels, their greatest deficits were related to semantic feature mapping. College students with hDSWL did not show a word learning deficit compared to peers with NL, although different factors correlated with their performance than with that of their NL peers, suggesting that each group utilizes different processing strategies in the word learning task. Factors related to phonology (nonword repetition, reading, and spelling) continue to play a larger role in word learning for people with hDSWL than for people with NL, even when their overall performance is equal.
Clinical implications depend upon the actual role that fast-mapping plays in word learning. Given that perception of semantic features is a prerequisite for using semantic features to infer the function and meaning of objects in the world, adults with dual disorders (+ADHD) are at risk for decreased semantic learning. Care should be taken to ensure that they have adequate exposure to visuo-spatial information when learning words; a quick exposure that suffices for a non-impaired peer may not be adequate for someone with impaired processing. At the very least, clients with +ADHD should receive comprehension checks of their semantic understanding. For clients with hDSWL, the results of this study imply that a clinician should take into account the phonological requirements of the task, as well as task difficulty. College-educated hDSWL participants may be continually compensating for their phonological challenges; those who have fewer educational advantages may struggle more than their peers with word learning tasks that have high phonological demands. Results of this study indicate that group differences in the role of attention, as well as in semantic and phonological processing, are implicated in poorer word learning in adults with hDSWL. To more fully understand the nature of these group differences, further research into the cognitive processes that undergird language learning is warranted.
This work was supported by R01-DC04726 from the National Institute on Deafness and other Communication Disorders. We would like to thank Rachel Hayes-Harb for calculating phonotactic probabilities and neighborhood densities, and Michael Hammond for sharing the algorithm used for those calculations. We thank Tiffany Hogan and members of the L4Lab for their input on this project. We also thank all of the participants who took part in this study.
1. The average log frequency of the words in the NEW spelling task was 8.234 with a range of 5.591-13.667. Log frequency was calculated using the Hyperspace Analogue to Language (HAL) norms found on the English Lexicon Project Website (Balota et al., 2007). This compares to a HAL log frequency of 9.04 (range 4.094-12.572) for the MAE spelling test, used in the original Tomblin et al. (1992) battery.