Deaf individuals have been found to score lower than hearing individuals across a variety of memory tasks involving both verbal and nonverbal stimuli, particularly those requiring retention of serial order. Deaf individuals who are native signers, meanwhile, have been found to score higher on visual-spatial memory tasks than on verbal-sequential tasks and higher on some visual-spatial tasks than hearing nonsigners. However, hearing status and preferred language modality (signed or spoken) frequently are confounded in such studies. That situation is resolved in the present study by including deaf students who use spoken language and sign language interpreting students (hearing signers) as well as deaf signers and hearing nonsigners. Three complex memory span tasks revealed overall advantages for hearing signers and nonsigners over both deaf signers and deaf nonsigners on 2 tasks involving memory for verbal stimuli (letters). There were no differences among the groups on the task involving visual-spatial stimuli. The results are consistent with and extend recent findings concerning the effects of hearing status and language on memory and are discussed in terms of language modality, hearing status, and cognitive abilities among deaf and hearing individuals.
Memory functioning among deaf individuals has been of theoretical and practical interest to investigators for more than 100 years because of its importance for understanding relations of language and cognition as well as academic outcomes for deaf learners (Emmorey & Lane, 2000; Marschark & Wauters, 2011; Mayberry, 2002). At the interface of (spoken, printed, and signed) language and cognition, particular attention has focused on working memory, the temporary memory system that allows “humans to understand and mentally represent their immediate environment, to retain information about their immediate past experience, to support the acquisition of new knowledge, to solve problems, and to formulate, relate, and act on current goals” (Baddeley & Logie, 1999, p. 28). Centrally involved in language comprehension, problem solving, and learning, working memory among deaf individuals has been found to be a significant predictor of reading (Garrison, Long, & Dowaliby, 1997; Geers, 2003) and mathematics achievement (Gottardis, Nunes, & Lunt, 2011; Lang & Pagliaro, 2007). The present study extends that research with the goal of distinguishing the effects on working memory of hearing loss, which has been associated with poorer memory performance across a number of verbal tasks, from the effects of sign language skill, which has been associated with better memory performance in a number of visual-spatial tasks.
Boutla, Supalla, Newport, and Bavelier (2004) provided evidence that hearing individuals and native-signing deaf individuals have comparable working memory capacities (see Rudner, Andin, & Rönnberg, 2009, for discussion). Native signers also frequently have been found to demonstrate better visual-spatial memory than sequential-verbal memory and better visual-spatial memory than nonsigning individuals (Hall & Bavelier, 2010). Deaf native signers, usually deaf children of deaf parents, however, comprise only about 5% of deaf individuals, and the generality of findings from those studies with regard to the deaf population at large remains unclear. Nevertheless, such findings suggest that the frequently observed differences in memory performance between hearing individuals and deaf individuals may not be a function of hearing status per se but perhaps related to language fluencies (e.g., vocabulary; Mayberry, 2002; Sarchet et al., 2014); cognitive abilities (e.g., memory strategies; Nunes, Barros, Evans, & Burman, 2014); or neurologically based information processing differences (Corina, Lawyer, Hauser, & Hirshorn, 2013). Deaf children, for example, a group particularly heterogeneous in those domains regardless of whether they rely primarily on signed or spoken language, frequently are found to have poorer performance than hearing peers in memory tasks involving either verbal or nonverbal stimuli, particularly when retention of sequential or temporal information is involved (e.g., Bebko, Bell, Metcalfe-Haggert, & McKinnon, 1998; Burkholder & Pisoni, 2006; Campbell & Wright, 1990; Cormier, Schembri, Vinson, & Orfanidou, 2012; Fagan, Pisoni, Horn, & Dillon, 2007; Geers, 2006; Hamilton, 2011; Pisoni & Cleary, 2003; Pisoni, Conway, Kronenberger, Henning, & Anaya, 2010).1
Memory for sequentially presented items, both in simple span tasks (e.g., forward and backward digit spans) and complex memory span tasks that eliminate verbal rehearsal, has been found to be closely linked to language abilities and vocabulary knowledge in both hearing children (e.g., Gathercole & Baddeley, 1989; Gathercole, Willis, Emslie, & Baddeley, 1992) and deaf children (e.g., Bebko et al., 1998; MacSweeney, Campbell, & Donlan, 1996; Pisoni & Geers, 2000; Pisoni, Kronenberger, Roman, & Geers, 2011; Tzeng, 2002). Language modality is not the critical factor in those studies, as the use of signs or nonverbal materials does not eliminate the memory span differences in children or adults (e.g., Blair, 1957; Boutla et al., 2004; Geraci, Gozzi, Papagno, & Cecchetto, 2008; Krakow & Hanson, 1985). Alternatively, the limited use of verbal rehearsal and serial scanning observed among deaf children who use spoken language (Burkholder & Pisoni, 2006; Pisoni et al., 2010, 2011) or use sign language (Bebko et al., 1998) has suggested that observed differences in some working memory tasks may be related to language-based executive functioning (Hauser, Lukomski, & Hillman, 2008; Kronenberger, Pisoni, Henning, & Colson, 2013; Marschark, Spencer, et al., 2015; Pisoni et al., 2010).
Consistent with the above suggestions, visual-spatial working memory tasks that do not require or benefit from verbal-sequential coding often yield comparable levels of memory performance for deaf and hearing individuals (Arnold & Mills, 2001; Bellugi et al., 1990; Campbell & Wright, 1990; Pisoni & Cleary, 2003). Dawson, Busby, McKay, and Clark (2002), for example, found that hearing children significantly outperformed deaf children (with cochlear implants) on three sequential working memory tasks that involved words and easily labeled pictures, whereas the two groups did not differ in memory for sequences of tones and hand movements that could not be labeled easily. In a related study, Wilson, Bettger, Niculae, and Klima (1997) examined working memory in native-signing deaf children as compared to hearing children. As is usually the case, the hearing children demonstrated longer forward than backward digit spans, and they had significantly longer forward memory spans than the deaf native signers. The deaf children, in contrast, demonstrated comparable forward and backward spans, a finding that suggests that native signers can use visual-spatial coding (e.g., visual imagery) in sequential memory tasks (cf. Boutla et al., 2004; Hall & Bavelier, 2010). In fact, the deaf native signers in the Wilson et al. study demonstrated significantly longer backward memory spans than hearing peers. That result contrasts with findings by Kronenberger et al. (2013) who found significantly shorter backward memory spans, as compared to hearing peers, among deaf 7- to 25-year-olds who used cochlear implants and spoken language.
The Wilson et al. (1997) study also included administration of the Corsi Blocks task, in which an experimenter taps sequences of increasing length (up to nine items) on randomly placed blocks and the participant must reproduce each sequence in the same order. They found that the native-signing deaf children significantly outperformed their hearing peers in that visual-spatial memory task. Marschark, Morrison, Lukomski, Borgna, and Convertino (2013), however, found hearing college students to outperform deaf peers who were not native signers on the same task. Alamargot, Lambert, Thebault, and Dansac (2007) obtained similar findings with middle school children, and Stiles, McGregor, and Bentler (2012) found hearing children to outperform peers with mild to moderate hearing losses who used spoken language. Logan, Maybery, and Fletcher (1996) found no difference on the Corsi Blocks task between deaf and hearing adults, all of whom were fluent signers, whereas hearing participants significantly outscored the deaf participants on memory tasks involving verbal stimuli (signs and written words). Finally, Capirci, Cattani, Rossini, and Volterra (1998) found that hearing elementary school children who received Italian Sign Language instruction for only 1 hr/week over two school years increased their scores on the Corsi Blocks task. Taken together, these results suggest that neither hearing status nor sign skill alone is entirely responsible for the working memory decrements or advantages seen among deaf individuals relative to hearing age-mates with various stimulus materials.
The above conclusion is supported by findings obtained by Marschark, Spencer, et al. (2015) who used the Corsi Blocks task in a study involving deaf students who used cochlear implants, deaf students who did not use implants, and hearing students, all of whom varied in their formally assessed sign language skills. The groups did not differ significantly in their Corsi Blocks performance, nor were there differences between students who reported having deaf parents and hearing parents or those who reported being native signers and those who indicated that they learned to sign later. Importantly, however, the investigators found that Corsi Blocks performance was associated only with spoken language (speech recognition scores) and not sign language among the deaf students who used cochlear implants, whereas performance was associated with sign language (reception) and age of acquisition among the deaf students who did not use cochlear implants. It should not be assumed that the implant users and nonusers necessarily comprised groups of individuals who depended exclusively on spoken language and sign language, respectively. Nevertheless, the Marschark, Spencer, et al. results suggest that visual-spatial working memory performance may be related more to deaf individuals’ levels of language ability, regardless of its modality, rather than to sign language per se (López-Crespo, Daza, & Méndez-López, 2012).
In an effort to clarify the roles of hearing status and sign language abilities in working memory among deaf individuals, the study described below employed three complex memory span tasks. Such tasks measure the ability to maintain and retrieve goal-relevant information in the face of distraction by combining a measure of memory storage and an attention-demanding processing task. In this paradigm, a processing task (e.g., determining the correctness of a math equation) is interleaved with a list of sequentially presented, to-be-remembered items. Because the processing task interrupts rehearsal, a limited-term memory store is required to keep items accessible during and after the processing task.
Complex memory span tasks have been shown to be valid and reliable measures of working memory capacity across a variety of populations (Conway et al., 2005). Performance on such tasks correlates with a broad range of higher- and lower-order cognitive tasks including reading and listening comprehension (Daneman & Carpenter, 1983; Daneman & Merikle, 1996; King & Just, 1991; Turner & Engle, 1989), vocabulary learning (Daneman & Green, 1986), and writing (Benton, Kraft, Glover, & Plake, 1984). The association between reading and complex memory spans, in particular, makes them potentially useful tools in studies involving deaf learners, who frequently lag behind hearing peers in reading comprehension across a broad age range (Marschark, Shaver, Nagle, & Newman, 2015; Qi & Mitchell, 2012). To the extent that deaf students’ reading challenges might be directly attributed to aspects of working memory (e.g., Garrison et al., 1997; see below), complex memory span studies may provide a better understanding of and directions for ameliorating their reading difficulties.
Turner and Engle (1989) demonstrated that complex memory span tasks involving the verification of arithmetic operations as the primary processing task (Operation Span task) were just as effective at predicting reading comprehension as those using judgments of sentence acceptability (Reading Span task). The link between working memory and reading was taken as indicative of common processing and storage requirements, leading Turner and Engle to conclude that some individuals are poor readers precisely because they have relatively less working memory capacity available than good readers. Consistent with that conclusion, Garrison et al. (1997) suggested that better deaf readers likely utilize less of their working memory capacity for comprehension than poorer deaf readers, leaving them with greater residual capacity for memory storage. Using an Operation Span task like that of Turner and Engle (1989), they found deaf college students’ memory performance to be significantly related to their reading comprehension scores. Following Engle and colleagues, Garrison et al. concluded that greater and lesser reading abilities among deaf learners can be directly attributed to relative efficiency in their use of working memory capacity independent of effects due to word and world knowledge (Convertino, Borgna, Marschark, & Durkin, 2014).
Incorporating sign language into a complex memory span task, Wang and Napier (2013) conducted a study involving sign language interpreters and deaf individuals, both groups varying in whether or not they were native signers. They used a Reading Span task in which participants saw series of sentences presented in Auslan (Australian Sign Language) and had to verify whether or not each sentence made sense. They were asked to remember the last sign of each sentence and at the end of each series reproduce all of the sentence-final signs in the correct order. Overall, there was no difference in recall between native and nonnative signers, but the hearing signers/interpreters had significantly higher recall scores than the deaf signers.
Geers, Pisoni, and Brenner (2013) also used a Reading Span task in a study of working memory among teenage cochlear implant users. The deaf teenagers scored significantly below hearing norms on simple digit span tasks and demonstrated slower articulation rates than hearing peers, two findings that are associated because slower articulation limits the potential of verbal rehearsal to support memory (Pisoni et al., 2011; Wilson & Emmorey, 1997). The Reading Span task, in contrast, yielded no difference between deaf and hearing participants. However, the complex memory span results in the Geers et al. study apparently were confounded by differences in task administration: the two experimenters who administered the task obtained different results. In contrast to the automated procedures usually employed in complex memory tasks, the Geers et al. methodology involved the manual presentation of stimuli on index cards, and the investigators acknowledged that Reading Span performance is particularly vulnerable to differences in stimulus presentation.
The present study was designed to extend and clarify the above studies, examining relations among hearing status, sign language, cochlear implant use, and working memory using complex memory span tasks with deaf and hearing college students who varied in their sign language abilities. The study utilized versions of the Reading Span, Operation Span, and Symmetry Span tasks developed by Engle and his colleagues (see Unsworth, Redick, Heitz, Broadway, & Engle, 2009). The Reading and Operation Span tasks involve memory for letters and thus are amenable to verbal coding. The Symmetry Span task involves memory for the positions of colored squares and is not readily amenable to verbal coding. On the basis of the memory research described earlier, it was expected that hearing students would outperform the deaf students on the Reading Span and Operation Span tasks. Results from previous studies could lead to the prediction that the deaf students might outperform hearing students on the visual-spatial Symmetry Span task (Hall & Bavelier, 2010; Van Dijk, Kappers, & Postma, 2013a) or that deaf and hearing signers might surpass their nonsigning deaf and hearing peers on that task (Arnold & Mills, 2001; Van Dijk, Kappers, & Postma, 2013b). No previous studies, however, have included deaf and hearing participant samples that varied in their sign language abilities and a deaf sample that included both cochlear implant users and nonusers.
A total of 152 college students participated, 85 deaf and 67 hearing, all enrolled at Rochester Institute of Technology (RIT). RIT includes the National Technical Institute for the Deaf (NTID), but deaf students were drawn from all of the RIT colleges. Unaided average pure tone hearing thresholds were available from institutional records for 60 of the deaf students (see Table 1), 40 of whom were active cochlear implant users. Being college students at this point in cochlear implant history, most of the latter had received their cochlear implants relatively late by current standards, at ages ranging from 1.5 to 22 years (Table 1). As will be described later, deaf participants also varied in their self-reported sign language skills, their preferred language modalities, and their ages of sign language acquisition, all variables to be considered below. Of the 67 hearing participants, 33 were enrolled in a sign language interpreter training program and were recruited specifically because of their sign language skills.
Participants were recruited through posted advertisements, personal contact (i.e., indications of interest in research participation), and in the case of the interpreting students, announcements in their interpreting classes. All participants were paid for their time. The only restrictions on participation were that individuals had to be native speakers of English or users of American Sign Language (ASL) and have sufficient mobility to perform the tasks on laptop computers. The study, as described, was approved by the RIT Institutional Review Board.
The three complex working memory span tasks were administered using a laptop computer and E-Prime laboratory software. The Reading Span task required participants to remember series of letters while judging the sense of sentences. Participants were given three sets of practice trials before starting the actual task. They first practiced memorizing letter sequences, then practiced making sentence judgments, and, finally, practiced remembering letters and making sentence judgments together. On each test trial, participants first read a presented sentence. A mouse click started the next screen, on which they were asked “Does this sentence make sense?” They responded true or false with a mouse click. They then were shown a letter for 800 ms before the next sentence appeared. After each sequence of sentence/letter combinations, participants were asked to select the letters they had seen, in the correct order, from a 3×4 letter array. The sequences were presented in order of increasing length, starting with three sets of three sentence/letter combinations and increasing to three sets of seven combinations, for 15 total trials.
The Operation Span task required participants to remember series of letters while performing mental arithmetic. Participants received three sets of practice trials. The first set consisted of memorizing and reporting letter sequences; the second set consisted of solving mathematical problems and judging the correctness of presented answers. In the third set, participants judged the correctness of the equations while remembering letters for later recall. On each test trial, participants first saw a math problem. After solving it and clicking the mouse, they saw a number and had to decide whether it was the correct or incorrect answer to the problem, responding accordingly. They then were shown a to-be-remembered letter for 800 ms. After a sequence of problem/letter combinations, the participants selected the letters presented as in the Reading Span task. Sequences of increasing length were presented, starting with three sets of three problem/letter combinations and increasing to three sets of seven combinations.
The Symmetry Span task proceeded similarly to the previous two except that participants had to remember the positions of single squares seen in a 4×4 matrix while making interleaved judgments about the symmetry of other matrices. Participants were given three sets of practice trials that involved first memorizing square positions, then making symmetry judgments, and then doing both together. On test trials, participants first were presented with a figure composed of black squares in an 8×8 matrix, and after a mouse click, they responded to indicate whether or not the figure was symmetrical. They then were shown a single red square in a 4×4 matrix. After a sequence of figure/square combinations, participants selected the locations of presented red squares, in the correct order, from another matrix and submitted their answers. The sequences increased in length, starting with three sets of three figure/square combinations and increasing to three sets of seven figure/square combinations.
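All three tasks thus shared the same sequence structure: set sizes increased from three to seven items, with three sets at each size, for 15 trials per task, and every to-be-remembered item was preceded by a processing judgment. A minimal sketch of that shared structure (hypothetical function names; this is not the E-Prime code used in the study):

```python
def build_set_sizes(smallest=3, largest=7, sets_per_size=3):
    """Set sizes for one complex span task: three sets each of sizes
    3 through 7 (15 trials), presented here in increasing order as
    in the tasks described above."""
    return [n for n in range(smallest, largest + 1)
            for _ in range(sets_per_size)]

def run_trial(set_size, do_processing, get_item):
    """One trial: each to-be-remembered item is preceded by an
    attention-demanding processing judgment (sentence sense, equation
    correctness, or matrix symmetry), which interrupts rehearsal."""
    presented = []
    for _ in range(set_size):
        do_processing()               # interleaved processing task
        presented.append(get_item())  # memory item (800 ms in the letter tasks)
    return presented                  # scored against ordered recall
```

The increasing-length schedule means each task yields 75 possible recall points (3 × 3 + 3 × 4 + … + 3 × 7).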
Following the working memory tasks, all participants completed communication questionnaires. The language and communication skills of deaf students entering RIT are evaluated for the purposes of service provision through the Language and Communication Background Questionnaire (LCBQ). RIT utilizes this self-report measure instead of face-to-face communication interviews because it is faster than interview assessments, can be administered online, and has been shown to be both valid and reliable (McKee, Stinson, & Blake, 1984; Metz, Caccamise, & Gustafson, 1997; Spencer et al., 2015). The version of the LCBQ used here asked all students for their birthdates and the age at which they learned to sign, and had them rate their expressive and receptive skills in ASL on 5-point Likert scales.2 Means and standard deviations for those variables are provided in Table 1. The deaf students were asked to indicate their “overall” language preference using a 5-point scale from “Spoken” to “Signed.” In addition, they were asked whether they used cochlear implants and, if so, the age at which they had received them.
Working memory performance was evaluated using a partial scoring method in which a point was awarded for each item in a trial recalled in the correct position, whether or not other items in that trial were recalled correctly. This method provides the most accurate measure of individual task performance, increases variation in scores, and improves reliability of the complex span tasks (Conway et al., 2005).
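The partial scoring method described above can be sketched as follows (hypothetical helper names, illustrating the method rather than reproducing the study's scoring code):

```python
def partial_score(presented, recalled):
    """Partial scoring: one point for every item recalled in its correct
    serial position, regardless of whether the other items in that trial
    were recalled correctly."""
    return sum(p == r for p, r in zip(presented, recalled))

def task_score(trials):
    """Total task score: the sum of partial scores across all trials,
    e.g., a maximum of 75 for the 15-trial tasks used here."""
    return sum(partial_score(p, r) for p, r in trials)

# Example: presented F K Q, recalled F Q K -> only position 1 is correct,
# so the trial earns 1 point rather than the 0 an all-or-nothing rule gives.
example = partial_score(["F", "K", "Q"], ["F", "Q", "K"])  # 1
```

Relative to all-or-nothing scoring, this rule credits partially correct trials, which is why it increases score variance and reliability (Conway et al., 2005).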
Correlations among scores on the three complex span tasks indicated their strong interrelations in both deaf, .55 ≤ rs(84) ≤ .62, and hearing, .44 ≤ rs(64) ≤ .54, samples (Turner & Engle, 1989). Recall was not significantly related to the deaf students’ pure tone average hearing thresholds in any of the three tasks, −.10 ≤ rs(58) ≤ .10.
Memory span recall scores initially were analyzed using a 2 (group: deaf or hearing) by 3 (task: Reading, Operation, Symmetry) analysis of variance in which task was within-participants. Significant main effects were obtained for both group, F(1, 150) = 19.76, MSE = 321.70, p < .001, and task, F(2, 150) = 443.77, MSE = 84.14, p < .001 (see Table 2). Of particular interest was the significant group by task interaction, F(1, 150) = 24.29, p < .001. A priori t tests indicated that the hearing students scored significantly higher than the deaf students on the Reading Span and Operation Span tasks, t(150) = 4.30, p < .001 and t(150) = 4.75, p < .001, respectively (see Table 2). As in the study by Unsworth et al. (2009), scores on the more difficult Symmetry Span task were about 50% of scores on the other two tasks. The lack of a significant difference between deaf and hearing college students on that task, t(150) = 1.16, is consistent with findings from studies described earlier that failed to find significant differences between deaf and hearing students on the (visual-spatial) Corsi Blocks task.
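As a methodological illustration, the a priori group comparisons reported above are independent-samples t tests of the following form. The scores below are simulated (the group sizes match the sample, but the means and SDs are invented for the sketch), so only the degrees of freedom, not the statistics, correspond to the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated recall scores for illustration only: 85 deaf and 67 hearing
# participants, as in the sample; the distributions are invented.
deaf_scores = rng.normal(40, 10, 85)
hearing_scores = rng.normal(46, 10, 67)

# A priori independent-samples t test comparing the groups on one task
t_stat, p_value = stats.ttest_ind(hearing_scores, deaf_scores)

# Pooled-variance degrees of freedom: n1 + n2 - 2
df = len(hearing_scores) + len(deaf_scores) - 2  # 150, matching t(150)
```

The same form applies to the within-deaf and within-hearing subgroup comparisons reported below, with df determined by the subgroup sizes.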
Several analyses were conducted in order to examine possible relations among hearing status, sign language, and complex memory span. First, Pearson correlations examined possible associations between self-rated expressive and receptive sign language abilities, age of sign language acquisition, age of implantation and length of implant use, and scores on the three working memory tasks. Among the deaf students, the only significant coefficient indicated that participants who had learned to sign later scored higher on the Operation Span task, r(73) = .30, p < .01 (all other coefficients, −.17 ≤ r(41–84) ≤ .15). There were no significant coefficients among the hearing students, −.21 ≤ r(39) ≤ .22.
Among the deaf participants, 18 (including four cochlear implant users) reported being native signers. Comparisons between them and the other 56 deaf participants using independent sample t tests revealed no significant differences on any of the three tasks, Reading: t(72) = 1.28, Operation: t(72) = −0.59, Symmetry: t(72) = −0.40 (see Table 2). Using the deaf students’ ratings of their preferred communication modality, t tests also were used to compare scores on the three memory tasks for the 20 students who indicated a preference for spoken language with a rating of 1 or 2 on the 5-point Likert scale and the 37 who indicated a preference for sign language with a rating of 4 or 5 on the scale. There were no differences between those groups on any of the three tasks, Reading: t(55) = −0.74, Operation: t(55) = 0.60, Symmetry: t(55) = −0.25 (see Table 2). Similar comparisons between the 40 deaf participants who used cochlear implants and the other 45 deaf participants also yielded nonsignificant results, Reading: t(83) = 0.59, Operation: t(83) = 1.03, Symmetry: t(83) = 0.59 (see Table 2).
Considering the hearing participants, t tests comparing the scores of the 33 interpreting students and the other 34 hearing students failed to indicate any significant differences between the groups, Reading: t(65) = 0.39, Operation: t(65) = 1.50, Symmetry: t(65) = 0.24 (see Table 2). Consistent with the overall differences between the deaf and hearing groups, however, the hearing signers (interpreting students) scored significantly higher than the deaf signers on the Reading Span, t(68) = 2.78, p < .01, and Operation Span, t(68) = 4.24, p < .001, tasks, whereas there was no significant difference between them on the Symmetry Span task, t(68) = 0.58 (see Table 2).
Consistent with previous findings involving a variety of stimulus materials, the present study found that hearing participants significantly outscored deaf participants on two memory span tasks that were amenable to verbal coding (e.g., Lichtenstein, 1998; MacSweeney et al., 1996; Pintner & Patterson, 1917; Pisoni & Cleary, 2003). In contrast, there was no difference between the groups in their performance on a visual-spatial memory task that was not amenable to verbal coding. Those findings were independent of whether deaf participants reported primary reliance on spoken language or sign language, whether or not they used cochlear implants, and the levels of their self-reported expressive and receptive sign language skills. Hearing sign language interpreting students outperformed deaf participants who signed, but they did not differ from the other hearing participants on any of the three memory tasks. The present findings thus all are consistent with those of Marschark et al. (2013), Marschark, Spencer, et al. (2015), and Wang and Napier (2013) in appearing to rule out any simple relationship between sign language skill or hearing status and visual-spatial memory (López-Crespo et al., 2012).
Correlational results reinforced the findings from comparisons among the groups in indicating that neither hearing thresholds nor sign language, spoken language, or cochlear implant use conferred special advantages in any of the memory tasks used here. Previous studies have found sign language to be associated with performance in mental rotation, complex mental image generation, face discrimination, and some visual-spatial memory tasks (see Hall & Bavelier, 2010; Marschark & Wauters, 2011; Mayberry, 2002; Van Dijk et al., 2013a). Most of the relevant studies, however, have been limited to native-signing deaf individuals who grew up in deaf families and thus may have brains and behavior that are far more visually oriented than other deaf individuals (Corina et al., 2013; Emmorey, 2002; see Pisoni et al., 2010). The extent to which sign language and a greater dependence on vision might contribute to real-world (e.g., academically relevant; Marschark & Hauser, 2012) visual-spatial skills among native-signing and other deaf individuals still is in need of further investigation. Evidence from studies by Alamargot et al. (2007), López-Crespo et al. (2012), Marschark, Spencer, et al. (2015), and others, however, all suggest that there is no generalized advantage for deaf individuals or even deaf signers with regard to visual memory.
Working memory performance among deaf individuals has been shown to be positively related to spoken language, at least when materials are amenable to verbal coding and particularly when tasks require retention of sequential or temporal information. To the extent that a cochlear implant enhances an individual’s access to spoken language, it might be expected that working memory for sequential, verbally codable stimuli would be greater among cochlear implant users. Pisoni and his colleagues (Burkholder & Pisoni, 2006; Fagan et al., 2007; Pisoni et al., 2010), however, consistently have found that deaf children with cochlear implants demonstrate significantly poorer working memory spans than hearing age-mates with both verbal and nonverbal materials, a result they ascribe to broader neuropsychological and cognitive delays related to hearing loss (cf. Geers et al., 2013).
The foregoing all suggest that the differences between deaf and hearing individuals frequently observed in simple and complex working memory tasks are not an issue simply of hearing status, but one of the impact of experience (e.g., memory strategies; Nunes et al., 2014) and cognitive abilities (e.g., executive functioning; Kronenberger et al., 2013; Pisoni et al., 2011) more generally. Still unclear is the precise nature of interactions among language fluency and modality, cognitive control of memory coding, and the differing formal and informal learning experiences of deaf and hearing individuals. In that regard, the present study was limited by the need to rely on self-reports of sign language ability among hearing as well as deaf participants. Formal assessments of expressive and receptive sign language skills are time-consuming, however, and self-reports of sign language skill generally have been found to be reliable, at least for those who rate themselves as having either greater or lesser abilities (McKee et al., 1984; Metz et al., 1997; Spencer et al., 2015). This study also may be limited by the lack of demographic data pertaining to participants’ intelligence and academic achievement as well as their parents’ education and family socioeconomic status.
The present findings contribute to theoretical discussions concerning the roles of hearing status and sign language on cognitive abilities, but they also speak to practical issues associated with educating deaf learners. In the educational and cognitive literatures, it is not uncommon to find the claim that deaf students, and particularly those who use sign language rather than spoken language, are visual learners (e.g., Dowaliby & Lang, 1999; Hauser et al., 2008; Marschark & Hauser, 2012). This assumption leads teachers and perhaps parents of deaf learners to emphasize visually oriented methods and materials in formal and informal educational settings (e.g., Hauser et al., 2008, p. 299). Results of the sort obtained here and in the related studies described earlier, in contrast, call such generalizations into question. In fact, there does not appear to be any empirical evidence to indicate that deaf students have better visual memory skills than hearing students, regardless of whether they rely primarily on sign language or spoken language (López-Crespo et al., 2012). The heavy reliance on visual materials and the orienting of deaf learners toward visual-spatial rather than verbal-sequential memory strategies therefore may be doing more harm than good.
As alluded to earlier, the extent to which visual-spatial or verbal-sequential memory strategies might affect academic outcomes for some deaf learners, with some kinds of material, in some contexts, and perhaps at different ages, remains to be determined. Meanwhile, claims that deaf individuals at large are visual learners or should focus on acquiring visual rather than verbal (not necessarily vocal) processing skills are not consistent with the evidence base and follow only from the belief that deaf individuals rely more on vision than audition. Given the heterogeneity of the deaf population with regard to various cognitive variables as well as hearing thresholds and use of assistive listening devices, even that assumption has to be viewed as questionable and addressed by future empirical studies.
No conflicts of interest were reported.
Preparation of this article was supported in part by Grant 1R01DC012317 from the National Institute on Deafness and Other Communication Disorders. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIDCD or NTID.
The authors thank Randy Engle for his assistance in several phases of this project and Georgianna Borgna, Carol Convertino, Richard Dirmyer, and Denise Wellin for assistance in obtaining some of the data.
1. Deaf individuals also may adopt different and more variable memory coding strategies at different ages relative to hearing peers, preferences that may interact with their preferred language modality.
2. Although self-reports provide only rough estimates of language abilities, Spencer et al. (2015) obtained high correlations for both deaf and hearing students in assessments of ASL skill using the Sign Language Proficiency Interview (SLPI) and self-ratings using the SLPI rubric, r(105) = .84 and r(68) = .88, respectively.