Perception of American Sign Language (ASL) handshape and place of articulation parameters was investigated in three groups of signers: deaf native signers, deaf non-native signers who acquired ASL between the ages of 10 and 18, and hearing non-native signers who acquired ASL as a second language between the ages of 10 and 26. Participants were asked to identify and discriminate dynamic synthetic signs on forced-choice identification and similarity judgement tasks. No differences were found in identification performance, but there were effects of language experience on discrimination of the handshape stimuli. Participants were significantly less likely to discriminate handshape stimuli drawn from the region of the category prototype than stimuli that were peripheral to the category or that straddled a category boundary. This pattern was significant for both groups of deaf signers, but was more pronounced for the native signers. The hearing L2 signers exhibited a similar pattern of discrimination, but results did not reach significance. An effect of category structure on the discrimination of place of articulation stimuli was also found, but it did not interact with language background. We conclude that early experience with a signed language magnifies the influence of category prototypes on the perceptual processing of handshape, leading to differences in the distribution of attentional resources between native and non-native signers during language comprehension.
Sign language; Sign perception; Non-native first language acquisition; Second language acquisition
We tested hearing six- and ten-month-olds’ ability to discriminate among three American Sign Language (ASL) parameters (location, handshape, and movement) as well as a grammatical marker (facial expression). ASL-naïve infants were habituated to a signer articulating a two-handed symmetrical sign in neutral space. During test, infants viewed novel two-handed signs that varied in only one parameter or in facial expression. Infants detected changes in the signer’s facial expression and in the location of the sign but provided no evidence of detecting the changes in handshape or movement. These findings are consistent with children’s production errors in ASL and reveal that infants can distinguish among some parameters of ASL more easily than others.
Perception; Discrimination; American Sign Language; Parameters; Infants
Given that the linguistic articulators for sign language are also used to produce co-speech gesture, we examined whether one year of academic instruction in American Sign Language (ASL) impacts the rate and nature of gestures produced when speaking English. A survey study revealed that 75% of ASL learners (N = 95), but only 14% of Romance language learners (N = 203), felt that they gestured more after one year of language instruction. A longitudinal study confirmed this perception. Twenty-one ASL learners and 20 Romance language learners (French, Italian, Spanish) were filmed re-telling a cartoon story before and after one academic year of language instruction. Only the ASL learners exhibited an increase in gesture rate, an increase in the production of iconic gestures, and an increase in the number of handshape types exploited in co-speech gesture. Five ASL students also produced at least one ASL sign when re-telling the cartoon. We suggest that learning ASL may (i) lower the neural threshold for co-speech gesture production, (ii) pose a unique challenge for language control, and (iii) have the potential to improve cognitive processes that are linked to gesture.
gesture; American Sign Language; bilingualism
We investigated the relevance of linguistic and perceptual factors to sign processing by comparing hearing individuals and deaf signers as they performed a handshape monitoring task, a sign-language analogue to the phoneme-monitoring paradigms used in many spoken-language studies. Each subject saw a series of brief video clips, each of which showed either an ASL sign or a phonologically possible but non-lexical “non-sign,” and responded when the viewed action was formed with a particular handshape. Stimuli varied with respect to the factors of Lexicality, handshape Markedness (Battison, 1978), and Type, defined according to whether the action is performed with one or two hands and, for two-handed stimuli, whether or not the action is symmetrical.
Deaf signers performed faster and more accurately than hearing non-signers, and effects related to handshape Markedness and stimulus Type were observed in both groups. However, no effects or interactions related to Lexicality were seen. A further analysis restricted to the deaf group indicated that these results were not dependent upon subjects' age of acquisition of ASL. This work provides new insights into the processes by which the handshape component of sign forms is recognized in a sign language, the role of language experience, and the extent to which these processes may or may not be considered specifically linguistic.
ASL; psycholinguistics; markedness; phoneme monitoring
Motion capture studies show that American Sign Language (ASL) signers distinguish end-points in telic verb signs by means of marked hand articulator motion, which rapidly decelerates to a stop at the end of these signs, as compared to atelic signs (Malaia & Wilbur, in press). Non-signers also show sensitivity to velocity and deceleration cues for event segmentation in visual scenes (Zacks et al., 2010; Zacks et al., 2006), introducing the question of whether the neural regions used by ASL signers for sign language verb processing might be similar to those used by non-signers for event segmentation.
The present study investigated the neural substrate of predicate perception and linguistic processing in ASL. Observed patterns of activation demonstrate that Deaf signers process telic verb signs as having higher phonological complexity as compared to atelic verb signs. These results, together with previous neuroimaging data on spoken and sign languages (Shetreet, Friedmann, & Hadar, 2010; Emmorey et al., 2009), illustrate a route for how a prominent perceptual-kinematic feature used for non-linguistic event segmentation might come to be processed as an abstract linguistic feature due to sign language exposure.
sign language; ASL; fMRI; event structure; verb; neuroplasticity; motion
To identify neural regions that automatically respond to linguistically structured, but meaningless manual gestures, 14 deaf native users of American Sign Language (ASL) and 14 hearing non-signers passively viewed pseudosigns (possible but non-existent ASL signs) and non-iconic ASL signs, in addition to a fixation baseline. For the contrast between pseudosigns and baseline, greater activation was observed in left posterior superior temporal sulcus (STS), but not in left inferior frontal gyrus (BA 44/45), for deaf signers compared to hearing non-signers, based on VOI analyses. We hypothesize that left STS is more engaged for signers because this region becomes tuned to human body movements that conform to the phonological constraints of sign language. For deaf signers, the contrast between pseudosigns and known ASL signs revealed increased activation for pseudosigns in left posterior superior temporal gyrus (STG) and in left inferior frontal cortex, but no regions were found to be more engaged for known signs than for pseudosigns. This contrast revealed no significant differences in activation for hearing non-signers. We hypothesize that left STG is involved in recognizing linguistic phonetic units within a dynamic visual or auditory signal, such that less familiar structural combinations produce increased neural activation in this region for both pseudosigns and pseudowords.
sign language; deaf signers; fMRI; pseudowords
Despite the constantly varying stream of sensory information that surrounds us, we humans can discern the small building blocks of words that constitute language (phonetic forms) and perceive them categorically (categorical perception, CP). Decades of controversy have prevailed regarding what is at the heart of CP, with many arguing that it is due to domain-general perceptual processing and others that it is determined by the existence of domain-specific linguistic processing. What is most key: perceptual or linguistic patterns? Here, we study whether CP occurs with soundless handshapes that are nonetheless phonetic in American Sign Language (ASL), in signers and nonsigners. Using innovative methods and analyses of identification and, crucially, discrimination tasks, we found that both groups separated the soundless handshapes into two classes perceptually but that only the ASL signers exhibited linguistic CP. These findings suggest that CP of linguistic stimuli is based on linguistic categorization, rather than on purely perceptual categorization.
Past research has established that delayed first language exposure is associated with comprehension difficulties in non-native signers of American Sign Language (ASL) relative to native signers. The goal of the current study was to investigate potential explanations of this disparity: do non-native signers have difficulty with all aspects of comprehension, or are their comprehension difficulties restricted to some aspects of processing? We compared the performance of deaf non-native, hearing L2, and deaf native signers on a handshape and location monitoring task and a sign recognition task. The results indicate that deaf non-native signers are as rapid and accurate on the monitoring task as native signers, with differences in the pattern of relative performance across handshape and location parameters. By contrast, non-native signers differ significantly from native signers during sign recognition. Hearing L2 signers, who performed almost as well as the two groups of deaf signers on the monitoring task, resembled the deaf native signers more than the deaf non-native signers on the sign recognition task. The combined results indicate that delayed exposure to a signed language leads to an overreliance on handshape during sign recognition.
ASL; Sign language; Perception; Sign recognition; Lexical access; Language experience; Delayed first language acquisition; Second language acquisition
Manual gestures occur on a continuum from co-speech gesticulations to conventionalized emblems to language signs. Our goal in the present study was to understand the neural bases of the processing of gestures along such a continuum. We studied four types of gestures, varying along linguistic and semantic dimensions: linguistic and meaningful American Sign Language (ASL), non-meaningful pseudo-ASL, meaningful emblematic, and nonlinguistic, non-meaningful made-up gestures. Pre-lingually deaf, native signers of ASL participated in the fMRI study and performed two tasks while viewing videos of the gestures: a visuo-spatial (identity) discrimination task and a category discrimination task. We found that the categorization task activated left ventral middle and inferior frontal gyrus, among other regions, to a greater extent compared to the visual discrimination task, supporting the idea of semantic-level processing of the gestures. The reverse contrast resulted in enhanced activity of bilateral intraparietal sulcus, supporting the idea of featural-level processing (analogous to phonological-level processing of speech sounds) of the gestures. Regardless of the task, we found that brain activation patterns for the nonlinguistic, non-meaningful gestures were the most different compared to the ASL gestures. The activation patterns for the emblems were most similar to those of the ASL gestures and those of the pseudo-ASL were most similar to the nonlinguistic, non-meaningful gestures. The fMRI results provide partial support for the conceptualization of different gestures as belonging to a continuum and the variance in the fMRI results was best explained by differences in the processing of gestures along the semantic dimension.
American Sign Language; gestures; Deaf; visual processing; categorization; linguistic; brain; fMRI
Speakers monitor their speech output by listening to their own voice. However, signers do not look directly at their hands and cannot see their own face. We investigated the importance of a visual perceptual loop for sign language monitoring by examining whether changes in visual input alter sign production. Deaf signers produced American Sign Language (ASL) signs within a carrier phrase under five conditions: blindfolded, wearing tunnel-vision goggles, normal (citation) signing, shouting, and informal signing. Three-dimensional movement trajectories were obtained using an Optotrak Certus system. Informally produced signs were shorter with less vertical movement. Shouted signs were displaced forward and to the right and were produced within a larger volume of signing space, with greater velocity, greater distance traveled, and a longer duration. Tunnel vision caused signers to produce less movement within the vertical dimension of signing space, but blindfolded and citation signing did not differ significantly on any measure, except duration. Thus, signers do not “sign louder” when they cannot see themselves, but they do alter their sign production when vision is restricted. We hypothesize that visual feedback serves primarily to fine-tune the size of signing space rather than as input to a comprehension-based monitor.
The effects of knowledge of sign language on co-speech gesture were investigated by comparing the spontaneous gestures of bimodal bilinguals (native users of American Sign Language and English; n = 13) and non-signing native English speakers (n = 12). Each participant viewed and re-told the Canary Row cartoon to a non-signer whom they did not know. Nine of the thirteen bimodal bilinguals produced at least one ASL sign, which we hypothesise resulted from a failure to inhibit ASL. Compared with non-signers, bimodal bilinguals produced more iconic gestures, fewer beat gestures, and more gestures from a character viewpoint. The gestures of bimodal bilinguals also exhibited a greater variety of handshape types and more frequent use of unmarked handshapes. We hypothesise that these semantic and form differences arise from an interaction between the ASL language production system and the co-speech gesture system.
A long-standing debate in cognitive neuroscience pertains to the innate nature of language development and the underlying factors that determine this faculty. We explored the neural correlates associated with language processing in a unique individual who is early blind, congenitally deaf, and possesses a high level of language function. Using functional magnetic resonance imaging (fMRI), we compared the neural networks associated with the tactile reading of words presented in Braille, Print on Palm (POP), and a haptic form of American Sign Language (haptic ASL or hASL). With all three modes of tactile communication, identifying words was associated with robust activation within occipital cortical regions as well as posterior superior temporal and inferior frontal language areas (lateralized within the left hemisphere). In a normally sighted and hearing interpreter, identifying words through hASL was associated with left-lateralized activation of inferior frontal language areas; however, robust occipital cortex activation was not observed. Diffusion tensor imaging-based tractography revealed differences consistent with enhanced occipital-temporal connectivity in the deaf-blind subject. Our results demonstrate that in the case of early onset of both visual and auditory deprivation, tactile-based communication is associated with an extensive cortical network implicating occipital as well as posterior superior temporal and frontal language areas. The cortical areas activated in this deaf-blind subject are consistent with characteristic cortical regions previously implicated with language. Finally, the resilience of language function within the context of early and combined visual and auditory deprivation may be related to enhanced connectivity between relevant cortical areas.
deafness; blindness; tactile language; neuroplasticity; fMRI; diffusion tensor imaging
We investigated the functional organization of neural systems supporting language production when the primary language articulators are also used for meaningful, but non-linguistic, expression such as pantomime. Fourteen hearing non-signers and 10 deaf native users of American Sign Language (ASL) participated in an H2 15O-PET study in which they generated action pantomimes or ASL verbs in response to pictures of tools and manipulable objects. For pantomime generation, participants were instructed to “show how you would use the object.” For verb generation, signers were asked to “generate a verb related to the object.” The objects for this condition were selected to elicit handling verbs that resemble pantomime (e.g., TO-HAMMER, in which hand configuration and movement mimic the act of hammering) and non-handling verbs that do not (e.g., POUR-SYRUP, produced with a “Y” handshape). For the baseline task, participants viewed pictures of manipulable objects and an occasional non-manipulable object and decided whether the objects could be handled, gesturing “yes” (thumbs up) or “no” (hand wave). Relative to baseline, generation of ASL verbs engaged left inferior frontal cortex, but when non-signers produced pantomimes for the same objects, no frontal activation was observed. Both groups recruited left parietal cortex during pantomime production. However, for deaf signers the activation was more extensive and bilateral, which may reflect a more complex and integrated neural representation of hand actions. We conclude that the production of pantomime versus ASL verbs (even those that resemble pantomime) engages partially segregated neural systems that support praxic versus linguistic functions.
Iconicity is a property that pervades the lexicon of many sign languages, including American Sign Language (ASL). Iconic signs exhibit a motivated, non-arbitrary mapping between the form of the sign and its meaning. We investigated whether iconicity enhances semantic priming effects for ASL and whether iconic signs are recognized more quickly than non-iconic signs (controlling for strength of iconicity, semantic relatedness, familiarity, and imageability). Twenty deaf signers made lexical decisions to the second item of a prime-target pair. Iconic target signs were preceded by prime signs that were a) iconic and semantically related, b) non-iconic and semantically related, or c) semantically unrelated. In addition, a set of non-iconic target signs was preceded by semantically unrelated primes. Significant facilitation was observed for target signs when preceded by semantically related primes. However, iconicity did not increase the priming effect (e.g., the target sign PIANO was primed equally by the iconic sign GUITAR and the non-iconic sign MUSIC). In addition, iconic signs were not recognized faster or more accurately than non-iconic signs. These results confirm the existence of semantic priming for sign language and suggest that iconicity does not play a robust role in on-line lexical processing.
semantic priming; iconicity; American Sign Language; lexical recognition
Bilinguals report more tip-of-the-tongue (TOT) failures than monolinguals. Three accounts of this disadvantage are that bilinguals experience between-language interference at (a) semantic and/or (b) phonological levels, or (c) that bilinguals use each language less frequently than monolinguals. Bilinguals who speak one language and sign another help decide between these alternatives because their languages lack phonological overlap. Twenty-two American Sign Language (ASL)-English bilinguals, 22 English monolinguals, and 11 Spanish-English bilinguals named 52 pictures in English. Despite no phonological overlap between languages, ASL-English bilinguals had more TOTs than monolinguals, and as many TOTs as Spanish-English bilinguals. These data eliminate phonological blocking as the exclusive source of bilingual disadvantages. A small advantage of ASL-English over Spanish-English bilinguals in correct retrievals is consistent with semantic interference and a minor role for phonological blocking. However, this account faces substantial challenges. We argue that reduced frequency of use is the more comprehensive explanation of TOT rates in all bilinguals.
This study examines sign lowering as a form of phonetic reduction in American Sign Language. Phonetic reduction occurs in the course of normal language production, when instead of producing a carefully articulated form of a word, the language user produces a less clearly articulated form. When signs are produced in context by native signers, they often differ from the citation forms of signs. In some cases, phonetic reduction is manifested as a sign being produced at a lower location than in the citation form. Sign lowering has been documented previously, but this is the first study to examine it in phonetic detail. The data presented here are tokens of the sign WONDER, as produced by six native signers, in two phonetic contexts and at three signing rates, which were captured by optoelectronic motion capture. The results indicate that sign lowering occurred for all signers, according to the factors we manipulated. Sign production was affected by several phonetic factors that also influence speech production, namely, production rate, phonetic context, and position within an utterance. In addition, we have discovered interesting variations in sign production, which could underlie distinctions in signing style, analogous to accent or voice quality in speech.
American Sign Language; lowering; phonetic reduction; motion capture; sign production
Signed languages such as American Sign Language (ASL) are natural human languages that share all of the core properties of spoken human languages, but differ in the modality through which they are communicated. Neuroimaging and patient studies have suggested similar left hemisphere (LH)-dominant patterns of brain organization for signed and spoken languages, suggesting that the linguistic nature of the information, rather than modality, drives brain organization for language. However, the role of the right hemisphere (RH) in sign language has been less explored. In spoken languages, the RH supports the processing of numerous types of narrative-level information, including prosody, affect, facial expression, and discourse structure. In the present fMRI study, we contrasted the processing of ASL sentences that contained these types of narrative information with similar sentences without marked narrative cues. For all sentences, Deaf native signers showed robust bilateral activation of perisylvian language cortices, as well as the basal ganglia, medial frontal and medial temporal regions. However, RH activation in the inferior frontal gyrus and superior temporal sulcus was greater for sentences containing narrative devices, including areas involved in processing narrative content in spoken languages. These results provide additional support for the claim that all natural human languages rely on a core set of LH brain regions, and extend our knowledge to show that narrative linguistic functions typically associated with the RH in spoken languages are similarly organized in signed languages.
Speech–sign or “bimodal” bilingualism is exceptional because distinct modalities allow for simultaneous production of two languages. We investigated the ramifications of this phenomenon for models of language production by eliciting language mixing from eleven hearing native users of American Sign Language (ASL) and English. Instead of switching between languages, bilinguals frequently produced code-blends (simultaneously produced English words and ASL signs). Code-blends resembled co-speech gesture with respect to synchronous vocal–manual timing and semantic equivalence. When ASL was the Matrix Language, no single-word code-blends were observed, suggesting stronger inhibition of English than ASL for these proficient bilinguals. We propose a model that accounts for similarities between co-speech gesture and code-blending and assumes interactions between ASL and English Formulators. The findings constrain language production models by demonstrating the possibility of simultaneously selecting two lexical representations (but not two propositions) for linguistic expression and by suggesting that lexical suppression is computationally more costly than lexical selection.
Speakers generally outperform signers when asked to recall a list of unrelated verbal items. This phenomenon is well established, but its source has remained unclear. In this study, we evaluate the relative contribution of the three main processing stages of short-term memory – perception, encoding, and recall – to this effect. The present study factorially manipulates whether American Sign Language (ASL) or English was used for perception, memory encoding, and recall in hearing ASL-English bilinguals. Results indicate that using ASL during both perception and encoding contributes to the serial span discrepancy. Interestingly, performing recall in ASL slightly increased span, ruling out the view that signing is in general a poor choice for short-term memory. These results suggest that despite the general equivalence of sign and speech in other memory domains, speech-based representations are better suited for the specific task of perception and memory encoding of a series of unrelated verbal items in serial order through the phonological loop. This work suggests that interpretation of performance on serial recall tasks in English may not translate straightforwardly to serial tasks in sign language.
short-term memory; digit span; sign language; working memory; phonological loop
Spoken languages are characterized by flexible, multivariate prosodic systems. As natural languages, American Sign Language (ASL) and other sign languages (SLs) are also expected to be characterized in the same way. Artificially created signing systems for classroom use, such as signed English, serve as a contrast to natural sign languages. The present article explores the effects of changes in signing rate on signs, pauses, and, unlike previous studies, a variety of nonmanual markers.
Rate had a main effect on the duration of signs, the number and duration of pauses, the duration of brow raises, the duration of licensed lowered brows, and the number and duration of blinks, all of which decreased with increased signing rate. This indicates that signers produced their different signing rates without making dramatic changes in the number of signs, but instead by varying the sign duration, in accordance with previous observations (Grosjean, 1978, 1979). These results can be brought to bear on three different issues: (1) the difference between grammatical nonmanuals and non-grammatical nonmanuals; (2) the fact that nonmanuals in general are not just a modality effect; and (3) the use of some nonmanuals as pragmatically determined as opposed to overt morphophonological markers reflecting the semantics–syntax–pragmatics interfaces.
American Sign Language; facial expressions; pragmatics; prosody; semantics
This article extends current methodologies for the linguistic analysis of sign language acquisition to cases of bimodal bilingual acquisition. Using ELAN, we are transcribing longitudinal spontaneous production data from hearing children of Deaf parents who are learning either American Sign Language (ASL) and American English (AE), or Brazilian Sign Language (Libras, also referred to as Língua de Sinais Brasileira/LSB in some texts) and Brazilian Portuguese (BP). Our goal is to construct corpora that can be mined for a wide range of investigations on various topics in acquisition. Thus, it is important that we maintain consistency in transcription for both signed and spoken languages. This article documents our transcription conventions, including the principles behind our approach. Using this document, other researchers can choose to follow similar conventions or develop new ones using our suggestions as a starting point.
bimodal bilingualism; sign language acquisition; child language acquisition; corpus methodology; sign notational conventions; ELAN; transcription
Studies of deaf individuals who are users of signed languages have provided profound insight into the neural representation of human language. Case studies of deaf signers who have incurred left- and right-hemisphere damage have shown that left-hemisphere resources are a necessary component of sign language processing. These data suggest that, despite frank differences in the input and output modality of language, core left perisylvian regions universally serve linguistic function. Neuroimaging studies of deaf signers have generally provided support for this claim. However, more fine-tuned studies of linguistic processing in deaf signers are beginning to show evidence of important differences in the representation of signed and spoken languages. In this paper, we provide a critical review of this literature and present compelling evidence for language-specific cortical representations in deaf signers. These data lend support to the claim that the neural representation of language may show substantive cross-linguistic differences. We discuss the theoretical implications of these findings with respect to an emerging understanding of the neurobiology of language.
aphasia; American Sign Language; deaf; neurolinguistics
The production of sign language involves two large articulators (the hands) moving through space and contacting the body. In contrast, speech production requires small movements of the tongue and vocal tract with no observable spatial contrasts. Nonetheless, both language types exhibit a sublexical layer of structure with similar properties (e.g., segments, syllables, feature hierarchies). To investigate which neural areas are involved in modality-independent language production and which are tied specifically to the input-output mechanisms of signed and spoken language, we reanalyzed PET data collected from 29 deaf signers and 64 hearing speakers who participated in a series of separate studies. Participants were asked to overtly name concrete objects from distinct semantic categories in either American Sign Language (ASL) or in English. The baseline task required participants to judge the orientation of unknown faces (overtly responding ‘yes’/‘no’ for upright/inverted). A random effects analysis revealed that left mesial temporal cortex and the left inferior frontal gyrus were equally involved in both speech and sign production, suggesting a modality-independent role for these regions in lexical access. Within the left parietal lobe, two regions were more active for sign than for speech: the supramarginal gyrus (peak coordinates: −60, −35, +27) and the superior parietal lobule (peak coordinates: −26, −51, +54). Activation in these regions may be linked to modality-specific output parameters of sign language. Specifically, activation within left SMG may reflect aspects of phonological processing in ASL (e.g., selection of hand configuration and place of articulation features), whereas activation within SPL may reflect proprioceptive monitoring of motoric output.
Deaf signers have extensive experience using their hands to communicate. Using fMRI, we examined the neural systems engaged during the perception of manual communication in 14 deaf signers and 14 hearing non-signers. Participants passively viewed blocked video clips of pantomimes (e.g., peeling an imaginary banana) and action verbs in American Sign Language (ASL) that were rated as meaningless by non-signers (e.g., TO-DANCE). In contrast to visual fixation, pantomimes strongly activated fronto-parietal regions (the mirror neuron system, MNS) in hearing non-signers, but only bilateral middle temporal regions in deaf signers. When contrasted with ASL verbs, pantomimes selectively engaged inferior and superior parietal regions in hearing non-signers, but right superior temporal cortex in deaf signers. The perception of ASL verbs recruited similar regions as pantomimes for deaf signers, with some evidence of greater involvement of left inferior frontal gyrus for ASL verbs. Functional connectivity analyses with left hemisphere seed voxels (ventral premotor, inferior parietal lobule, fusiform gyrus) revealed robust connectivity with the MNS for the hearing non-signers. Deaf signers exhibited functional connectivity with the right hemisphere that was not observed for the hearing group for the fusiform gyrus seed voxel. We suggest that life-long experience with manual communication, and/or auditory deprivation, may alter regional connectivity and brain activation when viewing pantomimes. We conclude that the lack of activation within the MNS for deaf signers does not support an account of human communication that depends upon automatic sensorimotor resonance between perception and action.
Short-term memory (STM), or the ability to hold information in mind for a few seconds, is thought to be limited in its capacity to about 7 ± 2 items. Notably, the average STM capacity when using American Sign Language (ASL) rather than English is only 5 ± 1 items. Here we show that, contrary to previous interpretations, this difference cannot be attributed to phonological factors, item duration or reduced memory abilities in deaf people. We also show that, despite this difference in STM span, hearing speakers and deaf ASL users have comparable working memory resources during language use, indicating similar abilities to maintain and manipulate linguistic information. The shorter STM span in ASL users therefore confirms the view that the spoken span of 7 ± 2 is an exception, probably owing to the reliance of speakers on auditory-based rather than visually based representations in linguistic STM, and calls for adjustments in the norms used with deaf individuals.