
1.  The role of syllables in sign language production 
Frontiers in Psychology  2014;5:1254.
The aim of the present study was to investigate the functional role of syllables in sign language and how different phonological combinations influence sign production. The influence of age of acquisition was also evaluated. Deaf signers (native and non-native) of Catalan Sign Language (LSC) performed a picture-sign interference task, signing picture names while ignoring distractor signs that shared two of the three main sign parameters (Location, Movement, and Handshape) with the target. The results revealed that the three phonological combinations had different effects. No effect was observed for the combination Handshape-Location, whereas the combination Handshape-Movement slowed signing latencies, but only in the non-native group. A facilitatory effect was observed for both groups when pictures and distractors shared Location-Movement. Importantly, linguistic models have considered this phonological combination to be a privileged unit in the composition of signs, as syllables are in spoken languages. Thus, our results support the functional role of syllable units during phonological articulation in sign language production.
PMCID: PMC4230165  PMID: 25431562
sign language; speech production; syllables; sign parameters; picture naming
2.  When does a system become phonological? Handshape production in gesturers, signers, and homesigners 
Sign languages display remarkable crosslinguistic consistencies in the use of handshapes. In particular, handshapes used in classifier predicates display a consistent pattern in finger complexity: classifier handshapes representing objects display more finger complexity than those representing how objects are handled. Here we explore the conditions under which this morphophonological phenomenon arises. In Study 1, we ask whether hearing individuals in Italy and the United States, asked to communicate using only their hands, show the same pattern of finger complexity found in the classifier handshapes of two sign languages: Italian Sign Language (LIS) and American Sign Language (ASL). We find that they do not: gesturers display more finger complexity in handling handshapes than in object handshapes. The morphophonological pattern found in conventional sign languages is therefore not a codified version of the pattern invented by hearing individuals on the spot. In Study 2, we ask whether continued use of gesture as a primary communication system results in a pattern that is more similar to the morphophonological pattern found in conventional sign languages or to the pattern found in gesturers. Homesigners have not acquired a signed or spoken language and instead use a self-generated gesture system to communicate with their hearing family members and friends. We find that homesigners pattern more like signers than like gesturers: their finger complexity in object handshapes is higher than that of gesturers (indeed as high as signers); and their finger complexity in handling handshapes is lower than that of gesturers (but not quite as low as signers). Generally, our findings indicate two markers of the phonologization of handshape in sign languages: increasing finger complexity in object handshapes, and decreasing finger complexity in handling handshapes. These first indicators of phonology appear to be present in individuals developing a gesture system without benefit of a linguistic community. Finally, we propose that iconicity, morphology and phonology each play an important role in the system of sign language classifiers to create the earliest markers of phonology at the morphophonological interface.
PMCID: PMC3665423  PMID: 23723534
Sign language; phonology; morphology; homesign; gesture; language evolution; historical change; handshape; classifier predicates
3.  Handshape monitoring: Evaluation of linguistic and perceptual factors in the processing of American Sign Language 
Language and Cognitive Processes  2011;27(1):117-141.
We investigated the relevance of linguistic and perceptual factors to sign processing by comparing hearing individuals and deaf signers as they performed a handshape monitoring task, a sign-language analogue to the phoneme-monitoring paradigms used in many spoken-language studies. Each subject saw a series of brief video clips, each of which showed either an ASL sign or a phonologically possible but non-lexical “non-sign,” and responded when the viewed action was formed with a particular handshape. Stimuli varied with respect to the factors of Lexicality, handshape Markedness (Battison, 1978), and Type, defined according to whether the action is performed with one or two hands and, for two-handed stimuli, whether or not the action is symmetrical.
Deaf signers performed faster and more accurately than hearing non-signers, and effects related to handshape Markedness and stimulus Type were observed in both groups. However, no effects or interactions related to Lexicality were seen. A further analysis restricted to the deaf group indicated that these results were not dependent upon subjects' age of acquisition of ASL. This work provides new insights into the processes by which the handshape component of sign forms is recognized in a sign language, the role of language experience, and the extent to which these processes may or may not be considered specifically linguistic.
PMCID: PMC3399660  PMID: 22822282
ASL; psycholinguistics; markedness; phoneme monitoring
4.  Effects of language experience on the perception of American Sign Language 
Cognition  2008;109(1):41-53.
Perception of American Sign Language (ASL) handshape and place of articulation parameters was investigated in three groups of signers: deaf native signers, deaf non-native signers who acquired ASL between the ages of 10 and 18, and hearing non-native signers who acquired ASL as a second language between the ages of 10 and 26. Participants were asked to identify and discriminate dynamic synthetic signs on forced-choice identification and similarity judgement tasks. No differences were found in identification performance, but there were effects of language experience on discrimination of the handshape stimuli. Participants were significantly less likely to discriminate handshape stimuli drawn from the region of the category prototype than stimuli that were peripheral to the category or that straddled a category boundary. This pattern was significant for both groups of deaf signers, but was more pronounced for the native signers. The hearing L2 signers exhibited a similar pattern of discrimination, but results did not reach significance. An effect of category structure on the discrimination of place of articulation stimuli was also found, but it did not interact with language background. We conclude that early experience with a signed language magnifies the influence of category prototypes on the perceptual processing of handshape primes, leading to differences in the distribution of attentional resources between native and non-native signers during language comprehension.
PMCID: PMC2639215  PMID: 18834975
Sign language; Sign perception; Non-native first language acquisition; Second language acquisition
5.  Sign language and pantomime production differentially engage frontal and parietal cortices 
Language and Cognitive Processes  2011;26(7):878-901.
We investigated the functional organization of neural systems supporting language production when the primary language articulators are also used for meaningful, but non-linguistic, expression such as pantomime. Fourteen hearing non-signers and 10 deaf native users of American Sign Language (ASL) participated in an H2(15)O-PET study in which they generated action pantomimes or ASL verbs in response to pictures of tools and manipulable objects. For pantomime generation, participants were instructed to “show how you would use the object.” For verb generation, signers were asked to “generate a verb related to the object.” The objects for this condition were selected to elicit handling verbs that resemble pantomime (e.g., TO-HAMMER, in which the hand configuration and movement mimic the act of hammering) and non-handling verbs that do not (e.g., POUR-SYRUP, produced with a “Y” handshape). For the baseline task, participants viewed pictures of manipulable objects and an occasional non-manipulable object and decided whether the objects could be handled, gesturing “yes” (thumbs up) or “no” (hand wave). Relative to baseline, generation of ASL verbs engaged left inferior frontal cortex, but when non-signers produced pantomimes for the same objects, no frontal activation was observed. Both groups recruited left parietal cortex during pantomime production. However, for deaf signers the activation was more extensive and bilateral, which may reflect a more complex and integrated neural representation of hand actions. We conclude that the production of pantomime versus ASL verbs (even those that resemble pantomime) engages partially segregated neural systems that support praxic versus linguistic functions.
PMCID: PMC3167215  PMID: 21909174
6.  Sign Perception and Recognition in Non-Native Signers of ASL 
Past research has established that delayed first language exposure is associated with comprehension difficulties in non-native signers of American Sign Language (ASL) relative to native signers. The goal of the current study was to investigate potential explanations of this disparity: do non-native signers have difficulty with all aspects of comprehension, or are their comprehension difficulties restricted to some aspects of processing? We compared the performance of deaf non-native, hearing L2, and deaf native signers on a handshape and location monitoring task and on a sign recognition task. The results indicate that deaf non-native signers are as rapid and accurate on the monitoring task as native signers, with differences in the pattern of relative performance across handshape and location parameters. By contrast, non-native signers differ significantly from native signers during sign recognition. Hearing L2 signers, who performed almost as well as the two groups of deaf signers on the monitoring task, resembled the deaf native signers more than the deaf non-native signers on the sign recognition task. The combined results indicate that delayed exposure to a signed language leads to an overreliance on handshape during sign recognition.
PMCID: PMC3114635  PMID: 21686080
ASL; Sign language; Perception; Sign recognition; Lexical access; Language experience; Delayed first language acquisition; Second language acquisition
7.  The effects of learning American Sign Language on co-speech gesture 
Bilingualism: Language and Cognition  2012;15(4):677-686.
Given that the linguistic articulators for sign language are also used to produce co-speech gesture, we examined whether one year of academic instruction in American Sign Language (ASL) impacts the rate and nature of gestures produced when speaking English. A survey study revealed that 75% of ASL learners (N = 95), but only 14% of Romance language learners (N = 203), felt that they gestured more after one year of language instruction. A longitudinal study confirmed this perception. Twenty-one ASL learners and 20 Romance language learners (French, Italian, Spanish) were filmed re-telling a cartoon story before and after one academic year of language instruction. Only the ASL learners exhibited an increase in gesture rate, an increase in the production of iconic gestures, and an increase in the number of handshape types exploited in co-speech gesture. Five ASL students also produced at least one ASL sign when re-telling the cartoon. We suggest that learning ASL may (i) lower the neural threshold for co-speech gesture production, (ii) pose a unique challenge for language control, and (iii) have the potential to improve cognitive processes that are linked to gesture.
PMCID: PMC3547625  PMID: 23335853
gesture; American Sign Language; bilingualism
8.  Palm Reversal Errors in Native-Signing Children with Autism 
Children with autism spectrum disorder (ASD) who have native exposure to a sign language such as American Sign Language (ASL) have received almost no scientific attention. This paper reports the first studies on a sample of five native-signing children (four deaf children of deaf parents and one hearing child of deaf parents; ages 4;6 to 7;5) diagnosed with ASD. A domain-general deficit in the ability of children with ASD to replicate the gestures of others is hypothesized to be a source of palm orientation reversal errors in sign. In Study 1, naturalistic language samples were collected from three native-signing children with ASD and were analyzed for errors in handshape, location, movement and palm orientation. In Study 2, four native-signing children with ASD were compared to 12 typically-developing deaf children (ages 3;7 to 6;9, all born to deaf parents) on a fingerspelling task. In both studies children with ASD showed a tendency to reverse palm orientation on signs specified for inward/outward orientation. Typically-developing deaf children did not produce any such errors in palm orientation. We conclude that this kind of palm reversal has a perceptual rather than a motoric source, and is further evidence of a “self-other mapping” deficit in ASD.
PMCID: PMC3479340  PMID: 22981637
Autism spectrum disorder; sign language
9.  Neural responses to meaningless pseudosigns: Evidence for sign-based phonetic processing in superior temporal cortex 
Brain and Language  2010;117(1):34-38.
To identify neural regions that automatically respond to linguistically structured but meaningless manual gestures, 14 deaf native users of American Sign Language (ASL) and 14 hearing non-signers passively viewed pseudosigns (possible but non-existent ASL signs) and non-iconic ASL signs, in addition to a fixation baseline. For the contrast between pseudosigns and baseline, greater activation was observed in left posterior superior temporal sulcus (STS), but not in left inferior frontal gyrus (BA 44/45), for deaf signers compared to hearing non-signers, based on VOI analyses. We hypothesize that left STS is more engaged for signers because this region becomes tuned to human body movements that conform to the phonological constraints of sign language. For deaf signers, the contrast between pseudosigns and known ASL signs revealed increased activation for pseudosigns in left posterior superior temporal gyrus (STG) and in left inferior frontal cortex, but no regions were found to be more engaged for known signs than for pseudosigns. This contrast revealed no significant differences in activation for hearing non-signers. We hypothesize that left STG is involved in recognizing linguistic phonetic units within a dynamic visual or auditory signal, such that less familiar structural combinations produce increased neural activation in this region for both pseudosigns and pseudowords.
PMCID: PMC3075318  PMID: 21094525
sign language; deaf signers; fMRI; pseudowords
10.  The Biology of Linguistic Expression Impacts Neural Correlates for Spatial Language 
Journal of Cognitive Neuroscience  2012;25(4):517-533.
Biological differences between signed and spoken languages may be most evident in the expression of spatial information. PET was used to investigate the neural substrates supporting the production of spatial language in American Sign Language as expressed by classifier constructions, in which handshape indicates object type and the location/motion of the hand iconically depicts the location/motion of a referent object. Deaf native signers performed a picture description task in which they overtly named objects or produced classifier constructions that varied in location, motion, or object type. In contrast to the expression of location and motion, the production of both lexical signs and object type classifier morphemes engaged left inferior frontal cortex and left inferior temporal cortex, supporting the hypothesis that unlike the location and motion components of a classifier construction, classifier handshapes are categorical morphemes that are retrieved via left hemisphere language regions. In addition, lexical signs engaged the anterior temporal lobes to a greater extent than classifier constructions, which we suggest reflects increased semantic processing required to name individual objects compared with simply indicating the type of object. Both location and motion classifier constructions engaged bilateral superior parietal cortex, with some evidence that the expression of static locations differentially engaged the left intraparietal sulcus. We argue that bilateral parietal activation reflects the biological underpinnings of sign language. To express spatial information, signers must transform visual–spatial representations into a body-centered reference frame and reach toward target locations within signing space.
PMCID: PMC3715382  PMID: 23249348
11.  Event segmentation in a visual language: Neural bases of processing American Sign Language predicates 
NeuroImage  2011;59(4):4094-4101.
Motion capture studies show that American Sign Language (ASL) signers distinguish end-points in telic verb signs by means of marked hand articulator motion, which rapidly decelerates to a stop at the end of these signs, as compared to atelic signs (Malaia & Wilbur, in press). Non-signers also show sensitivity to velocity and deceleration cues for event segmentation in visual scenes (Zacks et al., 2010; Zacks et al., 2006), raising the question of whether the neural regions used by ASL signers for sign language verb processing might be similar to those used by non-signers for event segmentation.
The present study investigated the neural substrate of predicate perception and linguistic processing in ASL. Observed patterns of activation demonstrate that Deaf signers process telic verb signs as having higher phonological complexity as compared to atelic verb signs. These results, together with previous neuroimaging data on spoken and sign languages (Shetreet, Friedmann, & Hadar, 2010; Emmorey et al., 2009), illustrate a route for how a prominent perceptual-kinematic feature used for non-linguistic event segmentation might come to be processed as an abstract linguistic feature due to sign language exposure.
PMCID: PMC3279599  PMID: 22032944
sign language; ASL; fMRI; event structure; verb; neuroplasticity; motion
12.  Acquiring word class distinctions in American Sign Language: Evidence from handshape 
Handshape works differently in nouns vs. a class of verbs in American Sign Language (ASL), and thus can serve as a cue to distinguish between these two word classes. Handshapes representing characteristics of the object itself (object handshapes) and handshapes representing how the object is handled (handling handshapes) appear in both nouns and a particular type of verb, classifier predicates, in ASL. When used as nouns, object and handling handshapes are phonemic—that is, they are specified in dictionary entries and do not vary with grammatical context. In contrast, when used as classifier predicates, object and handling handshapes do vary with grammatical context for both morphological and syntactic reasons. We ask here when young deaf children learning ASL acquire the word class distinction signaled by handshape. Specifically, we determined the age at which children systematically vary object vs. handling handshapes as a function of grammatical context in classifier predicates, but not in the nouns that accompany those predicates. We asked 4–6 year old children, 7–10 year old children, and adults, all of whom were native ASL signers, to describe a series of vignettes designed to elicit object and handling handshapes in both nouns and classifier predicates. We found that all of the children behaved like adults with respect to all nouns, systematically varying object and handling handshapes as a function of type of item and not grammatical context. The children also behaved like adults with respect to certain classifiers, systematically varying handshape type as a function of grammatical context for items whose nouns have handling handshapes. The children differed from adults in that they did not systematically vary handshape as a function of grammatical context for items whose nouns have object handshapes. These findings extend previous work by showing that children require developmental time to acquire the full morphological system underlying classifier predicates in sign language, just as children acquiring complex morphology in spoken languages do. In addition, we show for the first time that children acquiring ASL treat object and handling handshapes differently as a function of their status as nouns vs. classifier predicates, and thus display a distinction between these word classes as early as 4 years of age.
PMCID: PMC3650914  PMID: 23671406
13.  Discriminating Signs: Perceptual Precursors to Acquiring a Visual-Gestural Language 
Infant Behavior & Development  2006;30(1):153-160.
We tested hearing six- and ten-month-olds’ ability to discriminate among three American Sign Language (ASL) parameters (location, handshape, and movement) as well as a grammatical marker (facial expression). ASL-naïve infants were habituated to a signer articulating a two-handed symmetrical sign in neutral space. During test, infants viewed novel two-handed signs that varied in only one parameter or in facial expression. Infants detected changes in the signer’s facial expression and in the location of the sign but provided no evidence of detecting the changes in handshape or movement. These findings are consistent with children’s production errors in ASL and reveal that infants can distinguish among some parameters of ASL more easily than others.
PMCID: PMC1885556  PMID: 17292788
Perception; Discrimination; American Sign Language; Parameters; Infants
14.  Discriminant Features and Temporal Structure of Nonmanuals in American Sign Language 
PLoS ONE  2014;9(2):e86268.
To fully define the grammar of American Sign Language (ASL), a linguistic model of its nonmanuals needs to be constructed. While significant progress has been made in understanding the features that define ASL manuals, much still needs to be done, even after years of research, to uncover the discriminant nonmanual components. The major barrier to achieving this goal is the difficulty in correlating facial features and linguistic features, especially since these correlations may be temporally defined. For example, a facial feature (e.g., head moves down) occurring at the end of the movement of another facial feature (e.g., brows move up) may specify a Hypothetical conditional, but only if this time relationship is maintained. In other instances, the single occurrence of a movement (e.g., brows move up) can be indicative of the same grammatical construction. In the present paper, we introduce a linguistic–computational approach to efficiently carry out this analysis. First, a linguistic model of the face is used to manually annotate a very large set of 2,347 videos of ASL nonmanuals (including tens of thousands of frames). Second, a computational approach is used to determine which features of the linguistic model are most informative about the grammatical rules under study. We used the proposed approach to study five types of sentences – Hypothetical conditionals, Yes/no questions, Wh-questions, Wh-questions postposed, and Assertions – plus their polarities – positive and negative. Our results verify several components of the standard model of ASL nonmanuals and, most importantly, identify several previously unreported features and their temporal relationship. Notably, our results uncovered a complex interaction between head position and mouth shape. These findings define some temporal structures of ASL nonmanuals not previously detected by other approaches.
PMCID: PMC3916328  PMID: 24516528
15.  The Link Between Form and Meaning in American Sign Language: Lexical Processing Effects 
Signed languages exploit iconicity (the transparent relationship between meaning and form) to a greater extent than spoken languages, where it is largely limited to onomatopoeia. In a picture–sign matching experiment measuring reaction times, the authors examined the potential advantage of iconicity both for 1st- and 2nd-language learners of American Sign Language (ASL). The results show that native ASL signers are faster to respond when a specific property iconically represented in a sign is made salient in the corresponding picture, thus providing evidence that a closer mapping between meaning and form can aid in lexical retrieval. While late 2nd-language learners appear to use iconicity as an aid to learning sign, they did not show the same facilitation effect as native ASL signers, suggesting that the task tapped into more automatic language processes. Overall, the findings suggest that completely arbitrary mappings between meaning and form may not be more advantageous in language and that, rather, arbitrariness may simply be an accident of modality.
PMCID: PMC3667647  PMID: 19271866
semantics; iconicity; sign language; word recognition; psycholinguistics
16.  The perception of handshapes in American Sign Language 
Memory & Cognition  2005;33(5):887-904.
Despite the constantly varying stream of sensory information that surrounds us, we humans can discern the small building blocks of words that constitute language (phonetic forms) and perceive them categorically (categorical perception, CP). Decades of controversy have prevailed regarding what is at the heart of CP, with many arguing that it is due to domain-general perceptual processing and others that it is determined by domain-specific linguistic processing. Which matters most: perceptual or linguistic patterns? Here, we study whether CP occurs with soundless handshapes that are nonetheless phonetic in American Sign Language (ASL), in signers and nonsigners. Using innovative methods and analyses of identification and, crucially, discrimination tasks, we found that both groups separated the soundless handshapes into two classes perceptually but that only the ASL signers exhibited linguistic CP. These findings suggest that CP of linguistic stimuli is based on linguistic categorization, rather than on purely perceptual categorization.
PMCID: PMC2730958  PMID: 16383176
17.  Amodal Aspects of Linguistic Design 
PLoS ONE  2013;8(4):e60617.
All spoken languages encode syllables and constrain their internal structure. But whether these restrictions concern the design of the language system, broadly, or speech, specifically, remains unknown. To address this question, here, we gauge the structure of signed syllables in American Sign Language (ASL). Like spoken languages, signed syllables must exhibit a single sonority/energy peak (i.e., movement). Four experiments examine whether this restriction is enforced by signers and nonsigners. We first show that Deaf ASL signers selectively apply sonority restrictions to syllables (but not morphemes) in novel ASL signs. We next examine whether this principle might further shape the representation of signed syllables by nonsigners. Absent any experience with ASL, nonsigners used movement to define syllable-like units. Moreover, the restriction on syllable structure constrained the capacity of nonsigners to learn from experience. Given brief practice that implicitly paired syllables with sonority peaks (i.e., movement)—a natural phonological constraint attested in every human language—nonsigners rapidly learned to selectively rely on movement to define syllables and they also learned to partly ignore it in the identification of morpheme-like units. Remarkably, nonsigners failed to learn an unnatural rule that defines syllables by handshape, suggesting they were unable to ignore movement in identifying syllables. These findings indicate that signed and spoken syllables are subject to a shared phonological restriction that constrains phonological learning in a new modality. These conclusions suggest the design of the phonological system is partly amodal.
PMCID: PMC3616023  PMID: 23573272
18.  Bimodal bilingualism 
Speech–sign or “bimodal” bilingualism is exceptional because distinct modalities allow for simultaneous production of two languages. We investigated the ramifications of this phenomenon for models of language production by eliciting language mixing from eleven hearing native users of American Sign Language (ASL) and English. Instead of switching between languages, bilinguals frequently produced code-blends (simultaneously produced English words and ASL signs). Code-blends resembled co-speech gesture with respect to synchronous vocal–manual timing and semantic equivalence. When ASL was the Matrix Language, no single-word code-blends were observed, suggesting stronger inhibition of English than ASL for these proficient bilinguals. We propose a model that accounts for similarities between co-speech gesture and code-blending and assumes interactions between ASL and English Formulators. The findings constrain language production models by demonstrating the possibility of simultaneously selecting two lexical representations (but not two propositions) for linguistic expression and by suggesting that lexical suppression is computationally more costly than lexical selection.
PMCID: PMC2600850  PMID: 19079743
19.  Bilingual processing of ASL-English code-blends: The consequences of accessing two lexical representations simultaneously 
Journal of Memory and Language  2012;67(1):199-210.
Bilinguals who are fluent in American Sign Language (ASL) and English often produce code-blends - simultaneously articulating a sign and a word while conversing with other ASL-English bilinguals. To investigate the cognitive mechanisms underlying code-blend processing, we compared picture-naming times (Experiment 1) and semantic categorization times (Experiment 2) for code-blends versus ASL signs and English words produced alone. In production, code-blending did not slow lexical retrieval for ASL and actually facilitated access to low-frequency signs. However, code-blending delayed speech production because bimodal bilinguals synchronized English and ASL lexical onsets. In comprehension, code-blending speeded access to both languages. Bimodal bilinguals’ ability to produce code-blends without any cost to ASL implies that the language system either has (or can develop) a mechanism for switching off competition to allow simultaneous production of close competitors. Code-blend facilitation effects during comprehension likely reflect cross-linguistic (and cross-modal) integration at the phonological and/or semantic levels. The absence of any consistent processing costs for code-blending illustrates a surprising limitation on dual-task costs and may explain why bimodal bilinguals code-blend more often than they code-switch.
PMCID: PMC3389804  PMID: 22773886
bilingualism; lexical access; sign language
20.  Neuroplasticity Associated with Tactile Language Communication in a Deaf-Blind Subject 
A long-standing debate in cognitive neuroscience pertains to the innate nature of language development and the underlying factors that determine this faculty. We explored the neural correlates associated with language processing in a unique individual who is early blind, congenitally deaf, and possesses a high level of language function. Using functional magnetic resonance imaging (fMRI), we compared the neural networks associated with the tactile reading of words presented in Braille, Print on Palm (POP), and a haptic form of American Sign Language (haptic ASL or hASL). With all three modes of tactile communication, identifying words was associated with robust activation within occipital cortical regions as well as posterior superior temporal and inferior frontal language areas (lateralized within the left hemisphere). In a normally sighted and hearing interpreter, identifying words through hASL was associated with left-lateralized activation of inferior frontal language areas; however, robust occipital cortex activation was not observed. Diffusion tensor imaging-based tractography revealed differences consistent with enhanced occipital-temporal connectivity in the deaf-blind subject. Our results demonstrate that in the case of early onset of both visual and auditory deprivation, tactile-based communication is associated with an extensive cortical network implicating occipital as well as posterior superior temporal and frontal associated language areas. The cortical areas activated in this deaf-blind subject are consistent with characteristic cortical regions previously implicated in language. Finally, the resilience of language function within the context of early and combined visual and auditory deprivation may be related to enhanced connectivity between relevant cortical areas.
PMCID: PMC2805429  PMID: 20130756
deafness; blindness; tactile language; neuroplasticity; fMRI; diffusion tensor imaging
21.  Distinguishing the Processing of Gestures from Signs in Deaf Individuals: An fMRI Study 
Brain Research  2009;1276:140-150.
Manual gestures occur on a continuum from co-speech gesticulations to conventionalized emblems to language signs. Our goal in the present study was to understand the neural bases of the processing of gestures along such a continuum. We studied four types of gestures, varying along linguistic and semantic dimensions: linguistic and meaningful American Sign Language (ASL), non-meaningful pseudo-ASL, meaningful emblematic, and nonlinguistic, non-meaningful made-up gestures. Pre-lingually deaf, native signers of ASL participated in the fMRI study and performed two tasks while viewing videos of the gestures: a visuo-spatial (identity) discrimination task and a category discrimination task. We found that the categorization task activated left ventral middle and inferior frontal gyrus, among other regions, to a greater extent compared to the visual discrimination task, supporting the idea of semantic-level processing of the gestures. The reverse contrast resulted in enhanced activity of bilateral intraparietal sulcus, supporting the idea of featural-level processing (analogous to phonological-level processing of speech sounds) of the gestures. Regardless of the task, we found that brain activation patterns for the nonlinguistic, non-meaningful gestures were the most different compared to the ASL gestures. The activation patterns for the emblems were most similar to those of the ASL gestures and those of the pseudo-ASL were most similar to the nonlinguistic, non-meaningful gestures. The fMRI results provide partial support for the conceptualization of different gestures as belonging to a continuum and the variance in the fMRI results was best explained by differences in the processing of gestures along the semantic dimension.
PMCID: PMC2693477  PMID: 19397900
American Sign Language; gestures; Deaf; visual processing; categorization; linguistic; brain; fMRI
22.  From iconic handshapes to grammatical contrasts: longitudinal evidence from a child homesigner 
Many sign languages display crosslinguistic consistencies in the use of two iconic aspects of handshape, handshape type and finger group complexity. Handshape type is used systematically in form-meaning pairings (morphology): Handling handshapes (Handling-HSs), representing how objects are handled, tend to be used to express events with an agent (“hand-as-hand” iconicity), and Object handshapes (Object-HSs), representing an object's size/shape, are used more often to express events without an agent (“hand-as-object” iconicity). Second, in the distribution of meaningless properties of form (morphophonology), Object-HSs display higher finger group complexity than Handling-HSs. Some adult homesigners, who have not acquired a signed or spoken language and instead use a self-generated gesture system, exhibit these two properties as well. This study illuminates the development over time of both phenomena for one child homesigner, “Julio,” age 7;4 (years; months) to 12;8. We elicited descriptions of events with and without agents to determine whether morphophonology and morphosyntax can develop without linguistic input during childhood, and whether these structures develop together or independently. Within the time period studied: (1) Julio used handshape type differently in his responses to vignettes with and without an agent; however, he did not exhibit the same pattern that was found previously in signers, adult homesigners, or gesturers: while he was highly likely to use a Handling-HS for events with an agent (82%), he was less likely to use an Object-HS for non-agentive events (49%); i.e., his productions were heavily biased toward Handling-HSs; (2) Julio exhibited higher finger group complexity in Object- than in Handling-HSs, as in the sign language and adult homesigner groups previously studied; and (3) these two dimensions of language developed independently, with phonological structure showing a sign language-like pattern at an earlier age than morphosyntactic structure. We conclude that iconicity alone is not sufficient to explain the development of linguistic structure in homesign systems. Linguistic input is not required for some aspects of phonological structure to emerge in childhood, and while linguistic input is not required for morphology either, it takes time to emerge in homesign.
PMCID: PMC4139701  PMID: 25191283
sign language; homesign; gesture; phonology; morphology; language emergence; iconicity; grammaticalization
23.  Sign Language Ability in Young Deaf Signers Predicts Comprehension of Written Sentences in English 
PLoS ONE  2014;9(2):e89994.
We investigated the robust correlation between American Sign Language (ASL) and English reading ability in 51 young deaf signers ages 7;3 to 19;0. Signers were divided into ‘skilled’ and ‘less-skilled’ signer groups based on their performance on three measures of ASL. We next assessed reading comprehension of four English sentence structures (actives, passives, pronouns, reflexive pronouns) using a sentence-to-picture-matching task. Of interest was the extent to which ASL proficiency provided a foundation for the lexical and syntactic processes of English. Skilled signers outperformed less-skilled signers overall. Error analyses further indicated greater single-word recognition difficulties in less-skilled signers, marked by a higher rate of errors reflecting an inability to identify the actors and actions described in the sentence. Our findings provide evidence that increased ASL ability supports English sentence comprehension at the levels of both individual words and syntax. This is consistent with the theory that first-language learning promotes second-language learning through the transfer of linguistic elements, irrespective of the transparency of the mapping of grammatical structures between the two languages.
PMCID: PMC3938551  PMID: 24587174
24.  The influence of visual feedback and register changes on sign language production: A kinematic study with deaf signers 
Applied Psycholinguistics  2009;30(1):187-203.
Speakers monitor their speech output by listening to their own voice. However, signers do not look directly at their hands and cannot see their own face. We investigated the importance of a visual perceptual loop for sign language monitoring by examining whether changes in visual input alter sign production. Deaf signers produced American Sign Language (ASL) signs within a carrier phrase under five conditions: blindfolded, wearing tunnel-vision goggles, normal (citation) signing, shouting, and informal signing. Three-dimensional movement trajectories were obtained using an Optotrak Certus system. Informally produced signs were shorter, with less vertical movement. Shouted signs were displaced forward and to the right and were produced within a larger volume of signing space, with greater velocity, greater distance traveled, and a longer duration. Tunnel vision caused signers to produce less movement within the vertical dimension of signing space, but blindfolded and citation signing did not differ significantly on any measure except duration. Thus, signers do not “sign louder” when they cannot see themselves, but they do alter their sign production when vision is restricted. We hypothesize that visual feedback serves primarily to fine-tune the size of signing space rather than as input to a comprehension-based monitor.
PMCID: PMC2726740  PMID: 20046943
25.  Deaf Mothers and Breastfeeding: Do Unique Features of Deaf Culture and Language Support Breastfeeding Success? 
Deaf mothers who use American Sign Language (ASL) consider themselves a linguistic minority group, with specific cultural practices. Rarely has this group been engaged in infant-feeding research.
Our aim was to understand how ASL-using Deaf mothers learn about infant feeding and to identify their breastfeeding challenges.
Using a community-based participatory research (CBPR) approach, we conducted four focus groups with Deaf mothers who had at least one child aged 0–5 years. A script was developed using a social ecological model (SEM) to capture multiple levels of influence. All groups were conducted in ASL, filmed, and transcribed into English. Deaf and hearing researchers analyzed the data by coding themes within each SEM level.
Fifteen mothers participated. All had initiated breastfeeding with their most recent child. Breastfeeding duration for eight of the mothers was three weeks to 12 months. Seven of the mothers were still breastfeeding, the longest for 19 months. Those mothers who breastfed longer described a supportive social environment and the ability to surmount challenges. Participants described characteristics of Deaf culture such as direct communication, sharing information, use of technologies, language access through interpreters and ASL-using providers, and strong self-advocacy skills. Finally, mothers used the sign ‘struggle’ to describe their breastfeeding experience. The sign implies a sustained effort over time which leads to success.
In a setting with a large population of Deaf women and ASL-using providers, we identified several aspects of Deaf culture and language that support breastfeeding mothers across the institutional, community, and interpersonal levels of the SEM.
PMCID: PMC4112581  PMID: 23492762
