Results 1–25 of 1,107,632

1.  The role of syllables in sign language production 
Frontiers in Psychology  2014;5:1254.
The aim of the present study was to investigate the functional role of syllables in sign language and how the different phonological combinations influence sign production. Moreover, the influence of age of acquisition was evaluated. Deaf signers (native and non-native) of Catalan Sign Language (LSC) were asked, in a picture-sign interference task, to sign picture names while ignoring distractor signs that shared two of the three main sign parameters (Location, Movement, and Handshape) with the target. The results revealed that the three phonological combinations had different impacts. While no effect was observed for the phonological combination Handshape-Location, the combination Handshape-Movement slowed down signing latencies, but only in the non-native group. A facilitatory effect was observed for both groups when pictures and distractors shared Location-Movement. Importantly, linguistic models have considered this phonological combination to be a privileged unit in the composition of signs, as syllables are in spoken languages. Thus, our results support the functional role of syllable units during phonological articulation in sign language production.
doi:10.3389/fpsyg.2014.01254
PMCID: PMC4230165  PMID: 25431562
sign language; speech production; syllables; sign parameters; picture naming
2.  When does a system become phonological? Handshape production in gesturers, signers, and homesigners 
Sign languages display remarkable crosslinguistic consistencies in the use of handshapes. In particular, handshapes used in classifier predicates display a consistent pattern in finger complexity: classifier handshapes representing objects display more finger complexity than those representing how objects are handled. Here we explore the conditions under which this morphophonological phenomenon arises. In Study 1, we ask whether hearing individuals in Italy and the United States, asked to communicate using only their hands, show the same pattern of finger complexity found in the classifier handshapes of two sign languages: Italian Sign Language (LIS) and American Sign Language (ASL). We find that they do not: gesturers display more finger complexity in handling handshapes than in object handshapes. The morphophonological pattern found in conventional sign languages is therefore not a codified version of the pattern invented by hearing individuals on the spot. In Study 2, we ask whether continued use of gesture as a primary communication system results in a pattern that is more similar to the morphophonological pattern found in conventional sign languages or to the pattern found in gesturers. Homesigners have not acquired a signed or spoken language and instead use a self-generated gesture system to communicate with their hearing family members and friends. We find that homesigners pattern more like signers than like gesturers: their finger complexity in object handshapes is higher than that of gesturers (indeed as high as signers); and their finger complexity in handling handshapes is lower than that of gesturers (but not quite as low as signers). Generally, our findings indicate two markers of the phonologization of handshape in sign languages: increasing finger complexity in object handshapes, and decreasing finger complexity in handling handshapes. These first indicators of phonology appear to be present in individuals developing a gesture system without benefit of a linguistic community. Finally, we propose that iconicity, morphology and phonology each play an important role in the system of sign language classifiers to create the earliest markers of phonology at the morphophonological interface.
doi:10.1007/s11049-011-9145-1
PMCID: PMC3665423  PMID: 23723534
Sign language; phonology; morphology; homesign; gesture; language evolution; historical change; handshape; classifier predicates
3.  Handshape monitoring: Evaluation of linguistic and perceptual factors in the processing of American Sign Language 
Language and Cognitive Processes  2011;27(1):117-141.
We investigated the relevance of linguistic and perceptual factors to sign processing by comparing hearing individuals and deaf signers as they performed a handshape monitoring task, a sign-language analogue to the phoneme-monitoring paradigms used in many spoken-language studies. Each subject saw a series of brief video clips, each of which showed either an ASL sign or a phonologically possible but non-lexical “non-sign,” and responded when the viewed action was formed with a particular handshape. Stimuli varied with respect to the factors of Lexicality, handshape Markedness (Battison, 1978), and Type, defined according to whether the action is performed with one or two hands and, for two-handed stimuli, whether or not the action is symmetrical.
Deaf signers performed faster and more accurately than hearing non-signers, and effects related to handshape Markedness and stimulus Type were observed in both groups. However, no effects or interactions related to Lexicality were seen. A further analysis restricted to the deaf group indicated that these results were not dependent upon subjects' age of acquisition of ASL. This work provides new insights into the processes by which the handshape component of sign forms is recognized in a sign language, the role of language experience, and the extent to which these processes may or may not be considered specifically linguistic.
doi:10.1080/01690965.2010.549667
PMCID: PMC3399660  PMID: 22822282
ASL; psycholinguistics; markedness; phoneme monitoring
4.  Discriminant Features and Temporal Structure of Nonmanuals in American Sign Language 
PLoS ONE  2014;9(2):e86268.
To fully define the grammar of American Sign Language (ASL), a linguistic model of its nonmanuals needs to be constructed. While significant progress has been made to understand the features defining ASL manuals, after years of research, much still needs to be done to uncover the discriminant nonmanual components. The major barrier to achieving this goal is the difficulty in correlating facial features and linguistic features, especially since these correlations may be temporally defined. For example, a facial feature (e.g., head moves down) occurring at the end of the movement of another facial feature (e.g., brows move up) may specify a Hypothetical conditional, but only if this time relationship is maintained. In other instances, the single occurrence of a movement (e.g., brows move up) can be indicative of the same grammatical construction. In the present paper, we introduce a linguistic–computational approach to efficiently carry out this analysis. First, a linguistic model of the face is used to manually annotate a very large set of 2,347 videos of ASL nonmanuals (including tens of thousands of frames). Second, a computational approach is used to determine which features of the linguistic model are more informative of the grammatical rules under study. We used the proposed approach to study five types of sentences – Hypothetical conditionals, Yes/no questions, Wh-questions, Wh-questions postposed, and Assertions – plus their polarities (positive and negative). Our results verify several components of the standard model of ASL nonmanuals and, most importantly, identify several previously unreported features and their temporal relationship. Notably, our results uncovered a complex interaction between head position and mouth shape. These findings define some temporal structures of ASL nonmanuals not previously detected by other approaches.
doi:10.1371/journal.pone.0086268
PMCID: PMC3916328  PMID: 24516528
5.  Acquiring word class distinctions in American Sign Language: Evidence from handshape
Handshape works differently in nouns vs. a class of verbs in American Sign Language (ASL), and thus can serve as a cue to distinguish between these two word classes. Handshapes representing characteristics of the object itself (object handshapes) and handshapes representing how the object is handled (handling handshapes) appear in both nouns and a particular type of verb, classifier predicates, in ASL. When used as nouns, object and handling handshapes are phonemic—that is, they are specified in dictionary entries and do not vary with grammatical context. In contrast, when used as classifier predicates, object and handling handshapes do vary with grammatical context for both morphological and syntactic reasons. We ask here when young deaf children learning ASL acquire the word class distinction signaled by handshape. Specifically, we determined the age at which children systematically vary object vs. handling handshapes as a function of grammatical context in classifier predicates, but not in the nouns that accompany those predicates. We asked 4–6 year old children, 7–10 year old children, and adults, all of whom were native ASL signers, to describe a series of vignettes designed to elicit object and handling handshapes in both nouns and classifier predicates. We found that all of the children behaved like adults with respect to all nouns, systematically varying object and handling handshapes as a function of type of item and not grammatical context. The children also behaved like adults with respect to certain classifiers, systematically varying handshape type as a function of grammatical context for items whose nouns have handling handshapes. The children differed from adults in that they did not systematically vary handshape as a function of grammatical context for items whose nouns have object handshapes. These findings extend previous work by showing that children require developmental time to acquire the full morphological system underlying classifier predicates in sign language, just as children acquiring complex morphology in spoken languages do. In addition, we show for the first time that children acquiring ASL treat object and handling handshapes differently as a function of their status as nouns vs. classifier predicates, and thus display a distinction between these word classes as early as 4 years of age.
doi:10.1080/15475441.2012.679540
PMCID: PMC3650914  PMID: 23671406
6.  Effects of language experience on the perception of American Sign Language 
Cognition  2008;109(1):41-53.
Perception of American Sign Language (ASL) handshape and place of articulation parameters was investigated in three groups of signers: deaf native signers, deaf non-native signers who acquired ASL between the ages of 10 and 18, and hearing non-native signers who acquired ASL as a second language between the ages of 10 and 26. Participants were asked to identify and discriminate dynamic synthetic signs on forced-choice identification and similarity judgement tasks. No differences were found in identification performance, but there were effects of language experience on discrimination of the handshape stimuli. Participants were significantly less likely to discriminate handshape stimuli drawn from the region of the category prototype than stimuli that were peripheral to the category or that straddled a category boundary. This pattern was significant for both groups of deaf signers, but was more pronounced for the native signers. The hearing L2 signers exhibited a similar pattern of discrimination, but results did not reach significance. An effect of category structure on the discrimination of place of articulation stimuli was also found, but it did not interact with language background. We conclude that early experience with a signed language magnifies the influence of category prototypes on the perceptual processing of handshape primes, leading to differences in the distribution of attentional resources between native and non-native signers during language comprehension.
doi:10.1016/j.cognition.2008.07.016
PMCID: PMC2639215  PMID: 18834975
Sign language; Sign perception; Non-native first language acquisition; Second language acquisition
7.  Sign Perception and Recognition in Non-Native Signers of ASL 
Past research has established that delayed first language exposure is associated with comprehension difficulties in non-native signers of American Sign Language (ASL) relative to native signers. The goal of the current study was to investigate potential explanations of this disparity: do non-native signers have difficulty with all aspects of comprehension, or are their comprehension difficulties restricted to some aspects of processing? We compared the performance of deaf non-native, hearing L2, and deaf native signers on a handshape and location monitoring task and a sign recognition task. The results indicate that deaf non-native signers are as rapid and accurate on the monitoring task as native signers, with differences in the pattern of relative performance across handshape and location parameters. By contrast, non-native signers differ significantly from native signers during sign recognition. Hearing L2 signers, who performed almost as well as the two groups of deaf signers on the monitoring task, resembled the deaf native signers more than the deaf non-native signers on the sign recognition task. The combined results indicate that delayed exposure to a signed language leads to an overreliance on handshape during sign recognition.
doi:10.1080/15475441.2011.543393
PMCID: PMC3114635  PMID: 21686080
ASL; Sign language; Perception; Sign recognition; Lexical access; Language experience; Delayed first language acquisition; Second language acquisition
8.  The effects of learning American Sign Language on co-speech gesture
Bilingualism (Cambridge, England)  2012;15(4):677-686.
Given that the linguistic articulators for sign language are also used to produce co-speech gesture, we examined whether one year of academic instruction in American Sign Language (ASL) impacts the rate and nature of gestures produced when speaking English. A survey study revealed that 75% of ASL learners (N = 95), but only 14% of Romance language learners (N = 203), felt that they gestured more after one year of language instruction. A longitudinal study confirmed this perception. Twenty-one ASL learners and 20 Romance language learners (French, Italian, Spanish) were filmed re-telling a cartoon story before and after one academic year of language instruction. Only the ASL learners exhibited an increase in gesture rate, an increase in the production of iconic gestures, and an increase in the number of handshape types exploited in co-speech gesture. Five ASL students also produced at least one ASL sign when re-telling the cartoon. We suggest that learning ASL may (i) lower the neural threshold for co-speech gesture production, (ii) pose a unique challenge for language control, and (iii) have the potential to improve cognitive processes that are linked to gesture.
doi:10.1017/S1366728911000575
PMCID: PMC3547625  PMID: 23335853
gesture; American Sign Language; bilingualism
9.  Palm Reversal Errors in Native-Signing Children with Autism 
Children with autism spectrum disorder (ASD) who have native exposure to a sign language such as American Sign Language (ASL) have received almost no scientific attention. This paper reports the first studies on a sample of five native-signing children (four deaf children of deaf parents and one hearing child of deaf parents; ages 4;6 to 7;5) diagnosed with ASD. A domain-general deficit in the ability of children with ASD to replicate the gestures of others is hypothesized to be a source of palm orientation reversal errors in sign. In Study 1, naturalistic language samples were collected from three native-signing children with ASD and were analyzed for errors in handshape, location, movement and palm orientation. In Study 2, four native-signing children with ASD were compared to 12 typically-developing deaf children (ages 3;7 to 6;9, all born to deaf parents) on a fingerspelling task. In both studies children with ASD showed a tendency to reverse palm orientation on signs specified for inward/outward orientation. Typically-developing deaf children did not produce any such errors in palm orientation. We conclude that this kind of palm reversal has a perceptual rather than a motoric source, and is further evidence of a “self-other mapping” deficit in ASD.
doi:10.1016/j.jcomdis.2012.08.004
PMCID: PMC3479340  PMID: 22981637
Autism spectrum disorder; sign language
10.  Neural responses to meaningless pseudosigns: Evidence for sign-based phonetic processing in superior temporal cortex 
Brain and Language  2010;117(1):34-38.
To identify neural regions that automatically respond to linguistically structured, but meaningless manual gestures, 14 deaf native users of American Sign Language (ASL) and 14 hearing non-signers passively viewed pseudosigns (possible but non-existent ASL signs) and non-iconic ASL signs, in addition to a fixation baseline. For the contrast between pseudosigns and baseline, greater activation was observed in left posterior superior temporal sulcus (STS), but not in left inferior frontal gyrus (BA 44/45), for deaf signers compared to hearing non-signers, based on VOI analyses. We hypothesize that left STS is more engaged for signers because this region becomes tuned to human body movements that conform to the phonological constraints of sign language. For deaf signers, the contrast between pseudosigns and known ASL signs revealed increased activation for pseudosigns in left posterior superior temporal gyrus (STG) and in left inferior frontal cortex, but no regions were found to be more engaged for known signs than for pseudosigns. This contrast revealed no significant differences in activation for hearing non-signers. We hypothesize that left STG is involved in recognizing linguistic phonetic units within a dynamic visual or auditory signal, such that less familiar structural combinations produce increased neural activation in this region for both pseudosigns and pseudowords.
doi:10.1016/j.bandl.2010.10.003
PMCID: PMC3075318  PMID: 21094525
sign language; deaf signers; fMRI; pseudowords
11.  Event segmentation in a visual language: Neural bases of processing American Sign Language predicates 
NeuroImage  2011;59(4):4094-4101.
Motion capture studies show that American Sign Language (ASL) signers distinguish end-points in telic verb signs by means of marked hand articulator motion, which rapidly decelerates to a stop at the end of these signs, as compared to atelic signs (Malaia & Wilbur, in press). Non-signers also show sensitivity to velocity and deceleration cues for event segmentation in visual scenes (Zacks et al., 2010; Zacks et al., 2006), introducing the question of whether the neural regions used by ASL signers for sign language verb processing might be similar to those used by non-signers for event segmentation.
The present study investigated the neural substrate of predicate perception and linguistic processing in ASL. Observed patterns of activation demonstrate that Deaf signers process telic verb signs as having higher phonological complexity as compared to atelic verb signs. These results, together with previous neuroimaging data on spoken and sign languages (Shetreet, Friedmann, & Hadar, 2010; Emmorey et al., 2009), illustrate a route for how a prominent perceptual-kinematic feature used for non-linguistic event segmentation might come to be processed as an abstract linguistic feature due to sign language exposure.
doi:10.1016/j.neuroimage.2011.10.034
PMCID: PMC3279599  PMID: 22032944
sign language; ASL; fMRI; event structure; verb; neuroplasticity; motion
12.  Sign language and pantomime production differentially engage frontal and parietal cortices 
Language and Cognitive Processes  2011;26(7):878-901.
We investigated the functional organization of neural systems supporting language production when the primary language articulators are also used for meaningful, but non-linguistic, expression such as pantomime. Fourteen hearing non-signers and 10 deaf native users of American Sign Language (ASL) participated in an H₂¹⁵O-PET study in which they generated action pantomimes or ASL verbs in response to pictures of tools and manipulable objects. For pantomime generation, participants were instructed to “show how you would use the object.” For verb generation, signers were asked to “generate a verb related to the object.” The objects for this condition were selected to elicit handling verbs that resemble pantomime (e.g., TO-HAMMER, in which hand configuration and movement mimic the act of hammering) and non-handling verbs that do not (e.g., POUR-SYRUP, produced with a “Y” handshape). For the baseline task, participants viewed pictures of manipulable objects and an occasional non-manipulable object and decided whether the objects could be handled, gesturing “yes” (thumbs up) or “no” (hand wave). Relative to baseline, generation of ASL verbs engaged left inferior frontal cortex, but when non-signers produced pantomimes for the same objects, no frontal activation was observed. Both groups recruited left parietal cortex during pantomime production. However, for deaf signers the activation was more extensive and bilateral, which may reflect a more complex and integrated neural representation of hand actions. We conclude that the production of pantomime versus ASL verbs (even those that resemble pantomime) engages partially segregated neural systems that support praxic versus linguistic functions.
doi:10.1080/01690965.2010.492643
PMCID: PMC3167215  PMID: 21909174
13.  The perception of handshapes in American Sign Language 
Memory & Cognition  2005;33(5):887-904.
Despite the constantly varying stream of sensory information that surrounds us, we humans can discern the small building blocks of words that constitute language (phonetic forms) and perceive them categorically (categorical perception, CP). Decades of controversy have prevailed regarding what is at the heart of CP, with many arguing that it is due to domain-general perceptual processing and others that it is determined by the existence of domain-specific linguistic processing. Which matters more: perceptual or linguistic patterns? Here, we study whether CP occurs with soundless handshapes that are nonetheless phonetic in American Sign Language (ASL), in signers and nonsigners. Using innovative methods and analyses of identification and, crucially, discrimination tasks, we found that both groups separated the soundless handshapes into two classes perceptually but that only the ASL signers exhibited linguistic CP. These findings suggest that CP of linguistic stimuli is based on linguistic categorization, rather than on purely perceptual categorization.
PMCID: PMC2730958  PMID: 16383176
14.  Discriminating Signs: Perceptual Precursors to Acquiring a Visual-Gestural Language 
Infant Behavior & Development  2006;30(1):153-160.
We tested hearing six- and ten-month-olds’ ability to discriminate among three American Sign Language (ASL) parameters (location, handshape, and movement) as well as a grammatical marker (facial expression). ASL-naïve infants were habituated to a signer articulating a two-handed symmetrical sign in neutral space. During test, infants viewed novel two-handed signs that varied in only one parameter or in facial expression. Infants detected changes in the signer’s facial expression and in the location of the sign but provided no evidence of detecting the changes in handshape or movement. These findings are consistent with children’s production errors in ASL and reveal that infants can distinguish among some parameters of ASL more easily than others.
doi:10.1016/j.infbeh.2006.08.006
PMCID: PMC1885556  PMID: 17292788
Perception; Discrimination; American Sign Language; Parameters; Infants
15.  From iconic handshapes to grammatical contrasts: longitudinal evidence from a child homesigner 
Many sign languages display crosslinguistic consistencies in the use of two iconic aspects of handshape, handshape type and finger group complexity. Handshape type is used systematically in form-meaning pairings (morphology): Handling handshapes (Handling-HSs), representing how objects are handled, tend to be used to express events with an agent (“hand-as-hand” iconicity), and Object handshapes (Object-HSs), representing an object's size/shape, are used more often to express events without an agent (“hand-as-object” iconicity). Second, in the distribution of meaningless properties of form (morphophonology), Object-HSs display higher finger group complexity than Handling-HSs. Some adult homesigners, who have not acquired a signed or spoken language and instead use a self-generated gesture system, exhibit these two properties as well. This study illuminates the development over time of both phenomena for one child homesigner, “Julio,” age 7;4 (years; months) to 12;8. We elicited descriptions of events with and without agents to determine whether morphophonology and morphosyntax can develop without linguistic input during childhood, and whether these structures develop together or independently. Within the time period studied: (1) Julio used handshape type differently in his responses to vignettes with and without an agent; however, he did not exhibit the same pattern that was found previously in signers, adult homesigners, or gesturers: while he was highly likely to use a Handling-HS for events with an agent (82%), he was less likely to use an Object-HS for non-agentive events (49%); i.e., his productions were heavily biased toward Handling-HSs; (2) Julio exhibited higher finger group complexity in Object- than in Handling-HSs, as in the sign language and adult homesigner groups previously studied; and (3) these two dimensions of language developed independently, with phonological structure showing a sign language-like pattern at an earlier age than morphosyntactic structure. We conclude that iconicity alone is not sufficient to explain the development of linguistic structure in homesign systems. Linguistic input is not required for some aspects of phonological structure to emerge in childhood, and while linguistic input is not required for morphology either, it takes time to emerge in homesign.
doi:10.3389/fpsyg.2014.00830
PMCID: PMC4139701  PMID: 25191283
sign language; homesign; gesture; phonology; morphology; language emergence; iconicity; grammaticalization
16.  Neuroplasticity Associated with Tactile Language Communication in a Deaf-Blind Subject 
A long-standing debate in cognitive neuroscience pertains to the innate nature of language development and the underlying factors that determine this faculty. We explored the neural correlates associated with language processing in a unique individual who is early blind, congenitally deaf, and possesses a high level of language function. Using functional magnetic resonance imaging (fMRI), we compared the neural networks associated with the tactile reading of words presented in Braille, Print on Palm (POP), and a haptic form of American Sign Language (haptic ASL or hASL). With all three modes of tactile communication, identifying words was associated with robust activation within occipital cortical regions as well as posterior superior temporal and inferior frontal language areas (lateralized within the left hemisphere). In a normally sighted and hearing interpreter, identifying words through hASL was associated with left-lateralized activation of inferior frontal language areas; however, robust occipital cortex activation was not observed. Diffusion tensor imaging-based tractography revealed differences consistent with enhanced occipital-temporal connectivity in the deaf-blind subject. Our results demonstrate that in the case of early onset of both visual and auditory deprivation, tactile-based communication is associated with an extensive cortical network implicating occipital as well as posterior superior temporal and frontal associated language areas. The cortical areas activated in this deaf-blind subject are consistent with characteristic cortical regions previously implicated with language. Finally, the resilience of language function within the context of early and combined visual and auditory deprivation may be related to enhanced connectivity between relevant cortical areas.
doi:10.3389/neuro.09.060.2009
PMCID: PMC2805429  PMID: 20130756
deafness; blindness; tactile language; neuroplasticity; fMRI; diffusion tensor imaging
17.  The Link Between Form and Meaning in American Sign Language: Lexical Processing Effects 
Signed languages exploit iconicity (the transparent relationship between meaning and form) to a greater extent than spoken languages, where it is largely limited to onomatopoeia. In a picture–sign matching experiment measuring reaction times, the authors examined the potential advantage of iconicity both for 1st- and 2nd-language learners of American Sign Language (ASL). The results show that native ASL signers are faster to respond when a specific property iconically represented in a sign is made salient in the corresponding picture, thus providing evidence that a closer mapping between meaning and form can aid in lexical retrieval. While late 2nd-language learners appear to use iconicity as an aid to learning sign, they did not show the same facilitation effect as native ASL signers, suggesting that the task tapped into more automatic language processes. Overall, the findings suggest that completely arbitrary mappings between meaning and form may not be more advantageous in language and that, rather, arbitrariness may simply be an accident of modality.
doi:10.1037/a0014547
PMCID: PMC3667647  PMID: 19271866
semantics; iconicity; sign language; word recognition; psycholinguistics
18.  How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language 
To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H₂¹⁵O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.
doi:10.3389/fpsyg.2014.00484
PMCID: PMC4033845  PMID: 24904497
American Sign Language; audio-visual English; bimodal bilinguals; PET; fMRI
19.  How handshape type can distinguish between nouns and verbs in homesign 
All established languages, spoken or signed, make a distinction between nouns and verbs. Even a young sign language emerging within a family of deaf individuals has been found to mark the noun-verb distinction, and to use handshape type to do so. Here we ask whether handshape type is used to mark the noun-verb distinction in a gesture system invented by a deaf child who does not have access to a usable model of either spoken or signed language. The child produces homesigns that have linguistic structure, but receives from his hearing parents co-speech gestures that are structured differently from his own gestures. Thus, unlike users of established and emerging languages, the homesigner is a producer of his system but does not receive it from others. Nevertheless, we found that the child used handshape type to mark the distinction between nouns and verbs at the early stages of development. The noun-verb distinction is thus so fundamental to language that it can arise in a homesign system not shared with others. We also found that the child abandoned handshape type as a device for distinguishing nouns from verbs at just the moment when he developed a combinatorial system of handshape and motion components that marked the distinction. The way the noun-verb distinction is marked thus depends on the full array of linguistic devices available within the system.
doi:10.1075/gest.13.3.05hun
PMCID: PMC4245027  PMID: 25435844
20.  Deaf Mothers and Breastfeeding: Do Unique Features of Deaf Culture and Language Support Breastfeeding Success? 
Background
Deaf mothers who use American Sign Language (ASL) consider themselves a linguistic minority group, with specific cultural practices. Rarely has this group been engaged in infant-feeding research.
Objectives
To understand how ASL-using Deaf mothers learn about infant feeding and to identify their breastfeeding challenges.
Methods
Using a community-based participatory research (CBPR) approach, we conducted four focus groups with Deaf mothers who had at least one child aged 0–5 years. A script was developed using a social ecological model (SEM) to capture multiple levels of influence. All groups were conducted in ASL, filmed, and transcribed into English. Deaf and hearing researchers analyzed data by coding themes within each SEM level.
Results
Fifteen mothers participated. All had initiated breastfeeding with their most recent child. Breastfeeding duration for eight of the mothers ranged from three weeks to 12 months. Seven of the mothers were still breastfeeding, the longest for 19 months. Those mothers who breastfed longer described a supportive social environment and the ability to surmount challenges. Participants described characteristics of Deaf culture such as direct communication, sharing information, use of technologies, language access through interpreters and ASL-using providers, and strong self-advocacy skills. Finally, mothers used the sign ‘struggle’ to describe their breastfeeding experience. The sign implies a sustained effort over time that leads to success.
Conclusions
In a setting with a large population of Deaf women and ASL-using providers, we identified several aspects of Deaf culture and language that support breastfeeding mothers across institutional, community, and interpersonal levels of the SEM.
doi:10.1177/0890334413476921
PMCID: PMC4112581  PMID: 23492762
21.  Using a social marketing framework to evaluate recruitment of a prospective study of genetic counseling and testing for the deaf community 
Background
Recruiting deaf and hard-of-hearing participants, particularly sign language-users, for genetics health service research is challenging due to communication barriers, mistrust toward genetics, and researchers’ unfamiliarity with deaf people. Feelings of social exclusion and lack of social cohesion between researchers and the Deaf community are factors to consider. Social marketing is effective for recruiting hard-to-reach populations because it fosters social inclusion and cohesion by focusing on the targeted audience’s needs. For the deaf population this includes recognizing their cultural and linguistic diversity, their geography, and their systems for information exchange. Here we use concepts and language from social marketing to evaluate our effectiveness in engaging a U.S. deaf population in a prospective, longitudinal genetic counseling and testing study.
Methods
The study design was interpreted in terms of a social marketing mix of Product, Price, Place, and Promotion. Price addressed linguistic diversity by including a variety of communication technologies and certified interpreters to facilitate communication; Place addressed geography by including community-based participation locations; Promotion addressed information exchange by using multiple recruitment strategies. Regression analyses examined the study design’s effectiveness in recruiting a culturally and linguistically diverse sample.
Results
A total of 271 individuals were enrolled: 66.1% were American Sign Language (ASL)-users, 19.9% ASL + English-users, and 12.6% English-users. Language was significantly associated with communication technology, participation location, and recruitment. Videophone and interpreters were more likely to be used for communication between ASL-users and researchers, while voice telephone and no interpreters were preferred by English-users (Price). ASL-users were more likely to participate in community-based locations, while English-users preferred medically-based locations (Place). English-users were more likely to be recruited through mass media (Promotion), while ASL-users were more likely to be recruited through community events and to respond to messaging that emphasized inclusion of a Deaf perspective.
Conclusions
This study design effectively engaged the deaf population, particularly sign language-users. Results suggest that the deaf population’s cultural and linguistic diversity, geography, and forms of information exchange must be taken into account in study designs for successful recruitment. A social marketing approach that incorporates critical social determinants of health provides a novel and important framework for genetics health service research targeting specific, and hard-to-reach, underserved groups.
doi:10.1186/1471-2288-13-145
PMCID: PMC3924226  PMID: 24274380
Social marketing; Genomic medicine; Deaf; Hard-of-hearing; American Sign Language; Genetic testing; Health disparity; Health service research; Minority groups; Communities; Study design
22.  Bilingual processing of ASL-English code-blends: The consequences of accessing two lexical representations simultaneously 
Journal of Memory and Language  2012;67(1):199-210.
Bilinguals who are fluent in American Sign Language (ASL) and English often produce code-blends - simultaneously articulating a sign and a word while conversing with other ASL-English bilinguals. To investigate the cognitive mechanisms underlying code-blend processing, we compared picture-naming times (Experiment 1) and semantic categorization times (Experiment 2) for code-blends versus ASL signs and English words produced alone. In production, code-blending did not slow lexical retrieval for ASL and actually facilitated access to low-frequency signs. However, code-blending delayed speech production because bimodal bilinguals synchronized English and ASL lexical onsets. In comprehension, code-blending speeded access to both languages. Bimodal bilinguals’ ability to produce code-blends without any cost to ASL implies that the language system either has (or can develop) a mechanism for switching off competition to allow simultaneous production of close competitors. Code-blend facilitation effects during comprehension likely reflect cross-linguistic (and cross-modal) integration at the phonological and/or semantic levels. The absence of any consistent processing costs for code-blending illustrates a surprising limitation on dual-task costs and may explain why bimodal bilinguals code-blend more often than they code-switch.
doi:10.1016/j.jml.2012.04.005
PMCID: PMC3389804  PMID: 22773886
bilingualism; lexical access; sign language
23.  Bimodal bilingualism
Speech–sign or “bimodal” bilingualism is exceptional because distinct modalities allow for simultaneous production of two languages. We investigated the ramifications of this phenomenon for models of language production by eliciting language mixing from eleven hearing native users of American Sign Language (ASL) and English. Instead of switching between languages, bilinguals frequently produced code-blends (simultaneously produced English words and ASL signs). Code-blends resembled co-speech gesture with respect to synchronous vocal–manual timing and semantic equivalence. When ASL was the Matrix Language, no single-word code-blends were observed, suggesting stronger inhibition of English than ASL for these proficient bilinguals. We propose a model that accounts for similarities between co-speech gesture and code-blending and assumes interactions between ASL and English Formulators. The findings constrain language production models by demonstrating the possibility of simultaneously selecting two lexical representations (but not two propositions) for linguistic expression and by suggesting that lexical suppression is computationally more costly than lexical selection.
doi:10.1017/S1366728907003203
PMCID: PMC2600850  PMID: 19079743
24.  The Biology of Linguistic Expression Impacts Neural Correlates for Spatial Language 
Journal of Cognitive Neuroscience  2012;25(4):517-533.
Biological differences between signed and spoken languages may be most evident in the expression of spatial information. PET was used to investigate the neural substrates supporting the production of spatial language in American Sign Language as expressed by classifier constructions, in which handshape indicates object type and the location/motion of the hand iconically depicts the location/motion of a referent object. Deaf native signers performed a picture description task in which they overtly named objects or produced classifier constructions that varied in location, motion, or object type. In contrast to the expression of location and motion, the production of both lexical signs and object type classifier morphemes engaged left inferior frontal cortex and left inferior temporal cortex, supporting the hypothesis that unlike the location and motion components of a classifier construction, classifier handshapes are categorical morphemes that are retrieved via left hemisphere language regions. In addition, lexical signs engaged the anterior temporal lobes to a greater extent than classifier constructions, which we suggest reflects increased semantic processing required to name individual objects compared with simply indicating the type of object. Both location and motion classifier constructions engaged bilateral superior parietal cortex, with some evidence that the expression of static locations differentially engaged the left intraparietal sulcus. We argue that bilateral parietal activation reflects the biological underpinnings of sign language. To express spatial information, signers must transform visual–spatial representations into a body-centered reference frame and reach toward target locations within signing space.
doi:10.1162/jocn_a_00339
PMCID: PMC3715382  PMID: 23249348
25.  Symbiotic symbolization by hand and mouth in sign language
Semiotica  2009;2009(174):241-275.
Current conceptions of human language include a gestural component in the communicative event. However, determining how the linguistic and gestural signals are distinguished, how each is structured, and how they interact still poses a challenge for the construction of a comprehensive model of language. This study attempts to advance our understanding of these issues with evidence from sign language. The study adopts McNeill’s criteria for distinguishing gestures from the linguistically organized signal, and provides a brief description of the linguistic organization of sign languages. Focusing on the subcategory of iconic gestures, the paper shows that signers create iconic gestures with the mouth, an articulator that acts symbiotically with the hands to complement the linguistic description of objects and events. A new distinction between the mimetic replica and the iconic symbol accounts for the nature and distribution of iconic mouth gestures and distinguishes them from mimetic uses of the mouth. Symbiotic symbolization by hand and mouth is a salient feature of human language, regardless of whether the primary linguistic modality is oral or manual. Speakers gesture with their hands, and signers gesture with their mouths.
doi:10.1515/semi.2009.035
PMCID: PMC2863338  PMID: 20445832
sign language; gesture; mouth gesture; iconic; hand and mouth; symbolization
