Results 1-6 (6)
 

1.  Atypical audio-visual speech perception and McGurk effects in children with specific language impairment 
Audiovisual speech perception of children with specific language impairment (SLI) and children with typical language development (TLD) was compared in two experiments using /aCa/ syllables presented in the context of a masking release paradigm. Children had to repeat syllables presented in auditory-alone, visual-alone (speechreading), audiovisual congruent, and incongruent (McGurk) conditions. Stimuli were masked by either stationary (ST) or amplitude-modulated (AM) noise. Although children with SLI were less accurate in auditory and audiovisual speech perception, they showed an auditory masking release effect similar to that of children with TLD. Children with SLI also gave fewer correct responses in speechreading than children with TLD, indicating an impairment in the phonemic processing of visual speech information. In response to McGurk stimuli, children with TLD showed more fusions in AM noise than in ST noise, a consequence of the auditory masking release effect and of the influence of visual information. Children with SLI did not show this effect systematically, suggesting that they were less influenced by visual speech. However, when the visual cues were easily identified, the profile of responses to McGurk stimuli was similar in both groups, suggesting that children with SLI do not suffer from an impairment of audiovisual integration. An analysis of the percentage of information transmitted revealed a deficit in the children with SLI, particularly for the place-of-articulation feature. Taken together, the data support the hypothesis of intact peripheral processing of auditory speech information coupled with a supramodal deficit of phonemic categorization in children with SLI. Clinical implications are discussed.
doi:10.3389/fpsyg.2014.00422
PMCID: PMC4033223  PMID: 24904454
multisensory speech perception; specific language impairment; McGurk effects; audio-visual speech integration; masking release
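
The "percentage of information transmitted" analysis mentioned in item 1 is, in feature-transmission studies of this kind, typically the relative transmitted information measure of Miller and Nicely (1955): the mutual information between presented and reported feature values, normalized by the stimulus entropy. A minimal sketch, assuming a stimulus-by-response confusion matrix; the matrix values below are invented for illustration and are not data from the paper:

```python
import numpy as np

def percent_info_transmitted(confusions):
    """Relative information transmitted (Miller & Nicely, 1955) from a
    stimulus x response confusion matrix: 100 * T(x;y) / H(x)."""
    p = np.asarray(confusions, dtype=float)
    p /= p.sum()                       # joint probabilities p(x, y)
    px = p.sum(axis=1)                 # stimulus marginals p(x)
    py = p.sum(axis=0)                 # response marginals p(y)

    def entropy(q):
        q = q[q > 0]                   # treat 0 * log(0) as 0
        return -(q * np.log2(q)).sum()

    # Mutual information T(x;y) = H(x) + H(y) - H(x, y)
    t = entropy(px) + entropy(py) - entropy(p.ravel())
    return 100.0 * t / entropy(px)

# Hypothetical confusion matrix for a place-of-articulation feature
# (rows: presented bilabial/alveolar/velar; columns: reported category).
conf = [[18, 1, 1],
        [3, 14, 3],
        [2, 4, 14]]
print(f"{percent_info_transmitted(conf):.1f}% transmitted")
```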
2.  How is the McGurk effect modulated by Cued Speech in deaf and hearing adults? 
Speech perception for both hearing and deaf people involves an integrative process between auditory and lip-reading information. In order to disambiguate information from the lips, manual cues from Cued Speech may be added. Cued Speech (CS) is a system of manual aids developed to help deaf people to clearly and completely understand speech visually (Cornett, 1967). Within this system, both labial and manual information, as lone input sources, remain ambiguous. Perceivers therefore have to combine both types of information in order to obtain one coherent percept. In this study, we examined how audio-visual (AV) integration is affected by the presence of manual cues and on which form of information (auditory, labial, or manual) CS receivers primarily rely. To address this issue, we designed an experiment using AV McGurk stimuli (audio /pa/ and lip-reading /ka/) produced with or without manual cues. The manual cue was congruent with either the auditory information, the lip information, or the expected fusion. Participants were asked to repeat the perceived syllable aloud. Their responses were then classified into four categories: audio (when the response was /pa/), lip-reading (when the response was /ka/), fusion (when the response was /ta/), and other (when the response was something other than /pa/, /ka/, or /ta/). Data were collected from hearing-impaired individuals who were experts in CS (all of whom had either cochlear implants or binaural hearing aids; N = 8), hearing individuals who were experts in CS (N = 14), and hearing individuals who were completely naïve to CS (N = 15). Results confirmed that, like hearing people, deaf people can merge auditory and lip-reading information into a single unified percept. Without manual cues, McGurk stimuli induced the same percentage of fusion responses in both groups. Results also suggest that manual cues can modify AV integration and that their impact differs between hearing and deaf people.
doi:10.3389/fpsyg.2014.00416
PMCID: PMC4032946  PMID: 24904451
multimodal speech perception; Cued Speech; cochlear implant; deafness; audio-visual speech integration
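
The four-way response coding described in item 2 (audio /pa/, lip-reading /ka/, fusion /ta/, other) is straightforward to express in code. A minimal sketch: the category labels come from the abstract, while the normalization of transcribed responses is an assumption:

```python
# Map each McGurk response onto the four coding categories from item 2.
RESPONSE_CATEGORIES = {"pa": "audio", "ka": "lip-reading", "ta": "fusion"}

def classify_response(response: str) -> str:
    """Classify a repeated syllable; anything outside /pa/, /ka/, /ta/
    falls into the "other" category."""
    syllable = response.strip().lower().strip("/")   # assumed normalization
    return RESPONSE_CATEGORIES.get(syllable, "other")

for r in ["pa", "ta", "ka", "ba"]:
    print(f"/{r}/ -> {classify_response(r)}")
```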
3.  Symbolic Number Abilities Predict Later Approximate Number System Acuity in Preschool Children 
PLoS ONE  2014;9(3):e91839.
An ongoing debate in research on numerical cognition concerns the extent to which the approximate number system and symbolic number knowledge influence each other during development. The current study aims at establishing the direction of the developmental association between these two kinds of abilities at an early age. Fifty-seven children aged 3–4 years completed two assessments at a 7-month interval. In each assessment, children's precision in discriminating numerosities and their capacity to manipulate number words and Arabic digits were measured. By comparing relationships between pairs of measures across the two time points, we were able to assess the predictive direction of the link. Our data indicate that both cardinality proficiency and symbolic number knowledge predict later accuracy in numerosity comparison, whereas the reverse links are not significant. The present findings are the first to provide longitudinal evidence that the early acquisition of symbolic numbers is an important precursor in the developmental refinement of the approximate number representation system.
doi:10.1371/journal.pone.0091839
PMCID: PMC3956743  PMID: 24637785
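
The "comparing relationships between pairs of measures across the two time points" logic in item 3 is a cross-lagged design: each Time-1 measure is tested as a predictor of the other measure at Time 2, controlling for that measure's own Time-1 level. A minimal sketch with simulated scores; the variable names and the partial-correlation approach are illustrative assumptions, not the paper's exact statistical model:

```python
import numpy as np

rng = np.random.default_rng(0)        # toy data standing in for real scores
n = 57
symbolic_t1 = rng.normal(size=n)
ans_t1 = 0.3 * symbolic_t1 + rng.normal(size=n)
ans_t2 = 0.5 * symbolic_t1 + 0.4 * ans_t1 + rng.normal(size=n)
symbolic_t2 = 0.6 * symbolic_t1 + rng.normal(size=n)

def partial_corr(x, y, z):
    """Correlation of x and y after regressing z out of both."""
    def resid(v):
        beta = np.polyfit(z, v, 1)
        return v - np.polyval(beta, z)
    return np.corrcoef(resid(x), resid(y))[0, 1]

# Cross-lagged paths, each controlling for the outcome's Time-1 level:
print("symbolic T1 -> ANS T2:", partial_corr(symbolic_t1, ans_t2, ans_t1))
print("ANS T1 -> symbolic T2:", partial_corr(ans_t1, symbolic_t2, symbolic_t1))
```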
4.  Sleep May Not Benefit Learning New Phonological Categories 
It is known that sleep participates in memory consolidation processes. However, results obtained in the auditory domain are inconsistent. Here we investigated the role of post-training sleep in auditory training and in learning new phonological categories, a fundamental process in speech processing. Adult French speakers were trained to identify two synthetic speech variants of the syllable /də/ during two 1-h training sessions. The 12-h interval between the two sessions either did (8 p.m. to 8 a.m. ± 1 h) or did not (8 a.m. to 8 p.m. ± 1 h) include a sleep period. In both groups, identification performance improved dramatically over the first training session, then decreased slightly over the 12-h offline interval while remaining above chance level. Still, reaction times (RTs) were slower after sleep, suggesting greater attention devoted to the newly learned phonological contrast. Nevertheless, our results suggest that post-training sleep does not benefit the consolidation or stabilization of new phonological categories any more than wakefulness does.
doi:10.3389/fneur.2012.00097
PMCID: PMC3379727  PMID: 22723789
sleep; auditory training; identification; new phonological categories
5.  Audiovisual Segregation in Cochlear Implant Users 
PLoS ONE  2012;7(3):e33113.
It has traditionally been assumed that cochlear implant users de facto perform atypically in audiovisual tasks. However, a recent study that combined an auditory task with visual distractors suggests that only those cochlear implant users who are not proficient at recognizing speech sounds may show abnormal audiovisual interactions. The present study aims at reinforcing this notion by investigating the audiovisual segregation abilities of cochlear implant users in a visual task with auditory distractors. Speechreading was assessed in two groups of cochlear implant users (proficient and non-proficient at sound recognition), as well as in normal controls. A visual speech recognition task (i.e., speechreading) was administered either in silence or in combination with three types of auditory distractors: (i) noise, (ii) reversed speech, and (iii) unaltered speech. Cochlear implant users proficient at speech recognition performed like normal controls in all conditions, whereas non-proficient users showed significantly different audiovisual segregation patterns in both speech conditions. These results confirm that normal-like audiovisual segregation is possible in highly skilled cochlear implant users and, consequently, that proficient and non-proficient CI users cannot be lumped into a single group. This important distinction must be taken into account in further studies of audiovisual interactions in cochlear implant users.
doi:10.1371/journal.pone.0033113
PMCID: PMC3299746  PMID: 22427963
6.  Cued Speech for Enhancing Speech Perception and First Language Development of Children With Cochlear Implants 
Trends in Amplification  2010;14(2):96-112.
Nearly 300 million people worldwide have moderate to profound hearing loss. Hearing impairment, if not adequately managed, has a strong socioeconomic and affective impact on individuals. Cochlear implants have become the most effective vehicle for helping profoundly deaf children and adults to understand spoken language, to be sensitive to environmental sounds, and, to some extent, to listen to music. The auditory information delivered by a cochlear implant nevertheless remains suboptimal for speech perception, because the signal is spectrally degraded and lacks some of the fine temporal acoustic structure. In this article, we discuss research revealing the multimodal nature of speech perception in normally hearing individuals, with important inter-subject variability in the weighting of auditory or visual information. We also discuss how audio-visual training, via Cued Speech, can improve speech perception in cochlear implantees, particularly in noisy contexts. Cued Speech is a system that makes use of visual information from speechreading combined with hand shapes positioned in different places around the face in order to deliver completely unambiguous information about the syllables and the phonemes of spoken language. We support the view that exposure to Cued Speech before or after implantation could be important in the aural rehabilitation process of cochlear implantees. We describe five lines of research that are converging to support the view that Cued Speech can enhance speech perception in individuals with cochlear implants.
doi:10.1177/1084713810375567
PMCID: PMC4111351  PMID: 20724357
cued speech; cochlear implants; brain plasticity; phonological processing; audiovisual integration
