PMC

Results 1-6 (6)
1.  An fMRI Study of Audiovisual Speech Perception Reveals Multisensory Interactions in Auditory Cortex 
PLoS ONE  2013;8(6):e68959.
Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in posterior superior temporal sulcus (pSTS) and this site has been implicated in multimodal integration. Traditionally, multisensory interactions are considered high-level processing that engages heteromodal association cortices (such as STS). Recent work, however, challenges this notion and suggests that multisensory interactions may occur in low-level unimodal sensory cortices. While previous audiovisual speech studies demonstrate that high-level multisensory interactions occur in pSTS, what remains unclear is how early in the processing hierarchy these multisensory interactions may occur. The goal of the present fMRI experiment is to investigate how visual speech can influence activity in auditory cortex above and beyond its response to auditory speech. In an audiovisual speech experiment, subjects were presented with auditory speech with and without congruent visual input. Holding the auditory stimulus constant across the experiment, we investigated how the addition of visual speech influences activity in auditory cortex. We demonstrate that congruent visual speech increases activity in auditory cortex.
doi:10.1371/journal.pone.0068959
PMCID: PMC3689691  PMID: 23805332
2.  Response Bias Modulates the Speech Motor System during Syllable Discrimination 
Recent evidence suggests that the speech motor system may play a significant role in speech perception. Repetitive transcranial magnetic stimulation (TMS) applied to a speech region of premotor cortex impaired syllable identification, while stimulation of motor areas for different articulators selectively facilitated identification of phonemes relying on those articulators. However, in these experiments performance was not corrected for response bias. It is not currently known how response bias modulates activity in these networks. The present functional magnetic resonance imaging experiment was designed to produce specific, measurable changes in response bias in a speech perception task. Minimal consonant-vowel stimulus pairs were presented between volume acquisitions for same-different discrimination. Speech stimuli were embedded in Gaussian noise at the psychophysically determined threshold level. We manipulated bias by changing the ratio of same-to-different trials: 1:3, 1:2, 1:1, 2:1, 3:1. Ratios were blocked by run and subjects were cued to the upcoming ratio at the beginning of each run. The stimuli were physically identical across runs. Response bias (criterion, C) was measured in individual subjects for each ratio condition. Group mean bias varied in the expected direction. We predicted that activation in frontal but not temporal brain regions would co-vary with bias. Group-level regression of bias scores on percent signal change revealed a fronto-parietal network of motor and sensory-motor brain regions that were sensitive to changes in response bias. We identified several pre- and post-central clusters in the left hemisphere that overlap well with TMS targets from the aforementioned studies. Importantly, activity in these regions covaried with response bias even while the perceptual targets remained constant. Thus, previous results suggesting that speech motor cortex participates directly in the perceptual analysis of speech should be called into question.
doi:10.3389/fpsyg.2012.00157
PMCID: PMC3361017  PMID: 22723787
signal detection; response bias; speech perception; fMRI
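The criterion (C) and sensitivity measures described above come from standard signal detection theory: both are computed from z-transformed hit and false-alarm rates. A minimal sketch using only the Python standard library (the rates below are illustrative, not the study's data):

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Return (d_prime, criterion_C) from hit and false-alarm rates.

    Under a common convention, C > 0 indicates a conservative bias
    (fewer "yes"/"same" responses) and C < 0 a liberal bias.
    """
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Illustrative rates only
d, c = sdt_measures(0.60, 0.05)   # d' ≈ 1.90, C ≈ 0.70 (conservative)
```

Because d′ and C are computed from the same two rates but in orthogonal combinations, a shift in the same-to-different trial ratio can move C while leaving sensitivity unchanged, which is the manipulation the study exploits.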
3.  Hierarchical Organization of Human Auditory Cortex: Evidence from Acoustic Invariance in the Response to Intelligible Speech 
Cerebral Cortex (New York, NY)  2010;20(10):2486-2495.
Hierarchical organization of human auditory cortex has been inferred from functional imaging observations that core regions respond to simple stimuli (tones) whereas downstream regions are selectively responsive to more complex stimuli (band-pass noise, speech). It is assumed that core regions code low-level features, which are combined at higher levels in the auditory system to yield more abstract neural codes. However, this hypothesis has not been critically evaluated in the auditory domain. We assessed sensitivity to acoustic variation within intelligible versus unintelligible speech using functional magnetic resonance imaging and a multivariate pattern analysis. Core auditory regions on the dorsal plane of the superior temporal gyrus exhibited high levels of sensitivity to acoustic features, whereas downstream auditory regions in both anterior superior temporal sulcus and posterior superior temporal sulcus (pSTS) bilaterally showed greater sensitivity to whether speech was intelligible or not and less sensitivity to acoustic variation (acoustic invariance). Acoustic invariance was most pronounced in more posterior STS regions of both hemispheres, which we argue support phonological-level representations. This finding provides direct evidence for a hierarchical organization of human auditory cortex and clarifies the cortical pathways supporting the processing of intelligible speech.
doi:10.1093/cercor/bhp318
PMCID: PMC2936804  PMID: 20100898
auditory cortex; fMRI; Heschl's gyrus; hierarchical organization; intelligible speech; language; multivariate pattern classification; speech; superior temporal sulcus
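The multivariate pattern analysis logic above can be illustrated with a toy correlation-based decoder: if voxel patterns in a region can be classified by condition above chance, the region carries information about that distinction; at-chance decoding of acoustic variation alongside robust decoding of intelligibility is what "acoustic invariance" means here. This is a simplified stand-in for the authors' pipeline, with all data simulated:

```python
import numpy as np

def correlation_classifier(train_patterns, train_labels, test_pattern):
    """Label a test voxel pattern by the class whose mean training
    pattern correlates best with it (a toy stand-in for MVPA decoding)."""
    best_label, best_r = None, -np.inf
    for label in sorted(set(train_labels)):
        centroid = np.mean([p for p, l in zip(train_patterns, train_labels)
                            if l == label], axis=0)
        r = np.corrcoef(centroid, test_pattern)[0, 1]
        if r > best_r:
            best_label, best_r = label, r
    return best_label

# Simulated "voxel patterns" for two conditions (50 voxels, 10 trials each)
rng = np.random.default_rng(1)
a_mean, b_mean = rng.standard_normal(50), rng.standard_normal(50)
train = ([a_mean + 0.3 * rng.standard_normal(50) for _ in range(10)] +
         [b_mean + 0.3 * rng.standard_normal(50) for _ in range(10)])
labels = ["A"] * 10 + ["B"] * 10
test = a_mean + 0.3 * rng.standard_normal(50)
print(correlation_classifier(train, labels, test))  # -> A
```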
4.  Functional Anatomy of Language and Music Perception: Temporal and Structural Factors Investigated Using fMRI 
Language and music exhibit similar acoustic and structural properties and both appear to be uniquely human. Several recent studies suggest that speech and music perception recruit shared computational systems, and a common substrate in Broca’s area for hierarchical processing has recently been proposed. However, this claim has not been tested by directly comparing the spatial distribution of activations to speech and music processing within-subjects. In the present study, participants listened to sentences, scrambled sentences, and novel melodies. As expected, large swaths of activation for both sentences and melodies were found bilaterally in the superior temporal lobe, overlapping in portions of auditory cortex. However, substantial nonoverlap was also found: sentences elicited more ventrolateral activation, whereas melodies elicited a more dorsomedial pattern, extending into the parietal lobe. Multivariate pattern classification analyses indicate that even within the regions of BOLD response overlap, speech and music elicit distinguishable patterns of activation. Regions involved in processing hierarchically related aspects of sentence perception were identified by contrasting sentences with scrambled sentences, revealing a bilateral temporal lobe network. Music perception showed no overlap whatsoever with this network. Broca’s area was not robustly activated by any stimulus type. Overall, these findings suggest that basic hierarchical processing for music and speech recruits distinct cortical networks, neither of which involves Broca’s area. We suggest that previous claims are based on data from tasks that tap higher-order cognitive processes, such as working memory and/or cognitive control, which can operate in both speech and music domains.
doi:10.1523/JNEUROSCI.4515-10.2011
PMCID: PMC3066175  PMID: 21389239
speech perception; music perception; anterior temporal cortex; syntax; fMRI
5.  Detection of sinusoidal amplitude modulation in logarithmic frequency sweeps across wide regions of the spectrum 
Hearing research  2010;262(1-2):9-18.
Many natural sounds such as speech contain concurrent amplitude and frequency modulation (AM and FM), with the FM components often in the form of directional frequency sweeps or glides. Most studies of modulation coding, however, have employed one modulation type in stationary carriers, and in cases where mixed-modulation sounds have been used, the FM component has typically been confined to an extremely narrow range within a critical band. The current study examined the ability to detect AM signals carried by broad logarithmic frequency sweeps using a 2-alternative forced-choice adaptive psychophysical design. AM detection thresholds were measured as a function of signal modulation rate and carrier sweep frequency region. Thresholds for detection of AM in a sweep carrier ranged from -8 dB for an AM rate of 8 Hz to -30 dB at 128 Hz. Compared to thresholds obtained for stationary carriers (pure tones and filtered Gaussian noise), detection of AM carried by frequency sweeps was substantially poorer at low (by 12 dB at 8 Hz) but not high modulation rates. Several trends in the data, including sweep- versus stationary-carrier threshold patterns and effects of frequency region, were predicted from a modulation filterbank model with an envelope-correlation decision statistic.
doi:10.1016/j.heares.2010.02.002
PMCID: PMC3045847  PMID: 20144700
psychoacoustics; frequency sweep; FM; modulation; psychophysics
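A logarithmic frequency sweep carrier with sinusoidal AM, the stimulus class studied above, can be synthesized as follows. This is a sketch under assumed conventions (the study's exact parameters and calibration may differ); with AM depth expressed as 20·log10(m), the reported −8 dB threshold corresponds to a modulation index of about 0.4:

```python
import numpy as np

def am_log_sweep(f0, f1, dur, am_rate, am_depth_db, fs=44100):
    """Sinusoidal AM imposed on a logarithmic frequency sweep from f0 to
    f1 Hz over dur seconds. am_depth_db is 20*log10(m): -8 dB -> m ≈ 0.4."""
    t = np.arange(int(dur * fs)) / fs
    k = f1 / f0
    # Phase of a log sweep: integral of the instantaneous frequency f0 * k**(t/dur)
    phase = 2 * np.pi * f0 * dur / np.log(k) * (k ** (t / dur) - 1)
    m = 10 ** (am_depth_db / 20)                  # modulation index
    envelope = 1 + m * np.sin(2 * np.pi * am_rate * t)
    return envelope * np.sin(phase)

# One-octave-per-0.33-s upward sweep with 8 Hz AM at the -8 dB threshold depth
sig = am_log_sweep(f0=500, f1=4000, dur=1.0, am_rate=8, am_depth_db=-8)
```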
6.  Detection of Large Interaural Delays and Its Implication for Models of Binaural Interaction  
The interaural time difference (ITD) is a major cue to sound localization along the horizontal plane. The maximum natural ITD occurs when a sound source is positioned opposite to one ear. We examined the ability of owls and humans to detect large ITDs in sounds presented through headphones. Stimuli consisted of either broad or narrow bands of Gaussian noise, 100 ms in duration. Using headphones allowed presentation of ITDs that are greater than the maximum natural ITD. Owls were able to discriminate a sound leading to the left ear from one leading to the right ear, for ITDs that are 5 times the maximum natural delay. Neural recordings from optic-tectum neurons, however, show that best ITDs are usually well within the natural range and are never as large as ITDs that are behaviorally discriminable. A model of binaural cross-correlation with short delay lines is shown to explain behavioral detection of large ITDs. The model uses curved trajectories of a cross-correlation pattern as the basis for detection. These trajectories represent side peaks of neural ITD-tuning curves and successfully predict localization reversals by both owls and human subjects.
doi:10.1007/s101620020006
PMCID: PMC3202365  PMID: 12083726
interaural; binaural; owl; ITD
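The binaural cross-correlation model described above treats the ITD as the internal delay at which the two ear signals are maximally similar. A minimal sketch of that computation (illustrative parameters, not the study's stimuli):

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (seconds) as the lag that
    maximizes the cross-correlation of the two ear signals. Positive
    values mean the sound leads in the left ear."""
    corr = np.correlate(right, left, mode="full")
    lags = np.arange(-(len(left) - 1), len(right))
    return lags[np.argmax(corr)] / fs

fs = 48000
rng = np.random.default_rng(0)
noise = rng.standard_normal(fs // 10)            # 100 ms Gaussian noise
delay = 24                                       # 0.5 ms expressed in samples
left = noise
right = np.concatenate([np.zeros(delay), noise[:-delay]])
print(estimate_itd(left, right, fs))             # ≈ 0.0005 s
```

For narrowband carriers the correlation pattern is periodic, producing the side peaks that the model invokes to explain both detection of unnaturally large ITDs and localization reversals.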