Results 1-20 (20)
 

1.  Detection of the arcuate fasciculus in congenital amusia depends on the tractography algorithm 
The advent of diffusion magnetic resonance imaging (MRI) allows researchers to virtually dissect white matter fiber pathways in the brain in vivo. This, for example, allows us to characterize and quantify how fiber tracts differ across populations in health and disease, and change as a function of training. Based on diffusion MRI, prior literature reports the absence of the arcuate fasciculus (AF) in some control individuals as well as in those with congenital amusia. The complete absence of such a major anatomical tract is surprising given the subtle impairments that characterize amusia. Thus, we hypothesize that failure to detect the AF in this population may relate to the tracking algorithm used, and is not necessarily reflective of their phenotype. Diffusion data in control and amusic individuals were analyzed using three different tracking algorithms: deterministic and probabilistic, the latter modeling either one or two fiber populations. Across the three algorithms, we replicate prior findings of a left greater than right AF volume, but find no group differences or interaction. We detected the AF in all individuals using the probabilistic 2-fiber model; however, tracking failed in some control and amusic individuals when deterministic tractography was applied. These findings show that the ability to detect the AF in our sample depends on the type of tractography algorithm. This raises the question of whether failure to detect the AF in prior studies may be unrelated to the underlying anatomy or phenotype.
doi:10.3389/fpsyg.2015.00009
PMCID: PMC4300860  PMID: 25653637
arcuate fasciculus; congenital amusia; diffusion magnetic resonance imaging; tractography; deterministic; probabilistic; crossing fibers
2.  Representations of specific acoustic patterns in the auditory cortex and hippocampus 
Previous behavioural studies have shown that repeated presentation of a randomly chosen acoustic pattern leads to the unsupervised learning of some of its specific acoustic features. The objective of our study was to determine the neural substrate for the representation of freshly learnt acoustic patterns. Subjects first performed a behavioural task that resulted in the incidental learning of three different noise-like acoustic patterns. During subsequent high-resolution functional magnetic resonance imaging scanning, subjects were then exposed again to these three learnt patterns and to others that had not been learned. Multi-voxel pattern analysis was used to test if the learnt acoustic patterns could be ‘decoded’ from the patterns of activity in the auditory cortex and medial temporal lobe. We found that activity in planum temporale and the hippocampus reliably distinguished between the learnt acoustic patterns. Our results demonstrate that these structures are involved in the neural representation of specific acoustic patterns after they have been learnt.
doi:10.1098/rspb.2014.1000
PMCID: PMC4132675  PMID: 25100695
acoustic patterns; fMRI; auditory cortex; multi-voxel pattern analysis; hippocampus
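The logic of the multi-voxel pattern analysis above is that the identity of a learnt pattern can be read out from distributed activity. A minimal sketch of that idea, using a hypothetical nearest-centroid decoder on synthetic "voxel" patterns (the classifier and cross-validation scheme actually used in the study are not specified in this summary):

```python
import numpy as np

def nearest_centroid_decode(train_X, train_y, test_X):
    """Toy multi-voxel pattern decoder.

    Each class is summarised by the mean of its training patterns (its
    centroid); a test pattern is assigned to the class whose centroid it
    correlates with most strongly. A stand-in illustration only; the
    study's actual classifier is not given here.
    """
    train_y = np.asarray(train_y)
    classes = sorted(set(train_y.tolist()))
    centroids = {c: train_X[train_y == c].mean(axis=0) for c in classes}
    preds = []
    for x in test_X:
        r = {c: np.corrcoef(x, m)[0, 1] for c, m in centroids.items()}
        preds.append(max(r, key=r.get))  # class with highest correlation
    return preds
```

With three synthetic "learnt pattern" prototypes plus noise, the decoder recovers the pattern identity from activity alone, which is the sense in which planum temporale and hippocampal activity "distinguished between the learnt acoustic patterns".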
3.  Distinct critical cerebellar subregions for components of verbal working memory 
Neuropsychologia  2011;50(1):189-197.
A role for the cerebellum in cognition has been proposed based on studies suggesting a profile of cognitive deficits due to cerebellar stroke. Such studies are limited in the determination of the detailed organisation of cerebellar subregions that are critical for different aspects of cognition. In this study we examined the correlation between cognitive performance and cerebellar integrity in a specific degeneration of the cerebellar cortex: Spinocerebellar Ataxia type 6 (SCA6). The results demonstrate a critical relationship between verbal working memory and grey matter density in superior (bilateral lobules VI and crus I of lobule VII) and inferior (bilateral lobules VIIIa and VIIIb, and right lobule IX) parts of the cerebellum. We demonstrate that distinct cerebellar regions subserve different components of the prevalent psychological model for verbal working memory based on a phonological loop. The work confirms the involvement of the cerebellum in verbal working memory and defines specific subsystems for this within the cerebellum.
doi:10.1016/j.neuropsychologia.2011.11.017
PMCID: PMC4040406  PMID: 22133495
SCA-6; Cerebellum; Cognition; MRI; VBM; Neurodegeneration
4.  A brain basis for musical hallucinations
The physiological basis for musical hallucinations (MH) is not understood. One obstacle to understanding has been the lack of a method to manipulate the intensity of hallucination during the course of experiment. Residual inhibition, transient suppression of a phantom percept after the offset of a masking stimulus, has been used in the study of tinnitus. We report here a human subject whose MH were residually inhibited by short periods of music. Magnetoencephalography (MEG) allowed us to examine variation in the underlying oscillatory brain activity in different states. Source-space analysis capable of single-subject inference defined left-lateralised power increases, associated with stronger hallucinations, in the gamma band in left anterior superior temporal gyrus, and in the beta band in motor cortex and posteromedial cortex. The data indicate that these areas form a crucial network in the generation of MH, and are consistent with a model in which MH are generated by persistent reciprocal communication in a predictive coding hierarchy.
doi:10.1016/j.cortex.2013.12.002
PMCID: PMC3969291  PMID: 24445167
Musical hallucinations; Magnetoencephalography; Auditory cortex; Gamma oscillations; Beta oscillations; Predictive coding
5.  Exploring the role of auditory analysis in atypical compared to typical language development
Hearing Research  2014;308(100):129-140.
The relationship between auditory processing and language skills has been debated for decades. Previous findings have been inconsistent, both in typically developing and impaired subjects, including those with dyslexia or specific language impairment. Whether correlations between auditory and language skills are consistent between different populations has hardly been addressed at all. The present work takes an exploratory approach, testing for patterns of correlations across a range of measures of auditory processing. In a recent study, we reported findings from a large cohort of eleven-year-olds on a range of auditory measures, and the data supported a specific role for the processing of short sequences in pitch and time in typical language development. Here we tested whether a group of individuals with dyslexic traits (DT group; n = 28) from the same year group would show the same pattern of correlations between auditory and language skills as the typically developing group (TD group; n = 173). In terms of raw scores, the DT group showed significantly poorer performance on the language but not the auditory measures, including measures of pitch, time and rhythm, and timbre (modulation). In terms of correlations, there was a trend towards decreased correlations between short-sequence processing and language skills, contrasted with a significant increase in correlation for basic, single-sound processing, in particular in the domain of modulation. The data support the notion that the fundamental relationship between auditory and language skills might differ in atypical compared to typical language development, with the implication that merging data or drawing inferences between populations might be problematic. Further examination of the relationship between both basic sound feature analysis and music-like sound analysis and language skills in impaired populations might allow the development of appropriate training strategies.
These might include types of musical training to augment language skills via their common bases in sound sequence analysis.
Highlights
•Auditory and language skills were tested in 28 11-year-olds with dyslexic traits.
•Auditory processing of pitch, rhythm and modulation did not differ from controls.
•The pattern of correlation with language skills differed from that seen in controls.
•Differences in patterns of correlation merit further testing in prospective cohorts.
doi:10.1016/j.heares.2013.09.015
PMCID: PMC3969305  PMID: 24112877
TD, Typically developing; DT, Dyslexic traits
6.  Estimating neural response functions from fMRI 
This paper proposes a methodology for estimating Neural Response Functions (NRFs) from fMRI data. These NRFs describe non-linear relationships between experimental stimuli and neuronal population responses. The method is based on a two-stage model comprising an NRF and a Hemodynamic Response Function (HRF) that are simultaneously fitted to fMRI data using a Bayesian optimization algorithm. This algorithm also produces a model evidence score, providing a formal model comparison method for evaluating alternative NRFs. The HRF is characterized using previously established “Balloon” and BOLD signal models. We illustrate the method with two example applications based on fMRI studies of the auditory system. In the first, we estimate the time constants of repetition suppression and facilitation, and in the second we estimate the parameters of population receptive fields in a tonotopic mapping study.
doi:10.3389/fninf.2014.00048
PMCID: PMC4021120  PMID: 24847246
neural response function; population receptive field; parametric modulation; Bayesian inference; auditory perception; repetition suppression; Tonotopic Mapping; Balloon model
7.  Predictive coding and pitch processing in the auditory cortex 
Journal of Cognitive Neuroscience  2011;23(10).
In this work, we show that electrophysiological responses during pitch perception are best explained by distributed activity in a hierarchy of cortical sources and, crucially, that the effective connectivity between these sources is modulated with pitch strength. Local field potentials were recorded in two subjects from primary auditory cortex and adjacent auditory cortical areas along the axis of Heschl's gyrus (HG) while they listened to stimuli of varying pitch strength. Dynamic Causal Modelling was used to compare system architectures that might explain the recorded activity. The data show that representation of pitch requires an interaction between non-primary and primary auditory cortex along HG that is consistent with the principle of predictive coding.
doi:10.1162/jocn_a_00021
PMCID: PMC3821983  PMID: 21452943
8.  Segregation of complex acoustic scenes based on temporal coherence 
eLife  2013;2:e00699.
In contrast to the complex acoustic environments we encounter everyday, most studies of auditory segregation have used relatively simple signals. Here, we synthesized a new stimulus to examine the detection of coherent patterns (‘figures’) from overlapping ‘background’ signals. In a series of experiments, we demonstrate that human listeners are remarkably sensitive to the emergence of such figures and can tolerate a variety of spectral and temporal perturbations. This robust behavior is consistent with the existence of automatic auditory segregation mechanisms that are highly sensitive to correlations across frequency and time. The observed behavior cannot be explained purely on the basis of adaptation-based models used to explain the segregation of deterministic narrowband signals. We show that the present results are consistent with the predictions of a model of auditory perceptual organization based on temporal coherence. Our data thus support a role for temporal coherence as an organizational principle underlying auditory segregation.
DOI: http://dx.doi.org/10.7554/eLife.00699.001
eLife digest
Even when seated in the middle of a crowded restaurant, we are still able to distinguish the speech of the person sitting opposite us from the conversations of fellow diners and a host of other background noise. While we generally perform this task almost effortlessly, it is unclear how the brain solves what is in reality a complex information processing problem.
In the 1970s, researchers began to address this question using stimuli consisting of simple tones. When subjects are played a sequence of alternating high and low frequency tones, they perceive them as two independent streams of sound. Similar experiments in macaque monkeys reveal that each stream activates a different area of auditory cortex, suggesting that the brain may distinguish acoustic stimuli on the basis of their frequency.
However, the simple tones that are used in laboratory experiments bear little resemblance to the complex sounds we encounter in everyday life. These are often made up of multiple frequencies, and overlap—both in frequency and in time—with other sounds in the environment. Moreover, recent experiments have shown that if a subject hears two tones simultaneously, he or she perceives them as belonging to a single stream of sound even if they have different frequencies: models that assume that we distinguish stimuli from noise on the basis of frequency alone struggle to explain this observation.
Now, Teki, Chait, et al. have used more complex sounds, in which frequency components of the target stimuli overlap with those of background signals, to obtain new insights into how the brain solves this problem. Subjects were extremely good at discriminating these complex target stimuli from background noise, and computational modelling confirmed that they did so via integration of both frequency and temporal information. The work of Teki, Chait, et al. thus offers the first explanation for our ability to home in on speech and other pertinent sounds, even amidst a sea of background noise.
DOI: http://dx.doi.org/10.7554/eLife.00699.002
doi:10.7554/eLife.00699
PMCID: PMC3721234  PMID: 23898398
auditory scene analysis; temporal coherence; psychophysics; segregation; Human
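The figure-ground stimulus described above can be caricatured as a binary time-frequency mask: background components are redrawn at random in every frame, while the "figure" repeats a fixed set of components across consecutive frames, so only temporal coherence distinguishes it from the background. A sketch with made-up parameters (not the study's actual stimulus-generation code):

```python
import numpy as np

def stochastic_figure_ground(n_frames=40, n_freqs=60, bg_per_frame=10,
                             fig_size=4, fig_onset=20, fig_len=10, rng=None):
    """Binary (frequency x time) mask for a stochastic figure-ground stimulus.

    Background components are redrawn at random in each time frame; the
    'figure' is a fixed set of fig_size components repeated over fig_len
    consecutive frames. Figure and background overlap in frequency, so the
    figure can only be extracted by integrating over time and frequency.
    Parameter values here are illustrative.
    """
    rng = rng or np.random.default_rng(0)
    mask = np.zeros((n_freqs, n_frames), dtype=bool)
    for t in range(n_frames):  # a fresh random background chord per frame
        mask[rng.choice(n_freqs, size=bg_per_frame, replace=False), t] = True
    fig_freqs = rng.choice(n_freqs, size=fig_size, replace=False)
    fig_frames = np.arange(fig_onset, fig_onset + fig_len)
    mask[np.ix_(fig_freqs, fig_frames)] = True  # coherent, repeating figure
    return mask, fig_freqs
```

Each True cell would be rendered as a pure-tone component; no single frame reveals the figure, which is why detection implies integration across frequency and time.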
9.  Features vs. Feelings: Dissociable representations of the acoustic features and valence of aversive sounds 
This study addresses the neuronal representation of aversive sounds that are perceived as unpleasant. Functional magnetic resonance imaging (fMRI) in humans demonstrated responses in the amygdala and auditory cortex to aversive sounds. We show that the amygdala encodes both the acoustic features of a stimulus and its valence (perceived unpleasantness). Dynamic Causal Modelling (DCM) of this system revealed that evoked responses to sounds are relayed to the amygdala via auditory cortex. While acoustic features modulate effective connectivity from auditory cortex to the amygdala, the valence modulates the effective connectivity from amygdala to the auditory cortex. These results support a complex (recurrent) interaction between the auditory cortex and amygdala based on object-level analysis in the auditory cortex that portends the assignment of emotional valence in amygdala that in turn influences the representation of salient information in auditory cortex.
doi:10.1523/JNEUROSCI.1759-12.2012
PMCID: PMC3505833  PMID: 23055488
10.  Navigating the auditory scene: an expert role for the hippocampus 
Over a typical career piano tuners spend tens of thousands of hours exploring a specialized acoustic environment. Tuning requires accurate perception and adjustment of beats in two-note chords that serve as a navigational device to move between points in previously learned acoustic scenes. It is a two-stage process that depends on: firstly, selective listening to beats within frequency windows and, secondly, the subsequent use of those beats to navigate through a complex soundscape. The neuroanatomical substrates underlying brain specialization for such fundamental organization of sound scenes are unknown.
Here, we demonstrate that professional piano tuners are significantly better than controls matched for age and musical ability on a psychophysical task simulating active listening to beats within frequency windows that is based on amplitude modulation rate discrimination. Tuners show a categorical increase in grey matter volume in the right frontal operculum and right superior temporal lobe. Tuners also show a striking enhancement of grey matter volume in the anterior hippocampus, parahippocampal gyrus, and superior temporal gyrus, and an increase in white matter volume in the posterior hippocampus as a function of years of tuning experience. The relationship with grey matter volume is sensitive to years of tuning experience and starting age but not actual age or level of musicality.
Our findings support a role for a core set of regions in the hippocampus and superior temporal cortex in skilled exploration of complex sound scenes in which precise sound ‘templates’ are encoded and consolidated into memory over time in an experience-dependent manner.
doi:10.1523/JNEUROSCI.0082-12.2012
PMCID: PMC3448926  PMID: 22933806
11.  Single-subject oscillatory gamma responses in tinnitus 
Brain  2012;135(10):3089-3100.
This study used magnetoencephalography to record oscillatory activity in a group of 17 patients with chronic tinnitus. Two methods, residual inhibition and residual excitation, were used to bring about transient changes in spontaneous tinnitus intensity in order to measure dynamic tinnitus correlates in individual patients. In residual inhibition, a positive correlation was seen between tinnitus intensity and both delta/theta (6/14 patients) and gamma band (8/14 patients) oscillations in auditory cortex, suggesting an increased thalamocortical input and cortical gamma response, respectively, associated with higher tinnitus states. Conversely, 4/4 patients exhibiting residual excitation demonstrated an inverse correlation between perceived tinnitus intensity and auditory cortex gamma oscillations (with no delta/theta changes) that cannot be explained by existing models. Significant oscillatory power changes were also identified in a variety of cortical regions, most commonly midline lobar regions in the default mode network, cerebellum, insula and anterior temporal lobe. These were highly variable across patients in terms of areas and frequency bands involved, and in direction of power change. We suggest a model based on a local circuit function of cortical gamma-band oscillations as a process of mutual inhibition that might suppress abnormal cortical activity in tinnitus. The work implicates auditory cortex gamma-band oscillations as a fundamental intrinsic mechanism for attenuating phantom auditory perception.
doi:10.1093/brain/aws220
PMCID: PMC3470708  PMID: 22975389
tinnitus; gamma oscillations; mutual inhibition; auditory cortex; magnetoencephalography
12.  Auditory sequence analysis and phonological skill 
This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence.
doi:10.1098/rspb.2012.1817
PMCID: PMC3479813  PMID: 22951739
auditory sequence analysis; pitch; rhythm; phonological skill; adolescence; language
13.  Gamma band pitch responses in human auditory cortex measured with magnetoencephalography 
Neuroimage  2012;59(2-5):1904-1911.
We have previously used direct electrode recordings in two human subjects to identify neural correlates of the perception of pitch (Griffiths, Kumar, Sedley et al., Direct recordings of pitch responses from human auditory cortex, Curr. Biol. 20 (2010), pp. 1128–1132). The present study was carried out to assess virtual-electrode measures of pitch perception based on non-invasive magnetoencephalography (MEG). We recorded pitch responses in 13 healthy volunteers using a passive listening paradigm and the same pitch-evoking stimuli (regular interval noise; RIN) as in the previous study. Source activity was reconstructed using a beamformer approach, which was used to place virtual electrodes in auditory cortex. Time-frequency decomposition of these data revealed oscillatory responses to pitch in the gamma frequency band to occur in Heschl's gyrus from 60 Hz upwards. Direct comparison of these pitch responses to the previous depth electrode recordings shows a striking congruence in terms of spectrotemporal profile and anatomical distribution. These findings provide further support that auditory high gamma oscillations occur in association with RIN pitch stimuli, and validate the use of MEG to assess neural correlates of normal and abnormal pitch perception.
Highlights
► High gamma-band correlates of pitch perception identified with MEG beamforming.
► Results correlate strongly with invasive electrode recordings of same responses.
► Validation of accuracy of MEG beamformer approach.
doi:10.1016/j.neuroimage.2011.08.098
PMCID: PMC3236996  PMID: 21925281
Pitch; Auditory; Magnetoencephalography; Gamma; Beamformer; Perception
14.  Distinct Neural Substrates of Duration-Based and Beat-Based Auditory Timing 
Research on interval timing strongly implicates the cerebellum and the basal ganglia as part of the timing network of the brain. Here we tested the hypothesis that the brain uses differential timing mechanisms and networks—specifically, that the cerebellum subserves the perception of the absolute duration of time intervals, whereas the basal ganglia mediate perception of time intervals relative to a regular beat. In a functional magnetic resonance imaging experiment, we asked human subjects to judge the difference in duration of two successive time intervals as a function of the preceding context of an irregular sequence of clicks (where the task relies on encoding the absolute duration of time intervals) or a regular sequence of clicks (where the regular beat provides an extra cue for relative timing). We found significant activations in an olivocerebellar network comprising the inferior olive, vermis, and deep cerebellar nuclei including the dentate nucleus during absolute, duration-based timing and a striato-thalamo-cortical network comprising the putamen, caudate nucleus, thalamus, supplementary motor area, premotor cortex, and dorsolateral prefrontal cortex during relative, beat-based timing. Our results support two distinct timing mechanisms and underlying subsystems: first, a network comprising the inferior olive and the cerebellum that acts as a precision clock to mediate absolute, duration-based timing, and second, a distinct network for relative, beat-based timing incorporating a striato-thalamo-cortical network.
doi:10.1523/JNEUROSCI.5561-10.2011
PMCID: PMC3074096  PMID: 21389235
15.  Brain Bases for Auditory Stimulus-Driven Figure–Ground Segregation 
Auditory figure–ground segregation, listeners’ ability to selectively hear out a sound of interest from a background of competing sounds, is a fundamental aspect of scene analysis. In contrast to the disordered acoustic environment we experience during everyday listening, most studies of auditory segregation have used relatively simple, temporally regular signals. We developed a new figure–ground stimulus that incorporates stochastic variation of the figure and background that captures the rich spectrotemporal complexity of natural acoustic scenes. Figure and background signals overlap in spectrotemporal space, but vary in the statistics of fluctuation, such that the only way to extract the figure is by integrating the patterns over time and frequency. Our behavioral results demonstrate that human listeners are remarkably sensitive to the appearance of such figures.
In a functional magnetic resonance imaging experiment, aimed at investigating preattentive, stimulus-driven, auditory segregation mechanisms, naive subjects listened to these stimuli while performing an irrelevant task. Results demonstrate significant activations in the intraparietal sulcus (IPS) and the superior temporal sulcus related to bottom-up, stimulus-driven figure–ground decomposition. We did not observe any significant activation in the primary auditory cortex. Our results support a role for automatic, bottom-up mechanisms in the IPS in mediating stimulus-driven, auditory figure–ground segregation, which is consistent with accumulating evidence implicating the IPS in structuring sensory input and perceptual organization.
doi:10.1523/JNEUROSCI.3788-10.2011
PMCID: PMC3059575  PMID: 21209201
16.  Cortical mechanisms for the segregation and representation of acoustic textures 
Auditory object analysis requires two fundamental perceptual processes: the definition of the boundaries between objects, and the abstraction and maintenance of an object's characteristic features. While it is intuitive to assume that the detection of the discontinuities at an object's boundaries precedes the subsequent precise representation of the object, the specific underlying cortical mechanisms for segregating and representing auditory objects within the auditory scene are unknown. We investigated the cortical bases of these two processes for one type of auditory object, an ‘acoustic texture’, composed of multiple frequency-modulated ramps. In these stimuli we independently manipulated the statistical rules governing a) the frequency-time space within individual textures (comprising ramps with a given spectrotemporal coherence) and b) the boundaries between textures (adjacent textures with different spectrotemporal coherence). Using functional magnetic resonance imaging (fMRI), we show mechanisms defining boundaries between textures with different coherence in primary and association auditory cortex, while texture coherence is represented only in association cortex. Furthermore, participants' superior detection of boundaries across which texture coherence increased (as opposed to decreased) was reflected in a greater neural response in auditory association cortex at these boundaries. The results suggest a hierarchical mechanism for processing acoustic textures that is relevant to auditory object analysis: boundaries between objects are first detected as a change in statistical rules over frequency-time space, before a representation that corresponds to the characteristics of the perceived object is formed.
doi:10.1523/JNEUROSCI.5378-09.2010
PMCID: PMC2880611  PMID: 20147535
Auditory cortex; Auditory; fMRI; Frequency; Acoustic; Object Recognition
17.  Direct Recordings of Pitch Responses from Human Auditory Cortex 
Current Biology  2010;20(12):1128-1132.
Summary
Pitch is a fundamental percept with a complex relationship to the associated sound structure [1]. Pitch perception requires brain representation of both the structure of the stimulus and the pitch that is perceived. We describe direct recordings of local field potentials from human auditory cortex made while subjects perceived the transition between noise and a noise with a regular repetitive structure in the time domain at the millisecond level called regular-interval noise (RIN) [2]. RIN is perceived to have a pitch when the rate is above the lower limit of pitch [3], at approximately 30 Hz. Sustained time-locked responses are observed to be related to the temporal regularity of the stimulus, commonly emphasized as a relevant stimulus feature in models of pitch perception (e.g., [1]). Sustained oscillatory responses are also demonstrated in the high gamma range (80–120 Hz). The regularity responses occur irrespective of whether the response is associated with pitch perception. In contrast, the oscillatory responses only occur for pitch. Both responses occur in primary auditory cortex and adjacent nonprimary areas. The research suggests that two types of pitch-related activity occur in humans in early auditory cortex: time-locked neural correlates of stimulus regularity and an oscillatory response related to the pitch percept.
Highlights
► We report direct recordings of electrical activity from human auditory cortex
► We distinguish activity related to stimulus regularity and to perceived pitch
► Both are demonstrated in primary cortex and adjacent “core” areas
doi:10.1016/j.cub.2010.04.044
PMCID: PMC3221038  PMID: 20605456
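Regular-interval noise of the kind used in this study can be produced by the classic delay-and-add procedure: broadband noise is delayed by one period of the desired rate and summed with itself, repeatedly, which introduces the temporal regularity heard as pitch. A sketch under assumed parameter values (the study's exact stimulus parameters are not given in this summary):

```python
import numpy as np

def regular_interval_noise(duration_s=1.0, fs=44100, rate_hz=125,
                           iterations=16, rng=None):
    """Delay-and-add regular-interval noise (RIN).

    Broadband noise is repeatedly delayed by 1/rate_hz and summed with
    itself; the resulting temporal regularity is heard as a pitch at
    rate_hz once the rate exceeds the lower limit of pitch (~30 Hz).
    All parameter values here are illustrative.
    """
    rng = rng or np.random.default_rng(0)
    n = int(duration_s * fs)
    delay = int(round(fs / rate_hz))  # delay in samples = one period
    x = rng.standard_normal(n)
    for _ in range(iterations):
        y = x.copy()
        y[delay:] += x[:-delay]       # add a delayed copy of the signal
        x = y / np.max(np.abs(y))     # renormalise after each pass
    return x
```

After iteration the waveform autocorrelates strongly at the delay lag, which is exactly the "temporal regularity of the stimulus" that the time-locked cortical responses track.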
18.  Encoding of Spectral Correlation over Time in Auditory Cortex 
The Journal of Neuroscience  2008;28(49):13268-13273.
Natural sounds contain multiple spectral components that vary over time. The degree of variation can be characterized in terms of correlation between successive time frames of the spectrum, or as a time window within which any two frames show a minimum degree of correlation: the greater the correlation of the spectrum between successive time frames, the longer the time window. Recent studies suggest differences in the encoding of shorter and longer time windows in left and right auditory cortex, respectively. The present functional magnetic resonance imaging study assessed brain activation in response to the systematic variation of the time window in complex spectra that are more similar to natural sounds than in previous studies. The data show bilateral activity in the planum temporale and anterior superior temporal gyrus as a function of increasing time windows, as well as activity in the superior temporal sulcus that was significantly lateralized to the right. The results suggest a coexistence of hierarchical and lateralization schemes for representing increasing time windows in auditory association cortex.
doi:10.1523/JNEUROSCI.4596-08.2008
PMCID: PMC3844743  PMID: 19052218
auditory cortex; time windows; spectrotemporal correlation; fMRI; sound; speech
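The "time window" notion above can be made concrete: given a spectrogram, compute the Pearson correlation between successive spectral frames. Slowly varying spectra give high frame-to-frame correlation (long time windows); rapidly varying spectra give low correlation. A toy sketch, not the statistic actually used to construct the study's stimuli:

```python
import numpy as np

def mean_frame_correlation(spectrogram):
    """Mean Pearson correlation between successive frames of a
    (frequency x time) spectrogram.

    High values mean the spectrum changes slowly over time (a long
    effective time window in the sense described above); low values
    mean rapid spectral variation. Illustrative only.
    """
    frames = spectrogram.T  # iterate over time frames
    rs = [np.corrcoef(frames[t], frames[t + 1])[0, 1]
          for t in range(len(frames) - 1)]
    return float(np.mean(rs))
```

A spectrum that drifts as a random walk over time scores high on this measure, while one redrawn independently each frame scores near zero, which is the axis along which the stimuli in the study were varied.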
19.  An Information Theoretic Characterisation of Auditory Encoding 
PLoS Biology  2007;5(11):e288.
The entropy metric derived from information theory provides a means to quantify the amount of information transmitted in acoustic streams like speech or music. By systematically varying the entropy of pitch sequences, we sought brain areas where neural activity and energetic demands increase as a function of entropy. Such a relationship is predicted to occur in an efficient encoding mechanism that uses less computational resource when less information is present in the signal: we specifically tested the hypothesis that such a relationship is present in the planum temporale (PT). In two convergent functional MRI studies, we demonstrated this relationship in PT for encoding, while furthermore showing that a distributed fronto-parietal network for retrieval of acoustic information is independent of entropy. The results establish PT as an efficient neural engine that demands less computational resource to encode redundant signals than those with high information content.
Author Summary
Understanding how the brain makes sense of our acoustic environment remains a major challenge. One way to describe the complexity of our acoustic environment is in terms of information entropy: acoustic signals with high entropy convey large amounts of information, whereas low entropy signifies redundancy. To investigate how the brain processes this information, we controlled the amount of entropy in the signal by using pitch sequences. Participants listened to pitch sequences with varying amounts of entropy while we measured their brain activity using functional magnetic resonance imaging (fMRI). We show that the planum temporale (PT), a region of auditory association cortex, is sensitive to the entropy in pitch sequences. In two convergent fMRI studies, activity in PT increases as the entropy in the pitch sequence increases. The results establish PT as an important “computational hub” that requires less resource to encode redundant signals than it does to encode signals with high information content.
A part of the auditory cortex (planum temporale) encodes the information content of pitch sequences.
doi:10.1371/journal.pbio.0050288
PMCID: PMC2039771  PMID: 17958472
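The entropy manipulation above can be illustrated with a first-order Shannon entropy computed over a symbolic pitch sequence (a simplification; the study's stimuli may have controlled higher-order sequence statistics as well):

```python
import math
from collections import Counter

def sequence_entropy(pitches):
    """First-order Shannon entropy, in bits per symbol, of a pitch sequence.

    High entropy: each pitch is hard to predict (high information content).
    Low entropy: redundancy. This ignores sequential (higher-order)
    structure, so it is only a first-order illustration.
    """
    n = len(pitches)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(pitches).values())
```

A sequence repeating one pitch has zero entropy (fully redundant), while a sequence drawing uniformly from four pitches carries 2 bits per symbol; in the study, PT activity increased with this kind of information content.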
20.  Hierarchical Processing of Auditory Objects in Humans 
PLoS Computational Biology  2007;3(6):e100.
This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal “templates” in the PT before further analysis of the abstracted form in anterior temporal lobe areas.
Author Summary
The past decade has seen a phenomenal rise in applications of functional magnetic resonance imaging for both research and clinical purposes. Most applications, however, concentrate on finding the regions of the brain that mediate the processing of a cognitive/motor task without determining the interactions between the identified regions. It is, however, the interactions between the different regions that accomplish a given task. In this study, we have examined the interactions between three regions—Heschl's gyrus (HG), planum temporale (PT), and superior temporal sulcus (STS)—that have been implicated in processing the spectral envelope of sounds. The spectral envelope is one of the dimensions of timbre, the quality that distinguishes two sounds with the same pitch, duration, and intensity. The interaction between the regions is examined using a system-based mathematical modelling technique called dynamic causal modelling (DCM). It is found that the flow of information is serial, with HG sending information to PT and then to STS, and with the connectivity from HG to PT being effectively increased by the extraction of the spectral envelope. The study provides evidence for an earlier hypothesis that PT is a computational hub.
doi:10.1371/journal.pcbi.0030100
PMCID: PMC1885275  PMID: 17542641
