Results 1-25 (602584)

1.  Initiating the development of multisensory integration by manipulating sensory experience 
The multisensory integration capabilities of superior colliculus (SC) neurons emerge gradually during early postnatal life as a consequence of experience with cross-modal stimuli. Without such experience neurons become responsive to multiple sensory modalities but are unable to integrate their inputs. The present study demonstrates that neurons retain sensitivity to cross-modal experience well past the normal developmental period for acquiring multisensory integration capabilities. Experience surprisingly late in life was found to rapidly initiate the development of multisensory integration, even more rapidly than expected based on its normal developmental time course. Furthermore, the requisite experience was acquired by the anesthetized brain and in the absence of any of the stimulus-response contingencies generally associated with learning. The key experiential factor was repeated exposure to the relevant stimuli, and this required that the multiple receptive fields of a multisensory neuron encompassed the cross-modal exposure site. Simple exposure to the individual components of a cross-modal stimulus was ineffective in this regard. Furthermore, once a neuron acquired multisensory integration capabilities at the exposure site, it generalized this experience to other locations, albeit with lowered effectiveness. These observations suggest that the prolonged period during which multisensory integration normally appears is due to developmental factors in neural circuitry in addition to those required for incorporating the statistics of cross-modal events; that neurons learn a multisensory principle based on the specifics of experience and can then apply it to other stimulus conditions; and that the incorporation of this multisensory information does not depend on an alert brain.
doi:10.1523/JNEUROSCI.5575-09.2010
PMCID: PMC2858413  PMID: 20371810
Dark Rearing; Colliculus; Vision; Auditory; Maturation; Plasticity
2.  Adult plasticity of spatiotemporal receptive fields of multisensory superior colliculus neurons following early visual deprivation 
Restorative Neurology and Neuroscience  2010;28(2):10.3233/RNN-2010-0488.
Purpose
Previous work has established that the integrative capacity of multisensory neurons in the superior colliculus (SC) matures over a protracted period of postnatal life (Wallace and Stein, 1997), and that the development of normal patterns of multisensory integration depends critically on early sensory experience (Wallace et al., 2004). Although these studies demonstrated the importance of early sensory experience in the creation of mature multisensory circuits, it remains unknown whether the reestablishment of sensory experience in adulthood can reverse these effects and restore integrative capacity.
Methods
The current study tested this hypothesis in cats that were reared in absolute darkness until adulthood and then returned to a normal housing environment for an equivalent period of time. Single unit extracellular recordings targeted multisensory neurons in the deep layers of the SC, and analyses were focused on both conventional measures of multisensory integration and on more recently developed methods designed to characterize spatiotemporal receptive fields (STRF).
Results
Analysis of the STRF structure and integrative capacity of multisensory SC neurons revealed significant modifications in the temporal response dynamics of multisensory responses (e.g., discharge durations, peak firing rates, and mean firing rates), as well as significant changes in rates of spontaneous activation and degrees of multisensory integration.
Conclusions
These results emphasize the importance of early sensory experience in the establishment of normal multisensory processing architecture and highlight the limited plastic potential of adult multisensory circuits.
doi:10.3233/RNN-2010-0488
PMCID: PMC3652394  PMID: 20404413
superior colliculus; multimodal; cat; cross-modal; experiential plasticity
3.  Organization, Maturation, and Plasticity of Multisensory Integration: Insights from Computational Modeling Studies 
In this paper, we present two neural network models – devoted to two specific and widely investigated aspects of multisensory integration – in order to show the potential of computational models to provide insight into the neural mechanisms underlying the organization, development, and plasticity of multisensory integration in the brain. The first model considers visual–auditory interaction in a midbrain structure named the superior colliculus (SC). The model is able to reproduce and explain the main physiological features of multisensory integration in SC neurons and to describe how SC integrative capability – not present at birth – develops gradually during postnatal life depending on sensory experience with cross-modal stimuli. The second model tackles the problem of how tactile stimuli on a body part and visual (or auditory) stimuli close to the same body part are integrated in multimodal parietal neurons to form the perception of peripersonal (i.e., near) space. The model investigates how the extension of peripersonal space – where multimodal integration occurs – may be modified by experience, such as the use of a tool to interact with far space. The utility of the modeling approach relies on several aspects: (i) The two models, although devoted to different problems and simulating different brain regions, share some common mechanisms (lateral inhibition and excitation, non-linear neuron characteristics, recurrent connections, competition, Hebbian rules of potentiation and depression) that may govern more generally the fusion of senses in the brain, and the learning and plasticity of multisensory integration. (ii) The models may help interpret behavioral and psychophysical responses in terms of neural activity and synaptic connections. (iii) The models can make testable predictions that can help guide future experiments in order to validate, reject, or modify the main assumptions.
doi:10.3389/fpsyg.2011.00077
PMCID: PMC3110383  PMID: 21687448
neural network modeling; multimodal neurons; superior colliculus; peripersonal space; neural mechanisms; learning and plasticity; behavior
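The shared mechanisms listed in item 3 are standard modeling building blocks. As an illustration only (not the authors' published model), here is a minimal sketch of a Hebbian potentiation/depression rule driving a single multisensory unit; the threshold theta, learning rate, bounds, and toy stimulus are all assumptions.

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01, theta=0.5, w_max=1.0):
    """Hebbian rule with potentiation and depression: a synapse grows
    when pre- and postsynaptic activity are jointly high and shrinks
    when postsynaptic activity falls below the threshold theta
    (illustrative, assumed parameterization)."""
    dw = lr * np.outer(post - theta, pre)
    return np.clip(w + dw, 0.0, w_max)       # keep weights bounded

# Toy example: visual and auditory inputs converging on one unit.
w = np.array([[0.3, 0.3]])                   # 1 multisensory unit, 2 inputs
for _ in range(200):                         # repeated cross-modal exposure
    pre = np.array([1.0, 1.0])               # coactive visual + auditory cue
    post = np.tanh(w @ pre)                  # non-linear neuron characteristic
    w = hebbian_update(w, pre, post)
print(w)                                     # both synapses potentiate
```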
4.  Non-Stationarity in Multisensory Neurons in the Superior Colliculus  
The superior colliculus (SC) integrates information from multiple sensory modalities to facilitate the detection and localization of salient events. The efficacy of “multisensory integration” is traditionally measured by comparing the magnitude of the response elicited by a cross-modal stimulus to the responses elicited by its modality-specific component stimuli, and because there is an element of randomness in the system, these calculations are made using response values averaged over multiple stimulus presentations in an experiment. Recent evidence suggests that multisensory integration in the SC is highly plastic and these neurons adapt to specific anomalous stimulus configurations. This raises the question whether such adaptation occurs during an experiment with traditional stimulus configurations; that is, whether the state of the neuron and its integrative principles are the same at the beginning and end of the experiment, or whether they are altered as a consequence of exposure to the testing stimuli even when they are pseudo-randomly interleaved. We find that unisensory and multisensory responses do change during an experiment, and that these changes are predictable. Responses that are initially weak tend to potentiate, responses that are initially strong tend to habituate, and the efficacy of multisensory integration waxes or wanes accordingly during the experiment as predicted by the “principle of inverse effectiveness.” These changes are presumed to reflect two competing mechanisms in the SC: potentiation reflects increases in the expectation that a stimulus will occur at a given location relative to others, and habituation reflects decreases in stimulus novelty. These findings indicate plasticity in multisensory integration that allows animals to adapt to rapidly changing environmental events while suggesting important caveats in the interpretation of experimental data: the neuron studied at the beginning of an experiment is not the same at the end of it.
doi:10.3389/fpsyg.2011.00144
PMCID: PMC3131158  PMID: 21772824
multisensory; superior colliculus
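The "traditional measure" described in item 4 is commonly computed as a multisensory enhancement index relative to the strongest unisensory response; a short sketch with hypothetical spike counts (the trial values are made up):

```python
import numpy as np

def enhancement_index(av_trials, a_trials, v_trials):
    """Classical multisensory enhancement (%):
    100 * (CM - SEmax) / SEmax, where CM is the mean cross-modal
    response and SEmax the larger mean unisensory response. Note that
    it averages over trials, which is exactly what hides the
    non-stationarity this abstract documents."""
    cm = np.mean(av_trials)
    se_max = max(np.mean(a_trials), np.mean(v_trials))
    return 100.0 * (cm - se_max) / se_max

# Hypothetical spike counts per trial:
a  = [2, 1, 3, 2, 2]        # auditory alone
v  = [1, 2, 1, 1, 0]        # visual alone
av = [6, 5, 7, 6, 6]        # cross-modal
print(f"enhancement = {enhancement_index(av, a, v):.0f}%")   # 200%
```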
5.  Laminar and Connectional Organization of a Multisensory Cortex 
The Journal of Comparative Neurology  2013;521(8):1867-1890.
The transformation of sensory signals as they pass through cortical circuits has been revealed almost exclusively through studies of the primary sensory cortices, where principles of laminar organization, local connectivity and parallel processing have been elucidated. In contrast, almost nothing is known about the circuitry or laminar features of multisensory processing in higher-order, multisensory cortex. Therefore, using the ferret higher-order multisensory rostral posterior parietal (PPr) cortex, the present investigation employed a combination of multichannel recording and neuroanatomical techniques to elucidate the laminar basis of multisensory cortical processing. The proportion of multisensory neurons, the share of neurons showing multisensory integration, and the magnitude of multisensory integration were all found to differ by layer in a way that matched the functional or connectional characteristics of the PPr. Specifically, the supragranular layers (L2–3) demonstrated among the highest proportions of multisensory neurons and the highest incidence of multisensory response enhancement, while also receiving the highest levels of extrinsic inputs, exhibiting the highest dendritic spine densities, and providing a major source of local connectivity. In contrast, layer 6 showed the highest proportion of unisensory neurons while receiving the fewest external and local projections and exhibiting the lowest dendritic spine densities. Coupled with a lack of input from principal thalamic nuclei and a minimal layer 4, these observations indicate that this higher-level multisensory cortex shows unique functional and organizational modifications from the well-known patterns identified for primary sensory cortical regions.
doi:10.1002/cne.23264
PMCID: PMC3618603  PMID: 23172137
Parietal Cortex; Convergence; Integration; Somatosensation; Vision; Ferret
6.  Learning Multisensory Integration and Coordinate Transformation via Density Estimation 
PLoS Computational Biology  2013;9(4):e1003035.
Sensory processing in the brain includes three key operations: multisensory integration—the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations—the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned—but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations.
Author Summary
Over the first few years of their lives, humans (and other animals) appear to learn how to combine signals from multiple sense modalities: when to “integrate” them into a single percept, as with visual and proprioceptive information about one's body; when not to integrate them (e.g., when looking somewhere else); how they vary over longer time scales (e.g., where in physical space my hand tends to be); as well as more complicated manipulations, like subtracting gaze angle from the visually-perceived position of an object to compute the position of that object with respect to the head—i.e., “coordinate transformation.” Learning which sensory signals to integrate, or which to manipulate in other ways, does not appear to require an additional supervisory signal; we learn to do so, rather, based on structure in the sensory signals themselves. We present a biologically plausible artificial neural network that learns all of the above in just this way, but by training it for a much more general statistical task: “density estimation”—essentially, learning to be able to reproduce the data on which it was trained. This also links coordinate transformation and multisensory integration to other cortical operations, especially in early sensory areas, that have been modeled as density estimators.
doi:10.1371/journal.pcbi.1003035
PMCID: PMC3630212  PMID: 23637588
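Item 6 names its machinery explicitly: a restricted Boltzmann machine trained with contrastive divergence. Below is a minimal CD-1 sketch, not the paper's actual architecture; the layer sizes, learning rate, and the toy two-modality encoding are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Minimal binary RBM trained with one-step contrastive divergence (CD-1).
# The visible layer stands in for two concatenated unisensory populations,
# the hidden layer for the multisensory population; sizes are arbitrary.
n_vis, n_hid, lr = 40, 20, 0.05
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b, c = np.zeros(n_vis), np.zeros(n_hid)          # visible / hidden biases

def cd1_step(v0):
    global W, b, c
    ph0 = sigmoid(v0 @ W + c)                    # hidden probabilities
    h0 = (rng.random(n_hid) < ph0).astype(float) # sampled hidden state
    pv1 = sigmoid(h0 @ W.T + b)                  # reconstruction
    ph1 = sigmoid(pv1 @ W + c)
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    b += lr * (v0 - pv1)
    c += lr * (ph0 - ph1)

# Toy data: both "modalities" encode the same latent position, so the
# hidden layer must capture the cross-modal correlation to model the data.
for _ in range(2000):
    pos = rng.integers(0, 20)
    v = np.zeros(n_vis)
    v[pos] = v[20 + pos] = 1.0                   # matching cue in each half
    cd1_step(v)
```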
7.  Audio-Tactile Integration and the Influence of Musical Training 
PLoS ONE  2014;9(1):e85743.
Perception of our environment is a multisensory experience; information from different sensory systems, such as the auditory, visual, and tactile systems, is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks, but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and if musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training.
doi:10.1371/journal.pone.0085743
PMCID: PMC3897506  PMID: 24465675
8.  Neuronal Plasticity and Multisensory Integration in Filial Imprinting 
PLoS ONE  2011;6(3):e17777.
Many organisms sample their environment through multiple sensory systems and the integration of multisensory information enhances learning. However, the mechanisms underlying multisensory memory formation and their similarity to unisensory mechanisms remain unclear. Filial imprinting is one example in which experience is multisensory, and the mechanisms of unisensory neuronal plasticity are well established. We investigated the storage of audiovisual information through experience by comparing the activity of neurons in the intermediate and medial mesopallium (IMM) of imprinted and naïve domestic chicks (Gallus gallus domesticus) in response to an audiovisual imprinting stimulus, a novel object, and their auditory and visual components. We find that imprinting enhanced the mean response magnitude of neurons to unisensory but not multisensory stimuli. Furthermore, imprinting enhanced responses to incongruent audiovisual stimuli comprised of mismatched auditory and visual components. Our results suggest that the effects of imprinting on the unisensory and multisensory responsiveness of IMM neurons differ and that IMM neurons may function to detect unexpected deviations from the audiovisual imprinting stimulus.
doi:10.1371/journal.pone.0017777
PMCID: PMC3053393  PMID: 21423770
9.  Unisensory processing and multisensory integration in schizophrenia: A high-density electrical mapping study 
Neuropsychologia  2011;49(12):3178-3187.
In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder.
doi:10.1016/j.neuropsychologia.2011.07.017
PMCID: PMC3632320  PMID: 21807011
schizophrenia; multisensory; audio-visual; visual-evoked potential; auditory-evoked potential; ERP
10.  Spatial receptive field organization of multisensory neurons and its impact on multisensory interactions 
Hearing Research  2009;258(1-2):47-54.
Previous work has established that the spatial receptive fields (SRFs) of multisensory neurons in the cerebral cortex are strikingly heterogeneous, and that SRF architecture plays an important deterministic role in sensory responsiveness and multisensory integrative capacities. The initial part of this contribution serves to review these findings detailing the key features of SRF organization in cortical multisensory populations by highlighting work from the cat anterior ectosylvian sulcus (AES). In addition, we have recently conducted parallel studies designed to examine SRF architecture in the classic model for multisensory studies, the cat superior colliculus (SC), and we present some of the preliminary observations from the SC here. An examination of individual SC neurons revealed marked similarities between their unisensory (i.e., visual and auditory) SRFs, as well as between these unisensory SRFs and the multisensory SRF. Despite these similarities within individual neurons, different SC neurons had SRFs that ranged from a single area of greatest activation (hot spot) to multiple and spatially discrete hot spots. Similar to cortical multisensory neurons, the interactive profile of SC neurons was correlated strongly to SRF architecture, closely following the principle of inverse effectiveness. Thus, large and often superadditive multisensory response enhancements were typically seen at SRF locations where visual and auditory stimuli were weakly effective. Conversely, subadditive interactions were seen at SRF locations where stimuli were highly effective. Despite the unique functions characteristic of cortical and subcortical multisensory circuits, our results suggest a strong mechanistic interrelationship between SRF microarchitecture and integrative capacity.
doi:10.1016/j.heares.2009.08.003
PMCID: PMC2787656  PMID: 19698773
polysensory; cat; cross-modal; multimodal; integration
11.  A model of the temporal dynamics of multisensory enhancement 
The senses transduce different forms of environmental energy, and the brain synthesizes information across them to enhance responses to salient biological events. We hypothesize that the potency of multisensory integration is attributable to the convergence of independent and temporally aligned signals derived from cross-modal stimulus configurations onto multisensory neurons. The temporal profile of multisensory integration in neurons of the deep superior colliculus (SC) is consistent with this hypothesis. The responses of these neurons to visual, auditory, and combinations of visual–auditory stimuli reveal that multisensory integration takes place in real-time; that is, the input signals are integrated as soon as they arrive at the target neuron. Interactions between cross-modal signals may appear to reflect linear or nonlinear computations on a moment-by-moment basis, the aggregate of which determines the net product of multisensory integration. Modeling observations presented here suggest that the early nonlinear components of the temporal profile of multisensory integration can be explained with a simple spiking neuron model, and do not require more sophisticated assumptions about the underlying biology. A transition from nonlinear “super-additive” computation to linear, additive computation can be accomplished via scaled inhibition. The findings provide a set of design constraints for artificial implementations seeking to exploit the basic principles and potency of biological multisensory integration in contexts of sensory substitution or augmentation.
doi:10.1016/j.neubiorev.2013.12.003
PMCID: PMC4120822  PMID: 24374382
Multisensory; Cross-modal; Modeling; Temporal dynamics; Enhancement
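Item 11's claim, that early superadditivity can fall out of a simple spiking neuron and that scaled inhibition can shift the computation back to additive, can be illustrated with a leaky integrate-and-fire sketch. This is not the authors' model; all parameter values are arbitrary.

```python
def lif_spike_count(inputs, g_inh=0.0, dt=1e-3, tau=0.02,
                    v_thresh=1.0, t_end=0.5):
    """Leaky integrate-and-fire unit driven by the sum of its inputs,
    minus inhibition scaled to the total drive ('scaled inhibition').
    All parameter values are arbitrary."""
    drive = (1.0 - g_inh) * sum(inputs)      # converging cross-modal currents
    v, spikes = 0.0, 0
    for _ in range(int(t_end / dt)):
        v += dt / tau * (drive - v)          # leaky integration
        if v >= v_thresh:                    # threshold crossing -> spike
            spikes += 1
            v = 0.0                          # reset
    return spikes

vis = aud = 1.1                              # each alone barely suprathreshold
print(lif_spike_count([vis]), lif_spike_count([aud]))   # weak unisensory responses
print(lif_spike_count([vis, aud]))                      # far exceeds their sum
print(lif_spike_count([vis, aud], g_inh=0.3))           # inhibition -> ~additive
```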
12.  The Development of Audiovisual Multisensory Integration Across Childhood and Early Adolescence: A High-Density Electrical Mapping Study 
Cerebral Cortex  2010;21(5):1042-1055.
The integration of multisensory information is essential to forming meaningful representations of the environment. Adults benefit from related multisensory stimuli but the extent to which the ability to optimally integrate multisensory inputs for functional purposes is present in children has not been extensively examined. Using a cross-sectional approach, high-density electrical mapping of event-related potentials (ERPs) was combined with behavioral measures to characterize neurodevelopmental changes in basic audiovisual (AV) integration from middle childhood through early adulthood. The data indicated a gradual fine-tuning of multisensory facilitation of performance on an AV simple reaction time task (as indexed by race model violation), which reaches mature levels by about 14 years of age. They also revealed a systematic relationship between age and the brain processes underlying multisensory integration (MSI) in the time frame of the auditory N1 ERP component (∼120 ms). A significant positive correlation between behavioral and neurophysiological measures of MSI suggested that the underlying brain processes contributed to the fine-tuning of multisensory facilitation of behavior that was observed over middle childhood. These findings are consistent with protracted plasticity in a dynamic system and provide a starting point from which future studies can begin to examine the developmental course of multisensory processing in clinical populations.
doi:10.1093/cercor/bhq170
PMCID: PMC3077428  PMID: 20847153
children; cross-modal; development; electrophysiology; ERP; multisensory integration
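The race-model-violation index in item 12 comes from Miller's inequality: under a race with no integration, the bimodal response-time CDF can never exceed the (capped) sum of the unisensory CDFs. A sketch with simulated reaction times (the distributions are made up):

```python
import numpy as np

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """Miller's race-model inequality: without integration,
    F_AV(t) <= min(F_A(t) + F_V(t), 1) at every t. Positive values
    of the returned curve indicate violations, i.e. facilitation
    beyond statistical summation."""
    cdf = lambda rts, t: np.mean(np.asarray(rts)[:, None] <= t, axis=0)
    bound = np.minimum(cdf(rt_a, t_grid) + cdf(rt_v, t_grid), 1.0)
    return cdf(rt_av, t_grid) - bound

# Simulated reaction times (seconds):
rng = np.random.default_rng(1)
rt_a = rng.normal(0.30, 0.04, 500)
rt_v = rng.normal(0.32, 0.04, 500)
rt_av = rng.normal(0.24, 0.03, 500)          # faster than either alone
t = np.linspace(0.1, 0.5, 41)
print(f"max violation: {race_model_violation(rt_av, rt_a, rt_v, t).max():.3f}")
```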
13.  Neonatal cortical ablation disrupts multisensory development in superior colliculus 
Journal of Neurophysiology  2005;95(3):1380-1396.
The ability of cat superior colliculus (SC) neurons to synthesize information from different senses depends on influences from two areas of the cortex: the anterior ectosylvian sulcus (AES) and the rostral lateral suprasylvian sulcus (rLS). Reversibly deactivating the inputs to the SC from either of these areas in normal adults severely compromises this ability and the SC-mediated behaviors that depend on it. In the present study we found that removal of these areas in neonatal animals precluded the normal development of multisensory SC processes. At maturity there was a substantial decrease in the incidence of multisensory neurons, and those multisensory neurons that did develop were highly abnormal. Their cross-modal receptive field register was severely compromised, as was their ability to integrate cross-modal stimuli. Apparently, despite the impressive plasticity of the neonatal brain, it cannot compensate for the early loss of these cortices. Surprisingly, however, neonatal removal of either AES or rLS had comparatively minor consequences on these properties. At maturity multisensory SC neurons were quite common: they developed the characteristic spatial register among their unisensory receptive fields and exhibited normal adult-like multisensory integration. These observations suggest that during early ontogeny, when the multisensory properties of SC neurons are being crafted, AES and rLS may have the ability to compensate for the loss of one another’s cortico-collicular influences so that normal multisensory processes can develop in the SC.
doi:10.1152/jn.00880.2005
PMCID: PMC1538963  PMID: 16267111
development; sensory cortex; cross-modal; plasticity; compensation; multisensory integration
14.  Evidence for Training-Induced Plasticity in Multisensory Brain Structures: An MEG Study 
PLoS ONE  2012;7(5):e36534.
Multisensory learning and resulting neural brain plasticity have recently become a topic of renewed interest in human cognitive neuroscience. Music notation reading is an ideal stimulus to study multisensory learning, as it allows studying the integration of visual, auditory and sensorimotor information processing. The present study aimed to answer whether multisensory learning alters uni-sensory structures, interconnections of uni-sensory structures or specific multisensory areas. In a short-term piano training procedure, musically naive subjects were trained to play tone sequences from visually presented patterns in a music notation-like system [Auditory-Visual-Somatosensory group (AVS)], while another group received audio-visual training only that involved viewing the patterns and attentively listening to the recordings of the AVS training sessions [Auditory-Visual group (AV)]. Training-related changes in cortical networks were assessed by pre- and post-training magnetoencephalographic (MEG) recordings of an auditory, a visual and an integrated audio-visual mismatch negativity (MMN). The two groups (AVS and AV) were differently affected by the training. The results suggest that multisensory training alters the function of multisensory structures, and not the uni-sensory ones along with their interconnections, and thus provide an answer to an important question presented by cognitive models of multisensory training.
doi:10.1371/journal.pone.0036534
PMCID: PMC3343004  PMID: 22570723
15.  Audio-Tactile Integration in Congenitally and Late Deaf Cochlear Implant Users 
PLoS ONE  2014;9(6):e99606.
Several studies conducted in mammals and humans have shown that multisensory processing may be impaired following congenital sensory loss, and in particular if no experience is gained within specific early developmental time windows known as sensitive periods. In this study we investigated whether basic multisensory abilities are impaired in hearing-restored individuals with deafness acquired at different stages of development. To this end, we tested congenitally and late deaf cochlear implant (CI) recipients, age-matched with two groups of hearing controls, on an audio-tactile redundancy paradigm, in which reaction times to unimodal and crossmodal redundant signals were measured. Our results showed that both congenitally and late deaf CI recipients were able to integrate audio-tactile stimuli, suggesting that congenital and acquired deafness does not prevent the development and recovery of basic multisensory processing. However, we found that congenitally deaf CI recipients had a lower multisensory gain compared to their matched controls, which may be explained by their faster responses to tactile stimuli. We discuss this finding in the context of reorganisation of the sensory systems following sensory loss and the possibility that these changes cannot be “rewired” through auditory reafferentation.
doi:10.1371/journal.pone.0099606
PMCID: PMC4053428  PMID: 24918766
16.  Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training 
In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm, but with the addition of synchronous vocoded acoustic speech, impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee’s primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee’s lower-rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).
doi:10.3389/fnhum.2014.00829
PMCID: PMC4215828  PMID: 25400566
multisensory perception; speech perception; reverse hierarchy theory; lipreading; vibrotactile perception; vocoded speech; perceptual learning
17.  Adult plasticity in multisensory neurons: Short-term experience-dependent changes in the superior colliculus 
Multisensory neurons in the superior colliculus (SC) have the capability to integrate signals that belong to the same event, despite being conveyed by different senses. They develop this capability during early life as experience is gained with the statistics of cross-modal events. These adaptations prepare the SC to deal with the cross-modal events that are likely to be encountered throughout life. Here we found that neurons in the adult SC can also adapt to experience with sequentially-ordered cross-modal (visual-auditory or auditory-visual) cues, and that they do so over short periods of time (minutes), as if adapting to a particular stimulus configuration. This short-term plasticity was evident as a rapid increase in the magnitude and duration of responses to the first stimulus, and a shortening of the latency and increase in magnitude of the responses to the second stimulus when they are presented in sequence. The result was that the two responses appeared to merge. These changes were stable in the absence of experience with competing stimulus configurations, outlasted the exposure period, and could not be induced by equivalent experience with sequential within-modal (visual-visual or auditory-auditory) stimuli. A parsimonious interpretation is that the additional SC activity provided by the second stimulus became associated with, and increased the potency of, the afferents responding to the preceding stimulus. This interpretation is consistent with the principle of spike-timing dependent plasticity (STDP), which may provide the basic mechanism for short term or long term plasticity and be operative in both the adult and neonatal SC.
doi:10.1523/JNEUROSCI.4041-09.2009
PMCID: PMC2824179  PMID: 20016107
Midbrain; Multisensory; Superior; Colliculus; Plasticity; Visual; Auditory
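Item 17 invokes spike-timing dependent plasticity as the candidate mechanism. A sketch of the canonical exponential STDP window follows; the amplitudes and time constant are generic textbook values, not values fitted to these SC data.

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Canonical exponential STDP window: potentiation when the
    presynaptic spike precedes the postsynaptic spike (dt > 0),
    depression when it follows (dt < 0). Generic, assumed constants."""
    dt = np.asarray(dt_ms, dtype=float)
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau_ms),
                    -a_minus * np.exp(dt / tau_ms))

# In the abstract's scenario, afferents driven by the first stimulus fire
# just before the SC response to the second stimulus, landing on the
# potentiating side of the window:
print(stdp_dw([10.0, -10.0]))   # [ 0.0061 -0.0073]
```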
18.  Multisensory dysfunction accompanies crossmodal plasticity following adult hearing impairment 
Neuroscience  2012;214:136-148.
Until now, cortical crossmodal plasticity has largely been regarded as the effect of early and complete sensory loss. Recently, massive crossmodal cortical reorganization was demonstrated to result from profound hearing loss in adult ferrets (Allman et al., 2009a). Moderate adult hearing loss, on the other hand, induced not just crossmodal reorganization, but also merged new crossmodal inputs with residual auditory function to generate multisensory neurons. Because multisensory convergence can lead to dramatic levels of response integration when stimuli from more than one modality are present (and thereby potentially interfere with residual auditory processing), the present investigation sought to evaluate the multisensory properties of auditory cortical neurons in partially deafened adult ferrets. When compared with hearing controls, partially-deaf animals revealed elevated levels of spontaneous activity and a dramatic increase (~2 times) in the proportion of multisensory cortical neurons, although few of these showed multisensory integration. Moreover, a large proportion (68%) of neurons with somatosensory and/or visual inputs was vigorously active in core auditory cortex in the absence of auditory stimulation. Collectively, these results not only demonstrate multisensory dysfunction in core auditory cortical neurons from hearing-impaired adults but also reveal a potential cortical substrate for maladaptive perceptual effects such as tinnitus.
doi:10.1016/j.neuroscience.2012.04.001
PMCID: PMC3403530  PMID: 22516008
aging; crossmodal plasticity; hearing loss; deafness; tinnitus; cortex
19.  Disintegration of Multisensory Signals from the Real Hand Reduces Default Limb Self-Attribution: An fMRI Study 
The Journal of Neuroscience  2013;33(33):13350-13366.
The perception of our limbs in space is built upon the integration of visual, tactile, and proprioceptive signals. Accumulating evidence suggests that these signals are combined in areas of premotor, parietal, and cerebellar cortices. However, it remains to be determined whether neuronal populations in these areas integrate hand signals according to basic temporal and spatial congruence principles of multisensory integration. Here, we developed a setup based on advanced 3D video technology that allowed us to manipulate the spatiotemporal relationships of visuotactile (VT) stimuli delivered on a healthy human participant's real hand during fMRI and investigate the ensuing neural and perceptual correlates. Our experiments revealed two novel findings. First, we found responses in premotor, parietal, and cerebellar regions that were dependent upon the spatial and temporal congruence of VT stimuli. This multisensory integration effect required a simultaneous match between the seen and felt postures of the hand, which suggests that congruent visuoproprioceptive signals from the upper limb are essential for successful VT integration. Second, we observed that multisensory conflicts significantly disrupted the default feeling of ownership of the seen real limb, as indexed by complementary subjective, psychophysiological, and BOLD measures. The degree to which self-attribution was impaired could be predicted from the attenuation of neural responses in key multisensory areas. These results elucidate the neural bases of the integration of multisensory hand signals according to basic spatiotemporal principles and demonstrate that the disintegration of these signals leads to “disownership” of the seen real hand.
doi:10.1523/JNEUROSCI.1363-13.2013
PMCID: PMC3742923  PMID: 23946393
20.  Evidence for Enhanced Multisensory Facilitation with Stimulus Relevance: An Electrophysiological Investigation 
PLoS ONE  2013;8(1):e52978.
Debate currently exists regarding the interplay between multisensory processes and bottom-up and top-down influences. However, few studies have looked at neural responses to newly paired audiovisual stimuli that differ in their prescribed relevance. For such newly associated audiovisual stimuli, optimal facilitation of motor actions was observed only when both components of the audiovisual stimuli were targets. Relevant auditory stimuli were found to significantly increase the amplitudes of the event-related potentials at the occipital pole during the first 100 ms post-stimulus onset, though this early integration was not predictive of multisensory facilitation. Activity related to multisensory behavioral facilitation was observed approximately 166 ms post-stimulus, at left central and occipital sites. Furthermore, optimal multisensory facilitation was found to be associated with a latency shift of induced oscillations in the beta range (14–30 Hz) at right hemisphere parietal scalp regions. These findings demonstrate the importance of stimulus relevance to multisensory processing by providing the first evidence that the neural processes underlying multisensory integration are modulated by the relevance of the stimuli being combined. We also provide evidence that such facilitation may be mediated by changes in neural synchronization in occipital and centro-parietal neural populations at early and late stages of neural processing that coincided with stimulus selection, and the preparation and initiation of motor action.
doi:10.1371/journal.pone.0052978
PMCID: PMC3553102  PMID: 23372652
21.  Recurrent network for multisensory integration: identification of common sources of audiovisual stimuli
We perceive our surrounding environment by using different sense organs. However, it is not clear how the brain estimates information from our surroundings from the multisensory stimuli it receives. While Bayesian inference provides a normative account of the computational principle at work in the brain, it does not provide information on how the nervous system actually implements the computation. To provide an insight into how the neural dynamics are related to multisensory integration, we constructed a recurrent network model that can implement computations related to multisensory integration. Our model not only extracts information from noisy neural activity patterns, it also estimates a causal structure; i.e., it can infer whether the different stimuli came from the same source or different sources. We show that our model can reproduce the results of psychophysical experiments on spatial unity and localization bias which indicate that a shift occurs in the perceived position of a stimulus through the effect of another simultaneous stimulus. The experimental data have been reproduced in previous studies using Bayesian models. By comparing the Bayesian model and our neural network model, we investigated how the Bayesian prior is represented in neural circuits.
doi:10.3389/fncom.2013.00101
PMCID: PMC3722481  PMID: 23898263
causality inference; multisensory integration; spatial orientation; recurrent neural network; Mexican-hat type interaction
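The Bayesian account that item 21's recurrent network is compared against can be written down directly: weigh the marginal likelihood that both cues share one hidden source against that of two independent sources. A grid-based sketch under assumed Gaussian likelihoods and prior (all noise parameters hypothetical):

```python
import numpy as np

def p_common_source(x_a, x_v, sigma_a=2.0, sigma_v=1.0,
                    sigma_prior=10.0, p_common=0.5):
    """Posterior probability that noisy auditory and visual position
    estimates came from one source, computed on a grid. Gaussian
    likelihoods/prior and all noise parameters are assumptions."""
    s = np.linspace(-40.0, 40.0, 2001)       # candidate source positions
    ds = s[1] - s[0]
    def gauss(x, mu, sig):
        return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    prior_s = gauss(s, 0.0, sigma_prior)
    # C = 1: both cues generated by the same hidden position s
    like_c1 = np.sum(gauss(x_a, s, sigma_a) * gauss(x_v, s, sigma_v) * prior_s) * ds
    # C = 2: each cue has its own independent hidden position
    like_c2 = (np.sum(gauss(x_a, s, sigma_a) * prior_s) * ds *
               np.sum(gauss(x_v, s, sigma_v) * prior_s) * ds)
    return like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

print(p_common_source(1.0, 2.0))     # nearby cues -> likely one source
print(p_common_source(1.0, 20.0))    # distant cues -> likely two sources
```

Large posteriors for nearby cues are what produce the localization bias (one stimulus shifting the perceived position of the other) that the abstract's network reproduces.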
22.  Intracranial Cortical Responses during Visual–Tactile Integration in Humans 
The Journal of Neuroscience  2014;34(1):171-181.
Sensory integration of touch and sight is crucial to perceiving and navigating the environment. While recent evidence from other sensory modality combinations suggests that low-level sensory areas integrate multisensory information at early processing stages, little is known about how the brain combines visual and tactile information. We investigated the dynamics of multisensory integration between vision and touch using the high spatial and temporal resolution of intracranial electrocorticography in humans. We present a novel, two-step metric for defining multisensory integration. The first step compares the bimodal response with the sum of the unisensory responses to identify candidate multisensory interactions. The second step eliminates the possibility that double addition of sensory responses could be misinterpreted as an interaction. Using these criteria, averaged local field potentials and high-gamma-band power demonstrate a functional processing cascade whereby sensory integration occurs late, both anatomically and temporally, in the temporo–parieto–occipital junction (TPOJ) and dorsolateral prefrontal cortex. Results further suggest two neurophysiologically distinct and temporally separated integration mechanisms in TPOJ, while providing direct evidence for local suppression as a dominant mechanism for synthesizing visual and tactile input. These results tend to support earlier concepts of multisensory integration as relatively late and centered in tertiary multimodal association cortices.
doi:10.1523/JNEUROSCI.0532-13.2014
PMCID: PMC3866483  PMID: 24381279
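Only the first step of item 22's two-step metric, comparing the bimodal response with the sum of the unisensory responses, is sketched below, under stated assumptions (equal trial counts, a permutation test on trial labels); the second, double-addition control step is not reproduced here.

```python
import numpy as np

def additive_test(av, a, v, n_perm=2000, seed=2):
    """Step 1 only: test whether the bimodal response differs from the
    sum of the unisensory responses, via a permutation test on trial
    labels. Assumes equal trial counts across conditions (a
    simplification; the paper's second control step is omitted)."""
    av = np.asarray(av, float)
    summed = np.asarray(a, float) + np.asarray(v, float)  # A+V pseudo-trials
    rng = np.random.default_rng(seed)
    observed = av.mean() - summed.mean()
    pooled, n = np.concatenate([av, summed]), len(av)
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(pooled)
        null[i] = perm[:n].mean() - perm[n:].mean()
    p = np.mean(np.abs(null) >= abs(observed))
    return observed, p

# Hypothetical single-trial response amplitudes:
rng = np.random.default_rng(3)
a, v = rng.normal(1.0, 0.5, 60), rng.normal(1.2, 0.5, 60)
av = rng.normal(1.8, 0.5, 60)            # below A+V: suppressive interaction
diff, p = additive_test(av, a, v)
print(f"AV - (A+V) = {diff:.2f}, p = {p:.3f}")
```

A significantly negative difference, as in this toy run, corresponds to the local suppression the abstract reports as the dominant mechanism.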
23.  Audiovisual integration of speech in a bistable illusion 
Current Biology  2009;19(9):735-739.
Summary
Visible speech enhances the intelligibility of auditory speech when listening conditions are poor [1], and can even modify the perception of otherwise perfectly audible utterances [2]. This audiovisual perception is our most natural form of communication and one of our most common multisensory phenomena. However, where and in what form the visual and auditory representations interact is still not completely understood. While there are longstanding proposals that multisensory integration occurs relatively late in the speech processing sequence [3], there is considerable neurophysiological evidence that audiovisual interactions can occur in the brain stem and primary auditory and visual cortices [4,5]. One of the difficulties in testing such hypotheses is that when the degree of integration is manipulated experimentally, the visual and/or auditory stimulus conditions are drastically modified [6,7] and thus the perceptual processing within a modality and the corresponding processing loads are affected [8]. Here we used a novel bistable speech stimulus to examine the conditions under which there is a visual influence on auditory perception in speech. The results indicate that visual influences on auditory speech processing, at least for the McGurk illusion, necessitate the conscious perception of the visual speech gestures, thus supporting the hypothesis that multisensory speech integration is not completed in early processing stages.
doi:10.1016/j.cub.2009.03.019
PMCID: PMC2692869  PMID: 19345097
24.  The roles of physical and physiological simultaneity in audiovisual multisensory facilitation 
i-Perception  2013;4(4):213-228.
A series of experiments measured the range of audiovisual stimulus onset asynchrony (SOAAV) yielding facilitative multisensory integration. We evaluated (1) the range of SOAAV over which facilitation occurred when unisensory stimuli were weak; (2) whether the range of SOAAV producing facilitation supported the hypothesis that physiological simultaneity of unisensory activity governs multisensory facilitation; and (3) whether AV multisensory facilitation depended on relative stimulus intensity. We compared response-time distributions to unisensory auditory (A) and visual (V) stimuli with those to AV stimuli over a wide range (±300 ms, in 20 ms increments) of SOAAV, across four conditions of varying stimulus intensity. In condition 1, the intensity of unisensory stimuli was adjusted such that d′ ≈ 2. In condition 2, V stimulus intensity was increased (d′ > 4), while A stimulus intensity was as in condition 1. In condition 3, A stimulus intensity was increased (d′ > 4) while V stimulus intensity was as in condition 1. In condition 4, both A and V stimulus intensities were increased to clearly suprathreshold levels (d′ > 4). Across all conditions of stimulus intensity, significant multisensory facilitation occurred exclusively for simultaneously presented A and V stimuli. In addition, facilitation increased as stimulus intensity increased, in disagreement with inverse effectiveness. These results indicate that the requirements for facilitative multisensory integration include both physical and physiological simultaneity.
doi:10.1068/i0532
PMCID: PMC3859565  PMID: 24349682
multisensory integration; neural coactivation; inverse effectiveness; race model; simultaneity; reaction time; d′
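The d′ criterion item 24 uses to calibrate stimulus intensity is the standard signal-detection sensitivity index; a one-function sketch with hypothetical hit and false-alarm counts:

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm
    rate), expressing how detectable a weak stimulus is."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical detection block for a weak visual stimulus:
print(f"d' = {d_prime(42, 8, 7, 43):.2f}")   # ~2, the 'weak' criterion
```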
25.  The Neural Basis of Multisensory Integration in the Midbrain: Its Organization and Maturation 
Hearing Research  2009;258(1-2):4-15.
Multisensory integration describes a process by which information from different sensory systems is combined to influence perception, decisions, and overt behavior. Despite a widespread appreciation of its utility in the adult, its developmental antecedents have received relatively little attention. Here we review what is known about the development of multisensory integration, with a focus on the circuitry and experiential antecedents of its development in the model system of the multisensory (i.e., deep) layers of the superior colliculus (SC). Of particular interest here are two sets of experimental observations: 1) cortical influences appear essential for multisensory integration in the SC, and 2) postnatal experience guides its maturation. The current belief is that the experience normally gained during early life is instantiated in the cortico-SC projection, and that this is the primary route by which ecological pressures adapt SC multisensory integration to the particular environment in which it will be used.
doi:10.1016/j.heares.2009.03.012
PMCID: PMC2787841  PMID: 19345256
