Many organisms sample their environment through multiple sensory systems, and the integration of multisensory information enhances learning. However, the mechanisms underlying multisensory memory formation, and their similarity to unisensory mechanisms, remain unclear. Filial imprinting is one example in which experience is multisensory and the mechanisms of unisensory neuronal plasticity are well established. We investigated the storage of audiovisual information through experience by comparing the activity of neurons in the intermediate and medial mesopallium (IMM) of imprinted and naïve domestic chicks (Gallus gallus domesticus) in response to an audiovisual imprinting stimulus, a novel object, and their auditory and visual components. We found that imprinting enhanced the mean response magnitude of neurons to unisensory but not multisensory stimuli. Furthermore, imprinting enhanced responses to incongruent audiovisual stimuli comprising mismatched auditory and visual components. Our results suggest that the effects of imprinting on the unisensory and multisensory responsiveness of IMM neurons differ, and that IMM neurons may function to detect unexpected deviations from the audiovisual imprinting stimulus.
The multisensory integration capabilities of superior colliculus (SC) neurons emerge gradually during early postnatal life as a consequence of experience with cross-modal stimuli. Without such experience neurons become responsive to multiple sensory modalities but are unable to integrate their inputs. The present study demonstrates that neurons retain sensitivity to cross-modal experience well past the normal developmental period for acquiring multisensory integration capabilities. Experience surprisingly late in life was found to rapidly initiate the development of multisensory integration, even more rapidly than expected based on its normal developmental time course. Furthermore, the requisite experience was acquired by the anesthetized brain and in the absence of any of the stimulus-response contingencies generally associated with learning. The key experiential factor was repeated exposure to the relevant stimuli, and this required that the multiple receptive fields of a multisensory neuron encompassed the cross-modal exposure site. Simple exposure to the individual components of a cross-modal stimulus was ineffective in this regard. Furthermore, once a neuron acquired multisensory integration capabilities at the exposure site, it generalized this experience to other locations, albeit with lowered effectiveness. These observations suggest that the prolonged period during which multisensory integration normally appears is due to developmental factors in neural circuitry in addition to those required for incorporating the statistics of cross-modal events; that neurons learn a multisensory principle based on the specifics of experience and can then apply it to other stimulus conditions; and that the incorporation of this multisensory information does not depend on an alert brain.
Dark Rearing; Colliculus; Vision; Auditory; Maturation; Plasticity
Perception of our environment is a multisensory experience: information from different sensory systems, such as the auditory, visual and tactile systems, is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks, but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and whether musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions; an auditory mismatch response, generated mainly in the auditory cortex; and a tactile mismatch response, generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas supporting top-down models of multisensory expectancies were modulated by training.
The superior colliculus (SC) integrates information from multiple sensory modalities to facilitate the detection and localization of salient events. The efficacy of “multisensory integration” is traditionally measured by comparing the magnitude of the response elicited by a cross-modal stimulus to the responses elicited by its modality-specific component stimuli, and because there is an element of randomness in the system, these calculations are made using response values averaged over multiple stimulus presentations in an experiment. Recent evidence suggests that multisensory integration in the SC is highly plastic and these neurons adapt to specific anomalous stimulus configurations. This raises the question whether such adaptation occurs during an experiment with traditional stimulus configurations; that is, whether the state of the neuron and its integrative principles are the same at the beginning and end of the experiment, or whether they are altered as a consequence of exposure to the testing stimuli even when they are pseudo-randomly interleaved. We find that unisensory and multisensory responses do change during an experiment, and that these changes are predictable. Responses that are initially weak tend to potentiate, responses that are initially strong tend to habituate, and the efficacy of multisensory integration waxes or wanes accordingly during the experiment as predicted by the “principle of inverse effectiveness.” These changes are presumed to reflect two competing mechanisms in the SC: potentiation reflects increases in the expectation that a stimulus will occur at a given location relative to others, and habituation reflects decreases in stimulus novelty. 
These findings indicate plasticity in multisensory integration that allows animals to adapt to rapidly changing environmental events while suggesting important caveats in the interpretation of experimental data: the neuron studied at the beginning of an experiment is not the same at the end of it.
multisensory; superior colliculus
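The enhancement measure referenced above, comparing the response to a cross-modal stimulus with the responses to its modality-specific components, is conventionally expressed as a percent difference over the best unisensory response. A minimal sketch, using illustrative trial-averaged spike counts rather than values from the study:

```python
# Standard multisensory enhancement index: percent difference between the
# cross-modal response and the largest unisensory response.
# The spike counts below are illustrative, not data from the study.

def enhancement_index(cm, best_uni):
    """Percent enhancement of the cross-modal response (cm) over the
    best unisensory response (best_uni)."""
    return 100.0 * (cm - best_uni) / best_uni

weak = enhancement_index(cm=6.0, best_uni=2.0)      # weak unisensory inputs
strong = enhancement_index(cm=24.0, best_uni=20.0)  # strong unisensory inputs

# Inverse effectiveness: proportionately greater enhancement when the
# component responses are weak.
assert weak > strong
```

On these numbers the weak pairing yields 200% enhancement and the strong pairing only 20%, which is the pattern the "principle of inverse effectiveness" describes.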
In this paper, we present two neural network models – devoted to two specific and widely investigated aspects of multisensory integration – in order to demonstrate the potential of computational models for gaining insight into the neural mechanisms underlying organization, development, and plasticity of multisensory integration in the brain. The first model considers visual–auditory interaction in a midbrain structure named the superior colliculus (SC). The model is able to reproduce and explain the main physiological features of multisensory integration in SC neurons and to describe how the SC's integrative capability – not present at birth – develops gradually during postnatal life depending on sensory experience with cross-modal stimuli. The second model tackles the problem of how tactile stimuli on a body part and visual (or auditory) stimuli close to the same body part are integrated in multimodal parietal neurons to form the perception of peripersonal (i.e., near) space. The model investigates how the extension of peripersonal space – where multimodal integration occurs – may be modified by experience, such as use of a tool to interact with the far space. The utility of the modeling approach rests on several aspects: (i) The two models, although devoted to different problems and simulating different brain regions, share some common mechanisms (lateral inhibition and excitation, non-linear neuron characteristics, recurrent connections, competition, Hebbian rules of potentiation and depression) that may govern more generally the fusion of senses in the brain, and the learning and plasticity of multisensory integration. (ii) The models may help interpretation of behavioral and psychophysical responses in terms of neural activity and synaptic connections. (iii) The models can make testable predictions that can help guide future experiments in order to validate, reject, or modify the main assumptions.
neural network modeling; multimodal neurons; superior colliculus; peripersonal space; neural mechanisms; learning and plasticity; behavior
We perceive our surrounding environment by using different sense organs. However, it is not clear how the brain estimates properties of the environment from the multisensory stimuli it receives. While Bayesian inference provides a normative account of the computational principle at work in the brain, it does not specify how the nervous system actually implements the computation. To provide insight into how neural dynamics are related to multisensory integration, we constructed a recurrent network model that can implement computations related to multisensory integration. Our model not only extracts information from noisy neural activity patterns, it also estimates a causal structure; i.e., it can infer whether the different stimuli came from the same source or from different sources. We show that our model can reproduce the results of psychophysical experiments on spatial unity and localization bias, which indicate that a shift occurs in the perceived position of a stimulus through the effect of another, simultaneous stimulus. These experimental data have been reproduced in previous studies using Bayesian models. By comparing the Bayesian model and our neural network model, we investigated how the Bayesian prior is represented in neural circuits.
causality inference; multisensory integration; spatial orientation; recurrent neural network; Mexican-hat type interaction
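The causal-structure computation such a network approximates can be written down directly in the standard Bayesian causal-inference form, with Gaussian likelihoods for each cue and a Gaussian spatial prior. The sketch below is illustrative only; all parameter values are assumptions, not quantities from the study:

```python
import math

def common_cause_posterior(x_a, x_v, sigma_a, sigma_v, sigma_p, p_common):
    """Posterior probability that auditory (x_a) and visual (x_v)
    measurements arose from a single source, given Gaussian likelihoods
    and a zero-mean Gaussian prior (width sigma_p) over source position."""
    # Likelihood of the pair under one common source (source integrated out).
    var_c = (sigma_a**2 * sigma_v**2 + sigma_a**2 * sigma_p**2
             + sigma_v**2 * sigma_p**2)
    like_c = (math.exp(-0.5 * ((x_a - x_v) ** 2 * sigma_p**2
                               + x_a**2 * sigma_v**2
                               + x_v**2 * sigma_a**2) / var_c)
              / (2 * math.pi * math.sqrt(var_c)))
    # Likelihood under two independent sources.
    var_a = sigma_a**2 + sigma_p**2
    var_v = sigma_v**2 + sigma_p**2
    like_i = (math.exp(-0.5 * (x_a**2 / var_a + x_v**2 / var_v))
              / (2 * math.pi * math.sqrt(var_a * var_v)))
    num = like_c * p_common
    return num / (num + like_i * (1 - p_common))

# Nearby cues favor a common cause; widely discrepant cues favor two sources.
near = common_cause_posterior(1.0, 2.0, 2.0, 1.0, 10.0, 0.5)
far = common_cause_posterior(-8.0, 8.0, 2.0, 1.0, 10.0, 0.5)
assert near > far
```

When the inferred cause is common, the optimal position estimate shifts toward the more reliable cue, which is the localization bias the abstract describes.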
Previous work has established that the integrative capacity of multisensory neurons in the superior colliculus (SC) matures over a protracted period of postnatal life (Wallace and Stein, 1997), and that the development of normal patterns of multisensory integration depends critically on early sensory experience (Wallace et al., 2004). Although these studies demonstrated the importance of early sensory experience in the creation of mature multisensory circuits, it remains unknown whether the reestablishment of sensory experience in adulthood can reverse these effects and restore integrative capacity.
The current study tested this hypothesis in cats that were reared in absolute darkness until adulthood and then returned to a normal housing environment for an equivalent period of time. Single unit extracellular recordings targeted multisensory neurons in the deep layers of the SC, and analyses were focused on both conventional measures of multisensory integration and on more recently developed methods designed to characterize spatiotemporal receptive fields (STRF).
Analysis of the STRF structure and integrative capacity of multisensory SC neurons revealed significant modifications in the temporal response dynamics of multisensory responses (e.g., discharge durations, peak firing rates, and mean firing rates), as well as significant changes in rates of spontaneous activation and degrees of multisensory integration.
These results emphasize the importance of early sensory experience in the establishment of normal multisensory processing architecture and highlight the limited plastic potential of adult multisensory circuits.
superior colliculus; multimodal; cat; cross-modal; experiential plasticity
Multisensory integration describes a process by which information from different sensory systems is combined to influence perception, decisions, and overt behavior. Despite a widespread appreciation of its utility in the adult, its developmental antecedents have received relatively little attention. Here we review what is known about the development of multisensory integration, with a focus on the circuitry and experiential antecedents of its development in the model system of the multisensory (i.e., deep) layers of the superior colliculus (SC). Of particular interest here are two sets of experimental observations: 1) cortical influences appear essential for multisensory integration in the SC, and 2) postnatal experience guides its maturation. The current belief is that the experience normally gained during early life is instantiated in the cortico-SC projection, and that this is the primary route by which ecological pressures adapt SC multisensory integration to the particular environment in which it will be used.
In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder.
schizophrenia; multisensory; audio-visual; visual-evoked potential; auditory-evoked potential; ERP
Playing a musical instrument is an intense, multisensory, and motor experience that usually commences at an early age and requires the acquisition and maintenance of a range of skills over the course of a musician's lifetime. Thus, musicians offer an excellent human model for studying the brain effects of acquiring specialized sensorimotor skills. For example, musicians learn and repeatedly practice the association of motor actions with specific sound and visual patterns (musical notation) while receiving continuous multisensory feedback. This association learning can strengthen connections between auditory and motor regions (e.g., arcuate fasciculus) while activating multimodal integration regions (e.g., around the intraparietal sulcus). We argue that training of this neural network may produce cross-modal effects on other behavioral or cognitive operations that draw on this network. Plasticity in this network may explain some of the sensorimotor and cognitive enhancements that have been associated with music training. These enhancements suggest the potential for music making as an interactive treatment or intervention for neurological and developmental disorders, as well as those associated with normal aging.
auditory; diffusion tensor imaging; functional MRI; morphometry; motor; music; plasticity
Fundamental observations and principles derived from traditional physiological studies of multisensory integration have been difficult to reconcile with computational and psychophysical studies that share the foundation of probabilistic (Bayesian) inference. We review recent work on multisensory integration, focusing on experiments that bridge single-cell electrophysiology, psychophysics, and computational principles. These studies show that multisensory (visual-vestibular) neurons can account for near-optimal cue integration during perception of self-motion. Unlike the nonlinear (super-additive) interactions emphasized in some previous studies, visual-vestibular neurons accomplish near-optimal cue integration through sub-additive linear summation of their inputs, consistent with recent computational theories. Important issues remain to be resolved, including the observation that variations in cue reliability appear to change the weights that neurons apply to their different sensory inputs.
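The near-optimal cue combination summarized above follows the standard reliability-weighted linear rule, in which each cue is weighted by its inverse variance. A minimal sketch (the sigma values are illustrative, not measurements from the reviewed studies):

```python
# Reliability-weighted linear cue combination: weights are proportional
# to inverse variance, and the combined estimate is never less precise
# than the better single cue. Sigmas below are illustrative values.

def optimal_weights(sigma_vis, sigma_vest):
    """Cue weights proportional to reliability (inverse variance)."""
    r_vis, r_vest = 1.0 / sigma_vis**2, 1.0 / sigma_vest**2
    total = r_vis + r_vest
    return r_vis / total, r_vest / total

def combined_sigma(sigma_vis, sigma_vest):
    """Predicted uncertainty of the optimally combined estimate."""
    return (1.0 / (1.0 / sigma_vis**2 + 1.0 / sigma_vest**2)) ** 0.5

w_vis, w_vest = optimal_weights(sigma_vis=2.0, sigma_vest=1.0)
assert w_vest > w_vis                    # more reliable cue gets more weight
assert combined_sigma(2.0, 1.0) < 1.0    # combination beats either cue alone
```

The open issue noted in the abstract maps directly onto this sketch: if cue reliability changes from trial to trial, the weights themselves must change, which is nontrivial for a fixed neural readout.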
Brain plasticity is a common phenomenon across animals, and in many cases it is associated with behavioral transitions. In social insects, such as bees, wasps and ants, plasticity in a particular brain compartment involved in multisensory integration (the mushroom body) has been associated with transitions between tasks differing in cognitive demands. However, in most of these cases, transitions between tasks are age-related, requiring the experimental manipulation of the age structure in the studied colonies to distinguish age-dependent from experience-dependent effects. To better understand the interplay between brain plasticity and behavioral performance, it would therefore be advantageous to study species whose division of labor is not age-dependent. Here, we focus on brain plasticity in the bumblebee Bombus occidentalis, in which division of labor is strongly affected by the individual's body size rather than its age. We show that, as in vertebrates, body size strongly correlates with brain size. We also show that foraging experience, but not age, significantly correlates with an increase in the size of the mushroom body, and in particular one of its components, the medial calyx. Our results support previous findings from other social insects suggesting that the mushroom body plays a key role in experience-based decision making. We also discuss the use of bumblebees as models to analyze neural plasticity and the association between brain size and behavioral performance.
Bombus; Honeybee; Brain size; Cognition
Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in the posterior superior temporal sulcus (pSTS), and this site has been implicated in multimodal integration. Traditionally, multisensory interactions are considered high-level processing that engages heteromodal association cortices (such as STS). Recent work, however, challenges this notion and suggests that multisensory interactions may occur in low-level unimodal sensory cortices. While previous audiovisual speech studies demonstrate that high-level multisensory interactions occur in pSTS, what remains unclear is how early in the processing hierarchy these multisensory interactions may occur. The goal of the present fMRI experiment was to investigate how visual speech can influence activity in auditory cortex above and beyond its response to auditory speech. In an audiovisual speech experiment, subjects were presented with auditory speech with and without congruent visual input. Holding the auditory stimulus constant across the experiment, we investigated how the addition of visual speech influences activity in auditory cortex. We demonstrate that congruent visual speech increases activity in auditory cortex.
Responses of neurons that integrate multiple sensory inputs are traditionally characterized in terms of a set of empirical principles. However, a simple computational framework that accounts for these empirical features of multisensory integration has not been established. We propose that divisive normalization, acting at the stage of multisensory integration, can account for many of the empirical principles of multisensory integration exhibited by single neurons, such as the principle of inverse effectiveness and the spatial principle. This model, which employs a simple functional operation (normalization) for which there is considerable experimental support, also accounts for the recent observation that the mathematical rule by which multisensory neurons combine their inputs changes with cue reliability. The normalization model, which makes a strong testable prediction regarding cross-modal suppression, may therefore provide a simple unifying computational account of the key features of multisensory integration by neurons.
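A toy single-neuron reduction can make the normalization account concrete. In the sketch below each neuron is normalized by its own summed drive (the full model normalizes by a population pool), and the exponent and semi-saturation constant are illustrative assumptions:

```python
# Toy divisive-normalization sketch: expansive summation of the modality
# inputs, divisively normalized by the neuron's own drive. Parameters
# (alpha, n) are illustrative, not the published model's values.

def normalized_response(e_vis, e_aud, alpha=1.0, n=2.0):
    drive = e_vis + e_aud
    return drive**n / (alpha**n + drive**n)

def additivity(e_vis, e_aud):
    """Multisensory response divided by the sum of the two unisensory
    responses: >1 is super-additive, <1 is sub-additive."""
    multi = normalized_response(e_vis, e_aud)
    uni_sum = normalized_response(e_vis, 0.0) + normalized_response(0.0, e_aud)
    return multi / uni_sum

# The principle of inverse effectiveness falls out of the normalization:
assert additivity(0.5, 0.5) > 1.0  # weak inputs combine super-additively
assert additivity(4.0, 4.0) < 1.0  # strong inputs combine sub-additively
```

The same division by pooled activity also yields cross-modal suppression when one input is strong enough to drive the normalization pool without exciting the neuron, which is the model's strong testable prediction.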
Until now, cortical crossmodal plasticity has largely been regarded as the effect of early and complete sensory loss. Recently, massive crossmodal cortical reorganization was demonstrated to result from profound hearing loss in adult ferrets (Allman et al., 2009a). Moderate adult hearing loss, on the other hand, induced not just crossmodal reorganization, but also merged new crossmodal inputs with residual auditory function to generate multisensory neurons. Because multisensory convergence can lead to dramatic levels of response integration when stimuli from more than one modality are present (and can thereby potentially interfere with residual auditory processing), the present investigation sought to evaluate the multisensory properties of auditory cortical neurons in partially deafened adult ferrets. When compared with hearing controls, partially-deaf animals revealed elevated spontaneous activity levels and a dramatic (~2-fold) increase in the proportion of multisensory cortical neurons, although few of these neurons showed multisensory integration. Moreover, a large proportion (68%) of neurons with somatosensory and/or visual inputs was vigorously active in core auditory cortex in the absence of auditory stimulation. Collectively, these results not only demonstrate multisensory dysfunction in core auditory cortical neurons of hearing-impaired adults but also reveal a potential cortical substrate for maladaptive perceptual effects such as tinnitus.
aging; crossmodal plasticity; hearing loss; deafness; tinnitus; cortex
The ability of cat superior colliculus (SC) neurons to synthesize information from different senses depends on influences from two areas of the cortex: the anterior ectosylvian sulcus (AES) and the rostral lateral suprasylvian sulcus (rLS). Reversibly deactivating the inputs to the SC from either of these areas in normal adults severely compromises this ability and the SC-mediated behaviors that depend on it. In the present study we found that removal of these areas in neonatal animals precluded the normal development of multisensory SC processes. At maturity there was a substantial decrease in the incidence of multisensory neurons, and those multisensory neurons that did develop were highly abnormal. Their cross-modal receptive field register was severely compromised, as was their ability to integrate cross-modal stimuli. Apparently, despite the impressive plasticity of the neonatal brain, it cannot compensate for the early loss of these cortices. Surprisingly, however, neonatal removal of either AES or rLS had comparatively minor consequences on these properties. At maturity multisensory SC neurons were quite common: they developed the characteristic spatial register among their unisensory receptive fields and exhibited normal adult-like multisensory integration. These observations suggest that during early ontogeny, when the multisensory properties of SC neurons are being crafted, AES and rLS may have the ability to compensate for the loss of one another’s cortico-collicular influences so that normal multisensory processes can develop in the SC.
development; sensory cortex; cross-modal; plasticity; compensation; multisensory integration
A series of experiments measured the audiovisual stimulus onset asynchrony (SOA_AV) yielding facilitative multisensory integration. We evaluated (1) the range of SOA_AV over which facilitation occurred when unisensory stimuli were weak; (2) whether the range of SOA_AV producing facilitation supported the hypothesis that physiological simultaneity of unisensory activity governs multisensory facilitation; and (3) whether AV multisensory facilitation depended on relative stimulus intensity. We compared response-time distributions to unisensory auditory (A) and visual (V) stimuli with those to AV stimuli over a wide range of SOA_AV (spanning 300 ms in 20 ms increments), across four conditions of varying stimulus intensity. In condition 1, the intensity of unisensory stimuli was adjusted such that d′ ≈ 2. In condition 2, V stimulus intensity was increased (d′ > 4), while A stimulus intensity was as in condition 1. In condition 3, A stimulus intensity was increased (d′ > 4) while V stimulus intensity was as in condition 1. In condition 4, both A and V stimulus intensities were increased to clearly suprathreshold levels (d′ > 4). Across all conditions of stimulus intensity, significant multisensory facilitation occurred exclusively for simultaneously presented A and V stimuli. In addition, facilitation increased as stimulus intensity increased, in disagreement with inverse effectiveness. These results indicate that the requirements for facilitative multisensory integration include both physical and physiological simultaneity.
multisensory integration; neural coactivation; inverse effectiveness; race model; simultaneity; reaction time; d′
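Facilitation of redundant-target reaction times of the kind measured here is conventionally tested against the race-model bound (Miller's inequality): the bimodal response-time CDF may not exceed the sum of the unisensory CDFs unless the two channels coactivate. A minimal sketch with hypothetical reaction-time samples, not data from the study:

```python
# Miller's race-model inequality test on empirical RT distributions.
# The RT samples below are hypothetical, for illustration only.

def cdf(rts, t):
    """Empirical cumulative probability P(RT <= t)."""
    return sum(rt <= t for rt in rts) / len(rts)

def race_model_violation(rt_av, rt_a, rt_v, t):
    """Positive values mean the bimodal CDF exceeds the race-model bound
    P(A <= t) + P(V <= t), implying neural coactivation."""
    return cdf(rt_av, t) - min(1.0, cdf(rt_a, t) + cdf(rt_v, t))

# Illustrative reaction times (ms).
rt_a = [320, 340, 360, 380, 400]
rt_v = [330, 350, 370, 390, 410]
rt_av = [250, 260, 280, 300, 320]  # markedly faster bimodal responses

assert race_model_violation(rt_av, rt_a, rt_v, t=300) > 0  # bound violated
```

In practice the violation is assessed at several quantiles of the RT distribution rather than a single t, but the comparison at each quantile has exactly this form.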
Multisensory neurons in the cat superior colliculus (SC) exhibit significant postnatal maturation. The first multisensory neurons to appear have large receptive fields (RFs) and cannot integrate information across sensory modalities. During the first several months of postnatal life, RFs contract, responses become more robust, and neurons develop the capacity for multisensory integration. Recent data suggest that these changes depend on both sensory experience and active inputs from association cortex. Here, we extend a computational model we previously developed (Cuppini et al., 2010), using a limited set of biologically realistic assumptions, to describe how this maturational process might take place. The model assumes that during early life, cortical-SC synapses are present but not active, and that responses are driven by non-cortical inputs with very large RFs. Sensory experience is modeled by a “training phase” in which the network is repeatedly exposed to modality-specific and cross-modal stimuli at different locations. Cortical-SC synaptic weights are modified during this period as a result of Hebbian rules of potentiation and depression. The result is that RFs are reduced in size and neurons become capable of responding in adult-like fashion to modality-specific and cross-modal stimuli.
visual-acoustic neurons; anterior ectosylvian sulcus; enhancement; Hebb rule; learning mechanisms; inverse effectiveness principle; neural network modelling
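The Hebbian training phase described above can be caricatured in a few lines: co-activation of a cortical input and the SC neuron potentiates the corresponding weight, while pairings with weak presynaptic activity depress it. The update rule and all constants below are illustrative assumptions, not the published model's equations:

```python
# Caricature of a Hebbian training phase for cortico-SC synapses.
# Learning rate, depression rate, and the update rule itself are
# illustrative assumptions, not the published model's equations.

def train(weights, exposures, lr=0.1, decay=0.02):
    for cortical_input, sc_active in exposures:
        if not sc_active:
            continue
        for i, x in enumerate(cortical_input):
            weights[i] += lr * x                          # Hebbian potentiation
            weights[i] -= decay * (1 - x)                 # depression of weak inputs
            weights[i] = max(0.0, min(1.0, weights[i]))   # keep weights in [0, 1]
    return weights

# Repeated cross-modal exposure at one location (strong input at index 0)
# strengthens the co-active synapse and weakens the others, shrinking the
# effective receptive field.
w = train([0.0, 0.0, 0.0], [([1.0, 0.2, 0.0], True)] * 50)
assert w[0] > w[1] > w[2]
```

The qualitative outcome, a weight profile sharply peaked at the repeatedly stimulated location, is the RF contraction the model attributes to experience.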
Multisensory neurons in the superior colliculus (SC) have the capability to integrate signals that belong to the same event, despite being conveyed by different senses. They develop this capability during early life as experience is gained with the statistics of cross-modal events. These adaptations prepare the SC to deal with the cross-modal events that are likely to be encountered throughout life. Here we found that neurons in the adult SC can also adapt to experience with sequentially-ordered cross-modal (visual-auditory or auditory-visual) cues, and that they do so over short periods of time (minutes), as if adapting to a particular stimulus configuration. This short-term plasticity was evident as a rapid increase in the magnitude and duration of responses to the first stimulus, and a shortening of the latency and increase in magnitude of the responses to the second stimulus when they are presented in sequence. The result was that the two responses appeared to merge. These changes were stable in the absence of experience with competing stimulus configurations, outlasted the exposure period, and could not be induced by equivalent experience with sequential within-modal (visual-visual or auditory-auditory) stimuli. A parsimonious interpretation is that the additional SC activity provided by the second stimulus became associated with, and increased the potency of, the afferents responding to the preceding stimulus. This interpretation is consistent with the principle of spike-timing dependent plasticity (STDP), which may provide the basic mechanism for short term or long term plasticity and be operative in both the adult and neonatal SC.
Midbrain; Multisensory; Superior; Colliculus; Plasticity; Visual; Auditory
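The pair-based STDP rule invoked above as a candidate mechanism has a standard form: the weight change depends exponentially on the interval between presynaptic and postsynaptic spikes, with sign determined by their order. Amplitudes and the time constant below are illustrative, not values from the study:

```python
import math

def stdp_dw(dt, a_plus=0.10, a_minus=0.12, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic spike (dt = t_post - t_pre > 0), depress otherwise.
    The window decays exponentially with the spike-time interval (ms)."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

# An afferent that reliably fires just before the SC response to the
# second stimulus is strengthened; the reverse ordering is weakened,
# and the effect fades for long intervals.
assert stdp_dw(10.0) > 0 > stdp_dw(-10.0)
assert stdp_dw(10.0) > stdp_dw(80.0) > 0
```

Under such a rule, repeated sequential cross-modal exposure would systematically strengthen the afferents driven by the first stimulus, consistent with the merging of the two responses reported here.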
The perception of our limbs in space is built upon the integration of visual, tactile, and proprioceptive signals. Accumulating evidence suggests that these signals are combined in areas of premotor, parietal, and cerebellar cortices. However, it remains to be determined whether neuronal populations in these areas integrate hand signals according to basic temporal and spatial congruence principles of multisensory integration. Here, we developed a setup based on advanced 3D video technology that allowed us to manipulate the spatiotemporal relationships of visuotactile (VT) stimuli delivered on a healthy human participant's real hand during fMRI and investigate the ensuing neural and perceptual correlates. Our experiments revealed two novel findings. First, we found responses in premotor, parietal, and cerebellar regions that were dependent upon the spatial and temporal congruence of VT stimuli. This multisensory integration effect required a simultaneous match between the seen and felt postures of the hand, which suggests that congruent visuoproprioceptive signals from the upper limb are essential for successful VT integration. Second, we observed that multisensory conflicts significantly disrupted the default feeling of ownership of the seen real limb, as indexed by complementary subjective, psychophysiological, and BOLD measures. The degree to which self-attribution was impaired could be predicted from the attenuation of neural responses in key multisensory areas. These results elucidate the neural bases of the integration of multisensory hand signals according to basic spatiotemporal principles and demonstrate that the disintegration of these signals leads to “disownership” of the seen real hand.
The present study examined age-related differences in multisensory integration and the role of attention in those differences. The sound-induced flash illusion (the misperception of the number of visual flashes due to the simultaneous presentation of a different number of auditory beeps) was used to examine the strength of multisensory integration in older and younger observers. The effects of integration were examined when discriminating 1–3 flashes, 1–3 beeps, or 1–3 flashes presented with 1–3 beeps. Stimulus conditions were blocked according to these conditions, with baseline (unisensory) performance assessed during the multisensory block. Older participants demonstrated greater multisensory integration (a greater influence of the beeps when judging the number of visual flashes) than younger observers. In a second experiment, the role of attention was assessed using a go/no-go paradigm. The results of Experiment 2 replicated those of Experiment 1. In addition, the strength of the illusion was modulated by the sensory domain of the go/no-go task, though this effect did not differ by age group. In the visual go/no-go task we found a decrease in the illusion, while in the auditory go/no-go task we found an increase in the illusion. These results demonstrate that older individuals exhibit increased multisensory integration compared to younger individuals. Attention was also found to modulate the strength of the sound-induced flash illusion. However, the results suggest that attention was not likely a factor in the age-related differences in multisensory integration.
multisensory integration; aging; attention; vision; audition
Self-organization, a process by which the internal organization of a system changes without supervision, has been proposed as a possible basis for multisensory enhancement (MSE) in the superior colliculus (Anastasio and Patton, 2003). We simplify and extend these results by presenting a simulation using traditional self-organizing maps, intended to understand and simulate MSE as it may generally occur throughout the central nervous system. This simulation of MSE: (1) uses a standard unsupervised competitive learning algorithm, (2) learns from artificially generated activation levels corresponding to driven and spontaneous stimuli from separate and combined input channels, (3) uses a sigmoidal transfer function to generate quantifiable responses to separate inputs, (4) enhances the responses when those same inputs are combined, (5) obeys the inverse effectiveness principle of multisensory integration, and (6) can topographically congregate MSE in a manner similar to that seen in cortex. Thus, the model provides a useful method for evaluating and simulating the development of enhanced interactions between responses to different sensory modalities.
multisensory integration; artificial neural networks; competitive learning; self-organization; computational modeling; superior colliculus
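The competitive-learning scheme summarized in the abstract above can be sketched in a few lines. The following is a minimal illustration, not the published model: the map size, learning rate, stimulus statistics, and sigmoid parameters are all invented for the example. It trains a small winner-take-all map on two-channel activation patterns (driven plus spontaneous activity) and then checks that the combined-channel response of a multisensory unit shows enhancement, with larger proportional enhancement for weaker inputs (inverse effectiveness).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x, gain=8.0, theta=0.8):
    """Sigmoidal transfer function mapping net input to a bounded response."""
    return 1.0 / (1.0 + np.exp(-gain * (x - theta)))

def make_sample():
    """One training pattern: spontaneous background on both channels,
    plus driven activation on the visual channel, auditory channel, or both."""
    kind = rng.integers(3)            # 0: visual only, 1: auditory only, 2: combined
    x = rng.normal(0.05, 0.02, 2)     # spontaneous activity
    if kind in (0, 2):
        x[0] += rng.uniform(0.6, 1.0) # driven visual input
    if kind in (1, 2):
        x[1] += rng.uniform(0.6, 1.0) # driven auditory input
    return np.clip(x, 0.0, None)

# Standard unsupervised competitive learning: the closest unit wins
# and its weights move toward the input.
W = rng.uniform(0.2, 0.4, size=(9, 2))    # 9 map units, 2 input channels
for _ in range(2000):
    x = make_sample()
    win = np.argmin(np.linalg.norm(W - x, axis=1))
    W[win] += 0.05 * (x - W[win])

# The unit with the largest summed weights has become tuned to combined input.
unit = W[np.argmax(W.sum(axis=1))]

def response(v, a):
    return sigmoid(unit @ np.array([v, a]))

def enhancement(v, a):
    """Percent gain of the combined response over the best unisensory response."""
    cm = response(v, a)
    sm = max(response(v, 0.0), response(0.0, a))
    return 100.0 * (cm - sm) / sm

weak, strong = enhancement(0.5, 0.5), enhancement(1.0, 1.0)
```

Because the transfer function is expansive near threshold, weak paired inputs yield a proportionally larger enhancement than strong ones, reproducing inverse effectiveness without any supervised signal.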
The integration of multisensory information is essential to forming meaningful representations of the environment. Adults benefit from related multisensory stimuli, but the extent to which the ability to optimally integrate multisensory inputs for functional purposes is present in children has not been extensively examined. Using a cross-sectional approach, high-density electrical mapping of event-related potentials (ERPs) was combined with behavioral measures to characterize neurodevelopmental changes in basic audiovisual (AV) integration from middle childhood through early adulthood. The data indicated a gradual fine-tuning of multisensory facilitation of performance on an AV simple reaction time task (as indexed by race model violation), which reaches mature levels by about 14 years of age. They also revealed a systematic relationship between age and the brain processes underlying multisensory integration (MSI) in the time frame of the auditory N1 ERP component (∼120 ms). A significant positive correlation between behavioral and neurophysiological measures of MSI suggested that the underlying brain processes contributed to the fine-tuning of multisensory facilitation of behavior that was observed over middle childhood. These findings are consistent with protracted plasticity in a dynamic system and provide a starting point from which future studies can begin to examine the developmental course of multisensory processing in clinical populations.
children; cross-modal; development; electrophysiology; ERP; multisensory integration
Children who experience long periods of auditory deprivation are susceptible to large-scale reorganization of auditory cortical areas responsible for the perception of speech and language. One consequence of this reorganization is that integration of combined auditory and visual information may be altered after hearing is restored with a cochlear implant. Our goal was to investigate the effects of reorganization in a task that examines performance during multisensory integration.
Reaction times to the detection of basic auditory (A), visual (V), and combined auditory-visual (AV) stimuli were examined in a group of normally hearing children, and in two groups of cochlear implanted children: (1) early implanted children in whom cortical auditory evoked potentials (CAEPs) fell within normal developmental limits, and (2) late implanted children in whom CAEPs were outside of normal developmental limits. Miller’s test of the race model inequality was performed for each group in order to examine the effects of auditory deprivation on multisensory integration abilities after implantation.
Results revealed a significant violation of the race model inequality in the normally hearing and early implanted children, but not in the group of late implanted children.
These results suggest that coactivation to multi-modal sensory input cannot explain the decreased reaction times to multi-modal input in late implanted children. These results are discussed with regard to current models of coactivation to redundant sensory information.
Multisensory integration; cochlear implants; redundant signal effect (RSE); race model inequality
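Miller's race model inequality, used in the study above, bounds the reaction-time (RT) distribution for redundant audiovisual stimuli by the sum of the unisensory distributions: P(RT ≤ t | AV) ≤ P(RT ≤ t | A) + P(RT ≤ t | V). A minimal sketch of the empirical check follows; the RT distributions are invented for illustration, and the quantile grid and sample sizes are arbitrary choices rather than the procedure used in the paper.

```python
import numpy as np

def ecdf(rts, ts):
    """Empirical cumulative probability P(RT <= t) at each time in ts."""
    rts = np.asarray(rts)
    return np.mean(rts[:, None] <= ts, axis=0)

def race_model_violation(rt_a, rt_v, rt_av, n_quantiles=20):
    """Return the time points at which the AV CDF exceeds the race model
    bound F_A(t) + F_V(t) (Miller, 1982), plus the largest exceedance.
    Any exceedance is evidence of coactivation rather than a simple race."""
    all_rts = np.concatenate([rt_a, rt_v, rt_av])
    ts = np.quantile(all_rts, np.linspace(0.05, 0.95, n_quantiles))
    bound = np.minimum(ecdf(rt_a, ts) + ecdf(rt_v, ts), 1.0)
    diff = ecdf(rt_av, ts) - bound
    return ts[diff > 0], diff.max()

# Hypothetical RT samples (ms): AV responses faster than a race could produce
rng = np.random.default_rng(1)
rt_a  = rng.normal(320, 40, 500)   # auditory-only trials
rt_v  = rng.normal(330, 40, 500)   # visual-only trials
rt_av = rng.normal(245, 35, 500)   # audiovisual trials, strongly facilitated

violated_ts, max_diff = race_model_violation(rt_a, rt_v, rt_av)
```

In the study above, a significant exceedance in the normally hearing and early implanted groups, but not the late implanted group, is what licenses the coactivation interpretation.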
Single neuron studies provide one foundation for understanding many facets of multisensory integration. These studies have used a variety of criteria for identifying and quantifying multisensory integration, yet the assumptions, criteria, and analytical methods traditionally used to define its principles have not been explicitly discussed. This was not problematic when the field was small, but with rapid growth a number of alternative techniques and models have been introduced, each with its own criteria and sets of implicit assumptions to define and characterize what is thought to be the same phenomenon. The potential for misconception prompted this reexamination of traditional approaches in order to clarify their underlying assumptions and analytic techniques. The objective is to review the traditional quantitative methods advanced in the study of single-neuron physiology in order to appreciate the process of multisensory integration and its impact.
Sensory; Cross-modal; Computation; Vision; Auditory; Somatosensory
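Two of the traditional single-neuron criteria revisited in the review above are easy to state concretely: the multisensory enhancement index (percent change of the combined-modality response relative to the best unisensory response) and an additivity comparison against the sum of the unisensory responses. A minimal sketch follows; the spike counts are invented for illustration.

```python
def enhancement_index(cm, sm_max):
    """Multisensory enhancement index: percent change of the combined
    response (cm) relative to the best unisensory response (sm_max)."""
    return 100.0 * (cm - sm_max) / sm_max

def additivity_ratio(cm, sm_a, sm_v, spont=0.0):
    """Compare cm with the predicted sum of the unisensory responses,
    corrected for the spontaneous rate: >1 superadditive, <1 subadditive."""
    return cm / (sm_a + sm_v - spont)

# Hypothetical trial-mean spike counts for one neuron
sm_a, sm_v, cm = 4.0, 6.0, 15.0
me = enhancement_index(cm, max(sm_a, sm_v))     # 150.0 (% enhancement)
ratio = additivity_ratio(cm, sm_a, sm_v, spont=1.0)  # ≈ 1.67, superadditive
```

A positive index identifies enhancement and a negative one depression; the additivity ratio addresses the separate question of whether the combined response exceeds what linear summation of the unisensory inputs would predict.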