Many tasks have been used to probe human directional knowledge, but relatively little is known about the comparative merits of different means of indicating target azimuth. Few studies have compared action-based versus non-action-based judgments for targets encircling the observer. This comparison promises to illuminate not only the perception of azimuths in the front and rear hemispaces, but also the frames of reference underlying various azimuth judgments, and ultimately their neural underpinnings. We compared a pointing response, in which participants aimed a pointer at a nearby target, with verbal azimuth estimates. Target locations were distributed between 20 and 340 deg. Non-visual pointing responses exhibited large constant errors (up to −32 deg) that tended to increase with target eccentricity. Pointing with eyes open also showed large errors (up to −21 deg). In striking contrast, verbal reports were highly accurate, with constant errors rarely exceeding ±5 deg. Under our testing conditions, these results are not likely to stem from differences in perception-based vs. action-based responses, but instead reflect the frames of reference underlying the pointing and verbal responses. When participants used the pointer to match the egocentric target azimuth rather than the exocentric target azimuth relative to the pointer, errors were reduced.
open loop pointing; spatial cognition; perception/action; perceived direction
The present study examined whether the compression of perceived visual space varies according to the type of environmental surface being viewed. To examine this issue, observers made exocentric distance judgments when viewing simulated 3D scenes. In 4 experiments, observers viewed ground and ceiling surfaces and performed either an L-shaped matching task (Experiments 1, 3, and 4) or a bisection task (Experiment 2). Overall, we found considerable compression of perceived exocentric distance on both ground and ceiling surfaces. However, the perceived exocentric distance was less compressed on a ground surface than on a ceiling surface. In addition, this ground surface advantage did not vary systematically as a function of the distance in the scene. These results suggest that the perceived visual space when viewing a ground surface is less compressed than the perceived visual space when viewing a ceiling surface and that the perceived layout of a surface varies as a function of the type of the surface.
depth; space and scene perception; 3D surface and shape perception
There is controversy over the existence, nature, and cause of error in egocentric distance judgments. One proposal is that the systematic biases often found in explicit judgments of egocentric distance along the ground may be related to recently observed biases in the perceived declination of gaze (Durgin & Li, Attention, Perception, & Psychophysics, in press). To measure perceived egocentric distance nonverbally, observers in a field were asked to position themselves so that their distance from one of two experimenters was equal to the frontal distance between the experimenters. Observers placed themselves too far away, consistent with egocentric distance underestimation. A similar experiment was conducted with vertical frontal extents. Both experiments were replicated in panoramic virtual reality. Perceived egocentric distance was quantitatively consistent with angular bias in perceived gaze declination (1.5 gain). Finally, an exocentric distance-matching task was contrasted with a variant of the egocentric matching task. The egocentric matching data approximate a constant compression of perceived egocentric distance with a power function exponent of nearly 1; exocentric matches had an exponent of about 0.67. The divergent pattern between egocentric and exocentric matches suggests that they depend on different visual cues.
Distance perception; Height perception; Gaze declination; Perceptual scale expansion; Virtual reality
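The quantitative link drawn above between a 1.5 gain in perceived gaze declination and egocentric distance underestimation can be sketched as follows; the eye height and gain values here are illustrative assumptions for the geometry, not fitted parameters from the study:

```python
import math

def perceived_distance(actual_d, eye_height=1.6, gain=1.5):
    """Predicted perceived egocentric ground distance if perceived gaze
    declination is exaggerated by a fixed gain (~1.5) and distance is
    recovered from eye height and the biased declination angle."""
    gamma = math.atan2(eye_height, actual_d)   # true gaze declination (rad)
    gamma_hat = gain * gamma                   # exaggerated perceived declination
    return eye_height / math.tan(gamma_hat)

for d in (2.0, 5.0, 10.0):
    print(f"actual {d:5.1f} m -> perceived {perceived_distance(d):.2f} m")
```

For far targets the declination angle is small, so the exaggeration acts nearly linearly and the predicted compression approaches a constant gain of about 1/1.5, consistent with the near-unity power function exponent reported for the egocentric matches.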
We propose a novel method to probe the depth structure of the pictorial space evoked by paintings. The method involves an exocentric pointing paradigm that allows one to find the slope of the geodesic connection between any pair of points in pictorial space. Since the locations of the points in the picture plane are known, this immediately yields the depth difference between the points. A set of depth differences between all pairs of points from an N-point (N > 2) configuration then yields the configuration in depth up to an arbitrary depth offset. Since an N-point configuration implies N(N−1) (ordered) pairs, the number of observations typically far exceeds the number of inferred depths. This yields a powerful check on the geometrical consistency of the results. We report that the remaining inconsistencies are fully accounted for by the spread encountered in repeated observations. This implies that the concept of ‘pictorial space’ indeed has an empirical significance. The method is analyzed and empirically verified in considerable detail. We report large quantitative interobserver differences, though the results of all observers agree modulo a certain affine transformation that describes the basic cue ambiguities. This is expected on the basis of a formal analysis of monocular optical structure. The method will prove useful in a variety of potential applications.
depth perception; distance perception; art perception; visual space; visual field; geometry
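The reconstruction step described above, recovering N depths (up to an arbitrary offset) from the N(N−1) ordered pairwise depth differences, can be sketched as an overdetermined least-squares problem; the four-point configuration and noise level below are hypothetical:

```python
import numpy as np

# Hypothetical depth values for a 4-point configuration (unknown to the solver).
true_depth = np.array([0.0, 1.2, 0.5, 2.0])
n = len(true_depth)

# All N(N-1) ordered pairs yield noisy depth-difference observations.
rng = np.random.default_rng(0)
rows, obs = [], []
for i in range(n):
    for j in range(n):
        if i != j:
            row = np.zeros(n)
            row[j], row[i] = 1.0, -1.0   # observation of depth[j] - depth[i]
            rows.append(row)
            obs.append(true_depth[j] - true_depth[i] + rng.normal(0, 0.05))

A, b = np.array(rows), np.array(obs)
depth_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
depth_hat -= depth_hat[0]   # fix the arbitrary depth offset at point 0
print(np.round(depth_hat, 2))
```

Because the observations far outnumber the inferred depths, the residuals of this fit provide exactly the geometrical-consistency check the abstract describes.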
The primary auditory cortex (A1) is involved in sound localization. A consistent observation in A1 is a clustered representation of binaural properties, but how spatial tuning varies within binaural clusters is unknown. Here, this issue was addressed in A1 of the pallid bat, a species that relies on passive hearing (as opposed to echolocation) to localize prey. Evidence is presented for systematic representations of sound azimuth within two binaural clusters in the pallid bat A1: the binaural inhibition (EI) and peaked (P) binaural interaction clusters. The representation is not a ‘point-to-point’ space map as seen in the superior colliculus, but in the form of a systematic increase in the area of activated cortex as azimuth changes from ipsilateral to contralateral locations. The underlying substrate in the EI cluster is a systematic representation of the medial boundary of azimuth receptive fields. The P cluster is activated mostly for sounds near the midline, providing a spatial acoustic fovea. Activity in the P cluster falls off systematically as the sound is moved to more lateral locations. Sensitivity to interaural intensity differences (IID) predicts azimuth tuning in the vast majority of neurons. Azimuth receptive field properties are relatively stable across intensity over a moderate range (20–40 dB above threshold) of intensities. This suggests the maps will be similar across the intensities tested. These results challenge the current view that no systematic representation of azimuth is present in A1 and show that such representations are present locally within individual binaural clusters.
Neurologically normal observers misperceive the midpoint of horizontal lines as systematically leftward of veridical center, a phenomenon known as pseudoneglect. Pseudoneglect is attributed to a tonic asymmetry of visuospatial attention favoring left hemispace. Whereas visuospatial attention is biased toward left hemispace, some evidence suggests that audiospatial attention may possess a right hemispatial bias. If spatial attention is supramodal, then the leftward bias observed in visual line bisection should also be expressed in auditory bisection tasks. If spatial attention is modality specific then bisection errors in visual and auditory spatial judgments are potentially dissociable. Subjects performed a bisection task for spatial intervals defined by auditory stimuli, as well as a tachistoscopic visual line bisection task. Subjects showed a significant leftward bias in the visual line bisection task and a significant rightward bias in the auditory interval bisection task. Performance across both tasks was, however, significantly positively correlated. These results imply the existence of both modality specific and supramodal attentional mechanisms where visuospatial attention has a prepotent leftward vector and audiospatial attention has a prepotent rightward vector of attention. In addition, the biases of both visuospatial and audiospatial attention are correlated.
Line Bisection; Pseudoneglect; Visuospatial Attention; Audiospatial Attention
Human spatial navigation can be conceptualized as egocentric or exocentric, depending on the navigator’s perspective. While navigational impairment occurs in individuals with cognitive impairment, less is known about navigational abilities in non-demented older adults. Our objective was to develop tests of navigation and study their cognitive correlates in non-demented older adults.
We developed a Local Route Recall Test (LRRT) to examine egocentric navigation and a Floor Maze Test (FMT) to assess exocentric navigation in 127 older adults without dementia or amnestic Mild Cognitive Impairment. Factor analysis was used to reduce neuropsychological test scores to three cognitive factors representing Executive Function/Attention, Verbal Ability, and Memory. We examined relationships between navigational tests and cognitive function (using both cognitive factors and the highest loading individual test on each factor) in a series of regression analyses adjusted for demographic variables (age, sex, and education), medical illnesses, and gait velocity.
The tests were well-tolerated, easy to administer, and reliable in this non-demented and non-MCI sample. Egocentric skills on the LRRT were associated with Executive Function/Attention (B -0.650, 95% C.I. -0.139, -0.535) and Memory (B -0.518, 95% C.I. -0.063, -4.893) factors. Exocentric navigation on the FMT was related to Executive Function/Attention (B -8.542, 95% C.I. -13.357, -3.727).
Our tests appear to assess egocentric and exocentric navigation skills in cognitively-normal older adults, and these skills are associated with specific cognitive processes such as executive function and memory.
Three signals are used to visually localize targets and stimulate saccades: (1) retinal-location signals for intended saccade amplitude, (2) a sensory-motor transform (SMT) of retinal signals to extra-ocular muscle innervation, and (3) estimates of eye position from extra-retinal signals. We investigated effects of adapting saccade amplitude to a double-step change in target location on perceived direction. In a flashed-pointing task, subjects pointed an unseen hand at a briefly displayed eccentric target without making a saccade. In a sustained-pointing task, subjects made a horizontal saccade to a double-step target. One second after the second step, they pointed an unseen hand at the final target position. After saccade-shortening adaptation, there was little change in hand-pointing azimuth toward the flashed target, suggesting that most saccade adaptation was caused by changes in the SMT. After saccade-lengthening adaptation, there were small changes in hand-pointing azimuth to flashed targets, indicating that 1/3 of saccade adaptation was caused by changes in estimated retinal location signals and 2/3 by changes in the SMT. The sustained-pointing task indicated that estimates of eye position adapted inversely with changes of the SMT. Changes in perceived direction resulting from saccade adaptation are mainly influenced by extra-retinal factors, with a small retinal component in the lengthening condition.
eye movements; saccade; adaptation; extra-retinal; retinal; arm-hand movements; visuomotor control
In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV − V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that this N1 suppression was greater for the spatially congruent stimuli. A very early audiovisual interaction was also found at 40–60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
audiovisual integration; spatial congruity; visual prediction
This study reports an experiment investigating the relative effects of intramodal, crossmodal and bimodal cues on visual and auditory temporal order judgements. Pairs of visual or auditory targets, separated by varying stimulus onset asynchronies, were presented to either side of a central fixation (±45°), and participants were asked to identify the target that had occurred first. In some of the trials, one of the targets was preceded by a short, non-predictive visual, auditory or audiovisual cue stimulus. The cue and target stimuli were presented at the exact same locations in space. The point of subjective simultaneity revealed a consistent spatiotemporal bias towards targets at the cued location. For the visual targets, the intramodal cue elicited the largest, and the crossmodal cue the smallest, bias. The bias elicited by the bimodal cue fell between the intramodal and crossmodal cue biases, with significant differences between all cue types. The pattern for the auditory targets was similar apart from a scaling factor and greater variance, so the differences between the cue conditions did not reach significance. These results provide evidence for multisensory integration in exogenous attentional cueing. The magnitude of the bimodal cueing effect was equivalent to the average of the facilitation elicited by the intramodal and crossmodal cues. Under the assumption that the visual and auditory cues were equally informative, this is consistent with the notion that exogenous attention, like perception, integrates multimodal information in an optimal way.
Exogenous attention; Intramodal; Crossmodal; Multisensory integration
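The optimality claim in the final sentences amounts to reliability-weighted averaging: an MLE-style combination weights each cue by its inverse variance, so equally informative cues each receive weight 0.5 and the bimodal effect equals the unimodal average. A minimal sketch, with hypothetical bias and variance values:

```python
def combine(bias_v, var_v, bias_a, var_a):
    """Reliability-weighted (MLE-style) combination of two cues:
    each cue's weight is proportional to its inverse variance."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)
    w_a = 1 - w_v
    combined_bias = w_v * bias_v + w_a * bias_a
    combined_var = 1 / (1 / var_v + 1 / var_a)  # fused estimate is more reliable
    return combined_bias, combined_var

# With equally reliable cues, the bimodal effect is the unimodal average.
bias, var = combine(30.0, 1.0, 10.0, 1.0)  # hypothetical PSS shifts (ms)
```

Here the hypothetical visual (30 ms) and auditory (10 ms) cueing effects combine to 20 ms, the pattern the study reports for the bimodal cue.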
Two theories of distance perception—ie, the angular expansion hypothesis (Durgin and Li, 2011 Attention, Perception, & Psychophysics 73 1856–1870) and the intrinsic bias hypothesis (Ooi et al, 2006 Perception 35 605–624)—are compared. Both theories attribute exocentric distance foreshortening to an exaggeration in perceived slant, but their fundamental geometrical assumptions are very different. The intrinsic bias hypothesis assumes a constant bias in perceived geographical slant of the ground plane and predicts both perceived egocentric and exocentric distances are increasingly compressed. In contrast, the angular expansion hypothesis assumes exaggerations in perceived gaze angle and perceived optical slant. Because the bias functions of the two angular variables are different, it allows the angular expansion hypothesis to distinguish two types of distance foreshortening—the linear compression in perceived egocentric distance and the nonlinear compression in perceived exocentric distance. While the intrinsic bias is proposed only for explaining distance biases, the angular expansion hypothesis provides accounts for a broader range of spatial biases.
egocentric distance; exocentric distance; spatial biases; non-Euclidean; visual space; geographical slant
The relationship between neuronal acuity and behavioral performance was assessed in the barn owl (Tyto alba), a nocturnal raptor renowned for its ability to localize sounds and for the topographic representation of auditory space found in the midbrain. We measured discrimination of sound-source separation using a newly developed procedure involving the habituation and recovery of the pupillary dilation response (PDR). The smallest discriminable change of source location was found to be about two times finer in azimuth than in elevation. Recordings from neurons in the owl's midbrain space map revealed that their spatial tuning, like the spatial discrimination behavior, was also better in azimuth than in elevation by a factor of about two. Because the PDR behavioral assay is mediated by the same circuitry whether discrimination is assessed in azimuth or in elevation, this difference in vertical and horizontal acuity is likely to reflect a true difference in sensory resolution, without additional confounding effects of differences in motor performance in the two dimensions. Our results, therefore, are consistent with the hypothesis that the acuity of the midbrain space map determines auditory spatial discrimination.
Recent models of multisensory integration predict differential weighting of information from different sensory modalities in different spatial directions. This direction-dependent weighting account suggests a heavier weighting for vision in the azimuthal (left-right) direction and a heavier weighting for proprioception in the radial (near-far) direction. Visually-induced reaching errors, as demonstrated in previous ‘mirror illusion’ reaching experiments, should therefore be greater under visual-proprioceptive conflict in the azimuthal direction than in the radial direction. We report two experiments designed to investigate the influence of direction-dependent weighting on the visual bias of reaching movements under the influence of a mirror-illusion. In Experiment 1, participants made reaches straight forward, and showed terminal reaching errors that were biased by vision in both directions, but this bias was significantly greater in the azimuthal as compared to the radial direction. In Experiment 2, participants made reaches from right to left, and showed a significant bias only in the azimuthal direction. These results support the direction-dependent weighting of visual and proprioceptive information, with vision relatively more dominant in the azimuthal direction, and proprioception relatively stronger in the radial direction.
Multisensory; Phantom limb; Mirror illusion; Hand position; Visuomotor
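The direction-dependent weighting account can be sketched as per-axis cue fusion, with a heavier visual weight on the azimuthal axis and a heavier proprioceptive weight on the radial axis; the hand positions and weight values below are illustrative assumptions, not estimates from the experiments:

```python
import numpy as np

def predicted_endpoint(seen, felt, w_vision):
    """Per-axis weighted fusion of visual and proprioceptive hand position.
    w_vision holds the visual weight for the azimuthal (x) and radial (y)
    axes; proprioception receives the complementary weight on each axis."""
    seen, felt, w = map(np.asarray, (seen, felt, w_vision))
    return w * seen + (1 - w) * felt

seen = np.array([10.0, 10.0])  # mirror-displaced visual hand position (cm)
felt = np.array([0.0, 0.0])    # proprioceptively felt hand position (cm)
end = predicted_endpoint(seen, felt, w_vision=[0.8, 0.3])
# the visually driven error is larger in azimuth (x) than in depth (y)
```

With a visual weight of 0.8 azimuthally but only 0.3 radially, the same 10 cm visual displacement biases the endpoint more than twice as strongly in the azimuthal direction, matching the asymmetry reported in Experiment 1.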
For low vision navigation, misperceiving the locations of hazards can have serious consequences. Potential sources of such misperceptions are hazards that are not visually associated with the ground plane, thus depriving the viewer of important perspective cues for egocentric distance. In Experiment 1, we assessed absolute distance and size judgments to targets on stands under degraded vision conditions. Normally sighted observers wore blur goggles that severely reduced acuity and contrast, and viewed targets placed on either detectable or undetectable stands. Participants in the detectable stand condition demonstrated accurate distance judgments, whereas participants in the undetectable stand condition overestimated target distances. Similarly, the perceived size of targets in the undetectable stand condition was judged to be significantly larger than those in the detectable stand condition, suggesting a perceptual coupling of size and distance in conditions of degraded vision. In Experiment 2, we investigated size and implied distance perception of targets elevated above a visible horizon for individuals in an induced state of degraded vision. When participants’ size judgments are inserted into the size-distance invariance hypothesis (SDIH) formula, distance to above-horizon objects increased compared to those below the horizon. Together, our results emphasize the importance of salient visible ground-contact information for accurate distance perception. The absence of this ground-contact information could be the source of perceptual errors leading to potential hazards for low vision individuals with severely degraded acuity and contrast sensitivity.
Low vision; navigation; distance perception; size perception; visual accessibility
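The SDIH inference used in Experiment 2 can be made concrete: given a size judgment and the target's visual angle, the implied distance follows from S = D·tan(θ). The size and angle values below are illustrative assumptions, not data from the study:

```python
import math

def implied_distance(perceived_size_m, visual_angle_deg):
    """Size-distance invariance hypothesis (SDIH): S = D * tan(theta),
    so a size judgment plus the target's visual angle implies a distance."""
    return perceived_size_m / math.tan(math.radians(visual_angle_deg))

# A target subtending 2 deg judged as 0.50 m tall vs. 0.35 m tall:
d_large = implied_distance(0.50, 2.0)
d_small = implied_distance(0.35, 2.0)
```

Because the visual angle is fixed, a larger size judgment necessarily implies a farther distance, which is how the larger size judgments for above-horizon targets translate into increased implied distance.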
This research examined motor measures of the apparent egocentric location and perceptual measures of the apparent allocentric location of a target that was being seen to undergo induced motion (IM). In Experiments 1 and 3, subjects fixated a stationary dot (IM target) while a rectangular surround stimulus (inducing stimulus) oscillated horizontally. The inducing stimulus motion caused the IM target to appear to move in the opposite direction. In Experiment 1, two dots (flashed targets) were flashed above and below the IM target when the surround had reached its leftmost or rightmost displacement from the subject’s midline. Subjects pointed open-loop at either the apparent egocentric location of the IM target or at the bottom of the two flashed targets. On separate trials, subjects made judgments of the Vernier alignment of the IM target with the flashed targets at the endpoints of the surround’s oscillation. The pointing responses were displaced in the direction of the previously seen IM for the IM target and to a lesser degree for the bottom flashed target. However, the allocentric Vernier judgments demonstrated no perceptual displacement of the IM target relative to the flashed targets. Thus, IM results in a dissociation of egocentric location measures from allocentric location measures. In Experiment 2, pointing and Vernier measures were obtained with stationary horizontally displaced surrounds and there was no dissociation of egocentric location measures from allocentric location measures. These results indicate that the Roelofs effect did not produce the pattern of results in Experiment 1. In Experiment 3, pointing and Vernier measures were obtained when the surround was at the midpoint of an oscillation. In this case, egocentric pointing responses were displaced in the direction of surround motion (opposite IM) for the IM target and to a greater degree for the bottom flashed target.
However, there was no apparent displacement of the IM target relative to the flashed targets in the allocentric Vernier judgments. Therefore, in Experiment 3 egocentric location measures were again dissociated from allocentric location measures. The results of this experiment also demonstrate that IM does not generate an allocentric displacement illusion analogous to the “flash-lag” effect.
Localization; Induced Motion; motion perception
Many studies have investigated how saccades may affect the internal representation of visual locations across eye-movements. Here we studied instead whether eye-movements can affect auditory spatial cognition. In two experiments, participants judged the relative azimuth (same/different) of two successive sounds presented from a horizontal array of loudspeakers, separated by a 2.5-s delay. Eye-position was either held constant throughout the trial (being directed in a fixed manner to the far left or right of the loudspeaker array), or had to be shifted to the opposite side of the array during the retention delay between the two sounds, after the first sound but before the second. Loudspeakers were either visible (Experiment 1) or occluded from sight (Experiment 2). In both cases, shifting eye-position during the silent delay-period affected auditory performance in the successive auditory comparison task, even though the auditory inputs to be judged were equivalent. Sensitivity (d′) for the auditory discrimination was disrupted, specifically when the second sound shifted in the opposite direction to the intervening eye-movement with respect to the first sound. These results indicate that eye-movements affect the internal representation of auditory location.
auditory space perception; eye-movements; spatial cognition
The primary purpose of this study was to examine the effects of non-informative vision and visual interference upon haptic spatial processing, which supposedly derives from an interaction between an allocentric and egocentric reference frame. To this end, a haptic parallelity task served as baseline to determine the participant-dependent biasing influence of the egocentric reference frame. As expected, large systematic participant-dependent deviations from veridicality were observed. In the second experiment we probed the effect of non-informative vision on the egocentric bias. Moreover, orienting mechanisms (gazing directions) were studied with respect to the presentation of haptic information in a specific hemispace. Non-informative vision proved to have a beneficial effect on haptic spatial processing. No effect of gazing direction or hemispace was observed. In the third experiment we investigated the effect of simultaneously presented interfering visual information on the haptic bias. Interfering visual information parametrically influenced haptic performance. The interplay of reference frames that subserves haptic spatial processing was found to be related to both the effects of non-informative vision and visual interference. These results suggest that spatial representations are influenced by direct cross-modal interactions; inter-participant differences in the haptic modality resulted in differential effects of the visual modality.
Haptic; Space perception; Reference frames; Non-informative vision; Visual interference
Human spatial representations of object locations in a room-sized environment were probed for evidence that the object locations were encoded relative not just to the observer (egocentrically) but also to each other (allocentrically). Participants learned the locations of 4 objects and then were blindfolded and either (a) underwent a succession of 70° and 200° whole-body rotations or (b) were fully disoriented and then underwent a similar sequence of 70° and 200° rotations. After each rotation, participants pointed to the objects without vision. Analyses of the pointing errors suggest that as participants lost orientation, represented object directions generally “drifted” off of their true directions as an ensemble, not in random, unrelated directions. This is interpreted as evidence that object-to-object (allocentric) relationships play a large part in the human spatial updating system. However, there was also some evidence that represented object directions occasionally drifted off of their true directions independently of one another, suggesting a lack of allocentric influence. Implications regarding the interplay of egocentric and allocentric information are considered.
spatial representation; egocentric–allocentric frames of reference; spatial updating
Two strategies may guide walking to a stationary goal: (1) the optic flow strategy, in which one aligns the direction of locomotion or “heading” specified by optic flow with the visual goal [1, 2]; and (2) the egocentric direction strategy, in which one aligns the locomotor axis with the perceived egocentric direction of the goal [3, 4], and error results in optical target drift. Optic flow appears to dominate steering control in richly structured visual environments [2, 6-8], while the egocentric direction strategy prevails in visually sparse environments [2, 3, 9]. Here we determine whether optic flow also drives visuo-locomotor adaptation in visually structured environments. Participants adapted to walking with the virtual heading direction displaced 10° to the right of the actual walking direction, and were then tested with a normally aligned heading. Two environments, visually structured and visually sparse, were crossed in adaptation and test phases. Adaptation of the walking path was more rapid and complete in the structured environment, with twice the negative aftereffect on path deviation, indicating that optic flow contributes over and above target drift alone. Optic flow thus plays a central role in both online control of walking and adaptation of the visuo-locomotor mapping.
The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual–auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation.
sound localization; spatial hearing; multisensory integration; auditory plasticity; behavioural training; vision
During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as a consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of such final errors triggers error-correction processes that eventually result in sensori-motor compensation, opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot the visual target locations to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual and visual-proprioceptive shift) were recorded before and after first-person perspective observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points to the right side of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, being exposed to such a constant visual error induced in the observer the illusion “to feel” the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by the mere observation of movement errors.
The major functions of the auditory system are recognition (what is the sound) and localization (where is the sound). Although each of these has received considerable attention, rarely are they studied in combination. Furthermore, the stimuli used in the bulk of studies did not represent sound location in real environments and ignored the effects of reverberation. Another ignored dimension is the distance of a sound source. Finally, there is a scarcity of studies conducted in unanesthetized animals. We illustrate a set of efficient methods that overcome these shortcomings. We use the virtual auditory space (VAS) method to efficiently present sounds at different azimuths, different distances, and in different environments. Additionally, this method allows for efficient switching between binaural and monaural stimulation and alteration of acoustic cues singly or in combination to elucidate the neural mechanisms underlying localization and recognition. Such procedures cannot be performed with real sound field stimulation. Our research is designed to address the following questions: Are IC neurons specialized to process what and where auditory information? How do reverberation and the distance of the sound source affect this processing? How do IC neurons represent sound source distance? Are the neural mechanisms underlying envelope processing binaural or monaural?
AM envelope processing; inferior colliculus; sound localization; auditory distance; reverberation
The orderly projections from retina to superior colliculus (SC) preserve a continuous retinotopic representation of the visual world. The development of retinocollicular maps depends on a combination of molecular guidance cues and patterned neural activity. Here we characterize the functional retinocollicular maps in mice lacking the guidance molecules ephrins-A2, -A3, and -A5 and in mice deficient in both ephrin-As and structured spontaneous retinal activity, using a method of Fourier imaging of intrinsic signals. We find that the SC of ephrin-A2/A3/A5 triple knockout mice contains functional maps that are disrupted selectively along the nasotemporal (azimuth) axis of visual space. These maps are discontinuous, with patches of SC responding to topographically incorrect locations. The patches disappear in mice that are deficient in both ephrin-As and structured activity, resulting in a near-absence of an azimuth map in the SC. These results indicate that ephrin-As guide the formation of functional topography in the SC, and that patterned retinal activity clusters cells based on their correlated firing patterns. Comparison of the SC and visual cortical mapping defects in these mice suggests that although ephrin-As are required for mapping in both SC and visual cortex, ephrin-A-independent mapping mechanisms are more important in visual cortex than in the SC.
visual system; topography; axonal guidance; optical imaging; chemoaffinity; retinal waves
Seeing the image of a newscaster on a television set causes us to think that the sound coming from the loudspeaker is actually coming from the screen. How images capture sounds is mysterious because the brain uses different methods for determining the locations of visual vs. auditory stimuli. The retina senses the locations of visual objects with respect to the eyes, whereas differences in sound characteristics across the ears indicate the locations of sound sources referenced to the head. Here, we tested which reference frame (RF) is used when vision recalibrates perceived sound locations.
Visually guided biases in sound localization were induced in seven humans and two monkeys who made eye movements to auditory or audio-visual stimuli. On audio-visual (training) trials, the visual component of the targets was displaced laterally by ~5°. Interleaved auditory-only (probe) trials served to evaluate the effect of experience with mismatched visual stimuli on auditory localization. We found that the displaced visual stimuli induced a ventriloquism aftereffect in both humans (~50% of the displacement size) and monkeys (~25%), but only for locations around the trained spatial region, showing that audio-visual recalibration can be spatially specific.
We tested the reference frame in which the recalibration occurs. On probe trials, we varied eye position relative to the head to dissociate head- from eye-centered RFs. Results indicate that both humans and monkeys use a mixture of the two RFs, suggesting that the neural mechanisms involved in ventriloquism occur in brain region(s) employing a hybrid RF for encoding spatial information.
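The logic of the eye-position manipulation described above can be sketched as a simple mixture model. Everything below (the function name, the Gaussian falloff, and all parameter values) is an illustrative assumption, not the study's actual analysis: a purely head-centered aftereffect stays put when gaze shifts, a purely eye-centered one moves with the eyes, and a hybrid frame predicts an intermediate pattern.

```python
# Hypothetical sketch of the head- vs eye-centered dissociation logic.
# All names and parameter values are assumptions for illustration.
import math

def predicted_shift(target_head_deg, eye_pos_deg, trained_head_deg,
                    shift_deg=5.0, weight_eye=0.5, sigma=10.0):
    """Predicted localization shift (deg) for an auditory probe.

    weight_eye = 0 -> purely head-centered aftereffect;
    weight_eye = 1 -> purely eye-centered aftereffect.
    The aftereffect is assumed to fall off as a Gaussian around the
    trained location in each frame (spatial specificity), with training
    performed at eye_pos = 0 so the trained location coincides in both.
    """
    target_eye_deg = target_head_deg - eye_pos_deg   # retinal coordinates
    head_term = math.exp(-((target_head_deg - trained_head_deg) ** 2)
                         / (2 * sigma ** 2))
    eye_term = math.exp(-((target_eye_deg - trained_head_deg) ** 2)
                        / (2 * sigma ** 2))
    return shift_deg * ((1 - weight_eye) * head_term + weight_eye * eye_term)

# With gaze straight ahead the two frames agree, so the probe shows the
# full aftereffect regardless of the frame weighting:
print(predicted_shift(0.0, 0.0, 0.0))                   # -> 5.0
# Shifting gaze 10 deg dissociates the frames: a pure head-centered model
# still predicts the full shift, a pure eye-centered model a reduced one,
# and a hybrid model falls in between.
print(predicted_shift(0.0, 10.0, 0.0, weight_eye=0.0))  # -> 5.0
print(predicted_shift(0.0, 10.0, 0.0, weight_eye=1.0))  # reduced
```

Comparing the observed probe-trial shifts against these two extreme predictions is what licenses the conclusion that both species use a mixture of the two frames.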
visual calibration of auditory space; humans; monkeys; reference frame of auditory space representation; ventriloquism; cross-modal adaptation