Many tasks have been used to probe human directional knowledge, but relatively little is known about the comparative merits of different means of indicating target azimuth. Few studies have compared action-based versus non-action-based judgments for targets encircling the observer. This comparison promises to illuminate not only the perception of azimuths in the front and rear hemispaces, but also the frames of reference underlying various azimuth judgments, and ultimately their neural underpinnings. We compared a pointing response, in which participants aimed a pointer at a nearby target, with verbal azimuth estimates. Target locations were distributed between 20 and 340 deg. Non-visual pointing responses exhibited large constant errors (up to −32 deg) that tended to increase with target eccentricity. Pointing with eyes open also showed large errors (up to −21 deg). In striking contrast, verbal reports were highly accurate, with constant errors rarely exceeding ±5 deg. Under our testing conditions, these results are not likely to stem from differences in perception-based vs. action-based responses, but instead reflect the frames of reference underlying the pointing and verbal responses. When participants used the pointer to match the egocentric target azimuth rather than the exocentric target azimuth relative to the pointer, errors were reduced.
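To make the frame-of-reference distinction concrete, here is a minimal geometric sketch (our own illustration, not the study's apparatus or analysis; the 0.3 m pointer offset is an assumed value) of how the exocentric angle a displaced pointer must take to physically aim at a nearby target diverges from the target's egocentric azimuth:

```python
import numpy as np

def exocentric_aim_angle(target_az_deg, target_dist_m, pointer_offset_m=0.3):
    """Angle (deg from straight ahead) that a pointer displaced to the
    observer's right must take to aim at the target; compare with the
    egocentric azimuth target_az_deg, measured about the observer."""
    az = np.radians(target_az_deg)
    # Observer at the origin; azimuth measured from straight ahead (+y) toward +x.
    target = np.array([target_dist_m * np.sin(az), target_dist_m * np.cos(az)])
    pointer = np.array([pointer_offset_m, 0.0])  # assumed pointer location
    aim = target - pointer
    return np.degrees(np.arctan2(aim[0], aim[1]))

# For a target at 45 deg azimuth and 1 m distance, the pointer must be set
# to roughly 30 deg, so matching the egocentric azimuth is a different task.
```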
open loop pointing; spatial cognition; perception/action; perceived direction
The present study examined whether the compression of perceived visual space varies according to the type of environmental surface being viewed. To examine this issue, observers made exocentric distance judgments when viewing simulated 3D scenes. In 4 experiments, observers viewed ground and ceiling surfaces and performed either an L-shaped matching task (Experiments 1, 3, and 4) or a bisection task (Experiment 2). Overall, we found considerable compression of perceived exocentric distance on both ground and ceiling surfaces. However, perceived exocentric distance was less compressed on a ground surface than on a ceiling surface. In addition, this ground-surface advantage did not vary systematically as a function of the distance in the scene. These results suggest that perceived visual space is less compressed when viewing a ground surface than when viewing a ceiling surface, and that the perceived layout of a surface varies as a function of the type of surface.
depth; space and scene perception; 3D surface and shape perception
There is controversy over the existence, nature, and cause of error in egocentric distance judgments. One proposal is that the systematic biases often found in explicit judgments of egocentric distance along the ground may be related to recently observed biases in the perceived declination of gaze (Durgin & Li, Attention, Perception, & Psychophysics, in press). To measure perceived egocentric distance nonverbally, observers in a field were asked to position themselves so that their distance from one of two experimenters was equal to the frontal distance between the experimenters. Observers placed themselves too far away, consistent with egocentric distance underestimation. A similar experiment was conducted with vertical frontal extents. Both experiments were replicated in panoramic virtual reality. Perceived egocentric distance was quantitatively consistent with angular bias in perceived gaze declination (1.5 gain). Finally, an exocentric distance-matching task was contrasted with a variant of the egocentric matching task. The egocentric matching data approximate a constant compression of perceived egocentric distance with a power function exponent of nearly 1; exocentric matches had an exponent of about 0.67. The divergent pattern between egocentric and exocentric matches suggests that they depend on different visual cues.
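As a worked illustration of the quantitative claim, a minimal sketch (our own, assuming a 1.5 m eye height; the model form follows the angular-bias idea described above) showing how a 1.5 gain on perceived gaze declination predicts roughly constant compression of egocentric distance, i.e., a power exponent near 1:

```python
import math

def perceived_egocentric_distance(true_dist_m, eye_height_m=1.5, gain=1.5):
    """Exaggerate the declination of gaze to a ground target by `gain`,
    then invert the exaggerated angle through the true eye height."""
    declination = math.atan2(eye_height_m, true_dist_m)
    return eye_height_m / math.tan(gain * declination)

# Beyond a few metres the perceived/true ratio hovers near 2/3 and changes
# only slowly with distance, i.e., approximately linear compression.
for d in (4.0, 8.0, 16.0, 32.0):
    print(d, round(perceived_egocentric_distance(d) / d, 3))
```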
Distance perception; Height perception; Gaze declination; Perceptual scale expansion; Virtual reality
We propose a novel method to probe the depth structure of the pictorial space evoked by paintings. The method involves an exocentric pointing paradigm that allows one to find the slope of the geodesic connection between any pair of points in pictorial space. Since the locations of the points in the picture plane are known, this immediately yields the depth difference between the points. A set of depth differences between all pairs of points from an N-point (N > 2) configuration then yields the configuration in depth up to an arbitrary depth offset. Since an N-point configuration implies N(N−1) (ordered) pairs, the number of observations typically far exceeds the number of inferred depths. This yields a powerful check on the geometrical consistency of the results. We report that the remaining inconsistencies are fully accounted for by the spread encountered in repeated observations. This implies that the concept of ‘pictorial space’ indeed has an empirical significance. The method is analyzed and empirically verified in considerable detail. We report large quantitative interobserver differences, though the results of all observers agree modulo a certain affine transformation that describes the basic cue ambiguities. This is expected on the basis of a formal analysis of monocular optical structure. The method will prove useful in a variety of potential applications.
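The overdetermination argument lends itself to a simple least-squares reconstruction. A minimal sketch (our own illustration, assuming measured pairwise depth differences d_ij ≈ z_j − z_i from the pointing settings) of recovering the depth configuration up to the arbitrary offset:

```python
import numpy as np

def depths_from_differences(pairs, diffs, n_points):
    """Least-squares depths z from measured differences d_ij ≈ z_j − z_i.
    An extra row pins the arbitrary depth offset by forcing mean(z) = 0;
    the fit residual provides the geometric-consistency check."""
    A = np.zeros((len(pairs), n_points))
    for row, (i, j) in enumerate(pairs):
        A[row, i], A[row, j] = -1.0, 1.0
    A = np.vstack([A, np.full(n_points, 1.0 / n_points)])
    b = np.append(np.asarray(diffs, dtype=float), 0.0)
    z, residual, *_ = np.linalg.lstsq(A, b, rcond=None)
    return z, residual
```

With N points and N(N−1) ordered pairs, the system has far more equations than unknowns, which is what makes the consistency check powerful.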
depth perception; distance perception; art perception; visual space; visual field; geometry
The primary auditory cortex (A1) is involved in sound localization. A consistent observation in A1 is a clustered representation of binaural properties, but how spatial tuning varies within binaural clusters is unknown. Here, this issue was addressed in A1 of the pallid bat, a species that relies on passive hearing (as opposed to echolocation) to localize prey. Evidence is presented for systematic representations of sound azimuth within two binaural clusters in the pallid bat A1: the binaural inhibition (EI) and peaked (P) binaural interaction clusters. The representation is not a ‘point-to-point’ space map as seen in the superior colliculus, but takes the form of a systematic increase in the area of activated cortex as azimuth changes from ipsilateral to contralateral locations. The underlying substrate in the EI cluster is a systematic representation of the medial boundary of azimuth receptive fields. The P cluster is activated mostly for sounds near the midline, providing a spatial acoustic fovea. Activity in the P cluster falls off systematically as the sound is moved to more lateral locations. Sensitivity to interaural intensity differences (IID) predicts azimuth tuning in the vast majority of neurons. Azimuth receptive field properties are relatively stable over a moderate range of intensities (20–40 dB above threshold), suggesting that the maps will be similar across the intensities tested. These results challenge the current view that no systematic representation of azimuth is present in A1 and show that such representations are present locally within individual binaural clusters.
Human spatial navigation can be conceptualized as egocentric or exocentric, depending on the navigator’s perspective. While navigational impairment occurs in individuals with cognitive impairment, less is known about navigational abilities in non-demented older adults. Our objective was to develop tests of navigation and study their cognitive correlates in non-demented older adults.
We developed a Local Route Recall Test (LRRT) to examine egocentric navigation and a Floor Maze Test (FMT) to assess exocentric navigation in 127 older adults without dementia or amnestic Mild Cognitive Impairment. Factor analysis was used to reduce neuropsychological test scores to three cognitive factors representing Executive Function/Attention, Verbal Ability, and Memory. We examined relationships between navigational tests and cognitive function (using both cognitive factors and the highest loading individual test on each factor) in a series of regression analyses adjusted for demographic variables (age, sex, and education), medical illnesses, and gait velocity.
The tests were well-tolerated, easy to administer, and reliable in this non-demented and non-MCI sample. Egocentric skills on the LRRT were associated with Executive Function/Attention (B = −0.650, 95% CI −0.139, −0.535) and Memory (B = −0.518, 95% CI −0.063, −4.893) factors. Exocentric navigation on the FMT was related to Executive Function/Attention (B = −8.542, 95% CI −13.357, −3.727).
Our tests appear to assess egocentric and exocentric navigation skills in cognitively-normal older adults, and these skills are associated with specific cognitive processes such as executive function and memory.
Neurologically normal observers misperceive the midpoint of horizontal lines as systematically leftward of veridical center, a phenomenon known as pseudoneglect. Pseudoneglect is attributed to a tonic asymmetry of visuospatial attention favoring left hemispace. Whereas visuospatial attention is biased toward left hemispace, some evidence suggests that audiospatial attention may possess a right hemispatial bias. If spatial attention is supramodal, then the leftward bias observed in visual line bisection should also be expressed in auditory bisection tasks. If spatial attention is modality-specific, then bisection errors in visual and auditory spatial judgments are potentially dissociable. Subjects performed a bisection task for spatial intervals defined by auditory stimuli, as well as a tachistoscopic visual line bisection task. Subjects showed a significant leftward bias in the visual line bisection task and a significant rightward bias in the auditory interval bisection task. Performance across both tasks was, however, significantly positively correlated. These results imply the existence of both modality-specific and supramodal attentional mechanisms, in which visuospatial attention has a prepotent leftward vector and audiospatial attention a prepotent rightward vector. In addition, the biases of visuospatial and audiospatial attention are correlated.
Line Bisection; Pseudoneglect; Visuospatial Attention; Audiospatial Attention
Three signals are used to visually localize targets and stimulate saccades: (1) retinal-location signals for intended saccade amplitude, (2) the sensory-motor transform (SMT) of retinal signals to extra-ocular muscle innervation, and (3) estimates of eye position from extra-retinal signals. We investigated the effects of adapting saccade amplitude to a double-step change in target location on perceived direction. In a flashed-pointing task, subjects pointed an unseen hand at a briefly displayed eccentric target without making a saccade. In a sustained-pointing task, subjects made a horizontal saccade to a double-step target. One second after the second step, they pointed an unseen hand at the final target position. After saccade-shortening adaptation, there was little change in hand-pointing azimuth toward the flashed target, suggesting that most saccade adaptation was caused by changes in the SMT. After saccade-lengthening adaptation, there were small changes in hand-pointing azimuth to flashed targets, indicating that 1/3 of saccade adaptation was caused by changes in estimated retinal-location signals and 2/3 by changes in the SMT. The sustained-pointing task indicated that estimates of eye position adapted inversely with changes in the SMT. Changes in perceived direction resulting from saccade adaptation are mainly influenced by extra-retinal factors, with a small retinal component in the lengthening condition.
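The 1/3 versus 2/3 attribution follows from simple arithmetic on the two measures; a sketch of that logic (our own paraphrase with assumed variable names, not the study's analysis code):

```python
def partition_adaptation(total_amplitude_change_deg, flashed_pointing_shift_deg):
    """Open-loop pointing at a flashed target involves no saccade, so any
    post-adaptation pointing shift is attributed to changed retinal-location
    estimates; the remainder of the amplitude change is attributed to the SMT."""
    retinal_share = flashed_pointing_shift_deg / total_amplitude_change_deg
    return retinal_share, 1.0 - retinal_share

# e.g. a 1 deg pointing shift after 3 deg of lengthening adaptation
# gives shares of 1/3 (retinal) and 2/3 (SMT).
```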
eye movements; saccade; adaptation; extra-retinal; retinal; arm-hand movements; visuomotor control
In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV − V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that this N1 suppression was greater for the spatially congruent stimuli. A very early audiovisual interaction was also found at 40–60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
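The sub-additivity criterion is a pointwise comparison of event-related potential waveforms; a minimal sketch (array names are ours) of the AV − V < A test:

```python
import numpy as np

def av_interaction(erp_av, erp_v, erp_a):
    """Subtract the visual contribution from the audiovisual response and
    compare against the auditory-alone response; negative values at the
    N1/P2 latencies indicate sub-additive suppression."""
    return (np.asarray(erp_av) - np.asarray(erp_v)) - np.asarray(erp_a)
```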
audiovisual integration; spatial congruity; visual prediction
This study reports an experiment investigating the relative effects of intramodal, crossmodal and bimodal cues on visual and auditory temporal order judgements. Pairs of visual or auditory targets, separated by varying stimulus onset asynchronies, were presented to either side of a central fixation (±45°), and participants were asked to identify the target that had occurred first. In some of the trials, one of the targets was preceded by a short, non-predictive visual, auditory or audiovisual cue stimulus. The cue and target stimuli were presented at exactly the same locations in space. The point of subjective simultaneity revealed a consistent spatiotemporal bias towards targets at the cued location. For the visual targets, the intramodal cue elicited the largest, and the crossmodal cue the smallest, bias. The bias elicited by the bimodal cue fell between the intramodal and crossmodal cue biases, with significant differences between all cue types. The pattern for the auditory targets was similar apart from a scaling factor and greater variance, so the differences between the cue conditions did not reach significance. These results provide evidence for multisensory integration in exogenous attentional cueing. The magnitude of the bimodal cueing effect was equivalent to the average of the facilitation elicited by the intramodal and crossmodal cues. Under the assumption that the visual and auditory cues were equally informative, this is consistent with the notion that exogenous attention, like perception, integrates multimodal information in an optimal way.
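The point of subjective simultaneity (PSS) is conventionally estimated by fitting a psychometric function to the order judgments; a generic sketch (not necessarily this study's exact procedure) using a cumulative Gaussian:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def fit_pss(soa_ms, p_cued_first):
    """Fit p('cued-side target first') as a cumulative Gaussian of SOA
    (positive SOA = cued-side target led). The 50% point is the PSS;
    a nonzero PSS measures the bias toward the cued location."""
    model = lambda soa, pss, sigma: norm.cdf(soa, loc=pss, scale=sigma)
    (pss, sigma), _ = curve_fit(model, soa_ms, p_cued_first, p0=(0.0, 50.0))
    return pss, sigma
```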
Exogenous attention; Intramodal; Crossmodal; Multisensory integration
Two theories of distance perception—ie, the angular expansion hypothesis (Durgin and Li, 2011 Attention, Perception, & Psychophysics 73 1856–1870) and the intrinsic bias hypothesis (Ooi et al, 2006 Perception 35 605–624)—are compared. Both theories attribute exocentric distance foreshortening to an exaggeration in perceived slant, but their fundamental geometrical assumptions are very different. The intrinsic bias hypothesis assumes a constant bias in perceived geographical slant of the ground plane and predicts both perceived egocentric and exocentric distances are increasingly compressed. In contrast, the angular expansion hypothesis assumes exaggerations in perceived gaze angle and perceived optical slant. Because the bias functions of the two angular variables are different, it allows the angular expansion hypothesis to distinguish two types of distance foreshortening—the linear compression in perceived egocentric distance and the nonlinear compression in perceived exocentric distance. While the intrinsic bias is proposed only for explaining distance biases, the angular expansion hypothesis provides accounts for a broader range of spatial biases.
egocentric distance; exocentric distance; spatial biases; non-Euclidean; visual space; geographical slant
The relationship between neuronal acuity and behavioral performance was assessed in the barn owl (Tyto alba), a nocturnal raptor renowned for its ability to localize sounds and for the topographic representation of auditory space found in the midbrain. We measured discrimination of sound-source separation using a newly developed procedure involving the habituation and recovery of the pupillary dilation response (PDR). The smallest discriminable change of source location was found to be about two times finer in azimuth than in elevation. Recordings from neurons in the midbrain space map revealed that their spatial tuning, like the spatial discrimination behavior, was also better in azimuth than in elevation by a factor of about two. Because the PDR behavioral assay is mediated by the same circuitry whether discrimination is assessed in azimuth or in elevation, this difference in vertical and horizontal acuity is likely to reflect a true difference in sensory resolution, without additional confounding effects of differences in motor performance in the two dimensions. Our results, therefore, are consistent with the hypothesis that the acuity of the midbrain space map determines auditory spatial discrimination.
Recent models of multisensory integration predict differential weighting of information from different sensory modalities in different spatial directions. This direction-dependent weighting account suggests a heavier weighting for vision in the azimuthal (left-right) direction and a heavier weighting for proprioception in the radial (near-far) direction. Visually-induced reaching errors, as demonstrated in previous ‘mirror illusion’ reaching experiments, should therefore be greater under visual-proprioceptive conflict in the azimuthal direction than in the radial direction. We report two experiments designed to investigate the influence of direction-dependent weighting on the visual bias of reaching movements under the influence of a mirror-illusion. In Experiment 1, participants made reaches straight forward, and showed terminal reaching errors that were biased by vision in both directions, but this bias was significantly greater in the azimuthal as compared to the radial direction. In Experiment 2, participants made reaches from right to left, and showed a significant bias only in the azimuthal direction. These results support the direction-dependent weighting of visual and proprioceptive information, with vision relatively more dominant in the azimuthal direction, and proprioception relatively stronger in the radial direction.
Multisensory; Phantom limb; Mirror illusion; Hand position; Visuomotor
For low vision navigation, misperceiving the locations of hazards can have serious consequences. Potential sources of such misperceptions are hazards that are not visually associated with the ground plane, thus depriving the viewer of important perspective cues for egocentric distance. In Experiment 1, we assessed absolute distance and size judgments for targets on stands under degraded-vision conditions. Normally sighted observers wore blur goggles that severely reduced acuity and contrast, and viewed targets placed on either detectable or undetectable stands. Participants in the detectable-stand condition demonstrated accurate distance judgments, whereas participants in the undetectable-stand condition overestimated target distances. Similarly, the perceived size of targets in the undetectable-stand condition was judged to be significantly larger than in the detectable-stand condition, suggesting a perceptual coupling of size and distance in conditions of degraded vision. In Experiment 2, we investigated size and implied-distance perception of targets elevated above a visible horizon for individuals in an induced state of degraded vision. When participants' size judgments were inserted into the size-distance invariance hypothesis (SDIH) formula, the implied distance to above-horizon objects increased compared with that to objects below the horizon. Together, our results emphasize the importance of salient visible ground-contact information for accurate distance perception. The absence of this ground-contact information could be a source of perceptual errors leading to potential hazards for low vision individuals with severely degraded acuity and contrast sensitivity.
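For reference, the SDIH computation the abstract alludes to, sketched with assumed variable names: at a fixed visual angle, a larger judged size implies a larger distance:

```python
import math

def sdih_implied_distance(judged_size_m, visual_angle_deg):
    """Size-distance invariance hypothesis: S = 2 * D * tan(theta / 2),
    so D = S / (2 * tan(theta / 2)) for judged size S at visual angle theta."""
    theta = math.radians(visual_angle_deg)
    return judged_size_m / (2.0 * math.tan(theta / 2.0))

# e.g. a target subtending 2 deg judged to be 0.35 m wide implies ~10 m.
```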
Low vision; navigation; distance perception; size perception; visual accessibility
In the “flash-beep illusion,” a single light flash is perceived as multiple flashes when presented in close temporal proximity to multiple auditory beeps. Accounts of this illusion argue that temporal auditory information interferes with visual information because temporal acuity is better in audition than in vision. However, it may also be that whenever there are multiple sensory inputs, the interference caused by a to-be-ignored stimulus on an attended stimulus depends on the likelihood that the stimuli are perceived as coming from a single distal source. Here we explore, in human observers, perceptual interactions between competing auditory and visual inputs while varying spatial proximity, which affects object formation. When two spatially separated streams are presented in the same (visual or auditory) modality, temporal judgments about a target stream from one direction are biased by the content of the competing distractor stream. Cross-modally, auditory streams from both target and distractor directions bias the perceived number of events in a target visual stream; however, importantly, the auditory stream from the target direction influences visual judgments more than does the auditory stream from the opposite hemifield. As in the original flash-beep illusion, visual streams weakly influence auditory judgments, regardless of spatial proximity. We also find that perceptual interference in the flash-beep illusion is similar to within-modality interference from a competing same-modality stream. Results reveal imperfect and obligatory within- and across-modality integration of information, and hint that the strength of these interactions depends on object binding.
Many studies have investigated how saccades may affect the internal representation of visual locations across eye-movements. Here we studied instead whether eye-movements can affect auditory spatial cognition. In two experiments, participants judged the relative azimuth (same/different) of two successive sounds presented from a horizontal array of loudspeakers, separated by a 2.5 s delay. Eye-position was either held constant throughout the trial (being directed in a fixed manner to the far left or right of the loudspeaker array), or had to be shifted to the opposite side of the array during the retention delay between the two sounds, after the first sound but before the second. Loudspeakers were either visible (Experiment 1) or occluded from sight (Experiment 2). In both cases, shifting eye-position during the silent delay period affected auditory performance in the successive auditory comparison task, even though the auditory inputs to be judged were equivalent. Sensitivity (d′) for the auditory discrimination was disrupted, specifically when the second sound shifted in the opposite direction to the intervening eye-movement with respect to the first sound. These results indicate that eye-movements affect the internal representation of auditory location.
auditory space perception; eye-movements; spatial cognition
This research examined motor measures of the apparent egocentric location, and perceptual measures of the apparent allocentric location, of a target seen to undergo induced motion (IM). In Experiments 1 and 3, subjects fixated a stationary dot (IM target) while a rectangular surround stimulus (inducing stimulus) oscillated horizontally. The inducing stimulus motion caused the IM target to appear to move in the opposite direction. In Experiment 1, two dots (flashed targets) were flashed above and below the IM target when the surround had reached its leftmost or rightmost displacement from the subject’s midline. Subjects pointed open-loop at either the apparent egocentric location of the IM target or at the bottom of the two flashed targets. On separate trials, subjects made judgments of the Vernier alignment of the IM target with the flashed targets at the endpoints of the surround’s oscillation. The pointing responses were displaced in the direction of the previously seen IM for the IM target, and to a lesser degree for the bottom flashed target. However, the allocentric Vernier judgments demonstrated no perceptual displacement of the IM target relative to the flashed targets. Thus, IM results in a dissociation of egocentric location measures from allocentric location measures. In Experiment 2, pointing and Vernier measures were obtained with stationary horizontally displaced surrounds, and there was no dissociation of egocentric location measures from allocentric location measures. These results indicate that the Roelofs effect did not produce the pattern of results in Experiment 1. In Experiment 3, pointing and Vernier measures were obtained when the surround was at the midpoint of an oscillation. In this case, egocentric pointing responses were displaced in the direction of surround motion (opposite IM) for the IM target, and to a greater degree for the bottom flashed target. However, there was no apparent displacement of the IM target relative to the flashed targets in the allocentric Vernier judgments. Therefore, in Experiment 3 egocentric location measures were again dissociated from allocentric location measures. The results of this experiment also demonstrate that IM does not generate an allocentric displacement illusion analogous to the “flash-lag” effect.
Localization; Induced motion; Motion perception
Human spatial representations of object locations in a room-sized environment were probed for evidence that the object locations were encoded relative not just to the observer (egocentrically) but also to each other (allocentrically). Participants learned the locations of 4 objects and then were blindfolded and either (a) underwent a succession of 70° and 200° whole-body rotations or (b) were fully disoriented and then underwent a similar sequence of 70° and 200° rotations. After each rotation, participants pointed to the objects without vision. Analyses of the pointing errors suggest that as participants lost orientation, represented object directions generally “drifted” off of their true directions as an ensemble, not in random, unrelated directions. This is interpreted as evidence that object-to-object (allocentric) relationships play a large part in the human spatial updating system. However, there was also some evidence that represented object directions occasionally drifted off of their true directions independently of one another, suggesting a lack of allocentric influence. Implications regarding the interplay of egocentric and allocentric information are considered.
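One way to quantify "drifted as an ensemble" versus independent drift is to decompose each trial's signed pointing errors into a shared circular mean and the residual scatter around it; a minimal sketch of that idea (our own illustration, not the paper's analysis):

```python
import numpy as np

def ensemble_drift(signed_errors_deg):
    """Shared drift = circular mean of signed pointing errors across objects;
    scatter = circular SD around it. Large shared drift with low scatter
    indicates the represented directions rotated together as an ensemble."""
    e = np.radians(signed_errors_deg)
    s, c = np.sin(e).mean(), np.cos(e).mean()
    drift = np.degrees(np.arctan2(s, c))
    R = max(np.hypot(s, c), 1e-12)  # resultant length: 1 = perfect coherence
    scatter = np.degrees(np.sqrt(-2.0 * np.log(R)))
    return drift, scatter
```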
spatial representation; egocentric–allocentric frames of reference; spatial updating
We investigated the possible effects of auditory verbal cues on flavor perception and swallow physiology for younger and elder participants. Apple juice, aojiru (grass) juice, and water were ingested with or without auditory verbal cues. Flavor perception and ease of swallowing were measured using a visual analog scale and swallow physiology by surface electromyography and cervical auscultation. The auditory verbal cues had significant positive effects on flavor and ease of swallowing as well as on swallow physiology. The taste score and the ease of swallowing score significantly increased when the participant's anticipation was primed by accurate auditory verbal cues. There was no significant effect of auditory verbal cues on distaste score. Regardless of age, the maximum suprahyoid muscle activity significantly decreased when a beverage was ingested without auditory verbal cues. The interval between the onset of swallowing sounds and the peak timing point of the infrahyoid muscle activity significantly shortened when the anticipation induced by the cue was contradicted in the elderly participant group. These results suggest that auditory verbal cues can improve the perceived flavor of beverages and swallow physiology.
Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modalities of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.
The primary purpose of this study was to examine the effects of non-informative vision and visual interference upon haptic spatial processing, which supposedly derives from an interaction between an allocentric and egocentric reference frame. To this end, a haptic parallelity task served as baseline to determine the participant-dependent biasing influence of the egocentric reference frame. As expected, large systematic participant-dependent deviations from veridicality were observed. In the second experiment we probed the effect of non-informative vision on the egocentric bias. Moreover, orienting mechanisms (gazing directions) were studied with respect to the presentation of haptic information in a specific hemispace. Non-informative vision proved to have a beneficial effect on haptic spatial processing. No effect of gazing direction or hemispace was observed. In the third experiment we investigated the effect of simultaneously presented interfering visual information on the haptic bias. Interfering visual information parametrically influenced haptic performance. The interplay of reference frames that subserves haptic spatial processing was found to be related to both the effects of non-informative vision and visual interference. These results suggest that spatial representations are influenced by direct cross-modal interactions; inter-participant differences in the haptic modality resulted in differential effects of the visual modality.
Haptic; Space perception; Reference frames; Non-informative vision; Visual interference
The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual–auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation.
sound localization; spatial hearing; multisensory integration; auditory plasticity; behavioural training; vision
Two strategies may guide walking to a stationary goal: (1) the optic flow strategy, in which one aligns the direction of locomotion or “heading” specified by optic flow with the visual goal [1, 2]; and (2) the egocentric direction strategy, in which one aligns the locomotor axis with the perceived egocentric direction of the goal [3, 4], where error results in optical target drift. Optic flow appears to dominate steering control in richly structured visual environments [2, 6-8], while the egocentric direction strategy prevails in visually sparse environments [2, 3, 9]. Here we determine whether optic flow also drives visuo-locomotor adaptation in visually structured environments. Participants adapted to walking with the virtual heading direction displaced 10° to the right of the actual walking direction, and were then tested with a normally aligned heading. Two environments, visually structured and visually sparse, were crossed in adaptation and test phases. Adaptation of the walking path was more rapid and complete in the structured environment, with twice the negative aftereffect on path deviation, indicating that optic flow contributes over and above target drift alone. Optic flow thus plays a central role in both online control of walking and adaptation of the visuo-locomotor mapping.
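Adaptation rate and completeness are conventionally summarized by fitting an exponential to per-trial error; a generic sketch (our own, not this study's analysis) of extracting both:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_adaptation(trial, path_error_deg):
    """Fit error(t) = a * exp(-t / tau) + c. A smaller time constant tau
    means faster adaptation; an asymptote c near zero means more complete
    adaptation. The negative aftereffect appears as early test-phase error."""
    model = lambda t, a, tau, c: a * np.exp(-t / tau) + c
    (a, tau, c), _ = curve_fit(model, trial, path_error_deg, p0=(10.0, 5.0, 0.0))
    return tau, c
```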
Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements was studied in normal human subjects as they pointed toward unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six-camera infrared tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short durations of target presentation but not modified by auditory feedback of hand position. Long durations of target presentation gave rise to a higher level of accuracy and were accompanied by early automatic head-orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes in the acoustic cues caused by changes in head orientation to support online motor control. How to design informative acoustic feedback needs to be studied carefully in order to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space.
spatial audition; human; pointing movement kinematics; orienting movements; reaching; auditory-motor mapping; movement sonification