Many tasks have been used to probe human directional knowledge, but relatively little is known about the comparative merits of different means of indicating target azimuth. Few studies have compared action-based versus non-action-based judgments for targets encircling the observer. This comparison promises to illuminate not only the perception of azimuths in the front and rear hemispaces, but also the frames of reference underlying various azimuth judgments, and ultimately their neural underpinnings. We compared a response in which participants aimed a pointer at a nearby target with verbal estimates of target azimuth. Target locations were distributed between 20 and 340 deg. Non-visual pointing responses exhibited large constant errors (up to −32 deg) that tended to increase with target eccentricity. Pointing with eyes open also showed large errors (up to −21 deg). In striking contrast, verbal reports were highly accurate, with constant errors rarely exceeding ±5 deg. Under our testing conditions, these results are unlikely to stem from differences between perception-based and action-based responses; instead, they reflect the frames of reference underlying the pointing and verbal responses. When participants used the pointer to match the egocentric target azimuth rather than the exocentric target azimuth relative to the pointer, errors were reduced.
open loop pointing; spatial cognition; perception/action; perceived direction
There is controversy over the existence, nature, and cause of error in egocentric distance judgments. One proposal is that the systematic biases often found in explicit judgments of egocentric distance along the ground may be related to recently observed biases in the perceived declination of gaze (Durgin & Li, Attention, Perception, & Psychophysics, in press). To measure perceived egocentric distance nonverbally, observers in a field were asked to position themselves so that their distance from one of two experimenters was equal to the frontal distance between the experimenters. Observers placed themselves too far away, consistent with egocentric distance underestimation. A similar experiment was conducted with vertical frontal extents. Both experiments were replicated in panoramic virtual reality. Perceived egocentric distance was quantitatively consistent with angular bias in perceived gaze declination (1.5 gain). Finally, an exocentric distance-matching task was contrasted with a variant of the egocentric matching task. The egocentric matching data approximate a constant compression of perceived egocentric distance with a power function exponent of nearly 1; exocentric matches had an exponent of about 0.67. The divergent pattern between egocentric and exocentric matches suggests that they depend on different visual cues.
Distance perception; Height perception; Gaze declination; Perceptual scale expansion; Virtual reality
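The claimed quantitative link between a 1.5 gain in perceived gaze declination and egocentric distance underestimation can be sketched with a toy trigonometric model (an illustrative sketch only: it assumes level ground, a veridically perceived eye height of 1.6 m, and simple back-projection geometry; the numbers are not the study's data):

```python
import math

GAIN = 1.5  # expansion of perceived gaze declination reported above


def perceived_distance(actual_distance, eye_height=1.6):
    """Perceived egocentric distance under an angular-expansion model.

    The physical gaze declination to a point on the ground is
    atan(eye_height / actual_distance); if that angle is perceptually
    expanded by GAIN, back-projecting through the (assumed veridical)
    eye height yields a compressed perceived distance.
    """
    declination = math.atan2(eye_height, actual_distance)
    return eye_height / math.tan(GAIN * declination)


# A target 10 m away appears roughly a third closer under this model,
# consistent with observers standing too far away when matching a
# frontal extent to an egocentric distance.
print(round(perceived_distance(10.0), 2))  # ~6.6
```

Under this model the ratio of perceived to actual distance is nearly constant across moderate distances, which matches the abstract's description of an approximately linear (exponent near 1) egocentric compression.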
“Exocentric pointing in the visual field” involves the setting of a pointer so as to visually point to a target, where both pointer and target are objects in the visual field. Phenomenologically, such pointings show systematic deviations from veridicality of several degrees. The errors are very small in the vertical and horizontal directions, but appreciable in oblique directions. The magnitude of the error is largely independent of the distance between pointer and target for stretches in the range 2–27°. A general conclusion is that the visual field cannot be described in terms of one of the classical homogeneous spaces, or, alternatively, that the results from pointing involve mechanisms that come after geometry proper has been established.
orientation; direction; anisotropy; geometry; space
Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors in representing the target locations and updating. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor. Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate.
Spatial image; extended touch; haptic perception; spatial cognition
In order to test how gravitational information affects the choice of stable reference frame used to control posture and voluntary movement, we analysed forearm stabilisation during a sit-to-stand movement under the microgravity conditions obtained during parabolic flights. In this study, we hypothesised that in response to the transient loss of graviceptive information, postural adaptation might involve several strategies of segmental stabilisation, depending on the subject's perceptual typology (dependence or independence with respect to the visual field). More precisely, we expected a continuum of postural strategies across subjects, with 1) at one extreme the maintenance of an exocentric reference frame and 2) at the other the re-activation of childhood strategies consisting of adopting an egocentric reference frame.
To test this, a forearm stabilisation task combined with a sit-to-stand movement was performed with eyes closed by 11 subjects during parabolic flight campaigns. Kinematic data were collected during 1-g and 0-g periods. Postural adaptation to the microgravity constraint may be described as a continuum of strategies ranging from the use of an exocentric to an egocentric reference frame for segmental stabilisation. At one extreme, subjects systematically used an exocentric frame to control each of their body segments independently, as under normogravity conditions. At the other, subjects systematically adopted an egocentric reference frame to control forearm stabilisation. A strong correlation was found between the mode of segmental stabilisation used and the subjects' perceptual typology (dependence or independence with respect to the visual field).
The results of this study reveal a range of subject typologies, from those that stabilise the forearm in a mainly exocentric reference frame to those that do so in a mainly egocentric reference frame.
The extent to which actual movements and imagined movements maintain a shared internal representation has been a matter of much scientific debate. Of the studies examining such questions, few have directly compared actual full-body movements to imagined movements through space. Here we used a novel continuous pointing method to a) provide a more detailed characterization of self-motion perception during actual walking and b) compare the pattern of responding during actual walking to that which occurs during imagined walking.
This continuous pointing method requires participants to view a target and continuously point towards it as they walk, or imagine walking, past it along a straight, forward trajectory. By measuring changes in the pointing direction of the arm, we were able to determine participants' perceived/imagined location at each moment during the trajectory and, hence, perceived/imagined self-velocity during the entire movement. The specific pattern of pointing behaviour revealed during sighted walking was also observed during blind walking. Specifically, a peak in arm azimuth velocity was observed upon target passage, and a strong correlation was observed between arm azimuth velocity and pointing elevation. Importantly, this characteristic pattern of pointing was not consistently observed during imagined self-motion.
Overall, the spatial updating processes that occur during actual self-motion were not evidenced during imagined movement. Because of the rich description of self-motion perception afforded by continuous pointing, this method is expected to have significant implications for several research areas, including those related to motor imagery and spatial cognition and to applied fields for which mental practice techniques are common (e.g. rehabilitation and athletics).
Sound localization studies over the past century have predominantly been concerned with directional accuracy for far-field sources. Few studies have examined near-field sources and distance perception. The current study examines localization and pointing accuracy for source positions in the peripersonal space, specifically those associated with a typical tabletop surface. Accuracy is studied with respect to the reporting hand (dominant or non-dominant) for auditory sources. Results show no effect of the reporting hand, with azimuthal errors increasing equally for the most extreme source positions. Distance errors show a consistent compression toward the center of the reporting area. A second evaluation compares auditory and visual stimuli to examine any bias in the reporting protocol or biomechanical difficulties. No common bias error was observed between auditory and visual stimuli, indicating that reporting errors were not due to biomechanical limitations in the pointing task. A final evaluation compares real auditory sources and anechoic-condition virtual sources created using binaural rendering. Results showed increased azimuthal errors, with virtual source positions being consistently overestimated to more lateral positions, while no significant difference in distance perception was observed, indicating a deficiency in the binaural rendering condition relative to the real stimuli situation. Various potential reasons for this discrepancy are discussed, with several proposals for improving distance perception in peripersonal virtual environments.
auditory localization; near-field pointing; nearby sound sources; virtual auditory display; spatial hearing; sound target; visual target
Three signals are used to visually localize targets and stimulate saccades: (1) retinal-location signals for intended saccade amplitude, (2) sensory-motor transform (SMT) of retinal signals to extra-ocular muscle innervation, (3) estimates of eye position from extra-retinal signals. We investigated effects of adapting saccade amplitude to a double-step change in target location on perceived direction. In a flashed-pointing task, subjects pointed an unseen hand at a briefly displayed eccentric target without making a saccade. In a sustained-pointing task, subjects made a horizontal saccade to a double-step target. One second after the second step, they pointed an unseen hand at the final target position. After saccade-shortening adaptation, there was little change in hand-pointing azimuth toward the flashed target, suggesting that most saccade adaptation was caused by changes in SMT. After saccade-lengthening adaptation, there were small changes in hand-pointing azimuth to flashed targets, indicating that 1/3 of saccade adaptation was caused by changes in estimated retinal location signals and 2/3 by changes in SMT. The sustained hand-pointing task indicated that estimates of eye position adapted inversely with changes of SMT. Changes in perceived direction resulting from saccade adaptation are mainly influenced by extra-retinal factors with a small retinal component in the lengthening condition.
eye movements; saccade; adaptation; extra-retinal; retinal; arm-hand movements; visuomotor control
The present study examined whether the compression of perceived visual space varies according to the type of environmental surface being viewed. To examine this issue, observers made exocentric distance judgments when viewing simulated 3D scenes. In 4 experiments, observers viewed ground and ceiling surfaces and performed either an L-shaped matching task (Experiments 1, 3, and 4) or a bisection task (Experiment 2). Overall, we found considerable compression of perceived exocentric distance on both ground and ceiling surfaces. However, the perceived exocentric distance was less compressed on a ground surface than on a ceiling surface. In addition, this ground surface advantage did not vary systematically as a function of the distance in the scene. These results suggest that the perceived visual space when viewing a ground surface is less compressed than the perceived visual space when viewing a ceiling surface and that the perceived layout of a surface varies as a function of the type of the surface.
depth; space and scene perception; 3D surface and shape perception
Neuroimaging studies suggest that a fronto-parietal network is activated when we expect visual information to appear at a specific spatial location. Here we examined whether a similar network is involved for auditory stimuli. We used sparse fMRI to infer brain activation while participants performed analogous visual and auditory tasks. On some trials, participants were asked to discriminate the elevation of a peripheral target. On other trials, participants made a nonspatial judgment. We contrasted trials where the participants expected a peripheral spatial target to those where they were cued to expect a central target. Crucially, our statistical analyses were based on trials where stimuli were anticipated but not presented, allowing us to directly infer perceptual orienting independent of perceptual processing. This is the first neuroimaging study to use an orthogonal-cuing paradigm (with cues predicting azimuth and responses involving elevation discrimination). This aspect of our paradigm is important, as behavioral cueing effects in audition are classically only observed when participants are asked to make spatial judgments. We observed similar fronto-parietal activation for both vision and audition. In a second experiment that controlled for stimulus properties and task difficulty, participants made spatial and temporal discriminations about musical instruments. We found that the pattern of brain activation for spatial selection of auditory stimuli was remarkably similar to what we found in our first experiment. Collectively, these results suggest that the neural mechanisms supporting spatial attention are largely similar across both visual and auditory modalities.
Maps are a mainstay of visual, somatosensory, and motor coding in many species. However, auditory maps of space have not been reported in the primate brain. Instead, recent studies have suggested that sound location may be encoded via broadly responsive neurons whose firing rates vary roughly proportionately with sound azimuth. Within frontal space, maps and such rate codes involve different response patterns at the level of individual neurons. Maps consist of neurons exhibiting circumscribed receptive fields, whereas rate codes involve open-ended response patterns that peak in the periphery. This coding format discrepancy therefore poses a potential problem for brain regions responsible for representing both visual and auditory information. Here, we investigated the coding of auditory space in the primate superior colliculus (SC), a structure known to contain visual and oculomotor maps for guiding saccades. We report that, for visual stimuli, neurons showed circumscribed receptive fields consistent with a map, but for auditory stimuli, they had open-ended response patterns consistent with a rate or level-of-activity code for location. The discrepant response patterns were not segregated into different neural populations but occurred in the same neurons. We show that a read-out algorithm in which the site and level of SC activity both contribute to the computation of stimulus location is successful at evaluating the discrepant visual and auditory codes, and can account for subtle but systematic differences in the accuracy of auditory compared to visual saccades. This suggests that a given population of neurons can use different codes to support appropriate multimodal behavior.
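The contrast between a map (place) code and a rate code, and why a read-out needs both the site and the overall level of activity, can be illustrated with a toy decoder (purely illustrative: the Gaussian tuning width, the rate-code slope, and the centroid read-out are invented for this sketch and are not the algorithm used in the study):

```python
import math

SITES = [-40.0, -20.0, 0.0, 20.0, 40.0]  # preferred azimuths on a notional map (deg)


def map_rates(azimuth, width=15.0):
    """Circumscribed (Gaussian) receptive fields: a place/map code."""
    return [math.exp(-((azimuth - s) / width) ** 2) for s in SITES]


def rate_rates(azimuth, slope=0.02):
    """Open-ended monotonic tuning: every unit's rate scales with azimuth,
    so location lives in the overall activity level, not in which site fires."""
    return [max(0.0, 1.0 + slope * azimuth) for _ in SITES]


def centroid(rates):
    """Site-based read-out: activity-weighted average of preferred azimuths."""
    return sum(s * r for s, r in zip(SITES, rates)) / sum(rates)


def level(rates):
    """Level-based read-out: total population activity."""
    return sum(rates)


print(round(centroid(map_rates(20.0))))   # centroid recovers ~20 for the map code
print(round(centroid(rate_rates(20.0))))  # but is stuck at 0 for the rate code
print(level(rate_rates(-30.0)) < level(rate_rates(30.0)))  # level tracks azimuth instead
```

The study's point is that real SC neurons carry both formats at once, so a workable read-out must combine the two terms rather than choose between them.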
Human spatial navigation can be conceptualized as egocentric or exocentric, depending on the navigator’s perspective. While navigational impairment occurs in individuals with cognitive impairment, less is known about navigational abilities in non-demented older adults. Our objective was to develop tests of navigation and study their cognitive correlates in non-demented older adults.
We developed a Local Route Recall Test (LRRT) to examine egocentric navigation and a Floor Maze Test (FMT) to assess exocentric navigation in 127 older adults without dementia or amnestic Mild Cognitive Impairment. Factor analysis was used to reduce neuropsychological test scores to three cognitive factors representing Executive Function/Attention, Verbal Ability, and Memory. We examined relationships between navigational tests and cognitive function (using both cognitive factors and the highest loading individual test on each factor) in a series of regression analyses adjusted for demographic variables (age, sex, and education), medical illnesses, and gait velocity.
The tests were well-tolerated, easy to administer, and reliable in this non-demented and non-MCI sample. Egocentric skills on the LRRT were associated with Executive Function/Attention (B -0.650, 95% C.I. -0.139, -0.535) and Memory (B -0.518, 95% C.I. -0.063, -4.893) factors. Exocentric navigation on the FMT was related to Executive Function/Attention (B -8.542, 95% C.I. -13.357, -3.727).
Our tests appear to assess egocentric and exocentric navigation skills in cognitively-normal older adults, and these skills are associated with specific cognitive processes such as executive function and memory.
The primary auditory cortex (A1) is involved in sound localization. A consistent observation in A1 is a clustered representation of binaural properties, but how spatial tuning varies within binaural clusters is unknown. Here, this issue was addressed in A1 of the pallid bat, a species that relies on passive hearing (as opposed to echolocation) to localize prey. Evidence is presented for systematic representations of sound azimuth within two binaural clusters in the pallid bat A1: the binaural inhibition (EI) and peaked (P) binaural interaction clusters. The representation is not a ‘point-to-point’ space map as seen in the superior colliculus, but in the form of a systematic increase in the area of activated cortex as azimuth changes from ipsilateral to contralateral locations. The underlying substrate in the EI cluster is a systematic representation of the medial boundary of azimuth receptive fields. The P cluster is activated mostly for sounds near the midline, providing a spatial acoustic fovea. Activity in the P cluster falls off systematically as the sound is moved to more lateral locations. Sensitivity to interaural intensity differences (IID) predicts azimuth tuning in the vast majority of neurons. Azimuth receptive field properties are relatively stable over a moderate range of intensities (20–40 dB above threshold). This suggests the maps will be similar across the intensities tested. These results challenge the current view that no systematic representation of azimuth is present in A1 and show that such representations are present locally within individual binaural clusters.
Whereas most sensory information is coded in a logarithmic scale, linear expansion of a limited range may provide a more efficient coding for angular variables important to precise motor control. In four experiments it is shown that the perceived declination of gaze, like the perceived orientation of surfaces, is coded on a distorted scale. The distortion seems to arise from a nearly linear expansion of the angular range close to horizontal/straight ahead and is evident in explicit verbal and non-verbal measures (Experiments 1 and 2) and in implicit measures of perceived gaze direction (Experiment 4). The theory is advanced that this scale expansion (by a factor of about 1.5) may serve a functional goal of coding efficiency for angular perceptual variables. The scale expansion of perceived gaze declination is accompanied by a corresponding expansion of perceived optical slants in the same range (Experiments 3 and 4). These dual distortions can account for the explicit misperception of distance typically obtained by direct report and exocentric matching while allowing accurate spatial action to be understood as the result of calibration.
Recent models of multisensory integration predict differential weighting of information from different sensory modalities in different spatial directions. This direction-dependent weighting account suggests a heavier weighting for vision in the azimuthal (left-right) direction and a heavier weighting for proprioception in the radial (near-far) direction. Visually-induced reaching errors, as demonstrated in previous ‘mirror illusion’ reaching experiments, should therefore be greater under visual-proprioceptive conflict in the azimuthal direction than in the radial direction. We report two experiments designed to investigate the influence of direction-dependent weighting on the visual bias of reaching movements under the influence of a mirror-illusion. In Experiment 1, participants made reaches straight forward, and showed terminal reaching errors that were biased by vision in both directions, but this bias was significantly greater in the azimuthal as compared to the radial direction. In Experiment 2, participants made reaches from right to left, and showed a significant bias only in the azimuthal direction. These results support the direction-dependent weighting of visual and proprioceptive information, with vision relatively more dominant in the azimuthal direction, and proprioception relatively stronger in the radial direction.
Multisensory; Phantom limb; Mirror illusion; Hand position; Visuomotor
Previous studies have shown that the accuracy of sound localization is improved if listeners are allowed to move their heads during signal presentation. This study describes the function relating localization accuracy to the extent of head movement in azimuth. Sounds that are difficult to localize were presented in the free field from sources at a wide range of azimuths and elevations. Sounds remained active until the participants' heads had rotated through windows 2, 4, 8, 16, 32, or 64° of azimuth in width. Error in determining sound-source elevation and the rate of front/back confusion were found to decrease with increases in azimuth window width. Error in determining sound-source lateral angle was not found to vary with azimuth window width. Implications for 3-d audio displays: the utility of a 3-d audio display for imparting spatial information is likely to be improved if operators are able to move their heads during signal presentation. Head movement may compensate in part for a paucity of spectral cues to sound-source location resulting from limitations in either the audio signals presented or the directional filters (i.e., head-related transfer functions) used to generate a display. However, head movements of a moderate size (i.e., through around 32° of azimuth) may be required to ensure that spatial information is conveyed with high accuracy.
audio displays; sound localization; auditory-vestibular integration
Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three sound sequence was spatially closer to the first or the third sound. The tactile feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject’s forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially congruent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.
recalibration; auditory localization; spatial perception; tactile feedback
Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject's head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimitated area in order to find a hidden auditory target, i.e., a sound that was only triggered when walking on a precise location of the area. The position of this target could be coded in relationship to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which they had to memorize the localization of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of the search paths revealed how auditory information was coded to memorize the position of the target, and suggested that space can be efficiently coded without visual information in normally sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favor of the hypothesis that the brain has access to a modality-invariant representation of external space.
spatial audition; Morris water maze; auditory landmarks; virtual reality; navigation; spatial memory; allocentric representation; auditory scene
Two theories of distance perception—ie, the angular expansion hypothesis (Durgin and Li, 2011 Attention, Perception, & Psychophysics 73 1856–1870) and the intrinsic bias hypothesis (Ooi et al, 2006 Perception 35 605–624)—are compared. Both theories attribute exocentric distance foreshortening to an exaggeration in perceived slant, but their fundamental geometrical assumptions are very different. The intrinsic bias hypothesis assumes a constant bias in perceived geographical slant of the ground plane and predicts both perceived egocentric and exocentric distances are increasingly compressed. In contrast, the angular expansion hypothesis assumes exaggerations in perceived gaze angle and perceived optical slant. Because the bias functions of the two angular variables are different, it allows the angular expansion hypothesis to distinguish two types of distance foreshortening—the linear compression in perceived egocentric distance and the nonlinear compression in perceived exocentric distance. While the intrinsic bias is proposed only for explaining distance biases, the angular expansion hypothesis provides accounts for a broader range of spatial biases.
egocentric distance; exocentric distance; spatial biases; non-Euclidean; visual space; geographical slant
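The distinction drawn above between linear and nonlinear distance compression can be made concrete with a toy power-law calculation (a hedged sketch: the exponents are the approximate values discussed in these abstracts, while the 0.7 coefficient and the sample distances are arbitrary illustration choices):

```python
def compression_ratio(distance, exponent, coefficient=1.0):
    """Ratio of matched (perceived) to actual distance for a
    power-law response d' = coefficient * distance ** exponent."""
    return coefficient * distance ** exponent / distance


# Egocentric-style matching (exponent ~1): compression is constant with distance.
ego = [compression_ratio(d, 1.0, coefficient=0.7) for d in (2.0, 8.0, 32.0)]

# Exocentric-style matching (exponent ~0.67): foreshortening grows with distance.
exo = [compression_ratio(d, 0.67) for d in (2.0, 8.0, 32.0)]

print([round(r, 2) for r in ego])  # [0.7, 0.7, 0.7]
print([round(r, 2) for r in exo])  # steadily decreasing ratios
```

With an exponent of 1 the ratio d'/d is a constant scale factor, whereas an exponent below 1 makes the ratio fall as distance grows, reproducing the increasing foreshortening of exocentric extents.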
Because we can perceive the pitch, timbre and spatial location of a sound source independently, it seems natural to suppose that cortical processing of sounds might separate out spatial from non-spatial attributes. Indeed, recent studies support the existence of anatomically segregated ‘what’ and ‘where’ cortical processing streams. However, few attempts have been made to measure the responses of individual neurons in different cortical fields to sounds that vary simultaneously across spatial and non-spatial dimensions. We recorded responses to artificial vowels presented in virtual acoustic space to investigate the representations of pitch, timbre and sound source azimuth in both core and belt areas of ferret auditory cortex. A variance decomposition technique was used to quantify the way in which altering each parameter changed neural responses. Most units were sensitive to two or more of these stimulus attributes. Whilst indicating that neural encoding of pitch, location and timbre cues is distributed across auditory cortex, significant differences in average neuronal sensitivity were observed across cortical areas and depths, which could form the basis for the segregation of spatial and non-spatial cues at higher cortical levels. Some units exhibited significant non-linear interactions between particular combinations of pitch, timbre and azimuth. These interactions were most pronounced for pitch and timbre and were less commonly observed between spatial and non-spatial attributes. Such non-linearities were most prevalent in primary auditory cortex, although they tended to be small compared with stimulus main effects.
Auditory cortex; tuning; sound; spike trains; vocalization; localization; parallel; hearing
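The variance decomposition idea can be illustrated on a simulated unit: partition the variance of responses across a balanced pitch x timbre x azimuth design into the share explained by each stimulus dimension's marginal means, leaving a residual for interactions and noise. The simulated effect sizes below are assumptions for illustration, not the recorded data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical spike counts for one unit: 4 pitches x 4 timbres x 4 azimuths
pitch_eff   = rng.normal(0, 3.0, size=(4, 1, 1))
timbre_eff  = rng.normal(0, 2.0, size=(1, 4, 1))
azimuth_eff = rng.normal(0, 1.0, size=(1, 1, 4))
resp = 20 + pitch_eff + timbre_eff + azimuth_eff + rng.normal(0, 0.5, (4, 4, 4))

def main_effect_share(r, axis):
    """Fraction of total response variance explained by the marginal
    means along one stimulus dimension (balanced design)."""
    other = tuple(a for a in range(r.ndim) if a != axis)
    return np.var(r.mean(axis=other)) / np.var(r)

shares = {name: main_effect_share(resp, ax)
          for ax, name in enumerate(("pitch", "timbre", "azimuth"))}
residual = 1 - sum(shares.values())  # interactions + noise
print(shares, residual)
```

In a balanced design the main-effect shares and residual sum to one, so the residual gives a simple index of how much a unit's response reflects non-additive combinations of the stimulus attributes.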
When interpreting object shape from shading, the visual system exhibits a strong bias to assume that illumination comes from above and slightly from the left. We asked whether such biases in the perceived direction of illumination might also influence its perceived intensity. Arrays of nine cubes were stereoscopically rendered in which individual cubes varied in their 3D pose but possessed identical triplets of visible faces. Arrays were virtually illuminated from one of four directions: Above-Left, Above-Right, Below-Left, and Below-Right (±24.4° azimuth; ±90° elevation). Illumination intensity possessed 15 levels, resulting in mean cube array luminances ranging from 1.31–3.45 cd/m². A “reference” array was consistently illuminated from Above-Left at mid-intensity (mean array luminance = 2.38 cd/m²). The reference array's illumination was compared to that of matching arrays which were illuminated from all four directions at all intensities. Reference and matching arrays appeared in the left and right visual field, respectively, or vice versa. Subjects judged which cube array appeared to be under more intense illumination. Using the method of constant stimuli, we determined the illumination level of matching arrays required to establish subjective equality with the reference array as a function of matching cube visual field, illumination elevation, and illumination azimuth. Cube arrays appeared significantly more intensely illuminated when they were situated in the left visual field (p = 0.017), when they were illuminated from below (p = 0.001), and when they were illuminated from the left (p = 0.001). A modest interaction indicated that the effect of illumination azimuth was greater for matching arrays situated in the left visual field (p = 0.042). We propose that objects lit from below appear more intensely illuminated than identical objects lit from above due to long-term adaptation to downward lighting.
The amplification of perceived intensity of illumination for stimuli situated in the left visual field and lit from the left is best explained by tonic egocentric and allocentric leftward attentional biases, respectively.
brightness; perceived illumination; light-from-above bias; light-from-left bias; pseudoneglect; allocentric; egocentric; spatial attention
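The point of subjective equality (PSE) in a constant-stimuli design such as this can be estimated from the psychometric function relating matching-array illumination level to the proportion of "more intense" judgments. The sketch below uses hypothetical data and a simple linear interpolation at the 50% point; the reported study would fit a full psychometric function, so this is only a minimal illustration.

```python
# Hypothetical psychometric data: matching-array illumination level vs
# proportion of trials it was judged more intensely lit than the
# reference (values illustrative, not the reported data).
levels = [1.6, 1.9, 2.2, 2.5, 2.8, 3.1]
p_more = [0.05, 0.15, 0.40, 0.65, 0.85, 0.95]

def pse(levels, p):
    """Point of subjective equality: the level judged more intense on
    50% of trials, by linear interpolation between bracketing points."""
    for (x0, y0), (x1, y1) in zip(zip(levels, p), zip(levels[1:], p[1:])):
        if y0 <= 0.5 <= y1:
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("50% point not bracketed by the data")

print(f"PSE = {pse(levels, p_more):.2f}")
```

A PSE below the reference's physical intensity would mean the matching array needs less light to look equally illuminated, i.e. that it appears more intensely illuminated at equal physical intensity.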
This article aims to investigate whether auditory stimuli in the horizontal plane, particularly those originating from behind the participant, affect audiovisual integration, using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants; auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants; and audiovisual stimuli, comprising the visual stimulus and an auditory stimulus from one of the four locations, were presented simultaneously. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in front of or behind the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160–200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360–400 milliseconds. Our results confirmed that audiovisual integration was elicited even when auditory stimuli were presented behind the participant, but that no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain may be more sensitive to information received from behind than from either side.
Accommodation and convergence systems are cross-coupled so that stimulation of one system produces responses by both systems. Ideally, the cross-coupled responses of accommodation and convergence match their respective stimuli. When expressed in diopters and meter angles respectively, stimuli for accommodation and convergence are equal in the mid-sagittal plane when viewed with symmetrical convergence, where historically the gains of the cross-coupling (AC/A and CA/C ratios) have been quantified. However, targets at non-zero azimuth angles, when viewed with asymmetric convergence, present unequal stimuli for accommodation and convergence. Are the cross-links between the two systems calibrated to compensate for stimulus mismatches that increase with gaze azimuth? We measured the response AC/A and stimulus CA/C ratios at gaze eccentricities of 0, 17.5, and 30 degrees rightward with a Badal optometer and Wheatstone-mirror haploscope. AC/A ratios were measured under open-loop convergence conditions along the iso-accommodation circle (the locus of points that stimulate approximately equal amounts of accommodation in the two eyes at all azimuth angles). CA/C ratios were measured under open-loop accommodation conditions along the iso-vergence circle (the locus of points that stimulate constant convergence at all azimuth angles). Our results show that the gain of accommodative convergence (AC/A ratio) decreased and the bias of convergence-accommodation increased at the 30 deg gaze eccentricity. These changes are in directions that compensate for stimulus mismatches caused by spatial-viewing geometry during asymmetric convergence.
accommodation; asymmetric convergence; cross-coupling; phoria; viewing geometry; near response; iso-vergence circle; iso-accommodation circle
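The stimulus mismatch under asymmetric convergence follows from the viewing geometry, and can be sketched numerically: for a target at a given azimuth, the dioptric demand (mean inverse distance to the two eyes) stays nearly constant, while the convergence demand in meter angles falls with eccentricity. The interpupillary distance and target distance below are assumed values for illustration.

```python
import math

IPD = 0.064  # interpupillary distance in metres (assumed)

def stimuli(d, azimuth_deg):
    """Stimulus to accommodation (dioptres) and to convergence (metre
    angles) for a target at distance d from the cyclopean eye at a given
    azimuth. A sketch of the viewing geometry, not the haploscope setup."""
    az = math.radians(azimuth_deg)
    tx, ty = d * math.sin(az), d * math.cos(az)  # target in head coordinates
    dl = math.hypot(tx + IPD / 2, ty)            # distance from left eye
    dr = math.hypot(tx - IPD / 2, ty)            # distance from right eye
    accom = (1 / dl + 1 / dr) / 2                # mean dioptric demand
    # vergence angle = difference between the two eyes' azimuths to target
    verg = math.atan2(tx + IPD / 2, ty) - math.atan2(tx - IPD / 2, ty)
    return accom, verg / IPD                     # metre angles = radians / IPD

for az in (0.0, 17.5, 30.0):
    a, c = stimuli(0.4, az)
    print(f"azimuth {az:5.1f} deg: accommodation {a:.3f} D, convergence {c:.3f} MA")
```

At zero azimuth the two demands are essentially equal; by 30 deg of eccentricity the convergence demand has dropped noticeably below the accommodative demand, which is the stimulus mismatch the cross-coupling would need to compensate for.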
To date, few studies have focused on the behavioural differences between the learning of multisensory auditory-visual and intra-modal associations. More specifically, the relative benefits of novel auditory-visual and verbal-visual associations for learning have not been directly compared. In Experiment 1, 20 adult volunteers completed three paired-associate learning tasks: non-verbal novel auditory-visual (novel-AV), verbal-visual (verbal-AV; using pseudowords), and visual-visual (shape-VV). Participants were directed to make a motor response to matching novel and arbitrarily related stimulus pairs. Feedback was provided to facilitate trial-and-error learning. The results of Signal Detection Theory analyses suggested a multisensory enhancement of learning, with significantly higher discriminability measures (d-prime) in both the novel-AV and verbal-AV tasks than in the shape-VV task. Motor reaction times were also significantly faster during the verbal-AV task than during the non-verbal learning tasks. Experiment 2 (n = 12) used a forced-choice discrimination paradigm to assess whether a difference in unisensory stimulus discriminability could account for the learning trends in Experiment 1. Participants were significantly slower at discriminating unisensory pseudowords than the novel sounds and visual shapes, which was notable given that these stimuli produced superior learning. Together, the findings suggest that verbal information has an added enhancing effect on multisensory associative learning in adults.
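The d-prime discriminability measure used in such Signal Detection Theory analyses is the difference between the z-transformed hit and false-alarm rates. A minimal sketch, with a standard log-linear correction so that perfect rates stay finite; the counts are hypothetical, not the reported data.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    using a log-linear correction (add 0.5 to each count, 1 to each
    total) to keep extreme rates finite. Counts are hypothetical."""
    z = NormalDist().inv_cdf
    h = (hits + 0.5) / (hits + misses + 1)
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(h) - z(fa)

# e.g. a learner who detects most matching pairs and rarely false-alarms
print(f"d' = {d_prime(45, 5, 8, 42):.2f}")
```

A d' of zero corresponds to chance performance; higher values indicate better discrimination of matching from non-matching pairs, independently of response bias.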