Results 1-25 (921367)

Related Articles

1.  Large manual pointing errors, but accurate verbal reports, for indications of target azimuth 
Perception  2008;37(4):511-534.
Many tasks have been used to probe human directional knowledge, but relatively little is known about the comparative merits of different means of indicating target azimuth. Few studies have compared action-based versus non-action-based judgments for targets encircling the observer. This comparison promises to illuminate not only the perception of azimuths in the front and rear hemispaces, but also the frames of reference underlying various azimuth judgments, and ultimately their neural underpinnings. We compared a response in which participants aimed a pointer at a nearby target, with verbal azimuth estimates. Target locations were distributed between 20 and 340 deg. Non-visual pointing responses exhibited large constant errors (up to −32 deg) that tended to increase with target eccentricity. Pointing with eyes open also showed large errors (up to −21 deg). In striking contrast, verbal reports were highly accurate, with constant errors rarely exceeding ±5 deg. Under our testing conditions, these results are not likely to stem from differences in perception-based vs. action-based responses, but instead reflect the frames of reference underlying the pointing and verbal responses. When participants used the pointer to match the egocentric target azimuth rather than the exocentric target azimuth relative to the pointer, errors were reduced.
PMCID: PMC2702262  PMID: 18546661
open loop pointing; spatial cognition; perception/action; perceived direction
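
The contrast the abstract above draws between matching the egocentric target azimuth and aiming at the exocentric azimuth of the target relative to the pointer is purely geometric once the pointer is displaced from the observer. The sketch below is illustrative only; the observer, pointer, and target coordinates are assumed, not taken from the study.

    # Illustrative geometry (assumed coordinates, not the study's layout):
    # egocentric azimuth  = bearing of the target from the observer's facing direction;
    # exocentric azimuth  = bearing of the target from the pointer's own location.
    import math

    observer = (0.0, 0.0)   # observer at the origin, facing the +y direction
    pointer = (0.4, 0.3)    # hand-held pointer pivot, in front and to the right
    target = (1.0, 2.0)     # nearby target

    def bearing_deg(origin, point):
        """Azimuth of `point` from `origin`, measured clockwise from the facing (+y) axis."""
        dx, dy = point[0] - origin[0], point[1] - origin[1]
        return math.degrees(math.atan2(dx, dy))

    egocentric = bearing_deg(observer, target)   # what a verbal report would describe
    exocentric = bearing_deg(pointer, target)    # what aiming the pointer's shaft produces
    print(f"egocentric azimuth:       {egocentric:5.1f} deg")
    print(f"azimuth from the pointer: {exocentric:5.1f} deg")
    print(f"difference:               {exocentric - egocentric:5.1f} deg")

For nearby targets the two azimuths can differ by several degrees, which is why instructing participants to match the egocentric azimuth rather than aim the pointer's shaft changes the error pattern.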
2.  The underestimation of egocentric distance: evidence from frontal matching tasks 
There is controversy over the existence, nature, and cause of error in egocentric distance judgments. One proposal is that the systematic biases often found in explicit judgments of egocentric distance along the ground may be related to recently observed biases in the perceived declination of gaze (Durgin & Li, Attention, Perception, & Psychophysics, in press). To measure perceived egocentric distance nonverbally, observers in a field were asked to position themselves so that their distance from one of two experimenters was equal to the frontal distance between the experimenters. Observers placed themselves too far away, consistent with egocentric distance underestimation. A similar experiment was conducted with vertical frontal extents. Both experiments were replicated in panoramic virtual reality. Perceived egocentric distance was quantitatively consistent with angular bias in perceived gaze declination (1.5 gain). Finally, an exocentric distance-matching task was contrasted with a variant of the egocentric matching task. The egocentric matching data approximate a constant compression of perceived egocentric distance with a power function exponent of nearly 1; exocentric matches had an exponent of about 0.67. The divergent pattern between egocentric and exocentric matches suggests that they depend on different visual cues.
doi:10.3758/s13414-011-0170-2
PMCID: PMC3205207  PMID: 21735313
Distance perception; Height perception; Gaze declination; Perceptual scale expansion; Virtual reality
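
The 1.5 gain on perceived gaze declination reported above has a direct geometric consequence for egocentric ground distance. The sketch below is illustrative only: the eye height and test distances are assumed values, not the study's, and the back-projection rule is simply the trigonometric reading of the angular-bias account.

    # Illustrative sketch (not the authors' code): how a 1.5x expansion of perceived
    # gaze declination compresses perceived egocentric ground distance.
    import math

    EYE_HEIGHT_M = 1.6   # assumed standing eye height
    GAIN = 1.5           # angular gain on perceived gaze declination (from the abstract)

    def perceived_distance(physical_d):
        """Ground distance back-projected from an exaggerated gaze-declination angle."""
        true_declination = math.atan2(EYE_HEIGHT_M, physical_d)   # radians below horizontal
        exaggerated = GAIN * true_declination                     # 1.5x scale expansion
        return EYE_HEIGHT_M / math.tan(exaggerated)

    for d in (2.0, 4.0, 8.0, 16.0):
        p = perceived_distance(d)
        print(f"physical {d:5.1f} m -> perceived {p:5.2f} m (ratio {p / d:.2f})")

For distances several times the eye height the ratio approaches 1/1.5 (about 0.67), i.e., a roughly constant compression, consistent with the egocentric power-function exponent of nearly 1 described in the abstract.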
3.  Exocentric pointing in the visual field 
i-Perception  2013;4(8):532-542.
“Exocentric pointing in the visual field” involves the setting of a pointer so as to visually point to a target, where both pointer and target are objects in the visual field. Phenomenologically, such pointings show systematic deviations from veridicality of several degrees. The errors are very small in the vertical and horizontal directions, but appreciable in oblique directions. The magnitude of the error is largely independent of the distance between pointer and target for stretches in the range 2–27°. A general conclusion is that the visual field cannot be described in terms of one of the classical homogeneous spaces, or, alternatively, that the results from pointing involve mechanisms that come after geometry proper has been established.
doi:10.1068/i0609
PMCID: PMC4129387  PMID: 25165511
orientation; direction; anisotropy; geometry; space
4.  Perception of 3-D location based on vision, touch, and extended touch 
Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors in representing the target locations and updating. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor. Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate.
doi:10.1007/s00221-012-3295-1
PMCID: PMC3536915  PMID: 23070234
Spatial image; extended touch; haptic perception; spatial cognition
5.  Postural Adaptation of the Spatial Reference Frames to Microgravity: Back to the Egocentric Reference Frame 
PLoS ONE  2010;5(4):e10259.
Background
In order to test how gravitational information would affect the choice of stable reference frame used to control posture and voluntary movement, we analysed forearm stabilisation during a sit-to-stand movement under microgravity conditions obtained during parabolic flights. In this study, we hypothesised that in response to the transient loss of graviceptive information, the postural adaptation might involve the use of several strategies of segmental stabilisation, depending on the subject's perceptual typology (dependence - independence with respect to the visual field). More precisely, we expected a continuum of postural strategies across subjects, with 1) at one extreme the maintenance of an exocentric reference frame and 2) at the other the re-activation of childhood strategies consisting in adopting an egocentric reference frame.
Methodology/Principal Findings
To check this point, a forearm stabilisation task combined with a sit-to-stand movement was performed with eyes closed by 11 subjects during parabolic flight campaigns. Kinematic data were collected during 1-g and 0-g periods. The postural adaptation to the constraint of microgravity may be described as a continuum of strategies ranging from the use of an exo- to an egocentric reference frame for segmental stabilisation. At one extreme, subjects systematically used an exocentric frame to control each of their body segments independently, as under normogravity conditions. At the other, the segmental stabilisation strategy consisted in systematically adopting an egocentric reference frame to control forearm stabilisation. A strong correlation was reported between the mode of segmental stabilisation used and the perceptual typology (dependence - independence with respect to the visual field) of the subjects.
Conclusion
The results of this study reveal a range of subject typologies, from those who control forearm orientation in a mainly exocentric reference frame to those who control it in a mainly egocentric reference frame.
doi:10.1371/journal.pone.0010259
PMCID: PMC2857881  PMID: 20422038
6.  Imagined Self-Motion Differs from Perceived Self-Motion: Evidence from a Novel Continuous Pointing Method 
PLoS ONE  2009;4(11):e7793.
Background
The extent to which actual movements and imagined movements maintain a shared internal representation has been a matter of much scientific debate. Of the studies examining such questions, few have directly compared actual full-body movements to imagined movements through space. Here we used a novel continuous pointing method to a) provide a more detailed characterization of self-motion perception during actual walking and b) compare the pattern of responding during actual walking to that which occurs during imagined walking.
Methodology/Principal Findings
This continuous pointing method requires participants to view a target and continuously point towards it as they walk, or imagine walking past it along a straight, forward trajectory. By measuring changes in the pointing direction of the arm, we were able to determine participants' perceived/imagined location at each moment during the trajectory and, hence, perceived/imagined self-velocity during the entire movement. The specific pattern of pointing behaviour that was revealed during sighted walking was also observed during blind walking. Specifically, a peak in arm azimuth velocity was observed upon target passage and a strong correlation was observed between arm azimuth velocity and pointing elevation. Importantly, this characteristic pattern of pointing was not consistently observed during imagined self-motion.
Conclusions/Significance
Overall, the spatial updating processes that occur during actual self-motion were not evidenced during imagined movement. Because of the rich description of self-motion perception afforded by continuous pointing, this method is expected to have significant implications for several research areas, including those related to motor imagery and spatial cognition and to applied fields for which mental practice techniques are common (e.g. rehabilitation and athletics).
doi:10.1371/journal.pone.0007793
PMCID: PMC2771354  PMID: 19907655
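
A minimal sketch of the geometry behind the continuous pointing method in entry 6 above. It is not the authors' analysis code: the target offset, walking speed, and path length are assumed for illustration. The point is that the arm azimuth toward a fixed target uniquely implies a position along the path, and that azimuth velocity peaks at target passage.

    # Continuous pointing geometry (assumed layout, constant walking speed).
    import math

    TARGET_ALONG_PATH_M = 4.0   # assumed position of the target along the walking path
    TARGET_LATERAL_M = 1.5      # assumed lateral offset of the target from the path

    def pointing_azimuth(walker_x):
        """Arm azimuth (rad) toward the target, measured from the walking direction."""
        return math.atan2(TARGET_LATERAL_M, TARGET_ALONG_PATH_M - walker_x)

    def implied_position(azimuth):
        """Invert the geometry: the path position a walker must believe they occupy
        to be pointing at this azimuth."""
        return TARGET_ALONG_PATH_M - TARGET_LATERAL_M / math.tan(azimuth)

    speed_m_per_s = 1.0   # assumed
    previous_az = pointing_azimuth(0.0)
    for second in range(1, 9):
        x = speed_m_per_s * second
        az = pointing_azimuth(x)
        # Azimuth velocity is largest as the walker passes the target (x = 4 m here),
        # the signature the paper reports for both sighted and blind walking.
        print(f"t={second}s x={x:4.1f}m azimuth={math.degrees(az):6.1f}deg "
              f"az velocity={math.degrees(az - previous_az):5.1f}deg/s "
              f"implied x={implied_position(az):4.1f}m")
        previous_az = az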
7.  Reaching nearby sources: comparison between real and virtual sound and visual targets 
Sound localization studies over the past century have predominantly been concerned with directional accuracy for far-field sources. Few studies have examined the condition of near-field sources and distance perception. The current study concerns localization and pointing accuracy by examining source positions in the peripersonal space, specifically those associated with a typical tabletop surface. Accuracy is studied with respect to the reporting hand (dominant or secondary) for auditory sources. Results show no effect of the reporting hand, with azimuthal errors increasing equally for the most extreme source positions. Distance errors show a consistent compression toward the center of the reporting area. A second evaluation is carried out comparing auditory and visual stimuli to examine any bias in reporting protocol or biomechanical difficulties. No common bias error was observed between auditory and visual stimuli, indicating that reporting errors were not due to biomechanical limitations in the pointing task. A final evaluation compares real auditory sources and anechoic-condition virtual sources created using binaural rendering. Results showed increased azimuthal errors, with virtual source positions being consistently overestimated to more lateral positions, while no significant distance perception was observed, indicating a deficiency in the binaural rendering condition relative to the real stimuli situation. Various potential reasons for this discrepancy are discussed with several proposals for improving distance perception in peripersonal virtual environments.
doi:10.3389/fnins.2014.00269
PMCID: PMC4151089  PMID: 25228855
auditory localization; near-field pointing; nearby sound sources; virtual auditory display; spatial hearing; sound target; visual target
8.  How does saccade adaptation affect visual perception? 
Journal of vision  2008;8(8):3.1-3.16.
Three signals are used to visually localize targets and stimulate saccades: (1) retinal-location signals for intended saccade amplitude, (2) sensory-motor transform (SMT) of retinal signals to extra-ocular muscle innervation, and (3) estimates of eye position from extra-retinal signals. We investigated effects of adapting saccade amplitude to a double-step change in target location on perceived direction. In a flashed-pointing task, subjects pointed an unseen hand at a briefly displayed eccentric target without making a saccade. In a sustained-pointing task, subjects made a horizontal saccade to a double-step target. One second after the second step, they pointed an unseen hand at the final target position. After saccade-shortening adaptation, there was little change in hand-pointing azimuth toward the flashed target, suggesting that most saccade adaptation was caused by changes in SMT. After saccade-lengthening adaptation, there were small changes in hand-pointing azimuth to flashed targets, indicating that 1/3 of saccade adaptation was caused by changes in estimated retinal location signals and 2/3 by changes in SMT. The sustained hand-pointing task indicated that estimates of eye position adapted inversely with changes of SMT. Changes in perceived direction resulting from saccade adaptation are mainly influenced by extra-retinal factors with a small retinal component in the lengthening condition.
doi:10.1167/8.8.3
PMCID: PMC2630579  PMID: 18831626
eye movements; saccade; adaptation; extra-retinal; retinal; arm-hand movements; visuomotor control
9.  Environmental surfaces and the compression of perceived visual space 
Journal of vision  2011;11(7):4.
The present study examined whether the compression of perceived visual space varies according to the type of environmental surface being viewed. To examine this issue, observers made exocentric distance judgments when viewing simulated 3D scenes. In 4 experiments, observers viewed ground and ceiling surfaces and performed either an L-shaped matching task (Experiments 1, 3, and 4) or a bisection task (Experiment 2). Overall, we found considerable compression of perceived exocentric distance on both ground and ceiling surfaces. However, the perceived exocentric distance was less compressed on a ground surface than on a ceiling surface. In addition, this ground surface advantage did not vary systematically as a function of the distance in the scene. These results suggest that the perceived visual space when viewing a ground surface is less compressed than the perceived visual space when viewing a ceiling surface and that the perceived layout of a surface varies as a function of the type of the surface.
doi:10.1167/11.7.4
PMCID: PMC3136083  PMID: 21669858
depth; space and scene perception; 3D surface and shape perception
10.  Spatial Attention Evokes Similar Activation Patterns for Visual and Auditory Stimuli 
Journal of cognitive neuroscience  2010;22(2):347-361.
Neuroimaging studies suggest that a fronto-parietal network is activated when we expect visual information to appear at a specific spatial location. Here we examined whether a similar network is involved for auditory stimuli. We used sparse fMRI to infer brain activation while participants performed analogous visual and auditory tasks. On some trials, participants were asked to discriminate the elevation of a peripheral target. On other trials, participants made a nonspatial judgment. We contrasted trials where the participants expected a peripheral spatial target to those where they were cued to expect a central target. Crucially, our statistical analyses were based on trials where stimuli were anticipated but not presented, allowing us to directly infer perceptual orienting independent of perceptual processing. This is the first neuroimaging study to use an orthogonal-cuing paradigm (with cues predicting azimuth and responses involving elevation discrimination). This aspect of our paradigm is important, as behavioral cueing effects in audition are classically only observed when participants are asked to make spatial judgments. We observed similar fronto-parietal activation for both vision and audition. In a second experiment that controlled for stimulus properties and task difficulty, participants made spatial and temporal discriminations about musical instruments. We found that the pattern of brain activation for spatial selection of auditory stimuli was remarkably similar to what we found in our first experiment. Collectively, these results suggest that the neural mechanisms supporting spatial attention are largely similar across both visual and auditory modalities.
doi:10.1162/jocn.2009.21241
PMCID: PMC2846529  PMID: 19400684
11.  Different Stimuli, Different Spatial Codes: A Visual Map and an Auditory Rate Code for Oculomotor Space in the Primate Superior Colliculus 
PLoS ONE  2014;9(1):e85017.
Maps are a mainstay of visual, somatosensory, and motor coding in many species. However, auditory maps of space have not been reported in the primate brain. Instead, recent studies have suggested that sound location may be encoded via broadly responsive neurons whose firing rates vary roughly proportionately with sound azimuth. Within frontal space, maps and such rate codes involve different response patterns at the level of individual neurons. Maps consist of neurons exhibiting circumscribed receptive fields, whereas rate codes involve open-ended response patterns that peak in the periphery. This coding format discrepancy therefore poses a potential problem for brain regions responsible for representing both visual and auditory information. Here, we investigated the coding of auditory space in the primate superior colliculus (SC), a structure known to contain visual and oculomotor maps for guiding saccades. We report that, for visual stimuli, neurons showed circumscribed receptive fields consistent with a map, but for auditory stimuli, they had open-ended response patterns consistent with a rate or level-of-activity code for location. The discrepant response patterns were not segregated into different neural populations but occurred in the same neurons. We show that a read-out algorithm in which the site and level of SC activity both contribute to the computation of stimulus location is successful at evaluating the discrepant visual and auditory codes, and can account for subtle but systematic differences in the accuracy of auditory compared to visual saccades. This suggests that a given population of neurons can use different codes to support appropriate multimodal behavior.
doi:10.1371/journal.pone.0085017
PMCID: PMC3893137  PMID: 24454779
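
The read-out idea in entry 11 above, in which both the site and the level of collicular activity inform the location estimate, can be illustrated with toy tuning curves. Everything below is an assumption made for illustration (the tuning shapes, map extent, and the simple centroid-plus-level summary); it is not the published model or data.

    # Toy contrast between a circumscribed visual map and an open-ended auditory
    # rate code, plus a read-out exposing both the site and the level of activity.
    import numpy as np

    preferred_deg = np.linspace(-60, 60, 25)   # assumed preferred azimuths across the map

    def visual_activity(target_deg):
        # Circumscribed receptive fields: a localized bump whose position tracks the target.
        return np.exp(-0.5 * ((preferred_deg - target_deg) / 10.0) ** 2)

    def auditory_activity(target_deg):
        # Open-ended rate code: responses grow monotonically toward one side, so the
        # overall activity level, not the bump position, carries the location.
        return 1.0 / (1.0 + np.exp(-(target_deg - preferred_deg) / 20.0))

    def site_and_level(activity):
        """Centroid of active sites plus mean activity level; a downstream stage
        could map this pair onto an azimuth estimate."""
        site = float(np.sum(preferred_deg * activity) / np.sum(activity))
        level = float(np.mean(activity))
        return round(site, 1), round(level, 2)

    for target in (-30, 0, 30):
        print("target", target,
              "visual (site, level):", site_and_level(visual_activity(target)),
              "auditory (site, level):", site_and_level(auditory_activity(target)))

In the toy output the visual site tracks the target while its level stays flat, whereas the auditory site shifts far less than the target while its level grows with azimuth; that asymmetry is what a combined site-and-level read-out has to reconcile.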
12.  Egocentric and Exocentric Navigation Skills in Older Adults 
Background
Human spatial navigation can be conceptualized as egocentric or exocentric, depending on the navigator’s perspective. While navigational impairment occurs in individuals with cognitive impairment, less is known about navigational abilities in non-demented older adults. Our objective was to develop tests of navigation and study their cognitive correlates in non-demented older adults.
Methods
We developed a Local Route Recall Test (LRRT) to examine egocentric navigation and a Floor Maze Test (FMT) to assess exocentric navigation in 127 older adults without dementia or amnestic Mild Cognitive Impairment. Factor analysis was used to reduce neuropsychological test scores to three cognitive factors representing Executive Function/Attention, Verbal Ability, and Memory. We examined relationships between navigational tests and cognitive function (using both cognitive factors and the highest loading individual test on each factor) in a series of regression analyses adjusted for demographic variables (age, sex, and education), medical illnesses, and gait velocity.
Results
The tests were well-tolerated, easy to administer, and reliable in this non-demented and non-MCI sample. Egocentric skills on the LRRT were associated with Executive Function/Attention (B -0.650, 95% C.I. -0.139, -0.535) and Memory (B -0.518, 95% C.I. -0.063, -4.893) factors. Exocentric navigation on the FMT was related to Executive Function/Attention (B -8.542, 95% C.I. -13.357, -3.727).
Conclusions
Our tests appear to assess egocentric and exocentric navigation skills in cognitively-normal older adults, and these skills are associated with specific cognitive processes such as executive function and memory.
PMCID: PMC2673537  PMID: 19126849
13.  Systematic representation of sound locations in the primary auditory cortex 
The primary auditory cortex (A1) is involved in sound localization. A consistent observation in A1 is a clustered representation of binaural properties, but how spatial tuning varies within binaural clusters is unknown. Here, this issue was addressed in A1 of the pallid bat, a species that relies on passive hearing (as opposed to echolocation) to localize prey. Evidence is presented for systematic representations of sound azimuth within two binaural clusters in the pallid bat A1: the binaural inhibition (EI) and peaked (P) binaural interaction clusters. The representation is not a ‘point-to-point’ space map as seen in the superior colliculus, but in the form of a systematic increase in the area of activated cortex as azimuth changes from ipsilateral to contralateral locations. The underlying substrate in the EI cluster is a systematic representation of the medial boundary of azimuth receptive fields. The P cluster is activated mostly for sounds near the midline, providing a spatial acoustic fovea. Activity in the P cluster falls off systematically as the sound is moved to more lateral locations. Sensitivity to interaural intensity differences (IID) predicts azimuth tuning in the vast majority of neurons. Azimuth receptive field properties are relatively stable across intensity over a moderate range (20–40 dB above threshold) of intensities. This suggests the maps will be similar across the intensities tested. These results challenge the current view that no systematic representation of azimuth is present in A1 and show that such representations are present locally within individual binaural clusters.
doi:10.1523/JNEUROSCI.1937-11.2011
PMCID: PMC3219787  PMID: 21957247
14.  Perceptual Scale Expansion: An Efficient Angular Coding Strategy for Locomotor Space 
Whereas most sensory information is coded in a logarithmic scale, linear expansion of a limited range may provide a more efficient coding for angular variables important to precise motor control. In four experiments it is shown that the perceived declination of gaze, like the perceived orientation of surfaces, is coded on a distorted scale. The distortion seems to arise from a nearly linear expansion of the angular range close to horizontal/straight ahead and is evident in explicit verbal and non-verbal measures (Experiments 1 and 2) and in implicit measures of perceived gaze direction (Experiment 4). The theory is advanced that this scale expansion (by a factor of about 1.5) may serve a functional goal of coding efficiency for angular perceptual variables. The scale expansion of perceived gaze declination is accompanied by a corresponding expansion of perceived optical slants in the same range (Experiments 3 and 4). These dual distortions can account for the explicit misperception of distance typically obtained by direct report and exocentric matching while allowing accurate spatial action to be understood as the result of calibration.
doi:10.3758/s13414-011-0143-5
PMCID: PMC3155211  PMID: 21594732
15.  Direction-dependent integration of vision and proprioception in reaching under the influence of the mirror illusion 
Neuropsychologia  2006;45(3):496-505.
Recent models of multisensory integration predict differential weighting of information from different sensory modalities in different spatial directions. This direction-dependent weighting account suggests a heavier weighting for vision in the azimuthal (left-right) direction and a heavier weighting for proprioception in the radial (near-far) direction. Visually-induced reaching errors, as demonstrated in previous ‘mirror illusion’ reaching experiments, should therefore be greater under visual-proprioceptive conflict in the azimuthal direction than in the radial direction. We report two experiments designed to investigate the influence of direction-dependent weighting on the visual bias of reaching movements under the influence of a mirror-illusion. In Experiment 1, participants made reaches straight forward, and showed terminal reaching errors that were biased by vision in both directions, but this bias was significantly greater in the azimuthal as compared to the radial direction. In Experiment 2, participants made reaches from right to left, and showed a significant bias only in the azimuthal direction. These results support the direction-dependent weighting of visual and proprioceptive information, with vision relatively more dominant in the azimuthal direction, and proprioception relatively stronger in the radial direction.
doi:10.1016/j.neuropsychologia.2006.01.003
PMCID: PMC1705814  PMID: 16499935
Multisensory; Phantom limb; Mirror illusion; Hand position; Visuomotor
16.  Tactile feedback improves auditory spatial localization 
Frontiers in Psychology  2014;5:1121.
Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three sound sequence was spatially closer to the first or the third sound. The tactile feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject’s forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially congruent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.
doi:10.3389/fpsyg.2014.01121
PMCID: PMC4202795  PMID: 25368587
recalibration; auditory localization; spatial perception; tactile feedback
17.  Sound localization with head movement: implications for 3-d audio displays 
Previous studies have shown that the accuracy of sound localization is improved if listeners are allowed to move their heads during signal presentation. This study describes the function relating localization accuracy to the extent of head movement in azimuth. Sounds that are difficult to localize were presented in the free field from sources at a wide range of azimuths and elevations. Sounds remained active until the participants' heads had rotated through windows 2, 4, 8, 16, 32, or 64° of azimuth in width. Error in determining sound-source elevation and the rate of front/back confusion were found to decrease with increases in azimuth window width. Error in determining sound-source lateral angle was not found to vary with azimuth window width. Implications for 3-d audio displays: the utility of a 3-d audio display for imparting spatial information is likely to be improved if operators are able to move their heads during signal presentation. Head movement may compensate in part for a paucity of spectral cues to sound-source location resulting from limitations in either the audio signals presented or the directional filters (i.e., head-related transfer functions) used to generate a display. However, head movements of a moderate size (i.e., through around 32° of azimuth) may be required to ensure that spatial information is conveyed with high accuracy.
doi:10.3389/fnins.2014.00210
PMCID: PMC4130110  PMID: 25161605
audio displays; sound localization; auditory-vestibular integration
18.  From ear to body: the auditory-motor loop in spatial cognition 
Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject's head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimited area in order to find a hidden auditory target, i.e., a sound that was only triggered when walking on a precise location of the area. The position of this target could be coded in relationship to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which they had to memorize the location of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of the search paths revealed how auditory information was coded to memorize the position of the target, and suggested that space can be efficiently coded without visual information in normally sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favor of the hypothesis that the brain has access to a modality-invariant representation of external space.
doi:10.3389/fnins.2014.00283
PMCID: PMC4155796  PMID: 25249933
spatial audition; Morris water maze; auditory landmarks; virtual reality; navigation; spatial memory; allocentric representation; auditory scene
19.  Effects of Auditory Stimuli in the Horizontal Plane on Audiovisual Integration: An Event-Related Potential Study 
PLoS ONE  2013;8(6):e66402.
This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160–200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360–400 milliseconds. Our results confirmed that audiovisual integration was also elicited, even though auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side.
doi:10.1371/journal.pone.0066402
PMCID: PMC3684583  PMID: 23799097
20.  A comparison of two theories of perceived distance on the ground plane: The angular expansion hypothesis and the intrinsic bias hypothesis 
i-Perception  2012;3(5):368-383.
Two theories of distance perception—ie, the angular expansion hypothesis (Durgin and Li, 2011 Attention, Perception, & Psychophysics 73 1856–1870) and the intrinsic bias hypothesis (Ooi et al, 2006 Perception 35 605–624)—are compared. Both theories attribute exocentric distance foreshortening to an exaggeration in perceived slant, but their fundamental geometrical assumptions are very different. The intrinsic bias hypothesis assumes a constant bias in perceived geographical slant of the ground plane and predicts both perceived egocentric and exocentric distances are increasingly compressed. In contrast, the angular expansion hypothesis assumes exaggerations in perceived gaze angle and perceived optical slant. Because the bias functions of the two angular variables are different, it allows the angular expansion hypothesis to distinguish two types of distance foreshortening—the linear compression in perceived egocentric distance and the nonlinear compression in perceived exocentric distance. While the intrinsic bias is proposed only for explaining distance biases, the angular expansion hypothesis provides accounts for a broader range of spatial biases.
doi:10.1068/i0505
PMCID: PMC3393602  PMID: 22792434
egocentric distance; exocentric distance; spatial biases; non-Euclidean; visual space; geographical slant
22.  Lighting direction and visual field modulate perceived intensity of illumination 
When interpreting object shape from shading, the visual system exhibits a strong bias to assume that illumination comes from above and slightly from the left. We asked whether such biases in the perceived direction of illumination might also influence its perceived intensity. Arrays of nine cubes were stereoscopically rendered where individual cubes varied in their 3D pose, but possessed identical triplets of visible faces. Arrays were virtually illuminated from one of four directions: Above-Left, Above-Right, Below-Left, and Below-Right (±24.4° azimuth; ±90° elevation). Illumination intensity possessed 15 levels, resulting in mean cube array luminances ranging from 1.31 to 3.45 cd/m2. A “reference” array was consistently illuminated from Above-Left at mid-intensity (mean array luminance = 2.38 cd/m2). The reference array's illumination was compared to that of matching arrays which were illuminated from all four directions at all intensities. Reference and matching arrays appeared in the left and right visual field, respectively, or vice versa. Subjects judged which cube array appeared to be under more intense illumination. Using the method of constant stimuli we determined the illumination level of matching arrays required to establish subjective equality with the reference array as a function of matching cube visual field, illumination elevation, and illumination azimuth. Cube arrays appeared significantly more intensely illuminated when they were situated in the left visual field (p = 0.017), when they were illuminated from below (p = 0.001), and when they were illuminated from the left (p = 0.001). A modest interaction was also observed: the effect of illumination azimuth was greater for matching arrays situated in the left visual field (p = 0.042). We propose that objects lit from below appear more intensely illuminated than identical objects lit from above due to long-term adaptation to downward lighting. The amplification of perceived intensity of illumination for stimuli situated in the left visual field and lit from the left is best explained by tonic egocentric and allocentric leftward attentional biases, respectively.
doi:10.3389/fpsyg.2013.00983
PMCID: PMC3870952  PMID: 24399990
brightness; perceived illumination; light-from-above bias; light-from-left bias; pseudoneglect; allocentric; egocentric; spatial attention
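
The method-of-constant-stimuli analysis mentioned in entry 22 above boils down to fitting a psychometric function and reading off the point of subjective equality (PSE). The sketch below uses invented response proportions, not the study's data, and a standard cumulative-Gaussian fit.

    # Illustrative PSE estimate from made-up constant-stimuli data.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    # Assumed matching-array mean luminances (cd/m2) and fake proportions of
    # "matching array looks more intensely illuminated" responses.
    luminance = np.array([1.31, 1.62, 1.93, 2.24, 2.55, 2.86, 3.17, 3.45])
    p_brighter = np.array([0.05, 0.12, 0.30, 0.48, 0.70, 0.86, 0.95, 0.98])

    def psychometric(x, pse, spread):
        # Cumulative Gaussian: pse is the 50% point, spread its standard deviation.
        return norm.cdf(x, loc=pse, scale=spread)

    (pse, spread), _ = curve_fit(psychometric, luminance, p_brighter, p0=[2.38, 0.4])
    print(f"PSE = {pse:.2f} cd/m2 (reference fixed at 2.38 cd/m2)")

A PSE below the reference luminance would mean the matching array needed less physical light to appear equally intense, i.e., it looked more intensely illuminated; this is how effects of visual field and lighting direction are expressed in such a design.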
23.  Interdependent encoding of pitch, timbre and spatial location in auditory cortex 
Because we can perceive the pitch, timbre and spatial location of a sound source independently, it seems natural to suppose that cortical processing of sounds might separate out spatial from non-spatial attributes. Indeed, recent studies support the existence of anatomically segregated ‘what’ and ‘where’ cortical processing streams. However, few attempts have been made to measure the responses of individual neurons in different cortical fields to sounds that vary simultaneously across spatial and non-spatial dimensions. We recorded responses to artificial vowels presented in virtual acoustic space to investigate the representations of pitch, timbre and sound source azimuth in both core and belt areas of ferret auditory cortex. A variance decomposition technique was used to quantify the way in which altering each parameter changed neural responses. Most units were sensitive to two or more of these stimulus attributes. Whilst indicating that neural encoding of pitch, location and timbre cues is distributed across auditory cortex, significant differences in average neuronal sensitivity were observed across cortical areas and depths, which could form the basis for the segregation of spatial and non-spatial cues at higher cortical levels. Some units exhibited significant non-linear interactions between particular combinations of pitch, timbre and azimuth. These interactions were most pronounced for pitch and timbre and were less commonly observed between spatial and non-spatial attributes. Such non-linearities were most prevalent in primary auditory cortex, although they tended to be small compared with stimulus main effects.
doi:10.1523/JNEUROSCI.4755-08.2009
PMCID: PMC2663390  PMID: 19228960
Auditory cortex; tuning; sound; spike trains; vocalization; localization; parallel; hearing
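
A hedged sketch of a variance-decomposition analysis in the spirit of entry 23 above: partition a unit's response variance into contributions of pitch, timbre, and azimuth. The simulated spike counts and the main-effects-only partition are illustrative assumptions, not the authors' method or data.

    # Toy variance decomposition over a pitch x timbre x azimuth stimulus grid.
    import numpy as np

    rng = np.random.default_rng(0)
    n_pitch, n_timbre, n_azimuth = 4, 3, 5
    # Fake responses with built-in main effects plus noise.
    responses = (2.0 * np.arange(n_pitch)[:, None, None]
                 + 1.0 * np.arange(n_timbre)[None, :, None]
                 + 0.5 * np.arange(n_azimuth)[None, None, :]
                 + rng.normal(0.0, 1.0, (n_pitch, n_timbre, n_azimuth)))

    grand_mean = responses.mean()
    total_ss = ((responses - grand_mean) ** 2).sum()

    def main_effect_ss(axis_kept):
        # Sum of squares explained by the marginal means along one stimulus factor.
        other_axes = tuple(a for a in range(3) if a != axis_kept)
        marginal = responses.mean(axis=other_axes)
        n_per_level = responses.size / marginal.size
        return float(n_per_level * ((marginal - grand_mean) ** 2).sum())

    for name, axis in (("pitch", 0), ("timbre", 1), ("azimuth", 2)):
        share = 100 * main_effect_ss(axis) / total_ss
        print(f"{name:8s} explains {share:5.1f}% of response variance")

Interaction terms (e.g., pitch x timbre), which the paper reports as generally small but most prevalent in primary auditory cortex, would be captured by extending the same sum-of-squares bookkeeping to cell means.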
24.  The Poggendorff illusion affects manual pointing as well as perceptual judgements 
Neuropsychologia  2009;47(14):3217-3224.
Pointing movements made to a target defined by the imaginary intersection of a pointer with a distant landing line were examined in healthy human observers in order to determine whether such motor responses are susceptible to the Poggendorff effect. In this well-known geometric illusion observers make systematic extrapolation errors when the pointer abuts a second line (the inducer). The kinematics of extrapolation movements, in which no explicit target was present, were similar to those made in response to a rapid-onset (explicit) dot target. The results unambiguously demonstrate that motor (pointing) responses are susceptible to the illusion. In fact, raw motor biases were greater than for perceptual responses: in the absence of an inducer (and hence also the acute angle of the Poggendorff stimulus) perceptual responses were near-veridical, whilst motor responses retained a bias. Therefore, the full Poggendorff stimulus contained two biases: one mediated by the acute angle formed between the oblique pointer and the inducing line (the classic Poggendorff effect), which affected both motor and perceptual responses equally, and another bias, which was independent of the inducer and primarily affected motor responses. We conjecture that this additional motor bias is associated with an undershoot in the unknown direction of movement and provide evidence to justify this claim. In conclusion, both manual pointing and perceptual judgements are susceptible to the well-known Poggendorff effect, supporting the notion of a unitary representation of space for action and perception or else an early locus for the effect, prior to the divergence of processing streams.
doi:10.1016/j.neuropsychologia.2009.07.024
PMCID: PMC2852533  PMID: 19665467
Action; Perception; Dorsal; Ventral; Pointing; Illusions
25.  Verbal and novel multisensory associative learning in adults 
F1000Research  2013;2:34.
To date, few studies have focused on the behavioural differences between the learning of multisensory auditory-visual and intra-modal associations. More specifically, the relative benefits of novel auditory-visual and verbal-visual associations for learning have not been directly compared. In Experiment 1, 20 adult volunteers completed three paired associate learning tasks: non-verbal novel auditory-visual (novel-AV), verbal-visual (verbal-AV; using pseudowords), and visual-visual (shape-VV). Participants were directed to make a motor response to matching novel and arbitrarily related stimulus pairs. Feedback was provided to facilitate trial and error learning. The results of Signal Detection Theory analyses suggested a multisensory enhancement of learning, with significantly higher discriminability measures (d-prime) in both the novel-AV and verbal-AV tasks than the shape-VV task. Motor reaction times were also significantly faster during the verbal-AV task than during the non-verbal learning tasks. Experiment 2 (n = 12) used a forced-choice discrimination paradigm to assess whether a difference in unisensory stimulus discriminability could account for the learning trends in Experiment 1. Participants were significantly slower at discriminating unisensory pseudowords than the novel sounds and visual shapes, which was notable given that these stimuli produced superior learning. Together the findings suggest that verbal information has an added enhancing effect on multisensory associative learning in adults.
doi:10.12688/f1000research.2-34.v1
PMCID: PMC3907154  PMID: 24627770
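
A minimal sketch of the d-prime discriminability measure cited in entry 25 above, assuming the standard equal-variance signal detection model. The hit and false-alarm rates are invented for illustration; they are not the study's values.

    # d' = z(hit rate) - z(false-alarm rate), with an optional 1/(2N) correction
    # so that rates of exactly 0 or 1 stay finite.
    from scipy.stats import norm

    def d_prime(hit_rate, false_alarm_rate, n_trials=None):
        if n_trials:
            lo, hi = 1 / (2 * n_trials), 1 - 1 / (2 * n_trials)
            hit_rate = min(max(hit_rate, lo), hi)
            false_alarm_rate = min(max(false_alarm_rate, lo), hi)
        return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

    # Hypothetical pattern in which the audio-visual pairings are learned best.
    print("novel-AV :", round(d_prime(0.85, 0.20), 2))
    print("verbal-AV:", round(d_prime(0.88, 0.18), 2))
    print("shape-VV :", round(d_prime(0.75, 0.30), 2))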
