Angular path integration refers to the ability to maintain an estimate of self-location after a rotational displacement by integrating internally generated (idiothetic) self-motion signals over time. Previous work has found that non-sensory inputs, namely spatial memory, can play a powerful role in angular path integration (Arthur et al., 2007, 2009). Here we investigated the conditions under which spatial memory facilitates angular path integration. We hypothesized that the benefit of spatial memory is particularly likely in spatial updating tasks in which one’s self-location estimate is referenced to external space. To test this idea, we administered passive, nonvisual body rotations (ranging from 40° to 140°) about the yaw axis and asked participants to use verbal reports or open-loop manual pointing to indicate the magnitude of the rotation. Prior to some trials, previews of the surrounding environment were given. We found that when participants adopted an egocentric frame of reference, the previously observed benefit of previews on within-subject response precision did not manifest, regardless of whether remembered spatial frameworks were derived from vision or from spatial language. We conclude that the powerful effect of spatial memory depends on one’s frame of reference during self-motion updating.
spatial memory; path integration; vestibular navigation; manual pointing; perception and action
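The integration at the heart of angular path integration can be written as a running sum of perceived angular velocity: theta(t) = integral of omega(tau) dtau from 0 to t. Below is a minimal sketch of that computation; the 100-Hz sampling rate and Gaussian noise model are illustrative assumptions, not parameters taken from the study.

```python
# Minimal sketch of angular path integration: accumulating noisy,
# internally generated angular-velocity signals over time to maintain
# a heading estimate. Noise model and sampling rate are illustrative.
import numpy as np

def integrate_heading(omega, dt, noise_sd=0.0, rng=None):
    """Return running heading estimates (deg) from angular-velocity samples (deg/s)."""
    rng = rng if rng is not None else np.random.default_rng()
    noisy = omega + rng.normal(0.0, noise_sd, size=len(omega))
    return np.cumsum(noisy) * dt  # discrete form of theta(t) = integral of omega dt

# Example: a 2-s rotation at a constant 45 deg/s should integrate to ~90 deg.
dt = 0.01                                # illustrative 100-Hz sampling
omega = np.full(200, 45.0)               # 200 samples x 0.01 s = 2 s
print(integrate_heading(omega, dt)[-1])  # 90.0 exactly when noise_sd = 0
```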
We explored a system that constructs environment-centered frames of reference and coordinates memory for the azimuth of an object in an enclosed space. For one group, we provided two environmental cues (doors): one in the front and one in the rear. For a second group, we provided two object cues: a front and a rear cue. For a third group, we provided no external cues; we assumed that this group’s reference frames would be determined by the orthogonal geometry of the floor-and-wall junction, which divides a space in half or into multiple territories along the horizontal continuum. Using Huttenlocher, Hedges, and Duncan’s (Psychological Review, 98, 352–376, 1991) category-adjustment model (cue-based fuzzy boundary version) to fit the data, we observed different reference frames than have been seen in prior studies involving two-dimensional domains. The geometry of the environment affected all three conditions and biased the remembered object locations within a two-category (left vs. right) environmental frame. The influence of the environmental geometry remained observable even after a body rotation changed the participants’ heading within the environment; the rotation attenuated the effect of the front cue but not of the rear cue. Both the door cues and the object cues appeared to define boundaries of spatial categories when they were used for reorientation, supporting the idea that both types of cues can assist in environment-centered memory formation.
Place memory; Coarse-grain representation; Environmental geometry; Landmark; Category; Bias
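For readers unfamiliar with the category-adjustment model cited above, its core is a weighted blend of a noisy fine-grain memory with the prototype of the spatial category containing the target, which pulls remembered locations toward the category center. The sketch below shows this standard simplification; the weighting rule and all parameter values are illustrative assumptions, not fits from the study.

```python
# Simplified category-adjustment estimate: blend a fine-grain memory of
# azimuth with the category prototype (e.g., center of the "right" half
# of the room). All numeric values here are illustrative.
def adjusted_azimuth(memory_deg, prototype_deg, sigma_mem, sigma_cat):
    # Weight on fine-grain memory grows as memory noise shrinks.
    lam = sigma_cat**2 / (sigma_cat**2 + sigma_mem**2)
    return lam * memory_deg + (1.0 - lam) * prototype_deg

# A target at 60 deg in a category with prototype 90 deg is remembered
# with a bias toward 90 deg when fine-grain memory is noisy.
print(adjusted_azimuth(60.0, 90.0, sigma_mem=20.0, sigma_cat=30.0))  # ~69.2
```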
Visual perception of absolute distance (between an observer and an object) is based on multiple sources of information that must be extracted during scene viewing. The viewing duration needed to fully extract distance information, particularly in navigable real-world environments, is unknown. In a visually directed walking task, a sensitive response to distance was observed with 9-ms glimpses when floor- and eye-level targets were employed. However, response compression occurred with eye-level targets when angular size was rendered uninformative. Performance at brief durations was characterized by underestimation, unless preceded by a block of extended-viewing trials. The results indicate a role for experience in the extraction of information during brief glimpses. Even without prior experience, the extraction of useful information is virtually immediate when the cues of angular size or angular declination are informative.
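The angular-declination cue mentioned above has a simple geometric form for floor-level targets: a target seen at an angle below eye level, from a known eye height, lies at a distance fixed by that angle. A minimal sketch, with illustrative eye height and declination values:

```python
# Distance from angular declination: a floor-level target seen at angle
# alpha below eye level, from eye height h, lies at d = h / tan(alpha).
import math

def distance_from_declination(eye_height_m, declination_deg):
    return eye_height_m / math.tan(math.radians(declination_deg))

print(distance_from_declination(1.6, 10.0))  # ~9.07 m
```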
Arousal has long been known to influence behavior and serves as an underlying component of cognition and consciousness. However, the consequences of hyper-arousal for visual perception remain unclear. The present study evaluates the impact of hyper-arousal on two aspects of visual sensitivity: visual stereoacuity and contrast thresholds. Sixty-eight individuals participated across two experiments. In each experiment, thirty-four participants were randomly divided into two groups: Arousal Stimulation or Sham Control. The Arousal Stimulation group underwent a 50-second cold pressor stimulation (immersing the foot in 0–2°C water), a technique known to increase arousal. In contrast, the Sham Control group immersed their foot in room temperature water. Stereoacuity thresholds (Experiment 1) and contrast thresholds (Experiment 2) were measured before and after stimulation. The Arousal Stimulation groups demonstrated significantly lower stereoacuity and contrast thresholds following cold pressor stimulation, whereas the Sham Control groups showed no difference in thresholds. These results provide the first evidence that hyper-arousal from sensory stimulation can lower visual thresholds. Hyper-arousal's ability to decrease visual thresholds has important implications for survival, sports, and everyday life.
The present study examined how cold pressor stimulation influences electrophysiological correlates of arousal. We measured the P50 auditory evoked response potential in two groups of subjects who immersed their foot in either cold (0–2°C) or room temperature (22–24°C) water for 50 seconds. The P50, which was recorded before and after stimulation, is sleep-state dependent and sensitive to states of arousal in clinical populations. We found a significant reduction in P50 amplitude after exposure to cold, but not room temperature water. In comparison with other studies, these results indicate that cold pressor stimulation in normal subjects may evoke a regulatory process that modulates the P50 amplitude, perhaps to preserve the integrity of sensory perception, even as autonomic and subjective aspects of arousal increase.
arousal; auditory evoked response potential; cold pressor stimulation; P50 ERP; regulatory arousal response; sensory perception
In order to gain insight into the nature of human spatial representations, the current study examined how those representations are affected by blind rotation. Evidence was sought on the possibility that whereas certain environmental aspects may be updated independently of one another, other aspects may be grouped (or chunked) together and updated as a unit. Participants learned the locations of an array of objects around them in a room, then were blindfolded and underwent a succession of passive, whole-body rotations. After each rotation, participants pointed to remembered target locations. Targets were located more precisely relative to each other if they were (a) separated by smaller angular distances, (b) contained within the same regularly configured arrangement, or (c) corresponded to parts of a common object. A hypothesis is presented describing the roles played by egocentric and allocentric information within the spatial updating system. Results are interpreted in terms of an existing neural systems model, elaborating the model’s conceptualization of how parietal (egocentric) and medial temporal (allocentric) representations interact.
spatial memory; spatial updating; egocentric; allocentric; chunking
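One way to make the "located more precisely relative to each other" measure above concrete is to compute, for a pair of targets, the variability of the difference between their pointing errors across rotations: errors that drift together (as a chunk) yield a small pairwise SD even when the absolute errors are large. A sketch with illustrative error values, not data from the study:

```python
# Relative (pairwise) pointing precision: SD of the error difference for
# two targets across trials. Small SD => the targets' represented
# directions drift together, consistent with chunked updating.
import numpy as np

def relative_precision(errors_a, errors_b):
    """SD (deg) of the signed-error difference for two targets across trials."""
    return np.std(np.asarray(errors_a) - np.asarray(errors_b), ddof=1)

errs_lamp  = [12.0, -8.0, 20.0, -15.0]   # illustrative signed errors (deg)
errs_table = [10.0, -11.0, 17.0, -12.0]  # drifts with the lamp's errors
print(relative_precision(errs_lamp, errs_table))  # ~2.9 deg: small => chunked
```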
Non-sensory (cognitive) inputs can play a powerful role in monitoring one’s self-motion. Previously, we showed that access to spatial memory dramatically increases response precision in an angular self-motion updating task. Here, we examined whether spatial memory also enhances a particular type of self-motion updating – angular path integration. “Angular path integration” refers to the ability to maintain an estimate of self-location after a rotational displacement by integrating internally generated (idiothetic) self-motion signals over time. It was hypothesized that remembered spatial frameworks derived from vision and spatial language should facilitate angular path integration by decreasing the uncertainty of self-location estimates. To test this, we implemented a whole-body rotation paradigm with passive, non-visual body rotations (ranging from 40° to 140°) administered about the yaw axis. Prior to the rotations, visual previews (Experiment 1) and verbal descriptions (Experiment 2) of the surrounding environment were given to participants. Perceived angular displacement was assessed by open-loop pointing to the origin (0°). We found that within-subject response precision significantly increased when participants were provided a spatial context prior to whole-body rotations. The present study goes beyond our previous findings by first establishing that memory of the environment enhances the processing of idiothetic self-motion signals. Moreover, we show that knowledge of one’s immediate environment, whether gained from direct visual perception or from indirect experience (i.e., spatial language), facilitates the integration of incoming self-motion signals.
Spatial memory; path integration; vestibular navigation; manual pointing
Blind walking has become a common measure of perceived target location. This article addresses the possibility that blind walking might vary systematically within an experimental session as participants accrue exposure to nonvisual locomotion. Such variations could complicate the interpretation of blind walking as a measure of perceived location. We measured walked distance, velocity, and pace length in indoor and outdoor environments (1.5–16.0 m target distances). Walked distance increased over 37 trials by approximately 9.33% of the target distance; velocity (and to a lesser extent, pace length) also increased, primarily in the first few trials. In addition, participants exhibited more unintentional forward drift in a blindfolded marching-in-place task after exposure to nonvisual walking. The results suggest that participants not only gain confidence as blind-walking exposure increases, but also adapt to nonvisual walking in a way that biases responses toward progressively longer walked distances.
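The trial-order drift described above can be quantified by regressing the response ratio (walked distance / target distance) on trial number; a positive slope indicates progressively longer walked distances. The sketch below uses simulated values chosen only for illustration, not the study's data.

```python
# Estimate per-trial drift in blind walking by fitting a line to the
# response ratio across trials. Simulated data for illustration only.
import numpy as np

trials = np.arange(1, 38)                    # 37 trials, as in the study
rng = np.random.default_rng(0)
ratio = 0.95 + 0.0025 * trials + rng.normal(0, 0.02, trials.size)

slope, intercept = np.polyfit(trials, ratio, 1)
print(f"drift per trial: {slope:.4f}; total over 37 trials: {slope * 37:.3f}")
# A total near 0.09 corresponds to drift of ~9% of the target distance.
```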
D. R. Proffitt and colleagues (e.g., D. R. Proffitt, J. Stefanucci, T. Banton, & W. Epstein, 2003) have suggested that objects appear farther away if more effort is required to act upon them (e.g., by having to throw a ball). The authors attempted to replicate several findings supporting this view but found no effort-related effects in a variety of conditions differing in environment, type of effort, and intention to act. Although they did find an effect of effort on verbal reports when participants were instructed to take into account nonvisual (cognitive) factors, no effort-related effect was found under apparent- and objective-distance instruction types. The authors’ interpretation is that in the paradigms tested, effort manipulations are prone to influencing response calibration because they encourage participants to take nonperceptual connotations of distance into account while leaving perceived distance itself unaffected. This in no way rules out the possibility that effort influences perception in other contexts, but it does focus attention on the role of response calibration in any verbal distance estimation task.
egocentric distance perception; effort; calibration; visual perception; instruction type
Although there are many well-known forms of visual cues specifying absolute and relative distance, little is known about how visual space perception develops at small temporal scales. How much time does the visual system require to extract the information in the various absolute and relative distance cues? In this article, we describe a system that may be used to address this issue by presenting brief exposures of real, three-dimensional scenes, followed by a masking stimulus. The system is composed of an electronic shutter (a liquid crystal smart window) for exposing the stimulus scene, and a liquid crystal projector coupled with an electromechanical shutter for presenting the masking stimulus. This system can be used in both full- and reduced-cue viewing conditions, under monocular and binocular viewing, and at distances limited only by the testing space. We describe a configuration that may be used for studying the microgenesis of visual space perception in the context of visually directed walking.
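The trial sequence the apparatus supports can be summarized as: open the smart window briefly to expose the real scene, close it, then un-shutter the projector to present the mask. The sketch below is schematic only; the `Shutter` class and its methods are hypothetical stand-ins for the actual hardware interface, and real millisecond-scale exposures would require hardware-timed triggering rather than software sleeps.

```python
# Schematic trial timing for brief-glimpse presentation of a real scene
# followed by a projected mask. The Shutter class is a hypothetical
# placeholder for the hardware control layer.
import time

class Shutter:
    def __init__(self, name): self.name = name
    def open(self):  print(f"{self.name}: open")
    def close(self): print(f"{self.name}: close")

def run_trial(scene_window, mask_shutter, exposure_s=0.009, mask_s=0.5):
    scene_window.open()          # glimpse of the real scene
    time.sleep(exposure_s)       # e.g., 9 ms (software sleep; schematic only)
    scene_window.close()
    mask_shutter.open()          # projected masking stimulus
    time.sleep(mask_s)
    mask_shutter.close()

run_trial(Shutter("smart window"), Shutter("mask shutter"))
```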
Blindwalking has become a common measure of perceived absolute distance and location, but it requires a relatively large testing space and cannot be used with people for whom walking is difficult or impossible. In the present article, we describe an alternative response type that is closely matched to blindwalking in several important respects but is less resource intensive. In the blindpulling technique, participants view a target, then close their eyes and pull a length of tape or rope between the hands to indicate the remembered target distance. As with blindwalking, this response requires integration of cyclical, bilateral limb movements over time. Blindpulling and blindwalking responses are tightly linked across a range of viewing conditions, and blindpulling is accurate when prior exposure to visually guided pulling is provided. Thus, blindpulling shows promise as a measure of perceived distance that may be used in nonambulatory populations and when the space available for testing is limited.
Human spatial representations of object locations in a room-sized environment were probed for evidence that the object locations were encoded relative not just to the observer (egocentrically) but also to each other (allocentrically). Participants learned the locations of 4 objects and then were blindfolded and either (a) underwent a succession of 70° and 200° whole-body rotations or (b) were fully disoriented and then underwent a similar sequence of 70° and 200° rotations. After each rotation, participants pointed to the objects without vision. Analyses of the pointing errors suggest that as participants lost orientation, represented object directions generally “drifted” off of their true directions as an ensemble, not in random, unrelated directions. This is interpreted as evidence that object-to-object (allocentric) relationships play a large part in the human spatial updating system. However, there was also some evidence that represented object directions occasionally drifted off of their true directions independently of one another, suggesting a lack of allocentric influence. Implications regarding the interplay of egocentric and allocentric information are considered.
spatial representation; egocentric–allocentric frames of reference; spatial updating
While an increasing number of behavioral studies examining spatial cognition use experimental paradigms involving disorientation, the process by which one becomes disoriented is not well explored. The current study examined this process using a paradigm in which participants were blindfolded and underwent a succession of 70° or 200° passive, whole-body rotations around a fixed vertical axis. After each rotation, participants used a pointer to indicate either their heading at the start of the most recent turn or their heading at the start of the current series of turns. Analyses showed that in both cases, mean pointing errors increased gradually over successive turns. In addition to the gradual loss of orientation indicated by this increase, analysis of the pointing errors also showed evidence of occasional, abrupt loss of orientation. Results indicate multiple routes from an oriented to a disoriented state, and shed light on the process of becoming disoriented.
spatial cognition; disorientation
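Pointing-error analyses of this kind require signed errors wrapped to the (-180, 180] deg range, so that gradual drift and abrupt flips are not conflated by the circularity of azimuth. A minimal helper, with illustrative values:

```python
# Wrap a signed angular pointing error into (-180, 180] degrees.
def signed_error(pointed_deg, correct_deg):
    diff = (pointed_deg - correct_deg) % 360.0
    return diff - 360.0 if diff > 180.0 else diff

print(signed_error(350.0, 10.0))   # -20.0, not 340.0
print(signed_error(30.0, 200.0))   # -170.0
```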
Previous studies have demonstrated large errors (over 30°) in visually perceived exocentric directions (the direction between two objects that are both displaced from the observer’s location; e.g., Philbeck et al., in press). Here, we investigated whether a similar pattern occurs in auditory space. Blindfolded participants either attempted to aim a pointer at auditory targets (an exocentric task) or gave a verbal estimate of the egocentric target azimuth. Targets were located at 20° to 160° azimuth in the right hemispace. For comparison, we also collected pointing and verbal judgments for visual targets. We found that exocentric pointing responses exhibited sizeable undershooting errors, for both auditory and visual targets, that tended to become more strongly negative as azimuth increased (up to −19° for visual targets at 160°). Verbal estimates of the auditory and visual target azimuths, however, showed a dramatically different pattern, with relatively small overestimations of azimuths in the rear hemispace. At least some of the differences between verbal and pointing responses appear to be due to the frames of reference underlying the responses; when participants used the pointer to reproduce the egocentric target azimuth rather than the exocentric target direction relative to the pointer, the pattern of pointing errors more closely resembled that seen in verbal reports. These results show that there are similar distortions in perceiving exocentric directions in visual and auditory space.
manual pointing; auditory space perception; perception / action; perceived direction; spatial cognition
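The egocentric/exocentric distinction drawn above can be made concrete: the egocentric azimuth is measured from the observer's location, whereas the exocentric pointing response requires the direction from the pointer's own location to the target. A geometric sketch with illustrative positions (azimuths in degrees clockwise from straight ahead; all coordinates are assumptions for illustration):

```python
# Convert an egocentric azimuth/distance to the exocentric direction from
# a displaced pointer to the same target. Positions are illustrative.
import math

def to_xy(azimuth_deg, dist_m):
    a = math.radians(azimuth_deg)
    return (dist_m * math.sin(a), dist_m * math.cos(a))  # x right, y ahead

def exocentric_azimuth(pointer_xy, target_xy):
    dx = target_xy[0] - pointer_xy[0]
    dy = target_xy[1] - pointer_xy[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

target = to_xy(160.0, 2.0)   # rear-right target, 2 m from the observer
pointer = (0.3, 0.4)         # pointer held ahead of and right of the body
print(exocentric_azimuth(pointer, target))  # ~170 deg, not the 160 deg egocentric azimuth
```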
Many tasks have been used to probe human directional knowledge, but relatively little is known about the comparative merits of different means of indicating target azimuth. Few studies have compared action-based versus non-action-based judgments for targets encircling the observer. This comparison promises to illuminate not only the perception of azimuths in the front and rear hemispaces, but also the frames of reference underlying various azimuth judgments, and ultimately their neural underpinnings. We compared verbal azimuth estimates with a response in which participants aimed a pointer at a nearby target. Target locations were distributed between 20 and 340 deg. Non-visual pointing responses exhibited large constant errors (up to −32 deg) that tended to increase with target eccentricity. Pointing with eyes open also showed large errors (up to −21 deg). In striking contrast, verbal reports were highly accurate, with constant errors rarely exceeding +/− 5 deg. Under our testing conditions, these results are not likely to stem from differences in perception-based vs. action-based responses, but instead reflect the frames of reference underlying the pointing and verbal responses. When participants used the pointer to match the egocentric target azimuth rather than the exocentric target azimuth relative to the pointer, errors were reduced.
open loop pointing; spatial cognition; perception/action; perceived direction