In manipulating a pointer to indicate subjective straight ahead (SSA), participants were more variable after a series of whole-body rotations in conjunction with external sensory blockade than after external sensory blockade alone. The variability of reported SSA did not increase consequent to a temporal delay matched to the time taken by the rotation procedure. The results suggest that an observer’s egocentric reference frame is more complex and less stable than has previously been thought.
We explored a system that constructs environment-centered frames of reference and coordinates memory for the azimuth of an object in an enclosed space. For one group, we provided two environmental cues (doors): one in the front, and one in the rear. For a second group, we provided two object cues: a front and a rear cue. For a third group, we provided no external cues; we assumed that for this group, their reference frames would be determined by the orthogonal geometry of the floor-and-wall junction that divides a space in half or into multiple territories along the horizontal continuum. Using Huttenlocher, Hedges, and Duncan’s (Psychological Review 98: 352-376, 1991) category-adjustment model (cue-based fuzzy boundary version) to fit the data, we observed different reference frames than have been seen in prior studies involving two-dimensional domains. The geometry of the environment affected all three conditions and biased the remembered object locations within a two-category (left vs. right) environmental frame. The influence of the environmental geometry remained observable even after the participants’ heading within the environment changed due to a body rotation, attenuating the effect of the front but not of the rear cue. The door and object cues both appeared to define boundaries of spatial categories when they were used for reorientation. This supports the idea that both types of cues can assist in environment-centered memory formation.
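The category-adjustment model referenced above treats a remembered location as a weighted combination of a fine-grained memory trace and the prototype of the spatial category containing it, which produces the bias toward category centers described in the abstract. As a rough illustration only (not the authors' fitting code; the function name, example values, and weight are hypothetical), the core weighted-average estimate can be sketched as:

```python
def category_adjusted_estimate(fine_grained_azimuth, prototype_azimuth, weight):
    """Weighted combination of a fine-grained memory trace and a category
    prototype, after Huttenlocher, Hedges, & Duncan (1991).

    `weight` (lambda) reflects the relative precision of the fine-grained
    trace: 1.0 means no categorical bias, 0.0 means pure prototype recall.
    """
    return weight * fine_grained_azimuth + (1.0 - weight) * prototype_azimuth

# Hypothetical example: an object studied at 40 deg azimuth, falling in a
# "right" category whose prototype lies at 90 deg, with lambda = 0.8.
# The estimate is pulled away from 40 deg toward the 90-deg prototype.
estimate = category_adjusted_estimate(40.0, 90.0, 0.8)
```

In the full model, the weight varies with the precision of the fine-grained trace, so noisier memories are pulled more strongly toward the prototype; fitting the model to pointing data recovers the category boundaries (here, the left-versus-right division of the environment).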
place memory; coarse-grain representation; environmental geometry; landmark; category; bias
We explored the effect of superficial priming in episodic recognition and found it to be different from the effect of semantic priming in episodic recognition. Participants made recognition judgments to pairs of items, with each pair consisting of a prime item and a test item. Correct positive responses to the test item were impeded if the prime and test item were superficially related; this was the case when the items were words and the crucial relationship was phonological and orthographic as well as when the items were letter strings and the crucial relationship was orthographic. The results of further experiments suggested that the priming effect cannot be attributed to a process of discounting or to habituation in a familiarity assessment process.
On each trial of the experimental procedure, the participant read a list of words and made successive recognition judgments to multiple test words. The bias for a given recognition judgment was more conservative if the judgment followed a correct positive response to a target than if it followed a correct negative response to a lure. Similar results were not observed for successive semantic recognition judgments. The bias shift was greater when the study list was short than when the list was long. The results suggest that participants in a recognition task have a sense of the size of the set of targets that might possibly be presented on the next trial, and that, under conditions in which a word can only be presented once during the test phase, their bias becomes more conservative after a positive response to a target because the set is depleted.
recognition; response bias; sequential effects
In order to gain insight into the nature of human spatial representations, the current study examined how those representations are affected by blind rotation. Evidence was sought on the possibility that whereas certain environmental aspects may be updated independently of one another, other aspects may be grouped (or chunked) together and updated as a unit. Participants learned the locations of an array of objects around them in a room, then were blindfolded and underwent a succession of passive, whole-body rotations. After each rotation, participants pointed to remembered target locations. Targets were located more precisely relative to each other if they were (a) separated by smaller angular distances, (b) contained within the same regularly configured arrangement, or (c) corresponded to parts of a common object. A hypothesis is presented describing the roles played by egocentric and allocentric information within the spatial updating system. Results are interpreted in terms of an existing neural systems model, elaborating the model’s conceptualization of how parietal (egocentric) and medial temporal (allocentric) representations interact.
spatial memory; spatial updating; egocentric; allocentric; chunking
Human spatial representations of object locations in a room-sized environment were probed for evidence that the object locations were encoded relative not just to the observer (egocentrically) but also to each other (allocentrically). Participants learned the locations of 4 objects and then were blindfolded and either (a) underwent a succession of 70° and 200° whole-body rotations or (b) were fully disoriented and then underwent a similar sequence of 70° and 200° rotations. After each rotation, participants pointed to the objects without vision. Analyses of the pointing errors suggest that as participants lost orientation, represented object directions generally “drifted” off of their true directions as an ensemble, not in random, unrelated directions. This is interpreted as evidence that object-to-object (allocentric) relationships play a large part in the human spatial updating system. However, there was also some evidence that represented object directions occasionally drifted off of their true directions independently of one another, suggesting a lack of allocentric influence. Implications regarding the interplay of egocentric and allocentric information are considered.
spatial representation; egocentric–allocentric frames of reference; spatial updating
While an increasing number of behavioral studies examining spatial cognition use experimental paradigms involving disorientation, the process by which one becomes disoriented is not well explored. The current study examined this process using a paradigm in which participants were blindfolded and underwent a succession of 70° or 200° passive, whole-body rotations around a fixed vertical axis. After each rotation, participants used a pointer to indicate either their heading at the start of the most recent turn or their heading at the start of the current series of turns. Analyses showed that in both cases, mean pointing errors increased gradually over successive turns. In addition to the gradual loss of orientation indicated by this increase, analysis of the pointing errors also showed evidence of occasional, abrupt loss of orientation. Results indicate multiple routes from an oriented to a disoriented state, and shed light on the process of becoming disoriented.
spatial cognition; disorientation
Previous studies have demonstrated large errors (over 30°) in visually perceived exocentric directions (the direction between two objects that are both displaced from the observer’s location; e.g., Philbeck et al., in press). Here, we investigated whether a similar pattern occurs in auditory space. Blindfolded participants either attempted to aim a pointer at auditory targets (an exocentric task) or gave a verbal estimate of the egocentric target azimuth. Targets were located at 20° to 160° azimuth in the right hemispace. For comparison, we also collected pointing and verbal judgments for visual targets. We found that exocentric pointing responses exhibited sizeable undershooting errors, for both auditory and visual targets, that tended to become more strongly negative as azimuth increased (up to −19° for visual targets at 160°). Verbal estimates of the auditory and visual target azimuths, however, showed a dramatically different pattern, with relatively small overestimations of azimuths in the rear hemispace. At least some of the differences between verbal and pointing responses appear to be due to the frames of reference underlying the responses; when participants used the pointer to reproduce the egocentric target azimuth rather than the exocentric target direction relative to the pointer, the pattern of pointing errors more closely resembled that seen in verbal reports. These results show that there are similar distortions in perceiving exocentric directions in visual and auditory space.
manual pointing; auditory space perception; perception / action; perceived direction; spatial cognition
Participants read lists of words and then made recognition judgments to pairs of words, each of which consisted of a prime word and a test word. At issue was the effect of a semantic relationship between the prime word and the test word on the recognition judgment to the test word. Under standard recognition conditions, semantic priming impeded correct recognition judgments to new test words and had no effect on recognition judgments to old test words. The overall effect was to reduce the level of discrimination for recognition judgments to the test word. Under conditions in which familiarity assessment would be expected to play a greater role in judgments to old test words, semantic priming facilitated those judgments. The results are explained in terms of a dual process account of recognition.
In studies of anaphor comprehension, the capacity for recognizing a noun in a sentence decreases following the resolution of a repeated-noun anaphor (Gernsbacher, 1989). In studies of recognition memory, the capacity for recognizing a noun in a scrambled sentence decreases following the recognition that another noun has occurred before in the scrambled sentence (Dopkins & Ngo, 2002). The results of the present study suggest that these two phenomena reflect the same recognition memory process. The results suggest further that this is not because participants in studies of anaphor comprehension ignore the discourse properties of the stimulus materials and treat them as lists of words upon which memory tests are to be given. These results suggest that recognition processes play a role in anaphor comprehension and that such processes are in part the means by which repeated-noun anaphors are identified as such.
Four experiments explored a recognition decrement that is associated with the recognition of a word from a short list. The stimulus material for demonstrating the phenomenon was a list of words of different syntactic types. A word from the list was recognized less well following a decision that a word of the same type had occurred in the list than following a decision that such a word had not occurred in the list. A recognition decrement did not occur for a word of a given type following a positive recognition decision to a word of a different type. A recognition decrement did not occur when the list consisted exclusively of nouns. It was concluded that the phenomenon may reflect a criterion shift but probably does not reflect a list strength effect, suppression, or familiarity attribution consequent to a perceived discrepancy between actual and expected fluency.
memory; recognition; retrieval; memory decrement; suppression