Memory for everyday events plays a central role in tasks of daily living, autobiographical memory, and planning. Event memory depends in part on segmenting ongoing activity into meaningful units. This study examined the relationship between event segmentation and memory in a lifespan sample to answer the following question: Is the ability to segment activity into meaningful events a unique predictor of subsequent memory, or is the relationship between event perception and memory accounted for by general cognitive abilities? Two hundred and eight adults ranging from 20 to 79 years old segmented movies of everyday events and attempted to remember the events afterwards. They also completed psychometric ability tests and tests measuring script knowledge for everyday events. Event segmentation and script knowledge both explained unique variance in event memory above and beyond the psychometric measures, and did so as strongly in older as in younger adults. These results suggest that event segmentation is a basic cognitive mechanism, important for memory across the lifespan.
Event cognition; Episodic memory; Cognitive aging
Research investigating how people remember the distance of paths they walk has shown two apparently conflicting effects of experience during encoding on subsequent distance judgments. By the feature accumulation effect, discrete path features such as turns, houses, or other landmarks cause an increase in remembered distance. By the distractor effect, performance of a concurrent task during path encoding causes a decrease in remembered distance. This study asks: What are the conditions that determine whether the feature accumulation effect or the distractor effect dominates distortions of remembered distance? In two experiments, blindfolded participants were guided along two legs of a right triangle while reciting nonsense syllables. On some trials, one of the two legs contained features: horizontally mounted car antennas (gates) that bent out of the way as participants walked past. At the end of the second leg, participants either indicated the remembered path leg lengths using their hands in a ratio estimation task, or attempted to walk, unguided, straight back to the beginning. In addition to response mode, visual access to the paths and the time between encoding and response were manipulated to determine whether these factors affected feature accumulation or distractor effects. Path legs with added features were remembered as shorter than those without, but this result was significant only in the haptic response mode data. This finding suggests that when people form spatial memory representations with the intention of navigating in room-scale spaces, interfering with information accumulation substantially distorts spatial memory.
Deficits in memory for everyday activities are common complaints among healthy and demented older adults. The medial temporal lobes and dorsolateral prefrontal cortex are both affected by aging and early-stage Alzheimer's disease, and are known to influence performance on laboratory memory tasks. We investigated whether the volume of these structures predicts everyday memory. Cognitively healthy older adults and older adults with mild Alzheimer's-type dementia watched movies of everyday activities and completed memory tests on the activities. Structural MRI was used to measure brain volume. Medial temporal but not prefrontal volume strongly predicted subsequent memory. Everyday memory depends on segmenting activity into discrete events during perception, and medial temporal volume partially accounted for the relationship between performance on the memory tests and performance on an event-segmentation task. The everyday-memory measures used in this study involve retrieval of episodic and semantic information as well as working memory updating. Thus, the current findings suggest that during perception, the medial temporal lobes support the construction of event representations that determine subsequent memory.
perception; memory; cognitive neuroscience; aging
We explored a system that constructs environment-centered frames of reference and coordinates memory for the azimuth of an object in an enclosed space. For one group, we provided two environmental cues (doors): one in the front, and one in the rear. For a second group, we provided two object cues: a front and a rear cue. For a third group, we provided no external cues; we assumed that for this group, their reference frames would be determined by the orthogonal geometry of the floor-and-wall junction that divides a space in half or into multiple territories along the horizontal continuum. Using Huttenlocher, Hedges, and Duncan’s (Psychological Review 98: 352-376, 1991) category-adjustment model (cue-based fuzzy boundary version) to fit the data, we observed different reference frames than have been seen in prior studies involving two-dimensional domains. The geometry of the environment affected all three conditions and biased the remembered object locations within a two-category (left vs. right) environmental frame. The influence of the environmental geometry remained observable even after the participants’ heading within the environment changed due to a body rotation, attenuating the effect of the front but not of the rear cue. The door and object cues both appeared to define boundaries of spatial categories when they were used for reorientation. This supports the idea that both types of cues can assist in environment-centered memory formation.
Place memory; Coarse-grain representation; Environmental geometry; Landmark; Category; Bias
We explored the effect of superficial priming in episodic recognition and found it to be different from the effect of semantic priming in episodic recognition. Participants made recognition judgments to pairs of items, with each pair consisting of a prime item and a test item. Correct positive responses to the test item were impeded if the prime and test item were superficially related; this was the case when the items were words and the crucial relationship was phonological and orthographic as well as when the items were letter strings and the crucial relationship was orthographic. The results of further experiments suggested that the priming effect cannot be attributed to a process of discounting or to habituation in a familiarity assessment process.
On each trial of the experimental procedure, the participant read a list of words and made successive recognition judgments to multiple test words. The bias for a given recognition judgment was more conservative if the judgment followed a correct positive response to a target than if it followed a correct negative response to a lure. Similar results were not observed for successive semantic recognition judgments. The bias shift was greater when the study list was short than when the list was long. The results suggest that participants in a recognition task have a sense of the size of the set of targets that might possibly be presented on the next trial, and that, under conditions in which a word can only be presented once during the test phase, their bias becomes more conservative after a positive response to a target because the set is depleted.
recognition; response bias; sequential effects
In order to gain insight into the nature of human spatial representations, the current study examined how those representations are affected by blind rotation. Evidence was sought on the possibility that whereas certain environmental aspects may be updated independently of one another, other aspects may be grouped (or chunked) together and updated as a unit. Participants learned the locations of an array of objects around them in a room, then were blindfolded and underwent a succession of passive, whole-body rotations. After each rotation, participants pointed to remembered target locations. Targets were located more precisely relative to each other if they were (a) separated by smaller angular distances, (b) contained within the same regularly configured arrangement, or (c) corresponded to parts of a common object. A hypothesis is presented describing the roles played by egocentric and allocentric information within the spatial updating system. Results are interpreted in terms of an existing neural systems model, elaborating the model’s conceptualization of how parietal (egocentric) and medial temporal (allocentric) representations interact.
spatial memory; spatial updating; egocentric; allocentric; chunking
Human spatial representations of object locations in a room-sized environment were probed for evidence that the object locations were encoded relative not just to the observer (egocentrically) but also to each other (allocentrically). Participants learned the locations of 4 objects and then were blindfolded and either (a) underwent a succession of 70° and 200° whole-body rotations or (b) were fully disoriented and then underwent a similar sequence of 70° and 200° rotations. After each rotation, participants pointed to the objects without vision. Analyses of the pointing errors suggest that as participants lost orientation, represented object directions generally “drifted” off of their true directions as an ensemble, not in random, unrelated directions. This is interpreted as evidence that object-to-object (allocentric) relationships play a large part in the human spatial updating system. However, there was also some evidence that represented object directions occasionally drifted off of their true directions independently of one another, suggesting a lack of allocentric influence. Implications regarding the interplay of egocentric and allocentric information are considered.
spatial representation; egocentric–allocentric frames of reference; spatial updating
While an increasing number of behavioral studies examining spatial cognition use experimental paradigms involving disorientation, the process by which one becomes disoriented is not well explored. The current study examined this process using a paradigm in which participants were blindfolded and underwent a succession of 70° or 200° passive, whole-body rotations around a fixed vertical axis. After each rotation, participants used a pointer to indicate either their heading at the start of the most recent turn or their heading at the start of the current series of turns. Analyses showed that in both cases, mean pointing errors increased gradually over successive turns. In addition to the gradual loss of orientation indicated by this increase, analysis of the pointing errors also showed evidence of occasional, abrupt loss of orientation. Results indicate multiple routes from an oriented to a disoriented state, and shed light on the process of becoming disoriented.
spatial cognition; disorientation
Previous studies have demonstrated large errors (over 30°) in visually perceived exocentric directions (the direction between two objects that are both displaced from the observer’s location; e.g., Philbeck et al., in press). Here, we investigated whether a similar pattern occurs in auditory space. Blindfolded participants either attempted to aim a pointer at auditory targets (an exocentric task) or gave a verbal estimate of the egocentric target azimuth. Targets were located at 20° to 160° azimuth in the right hemispace. For comparison, we also collected pointing and verbal judgments for visual targets. We found that exocentric pointing responses exhibited sizeable undershooting errors, for both auditory and visual targets, that tended to become more strongly negative as azimuth increased (up to −19° for visual targets at 160°). Verbal estimates of the auditory and visual target azimuths, however, showed a dramatically different pattern, with relatively small overestimations of azimuths in the rear hemispace. At least some of the differences between verbal and pointing responses appear to be due to the frames of reference underlying the responses; when participants used the pointer to reproduce the egocentric target azimuth rather than the exocentric target direction relative to the pointer, the pattern of pointing errors more closely resembled that seen in verbal reports. These results show that there are similar distortions in perceiving exocentric directions in visual and auditory space.
manual pointing; auditory space perception; perception / action; perceived direction; spatial cognition
Many tasks have been used to probe human directional knowledge, but relatively little is known about the comparative merits of different means of indicating target azimuth. Few studies have compared action-based versus non-action-based judgments for targets encircling the observer. This comparison promises to illuminate not only the perception of azimuths in the front and rear hemispaces, but also the frames of reference underlying various azimuth judgments, and ultimately their neural underpinnings. We compared a pointing response, in which participants aimed a pointer at a nearby target, with verbal azimuth estimates. Target locations were distributed between 20 and 340 deg. Non-visual pointing responses exhibited large constant errors (up to −32 deg) that tended to increase with target eccentricity. Pointing with eyes open also showed large errors (up to −21 deg). In striking contrast, verbal reports were highly accurate, with constant errors rarely exceeding ±5 deg. Under our testing conditions, these results are not likely to stem from differences in perception-based vs. action-based responses, but instead reflect the frames of reference underlying the pointing and verbal responses. When participants used the pointer to match the egocentric target azimuth rather than the exocentric target azimuth relative to the pointer, errors were reduced.
open loop pointing; spatial cognition; perception/action; perceived direction
Participants read lists of words and then made recognition judgments to pairs of words, each of which consisted of a prime word and a test word. At issue was the effect of a semantic relationship between the prime word and the test word on the recognition judgment to the test word. Under standard recognition conditions, semantic priming impeded correct recognition judgments to new test words and had no effect on recognition judgments to old test words. The overall effect was to reduce the level of discrimination for recognition judgments to the test word. Under conditions in which familiarity assessment would be expected to play a greater role in judgments to old test words, semantic priming facilitated those judgments. The results are explained in terms of a dual process account of recognition.
Four experiments explored a recognition decrement that is associated with the recognition of a word from a short list. The stimulus material for demonstrating the phenomenon was a list of words of different syntactic types. A word from the list was recognized less well following a decision that a word of the same type had occurred in the list than following a decision that such a word had not occurred in the list. A recognition decrement did not occur for a word of a given type following a positive recognition decision to a word of a different type. A recognition decrement did not occur when the list consisted exclusively of nouns. It was concluded that the phenomenon may reflect a criterion shift but probably does not reflect a list strength effect, suppression, or familiarity attribution consequent to a perceived discrepancy between actual and expected fluency.
memory; recognition; retrieval; memory decrement; suppression