Two experiments examined effects of mixed stimulus-response mappings and tasks for older and younger adults. In Experiment 1, participants performed two-choice spatial reaction tasks with blocks of pure and mixed compatible and incompatible mappings. In Experiment 2, a compatible or incompatible mapping was mixed with a Simon task for which the mapping of stimulus color to location was relevant and stimulus location irrelevant. In both experiments older adults showed larger mixing costs than younger adults and larger compatibility effects, with the differences particularly pronounced in Experiment 1 when location mappings were mixed. In mixed conditions, when stimulus location was relevant, older adults benefited more than younger adults from complete repetition of the task and stimulus from the preceding trial. When stimulus location was irrelevant, the benefit of complete repetition did not differ reliably between age groups. The results suggest that the age-related deficit associated with mixing mappings and tasks is primarily due to older adults having more difficulty separating task sets that activate conflicting response codes.
Aging; Attention; Response Selection; 2860 Gerontology; 2346 Attention
The role of body orientation in the orienting and allocation of social attention was examined using an adapted Simon paradigm. Participants categorized the facial expression of forward-facing, computer-generated human figures by pressing one of two response keys, located to the left or right of the observer's body midline. The task-irrelevant feature of interest, the orientation of the stimulus figure's body (trunk, arms, and legs), was manipulated (oriented toward the left or right visual hemifield) with respect to the spatial location of the required response. We found that when the orientation of the body was compatible with the required response location, responses were slower than when body orientation was incompatible with the response location. In line with a model put forward by Hietanen (1999), this reverse compatibility effect suggests that body orientation is automatically processed into a directional spatial code, but that this code is based on an integration of head and body orientation within an allocentric frame of reference. Moreover, we argue that this code may be derived from the motion information implied in the image of a figure when head and body orientation are incongruent. Our results have implications for understanding the nature of the information that affects the allocation of attention for social orienting.
social attention; spatial attention; Simon task; head-body orientation; implied motion
Responses are faster when the sides of stimulus and response correspond than when they do not, even if stimulus location is irrelevant to the task at hand; this is known as the correspondence effect, spatial compatibility effect, or Simon effect. Generally, it is assumed that an automatically generated spatial code is responsible for this effect, but the precise mechanism underlying the formation of this code is still under dispute. Two major alternatives have been proposed: the referential-coding account, which can be subdivided into a static version and an attention-centered version, and the attention-shift account. These accounts make clear-cut predictions for attentional cueing experiments. The static version of the referential-coding account predicts a Simon effect irrespective of attentional cueing, whereas the attention-centered version and the attention-shift account predict a decreased Simon effect on validly as opposed to invalidly cued trials. However, results from previous studies are equivocal regarding the effects of attentional cueing on the Simon effect. We argue here that attentional cueing reliably modulates the Simon effect if some crucial experimental conditions, mostly relevant for optimizing attentional allocation, are met. Furthermore, we propose that the Simon effect may be better understood within the perspective of supramodal spatial attention, thereby providing an explanation for observed discrepancies in the literature.
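The dual-route logic behind the Simon effect described above can be sketched as a toy simulation. All parameter values here (base RT, size of the automatic-route boost) are hypothetical illustration values, not estimates from any of the studies summarized in this collection.

```python
def simulate_rt(stimulus_side, response_side, base_rt=450.0, route_boost=25.0):
    """Return a hypothetical reaction time (ms) for one trial.

    The intentional route selects the response from the task-relevant
    feature; the automatic route derives a spatial code from the
    (task-irrelevant) stimulus side, speeding responses when that code
    matches the required response side and slowing them when it conflicts.
    Parameters are illustrative, not fitted to data.
    """
    if stimulus_side == response_side:    # corresponding trial
        return base_rt - route_boost
    return base_rt + route_boost          # non-corresponding trial

# The Simon effect is the RT difference between non-corresponding and
# corresponding trials: 50 ms under these illustrative parameters.
simon_effect = simulate_rt("left", "right") - simulate_rt("left", "left")
```

Under this sketch, any manipulation that weakens the automatic spatial code (e.g., valid attentional cueing, on the attention-centered accounts) would correspond to shrinking `route_boost` and hence the predicted effect.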
Selective attention protects cognition against intrusions of task-irrelevant stimulus attributes. This protective function was tested in coordinated psychophysical and memory experiments. Stimuli were superimposed, horizontally and vertically oriented gratings of varying spatial frequency; only one orientation was task relevant. Experiment 1 demonstrated that a task-irrelevant spatial frequency interfered with visual discrimination of the task-relevant spatial frequency. Experiment 2 adopted a two-item Sternberg task, using stimuli that had been scaled to neutralize interference at the level of vision. Despite being visually neutralized, the task-irrelevant attribute strongly influenced recognition accuracy and associated reaction times (RTs). This effect was sharply tuned, with the task-irrelevant spatial frequency having an impact only when the task-relevant spatial frequencies of the probe and study items were highly similar to one another. Model-based analyses of judgment accuracy and RT distributional properties converged on the point that the irrelevant orientation operates at an early stage in memory processing, not at a later one that supports decision making.
In the present study we investigated cue-induced preparation in a Simon task, recording electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data in two within-subjects sessions. Cues informed participants either about the upcoming spatial stimulus-response compatibility (rule cues) or about the upcoming stimulus location (position cues), or were non-informative. Only rule cues allowed anticipation of the upcoming compatibility condition; position cues allowed anticipation of the upcoming location of the Simon stimulus but not its compatibility condition. Rule cues elicited the fastest and most accurate performance for both compatible and incompatible trials. The contingent negative variation (CNV) in the event-related potential (ERP) of the cue-target interval, an index of anticipatory preparation, was magnified after rule cues. The N2 in the post-target ERP, a measure of online action control, was reduced in Simon trials after rule cues. Although compatible trials were faster than incompatible trials in all cue conditions, only non-informative cues revealed a compatibility effect in additional indicators of Simon task conflict such as accuracy and the N2. We thus conclude that rule cues induced an anticipatory re-coding of the Simon task that no longer involved cognitive conflict. fMRI revealed that rule cues yielded more activation of the left rostral, dorsal, and ventral prefrontal cortex as well as the pre-SMA compared with position and non-informative cues. Pre-SMA and ventrolateral prefrontal activation after rule cues correlated with the effective use of rule cues in behavioral performance. Position cues induced a smaller CNV effect and exhibited less prefrontal and pre-SMA contribution in fMRI. Our data point to the importance of disentangling different anticipatory adjustments, which may also include the prevention of upcoming conflict via task re-coding.
cognitive conflict; cueing; EEG; fMRI; pre-SMA; Simon task; anticipation; cognitive control
Crossing the hands, whether across the body midline or with respect to each other, leads to measurable changes in spatial compatibility and spatial attention, and frequently to a general decrement in discrimination performance for tactile stimuli. The majority of multisensory crossed-hands effects, however, have been demonstrated with explicit or implicit spatial discrimination tasks, raising the question of whether non-spatial discrimination tasks also show spatial effects when the hands are crossed. We designed a novel, non-spatial tactile discrimination task to address this issue. Participants made speeded discriminations of single- versus double-pulse vibrotactile targets, while trying to ignore simultaneous visual distractor stimuli, in both hands-uncrossed and hands-crossed postures. Tactile discrimination performance was significantly affected by the visual distractors (demonstrating a significant crossmodal congruency effect), and was affected most by visual distractors in the same external location as the tactile target (i.e., spatial modulation), regardless of the posture (uncrossed or crossed) of the hands (i.e., spatial ‘re-mapping’ of visual-tactile interactions). Finally, crossing the hands led to a general performance decrement with visual distractors, but not in a control task with unimodal visual or tactile judgements. These results demonstrate, for the first time, significant spatial and postural modulations of crossmodal congruency effects in a non-spatial discrimination task.
Multisensory; Crossmodal congruency; Crossed hands; Somatosensation; Vision
When spatial attention is directed towards a particular stimulus, increased activity is commonly observed in corresponding locations of the visual cortex. Does this attentional increase in activity indicate improved processing of all features contained within the attended stimulus, or might spatial attention selectively enhance the features relevant to the observer’s task? We used fMRI decoding methods to measure the strength of orientation-selective activity patterns in the human visual cortex while subjects performed either an orientation or contrast discrimination task, involving one of two laterally presented gratings. Greater overall BOLD activation with spatial attention was observed in areas V1-V4 for both tasks. However, multivariate pattern analysis revealed that orientation-selective responses were enhanced by attention only when orientation was the task-relevant feature, and not when the grating’s contrast had to be attended. In a second experiment, observers discriminated the orientation or color of a specific lateral grating. Here, orientation-selective responses were enhanced in both tasks but color-selective responses were enhanced only when color was task-relevant. In both experiments, task-specific enhancement of feature-selective activity was not confined to the attended stimulus location, but instead spread to other locations in the visual field, suggesting the concurrent involvement of a global feature-based attentional mechanism. These results suggest that attention can be remarkably selective in its ability to enhance particular task-relevant features, and further reveal that increases in overall BOLD amplitude are not necessarily accompanied by improved processing of stimulus information.
fMRI; human; vision; attention; cortex; orientation
We investigated the influence of hand posture on handedness recognition, varying the spatial correspondence between stimulus and response in a modified Simon task. Drawings of left and right hands were displayed in either a back or a palm view while participants discriminated stimulus handedness by pressing a left or right key with their hands resting in either a prone or a supine posture. As a control, subjects performed a regular Simon task using simple geometric shapes as stimuli. Results showed that when the hands were in a prone posture, spatially corresponding trials (i.e., stimulus and response located on the same side) were faster than non-corresponding trials (i.e., stimulus and response on opposite sides). In contrast, for the supine posture, there was no difference between corresponding and non-corresponding trials. Control experiments with the regular Simon task showed that the posture of the responding hand had no influence on performance. When the stimulus is the drawing of a hand, however, the posture of the responding hand affects the spatial correspondence effect, because response location is coded with respect to multiple reference points, including the body of the hand.
handedness recognition; Simon effect; hand posture; motor imagery
Automatic processing of irrelevant stimulus dimensions has been demonstrated in a variety of tasks. Previous studies have shown that conflict between relevant and irrelevant dimensions can be reduced when a feature of the irrelevant dimension is repeated. The specific level at which the automatic process is suppressed (e.g., perceptual repetition, response repetition), however, is less understood. In the current experiment we used the numerical Stroop paradigm, in which the processing of irrelevant numerical values of 2 digits interferes with the processing of their physical size, to pinpoint the precise level of the suppression. Using a sequential analysis, we dissociated perceptual repetition from response repetition of the relevant and irrelevant dimension. Our analyses of reaction times, error rates, and diffusion modeling revealed that the congruity effect is significantly reduced or even absent when the response sequence of the irrelevant dimension, rather than the numerical value or the physical size, is repeated. These results suggest that automatic activation of the irrelevant dimension is suppressed at the response level. The current results shed light on the level of interaction between numerical magnitude and physical size as well as the effect of variability of responses and stimuli on automatic processing.
automaticity; congruity effect; diffusion modeling; executive control; inhibition
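The diffusion modeling mentioned in the abstract above can be illustrated with a minimal random-walk simulation. The mapping of congruency onto drift rate, and all parameter values, are hypothetical illustration choices, not estimates from the study.

```python
import random

def diffusion_trial(drift, threshold=1.0, noise=0.1, dt=0.001, rng=None):
    """Simulate one drift-diffusion trial; return decision time in seconds.

    Evidence accumulates from 0 toward +threshold (correct response) or
    -threshold (error); drift is the mean accumulation rate and noise the
    within-trial variability. Parameters are illustrative only.
    """
    rng = rng or random.Random()
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold:
        evidence += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return t

rng = random.Random(42)
# Hypothetical mapping: congruent trials get a higher drift rate than
# incongruent trials, a common simplified way to model congruity effects.
congruent_rts = [diffusion_trial(2.0, rng=rng) for _ in range(200)]
incongruent_rts = [diffusion_trial(1.0, rng=rng) for _ in range(200)]
mean_congruent = sum(congruent_rts) / len(congruent_rts)
mean_incongruent = sum(incongruent_rts) / len(incongruent_rts)
# Higher drift yields faster mean decision times on congruent trials.
```

In fitted diffusion models, a reduced congruity effect after response repetition would show up as a smaller drift-rate difference between congruent and incongruent trials, which is the kind of dissociation the sequential analyses above test for.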
Predictions concerning development, interrelations, and possible independence of working memory, inhibition, and cognitive flexibility were tested in 325 participants (roughly 30 per age from 4 to 13 years and young adults; 50% female). All were tested on the same computerized battery, designed to manipulate memory and inhibition independently and together, in steady state (single-task blocks) and during task-switching, and to be appropriate over the lifespan and for neuroimaging (fMRI). This is one of the first studies, in children or adults, to explore: (a) how memory requirements interact with spatial compatibility and (b) spatial incompatibility effects both with stimulus-specific rules (Simon task) and with higher-level, conceptual rules. Even the youngest children could hold information in mind, inhibit a dominant response, and combine those as long as the inhibition required was steady-state and the rules remained constant. Cognitive flexibility (switching between rules), even with memory demands minimized, showed a longer developmental progression, with 13-year-olds still not at adult levels. Effects elicited only in Mixed blocks with adults were found in young children even in single-task blocks; while young children could exercise inhibition in steady state it exacted a cost not seen in adults, who (unlike young children) seemed to re-set their default response when inhibition of the same tendency was required throughout a block. The costs associated with manipulations of inhibition were greater in young children while the costs associated with increasing memory demands were greater in adults. Effects seen only in RT in adults were seen primarily in accuracy in young children. Adults slowed down on difficult trials to preserve accuracy; but the youngest children were impulsive; their RT remained more constant but at an accuracy cost on difficult trials. 
Contrary to our predictions of independence between memory and inhibition, when matched for difficulty RT correlations between these were as high as 0.8, although accuracy correlations were less than half that. Spatial incompatibility effects and global and local switch costs were evident in children and adults, differing only in size. Other effects (e.g., asymmetric switch costs and the interaction of switching rules and switching response-sites) differed fundamentally over age.
Task switching; Inhibition; Working memory; Simon effect; Asymmetric switch costs; Global and local switch costs; Stimulus-response compatibility; Development; Children; Frontal lobe
When the senses deliver conflicting information, vision dominates spatial processing, and audition dominates temporal processing. We asked whether this sensory specialization results in cross-modal encoding of unisensory input into the task-appropriate modality. Specifically, we investigated whether visually portrayed temporal structure receives automatic, obligatory encoding in the auditory domain. In three experiments, observers judged whether the changes in two successive visual sequences followed the same or different rhythms. We assessed temporal representations by measuring the extent to which both task-irrelevant auditory information and task-irrelevant visual information interfered with rhythm discrimination. Incongruent auditory information significantly disrupted task performance, particularly when presented during encoding; by contrast, varying the nature of the rhythm-depicting visual changes had minimal impact on performance. Evidently, the perceptual system automatically and obligatorily abstracts temporal structure from its visual form and represents this structure using an auditory code, resulting in the experience of “hearing visual rhythms.”
In the standard Simon task, participants carry out spatially defined responses to non-spatial stimulus attributes. Responses are typically faster when stimulus location and response location correspond. This effect disappears when a participant responds to only one of the two stimuli and reappears when another person carries out the other response. This social Simon effect (SSE) has been considered as providing an index for action co-representation. Here, we investigated whether joint-action effects in a social Simon task involve mechanisms of action co-representation, as measured by the amount of incorporation of another person's action. We combined an auditory social Simon task with a manipulation of the sense of ownership of another person's hand (rubber hand illusion). If the SSE is established by action co-representation, then the incorporation of the other person's hand into one's own body representation should increase the SSE (synchronous > asynchronous stroking). However, we found the SSE to be smaller in the synchronous as compared to the asynchronous stroking condition (Experiment 1), suggesting that the SSE reflects the separation of spatial action events rather than the integration of the other person's action. This effect is independent of the active involvement (Experiment 2) and the presence of another person (Experiment 3). These findings suggest that the “social” Simon effect is not really social in nature but is established when an interaction partner produces events that serve as a spatial reference for one's own actions.
joint action; social Simon; social cognition; rubber hand illusion
Conflicts in spatial stimulus–response tasks occur when the task-relevant feature of a stimulus implies a response toward a location that does not match the location of stimulus presentation. This conflict leads to increased error rates and longer reaction times, a phenomenon termed the Simon effect. A dual-route model of stimulus-feature processing (automatic and intentional) has been proposed, predicting response conflicts when the two routes are incongruent. Although there is evidence that the prefrontal cortex, notably the anterior cingulate cortex (ACC), plays a crucial role in conflict processing, the neuronal basis of the dual-route architecture is still unknown. In this study, we pursue a novel approach using positron emission tomography (PET) to identify relevant brain areas in a rat model of an auditory Simon task, a neuropsychological interference task commonly used to study conflict processing in humans. For combination with PET we used the metabolic tracer [18F]fluorodeoxyglucose, which accumulates in metabolically active brain cells during the behavioral task. Brain areas involved in conflict processing are expected to be activated when automatic and intentional route processing lead to different responses (dual-route model). Analysis of the PET data revealed specific activation patterns for different task settings consistent with the dual-route model as established for response-conflict processing. The rat motor cortex (M1) may be part of the automatic route or involved in its facilitation, while the premotor cortex (M2), prelimbic cortex, and ACC seemed essential for inhibiting the incorrect, automatic response, indicating conflict-monitoring functions. Our findings, and the remarkable similarities to the pattern of activated regions reported during conflict processing in humans, demonstrate that our rodent model opens novel opportunities to investigate the anatomical basis of conflict processing and dual-route architecture.
prefrontal cortex; rodent model; cognitive conflict; Simon task
The ability to select and integrate relevant information in the presence of competing irrelevant information can be enhanced by advance information to direct attention and guide response selection. Attentional preparation can reduce perceptual and response conflict, yet little is known about the neural source of conflict resolution, whether it is resolved by modulating neural responses for perceptual selection to emphasize task-relevant information or for action selection to inhibit pre-potent responses to interfering information. We manipulated perceptual information that either matched or did not match the relevant color feature of an upcoming Stroop stimulus and recorded hemodynamic brain responses to these events. Longer reaction times to incongruent than congruent color-word Stroop stimuli indicated conflict; however, conflict was even greater when a color cue correctly predicted the Stroop target’s color (match) than when it did not (nonmatch). A predominantly anterior network was activated for Stroop-match and a predominantly posterior network was activated for Stroop-nonmatch. Thus, when a stimulus feature did not match the expected feature, a perceptually-driven posterior attention system was engaged, whereas when interfering, automatically-processed semantic information required inhibition of pre-potent responses, an action-driven anterior control system was engaged. These findings show a double dissociation of anterior and posterior cortical systems engaging in different types of control for perceptually-driven and action-driven conflict resolution.
Attention; Conflict; Control; fMRI; Perceptual Cueing
The current study examined selective encoding in visual working memory by systematically investigating interference from task-irrelevant features. The stimuli were objects defined by three features (color, shape, and location), and during a delay period, any of the features could switch between two objects. Additionally, single- and whole-probe trials were randomized within experimental blocks to investigate effects of memory retrieval. A series of relevant-feature switch detection tasks, where one feature was task-irrelevant, showed that interference from the task-irrelevant feature was only observed in the color-shape task, suggesting that color and shape information could be successfully filtered out, but location information could not, even when location was a task-irrelevant feature. Therefore, although location information is added to object representations independent of task demands in a relatively automatic manner, other features (e.g., color, shape) can be flexibly added to object representations.
In what form are multiple spatial locations represented in working memory? The current study revealed that people often maintain the configural properties (inter-item relationships) of visuospatial stimuli even when this information is explicitly task-irrelevant. However, results also indicate that the voluntary allocation of selective attention prior to stimulus presentation, as well as feature-based perceptual segregation of relevant from irrelevant stimuli, can eliminate the influences of stimulus configuration on location change detection performance. In contrast, voluntary attention cued to the relevant target location following presentation of the stimulus array failed to attenuate these influences. Thus, whereas voluntary selective attention can isolate or prevent the encoding of irrelevant stimulus locations and configural properties, people, perhaps due to limitations in attentional resources, reliably fail to isolate or suppress configural representations that have been encoded into working memory.
Covert attention, the selective processing of visual information in the absence of eye movements, improves behavioral performance. Here, we show that attention, both exogenous (involuntary) and endogenous (voluntary), can affect performance by contrast or response gain changes, depending on the stimulus size and the relative size of the attention field. These two variables were manipulated in a cueing task while varying stimulus contrast. We observed a change in behavioral performance consonant with a change in contrast gain for small stimuli paired with spatial uncertainty, but a change in response gain for large stimuli presented at one location (no uncertainty) and surrounded by irrelevant flanking distracters. A complementary neuroimaging experiment revealed that observers’ attention field was wider with than without spatial uncertainty. Our results support key predictions of the normalization model of attention, and reconcile previous, seemingly contradictory, findings on the effects of visual attention.
Several studies have reported optimal population decoding of sensory responses in two-alternative visual discrimination tasks. Such decoding involves integrating noisy neural responses into a more reliable representation of the likelihood that the stimuli under consideration evoked the observed responses. Importantly, an ideal observer must be able to evaluate likelihood with high precision and only consider the likelihood of the two relevant stimuli involved in the discrimination task. We report a new perceptual bias suggesting that observers read out the likelihood representation with remarkably low precision when discriminating grating spatial frequencies. Using spectrally filtered noise, we induced an asymmetry in the likelihood function of spatial frequency. This manipulation mainly affects the likelihood of spatial frequencies that are irrelevant to the task at hand. Nevertheless, we find a significant shift in perceived grating frequency, indicating that observers evaluate likelihoods of a broad range of irrelevant frequencies and discard prior knowledge of stimulus alternatives when performing two-alternative discrimination.
An attractive view on human information processing proposes that inference problems are dealt with in a statistically optimal fashion. This hypothesis can explain aspects of perception, movement planning, cognition and decision making. In the present study, I use a new psychophysical paradigm that reveals surprisingly suboptimal perceptual decision making. Observers discriminate between two sinusoidal gratings of a different spatial frequency. Making use of visual noise, I induce an asymmetry in neural population responses to the gratings and find this asymmetry to effectively bias perceptual decision making. A simple ideal observer model, uninformed about the presence of visual noise but only considering the two grating spatial frequencies relevant to the task at hand, manages to avoid such a bias. I conclude that observers are limited in their ability to make use of prior knowledge of relevant visual features when performing this task. These results are in line with a growing number of findings suggesting that near-optimal decoders, although straightforward to implement and achieving near-maximal performance, consistently overestimate empirical performance in simple perceptual tasks.
Diagnostic features of emotional expressions are differentially distributed across the face. The current study examined whether these diagnostic features are preferentially attended to even when they are irrelevant for the task at hand or when faces appear at different locations in the visual field. To this aim, fearful, happy and neutral faces were presented to healthy individuals in two experiments while measuring eye movements. In Experiment 1, participants had to accomplish an emotion classification, a gender discrimination or a passive viewing task. To differentiate fast, potentially reflexive, eye movements from a more elaborate scanning of faces, stimuli were either presented for 150 or 2000 ms. In Experiment 2, similar faces were presented at different spatial positions to rule out the possibility that eye movements only reflect a general bias for certain visual field locations. In both experiments, participants fixated the eye region much longer than any other region in the face. Furthermore, the eye region was attended to more pronouncedly when fearful or neutral faces were shown whereas more attention was directed toward the mouth of happy facial expressions. Since these results were similar across the other experimental manipulations, they indicate that diagnostic features of emotional expressions are preferentially processed irrespective of task demands and spatial locations. Saliency analyses revealed that a computational model of bottom-up visual attention could not explain these results. Furthermore, as these gaze preferences were evident very early after stimulus onset and occurred even when saccades did not allow for extracting further information from these stimuli, they may reflect a preattentive mechanism that automatically detects relevant facial features in the visual field and facilitates the orientation of attention towards them. 
This mechanism might crucially depend on amygdala functioning and it is potentially impaired in a number of clinical conditions such as autism or social anxiety disorders.
We report an experiment that compared two explanations for the effect of congruency between a word's on-screen spatial position and its meaning. On one account, congruency is explained by the match between position and a mental simulation of meaning. Alternatively, congruency is explained by the polarity alignment principle. To distinguish between these accounts, we presented the same object names (e.g., shark, helicopter) in a sky-decision task or an ocean-decision task, such that response polarity and typical location were disentangled. Sky-decision responses were faster to words at the top of the screen than to words at the bottom, but the reverse was found for ocean-decision responses. These results are problematic for the polarity principle, and support the claim that spatial attention is directed by mental simulation of the task-relevant conceptual dimension.
spatial attention; concepts; polarity alignment; grounded cognition
Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In Experiments 1 and 2, the cue was an isoluminant color change and participants generated an eye movement to the target object. In Experiment 1, responses were slower when the spoken word referred to the distractor object than when it referred to the target object. In Experiment 2, responses were slower when the spoken word referred to a distractor object than when it referred to an object not in the display. In Experiment 3, the cue was a small shift in location of the target object and participants indicated the direction of the shift. Responses were slowest when the word referred to the distractor object, faster when the word did not have a referent, and fastest when the word referred to the target object. Taken together, the results demonstrate that referents of spoken words capture attention.
visual attention; attentional capture; eye movements; lexical processing; visual-world paradigm
Spatial properties of stimuli are sometimes encoded even when incidental to the demands of a particular learning task. Incidental encoding of spatial information may interfere with learning by (i) causing a failure to generalize learning between trials in which a cue is presented in different spatial locations and (ii) adding common spatial features to stimuli that predict different outcomes. Hippocampal lesions have been found to facilitate acquisition of certain tasks. This facilitation may occur because hippocampal lesions impair incidental encoding of spatial information that interferes with learning. To test this prediction, mice with lesions of the hippocampus were trained on appetitive simple simultaneous discrimination tasks using inserts in the goal arms of a T-maze. Hippocampal-lesioned mice were facilitated at learning the discriminations, but they were sensitive to changes in spatial information in a manner similar to control mice. A second experiment found that both control and hippocampal-lesioned mice showed equivalent incidental encoding of egocentric spatial properties of the inserts, but neither group encoded the allocentric information. These results demonstrate that mice show incidental encoding of egocentric spatial information that decreases the ability to solve simultaneous discrimination tasks. The normal egocentric spatial encoding in hippocampal-lesioned mice contradicts theories of hippocampal function that hold that the hippocampus is necessary for incidental learning per se, or is required for modulating stimulus representations based on the relevancy of information. The facilitated learning suggests that hippocampal lesions can enhance learning of the same qualitative information as that acquired by control mice.
spatial learning; memory; mouse; lesion; conjunctive representations
Our representation of the visual world can be modulated by spatially specific attentional biases that depend flexibly on task goals. We compared searching for task-relevant features in perceived versus remembered objects. When searching perceptual input, selected task-relevant and suppressed task-irrelevant features elicited contrasting spatiotopic ERP effects, despite being perceptually identical. This was also true when participants searched a memory array, suggesting that memory had retained the spatial organization of the original perceptual input and that this representation could be modulated in a spatially specific fashion. However, task-relevant selection and task-irrelevant suppression effects were of opposite polarity when searching remembered compared to perceived objects. We suggest that this surprising result stems from the nature of feature- and object-based representations when stored in visual short-term memory. Once stored, features are integrated into objects, meaning that spatially specific selection mechanisms must operate upon objects rather than upon specific feature-level representations.
spatial attention; visual short-term memory; working memory; ERPs; electrophysiology; task-set control
Deficits in processing spatial information have been observed in clinical populations with abnormalities within the dopamine (DA) system. Because psychostimulants such as methamphetamine (MA) are particularly neurotoxic to the dopaminergic system, it was of interest to examine the performance of MA-dependent individuals on a task of spatial attention.
Fifty-one MA-dependent subjects and 22 age-matched, non-substance-abusing control subjects were tested on a Spatial Stroop attention test. Magnetic resonance spectroscopy (MRS) imaging data were analyzed from 32 MA abusers and 13 controls.
No group differences in response time or accuracy emerged on the behavioral task, with both groups exhibiting equivalent slowing when the word meaning and the spatial location of the word were in conflict. MRS imaging data from the MA abusers revealed a strong inverse correlation between NAA/Cr ratios in the primary visual cortex (PVC) and spatial interference (p = 0.0001). Moderate inverse correlations were also seen in the anterior cingulate cortex (ACC) (p = 0.02). No significant correlations were observed in the controls, perhaps due to the small sample of imaging data available (n = 13).
The strong correlation between spatial conflict suppression and NAA/Cr levels within the PVC in the MA-dependent individuals suggests that preserved neuronal integrity within the PVC of stimulant abusers may modulate cognitive mechanisms that process implicit spatial information.
visual cortex; methamphetamine; MRS; spatial attention; Stroop; substance abuse