The Simon effect refers to the performance (response time and accuracy) advantage for responses that spatially correspond to the task-irrelevant location of a stimulus. It has been attributed to a natural tendency to respond toward the source of stimulation. When location is task-relevant, however, and responses are intentionally directed away from (incompatible) or toward (compatible) the source of stimulation, there is also an advantage for spatially compatible responses over spatially incompatible responses. Interestingly, a number of studies have demonstrated a reversed, or reduced, Simon effect following practice with a spatial incompatibility task. One interpretation of this finding is that practicing a spatial incompatibility task disables the natural tendency to respond toward stimuli. Here, the temporal dynamics of this stimulus-response (S-R) transfer were explored with speed-accuracy trade-offs (SATs). All experiments used a mixed-task paradigm in which Simon and spatial compatibility/incompatibility tasks were interleaved across blocks of trials. In general, bidirectional S-R transfer was observed: while the spatial incompatibility task influenced the Simon effect, the task-relevant S-R mapping of the Simon task also had a small impact on congruency effects within the spatial compatibility and incompatibility tasks. These effects were generally greater when the task contexts were similar. Moreover, the SAT analysis of performance in the Simon task demonstrated that the tendency to respond to the location of the stimulus was not eliminated by practice with the spatial incompatibility task. Rather, S-R transfer from the spatial incompatibility task appeared to partially mask the natural tendency to respond toward the source of stimulation with a conflicting inclination to respond away from it. These findings support the use of SAT methodology to quantitatively describe rapid response tendencies.
speed-accuracy trade-off; stimulus-response compatibility; Simon effect; spatial compatibility; S-R associations
Stimulus position is coded even if it is task-irrelevant, leading to faster response times when the stimulus and response locations are compatible (spatial Stimulus–Response Compatibility; spatial SRC). Faster responses are also found when the handle of a visual object and the response hand are located on the same side; this is known as the affordance effect (AE). Two contrasting accounts of the AE have classically been proposed. One focuses on the recruitment of appropriate grasping actions directed at the object handle, and the other on the asymmetry of the object shape, which in turn would cause a handle-hand correspondence effect (CE). In order to disentangle these two accounts, we investigated the possible transfer of practice from a spatial SRC task executed with an incompatible S–R mapping to a subsequent affordance task in which objects with either an intact or a broken handle were used. The idea was that using objects with broken handles should prevent the recruitment of motor information related to object grasping, whereas practice transfer should prevent object asymmetry from driving the handle-hand CE. Three experiments were carried out. In Experiment 1, participants underwent an affordance task in which common graspable objects with intact or broken handles were used. In Experiments 2 and 3, the affordance task was preceded by a spatial SRC task in which an incompatible S–R mapping was used. Inter-task delays of 5 or 30 min were employed to assess the duration of the transfer effect. In Experiment 2, objects with intact handles were presented, whereas in Experiment 3 the same objects had broken handles. Although objects with intact and broken handles both elicited a handle-hand CE in Experiment 1, practice transfer from an incompatible spatial SRC task to the affordance task was found in Experiment 3 (broken-handle objects), but not in Experiment 2 (intact-handle objects).
Overall, this pattern of results indicates that both object asymmetry and the activation of motor information contribute to the generation of the handle-hand CE, and that the handle AE cannot be reduced to an SRC effect.
affordance effect; Simon effect; spatial S–R compatibility; transfer of practice; intact and broken handle
Two experiments examined the effects of mixed stimulus-response mappings and tasks in older and younger adults. In Experiment 1, participants performed two-choice spatial reaction tasks with blocks of pure and mixed compatible and incompatible mappings. In Experiment 2, a compatible or incompatible mapping was mixed with a Simon task for which the mapping of stimulus color to location was relevant and stimulus location irrelevant. In both experiments, older adults showed larger mixing costs and larger compatibility effects than younger adults, with the differences particularly pronounced in Experiment 1 when location mappings were mixed. In mixed conditions, when stimulus location was relevant, older adults benefited more than younger adults from complete repetition of the task and stimulus from the preceding trial. When stimulus location was irrelevant, the benefit of complete repetition did not differ reliably between age groups. The results suggest that the age-related deficit associated with mixing mappings and tasks is primarily due to older adults having more difficulty separating task sets that activate conflicting response codes.
Aging; Attention; Response Selection
Responses are faster when the side of the stimulus and the side of the response correspond than when they do not, even if stimulus location is irrelevant to the task at hand: the correspondence effect, spatial compatibility effect, or Simon effect. Generally, it is assumed that an automatically generated spatial code is responsible for this effect, but the precise mechanism underlying the formation of this code is still under dispute. Two major alternatives have been proposed: the referential-coding account, which can be subdivided into a static version and an attention-centered version, and the attention-shift account. These accounts make clear-cut predictions for attentional cuing experiments. The static version of the referential-coding account predicts a Simon effect irrespective of attentional cuing, whereas its attention-centered version and the attention-shift account predict a decreased Simon effect on validly as opposed to invalidly cued trials. However, results from previous studies are equivocal regarding the effects of attentional cuing on the Simon effect. We argue here that attentional cuing reliably modulates the Simon effect if certain crucial experimental conditions, mostly relevant for optimizing attentional allocation, are met. Furthermore, we propose that the Simon effect may be better understood from the perspective of supra-modal spatial attention, thereby providing an explanation for observed discrepancies in the literature.
Several studies have reported optimal population decoding of sensory responses in two-alternative visual discrimination tasks. Such decoding involves integrating noisy neural responses into a more reliable representation of the likelihood that the stimuli under consideration evoked the observed responses. Importantly, an ideal observer must be able to evaluate likelihood with high precision and only consider the likelihood of the two relevant stimuli involved in the discrimination task. We report a new perceptual bias suggesting that observers read out the likelihood representation with remarkably low precision when discriminating grating spatial frequencies. Using spectrally filtered noise, we induced an asymmetry in the likelihood function of spatial frequency. This manipulation mainly affects the likelihood of spatial frequencies that are irrelevant to the task at hand. Nevertheless, we find a significant shift in perceived grating frequency, indicating that observers evaluate likelihoods of a broad range of irrelevant frequencies and discard prior knowledge of stimulus alternatives when performing two-alternative discrimination.
An attractive view on human information processing proposes that inference problems are dealt with in a statistically optimal fashion. This hypothesis can explain aspects of perception, movement planning, cognition and decision making. In the present study, I use a new psychophysical paradigm that reveals surprisingly suboptimal perceptual decision making. Observers discriminate between two sinusoidal gratings of a different spatial frequency. Making use of visual noise, I induce an asymmetry in neural population responses to the gratings and find this asymmetry to effectively bias perceptual decision making. A simple ideal observer model, uninformed about the presence of visual noise but only considering the two grating spatial frequencies relevant to the task at hand, manages to avoid such a bias. I conclude that observers are limited in their ability to make use of prior knowledge of relevant visual features when performing this task. These results are in line with a growing number of findings suggesting that near-optimal decoders, although straightforward to implement and achieving near-maximal performance, consistently overestimate empirical performance in simple perceptual tasks.
Researchers have previously suggested a working memory (WM) account of spatial codes; based on this suggestion, the present study carried out three experiments to investigate how the representation of the task-relevant attribute (verbal or visual) in the typical Simon task affects the Simon effect. Experiment 1 compared the Simon effect between a between-category and a within-category color condition, which required subjects to discriminate between red and blue stimuli (presumed to be represented by verbal WM codes because the colors were easy and fast to name verbally) and between two similar green stimuli (presumed to be represented by visual WM codes because the colors were hard and time-consuming to name verbally), respectively. The results revealed a reliable Simon effect only in the between-category condition. Experiment 2 assessed the Simon effect by requiring subjects to discriminate between two different isosceles trapezoids (within-category shapes) and between an isosceles trapezoid and a rectangle (between-category shapes); the results replicated and extended the findings of Experiment 1. In Experiment 3, subjects performed both tasks from Experiment 1: in Experiment 3A, the between-category task preceded the within-category task; in Experiment 3B, the task order was reversed. The results showed a reliable Simon effect when subjects represented the task-relevant stimulus attributes by verbal WM encoding. In addition, the response time (RT) distribution analysis for both the between- and within-category conditions of Experiments 3A and 3B showed a decreasing Simon effect as RTs lengthened. Altogether, although the present results are consistent with the temporal coding account, we propose that the Simon effect also depends on the verbal WM representation of the task-relevant stimulus attribute.
In the present study, we investigated cue-induced preparation in a Simon task and measured electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) data in two within-subjects sessions. Cues informed either about (1) the upcoming spatial stimulus-response compatibility (rule cues) or (2) the stimulus location (position cues), or (3) were non-informative. Only rule cues allowed anticipating the upcoming compatibility condition. Position cues allowed anticipation of the upcoming location of the Simon stimulus but not its compatibility condition. Rule cues elicited the fastest and most accurate performance for both compatible and incompatible trials. The contingent negative variation (CNV) in the event-related potential (ERP) of the cue-target interval, an index of anticipatory preparation, was magnified after rule cues. The N2 in the post-target ERP, a measure of online action control, was reduced in Simon trials after rule cues. Although compatible trials were faster than incompatible trials in all cue conditions, only non-informative cues revealed a compatibility effect in additional indicators of Simon task conflict, such as accuracy and the N2. We thus conclude that rule cues induced an anticipatory re-coding of the Simon task that no longer involved cognitive conflict. fMRI revealed that rule cues yielded more activation of the left rostral, dorsal, and ventral prefrontal cortex, as well as the pre-SMA, compared with position and non-informative cues. Pre-SMA and ventrolateral prefrontal activation after rule cues correlated with the effective use of rule cues in behavioral performance. Position cues induced a smaller CNV effect and exhibited less prefrontal and pre-SMA contributions in fMRI. Our data point to the importance of disentangling different anticipatory adjustments, which may also include the prevention of upcoming conflict via task re-coding.
cognitive conflict; cueing; EEG; fMRI; pre-SMA; Simon task; anticipation; cognitive control
Aging is associated with delayed processing in choice reaction time (CRT) tasks, but the processing stages most impacted by aging have not been clearly identified. Here, we analyzed CRT latencies in a computerized serial visual feature-conjunction task. Participants responded to a target letter (probability 40%) by pressing one mouse button, and responded to distractor letters differing either in color, shape, or both features from the target (probabilities 20% each) by pressing the other mouse button. Stimuli were presented randomly to the left and right visual fields and stimulus onset asynchronies (SOAs) were adaptively reduced following correct responses using a staircase procedure. In Experiment 1, we tested 1466 participants who ranged in age from 18 to 65 years. CRT latencies increased significantly with age (r = 0.47, 2.80 ms/year). Central processing time (CPT), isolated by subtracting simple reaction times (SRT) (obtained in a companion experiment performed on the same day) from CRT latencies, accounted for more than 80% of age-related CRT slowing, with most of the remaining increase in latency due to slowed motor responses. Participants were faster and more accurate when the stimulus location was spatially compatible with the mouse button used for responding, and this effect increased slightly with age. Participants took longer to respond to distractors with target color or shape than to distractors with no target features. However, the additional time needed to discriminate the more target-like distractors did not increase with age. In Experiment 2, we replicated the findings of Experiment 1 in a second population of 178 participants (ages 18–82 years). CRT latencies did not differ significantly in the two experiments, and similar effects of age, distractor similarity, and stimulus-response spatial compatibility were found. 
The results suggest that the age-related slowing in visual CRT latencies is largely due to delays in response selection and production.
aging; timing; processing speed; motor; handedness; hemisphere; replication; executive function
The role of body orientation in the orienting and allocation of social attention was examined using an adapted Simon paradigm. Participants categorized the facial expression of forward-facing, computer-generated human figures by pressing one of two response keys located to the left or right of the observer's body midline, while the orientation of the stimulus figure's body (trunk, arms, and legs), the task-irrelevant feature of interest, was manipulated (oriented toward the left or right visual hemifield) with respect to the spatial location of the required response. We found that when the orientation of the body was compatible with the required response location, responses were slower relative to when body orientation was incompatible with the response location. In line with a model put forward by Hietanen (1999), this reverse compatibility effect suggests that body orientation is automatically processed into a directional spatial code, but that this code is based on an integration of head and body orientation within an allocentric frame of reference. Moreover, we argue that this code may be derived from the motion information implied in the image of a figure when head and body orientation are incongruent. Our results have implications for understanding the nature of the information that affects the allocation of attention for social orienting.
social attention; spatial attention; Simon task; head-body orientation; implied motion
The Simon effect, that is, the advantage of spatial correspondence between stimulus and response locations when stimulus location is a task-irrelevant dimension, occurs even when the task is performed together by two participants, each performing a go/no-go task. Previous studies showed that this joint Simon effect, considered by some authors a measure of self-other integration, does not emerge when co-actors are required to compete during task performance. The present study investigated whether, and for how long, competition experienced during joint performance of one task can affect performance in a subsequent joint Simon task. In two experiments, pairs of participants performed a joint Simon task before and after jointly performing an unrelated non-spatial task (the Eriksen flanker task). In Experiment 1, participants always performed the joint Simon task under neutral instructions, before and after performing the joint flanker task in which they were explicitly required either to cooperate with (cooperative condition) or to compete against (competitive condition) a co-actor. In Experiment 2, they were required to compete during the joint flanker task and to cooperate during the subsequent joint Simon task. Competition experienced in one task affected the way the subsequent joint task was performed, as revealed by the absence of the joint Simon effect, even though participants were not required to compete during the Simon task (Experiment 1). However, prior competition no longer affected subsequent performance if a new goal creating positive interdependence between the two agents was introduced (Experiment 2). These results suggest that the emergence of the joint Simon effect is significantly influenced by how the goals of the co-acting individuals are related, with the effect of competition extending beyond the specific competitive setting and affecting subsequent interactions.
This study investigated neural processing interactions during Stroop interference by varying the temporal separation of relevant and irrelevant features of congruent, neutral, and incongruent colored-bar/color-word stimulus components. High-density event-related potentials (ERPs) and behavioral performance were measured as participants reported the bar color as quickly as possible, while ignoring the color words. The task-irrelevant color words could appear at 1 of 5 stimulus onset asynchronies (SOAs) relative to the task-relevant bar-color occurrence: −200 or −100 ms before, +100 or +200 ms after, or simultaneously. Incongruent relative to congruent presentations elicited slower reaction times and higher error rates (with neutral in between), and ERP difference waves containing both an early, negative-polarity, central-parietal deflection, and a later, more left-sided, positive-polarity component. These congruency-related differences interacted with SOA, showing the greatest behavioral and electrophysiological effects when irrelevant stimulus information preceded the task-relevant target and reduced effects when the irrelevant information followed the relevant target. We interpret these data as reflecting 2 separate processes: 1) a ‘priming influence’ that enhances the magnitude of conflict-related facilitation and conflict-related interference when a task-relevant target is preceded by an irrelevant distractor; and 2) a reduced ‘backward influence’ of stimulus conflict when the irrelevant distractor information follows the task-relevant target.
conflict processing; event-related potentials (ERPs); incongruency; stimulus onset asynchrony (SOA); Stroop task
Cognitive control in response compatibility tasks is modulated by the task context. Two types of contextual modulations have been demonstrated: sustained (block-wise) and transient (trial-by-trial). Recent research suggests that these modulations have different underlying mechanisms. This study presents new evidence supporting this claim by comparing false alarm (FA) responses on no-go trials of the Simon task between sustained and transient contexts. In Experiment 1, the sustained context was manipulated so that a block included a larger number of incongruent trials. Results showed that participants made more FA responses with the hand opposite to the stimulus location. This suggests the generation of a response bias in which the task-irrelevant location information is utilized in a reversed manner (i.e., responding with the right hand to a stimulus presented on the left side and vice versa). Experiment 2 examined the effect of the transient context and found that the overall FA rate was lower when a no-go trial was preceded by an incongruent trial than by a congruent trial, whereas the response bias shown in Experiment 1 was not observed. This suggests that the transient conflict context enhances inhibition of the task-irrelevant process but does not make the task-irrelevant information actively usable. Based on these results, we propose two types of cognitive control modulations as adaptive behaviors: response biasing based on utilization of task-irrelevant information under a sustained conflict context, and transient enhancement of inhibition of the task-irrelevant process based on online conflict monitoring.
The interaction of space and time affects the perception of extents: (1) the longer the exposure duration, the longer the line length is perceived to be, and vice versa; (2) the shorter the line length, the shorter the exposure duration is perceived to be. Previous studies have shown that space-time interactions in human vision are asymmetrical: spatial cognition has a larger effect on temporal cognition than vice versa (Merritt et al., 2010). What makes the interactions asymmetrical? In this study, participants were asked either to judge the exposure duration of lines that differed in length or to judge the length of lines presented for different durations; that is, to judge a task-relevant stimulus extent that also varied along a task-irrelevant stimulus extent. Paired spatial and temporal tasks, in which the ranges of task-relevant and task-irrelevant stimulus values were common, were conducted. We hypothesized that an imbalance in the saliency of spatial and temporal information causes the asymmetrical space-time interaction. To assess saliency, task difficulty was rated: if the saliency of the relevant stimuli is high, the difficulty of the discrimination task should be low, and vice versa. Within a pair of tasks, the saliency of the irrelevant stimuli in one task should be reflected in the difficulty of the other task: if the saliency of the irrelevant stimuli is high, the difficulty of the paired task should be low, and vice versa. The results support our hypothesis: spatial cognition asymmetrically affected temporal cognition when the difficulty of the temporal task was significantly higher than that of the spatial task.
space-time interaction; temporal cognition; spatial cognition; saliency; human vision; task difficulty
For infants, the first problem in learning a word is to map the word to its referent; a second problem is to remember that mapping when the word and/or referent are again encountered. Recent infant studies suggest that spatial location plays a key role in how infants solve both problems. Here we provide a new theoretical model and new empirical evidence on how the body, and its momentary posture, may be central to these processes. The present study uses a name-object mapping task in which names are either encountered in the absence of their target (experiments 1–3, 6 & 7), or when their target is present but in a location previously associated with a foil (experiments 4, 5, 8 & 9). A humanoid robot model (experiments 1–5) is used to instantiate and test the hypothesis that body-centric spatial location, and thus the body's momentary posture, is used to centrally bind the multimodal features of heard names and visual objects. The robot model is shown to replicate existing infant data and then to generate novel predictions, which are tested in new infant studies (experiments 6–9). Despite spatial location being task-irrelevant in this second set of experiments, infants use body-centric spatial contingency over temporal contingency to map the name to the object. Both infants and the robot remember the name-object mapping even in new spatial locations. However, the robot model shows how this memory can emerge not from separating bodily information from the word-object mapping, as proposed in previous models of the role of space in word-object mapping, but through the body's momentary disposition in space.
It has been shown that fluid intelligence (gf) is fundamental to overcoming interference from information of a previously encoded item along a task-relevant domain. However, the biasing effect of task-irrelevant dimensions, as well as its relation to gf, remains unclear. The present study aimed to clarify these issues. Gf was assessed in 60 healthy subjects. In a different session, the same subjects performed two versions (letter-detection and spatial) of a three-back working memory task with a set of physically identical stimuli (letters) presented at different locations on the screen. In the letter-detection task, volunteers were asked to match stimuli on the basis of their identity, whereas in the spatial task they were required to match items on their locations. Cross-domain bias was manipulated by pseudorandomly inserting a match between the current and three-back items on the irrelevant domain. Our findings showed that a task-irrelevant feature of a salient stimulus can indeed bias ongoing performance. On trials in which the current and three-back items matched on the irrelevant domain, group accuracy was lower (interference). On the other hand, on trials in which the two items matched on both the relevant and irrelevant domains, performance was enhanced (facilitation). Furthermore, we demonstrated that individual differences in fluid intelligence covary with the ability to override cross-domain interference, in that higher-gf subjects performed better on interference trials than lower-gf subjects. Altogether, our findings suggest that stimulus features irrelevant to the task can affect cognitive performance along the relevant domain, and that gf plays an important role in protecting relevant memory contents from the hampering effect of such a bias.
Diagnostic features of emotional expressions are differentially distributed across the face. The current study examined whether these diagnostic features are preferentially attended to even when they are irrelevant for the task at hand or when faces appear at different locations in the visual field. To this aim, fearful, happy and neutral faces were presented to healthy individuals in two experiments while measuring eye movements. In Experiment 1, participants had to accomplish an emotion classification, a gender discrimination or a passive viewing task. To differentiate fast, potentially reflexive, eye movements from a more elaborate scanning of faces, stimuli were either presented for 150 or 2000 ms. In Experiment 2, similar faces were presented at different spatial positions to rule out the possibility that eye movements only reflect a general bias for certain visual field locations. In both experiments, participants fixated the eye region much longer than any other region in the face. Furthermore, the eye region was attended to more pronouncedly when fearful or neutral faces were shown whereas more attention was directed toward the mouth of happy facial expressions. Since these results were similar across the other experimental manipulations, they indicate that diagnostic features of emotional expressions are preferentially processed irrespective of task demands and spatial locations. Saliency analyses revealed that a computational model of bottom-up visual attention could not explain these results. Furthermore, as these gaze preferences were evident very early after stimulus onset and occurred even when saccades did not allow for extracting further information from these stimuli, they may reflect a preattentive mechanism that automatically detects relevant facial features in the visual field and facilitates the orientation of attention towards them. 
This mechanism might crucially depend on amygdala functioning and it is potentially impaired in a number of clinical conditions such as autism or social anxiety disorders.
When spatial attention is directed towards a particular stimulus, increased activity is commonly observed in corresponding locations of the visual cortex. Does this attentional increase in activity indicate improved processing of all features contained within the attended stimulus, or might spatial attention selectively enhance the features relevant to the observer’s task? We used fMRI decoding methods to measure the strength of orientation-selective activity patterns in the human visual cortex while subjects performed either an orientation or contrast discrimination task, involving one of two laterally presented gratings. Greater overall BOLD activation with spatial attention was observed in areas V1-V4 for both tasks. However, multivariate pattern analysis revealed that orientation-selective responses were enhanced by attention only when orientation was the task-relevant feature, and not when the grating’s contrast had to be attended. In a second experiment, observers discriminated the orientation or color of a specific lateral grating. Here, orientation-selective responses were enhanced in both tasks but color-selective responses were enhanced only when color was task-relevant. In both experiments, task-specific enhancement of feature-selective activity was not confined to the attended stimulus location, but instead spread to other locations in the visual field, suggesting the concurrent involvement of a global feature-based attentional mechanism. These results suggest that attention can be remarkably selective in its ability to enhance particular task-relevant features, and further reveal that increases in overall BOLD amplitude are not necessarily accompanied by improved processing of stimulus information.
fMRI; human; vision; attention; cortex; orientation
Perception and action are tightly linked: objects may be perceived not only in terms of visual features, but also in terms of possibilities for action. Previous studies showed that when a centrally located object has a salient graspable feature (e.g., a handle), it facilitates motor responses corresponding with the feature's position. However, such so-called affordance effects have been criticized as resulting from spatial compatibility effects, due to the visual asymmetry created by the graspable feature, irrespective of any affordances. In order to dissociate affordance from spatial compatibility effects, we asked participants to perform a simple reaction-time task with typically graspable and non-graspable objects with similar visual features (e.g., lollipop and stop sign). Responses were measured using either electromyography (EMG) on proximal arm muscles during reaching-like movements, or finger key-presses. In both EMG and button-press measurements, participants responded faster when the object was either presented in the same location as the responding hand or was affordable, resulting in significant and independent spatial compatibility and affordance effects, but no interaction. Furthermore, while the spatial compatibility effect was present from the earliest stages of movement preparation and throughout the different stages of movement execution, the affordance effect was restricted to the early stages of movement execution. Finally, we tested a small group of unilateral arm amputees using EMG, and found a residual spatial compatibility effect but no affordance effect, suggesting that spatial compatibility effects do not necessarily rely on individuals' available affordances. Our results show a dissociation between affordance and spatial compatibility effects, and suggest that rather than evoking the specific motor action most suitable for interaction with the viewed object, graspable objects prompt the motor system in a general, body-part-independent fashion.
affordance; amputees; EMG; hand; stimulus-response
Vision plays a crucial role in human interaction by facilitating the coordination of one's own actions with those of others in space and time. While previous findings have demonstrated that vision determines the default use of reference frames, little is known about the role of visual experience in coding action-space during joint action. Here, we tested whether and how visual experience influences the use of reference frames in joint action control. Dyads of congenitally-blind, blindfolded-sighted, and seeing individuals took part in an auditory version of the social Simon task, which required each participant to respond to one of two sounds presented to the left or right of both participants. To disentangle the contributions of external (agent-based and response-based) reference frames during joint action, participants performed the task with their respective response (right) hands uncrossed or crossed over one another. Although the location of the auditory stimulus was completely task-irrelevant, participants responded overall faster when the stimulus location spatially corresponded to the required response side than when it did not: a phenomenon known as the social Simon effect (SSE). In sighted participants, the SSE occurred irrespective of whether hands were crossed or uncrossed, suggesting the use of external, response-based reference frames. Congenitally-blind participants also showed an SSE, but only with uncrossed hands. We argue that congenitally-blind people use both agent-based and response-based reference frames, resulting in conflicting spatial information when hands are crossed and, thus, canceling out the SSE. These results imply that joint action control operates on external reference frames regardless of whether vision is present or (transiently or permanently) absent. However, the type of external reference frame used for organizing motor control in joint action seems to be determined by visual experience.
Directing visual attention to spatial locations or to non-spatial stimulus features can strongly modulate responses of individual cortical sensory neurons. Effects of attention typically vary in magnitude, not only between visual cortical areas but also between individual neurons from the same area. Here, we investigate whether the size of attentional effects depends on the match between the tuning properties of the recorded neuron and the perceptual task at hand. We recorded extracellular responses from individual direction-selective neurons in the middle temporal area (MT) of rhesus monkeys trained to attend either to the color or the motion signal of a moving stimulus. We found that effects of spatial and feature-based attention in MT, which are typically observed in tasks allocating attention to motion, were very similar even when attention was directed to the color of the stimulus. We conclude that attentional modulation can occur in extrastriate cortex, even under conditions without a match between the tuning properties of the recorded neuron and the perceptual task at hand. Our data are consistent with theories of object-based attention describing a transfer of attention from relevant to irrelevant features, within the attended object and across the visual field. These results argue for a unified attentional system that modulates responses to a stimulus across cortical areas, even if a given area is specialized for processing task-irrelevant aspects of that stimulus.
attention; visual objects; neuronal representation; motion; color; middle temporal area MT
•Fornix damage mildly impairs spatial biconditional and passive place learning tasks.
•Fornix lesions impair spatial go/no-go and alternation problems.
•Fornix lesions impair tests making flexible demands on spatial memory.
•Fornix connections are not always required for learning fixed spatial responses.
The present study sought to understand how the hippocampus and anterior thalamic nuclei are conjointly required for spatial learning by examining the impact of cutting a major tract (the fornix) that interconnects these two sites. The initial experiments examined the consequences of fornix lesions in rats on spatial biconditional discrimination learning. The rationale arose from previous findings showing that fornix lesions spare the learning of spatial biconditional tasks, despite the same task being highly sensitive to both hippocampal and anterior thalamic nuclei lesions. In the present study, fornix lesions only delayed acquisition of the spatial biconditional task, pointing to additional contributions from non-fornical routes linking the hippocampus with the anterior thalamic nuclei. The same fornix lesions spared the learning of an analogous nonspatial biconditional task that used local contextual cues. Subsequent tests, including T-maze place alternation, place learning in a cross-maze, and a go/no-go place discrimination, highlighted the impact of fornix lesions when distal spatial information is used flexibly to guide behaviour. The final experiment examined the ability to learn incidentally the spatial features of a square water-maze that had differently patterned walls. Fornix lesions disrupted performance but did not stop the rats from distinguishing the various corners of the maze. Overall, the results indicate that interconnections between the hippocampus and anterior thalamus, via the fornix, help to resolve problems with flexible spatial and temporal cues, but the results also signal the importance of additional, non-fornical contributions to hippocampal-anterior thalamic spatial processing, particularly for problems with more stable spatial solutions.
Biconditional learning; Cognitive map; Configural learning; Hippocampus; Incidental learning; Thalamus
In this study we leveraged the high-temporal resolution of EEG to examine the neural mechanisms underlying the flexible regulation of cognitive control that unfolds over different timescales. We measured behavioral and neural effects of color-word incongruency as different groups of human participants performed three different versions of color-word ‘Stroop’ tasks in which the relative timing of the color and word features varied trial-to-trial. For this purpose we used a standard ‘Stroop’ color-identification task with equal congruent-to-incongruent proportions (50/50%), along with two versions of the ‘Reverse Stroop’ word-identification tasks, for which we manipulated the incongruency proportion (50/50% and 80/20%). Two canonical ERP markers of neural processing of stimulus incongruency, the fronto-central negative-polarity incongruency wave (NINC) and the late positive component (LPC), were evoked across the various conditions. Results indicated that color-word incongruency interacted with the relative feature timing, producing greater neural and behavioral effects when the task-irrelevant stimulus preceded the target, but still significant effects when it followed. Additionally, both behavioral and neural incongruency effects were reduced by nearly half in the word-identification task (ReverseStroop-50/50) relative to the color-identification task (Stroop-50/50), with these effects essentially fully recovering when incongruent trials appeared only infrequently (ReverseStroop-80/20). Across the conditions, NINC amplitudes closely paralleled reaction times, indicating this component is sensitive to the overall level of stimulus conflict. In contrast, LPC amplitudes were largest with infrequent incongruent trials, suggesting a possible readjustment role when proactive control is reduced. These findings thus unveil distinct control mechanisms that unfold over time in response to conflicting stimulus input under different contexts.
Stroop Task; Reverse Stroop Task; Conflict Processing; Event-Related Potentials (ERPs); Incongruency; Stimulus Onset Asynchrony (SOA)
Past studies have explored the relative strengths of auditory features in a selective attention task by pitting features against one another and asking listeners to report the words perceived in a given sentence. While these studies show that the continuity of competing features affects streaming, they did not address whether the influence of specific features is modulated by volitionally directed attention. Here, we explored whether the continuity of a task-irrelevant feature affects the ability to selectively report one of two competing speech streams when attention is specifically directed to a different feature. Sequences of simultaneous pairs of spoken digits were presented in which exactly one digit of each pair matched a primer phrase in pitch and exactly one digit of each pair matched the primer location. Within a trial, location and pitch were randomly paired; they either were consistent with each other from digit to digit or were switched (e.g., the sequence from the primer's location changed pitch across digits). In otherwise identical blocks, listeners were instructed to report digits matching the primer either in location or in pitch. Listeners were told to ignore the irrelevant feature, if possible, in order to perform well. Listener responses depended on task instructions, demonstrating that top–down attention alters how a subject performs the task. Performance improved when the separation of the target and masker in the task-relevant feature increased. Importantly, the values of the task-irrelevant feature also influenced performance in some cases. Specifically, when instructed to attend location, listeners performed worse as the separation between target and masker pitch increased, especially when the spatial separation between digits was small.
These results indicate that task-relevant and task-irrelevant features are perceptually bound together: continuity of task-irrelevant features influences selective attention in an automatic, obligatory manner, consistent with the idea that auditory attention operates on objects.
psychophysics; streaming; top–down; bottom–up; object-based attention
Spatial properties of stimuli are sometimes encoded even when incidental to the demands of a particular learning task. Incidental encoding of spatial information may interfere with learning by (i) causing a failure to generalize learning between trials in which a cue is presented in different spatial locations and (ii) adding common spatial features to stimuli that predict different outcomes. Hippocampal lesions have been found to facilitate acquisition of certain tasks. This facilitation may occur because hippocampal lesions impair incidental encoding of spatial information that interferes with learning. To test this prediction, mice with lesions of the hippocampus were trained on appetitive simple simultaneous discrimination tasks using inserts in the goal arms of a T-maze. Hippocampal-lesioned mice learned the discriminations faster than controls, yet they remained sensitive to changes in spatial information in a manner similar to control mice. A second experiment found that control and hippocampal-lesioned mice showed equivalent incidental encoding of egocentric spatial properties of the inserts, but neither group encoded the allocentric information. These results demonstrate that mice show incidental encoding of egocentric spatial information that decreases the ability to solve simultaneous discrimination tasks. The intact egocentric spatial encoding in hippocampal-lesioned mice contradicts theories of hippocampal function that hold that the hippocampus is necessary for incidental learning per se, or that it is required for modulating stimulus representations based on the relevancy of information. The facilitated learning suggests that hippocampal lesions can enhance learning of the same qualitative information as acquired by control mice.
spatial learning; memory; mouse; lesion; conjunctive representations
Crossing the hands over, whether across the body midline or with respect to each other, leads to measurable changes in spatial compatibility and spatial attention, and frequently to a general decrement in discrimination performance for tactile stimuli. The majority of multisensory crossed-hands effects, however, have been demonstrated with explicit or implicit spatial discrimination tasks, raising the question of whether non-spatial discrimination tasks also show spatial effects when the hands are crossed. We designed a novel, non-spatial tactile discrimination task to address this issue. Participants made speeded discriminations of single- versus double-pulse vibrotactile targets, while trying to ignore simultaneous visual distractor stimuli, in both hands-uncrossed and hands-crossed postures. Tactile discrimination performance was significantly affected by the visual distractors (demonstrating a significant crossmodal congruency effect), and was affected most by visual distractors in the same external location as the tactile target (i.e., spatial modulation), regardless of the posture (uncrossed or crossed) of the hands (i.e., spatial 're-mapping' of visual-tactile interactions). Finally, crossing the hands led to a general performance decrement with visual distractors, but not in a control task with unimodal visual or tactile judgements. These results demonstrate, for the first time, significant spatial and postural modulations of crossmodal congruency effects in a non-spatial discrimination task.
Multisensory; Crossmodal congruency; Crossed hands; Somatosensation; Vision