The goal of the current study was to investigate how salience-driven and goal-driven processes unfold during visual search over multiple eye movements. Eye movements were recorded while observers searched for a target, which was located on (Experiment 1) or defined as (Experiment 2) a specific orientation singleton. This singleton could be the most, the medium, or the least salient element in the display. Results were analyzed as a function of response time, separately for initial and second eye movements. Irrespective of the search task, initial saccades elicited shortly after the onset of the search display were primarily salience-driven, whereas initial saccades elicited after approximately 250 ms were completely unaffected by salience. Initial saccades were increasingly guided in line with task requirements as response times increased. Second saccades were completely unaffected by salience and were consistently goal-driven, irrespective of response time. These results suggest that stimulus salience affects the visual system only briefly after a visual image enters the brain and has no effect thereafter.
Selective attention is closely linked to eye movements. Prior to a saccade, attention shifts to the saccadic goal at the expense of surrounding locations. Such a constricted attentional field, while useful for ensuring accurate saccades, constrains the spatial range of high-quality perceptual analysis. The present study showed that attention can be allocated to locations other than the saccadic goal without disrupting the ongoing pattern of saccades. Saccades were made sequentially along a color-cued path. Attention was assessed by a visual memory task presented during a random pause between successive saccades. Saccadic planning had several effects on memory: (1) fewer letters were remembered during intersaccadic pauses than during maintained fixation; (2) letters appearing on the saccadic path, including locations previously examined, could be remembered, whereas off-path performance was near chance; (3) memory was better at the saccadic target than at all other locations, including the currently fixated location. These results show that the distribution of attention during intersaccadic pauses results from a combination of top-down enhancement at the saccadic target and a more automatic allocation of attention to selected display locations. This suggests that the visual system has mechanisms to control the distribution of attention without interfering with ongoing saccadic programming.
Visual short-term memory (VSTM) has received intensive study over the past decade, with research focused on VSTM capacity and representational format. Yet, the function of VSTM in human cognition is not well understood. Here we demonstrate that VSTM plays an important role in the control of saccadic eye movements. Intelligent human behavior depends on directing the eyes to goal-relevant objects in the world, yet saccades are very often inaccurate and require correction. We hypothesized that VSTM is used to remember the features of the current saccade target so that it can be rapidly reacquired after an errant saccade, a fundamental task faced by the visual system thousands of times each day. In four experiments, memory-based gaze correction was found to be accurate, fast, automatic, and largely unconscious. In addition, a concurrent VSTM load was found to interfere with memory-based gaze correction, but a verbal short-term memory load did not. These findings demonstrate that VSTM plays a direct role in a fundamentally important aspect of visually guided behavior, and they suggest the existence of previously unknown links between VSTM representations and the oculomotor system.
Recent studies provide evidence for task-specific influences on saccadic eye movements. For instance, saccades exhibit higher peak velocity when the task requires coordinating eye and hand movements. The current study shows that the need to process task-relevant visual information at the saccade endpoint can be, in itself, sufficient to cause such effects. In this study, participants performed a visual discrimination task which required a saccade for successful completion. We compared the characteristics of these task-related saccades to those of classical target-elicited saccades, which required participants to fixate a visual target without performing a discrimination task. The results show that task-related saccades are faster and initiated earlier than target-elicited saccades. Differences between both saccade types are also noted in their saccade reaction time distributions and their main sequences, i.e., the relationship between saccade velocity, duration, and amplitude.
The purpose of the present study was to examine the influence of task set on the spatial and temporal characteristics of eye movements during scene perception. In previous work, when strong control was exerted over the viewing task via specification of a target object (as in visual search), task set biased spatial, rather than temporal, parameters of eye movements. Here, we find that more participant-directed tasks (in which the task establishes general goals of viewing rather than specific objects to fixate) affect not only spatial parameters (e.g., saccade amplitude) but also temporal parameters (e.g., fixation duration). Further, task set influenced the rate of change in fixation duration over the course of viewing but not in saccade amplitude, suggesting independent mechanisms for the control of these parameters.
Visual saliency is the perceptual quality that makes some items in visual scenes stand out from their immediate contexts. Visual saliency plays important roles in natural vision in that saliency can direct eye movements, deploy attention, and facilitate tasks like object detection and scene understanding. A central unsolved issue is: What features should be encoded in the early visual cortex for detecting salient features in natural scenes? To explore this important issue, we propose a hypothesis that visual saliency is based on efficient encoding of the probability distributions (PDs) of visual variables in specific contexts in natural scenes, referred to as context-mediated PDs in natural scenes. In this concept, computational units in the model of the early visual system do not act as feature detectors but rather as estimators of the context-mediated PDs of a full range of visual variables in natural scenes, which directly give rise to a measure of visual saliency of any input stimulus. To test this hypothesis, we developed a model of the context-mediated PDs in natural scenes using a modified algorithm for independent component analysis (ICA) and derived a measure of visual saliency based on these PDs estimated from a set of natural scenes. We demonstrated that visual saliency based on the context-mediated PDs in natural scenes effectively predicts human gaze in free-viewing of both static and dynamic natural scenes. This study suggests that the computation based on the context-mediated PDs of visual variables in natural scenes may underlie the neural mechanism in the early visual cortex for detecting salient features in natural scenes.
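The abstract above frames saliency as the improbability of visual input under context-dependent probability distributions. As a loose, minimal illustration of that idea (not the authors' ICA-based model), saliency can be computed as the self-information of each pixel's value under a distribution estimated from the image itself; the function name and parameters below are invented for this sketch:

```python
import numpy as np

def self_info_saliency(image, n_bins=32):
    """Saliency as self-information: rare feature values are salient.

    A simplified stand-in for the paper's context-mediated PDs: the
    probability distribution of local intensities is estimated from the
    image itself, and the saliency of each pixel is -log p(value).
    """
    hist, edges = np.histogram(image, bins=n_bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    # map each pixel to its histogram bin, then look up its probability
    idx = np.clip(np.digitize(image, edges[1:-1]), 0, n_bins - 1)
    return -np.log(p[idx] + 1e-12)

# A mostly dark image with one bright pixel: the rare value is most salient.
img = np.zeros((8, 8))
img[4, 4] = 0.9
sal = self_info_saliency(img)
assert np.unravel_index(sal.argmax(), sal.shape) == (4, 4)
```

The paper's model estimates these distributions over many visual variables in spatial context across natural scenes; this sketch only conveys the core rarity-based computation.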
With each eye movement, stationary objects in the world change position on the retina, yet we perceive the world as stable. Spatial updating, or remapping, is one neural mechanism by which the brain compensates for shifts in the retinal image caused by voluntary eye movements. Remapping of a visual representation is believed to arise from a widespread neural circuit including parietal and frontal cortex. The current experiment tests the hypothesis that extrastriate visual areas in human cortex have access to remapped spatial information. We tested this hypothesis using functional magnetic resonance imaging (fMRI). We first identified the borders of several occipital lobe visual areas using standard retinotopic techniques. We then tested subjects while they performed a single-step saccade task analogous to the task used in neurophysiological studies in monkeys, and two conditions that control for visual and motor effects. We analyzed the fMRI time series data with a nonlinear, fully Bayesian hierarchical statistical model. We identified remapping as activity in the single-step task that could not be attributed to purely visual or oculomotor effects. The strength of remapping was roughly monotonic with position in the visual hierarchy: remapped responses were largest in areas V3A and hV4 and smallest in V1 and V2. These results demonstrate that updated visual representations are present in cortical areas that are directly linked to visual perception.
The lateral intraparietal area (LIP) has been implicated as a salience map for the control of saccadic eye movements and visual attention. Here, we report evidence linking the encoding of saccades and saliency in LIP to modulation of several other sensory-motor behaviors in monkeys. In many LIP neurons, there was a significant trial-by-trial correlation between the firing rate just before a saccade and the post- or pre-saccadic pursuit eye velocity. Some neurons also showed trial-by-trial correlations between firing rate and the speed of “glissades” that occur at the end of saccades to stationary targets. LIP-pursuit correlations were spatially specific and were strong only when the target appeared in the receptive/movement field of the neuron under study. We suggest that LIP is a component of a salience representation that modulates the strength of visual-motor transmission for pursuit, and that it may play a similar role for many movements, beyond its traditional roles in guiding saccadic eye movements and localizing attention.
Natural vision involves sequential eye movements that bring the fovea to locations selected by peripheral vision. How peripheral visual field loss (PVFL) affects this process is not well understood. We examine how the location and extent of PVFL affect eye movement behavior in a naturalistic visual search task. Ten patients with PVFL and 13 normally sighted subjects with full visual fields (FVF) completed 30 visual searches monocularly. Subjects located a 4° × 4° target, pseudo-randomly selected within a 26° × 11° natural image. Eye positions were recorded at 50 Hz. Search duration, fixation duration, saccade size, and number of saccades per trial were not significantly different between the PVFL and FVF groups (p > 0.1). A χ2 test showed that the distributions of saccade directions for PVFL and FVF subjects were significantly different in 8 out of 10 cases (p < 0.01). Humphrey Visual Field pattern deviations for each subject were compared with the spatial distribution of eye movement directions. There were no significant correlations between saccade directional bias and visual field sensitivity across the 10 patients. Visual search performance was not significantly affected by PVFL. An analysis of eye movement directions revealed that patients with PVFL showed a biased directional distribution that was not directly related to the locus of vision loss, challenging feed-forward models of eye movement control. Consequently, many patients do not optimally compensate for visual field loss during visual search.
eye movements; visual search; low vision; glaucoma; peripheral vision; natural scenes
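The χ2 comparison of saccade-direction distributions reported above can be illustrated with a small sketch. The counts below are hypothetical, not the study's data; the statistic is Pearson's χ2 for a 2 × 8 contingency table of 45° direction bins:

```python
import numpy as np

def chi2_statistic(table):
    """Pearson chi-squared statistic for an r x c contingency table."""
    table = np.asarray(table, dtype=float)
    row_totals = table.sum(axis=1, keepdims=True)
    col_totals = table.sum(axis=0, keepdims=True)
    expected = row_totals * col_totals / table.sum()
    return ((table - expected) ** 2 / expected).sum()

# Hypothetical saccade counts in eight 45-degree direction bins for one
# PVFL patient (biased toward alternating bins) and one FVF control
# (roughly uniform). These numbers are invented for illustration.
pvfl = [40, 10, 35, 12, 38, 9, 33, 11]
fvf = [22, 21, 23, 20, 24, 22, 21, 19]

stat = chi2_statistic([pvfl, fvf])
# 18.475 is the chi-squared critical value for p = 0.01 with
# (2 - 1) * (8 - 1) = 7 degrees of freedom.
assert stat > 18.475  # the two direction distributions differ at p < 0.01
```

In practice `scipy.stats.chi2_contingency` computes the statistic, degrees of freedom, and p-value directly; the hand-rolled version here just makes the expected-frequency arithmetic explicit.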
Many common tasks require us to individuate in parallel two or more objects out of a complex scene. Although the mechanisms underlying our abilities to count the number of items, to remember the visual properties of objects, and to make saccadic eye movements towards targets have been studied separately, each of these tasks requires selection of individual objects and shows a capacity limit. Here we show that a common factor, salience, determines the capacity limit in these various tasks. We manipulated bottom-up salience (visual contrast) and top-down salience (task relevance) in enumeration and visual memory tasks. As one item became increasingly salient, the subitizing range was reduced and memory performance for all other, less salient items decreased. Overall, the pattern of results suggests that our abilities to enumerate and remember small groups of stimuli are grounded in an attentional priority or salience map which represents the location of important items.
The primate superior colliculus (SC) has long been known to be involved in saccade generation. However, SC neurons also exhibit fixation-related and smooth-pursuit-related activity. A parsimonious explanation for these seemingly disparate findings is that the SC contains a map of behaviorally relevant goal locations, rather than just a motor map for saccades and fixation. This explanation predicts that SC activity should reflect the behavioral goal, even when the behavioral response is not fixation or saccades, and even if the goal does not correspond to a visual stimulus. We tested this prediction by employing a tracking task that dissociates the stimulus and goal locations. In this task, monkeys tracked the invisible midpoint between two peripheral bars, such that the visual stimuli were peripheral but the goal was foveal/parafoveal. We recorded from SC neurons representing peripheral locations associated with the stimulus or central locations associated with the goal. Most neurons with peripheral response fields did not respond differently during tracking than during passive viewing of the stimulus under fixation; most neurons with central response fields responded more during tracking than during fixation, despite the lack of a visual stimulus. Moreover, the spatial distribution of activity during tracking was larger than that during fixation or tracking of a foveal stimulus, suggesting that the greater spatial uncertainty about the invisible goal corresponded to more widespread SC activity. These results demonstrate the flexibility with which activity across the SC represents the location, and also the spatial precision, of behaviorally relevant goals for multiple eye movements.
Superior Colliculus; Pursuit; Voluntary Eye Movement; Stimulus-Response; Behavioral Goal; Population Coding
What determines whether a scene is remembered or forgotten? Our results show how visual scenes are encoded into memory at behaviorally relevant points in time.
The ability to remember a briefly presented scene depends on a number of factors, such as its saliency, novelty, degree of threat, or behavioral relevance to a task. Here, however, we show that the encoding of a scene into memory may depend not only on what the scene contains but also when it occurs. Participants performed an attentionally demanding target detection task at fixation while also viewing a rapid sequence of full-field photographs of urban and natural scenes. Participants were then tested on whether they recognized a specific scene from the previous sequence. We found that scenes were recognized reliably only when presented concurrently with a target at fixation. This is evidence of a mechanism where traces of a visual scene are automatically encoded into memory at behaviorally relevant points in time regardless of the spatial focus of attention.
What determines whether a visual scene is remembered or forgotten? The ability to remember a briefly presented scene depends on a number of factors, such as its saliency, novelty, degree of threat, or relevance to a behavioral outcome. Generally, attention is thought to be key, in that you can only remember the part of a visual scene you were paying attention to at any given moment. Here, we show that memory for visual scenes may depend not on your attention or on what a scene contains, but on when the scene is presented. In this study, attention to one task enhanced recognition performance for scenes in a second task only when the first task had behavioral relevance. Our results suggest a mechanism whereby traces of a visual scene are automatically encoded into memory, even though the scene is not the spatial focus of attention.
Many daily activities involve intrinsic or extrinsic goal-directed eye and hand movements. An extensive visuomotor coordination network including nigro-striatal pathways is required for efficient timing and positioning of eyes and hands. The aim of this study was to investigate how Parkinson’s disease (PD) affects eye-hand coordination in tasks with different cognitive complexity.
We used a touch screen, an eye-tracking device and a motion capturing system to quantify changes in eye-hand coordination in early-stage PD patients (H&Y < 2.5) and age-matched controls. Timing and kinematics of eye and hand were quantified in four eye-hand coordination tasks (pro-tapping, dual planning, anti-tapping and spatial memory task).
In the pro-tapping task, saccade initiation towards extrinsic goals was not impaired. However, in the dual planning and anti-tapping tasks, initiation of saccades towards intrinsic goals was faster in PD patients. Hand movements were affected differently: initiation of the hand movement was delayed only in the pro-tapping and dual planning tasks. Overall, hand movements in PD patients were executed more slowly than in controls.
Whereas initiation of saccades in an extrinsic goal-directed task (the pro-tapping task) is not affected, early-stage PD patients have difficulty suppressing reflexive saccades towards extrinsic goals in tasks where the endpoint is an intrinsic goal (e.g., the dual planning and anti-tapping tasks). This is specific to eye movements, as hand movements show delayed responses in the pro-tapping and dual planning tasks. This suggests that the reported impairment of the dorsolateral prefrontal cortex in early-stage PD patients affects only the inhibition of eye movements. We conclude that the timing and kinematics of eye and hand movements in visuomotor tasks are affected in PD patients. This result may have clinical significance by providing a behavioral marker for the early diagnosis of PD.
Visual search is a ubiquitous task of great importance: it allows us to quickly find the objects that we are looking for. During active search for an object (target), eye movements are made to different parts of the scene. Fixation locations are chosen based on a combination of information about the target and the visual input. At the end of a successful search, the eyes typically fixate on the target. But does this imply that target identification occurs while looking at it? The duration of a typical fixation (∼170 ms) and neuronal latencies of both the oculomotor system and the visual stream indicate that there might not be enough time to do so. Previous studies have suggested the following solution to this dilemma: the target is identified extrafoveally and this event triggers a saccade towards the target location. However, this has not been experimentally verified. Here we test the hypothesis that subjects recognize the target before they look at it, using a search display of oriented colored bars. Using a gaze-contingent real-time technique, we prematurely stopped search shortly after subjects fixated the target. Afterwards, we asked subjects to identify the target location. We find that subjects can identify the target location even when fixating on the target for less than 10 ms. Longer fixations on the target do not increase detection performance but increase confidence. In contrast, subjects cannot perform this task if they are not allowed to move their eyes. Thus, information about the target during conjunction search for colored oriented bars can, in some circumstances, be acquired at least one fixation ahead of reaching the target. The final fixation serves to increase confidence rather than performance, illustrating a distinct role for the final fixation in the subjective judgment of confidence rather than accuracy.
eye movements; object recognition; psychophysics; top-down attention; visual search; confidence judgement
Over the past decade several research groups have taken a renewed interest in the special role of a type of small eye movement, called ‘microsaccades’, in various visual processes, such as the activation of neurons in the central nervous system, or the prevention of image fading. As the study of microsaccades and their relation to visual processes goes back at least half a century, it seems appropriate to review the more recent reports in light of the history of research on maintained oculomotor fixation, in general, and on microsaccades in particular. Our review shows that there is no compelling evidence to support the view that microsaccades (or, fixation saccades more generally) serve a necessary role in improving oculomotor control or in keeping the visual world visible. The role of the retinal transients produced by small saccades during fixation needs to be evaluated in the context of both the brisk image motions present during active visual tasks performed by freely moving people, as well as the role of selective attention in modulating the strength of signals throughout the visual field.
microsaccade; oculomotor; vision; saccades; stabilized image; slow control; retinal image motion; VOR; attention; fixation
Visual attention and saccades are typically studied in artificial situations, with stimuli presented to the steadily fixating eye, or saccades made along specified paths. By contrast, in the real world saccadic patterns are constrained only by the demands of the motivating task. We studied attention during pauses between saccades made to perform 3 free-viewing tasks: counting dots, pointing to the same dots with a visible cursor, or simply looking at the dots using a freely-chosen path. Attention was assessed by the ability to identify the orientation of a briefly-presented Gabor probe. All primary tasks produced losses in identification performance, with counting producing the largest losses, followed by pointing and then looking-only. Looking-only resulted in a 37% increase in contrast thresholds in the orientation task. Counting produced more severe losses that were not overcome by increasing Gabor contrast. Detection or localization of the Gabor, unlike identification, were largely unaffected by any of the primary tasks. Taken together, these results show that attention is required to control saccades, even with freely-chosen paths, but the attentional demands of saccades are less than those attached to tasks such as counting, which have a significant cognitive load. Counting proved to be a highly demanding task that either exhausted momentary processing capacity (e.g., working memory or executive functions), or, alternatively, encouraged a strategy of filtering out all signals irrelevant to counting itself. The fact that the attentional demands of saccades (as well as those of detection/localization) are relatively modest makes it possible to continually adjust both the spatial and temporal pattern of saccades so as to re-allocate attentional resources as needed to handle the complex and multifaceted demands of real-world environments.
saccades; attention; counting; pointing; natural tasks; eye movements; perception; psychophysics; orientation identification; localization
With each eye movement, the image received by the visual system changes drastically. To maintain stable spatiotopic (world-centered) representations, the relevant retinotopic (eye-centered) coordinates must be continually updated. Although updating or remapping of visual scene representations can occur very rapidly, J. D. Golomb, M. M. Chun, and J. A. Mazer (2008) demonstrated that representations of sustained attention update more slowly than the remapping literature would predict; attentional benefits at previously attended retinotopic locations linger after completion of the saccade, even when this location is no longer behaviorally relevant. The present study explores the robustness of this “retinotopic attentional trace.” We report significant retinotopic facilitation despite attempts to eliminate or reduce it by enhancing spatiotopic reference frames with permanent visual cues in the stimulus display and by introducing a different task where the attended location is the saccade target itself. Our results support and extend our earlier model of native retinotopically organized salience maps that must be dynamically updated to reflect the task-relevant spatiotopic location with each saccade. Consistent with the idea that attentional facilitation arises from persistent, recurrent neural activity, it takes measurable time for this facilitation to decay, leaving behind a retinotopic attentional trace after the saccade has been executed, regardless of conflicting task demands.
coordinate systems; spatial attention; saccades; reference frame; remapping
Continuous visual information is important for movement initiation in a variety of motor tasks. However, even in the absence of visual information, people are able to initiate their responses by using motion extrapolation processes. Initiating actions based on these cognitive processes, however, can demand more attentional resources than are required in situations in which visual information is uninterrupted. In the experiment reported here, we sought to determine whether the absence of visual information would affect the latency to inhibit an anticipatory action.
The participants performed an anticipatory timing task in which they were instructed to move in synchrony with the arrival of a moving object at a predetermined contact point. On 50% of the trials, a stop sign appeared on the screen, serving as a signal for the participants to halt their movement. They performed the anticipatory task under two different viewing conditions: Full-View (uninterrupted) and Occluded-View (occlusion of the last 500 ms prior to the arrival at the contact point).
The results indicated that the absence of visual information prolonged the latency to suppress the anticipatory movement.
We suggest that the absence of visual information requires additional cortical processing that creates competing demands on neural resources. Reduced neural resources potentially cause an increased reaction time to the inhibitory input or increased variability in time estimation, which in combination would account for the prolonged latency.
The eye movements of native English speakers, native Chinese speakers, and bilingual Chinese/English speakers who were born either in China (and moved to the US at an early age) or in the US were recorded during six tasks: (1) reading, (2) face processing, (3) scene perception, (4) visual search, (5) counting Chinese characters in a passage of text, and (6) visual search for Chinese characters. Across the different groups, there was a strong tendency for consistency in eye movement behavior; if the fixation durations of a given viewer were long on one task, they tended to be long on other tasks (and the same tended to be true for saccade size). Some tasks, notably reading, did not conform to this pattern. Furthermore, experience with a given writing system had a large impact on fixation durations and saccade lengths. With respect to cultural differences, there was no evidence that Chinese participants spent more time looking at the background information (and, conversely, less time looking at the foreground information) than the American participants. Also, Chinese participants’ fixations were more numerous and of shorter duration than those of their American counterparts while viewing faces and scenes and while counting Chinese characters in text.
Constructing an internal representation of the world from successive visual fixations, i.e. separated by saccadic eye movements, is known as trans-saccadic perception. Research on trans-saccadic perception (TSP) has been traditionally aimed at resolving the problems of memory capacity and visual integration across saccades. In this paper, we review this literature on TSP with a focus on research showing that egocentric measures of the saccadic eye movement can be used to integrate simple object features across saccades, and that the memory capacity for items retained across saccades, like visual working memory, is restricted to about three to four items. We also review recent transcranial magnetic stimulation experiments which suggest that the right parietal eye field and frontal eye fields play a key functional role in spatial updating of objects in TSP. We conclude by speculating on possible cortical mechanisms for governing egocentric spatial updating of multiple objects in TSP.
trans-saccadic perception; saccades; spatial updating; parietal eye fields; frontal eye fields; transcranial magnetic stimulation
In the natural world, the brain must handle inherent delays in visual processing. This is a problem particularly during dynamic tasks. A possible solution to visuo-motor delays is prediction of a future state of the environment based on the current state and properties of the environment learned from experience. Prediction is well known to occur in both saccades and pursuit movements and is likely to depend on some kind of internal visual model as the basis for this prediction. However, most evidence comes from controlled laboratory studies using simple paradigms. In this study, we examine eye movements made in the context of demanding natural behavior, while playing squash. We show that prediction is a pervasive component of gaze behavior in this context. We show in addition that these predictive movements are extraordinarily precise and operate continuously in time across multiple trajectories and multiple movements. This suggests that prediction is based on complex dynamic visual models of the way that balls move, accumulated over extensive experience. Since eye, head, arm, and body movements all co-occur, it seems likely that a common internal model of predicted visual state is shared by different effectors to allow flexible coordination patterns. It is generally agreed that internal models are responsible for predicting future sensory state for control of body movements. The present work suggests that model-based prediction is likely to be a pervasive component in natural gaze control as well.
Saccadic eye movements; Prediction; Internal models; Squash; Gaze pursuit
Gaze changes and the resultant fixations that orchestrate the sequential acquisition of information from the visual environment are the central feature of primate vision. How are we to understand their function? For the most part, theories of fixation targets have been image based: The hypothesis being that the eye is drawn to places in the scene that contain discontinuities in image features such as motion, colour, and texture. But are these features the cause of the fixations, or merely the result of fixations that have been planned to serve some visual function? This paper examines the issue and reviews evidence from various image-based and task-based sources. Our conclusion is that the evidence is overwhelmingly in favour of fixation control being essentially task based.
Gaze control; Saliency; Task modeling; Saccades
The study addressed whether top-down control of visual cortex supports volitional behavioral control in a novel antisaccade task. The hypothesis was that anticipatory modulations of visual cortex activity would differentiate trials on which subjects knew an anti- versus a pro-saccade response was required. Trials consisted of flickering checkerboards in both peripheral visual fields, followed by brightening of one checkerboard (target) while both kept flickering. Neural activation related to checkerboards before target onset (bias signal) was assessed using electroencephalography. Pre-target visual cortex responses to checkerboards were strongly modulated by task demands (significantly lower on antisaccade trials), an effect that may reduce the predisposition to saccade generation instigated by visual capture. The results illustrate how top-down sensory regulation can complement motor preparation to facilitate adaptive voluntary behavioral control.
attention; bias signal; EEG; saccade; antisaccade; visual steady-state
This article reviews the past 25 years of research on eye movements (1986–2011). Emphasis is on three oculomotor behaviors: gaze control, smooth pursuit, and saccades, and on their interactions with vision. Focus over the past 25 years has remained on the fundamental and classical questions: What are the mechanisms that keep gaze stable with either stationary or moving targets? How does the motion of the image on the retina affect vision? Where do we look, and why, when performing a complex task? How can the world appear clear and stable despite continual movements of the eyes? The past 25 years of investigation of these questions have seen progress and transformations at all levels due to new approaches (behavioral, neural, and theoretical) aimed at studying how eye movements cope with real-world visual and cognitive demands. The work has led to a better understanding of how prediction, learning, and attention work with sensory signals to contribute to the effective operation of eye movements in visually rich environments.
It has been suggested that some psychotic symptoms reflect ‘aberrant salience’, related to dysfunctional reward learning. To test this hypothesis, we investigated whether patients with schizophrenia showed impaired learning of task-relevant stimulus–reinforcement associations in the presence of distracting stimuli.
We tested 20 medicated patients with schizophrenia and 17 controls on a reaction time game, the Salience Attribution Test. In this game, participants made a speeded response to earn money in the presence of conditioned stimuli (CSs). Each CS comprised two visual dimensions, colour and form. Probability of reinforcement varied over one of these dimensions (task-relevant), but not the other (task-irrelevant). Measures of adaptive and aberrant motivational salience were calculated on the basis of latency and subjective reinforcement probability rating differences over the task-relevant and task-irrelevant dimensions, respectively.
Participants rated reinforcement significantly more likely and responded significantly faster on high-probability-reinforced relative to low-probability-reinforced trials, representing adaptive motivational salience. Patients exhibited reduced adaptive salience relative to controls, but the two groups did not differ in terms of aberrant salience. Patients with delusions exhibited significantly greater aberrant salience than those without delusions, and aberrant salience also correlated with negative symptoms. In the controls, aberrant salience correlated significantly with ‘introvertive anhedonia’ schizotypy.
These data support the hypothesis that aberrant salience is related to the presence of delusions in medicated patients with schizophrenia, but are also suggestive of a link with negative symptoms. The relationship between aberrant salience and psychotic symptoms warrants further investigation.
Aberrant salience; dopamine; psychosis; reinforcement; salience attribution test; schizophrenia
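The adaptive and aberrant salience measures described above are latency differences taken over the task-relevant and task-irrelevant stimulus dimensions, respectively. A minimal sketch with invented reaction times (the numbers are illustrative, not the study's data):

```python
# Hypothetical mean reaction times (ms) split by each stimulus dimension.
# In this example colour is the task-relevant dimension (predicts reward)
# and form is task-irrelevant (does not predict reward).
rt_relevant_high, rt_relevant_low = 420.0, 480.0      # high- vs low-probability colour
rt_irrelevant_high, rt_irrelevant_low = 448.0, 452.0  # high- vs low-probability form

# Adaptive salience: speeding on high-probability-reinforced trials over
# the task-relevant dimension.
adaptive_salience = rt_relevant_low - rt_relevant_high
# Aberrant salience: the analogous latency difference over the
# task-irrelevant dimension, where no speeding should occur.
aberrant_salience = abs(rt_irrelevant_low - rt_irrelevant_high)

print(adaptive_salience, aberrant_salience)  # 60.0 4.0
```

A healthy pattern is a large adaptive difference and a near-zero aberrant difference; the abstract's finding is that delusional patients show elevated values on the aberrant measure.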