Saccades are so-called ballistic movements, executed without online visual feedback. After each saccade, the saccadic motor plan is modified in response to post-saccadic feedback through the mechanism of saccadic adaptation. The post-saccadic feedback is provided by the retinal position of the target after the saccade. If the target moves after the saccade, gaze may follow the moving target. In that case, the eyes are controlled by the pursuit system, a system that controls smooth eye movements. Although these two systems have in the past been considered mostly independent, recent lines of research point towards many interactions between them. We were interested in the question of whether saccade amplitude adaptation is induced when the target moves smoothly after the saccade. Prior studies of saccadic adaptation have considered intra-saccadic target steps as learning signals. In the present study, the intra-saccadic target step of the McLaughlin paradigm of saccadic adaptation was replaced by target movement and post-saccadic pursuit of the target. We found that saccadic adaptation occurred in this situation, a further indication of an interaction of the saccadic system and the pursuit system with the aim of optimized eye movements.
Movement accuracy depends crucially on the ability to detect errors while actions are being performed. When inaccuracies occur repeatedly, both an immediate motor correction and a progressive adaptation of the motor command can unfold. Of all the movements in the motor repertoire of humans, saccadic eye movements are the fastest. Due to the high speed of saccades, and to the impairment of visual perception during saccades, a phenomenon called “saccadic suppression”, it is widely believed that the adaptive mechanisms maintaining saccadic performance depend critically on visual error signals acquired after saccade completion. Here, we demonstrate that, contrary to this widespread view, saccadic adaptation can be based entirely on visual information presented during saccades. Our results show that visual error signals introduced during saccade execution (by shifting a visual target at saccade onset and blanking it at saccade offset) induce the same level of adaptation as error signals presented for the same duration after saccade completion. In addition, they reveal that this processing of intra-saccadic visual information for adaptation depends critically on visual information presented during the deceleration phase, but not the acceleration phase, of the saccade. These findings demonstrate that the human central nervous system can use short intra-saccadic glimpses of visual information for motor adaptation, and they call for a reappraisal of current models of saccadic adaptation.
Many natural actions require the coordination of two different kinds of movements. How are targets chosen under these circumstances: do central commands instruct different movement systems in parallel, or does the execution of one movement activate a serial chain that automatically chooses targets for the other movement? We examined a natural eye tracking action that consists of orienting saccades and tracking smooth pursuit eye movements, and found strong physiological evidence for a serial strategy. Monkeys chose freely between two identical spots that appeared at different sites in the visual field and moved in orthogonal directions. If a saccade was evoked to one of the moving targets by microstimulation in either the frontal eye field (FEF) or the superior colliculus (SC), then the same target was automatically chosen for pursuit. Our results imply that the neural signals responsible for saccade execution can also act as an internal command of target choice for other movement systems.
The ability to recognize one's own movement visually is important for motor control and, through attribution of agency, for social interactions. Agency of actions may be decided by comparisons of visual feedback, efferent signals and proprioceptive inputs. Because the ability to identify one's own visual feedback from passive movements is decreased relative to active movements, or in some cases is even absent, the role of proprioception in self-recognition has been questioned. Proprioception during passive and active movements may, however, differ, and so to address any role for proprioception in the sense of agency the active movement condition must be examined. Here we tested a chronically deafferented man (IW) and an age-matched group of 6 healthy controls in a task requiring judgement of the timing of action. Subjects performed finger movements and watched a visual cursor that moved either synchronously or asynchronously with a random delay, and reported whether or not they felt they controlled the cursor. Movement accuracy was matched between groups. In the absence of proprioception, IW was less able than the control group to discriminate self- from computer-produced cursor movement based on the timing of movement. In a control visual discrimination task with concurrent similar finger movements but no agency detection, IW was unimpaired, suggesting that this effect was task specific. We conclude that proprioception does contribute to the visual identification of ownership during active movements and thus to the sense of agency.
Two experiments explored the mapping between language and mental representations of visual scenes. In both experiments, participants viewed, for example, a scene depicting a woman, a wine glass and bottle on the floor, an empty table, and various other objects. In Experiment 1, participants concurrently heard either ‘The woman will put the glass on the table’ or ‘The woman is too lazy to put the glass on the table’. Subsequently, with the scene unchanged, participants heard that the woman ‘will pick up the bottle, and pour the wine carefully into the glass.’ Experiment 2 was identical except that the scene was removed before the onset of the spoken language. In both cases, eye movements after ‘pour’ (anticipating the glass) and at ‘glass’ reflected the language-determined position of the glass, as either on the floor, or moved onto the table, even though the concurrent (Experiment 1) or prior (Experiment 2) scene showed the glass in its unmoved position on the floor. Language-mediated eye movements thus reflect the real-time mapping of language onto dynamically updateable event-based representations of concurrently or previously seen objects (and their locations).
Sentence comprehension; Eye movements; Visual scene interpretation; Situation models
It is widely assumed that intracranial recordings from the brain are only minimally affected by contamination due to ocular-muscle electromyogram (oEMG). Here we show that this is not always the case. In intracranial recordings from five surgical epilepsy patients we observed that eye movements caused a transient biphasic potential at the onset of a saccade, resembling the saccadic spike potential commonly seen in scalp EEG, accompanied by an increase in broadband power between 20 and 200 Hz. Using concurrently recorded eye movements and high-density intracranial EEG (iEEG) we developed a detailed overview of the spatial distribution and temporal characteristics of the saccade-related oculomotor signal within recordings from ventral, medial and lateral temporal cortex. The occurrence of the saccadic spike was not explained solely by reference contact location, and was observed near the temporal pole for small (< 2 deg) amplitude saccades and over a broad area for larger saccades. We further examined the influence of saccade-related oEMG contamination on measurements of spectral power and interchannel coherence. Contamination manifested in both spectral power and coherence measurements, in particular over the anterior half of the ventral and medial temporal lobe. Next, we compared methods for removing the contaminating signal and found that nearest-neighbor bipolar re-referencing and ICA filtering were effective for suppressing oEMG at locations far from the orbits, but tended to leave some residual contamination at the temporal pole. Finally, we show that genuine cortical broadband gamma responses observed in averaged data from ventral temporal cortex can bear a striking similarity in time course and bandwidth to oEMG contamination recorded at more anterior locations.
We conclude that eye movement-related contamination should be ruled out when reporting high gamma responses in human intracranial recordings, especially those obtained near anterior and medial temporal lobe.
intracranial EEG; ECoG; saccadic spike; EMG; gamma band; eye movement; eye muscle
For many years there has been a debate about the role of the parietal lobe in the generation of behavior. Does it generate movement plans (intention) or choose objects in the environment for further processing? To answer this, we focus on the lateral intraparietal area (LIP), an area that has been shown to play independent roles in target selection for saccades and the generation of visual attention. Based on results from a variety of tasks, we propose that LIP acts as a priority map in which objects are represented by activity proportional to their behavioral priority. We present evidence to show that the priority map combines bottom-up inputs like a rapid visual response with an array of top-down signals like a saccade plan. The spatial location representing the peak of the map is used by the oculomotor system to target saccades and by the visual system to guide visual attention.
lateral intraparietal area; LIP; saccade; visual search; salience; vision
During reading, we generate saccadic eye movements to move words into the center of the visual field for word processing. However, due to systematic and random errors in the oculomotor system, distributions of within-word landing positions are rather broad and show overlapping tails, which suggests that a fraction of fixations is mislocated and falls on words to the left or right of the selected target word. Here we propose a new procedure for the self-consistent estimation of the likelihood of mislocated fixations in normal reading. Our approach is based on iterative computation of the proportions of several types of oculomotor errors, the underlying probabilities for word-targeting, and corrected distributions of landing positions. We found that the average fraction of mislocated fixations ranges from about 10% to more than 30% depending on word length. These results show that fixation probabilities are strongly affected by oculomotor errors.
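As an illustration of why overlapping landing-position tails alone produce sizeable mislocation rates, the fraction of fixations landing outside an intended word can be sketched from a Gaussian landing-error model. This is a minimal sketch with assumed parameter values, not the iterative self-consistent procedure proposed in the study:

```python
import math

def mislocated_fraction(word_len, sigma):
    """Probability that a fixation aimed at the center of a word of
    word_len letters lands beyond either word boundary, assuming a
    Gaussian landing-position error with standard deviation sigma
    (in letters). Illustrative model, not the authors' estimator."""
    half = word_len / 2.0
    inside = math.erf(half / (sigma * math.sqrt(2.0)))  # P(|error| < half)
    return 1.0 - inside

# Shorter words leave less room for landing error, so a larger share
# of fixations spills onto neighboring words.
for n in (3, 5, 8):
    print(n, round(mislocated_fraction(n, 2.0), 3))
```

The study's actual procedure goes further: because the observed landing distributions are themselves contaminated by the mislocated fixations being estimated, the proportions of oculomotor errors, the word-targeting probabilities, and the corrected landing distributions must be re-estimated iteratively until self-consistent.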
Saccades modulate the relationship between visual motion and smooth eye movement. Before a saccade, pursuit eye movements reflect a vector average of motion across the visual field. After a saccade, pursuit primarily reflects the motion of the target closest to the endpoint of the saccade. We tested the hypothesis that the saccade produces a spatial weighting of motion around the endpoint of the saccade. Using a moving pursuit stimulus that stepped to a new spatial location just before a targeting saccade, we controlled the distance between the endpoint of the saccade and the position of the moving target. We demonstrate that the smooth eye velocity following the targeting saccade weights the presaccadic visual motion inputs by the distance from their location in space to the endpoint of the saccade, defining the extent of a spatiotemporal filter for driving the eyes. The center of the filter is located at the endpoint of the saccade in space, not at the position of the fovea. The filter is stable in the face of a distracter target, is present for saccades to stationary and moving targets, and affects both the speed and direction of the postsaccadic eye movement. The spatial filter can explain the target-selecting gain change in postsaccadic pursuit, and has intriguing parallels to the process by which perceptual decisions about a restricted region of space are enhanced by attention. The effect of the spatial saccade plan on the pursuit response to a given retinal motion describes the dynamics of a coordinate transformation.
pursuit; saccades; target choice; gain control; visual tracking; attention; salience; mislocalization
This article reviews the past 25 years of research on eye movements (1986–2011). Emphasis is on three oculomotor behaviors: gaze control, smooth pursuit and saccades, and on their interactions with vision. Focus over the past 25 years has remained on the fundamental and classical questions: What are the mechanisms that keep gaze stable with either stationary or moving targets? How does the motion of the image on the retina affect vision? Where do we look – and why – when performing a complex task? How can the world appear clear and stable despite continual movements of the eyes? The past 25 years of investigation of these questions has seen progress and transformations at all levels due to new approaches (behavioral, neural and theoretical) aimed at studying how eye movements cope with real-world visual and cognitive demands. The work has led to a better understanding of how prediction, learning and attention work with sensory signals to contribute to the effective operation of eye movements in visually rich environments.
We studied the neural correlates of rapid eye movements (REMs) during sleep by timing REMs from video recordings and using rapid event-related functional MRI. Consistent with the hypothesis that REMs share brain systems and mechanisms with waking eye movements and are visually targeted saccades, we found REM-locked activation in the primary visual cortex, thalamic reticular nucleus (TRN), ‘visual claustrum’, retrosplenial cortex (RSC, only in the right hemisphere), fusiform gyrus, anterior cingulate cortex, and the oculomotor circuit that controls awake saccadic eye movements (and subserves awake visuospatial attention). Unexpectedly, robust activation also occurred in non-visual sensory cortices, motor cortex, language areas, and the ascending reticular activating system, including the basal forebrain, the major source of cholinergic input to the entire cortex. REM-associated activation of these areas, especially non-visual primary sensory cortices, TRN and claustrum, parallels findings from waking studies on the interactions between multiple sensory data and their ‘binding’ into a unified percept, suggesting that these mechanisms are also shared in waking and dreaming and that the sharing goes beyond the expected visual scanning mechanisms. Surprisingly, REMs were associated with a decrease in signal in specific periventricular subregions, matching the distribution of the serotonergic supraependymal plexus. REMs might serve as a useful task-free probe into major brain systems for functional brain imaging.
fMRI; REM; thalamic reticular nucleus; multisensory; oculomotor circuit; visual scanning
Internal monitoring of oculomotor commands may help to anticipate and keep track of changes in perceptual input imposed by our eye movements. Neurophysiological studies in non-human primates identified corollary discharge (CD) signals of oculomotor commands that are conveyed via thalamus to frontal cortices. We tested whether disruption of these monitoring pathways on the thalamic level impairs the perceptual matching of visual input before and after an eye movement in human subjects. Fourteen patients with focal thalamic stroke and 20 healthy control subjects performed a task requiring a perceptual judgment across eye movements. Subjects reported the apparent displacement of a target cue that jumped unpredictably in sync with a saccadic eye movement. In a critical condition of this task, six patients exhibited clearly asymmetric perceptual performance for rightward vs. leftward saccade direction. Furthermore, perceptual judgments in seven patients systematically depended on oculomotor targeting errors, with self-generated targeting errors erroneously attributed to external stimulus jumps. Voxel-based lesion-symptom mapping identified an area in right central thalamus as critical for the perceptual matching of visual space across eye movements. Our findings suggest that trans-thalamic CD transmission decisively contributes to a correct prediction of the perceptual consequences of oculomotor actions.
efference copy; corollary discharge; visual stability; prediction; thalamus; human; lesion; sensorimotor
Impulsivity is the tendency to act without forethought. It is a personality trait commonly used in the diagnosis of many psychiatric diseases. In clinical practice, impulsivity is estimated using written questionnaires. However, answers to questions might be subject to personal biases and misinterpretations. In order to alleviate this problem, eye movements could be used to study differences in decision processes related to impulsivity. Therefore, we investigated correlations between impulsivity scores obtained with a questionnaire in healthy subjects and characteristics of their anticipatory eye movements in a simple smooth pursuit task. Healthy subjects were asked to answer the UPPS questionnaire (Urgency Premeditation Perseverance and Sensation seeking Impulsive Behavior scale), which distinguishes four independent dimensions of impulsivity: Urgency, lack of Premeditation, lack of Perseverance, and Sensation seeking. The same subjects took part in an oculomotor task that consisted of pursuing a target that moved in a predictable direction. This task reliably evoked anticipatory saccades and smooth eye movements. We found that eye movement characteristics such as latency and velocity were significantly correlated with UPPS scores. The specific correlations between distinct UPPS factors and oculomotor anticipation parameters support the validity of the UPPS construct and corroborate neurobiological explanations for impulsivity. We suggest that the oculomotor approach of impulsivity put forth in the present study could help bridge the gap between psychiatry and physiology.
Visual neurons have spatial receptive fields that encode the positions of objects relative to the fovea. Because foveate animals execute frequent saccadic eye movements, this position information is constantly changing, even though the visual world is generally stationary. Interestingly, visual receptive fields in many brain regions have been found to exhibit changes in strength, size, or position around the time of each saccade, and these changes have often been suggested to be involved in the maintenance of perceptual stability. Crucial to the circuitry underlying perisaccadic changes in visual receptive fields is the superior colliculus (SC), a brainstem structure responsible for integrating visual and oculomotor signals. In this work we have studied the time-course of receptive field changes in the SC. We find that the distribution of the latencies of SC responses to stimuli placed outside the fixation receptive field is bimodal: The first mode is comprised of early responses that are temporally locked to the onset of the visual probe stimulus and stronger for probes placed closer to the classical receptive field. We suggest that such responses are therefore consistent with a perisaccadic rescaling, or enhancement, of weak visual responses within a fixed spatial receptive field. The second mode is more similar to the remapping that has been reported in the cortex, as responses are time-locked to saccade onset and stronger for stimuli placed in the postsaccadic receptive field location. We suggest that these two temporal phases of spatial updating may represent different sources of input to the SC.
Gaze-contingent display paradigms play an important role in vision research. The time delay due to data transmission from eye tracker to monitor may lead to a misalignment between the gaze direction and image manipulation during eye movements, and therefore compromise the contingency. We present a method to reduce this misalignment by using a compressed exponential function to model the trajectories of saccadic eye movements. Our algorithm was evaluated using experimental data from 1,212 saccades ranging from 3° to 30°, which were collected with an EyeLink 1000 and a Dual-Purkinje Image (DPI) eye tracker. The model fits eye displacement with a high agreement (R2 > 0.96). When assuming a 10-millisecond time delay, prediction of 2D saccade trajectories using our model could reduce the misalignment by 30% to 60% with the EyeLink tracker and 20% to 40% with the DPI tracker for saccades larger than 8°. Because a certain number of samples are required for model fitting, the prediction did not offer improvement for most small saccades and the early stages of large saccades. Evaluation was also performed for a simulated 100-Hz gaze-contingent display using the prerecorded saccade data. With prediction, the percentage of misalignment larger than 2° dropped from 45% to 20% for EyeLink and 42% to 26% for DPI data. These results suggest that the saccade-prediction algorithm may help create more accurate gaze-contingent displays.
saccade; compressed exponential function; gaze-contingent display; eye movements; gaze tracking; gaze prediction
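The prediction scheme described above can be sketched as follows: fit the compressed exponential displacement model x(t) = A(1 − exp(−(t/τ)^β)) to the gaze samples observed so far, then extrapolate the position one transmission delay ahead. The grid-search fit below is a simplified stand-in for the authors' fitting procedure, with made-up parameter ranges:

```python
import math

def compressed_exponential(t, amplitude, tau, beta):
    # Saccade displacement model: x(t) = A * (1 - exp(-(t / tau)**beta))
    return amplitude * (1.0 - math.exp(-((t / tau) ** beta)))

def fit_and_predict(times_ms, positions_deg, predict_t_ms):
    """Least-squares grid search over (A, tau, beta), then extrapolation
    to predict_t_ms. The parameter grids are illustrative assumptions,
    not values from the study."""
    best_err, best_params = float("inf"), None
    for amplitude in (a * 0.5 for a in range(2, 61)):  # 1 to 30 deg
        for tau in range(5, 61, 5):                    # time constant, ms
            for beta in (1.5, 2.0, 2.5):               # compression exponent
                err = sum(
                    (compressed_exponential(t, amplitude, tau, beta) - x) ** 2
                    for t, x in zip(times_ms, positions_deg)
                )
                if err < best_err:
                    best_err, best_params = err, (amplitude, tau, beta)
    return compressed_exponential(predict_t_ms, *best_params)
```

In a real gaze-contingent loop the fit would be re-run as each new tracker sample arrives, and the extrapolated position, rather than the latest raw sample, would drive the display update, compensating for the transmission delay.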
Traditionally, language processing has been attributed to a separate system in the brain, which supposedly works in an abstract propositional manner. However, there is increasing evidence suggesting that language processing is strongly interrelated with sensorimotor processing. Evidence for such an interrelation is typically drawn from interactions between language and perception or action. In the current study, the effect of words that refer to entities in the world with a typical location (e.g., sun, worm) on the planning of saccadic eye movements was investigated. Participants had to perform a lexical decision task on visually presented words and non-words. They responded by moving their eyes to a target in an upper (lower) screen position for a word (non-word) or vice versa. Eye movements were faster to locations compatible with the word's referent in the real world. These results provide evidence for the importance of linguistic stimuli in directing eye movements, even if the words do not directly transfer directional information.
Human vision requires fast eye movements (saccades). Each saccade causes a self-induced motion signal, but we are not aware of this potentially jarring visual input. Among the theorized causes of this phenomenon is a decrease in visual sensitivity before (presaccadic suppression) and during (intrasaccadic suppression) saccades. We investigated intrasaccadic suppression using a perceptual template model (PTM) relating visual detection to different signal-processing stages. One stage changes the gain on the detector's input; another increases uncertainty about the stimulus, allowing more noise into the detector; and other stages inject noise into the detector in a stimulus-dependent or -independent manner. By quantifying intrasaccadic suppression of flashed horizontal gratings at varying external noise levels, we obtained threshold-versus-noise (TVN) data, allowing us to fit the PTM. We tested whether any of the PTM parameters changed significantly between the fixation and saccade models and could therefore account for intrasaccadic suppression. We found that the dominant contribution to intrasaccadic suppression was a reduction in the gain of the visual detector. We discuss how our study differs from previous ones that have pointed to uncertainty as an underlying cause of intrasaccadic suppression and how the equivalent noise approach provides a framework for comparing the disparate neural correlates of saccadic suppression.
saccadic suppression; perceptual template model; equivalent noise; eye movements; noise injection; gain reduction; spatial uncertainty
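The logic of the threshold-versus-noise approach can be illustrated with the simpler linear-amplifier equivalent-noise model, a reduced sketch rather than the full PTM used in the study:

```python
import math

def contrast_threshold(n_ext, gain, n_eq, d_prime=1.0):
    """Signal contrast needed to reach criterion performance d_prime,
    for a detector with input gain and equivalent internal noise n_eq,
    facing external noise n_ext. Linear-amplifier sketch; the PTM adds
    multiplicative-noise and nonlinearity terms on top of this."""
    return (d_prime / gain) * math.sqrt(n_ext ** 2 + n_eq ** 2)
```

In this sketch a pure gain reduction scales thresholds by the same factor at every external-noise level, whereas extra internal noise (larger n_eq) elevates thresholds mainly where external noise is low. Fitting TvN data therefore separates the two accounts, which is the logic by which the study attributes intrasaccadic suppression to gain reduction rather than to noise injection or uncertainty.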
The success of the human species in interacting with the environment depends on the ability to maintain spatial stability despite the continuous changes in sensory and motor inputs owing to movements of eyes, head and body. In this paper, I will review recent advances in the understanding of how the brain deals with the dynamic flow of sensory and motor information in order to maintain spatial constancy of movement goals. The first part summarizes studies in the saccadic system, showing that spatial constancy is governed by a dynamic feed-forward process, by gaze-centred remapping of target representations in anticipation of and across eye movements. The subsequent sections relate to other oculomotor behaviour, such as eye–head gaze shifts, smooth pursuit and vergence eye movements, and their implications for feed-forward mechanisms for spatial constancy. Work that studied the geometric complexities in spatial constancy and saccadic guidance across head and body movements, distinguishing between self-generated and passively induced motion, indicates that both feed-forward and sensory feedback processing play a role in spatial updating of movement goals. The paper ends with a discussion of the behavioural mechanisms of spatial constancy for arm motor control and their physiological implications for the brain. Taken together, the emerging picture is that the brain computes an evolving representation of three-dimensional action space, whose internal metric is updated in a nonlinear way, by optimally integrating noisy and ambiguous afferent and efferent signals.
inflow versus outflow; reference frames; Bayesian; neural; whole-body motion
The study presented here analyzed the patterns of relationship between oculomotor performance and psychopathology, focusing on depression, bipolar disorder, schizophrenia, attention-deficit hyperactivity disorder, and anxiety disorder.
Scientific articles published from 1967 to 2013 in the PubMed/Medline, ISI Web of Knowledge, Cochrane, and SciELO databases were reviewed.
Saccadic eye movements appear to be heavily involved, via a direct mechanism, in the psychiatric diseases covered in this review. The changes in the execution of eye movement tasks seen in patients with psychopathologies across various studies confirm that eye movement is associated with the cognitive and motor systems.
Saccadic eye movement changes appear to be heavily involved in the psychiatric disorders covered in this review and may be considered a possible marker of some disorders. The few existing studies that approach the topic demonstrate a need to improve the experimental paradigms, as well as the methods of analysis. Most of them report only behavioral variables (latency/reaction time); electrophysiological measures are absent.
depression; bipolar disorder; attention-deficit hyperactivity disorder; schizophrenia; anxiety disorder
When goal-directed movements are inaccurate, two responses are generated by the brain: a fast motor correction toward the target and an adaptive motor recalibration developing progressively across subsequent trials. For the saccadic system, there is a clear dissociation between the fast motor correction (corrective saccade production) and the adaptive motor recalibration (primary saccade modification). Error signals used to trigger corrective saccades and to induce adaptation are based on post-saccadic visual feedback. The goal of this study was to determine if similar or different error signals are involved in saccadic adaptation and in corrective saccade generation. Saccadic accuracy was experimentally altered by systematically displacing the visual target during motor execution. Post-saccadic error signals were studied by manipulating visual information in two ways. First, the duration of the displaced target after primary saccade termination was set at 15, 50, 100 or 800 ms in different adaptation sessions. Second, in some sessions, the displaced target was followed by a visual mask that interfered with visual processing. Because they rely on different mechanisms, the adaptation of reactive saccades and the adaptation of voluntary saccades were both evaluated. We found that saccadic adaptation and corrective saccade production were both affected by the manipulations of post-saccadic visual information, but in different ways. This first finding suggests that different types of error signal processing are involved in the induction of these two motor corrections. Interestingly, voluntary saccades required a longer duration of post-saccadic target presentation to reach the same amount of adaptation as reactive saccades. Finally, the visual mask interfered with the production of corrective saccades only during the voluntary saccades adaptation task. 
These last observations suggest that post-saccadic perception depends on the previously performed action and that the differences between saccade categories in motor correction and adaptation arise at an early level of visual processing.
The durations and trajectories of our saccadic eye movements are remarkably stereotyped. We have no voluntary control over these properties but they are determined by the movement amplitude and, to a smaller extent, also by the movement direction and initial eye orientation. Here we show that the stereotyped durations and trajectories are optimal for minimizing the variability in saccade endpoints that is caused by motor noise. The optimal duration can be understood from the nature of the motor noise, which is a combination of signal-dependent noise favoring long durations, and constant noise, which prefers short durations. The different durations of horizontal vs. vertical and of centripetal vs. centrifugal saccades, and the somewhat surprising properties of saccades in oblique directions are also accurately predicted by the principle of minimizing movement variability. The simple and sensible principle of minimizing the consequences of motor noise thus explains the full stereotypy of saccadic eye movements. This suggests that saccades are so stereotyped because that is the best strategy to minimize movement errors for an open-loop motor system.
In primates, it is well known that there is a consistent relationship between the duration, peak velocity and amplitude of saccadic eye movements, known as the ‘main sequence’. The reason why such a stereotyped relationship evolved is unknown. We propose that a fundamental constraint on the deployment of foveal vision lies in the motor system, which is perturbed by signal-dependent noise on the motor command. This noise imposes a compromise between the speed and accuracy of an eye movement. We propose that saccade trajectories have evolved to optimize a trade-off between the accuracy and duration of the movement. Taking a semi-analytical approach, we use Pontryagin’s minimum principle to show that there is an optimal trajectory for a given amplitude and duration, and that there is an optimal duration for a given amplitude. It follows that the peak velocity is also fixed for a given amplitude. These predictions are in good agreement with observed saccade trajectories and the main sequence. Moreover, this model predicts a small saccadic dead-zone in which it is better to remain eccentric of the target than to make a saccade onto it. We conclude that the main sequence has evolved as a strategy to optimize the trade-off between accuracy and speed.
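The trade-off described in these two abstracts can be written compactly. With a motor command u(t) driving the eye over duration T, endpoint variance under signal-dependent plus constant motor noise takes a form such as the following (illustrative notation, not the authors' exact formulation):

```latex
\mathrm{Var}\!\left[x(T)\right] = \int_0^T \left( k^2\, u(t)^2 + c \right)\,\mathrm{d}t
```

The signal-dependent term k²u(t)² favors small commands spread over a long movement, while the constant term c accumulates with duration; minimizing the sum over trajectories and durations yields, for each amplitude, a single optimal duration and velocity profile, recovering the main sequence.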
The neural correlates of creativity are poorly understood. Freestyle rap provides a unique opportunity to study spontaneous lyrical improvisation, a multidimensional form of creativity at the interface of music and language. Here we use functional magnetic resonance imaging to characterize this process. Task contrast analyses indicate that improvised performance is characterized by dissociated activity in medial and dorsolateral prefrontal cortices, providing a context in which stimulus-independent behaviors may unfold in the absence of conscious monitoring and volitional control. Connectivity analyses reveal widespread improvisation-related correlations between medial prefrontal, cingulate motor, perisylvian cortices and amygdala, suggesting the emergence of a network linking motivation, language, affect and movement. Lyrical improvisation appears to be characterized by altered relationships between regions coupling intention and action, in which conventional executive control may be bypassed and motor control directed by cingulate motor mechanisms. These functional reorganizations may facilitate the initial improvisatory phase of creative behavior.
Human vision remains perceptually stable even though retinal inputs change rapidly with each eye movement. Although the neural basis of visual stability remains unknown, a recent psychophysical study pointed to the existence of visual feature-representations anchored in environmental rather than retinal coordinates (e.g. ‘spatiotopic’ receptive fields; Melcher, D., and Morrone, M.C. (2003). Spatiotopic temporal integration of visual motion across saccadic eye movements. Nat Neurosci 6, 877-881). In that study, sensitivity to a moving stimulus presented after a saccadic eye movement was enhanced when preceded by another moving stimulus at the same spatial location prior to the saccade. The finding is consistent with spatiotopic sensory integration, but it could also have arisen from a probabilistic improvement in performance due to the presence of more than one motion signal for the perceptual decision. Here we show that this statistical advantage accounts completely for summation effects in this task. We first demonstrate that measurements of summation are confounded by noise related to an observer's uncertainty about motion onset times. When this uncertainty is minimized, comparable summation is observed irrespective of whether two motion signals occupy the same or different locations in space, and whether they contain the same or opposite directions of motion. These results are incompatible with the tuning properties of motion-sensitive sensory neurons and provide no evidence for a spatiotopic representation of visual motion. Instead, summation in this context reflects a decision mechanism that uses abstract representations of sensory events to optimize choice behavior.
eye movements; vision; motion; spatial coding; saccade; psychophysics
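The "probabilistic improvement" that the study above pits against spatiotopic integration is ordinary probability summation: two independent opportunities to detect a signal improve performance even with no sensory integration at all. A minimal sketch (the value of `p_single` is hypothetical, not taken from the study):

```python
# Probability summation: if each of two independent motion signals is detected
# with probability p, the chance that at least one is detected is 1 - (1 - p)^2,
# a purely statistical advantage requiring no sensory integration.
def p_either(p_single):
    return 1 - (1 - p_single) ** 2

print(round(p_either(0.6), 2))  # 0.84
```

This is why comparable summation for same- versus different-location (and same- versus opposite-direction) signals is diagnostic: a decision-level statistical advantage predicts it, whereas motion-tuned sensory integration does not.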
The countermanding (or stop signal) task probes the control of the initiation of a movement by measuring subjects’ ability to withhold a movement, at various degrees of preparation, in response to an infrequent stop signal. Previous research found that saccades are initiated when the activity of movement-related neurons reaches a threshold, and saccades are withheld if the growth of that activity is interrupted. To extend and evaluate this relationship of frontal eye field (FEF) activity to saccade initiation, two new analyses were performed. First, we fit a neurometric function, which describes the proportion of stop-signal trials in which neural activity exceeded a criterion discharge rate as a function of stop signal delay, to the inhibition function, which describes the probability of producing a saccade as a function of stop signal delay. The activity of movement-related but not visual neurons provided the best correspondence between neurometric and inhibition functions. Second, we determined the criterion discharge rate that optimally discriminated between the distributions of discharge rates measured on trials in which saccades were produced or withheld. Differential activity of movement-related but not visual neurons could distinguish whether a saccade occurred. The threshold discharge rates determined for individual neurons through these two methods agreed. To investigate how reliably movement-related activity predicted movement initiation, the analyses were carried out with samples of activity from increasing numbers of trials from the same or from different neurons. The reliability of both measures of initiation threshold improved with the number of trials and neurons, reaching an asymptote at between 10 and 20 movement-related neurons. Combining the activity of visual neurons did not improve the reliability of predicting saccade initiation.
These results demonstrate how the activity of a population of movement-related but not visual neurons in the FEF contributes to the control of saccade initiation. The results also validate these analytical procedures for identifying signals that control saccade initiation in other brain structures.
FRONTAL CORTEX; MOTOR CONTROL; OCULOMOTOR; REACTION TIME; RESPONSE TIME; STOCHASTIC MODELS; STOP SIGNAL; SACCADE LATENCY
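The second analysis described above, finding the criterion discharge rate that optimally discriminates produced-trial from withheld-trial activity, can be sketched as a simple threshold search. The discharge rates below are simulated placeholders standing in for FEF recordings, and the balanced-accuracy objective is one reasonable choice of discrimination criterion, not necessarily the one used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated movement-related discharge rates (spikes/s), not real FEF data:
produced = rng.normal(95.0, 12.0, 300)   # trials ending in a saccade
withheld = rng.normal(60.0, 12.0, 300)   # successfully cancelled trials

def best_criterion(produced, withheld, n=1000):
    """Scan candidate thresholds and keep the one that best separates
    produced from withheld trials (balanced classification accuracy)."""
    lo = min(produced.min(), withheld.min())
    hi = max(produced.max(), withheld.max())
    candidates = np.linspace(lo, hi, n)
    accuracy = [((produced > c).mean() + (withheld <= c).mean()) / 2
                for c in candidates]
    return candidates[int(np.argmax(accuracy))]

crit = best_criterion(produced, withheld)
# For equal-variance Gaussian distributions the optimal criterion falls near
# the midpoint of the two means (about 77.5 spikes/s in this simulation).
```

Pooling trials across simulated neurons in this scheme would sharpen the estimate of `crit`, paralleling the reported improvement in reliability up to an asymptote of 10 to 20 movement-related neurons.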