It has been suggested that one way we may create a stable percept of the visual world across multiple eye movements is to pass information from one set of neurons to another around the time of each eye movement. Previous studies have shown that some neurons in the lateral intraparietal area (LIP) exhibit anticipatory remapping: these neurons respond to a stimulus that a saccade will bring into their receptive field before the stimulus actually arrives there. LIP responses during fixation are thought to represent attentional priority, behavioral relevance, or value. In this study, we test whether the remapped response represents this attentional priority by examining the activity of LIP neurons while animals perform a visual foraging task. We find that the population responds more to a target than to a distractor even before the saccade that will bring the stimulus into the receptive field begins. Within 20 ms of the saccade ending, the responses in almost a third of LIP neurons closely resemble the responses that will emerge during stable fixation. Finally, we show that in these neurons and in the population as a whole, this remapping occurs for all stimuli at all locations across the visual field and for both long and short saccades. We conclude that this complete remapping of attentional priority across the visual field could underlie spatial stability across saccades.
The dorsal attentional network is known for its role in directing top-down visual attention toward task-relevant stimuli. This goal-directed nature of the dorsal network makes it a suitable candidate for processing and extracting predictive information from the visual environment. In this review we briefly summarize findings that delineate the neural substrates contributing to predictive learning at two levels within the dorsal attentional system: the frontal eye field (FEF) and the posterior parietal cortex (PPC). We also discuss the similarities and differences between these two regions when it comes to learning predictive information. The current findings from the literature suggest that the FEF may be more involved in top-down spatial attention, whereas the parietal cortex processes task-relevant attentional influences driven by stimulus salience; both contribute to the processing of predictive cues, but at different time points.
probability; predictability; visual attention; eye movements; TMS; transcranial magnetic stimulation
The goal of the current study was to investigate how salience-driven and goal-driven processes unfold during visual search over multiple eye movements. Eye movements were recorded while observers searched for a target, which was located on (Experiment 1) or defined as (Experiment 2) a specific orientation singleton. This singleton could be the most, medium, or least salient element in the display. Results were analyzed as a function of response time, separately for initial and second eye movements. Irrespective of the search task, initial saccades elicited shortly after the onset of the search display were primarily salience-driven, whereas initial saccades elicited after approximately 250 ms were completely unaffected by salience. With increasing response times, initial saccades were increasingly guided in line with task requirements. Second saccades were completely unaffected by salience and were consistently goal-driven, irrespective of response time. These results suggest that stimulus salience affects the visual system only briefly after a visual image enters the brain and has no effect thereafter.
Selective attention is closely linked to eye movements. Prior to a saccade, attention shifts to the saccadic goal at the expense of surrounding locations. Such a constricted attentional field, while useful for ensuring accurate saccades, constrains the spatial range of high-quality perceptual analysis. The present study showed that attention could be allocated to locations other than the saccadic goal without disrupting the ongoing pattern of saccades. Saccades were made sequentially along a color-cued path. Attention was assessed by a visual memory task presented during a random pause between successive saccades. Saccadic planning had several effects on memory: (1) fewer letters were remembered during intersaccadic pauses than during maintained fixation; (2) letters appearing on the saccadic path, including locations previously examined, could be remembered, whereas off-path performance was near chance; and (3) memory was better at the saccadic target than at all other locations, including the currently fixated location. These results show that the distribution of attention during intersaccadic pauses results from a combination of top-down enhancement at the saccadic target and a more automatic allocation of attention to selected display locations. This suggests that the visual system has mechanisms to control the distribution of attention without interfering with ongoing saccadic programming.
Visual short-term memory (VSTM) has received intensive study over the past decade, with research focused on VSTM capacity and representational format. Yet, the function of VSTM in human cognition is not well understood. Here we demonstrate that VSTM plays an important role in the control of saccadic eye movements. Intelligent human behavior depends on directing the eyes to goal-relevant objects in the world, yet saccades are very often inaccurate and require correction. We hypothesized that VSTM is used to remember the features of the current saccade target so that it can be rapidly reacquired after an errant saccade, a fundamental task faced by the visual system thousands of times each day. In four experiments, memory-based gaze correction was found to be accurate, fast, automatic, and largely unconscious. In addition, a concurrent VSTM load was found to interfere with memory-based gaze correction, but a verbal short-term memory load did not. These findings demonstrate that VSTM plays a direct role in a fundamentally important aspect of visually guided behavior, and they suggest the existence of previously unknown links between VSTM representations and the oculomotor system.
Recent studies provide evidence for task-specific influences on saccadic eye movements. For instance, saccades exhibit higher peak velocity when the task requires coordinating eye and hand movements. The current study shows that the need to process task-relevant visual information at the saccade endpoint can be, in itself, sufficient to cause such effects. In this study, participants performed a visual discrimination task which required a saccade for successful completion. We compared the characteristics of these task-related saccades to those of classical target-elicited saccades, which required participants to fixate a visual target without performing a discrimination task. The results show that task-related saccades are faster and initiated earlier than target-elicited saccades. Differences between both saccade types are also noted in their saccade reaction time distributions and their main sequences, i.e., the relationship between saccade velocity, duration, and amplitude.
The purpose of the present study was to examine the influence of task set on the spatial and temporal characteristics of eye movements during scene perception. In previous work, when strong control was exerted over the viewing task via specification of a target object (as in visual search), task set biased spatial, rather than temporal, parameters of eye movements. Here, we find that more participant-directed tasks (in which the task establishes general goals of viewing rather than specific objects to fixate) affect not only spatial parameters (e.g., saccade amplitude) but also temporal parameters (e.g., fixation duration). Further, task set influenced the rate of change in fixation duration over the course of viewing but not the rate of change in saccade amplitude, suggesting independent mechanisms for the control of these parameters.
Visual saliency is the perceptual quality that makes some items in visual scenes stand out from their immediate contexts. Visual saliency plays important roles in natural vision in that saliency can direct eye movements, deploy attention, and facilitate tasks like object detection and scene understanding. A central unsolved issue is: What features should be encoded in the early visual cortex for detecting salient features in natural scenes? To explore this issue, we propose a hypothesis that visual saliency is based on efficient encoding of the probability distributions (PDs) of visual variables in specific contexts in natural scenes, referred to as context-mediated PDs in natural scenes. Under this hypothesis, computational units in the model of the early visual system do not act as feature detectors but rather as estimators of the context-mediated PDs of a full range of visual variables in natural scenes, which directly give rise to a measure of visual saliency of any input stimulus. To test this hypothesis, we developed a model of the context-mediated PDs in natural scenes using a modified algorithm for independent component analysis (ICA) and derived a measure of visual saliency based on these PDs estimated from a set of natural scenes. We demonstrated that visual saliency based on the context-mediated PDs in natural scenes effectively predicts human gaze in free viewing of both static and dynamic natural scenes. This study suggests that computation based on the context-mediated PDs of visual variables in natural scenes may underlie the neural mechanism in the early visual cortex for detecting salient features in natural scenes.
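As a rough illustration of the probability-based saliency idea described above (a minimal sketch only, not the authors' model: it substitutes off-the-shelf FastICA and independent Gaussian fits for their modified ICA algorithm and context-mediated distributions, and the function names are hypothetical), saliency can be computed as the surprisal of a patch's feature responses:

import numpy as np
from sklearn.decomposition import FastICA

def extract_patches(image, size=8, step=4):
    """Collect square patches from a 2-D grayscale image as flattened vectors."""
    patches, coords = [], []
    h, w = image.shape
    for y in range(0, h - size + 1, step):
        for x in range(0, w - size + 1, step):
            patches.append(image[y:y + size, x:x + size].ravel())
            coords.append((y, x))
    return np.asarray(patches, dtype=float), coords

def saliency_map(image, n_components=32, size=8, step=4):
    """Score each patch by the negative log probability (surprisal) of its
    ICA feature responses under distributions fitted to the same scene."""
    patches, coords = extract_patches(image, size, step)
    patches -= patches.mean(axis=1, keepdims=True)         # remove local mean luminance
    ica = FastICA(n_components=n_components, random_state=0, max_iter=500)
    responses = ica.fit_transform(patches)                  # feature responses per patch
    # Crude stand-in for context-mediated probability distributions:
    # an independent Gaussian fitted to each feature response.
    mu, sigma = responses.mean(axis=0), responses.std(axis=0) + 1e-8
    log_p = -0.5 * (((responses - mu) / sigma) ** 2) - np.log(sigma * np.sqrt(2 * np.pi))
    surprisal = -log_p.sum(axis=1)                           # low probability -> high saliency
    sal = np.zeros(image.shape)
    for s, (y, x) in zip(surprisal, coords):
        sal[y:y + size, x:x + size] = np.maximum(sal[y:y + size, x:x + size], s)
    return sal

In this simplified form, patches whose feature responses are improbable under the fitted distributions receive high saliency scores, mirroring the idea that low-probability visual variables stand out from their context.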
With each eye movement, stationary objects in the world change position on the retina, yet we perceive the world as stable. Spatial updating, or remapping, is one neural mechanism by which the brain compensates for shifts in the retinal image caused by voluntary eye movements. Remapping of a visual representation is believed to arise from a widespread neural circuit including parietal and frontal cortex. The current experiment tests the hypothesis that extrastriate visual areas in human cortex have access to remapped spatial information. We tested this hypothesis using functional magnetic resonance imaging (fMRI). We first identified the borders of several occipital lobe visual areas using standard retinotopic techniques. We then tested subjects while they performed a single-step saccade task analogous to the task used in neurophysiological studies in monkeys, and two conditions that control for visual and motor effects. We analyzed the fMRI time series data with a nonlinear, fully Bayesian hierarchical statistical model. We identified remapping as activity in the single-step task that could not be attributed to purely visual or oculomotor effects. The strength of remapping was roughly monotonic with position in the visual hierarchy: remapped responses were largest in areas V3A and hV4 and smallest in V1 and V2. These results demonstrate that updated visual representations are present in cortical areas that are directly linked to visual perception.
The lateral intraparietal area (LIP) has been implicated as a salience map for the control of saccadic eye movements and visual attention. Here, we report evidence linking the encoding of saccades and saliency in LIP to the modulation of several other sensory-motor behaviors in monkeys. In many LIP neurons, there was a significant trial-by-trial correlation between the firing rate just before a saccade and the post- or pre-saccadic pursuit eye velocity. Some neurons also showed trial-by-trial correlations between firing rate and the speed of “glissades” that occur at the end of saccades to stationary targets. LIP-pursuit correlations were spatially specific and were strong only when the target appeared in the receptive/movement field of the neuron under study. We suggest that LIP is a component of a salience representation that modulates the strength of visual-motor transmission for pursuit, and that it may play a similar role for many movements, beyond its traditional roles in guiding saccadic eye movements and localizing attention.
Natural vision involves sequential eye movements that bring the fovea to locations selected by peripheral vision. How peripheral visual field loss (PVFL) affects this process is not well understood. We examine how the location and extent of PVFL affect eye movement behavior in a naturalistic visual search task. Ten patients with PVFL and 13 normally sighted subjects with full visual fields (FVF) completed 30 visual searches monocularly. Subjects located a 4° × 4° target, pseudo-randomly selected within a 26° × 11° natural image. Eye positions were recorded at 50 Hz. Search duration, fixation duration, saccade size, and number of saccades per trial were not significantly different between PVFL and FVF groups (p > 0.1). A χ2 test showed that the distributions of saccade directions for PVFL and FVF subjects were significantly different in 8 out of 10 cases (p < 0.01). Humphrey Visual Field pattern deviations for each subject were compared with the spatial distribution of eye movement directions. There were no significant correlations between saccade directional bias and visual field sensitivity across the 10 patients. Visual search performance was not significantly affected by PVFL. An analysis of eye movement directions revealed that patients with PVFL showed a biased directional distribution that was not directly related to the locus of vision loss, challenging feed-forward models of eye movement control. Consequently, many patients do not optimally compensate for visual field loss during visual search.
eye movements; visual search; low vision; glaucoma; peripheral vision; natural scenes
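As a rough sketch of the direction-distribution comparison reported in the abstract above (not the authors' analysis code; the binning, data, and names are hypothetical), two sets of saccade directions can be compared with a chi-square test:

import numpy as np
from scipy.stats import chi2_contingency

def direction_histogram(angles_deg, n_bins=8):
    """Count saccade directions in equal angular bins spanning 0 to 360 degrees."""
    bins = np.linspace(0, 360, n_bins + 1)
    counts, _ = np.histogram(np.mod(angles_deg, 360), bins=bins)
    return counts

# Hypothetical saccade directions (degrees) for one patient and one control subject.
patient_angles = np.random.default_rng(0).uniform(0, 360, size=400)
control_angles = np.random.default_rng(1).uniform(0, 360, size=400)

table = np.vstack([direction_histogram(patient_angles),
                   direction_histogram(control_angles)])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")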
Many common tasks require us to individuate, in parallel, two or more objects out of a complex scene. Although the mechanisms underlying our abilities to count the number of items, to remember the visual properties of objects, and to make saccadic eye movements towards targets have been studied separately, each of these tasks requires selection of individual objects and shows a capacity limit. Here we show that a common factor, salience, determines the capacity limit in these various tasks. We manipulated bottom-up salience (visual contrast) and top-down salience (task relevance) in enumeration and visual memory tasks. As one item became increasingly salient, the subitizing range was reduced and memory performance for all other, less-salient items decreased. Overall, the pattern of results suggests that our abilities to enumerate and remember small groups of stimuli are grounded in an attentional priority or salience map which represents the location of important items.
Perception of scenes has typically been investigated using static or simplified visual displays. How attention is used to perceive and evaluate dynamic, realistic scenes is more poorly understood, in part due to the problem of comparing eye fixations to moving stimuli across observers. When the task and stimulus are common across observers, consistent fixation location can indicate that a region has high goal-based relevance. Here we investigated these issues when an observer has a specific, and naturalistic, task: closed-circuit television (CCTV) monitoring. We concurrently recorded eye movements and ratings of perceived suspiciousness as different observers watched the same set of clips from real CCTV footage. Trained CCTV operators showed greater consistency in fixation location and greater consistency in suspiciousness judgements than untrained observers. Training appears to increase between-operator consistency by teaching operators “what to look for” in these scenes. We used a novel “Dynamic Area of Focus (DAF)” analysis to show that in CCTV monitoring there is a temporal relationship between eye movements and subsequent manual responses, as we have previously found for a sports video watching task. For trained CCTV operators and for untrained observers, manual responses were most highly related to between-observer eye position spread when a temporal lag was introduced between the fixation and response data. Several hundred milliseconds after between-observer eye positions became most similar, observers tended to push the joystick to indicate perceived suspiciousness. Conversely, several hundred milliseconds after between-observer eye positions became dissimilar, observers tended to rate suspiciousness as low. These data provide further support for this DAF method as an important tool for examining goal-directed fixation behavior when the stimulus is a real moving image.
eye movements; scene perception; expertise; security and human factors; visual search
The primate superior colliculus (SC) has long been known to be involved in saccade generation. However, SC neurons also exhibit fixation-related and smooth-pursuit-related activity. A parsimonious explanation for these seemingly disparate findings is that the SC contains a map of behaviorally relevant goal locations, rather than just a motor map for saccades and fixation. This explanation predicts that SC activity should reflect the behavioral goal, even when the behavioral response is not fixation or saccades, and even if the goal does not correspond to a visual stimulus. We tested this prediction by employing a tracking task that dissociates the stimulus and goal locations. In this task, monkeys tracked the invisible midpoint between two peripheral bars, such that the visual stimuli were peripheral but the goal was foveal/parafoveal. We recorded from SC neurons representing peripheral locations associated with the stimulus or central locations associated with the goal. Most neurons with peripheral response fields did not respond differently during tracking than during passive viewing of the stimulus under fixation; most neurons with central response fields responded more during tracking than during fixation, despite the lack of a visual stimulus. Moreover, the spatial distribution of activity during tracking was larger than that during fixation or tracking of a foveal stimulus, suggesting that the greater spatial uncertainty about the invisible goal corresponded to more widespread SC activity. These results demonstrate the flexibility with which activity across the SC represents the location - and also the spatial precision - of behaviorally relevant goals for multiple eye movements.
superior colliculus; pursuit; voluntary eye movement; stimulus-response; behavioral goal; population coding
Many daily activities involve intrinsic or extrinsic goal-directed eye and hand movements. An extensive visuomotor coordination network including nigro-striatal pathways is required for efficient timing and positioning of eyes and hands. The aim of this study was to investigate how Parkinson’s disease (PD) affects eye-hand coordination in tasks with different cognitive complexity.
We used a touch screen, an eye-tracking device, and a motion capture system to quantify changes in eye-hand coordination in early-stage PD patients (H&Y < 2.5) and age-matched controls. Timing and kinematics of eye and hand movements were quantified in four eye-hand coordination tasks (pro-tapping, dual planning, anti-tapping, and spatial memory).
In the pro-tapping task, saccade initiation towards extrinsic goals was not impaired. However, in the dual planning and anti-tapping tasks, initiation of saccades towards intrinsic goals was faster in PD patients. Hand movements were affected differently: initiation of the hand movement was delayed only in the pro-tapping and dual planning tasks. Overall, hand movements were executed more slowly in PD patients than in controls.
Whereas initiation of saccades in an extrinsic goal-directed task (the pro-tapping task) is not affected, early-stage PD patients have difficulty suppressing reflexive saccades towards extrinsic goals in tasks where the endpoint is an intrinsic goal (e.g., the dual planning and anti-tapping tasks). This is specific to eye movements, as hand movements have delayed responses in the pro-tapping and dual planning tasks. This suggests that the reported impairment of the dorsolateral prefrontal cortex in early-stage PD patients affects only the inhibition of eye movements. We conclude that timing and kinematics of eye and hand movements in visuomotor tasks are affected in PD patients. This result may have clinical significance by providing a behavioral marker for the early diagnosis of PD.
What determines whether a scene is remembered or forgotten? Our results show how visual scenes are encoded into memory at behaviorally relevant points in time.
The ability to remember a briefly presented scene depends on a number of factors, such as its saliency, novelty, degree of threat, or behavioral relevance to a task. Here, however, we show that the encoding of a scene into memory may depend not only on what the scene contains but also on when it occurs. Participants performed an attentionally demanding target detection task at fixation while also viewing a rapid sequence of full-field photographs of urban and natural scenes. Participants were then tested on whether they recognized a specific scene from the previous sequence. We found that scenes were recognized reliably only when presented concurrently with a target at fixation. This is evidence of a mechanism whereby traces of a visual scene are automatically encoded into memory at behaviorally relevant points in time, regardless of the spatial focus of attention.
What determines whether a visual scene is remembered or forgotten? The ability to remember a briefly presented scene depends on a number of factors, such as its saliency, novelty, degree of threat, or relevance to a behavioral outcome. Generally, attention is thought to be key, in that you can only remember the part of a visual scene you were paying attention to at any given moment. Here, we show that memory for visual scenes may depend not on your attention or on what a scene contains, but on when the scene is presented. In this study, attention to one task enhances recognition performance for scenes in a second task only when the first task has behavioral relevance. Our results suggest a mechanism whereby traces of a visual scene are automatically encoded into memory, even though the scene is not the spatial focus of attention.
Visual search requires sequences of saccades. Many studies have focused on spatial aspects of saccadic decisions, while relatively few (e.g., Hooge & Erkelens, 1999) consider timing. We studied saccadic timing during search for targets (thin circles containing tilted lines) located among nontargets (thicker circles). Tasks required either (a) estimating the mean tilt of the lines, or (b) looking at targets without a concurrent psychophysical task. The visual similarity of targets and nontargets affected both the probability of hitting a target and the saccade rate in both tasks. Saccadic timing also depended on immediate conditions, specifically, (a) the type of currently fixated location (dwell time was longer on targets than on nontargets), (b) the type of goal (dwell time was shorter prior to saccades that hit targets), and (c) the ordinal position of the saccade in the sequence. The results show that timing decisions take into account the difficulty of finding targets, as well as the cost of delays. Timing strategies may be a compromise between attempting to find and locate targets, or other suitable landing locations, using eccentric vision (at the cost of increased dwell times) and exploring less selectively at a rapid rate.
eye movements; saccadic timing; saccades; visual search; saccadic planning
Visual search is a ubiquitous task of great importance: it allows us to quickly find the objects we are looking for. During active search for an object (target), eye movements are made to different parts of the scene. Fixation locations are chosen based on a combination of information about the target and the visual input. At the end of a successful search, the eyes typically fixate on the target. But does this imply that target identification occurs while looking at it? The duration of a typical fixation (∼170 ms) and the neuronal latencies of both the oculomotor system and the visual stream indicate that there might not be enough time to do so. Previous studies have suggested the following solution to this dilemma: the target is identified extrafoveally, and this event triggers a saccade towards the target location. However, this has not been experimentally verified. Here, using a search display of oriented colored bars, we test the hypothesis that subjects recognize the target before they look at it. Using a gaze-contingent real-time technique, we prematurely stopped search shortly after subjects fixated the target. Afterwards, we asked subjects to identify the target location. We find that subjects can identify the target location even when fixating on the target for less than 10 ms. Longer fixations on the target do not increase detection performance but do increase confidence. In contrast, subjects cannot perform this task if they are not allowed to move their eyes. Thus, information about the target during conjunction search for colored oriented bars can, in some circumstances, be acquired at least one fixation ahead of reaching the target. The final fixation serves to increase confidence rather than performance, illustrating a distinct role of the final fixation in the subjective judgment of confidence rather than accuracy.
eye movements; object recognition; psychophysics; top-down attention; visual search; confidence judgement
Over the past decade several research groups have taken a renewed interest in the special role of a type of small eye movement, called ‘microsaccades’, in various visual processes, such as the activation of neurons in the central nervous system or the prevention of image fading. As the study of microsaccades and their relation to visual processes goes back at least half a century, it seems appropriate to review the more recent reports in light of the history of research on maintained oculomotor fixation in general, and on microsaccades in particular. Our review shows that there is no compelling evidence to support the view that microsaccades (or fixation saccades more generally) serve a necessary role in improving oculomotor control or in keeping the visual world visible. The role of the retinal transients produced by small saccades during fixation needs to be evaluated in the context of both the brisk image motions present during active visual tasks performed by freely moving people and the role of selective attention in modulating the strength of signals throughout the visual field.
microsaccade; oculomotor; vision; saccades; stabilized image; slow control; retinal image motion; VOR; attention; fixation
Visual attention and saccades are typically studied in artificial situations, with stimuli presented to the steadily fixating eye, or saccades made along specified paths. By contrast, in the real world, saccadic patterns are constrained only by the demands of the motivating task. We studied attention during pauses between saccades made to perform three free-viewing tasks: counting dots, pointing to the same dots with a visible cursor, or simply looking at the dots using a freely-chosen path. Attention was assessed by the ability to identify the orientation of a briefly-presented Gabor probe. All primary tasks produced losses in identification performance, with counting producing the largest losses, followed by pointing and then looking-only. Looking-only resulted in a 37% increase in contrast thresholds in the orientation task. Counting produced more severe losses that were not overcome by increasing Gabor contrast. Detection and localization of the Gabor, unlike identification, were largely unaffected by any of the primary tasks. Taken together, these results show that attention is required to control saccades, even with freely-chosen paths, but the attentional demands of saccades are less than those attached to tasks such as counting, which have a significant cognitive load. Counting proved to be a highly demanding task that either exhausted momentary processing capacity (e.g., working memory or executive functions) or encouraged a strategy of filtering out all signals irrelevant to counting itself. The fact that the attentional demands of saccades (as well as those of detection/localization) are relatively modest makes it possible to continually adjust both the spatial and temporal pattern of saccades so as to re-allocate attentional resources as needed to handle the complex and multifaceted demands of real-world environments.
saccades; attention; counting; pointing; natural tasks; eye movements; perception; psychophysics; orientation identification; localization
Humans and other animals with foveate vision make saccadic eye movements to prioritize the visual analysis of behaviorally relevant information. Even before movement onset, visual processing is selectively enhanced at the target of a saccade, presumably gated by brain areas controlling eye movements. Here we assess concurrent changes in visual performance and perceived contrast before saccades, and show that saccade preparation enhances perception rapidly, altering early visual processing in a manner akin to increasing the physical contrast of the visual input. Observers compared orientation and contrast of a test stimulus, appearing briefly before a saccade, to a standard stimulus, presented previously during a fixation period. We found simultaneous progressive enhancement in both orientation discrimination performance and perceived contrast as time approached saccade onset. These effects were robust as early as 60 ms after the eye movement was cued, much faster than the voluntary deployment of covert attention (without eye movements), which takes ~300 ms. Our results link the dynamics of saccade preparation, visual performance, and subjective experience and show that upcoming eye movements alter visual processing by increasing the signal strength.
With each eye movement, the image received by the visual system changes drastically. To maintain stable spatiotopic (world-centered) representations, the relevant retinotopic (eye-centered) coordinates must be continually updated. Although updating or remapping of visual scene representations can occur very rapidly, J. D. Golomb, M. M. Chun, and J. A. Mazer (2008) demonstrated that representations of sustained attention update more slowly than the remapping literature would predict; attentional benefits at previously attended retinotopic locations linger after completion of the saccade, even when this location is no longer behaviorally relevant. The present study explores the robustness of this “retinotopic attentional trace.” We report significant retinotopic facilitation despite attempts to eliminate or reduce it by enhancing spatiotopic reference frames with permanent visual cues in the stimulus display and by introducing a different task where the attended location is the saccade target itself. Our results support and extend our earlier model of native retinotopically organized salience maps that must be dynamically updated to reflect the task-relevant spatiotopic location with each saccade. Consistent with the idea that attentional facilitation arises from persistent, recurrent neural activity, it takes measurable time for this facilitation to decay, leaving behind a retinotopic attentional trace after the saccade has been executed, regardless of conflicting task demands.
coordinate systems; spatial attention; saccades; reference frame; remapping
The eye movements of native English speakers, native Chinese speakers, and bilingual Chinese/English speakers who were born either in China (and moved to the US at an early age) or in the US were recorded during six tasks: (1) reading, (2) face processing, (3) scene perception, (4) visual search, (5) counting Chinese characters in a passage of text, and (6) visual search for Chinese characters. Across the different groups, there was a strong tendency for consistency in eye movement behavior; if the fixation durations of a given viewer were long on one task, they tended to be long on other tasks (and the same tended to be true for saccade size). Some tasks, notably reading, did not conform to this pattern. Furthermore, experience with a given writing system had a large impact on fixation durations and saccade lengths. With respect to cultural differences, there was no evidence that Chinese participants spent more time looking at background information (and, conversely, less time looking at foreground information) than the American participants. Also, Chinese participants’ fixations were more numerous and of shorter duration than those of their American counterparts while viewing faces and scenes and counting Chinese characters in text.
Continuous visual information is important for movement initiation in a variety of motor tasks. However, even in the absence of visual information, people are able to initiate their responses by using motion extrapolation processes. Initiating actions based on these cognitive processes, however, can demand more attentional resources than are required in situations in which visual information is uninterrupted. In the experiment reported here, we sought to determine whether the absence of visual information would affect the latency to inhibit an anticipatory action.
The participants performed an anticipatory timing task in which they were instructed to move in synchrony with the arrival of a moving object at a predetermined contact point. On 50% of the trials, a stop sign appeared on the screen, serving as a signal for the participants to halt their movements. They performed the anticipatory task under two different viewing conditions: Full-View (uninterrupted) and Occluded-View (occlusion of the last 500 ms prior to the arrival at the contact point).
The results indicated that the absence of visual information prolonged the latency to suppress the anticipatory movement.
We suggest that the absence of visual information requires additional cortical processing that creates competing demands for neural resources. Reduced neural resources potentially cause increased reaction time to the inhibitory input or increased time estimation variability, which in combination would account for the prolonged latency.
In the natural world, the brain must handle inherent delays in visual processing. This is a problem particularly during dynamic tasks. A possible solution to visuo-motor delays is prediction of a future state of the environment based on the current state and on properties of the environment learned from experience. Prediction is well known to occur in both saccades and pursuit movements and is likely to depend on some kind of internal visual model. However, most evidence comes from controlled laboratory studies using simple paradigms. In this study, we examine eye movements made in the context of demanding natural behavior, while playing squash. We show that prediction is a pervasive component of gaze behavior in this context. We show in addition that these predictive movements are extraordinarily precise and operate continuously in time across multiple trajectories and multiple movements. This suggests that prediction is based on complex dynamic visual models of the way that balls move, accumulated over extensive experience. Since eye, head, arm, and body movements all co-occur, it seems likely that a common internal model of predicted visual state is shared by different effectors to allow flexible coordination patterns. It is generally agreed that internal models are responsible for predicting future sensory state for the control of body movements. The present work suggests that model-based prediction is likely to be a pervasive component of natural gaze control as well.
saccadic eye movements; prediction; internal models; squash; gaze pursuit