The spatial representation of a visual scene in the early visual system is well known. The optics of the eye map the three-dimensional environment onto two-dimensional images on the retina. These retinotopic representations are preserved in the early visual system. Retinotopic representations and processing are among the most prevalent concepts in visual neuroscience. However, it has long been known that a retinotopic representation of the stimulus is neither sufficient nor necessary for perception. The saccadic stimulus presentation paradigm and Ternus-Pikler displays have been used to investigate non-retinotopic processes with and without eye movements, respectively. However, neither of these paradigms eliminates the retinotopic representation of the spatial layout of the stimulus. Here, we investigated how stimulus features are processed in the absence of a retinotopic layout and in the presence of retinotopic conflict. We used anorthoscopic viewing (slit viewing) and pitted a retinotopic feature-processing hypothesis against a non-retinotopic feature-processing hypothesis. Our results support the predictions of the non-retinotopic feature-processing hypothesis and demonstrate the ability of the visual system to operate non-retinotopically at a fine feature-processing level in the absence of a retinotopic spatial layout. Our results suggest that perceptual space is actively constructed from the perceptual dimension of motion. The implications of these findings for normal ecological viewing conditions are discussed.
Non-retinotopic perception; feature attribution; feature binding; moving object perception; anorthoscopic perception
In primates, neurons sensitive to figure-ground status are located in striate cortex (area V1) and extrastriate cortex (area V2). Although much is known about the anatomical structure and connectivity of the avian visual pathway, the functional organization of the avian brain remains largely unexplored. To pinpoint the areas associated with figure-ground segregation in the avian brain, we used a radioactively labeled glucose analog to compare differences in glucose uptake after figure-ground, color, and shape discriminations. We also included a control group that received food on a variable-interval schedule, but was not required to learn a visual discrimination. Although the discrimination task depended on group assignment, the stimulus displays were identical for all three experimental groups, ensuring that all animals were exposed to the same visual input. Our analysis concentrated on the primary thalamic nucleus associated with visual processing, the nucleus rotundus (Rt), and two nuclei providing regulatory feedback, the pretectum (PT) and the nucleus subpretectalis/interstitio-pretecto-subpretectalis complex (SP/IPS). We found that figure-ground discrimination was associated with strong and nonlateralized activity of Rt and SP/IPS, whereas color discrimination produced strong and lateralized activation in Rt alone. Shape discrimination was associated with lower activity of Rt than in the control group. Taken together, our results suggest that figure-ground discrimination is associated with Rt and that SP/IPS may be a main source of inhibitory control. Thus, figure-ground segregation in the avian brain may occur earlier than in the primate brain.
visual discrimination; 2-deoxyglucose; nucleus rotundus; figure-ground; lateralization
Under normal viewing conditions, adjustments in body posture and involuntary head movements continually shift the eyes in space. Like all translations, these movements may yield depth information in the form of motion parallax, the differential motion on the retina of objects at different distances from the observer. However, studies on depth perception rarely consider the possible contribution of this cue, as the resulting changes in viewpoint appear too small to be of perceptual significance. Here, we quantified the parallax present during fixation in normally standing observers. We measured the trajectories followed by the eyes in space by means of a high-resolution head-tracking system and used an optical model of the eye to reconstruct the stimulus on the observer’s retina. We show that, within several meters from the observer, relatively small changes in depth yield changes in the velocity of the retinal stimulus that are well above perceptual thresholds. Furthermore, relative velocities are little influenced by fixation distance, target eccentricity, and the precise oculomotor strategy followed by the observer to maintain fixation. These results demonstrate that the parallax available during normal head-free fixation is a reliable source of depth information, which the visual system may use in a variety of tasks.
eye movements; fixation; fixational instability; microsaccade; motion parallax; postural sway
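The relative-velocity claim above follows from the standard small-angle motion-parallax relation: during lateral head translation at speed v while fixating at distance f, a target at distance d slips on the retina at roughly v(1/d − 1/f) rad/s. A minimal sketch of this geometry (the function name and the sample numbers are our own illustrations, not values from the study):

```python
import math

def parallax_deg_per_s(head_speed_m_s, target_dist_m, fixation_dist_m):
    """Approximate retinal slip (deg/s) of a target at target_dist_m during
    lateral head translation at head_speed_m_s while fixation is maintained
    at fixation_dist_m.  Small-angle approximation near the fovea."""
    omega_rad = head_speed_m_s * (1.0 / target_dist_m - 1.0 / fixation_dist_m)
    return math.degrees(abs(omega_rad))

# Illustrative values: ~5 mm/s of postural sway, fixation at 2 m,
# target 20 cm nearer than the fixation point.
slip = parallax_deg_per_s(0.005, 1.8, 2.0)
```

Note that a target at the fixation distance produces zero relative slip, so the cue signals depth relative to the fixation plane rather than absolute distance.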
Individual differences in face recognition are often contrasted with differences in object recognition using a single object category. Likewise, individual differences in perceptual expertise for a given object domain have typically been measured relative to only a single category baseline. In Experiment 1, we present a new test of object recognition, the Vanderbilt Expertise Test (VET), which is comparable in methods to the Cambridge Face Memory Test (CFMT) but uses eight different object categories. Principal component analysis reveals that the underlying structure of the VET can be largely explained by two independent factors, which demonstrate good reliability and capture interesting sex differences inherent in the VET structure. In Experiment 2, we show how the VET can be used to separate domain-specific from domain-general contributions to a standard measure of perceptual expertise. While domain-specific contributions are found for car matching for both men and women and for plane matching in men, women in this sample appear to use more domain-general strategies to match planes. In Experiment 3, we use the VET to demonstrate that holistic processing of faces predicts face recognition independently of general object recognition ability, which has a sex-specific contribution to face recognition. Overall, the results suggest that the VET is a reliable and valid measure of object recognition abilities and can measure both domain-general skills and domain-specific expertise, which were both found to depend on the sex of observers.
perceptual expertise; face; object recognition; sex differences
Ocular following responses (OFRs) are the initial tracking eye movements that can be elicited at ultra-short latency by sudden motion of a textured pattern. The OFR magnitude depends upon stimulus size, and also upon the spatial frequency (SF) of sine-wave gratings. Here we investigate the interaction of size and SF. We recorded initial OFRs in human subjects when 1D vertical sine-wave gratings were subject to horizontal motion. Gratings were restricted to elongated horizontal apertures—“strips”—aligned with the axis of motion. In Experiment 1, the SF and the height of a single strip were manipulated. The magnitude of the OFR increased with strip height up to some optimum value, while strip heights greater than this optimum produced smaller responses. This effect was strongly dependent on SF: the optimum strip height was smaller for higher SFs. In order to explore the underlying mechanism, Experiment 2 measured OFRs to stimuli composed of two thin horizontal strips—one in the upper visual field, the other in the lower visual field—whose vertical separation varied 32-fold. Stimuli of different sizes can be reconstructed from the sum of such horizontal strips. We found that the OFRs in Experiment 1 were smaller than the sum of the responses to the component stimuli, but greater than the average of those responses. We defined an averaging coefficient that described whether a given response was closer to the sum or to the average. For any one SF, the averaging coefficients were similar over a wide range of stimulus sizes, while they varied considerably (7-fold) for stimuli of different SFs.
visual motion; averaging; inhibition; normalization
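An averaging coefficient of the kind described above can be parameterized as the position of the combined response on the line between linear summation and pure averaging of the two component responses. One plausible formulation is sketched below (the exact definition used in the study may differ; this is an illustration of the concept only):

```python
def averaging_coefficient(r_combined, r_a, r_b):
    """Locate a combined response between linear summation (c = 0) and
    pure averaging (c = 1) of two component responses r_a and r_b.
    Illustrative parameterization, not necessarily the study's own."""
    r_sum = r_a + r_b
    r_avg = 0.5 * (r_a + r_b)
    return (r_sum - r_combined) / (r_sum - r_avg)

# A response exactly midway between sum (2.0) and average (1.0)
c = averaging_coefficient(r_combined=1.5, r_a=1.0, r_b=1.0)  # c = 0.5
```

Under this convention, the finding that combined OFRs fell between the sum and the average of the component responses corresponds to coefficients between 0 and 1.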
• Individual differences in numerosity acuity predict mathematical ability.
• We tested 300+ participants to see if this relationship is unique to numerosity.
• Visual numerosity and orientation task performance predicted mathematics scores.
• Performance improved with age, and males significantly outperformed females.
• This highlights links between mathematics and multiple visuospatial abilities.
Sensitivity to visual numerosity has previously been shown to predict human mathematical performance. However, it is not clear whether it is discrimination of numerosity per se that is predictive of mathematical ability, or whether the association is driven by more general task demands. To test this notion we had over 300 participants (ranging in age from 6 to 73 years) perform a symbolic mathematics test and 4 different visuospatial matching tasks. The visual tasks involved matching 2 clusters of Gabor elements for their numerosity, density, size or orientation by a method of adjustment. Partial correlation and regression analyses showed that sensitivity to visual numerosity, sensitivity to visual orientation and mathematical education level predict a significant proportion of shared as well as unique variance in mathematics scores. These findings suggest that sensitivity to visual numerosity is not a unique visual psychophysical predictor of mathematical ability. Instead, the data are consistent with mathematics representing a multi-factorial process that shares resources with a number of visuospatial tasks.
Number; Density; Size; Orientation; Mathematics; Spatial vision; IPS, intraparietal sulcus
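The partial-correlation logic used above—asking whether numerosity sensitivity predicts mathematics scores after controlling for other variables—can be sketched with the residual method: regress the control variable out of both measures and correlate the residuals. A minimal illustration on synthetic data (all variable names and numbers here are invented for demonstration):

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after linearly regressing the control
    variable z out of both (residual method)."""
    x, y, z = (np.asarray(a, dtype=float) for a in (x, y, z))
    Z = np.column_stack([np.ones_like(z), z])          # intercept + control
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residuals of x | z
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residuals of y | z
    return float(np.corrcoef(rx, ry)[0, 1])

# Synthetic example: x and y are related only through a shared factor z,
# so their raw correlation is substantial but the partial correlation is ~0.
rng = np.random.default_rng(0)
z = rng.normal(size=500)
x = z + rng.normal(size=500)
y = z + rng.normal(size=500)
raw = float(np.corrcoef(x, y)[0, 1])
pc = partial_corr(x, y, z)
```

When a raw correlation survives partialling (as numerosity and orientation sensitivity did here), the predictor carries unique variance beyond the controlled variables.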
Random mutagenesis combined with phenotypic screening using carefully crafted functional tests has successfully led to the discovery of genes that are essential for a number of functions. This approach does not require prior knowledge of the identity of the genes that are involved and is a way to ascribe function to the nearly 6000 genes for which knowledge of the DNA sequence has been inadequate to determine the function of the gene product. In an effort to identify genes involved in the visual system via this approach, we have tested over 9000 first and third generation offspring of mice treated with the mutagen N-ethyl-N-nitrosourea (ENU) for visual defects, as evidenced by abnormalities in the electroretinogram and appearance of the fundus. We identified 61 putative mutations with this procedure and outline the steps needed to identify the affected genes.
Mutagenesis; Forward genetics; Screening for visual function; Electroretinogram; Visual function in mouse; Mouse; Ethylnitrosourea; ENU; a-wave; b-wave; c-wave; STR; Functional test for vision
The visual system can use various cues to segment the visual scene into figure and background. We studied how human observers combine two of these cues, texture and color, in visual segmentation. In our task, the observers identified the orientation of an edge that was defined by a texture difference, a color difference, or both (cue combination). In a fourth condition, both texture and color information were available, but the texture and color edges were not spatially aligned (cue conflict). Performance markedly improved when the edges were defined by two cues, compared to the single-cue conditions. Observers only benefited from the two cues, however, when they were spatially aligned. A simple signal-detection model that incorporates interactions between texture and color processing accounts for the performance in all conditions. In a second experiment, we studied whether the observers are able to ignore a task-irrelevant cue in the segmentation task or whether it interferes with performance. Observers identified the orientation of an edge defined by one cue and were instructed to ignore the other cue. Three types of trial were intermixed: neutral trials, in which the second cue was absent; congruent trials, in which the second cue signaled the same edge as the target cue; and conflict trials, in which the second cue signaled an edge orthogonal to the target cue. Performance improved when the second cue was congruent with the target cue. Performance was impaired when the second cue was in conflict with the target cue, indicating that observers could not discount the second cue. We conclude that texture and color are not processed independently in visual segmentation.
visual segmentation; texture; color; cue combination; signal detection theory
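A standard benchmark for the cue-combination result above is the ideal-observer prediction for two statistically independent cues, in which sensitivities (d′) add in quadrature; interactions between texture and color processing would show up as deviations from this baseline. A hedged sketch (the numbers are illustrative, not the study's data):

```python
import math

def combined_dprime(d_texture, d_color):
    """Ideal-observer sensitivity for two statistically independent cues:
    d'_combined = sqrt(d'_texture**2 + d'_color**2).  A baseline against
    which cue interaction can be judged; not the paper's fitted model."""
    return math.hypot(d_texture, d_color)

# Two equally informative cues improve sensitivity by a factor of sqrt(2)
d_both = combined_dprime(1.0, 1.0)  # ~1.414
```

Observed combined performance above this prediction suggests facilitatory interaction; performance below it (or disruption by misaligned cues, as found here) indicates the cues are not processed independently.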
High-level adaptation effects reveal important features of the neural coding of objects and faces. View-adaptation in particular is a highly useful means of characterizing how depth rotation of the face is represented and therefore, how view-invariant recognition of the face may be achieved. In the present study, we used view adaptation to determine the extent to which depth rotations of a face are represented in an image-based or object-based manner. Specifically, we dissociated object-based axes from image-based axes via a 90-degree planar rotation of the adapting face and observed that participants’ responses pre- and post-adaptation are most consistent with an image-based representation of depth rotations of the face. We discuss our data in the context of previous results describing the impact of planar rotation on related aspects of face perception.
Face perception; Face adaptation; Invariant Recognition; View coding
Emerging evidence indicates rods can communicate with retinal ganglion cells (RGCs) via pathways that do not involve gap junctions. Here we investigated the significance of such pathways for central visual responses, using mice lacking a key gap junction protein (Cx36−/−) and carrying a mutation that disrupts cone phototransduction (Gnat2cpfl3). Electrophysiological recordings spanning the lateral geniculate nucleus revealed rod-mediated ON and OFF visual responses in virtually every cell from all major anatomical sub-compartments of this nucleus. Hence, we demonstrate that one or more classes of RGC receive input from Cx36-independent rod pathways and drive extensive ON and OFF responses across the visual thalamus.
mouse; scotopic; melanopsin; electroretinogram; connexin
Previous studies have suggested that photoreceptor synaptic inputs to depolarizing bipolar cells (DBCs or ON bipolar cells) are mediated by mGluR6 receptors and those to hyperpolarizing bipolar cells (HBCs or OFF bipolar cells) are mediated by AMPA/kainate receptors. Here we show that in addition to mGluR6 receptors which mediate the sign-inverting, depolarizing light responses, subpopulations of cone-dominated and rod/cone mixed DBCs use GluR4 AMPA receptors to generate a transient sign-preserving OFF response under light-adapted conditions. These AMPA receptors are located at the basal junctions postsynaptic to rods and they are silent under dark-adapted conditions, as tonic glutamate release in darkness desensitizes these receptors. Light adaptation enhances rod-cone coupling and thus allows cone photocurrents with an abrupt OFF depolarization to enter the rods. The abrupt rod depolarization triggers glutamate activation of unoccupied AMPA receptors, resulting in a transient OFF response in DBCs. It has been widely accepted that the DNQX-sensitive, OFF transient responses in retinal amacrine cells and ganglion cells are mediated exclusively by HBCs. Our results suggest that this view needs revision, as AMPA receptors in subpopulations of DBCs are likely to contribute significantly to the DNQX-sensitive OFF transient responses in light-adapted third- and higher-order visual neurons.
Depolarizing bipolar cells; transient OFF responses; GluR4 receptors; mGluR6 receptors; 6,7-dinitroquinoxaline-2,3-dione (DNQX); cyclothiazide; dark and light adaptation
Holistic processing (HP) of faces can be inferred from failure to selectively attend to part of a face. We explored how encoding time affects HP of faces by manipulating exposure duration of the study or test face in a sequential matching composite task. HP was observed for exposure as rapid as 50 ms, and was unaffected by whether exposure of the study or test face was limited. Holistic effects emerge as soon as performance is above chance, and are not larger at rapid exposure durations. Limiting exposure at study vs. test did have differential effects on response biases at the fastest exposure durations. These findings provide key constraints for understanding mechanisms of face recognition. These results are the first to demonstrate that HP of faces emerges for very briefly presented faces, and that limited perceptual encoding time affects response biases and overall level of performance but not whether faces are processed holistically.
face recognition; holistic processing; speeded recognition
In a previous study, Chung, Legge & Cheung (2004) showed that training using repeated presentation of trigrams (sequences of three random letters) resulted in an increase in the size of the visual span (number of letters recognized in a glance) and reading speed in the normal periphery. In this study, we asked whether we could optimize the benefit of trigram training on reading speed by using trigrams more specific to the reading task (i.e. trigrams frequently used in the English language) and presenting them according to their frequencies of occurrence in normal English usage and observers’ performance. Averaged across seven observers, our training paradigm (four days of training) increased the size of the visual span by 6.44 bits, with an accompanying 63.6% increase in the maximum reading speed, compared with the values before training. However, these benefits were not statistically different from those of Chung et al. (2004) using a random-trigram training paradigm. Our findings confirm the possibility of increasing the size of the visual span and reading speed in the normal periphery with perceptual learning, and suggest that the benefits of training on letter recognition and maximum reading speed may not be linked to the types of letter strings presented during training.
• Female and male participants adapted to trustworthy and untrustworthy faces.
• Adaptation made test faces look less like the adapting face in females.
• Male participants did not adapt to trustworthy and untrustworthy faces.
• Perception of trustworthiness is different in females and males.
Face adaptation paradigms have been used extensively to investigate the mechanisms underlying the processing of several different facial characteristics including face shape, identity, view and emotional expression. Judgements of facial trustworthiness can also be influenced by visual adaptation; to date these (un)trustworthy face aftereffects have only been shown following adaptation to emotional expression and facial masculinity/femininity. In this study we assessed how exposure to trustworthy and untrustworthy faces influenced the perception of the trustworthiness of subsequent test faces. In a mixed factorial design experiment, we tested the influence of adaptation to female and male faces on the perception of subsequent female and male faces in both female and male observers. In female observers, we found that following adaptation to trustworthy and untrustworthy faces subsequent test faces appeared less like the adapting stimuli. Sex of the adapting and test faces did not have a significant influence on these (un)trustworthy face aftereffects. In male observers, however, we found no significant effect of adaptation on the subsequent perception of face trustworthiness. The clear difference in the visual aftereffects induced in female and male observers indicates the operation of different mechanisms underlying the perception of facial trustworthiness, and future studies should investigate these mechanisms separately in female and male observers.
Face; Adaptation; Trustworthiness; Gender difference
This study measured spatial bisection acuity for horizontally and vertically separated line targets in 5 observers with infantile nystagmus syndrome (INS) and no obvious associated sensory abnormalities, and in two normal observers during comparable horizontal retinal image motion. For small spatial separations between the line targets, bisection acuity for both horizontally and vertically separated lines is worse in the observers with INS than in the normal observers. In four of the five observers with INS, bisection acuity for small target separations is poorer for horizontally compared to vertically separated lines. Because the motion smear generated by the retinal image motion during INS would be expected to influence horizontally separated targets, the degradation of bisection acuity for both vertically and horizontally separated lines indicates that a sensory neural deficit contributes to impaired visual functioning in observers with idiopathic INS.
infantile nystagmus; spatial bisection; hyperacuity; image motion; motion smear; meridional anisotropy
Perceiving biological motion is important for understanding the intentions and future actions of others. Perceiving an approaching person's behavior may be particularly important, because such behavior often precedes social interaction. To this end, the visual system may devote extra resources for perceiving an oncoming person's heading. If this were true, humans should show increased sensitivity for perceiving approaching headings, and as a result, a repulsive perceptual effect around the categorical boundary of leftward/rightward motion. We tested these predictions and found evidence for both. First, observers were especially sensitive to the heading of an approaching person; variability in estimates of a person's heading decreased near the category boundary of leftward/rightward motion. Second, we found a repulsion effect around the category boundary; a person walking approximately toward the observer was perceived as being repelled away from straight ahead. This repulsive effect was greatly exaggerated for perception of a very briefly presented person or perception of a chaotic crowd, suggesting that repulsion may protect against categorical errors when sensory noise is high. The repulsion effect with a crowd required integration of local motion and human form, suggesting an origin in high-level stages of visual processing. Similar repulsive effects may underlie categorical perception with other social features. Overall, our results show that a person's direction of walking is categorically perceived, with improved sensitivity at the category boundary and a concomitant repulsion effect.
categorical perception; reference repulsion; biological motion; ensemble coding; motion repulsion
Exogenous spatial attention can be automatically engaged by a cue presented in the visual periphery. To investigate the effects of exogenous attention, previous studies have generally used highly salient cues that reliably trigger attention. However, the cueing threshold of exogenous attention has not been examined. We investigated whether the attentional effect varies with cue salience. We examined the magnitude of the attentional effect on apparent contrast [Carrasco, M., Ling, S., & Read, S. (2004). Attention alters appearance. Nature Neuroscience, 7(3), 308–313.] elicited by cues with negative Weber contrast between 6% and 100%. Cue contrast modulated the attentional effect, even at cue contrasts above the level at which observers can perfectly localize the cue; hence, the result is not due to an increase in cue visibility. No attentional effect was observed when the 100% contrast cue was presented after the stimuli, ruling out cue bias or sensory interaction between cues and stimuli as alternative explanations. A second experiment, using the same paradigm with high-contrast motion stimuli, gave similar results, providing further evidence against a sensory interaction explanation, as the stimuli and task were defined on a visual dimension independent from cue contrast. Although exogenous attention is triggered automatically and involuntarily, the attentional effect is gradual.
Attention; Appearance; Perceived contrast; Perceived speed; Exogenous cue; Threshold; Cue salience
To explore the relative development of the dorsal and ventral extrastriate processing streams, we studied the development of sensitivity to form and motion in macaque monkeys (Macaca nemestrina). We used Glass patterns and random dot kinematograms (RDK) to assay ventral and dorsal stream function, respectively. We tested 24 animals, longitudinally or cross-sectionally, between the ages of 5 weeks and 3 years. Each animal was tested with Glass patterns and RDK stimuli with each of two pattern types – circular and linear – at each age using a two alternative forced-choice task. We measured coherence threshold for discrimination of the global form or motion pattern from an incoherent control stimulus. Sensitivity to global motion appeared earlier than to global form and was higher at all ages, but performance approached adult levels at similar ages. Infants were most sensitive to large spatial scale (Δx) and fast speeds; sensitivity to fine scale and slow speeds developed more slowly independently of pattern type. Within the motion domain, pattern type had little effect on overall performance. However, within the form domain, sensitivity for linear Glass patterns was substantially poorer than that for concentric patterns. Our data show comparatively early onset for global motion integration ability, perhaps reflecting early development of the dorsal stream. However, both pathways mature over long time courses reaching adult levels between two and three years after birth.
Visual development; Glass pattern; global motion; global form; extrastriate pathways; macaque monkey
Binocular rivalry is an intriguing phenomenon: when different images are displayed to the two eyes, perception alternates between these two images. What determines whether two monocular images engage in fusion or in rivalry: the physical difference between these images or the difference between the percepts resulting from the images? We investigated that question by measuring the interocular difference in grid orientation needed to produce a transition from fusion to rivalry and by shifting those transitions by means of a superimposed tilt illusion. Fusion was confirmed by correct stereoscopic slant perception of the grid. The superimposed tilt illusion was achieved by displaying small segments on the grids. We found that the illusion can shift the fusion-rivalry transitions, indicating that rivalry and fusion are based on the perceived orientations rather than the displayed ones. In a second experiment, we confirmed that the absence of binocular rivalry resulted in fusion and stereoscopic slant perception. We conclude that the superimposed tilt illusion arises at a level of visual processing prior to those stages mediating binocular rivalry and stereoscopic depth extraction.
Binocular vision; binocular rivalry; stereopsis; depth perception; orientation illusion
A model hypothesizing that basic mechanisms of associative learning and generalization underlie object categorization in vertebrates can account for a large body of animal and human data. Here, we report two experiments which implicate error-driven associative learning in pigeons’ recognition of objects across changes in viewpoint. Experiment 1 found that object recognition across changes in viewpoint depends on how well each view predicts reward. Analyses of generalization performance, spatial position of pecks to images, and learning curves all showed behavioral patterns analogous to those found in prior studies of relative validity in associative learning. In Experiment 2, pigeons were trained to recognize objects from multiple viewpoints, which usually promotes robust performance at novel views of the trained objects. However, when the objects possessed a salient, informative metric property for solving the task, the pigeons did not show view-invariant recognition of the training objects, a result analogous to the overshadowing effect in associative learning.
Associative learning; prediction error; object recognition; view invariance; categorization; pigeon
Previous studies have found that subjects can increase the velocity of accommodation using visual exercises such as pencil push-ups, flippers, Brock strings and the like. Myriad papers have shown improvement in accommodative facility (speed) and sufficiency (amplitude) on subjective tests following vision training, but few have objectively measured accommodation before and after training in either normal subjects or in patients diagnosed with accommodative infacility (abnormally slow dynamics). Accommodation is driven either directly by blur or indirectly by way of neural crosslinks from the vergence system. Until now, no study has objectively measured both accommodation and accommodative vergence before and after vision training, or the role vergence might play in modifying the speed of accommodation. In the present study, accommodation and accommodative vergence were measured with a Purkinje eye tracker/optometer before and after normal subjects trained on a flipper-like task in which the stimulus stepped between 0 and 2.5 diopters and back for over 200 cycles. Most subjects increased their speed of accommodation as well as their speed of accommodative vergence. Accommodative vergence led the accommodation response by approximately 77 msec before training and 100 msec after training. The vergence lead was most prominent in subjects with high accommodation and vergence velocities, and vergence leads tended to increase in conjunction with increases in accommodation velocity. We surmise that volitional vergence may help increase accommodation velocity by way of vergence-accommodation crosslinks.
accommodation; accommodative-vergence; blur; eye movements; human
Correctly perceiving the direction of a visible object with respect to one’s self (egocentric visual direction) requires that information about the location of the image on the retina (oculocentric visual direction) be combined with signals about the position of the eyes in the head. The Wells-Hering laws that govern the perception of visual direction and modern restatements of these laws assume implicitly that retinal and eye-position information are independent of one another. By measuring observers’ manual pointing responses to targets in different horizontal locations, we show that retinal and eye-position information are not treated independently in the brain. In particular, decreasing the relative visibility of one eye’s retinal image reduces the strength of the eye-position signal associated with that eye. The results can be accounted for by interactions between eye-specific retinal and eye-position signals at a common neural location.
Egocentric visual direction; visual suppression; eye-position; asymmetric vergence; heterophoria; pointing
First-order (contrast) surround suppression has been well characterized both psychophysically and physiologically, but relatively little is known as to whether the perception of second-order visual stimuli exhibits analogous center-surround interactions. Second-order surround suppression was characterized by requiring subjects to detect second-order modulation in stimuli presented alone or embedded in a surround. Both contrast-(CM) and orientation-modulated (OM) stimuli were used. For most subjects and both OM and CM stimuli, second-order surrounds caused thresholds to be higher, indicative of second-order suppression. For CM stimuli, suppression was orientation-specific, i.e., higher thresholds for parallel than for orthogonal surrounds. However, the evidence for orientation specificity of suppression for OM stimuli was weaker. These results suggest that normalization, leading to surround suppression, operates at multiple stages in cortical processing.
texture; 2nd-order; surround suppression; spatial vision
This article begins by reviewing recent work on 3D motion processing in the primate visual system. Some of these results suggest that 3D motion signals may be processed in the same circuitry already known to compute 2D motion signals. Such “multiplexing” has implications for the study of visual cortical circuits and neural signals. A more explicit appreciation of multiplexing—and the computations required for demultiplexing—may enrich the study of the visual system by emphasizing the importance of a structured and balanced “encoding/decoding” framework. In addition to providing a fresh perspective on how successive stages of visual processing might be approached, multiplexing also raises caveats about the value of “neural correlates” for understanding neural computation.
In 1967, Yarbus presented qualitative data from one observer showing that the patterns of eye movements were dramatically affected by the observer's task, suggesting that complex mental states could be inferred from scan paths. The strong claim of this very influential finding has never been rigorously tested. Our observers viewed photographs for 10 seconds each. They performed one of four image-based tasks while eye movements were recorded. A pattern classifier, given features from the static scan paths, could identify the image and the observer at above-chance levels. However, it could not predict a viewer's task. Shorter and longer (60-second) viewing epochs produced similar results. Critically, human judges also failed to identify the tasks performed by the observers based on the static scan paths. The Yarbus finding is evocative, and while it is possible an observer's mental state might be decoded from some aspect of eye movements, static scan paths alone do not appear to be adequate to infer complex mental states of an observer.
eye movements; multivariate pattern classification; Yarbus; task