Retinal motion can modulate visual sensitivity. For instance, low contrast drifting waveforms (targets) can be easier to detect when abutting the leading edges of movement in adjacent high contrast waveforms (inducers), rather than the trailing edges. This target-inducer interaction is contingent on the adjacent waveforms being consistent with one another – in-phase as opposed to out-of-phase. It has been suggested that this happens because there is a perceptually explicit predictive signal at leading edges of motion that summates with low contrast physical input – a ‘predictive summation’. Another possible explanation is a phase-sensitive ‘spatial summation’, a summation of physical inputs spread across the retina (not predictive signals). This should be non-selective in terms of position – it should be evident at leading, adjacent, and trailing edges of motion. To tease these possibilities apart, we examined target sensitivity at leading, adjacent, and trailing edges of motion. We also examined target sensitivity adjacent to flicker, and for a stimulus that is less susceptible to spatial summation, as it sums to grey across a small retinal expanse. We found evidence for spatial summation in all but the last condition. Finally, we examined sensitivity to an absence of signal at leading and trailing edges of motion, finding greater sensitivity at leading edges. These results are inconsistent with the existence of a perceptually explicit predictive signal in advance of drifting waveforms. Instead, we suggest that phase-contingent target-inducer modulations of sensitivity are explicable in terms of a directionally modulated spatial summation.
Motion; Spatial coding; Spatial summation; Predictive coding
Glaucoma is a multifactorial progressive ocular pathology, clinically presenting with damage to the retina and optic nerve, ultimately leading to blindness. Retinal ganglion cell loss in glaucoma results in vision loss. Vesl/Homer proteins are scaffolding proteins that are critical for maintaining synaptic integrity by clustering, organizing and functionally regulating synaptic proteins. Current anti-glaucoma therapies target intraocular pressure (IOP) as the sole modifiable clinical parameter. Long-term pharmacotherapy and surgical treatment do not prevent gradual visual field loss as the disease progresses, highlighting the need for new complementary, alternative and comprehensive treatment approaches. Vesl/Homer expression was measured in the retinae of DBA/2J mice, a preclinical genetic glaucoma model with spontaneous mutations resulting in a phenotype reminiscent of chronic human pigmentary glaucoma. Vesl/Homer proteins were differentially expressed in the aged, glaucomatous DBA/2J retina, at both the transcriptional and translational levels. Immunoreactivity for the long Vesl-1L/Homer 1c isoform, but not the immediate early gene product Vesl-1S/Homer 1a, was increased in the synaptic layers of the retina. This increased protein level of Vesl-1L/Homer 1c was correlated with phenotypes of increased disease severity and a decrease in visual performance. The increased expression of Vesl-1L/Homer 1c in the glaucomatous retina likely results in increased intracellular Ca2+ release through enhancement of synaptic coupling. The ensuing Ca2+ toxicity may thus activate neurodegenerative pathways and lead to the progressive loss of synaptic function in glaucoma. Our data suggest that higher levels of Vesl-1L/Homer 1c generate a more severe disease phenotype and may represent a viable target for therapy development.
glaucoma; neurodegeneration; Vesl/Homer; synaptic clustering; calcium channel; DBA/2J
Organisms use the process of selective attention to optimally allocate their computational resources to the instantaneously most relevant subsets of a visual scene, ensuring that they can parse the scene in real time. Many models of bottom-up attentional selection assume that elementary image features, like intensity, color and orientation, attract attention. Gestalt psychologists, however, argue that humans perceive whole objects before they analyze individual features. This is supported by recent psychophysical studies showing that objects predict eye fixations better than features. In this report we present a neurally inspired algorithm of object-based, bottom-up attention. The model rivals the performance of state-of-the-art non-biologically-plausible feature-based algorithms (and outperforms biologically plausible feature-based algorithms) in its ability to predict perceptual saliency (eye fixations and subjective interest points) in natural scenes. The model achieves this by computing saliency as a function of proto-objects that establish the perceptual organization of the scene. All computational mechanisms of the algorithm have direct neural correlates, and our results provide evidence for the interface theory of attention.
Multisensory interactions can lead to illusory percepts, as exemplified by the sound-induced extra flash illusion (SIFI: Shams et al., 2000, 2002). In this illusion, an audio-visual stimulus sequence consisting of two pulsed sounds and a light flash presented within a 100 ms time window generates the visual percept of two flashes. Here, we used colored visual stimuli to investigate whether concurrent auditory stimuli can affect the perceived features of the illusory flash. Zero, one or two pulsed sounds were presented concurrently with either a red or green flash or with two flashes of different colors (red followed by green) in rapid sequence. By querying both the number and color of the participants' visual percepts, we found that the double flash illusion is stimulus specific: i.e., two sounds paired with one red or one green flash generated the percept of two red or two green flashes, respectively. This implies that the illusory second flash is induced at a level of visual processing after perceived color has been encoded. In addition, we found that the presence of two sounds influenced the integration of color information from two successive flashes. In the absence of any sounds, a red and a green flash presented in rapid succession fused to form a single orange percept, but when accompanied by two sounds, this integrated orange percept was perceived to flash twice on a significant proportion of trials. In addition, the number of concurrent auditory stimuli modified the degree to which the successive flashes were integrated to an orange percept versus maintained as separate red-green percepts. Overall, these findings show that concurrent auditory input can affect both the temporal and featural properties of visual percepts.
auditory; visual; multisensory; flash illusion; color fusion; integration
Rhodopsin is trafficked to the rod outer segment of vertebrate rod cells with high fidelity. When rhodopsin transport is disrupted, retinal photoreceptors apoptose, resulting in the blinding disease autosomal dominant retinitis pigmentosa. Herein, we introduce rhodopsin-photoactivatable GFP-1D4 (rhodopsin-paGFP-1D4) for the purpose of monitoring rhodopsin transport in living cells. Rhodopsin-paGFP-1D4 contains photoactivatable GFP (paGFP) fused to rhodopsin’s C-terminus and the last eight amino acids of rhodopsin (1D4) appended to the C-terminus of paGFP. The fusion protein binds the chromophore 11-cis retinal and photoisomerizes upon light activation similarly to rhodopsin. It activates the G-protein transducin with kinetics similar to those of rhodopsin. Rhodopsin-paGFP-1D4 localizes to the same compartments as rhodopsin – the primary cilium in cultured IMCD cells and the outer segment of rod cells – in vitro and in vivo. This enables its use as a model of rhodopsin transport and highlights the importance of a free rhodopsin C-terminus in rod cell localization and health.
Rhodopsin; GPCR; photoactivatable GFP; photoreceptors; trafficking; paGFP
Ocular following responses (OFRs) are the initial tracking eye movements
elicited at ultra-short latency by sudden motion of a textured pattern. We
wished to evaluate quantitatively the impact that subcortical stages of visual
processing might have on the OFRs. In three experiments we recorded the OFRs of
human subjects to brief horizontal motion of 1D vertical sine-wave gratings
restricted to an elongated horizontal aperture. Gratings were composed of a
variable number of abutting horizontal strips where alternate strips were in
counterphase. In one of the experiments we also utilized gratings occupying a
variable number of horizontal strips separated vertically by mean-luminance
gaps. We modeled retinal center/surround receptive fields as a difference of two
2-D Gaussian functions. When the characteristics of such local filters were
selected in accord with the known properties of primate retinal ganglion cells,
a single-layer model was able to account quantitatively for the observed
changes in the OFR amplitude for stimuli composed of counterphase strips of
different heights (Experiment 1), for a wide range of stimulus contrasts
(Experiment 2) and spatial frequencies (Experiment 3). A similar model using
oriented filters that resemble cortical simple cells was also able to account
for these data. Since similar filters can be constructed from the linear
summation of retinal filters, and these filters alone can explain the data, we
conclude that retinal processing determines the response to these stimuli. Thus,
with appropriately chosen stimuli, OFRs can be used to study visual spatial
integration processes as early as the retina.
visual motion; retinal ganglion cells; contrast gain control; surround inhibition
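The single-layer model described above can be illustrated numerically. The following is a minimal sketch in Python/NumPy, assuming a difference-of-two-2-D-Gaussians receptive field; the space constants and the grating used here are illustrative choices, not the fitted primate parameters from the study:

```python
import numpy as np

def dog_kernel(size, sigma_c, sigma_s):
    """Center-surround receptive field as a difference of two 2-D Gaussians.
    Each Gaussian integrates to 1, so the balanced filter gives (near) zero
    response to uniform illumination."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return center - surround

def filter_image(image, kernel):
    """Linear response of a sheet of identical DoG units
    (FFT convolution, circular boundary; kernel and image same shape)."""
    k = np.fft.fft2(np.fft.ifftshift(kernel))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * k))

# Illustrative parameters: center sigma 1.2 px, surround sigma 3 px.
k = dog_kernel(64, 1.2, 3.0)
x = np.arange(64)
grating = np.tile(np.sin(2 * np.pi * 4 * x / 64), (64, 1))  # vertical grating
response = filter_image(grating, k)
```

Stimuli built from counterphase strips of varying height can then be passed through the same filter bank, and the pooled rectified output compared with OFR amplitudes.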
The veridical perception of collinearity between two separated lines is distorted by two parallel lines in the space between them (the Poggendorff illusion). This paper tests the conjecture that the perception of collinearity of separated lines is based on a two-stage mechanism. The first stage encodes the orientation of the virtual line between the proximal terminators of the target lines. The second stage compares this virtual orientation with the orientation of the target lines themselves. Errors can and do arise from either process. Two parallel lines, abutting against the target lines, cause the classical Poggendorff misalignment bias. The magnitude of the bias is increased by Gaussian blur, as is the bias in a version of the Poggendorff figure containing only acute angles. In the obtuse-angle figure, on the other hand, blur decreases the misalignment bias. We argue that the acute- and obtuse-angle biases depend upon different mechanisms, and that the obtuse-angle effect is more closely related to the obtuse-angle version of the Müller–Lyer illusion, which is also decreased by blur. If observers attempt to match the orientation of the virtual line between the two line intersections in the Poggendorff figure, they make an error in the same direction as the Poggendorff bias. The orientation of the target lines in the figure, however, is veridically matched to a Gabor-patch probe, unless the target lines are very short, in which case the error is in the same direction as the Poggendorff bias. A small bend in the target lines where they abut the parallels increases the Poggendorff bias if it makes the line more orthogonal to the parallel, but has little effect in the opposite direction.
The Poggendorff bias is unlikely to depend upon biases in first-stage linear filters because (a) it still exists in figures composed of short, luminance-balanced lines which are defined by contrast only; and (b) it also exists if the parallels are replaced by grating patches with the same mean luminance as the background. The orientation of the grating in the latter case affects the magnitude of the bias, but even an orientation which should reverse the Poggendorff bias by the mechanism of cross-orientation inhibition fails to do so. The Poggendorff bias is a complex effect arising from several sources. Blurring in second-stage filters with large receptive fields can explain many aspects of the phenomenon.
Poggendorff bias; Müller–Lyer illusion; Gaussian blur
The processing of texture patterns has been characterized by a model that first filters the image to isolate one texture component, then applies a rectifying nonlinearity that converts texture variation into intensity variation, and finally processes the resulting pattern with mechanisms similar to those used in processing luminance-defined images (spatial-frequency- and orientation-tuned filters). This model, known as FRF (filter–rectify–filter), has the appeal of explaining sensitivity to second-order patterns in terms of mechanisms known to exist for processing first-order patterns. The model implies an unexpected interaction between the first and second stages of filtering: if the first-stage filter consists of narrowband mechanisms tuned to detect the carrier texture, then sensitivity to high-frequency texture modulations should be much lower than is observed in humans. We propose that the human visual system must pool over first-order channels tuned to a wide range of spatial frequencies and orientations to achieve texture demodulation, and provide psychophysical evidence for pooling in a cross-carrier adaptation experiment and in an experiment that measures modulation contrast sensitivity at very low first-order contrast.
second-order vision; texture
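The FRF cascade described above is easy to sketch numerically. The following Python/NumPy toy implementation is an illustration of the generic model only – the single narrowband first-stage filter, the Gabor shapes, and the frequencies are all assumptions for the demonstration, not the pooled multi-channel front end the abstract argues for:

```python
import numpy as np

def gabor(size, freq, sigma):
    """Even-symmetric, vertically oriented Gabor filter (freq in cycles/px)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xx)

def conv(image, kernel):
    """FFT convolution with circular boundary (same-shape kernel)."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) *
                                np.fft.fft2(np.fft.ifftshift(kernel))))

def frf(image, carrier_freq, modulator_freq):
    """Filter-rectify-filter: a first-stage filter tuned to the carrier,
    full-wave rectification (texture variation -> intensity variation),
    then a second-stage filter tuned to the low-frequency modulation."""
    n = image.shape[0]
    stage1 = conv(image, gabor(n, carrier_freq, sigma=0.5 / carrier_freq))
    rectified = np.abs(stage1)
    return conv(rectified, gabor(n, modulator_freq, sigma=0.5 / modulator_freq))

# Contrast-modulated carrier: modulation at 4 cycles/image, carrier at 32.
n = 128
x = np.arange(n)
carrier = np.tile(np.sin(2 * np.pi * 32 * x / n), (n, 1))
envelope = 1 + 0.8 * np.tile(np.sin(2 * np.pi * 4 * x / n), (n, 1))
out_mod = frf(envelope * carrier, 32 / n, 4 / n)
out_flat = frf(carrier, 32 / n, 4 / n)
```

The second-stage output carries energy at the modulation frequency only when the carrier is contrast-modulated, which is the demodulation step the model relies on.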
Older adults commonly report difficulties in visual tasks of everyday living that involve visual clutter, secondary task demands, and time-sensitive responses. These difficulties often cannot be attributed to visual sensory impairment. Techniques for measuring visual processing speed under divided attention conditions and among visual distractors have been developed and have established construct validity in that those older adults performing poorly in these tests are more likely to exhibit daily visual task performance problems. Research suggests that computer-based training exercises can increase visual processing speed in older adults and that these gains transfer to enhancement of health and functioning and a slowing in functional and health decline as people grow older.
Visual processing speed; attention; aging; everyday visual tasks; useful field of view
Contrast sensitivity defines the threshold between the visible and invisible, which has obvious significance for basic and clinical vision science. Fechner's 1860 review reported that threshold contrast is 1% for a remarkably wide range of targets and conditions. While printed charts are still in use, computer testing is becoming more popular because it offers efficient adaptive measurement of threshold for a wide range of stimuli. Both basic and clinical studies usually want to know fundamental visual capability, regardless of the observer's subjective criterion. Criterion effects are minimized by the use of an objective task: multiple-alternative forced-choice detection or identification. Having many alternatives reduces the guessing rate, which makes each trial more informative, so fewer trials are needed. Finally, populations who may experience crowding or target confusion should be tested with one target at a time.
Electronic displays and computer systems offer numerous advantages for clinical vision testing. Laboratory and clinical measurements of various functions, and in particular of (letter) contrast sensitivity, require accurately calibrated display contrast. In the laboratory this is achieved using expensive light meters. We developed and evaluated a novel method that uses only psychophysical responses of a person with normal vision to calibrate the luminance contrast of displays for experimental and clinical applications. Our method combines psychophysical techniques (1) for detection (and thus elimination or reduction) of display saturating nonlinearities; (2) for luminance (gamma function) estimation and linearization without use of a photometer; and (3) to measure without a photometer the luminance ratios of the display’s three color channels that are used in a bit-stealing procedure to expand the luminance resolution of the display. Using a photometer, we verified that the calibration achieved with this procedure is accurate for both LCD and CRT displays, enabling testing of letter contrast sensitivity down to 0.5%. Our visual calibration procedure enables clinical, Internet and home implementation and calibration verification of electronic contrast testing.
LCD; CRT; Luminance; Linearization; Display Calibration; Contrast
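The gamma-linearization step at the heart of such calibration can be sketched in a few lines. This is a generic illustration in Python/NumPy, not the paper's psychophysical procedure; the power-law display model and the gamma value of 2.2 are assumptions:

```python
import numpy as np

def linearizing_lut(gamma, bits=8):
    """Inverse-gamma lookup table for a display whose luminance follows
    L = (v / v_max) ** gamma.  Entry i is the drive value that produces
    a luminance of i / (2**bits - 1) of the display maximum."""
    vmax = 2**bits - 1
    target = np.linspace(0.0, 1.0, 2**bits)   # desired linear luminance
    return np.round(vmax * target ** (1.0 / gamma)).astype(int)

def michelson(l_max, l_min):
    """Michelson contrast, (Lmax - Lmin) / (Lmax + Lmin)."""
    return (l_max - l_min) / (l_max + l_min)

lut = linearizing_lut(2.2)   # 2.2 is a typical assumed display gamma
```

With plain 8-bit drive values, the smallest luminance step around mid-grey on a linearized display is about 1/255 ≈ 0.4% of maximum luminance, too coarse to sample contrasts finely near a 0.5% threshold; this is why a resolution-extending step such as the bit-stealing procedure described above is needed.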
The common marmoset, Callithrix jacchus, is a primate model for emmetropization studies. The refractive development of the marmoset eye depends on visual experience, so knowledge of the optical quality of the eye is valuable. We report on the wavefront aberrations of the marmoset eye, measured with a clinical Hartmann-Shack aberrometer (COAS, AMO Wavefront Sciences). Aberrations were measured on both eyes of 23 marmosets whose ages ranged from 18 to 452 days. Twenty-one of the subjects were members of studies of emmetropization and accommodation, and two were untreated normal subjects. Eleven of the 21 experimental subjects had worn monocular diffusers or occluders and ten had worn binocular spectacle lenses of equal power. Monocular deprivation or lens rearing began at about 45 days of age and ended at about 108 days of age. All refractions and aberration measures were performed while the eyes were cyclopleged; most aberration measures were made while subjects were awake, but some control measurements were performed under anesthesia. Wavefront error was expressed as a seventh-order Zernike polynomial expansion, using the Optical Society of America’s naming convention. Aberrations in young marmosets decreased up to about 100 days of age, after which the higher-order RMS aberration leveled off to about 0.10 micron over a 3 mm diameter pupil. Higher-order aberrations were 1.8 times greater when the subjects were under general anesthesia than when they were awake. Young marmoset eyes were characterized by negative spherical aberration. Visually deprived eyes of the monocular deprivation animals had greater wavefront aberrations than their fellow untreated eyes, particularly for asymmetric aberrations in the odd-numbered Zernike orders. Both lens-treated and deprived eyes showed similar significant increases in Z3-3 trefoil aberration, suggesting the increase in trefoil may be related to factors that do not involve visual feedback.
optical aberrations; development; emmetropization; refractive error; marmoset; primate
To clarify the role of visual feedback in the generation of corrective movements after inaccurate primary saccades, we used a visually-triggered saccade task in which we varied how long the target was visible. The target was on for only 100 ms (OFF100ms), on until the start of the primary saccade (OFFonset), or on for 2 s (ON). We found that the tolerance for the post-saccadic error was small (−2%) with a visual signal (ON) but greater (−6%) without visual feedback (OFF100ms). Saccades with an error of −10%, however, were likely to be followed by corrective saccades regardless of whether or not visual feedback was present. Corrective saccades were generally generated earlier when visual error information was available; their latency was related to the size of the error. The LATER (Linear Approach to Threshold with Ergodic Rate) model analysis also showed a comparable small population of short-latency corrective saccades irrespective of the target visibility. Finally, we found that, in the absence of visual feedback, the accuracy of corrective saccades across subjects was related to the latency of the primary saccade. Our findings provide new insights into the mechanisms underlying the programming of corrective saccades: 1) the preparation of corrective saccades begins along with the preparation of the primary saccades, 2) the accuracy of corrective saccades depends on the reaction time of the primary saccades, and 3) if visual feedback is available after the initiation of the primary saccade, the prepared correction can be updated.
Primary saccade; Corrective saccade; Visual feedback; LATER model; Forward control
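The LATER model used in the analysis above has a compact generative form, sketched here in Python/NumPy with arbitrary illustrative parameters (rate mean, rate spread, and threshold are not values from the study):

```python
import numpy as np

def later_latencies(mu, sigma, theta, n, rng):
    """LATER model: on each trial a decision signal rises linearly from
    baseline toward a threshold theta at a rate r drawn from N(mu, sigma);
    the saccade is triggered after latency theta / r.  Trials with a
    non-positive rate (no decision) are discarded."""
    r = rng.normal(mu, sigma, n)
    return theta / r[r > 0]

rng = np.random.default_rng(0)
lat = later_latencies(mu=5.0, sigma=1.0, theta=1.0, n=100_000, rng=rng)
```

A signature of the model is that reciprocal latency is approximately normally distributed, which is how LATER is fitted to cumulative latency plots; a separate small population of short-latency responses, like the corrective saccades reported above, shows up as a deviation from that fit.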
There is controversy regarding whether or not involuntary attention improves response accuracy at a cued location when the cue is non-predictive, and whether these cueing effects depend on backward masking. Various perceptual and decisional mechanisms of performance enhancement have been proposed, such as signal enhancement, noise reduction, spatial uncertainty reduction, and decisional processes. Herein we review a recent report of mask-dependent accuracy improvements with low contrast stimuli and demonstrate that the experiments contained stimulus artifacts whereby the cue impaired perception of low contrast stimuli, leading to an absence of improved response accuracy with unmasked stimuli. Our experiments corrected these artifacts by implementing an isoluminant cue and increasing its distance from the targets. The results demonstrate that cueing effects are robust for unmasked stimuli presented in the periphery, resolving some of the controversy concerning cueing enhancement effects from involuntary attention and mask dependency. Unmasked low contrast and/or short duration stimuli as implemented in these experiments may have a short enough iconic decay that the visual system functions much as if a mask were present, leading to improved accuracy with a valid cue.
We determined whether distracting the observer’s attention from an adapting stimulus could decrease the motion after-effect. Unlike previous studies we used a relatively bias-free 2AFC procedure to measure the strength of adaptation. The strength of motion adaptation was measured by the effects of a moving grating on the contrast discrimination (T vs. C) function for gratings moving in the same or opposite direction. As in previous reports, the effect of adaptation was to move the T vs. C function upwards and rightwards, consistent with an increase in the C50 (semi-saturation) response in the transduction function of the neural mechanism underlying the discrimination. On the other hand, manipulating the attentional load of a distracting task during adaptation had no consistent effect on contrast discrimination, including the absolute detection threshold. It is suggested that previously reported effects of attentional load on adaptation may have depended on response bias rather than on changes in sensitivity.
Motion; Adaptation; Attention; Contrast detection; Contrast discrimination
Prolonged inspection of moving stimuli causes stationary stimuli to appear moving in the opposite direction to the adapting stimulus (the Waterfall effect). It has been claimed that distracting the viewer’s attention from the adapting stimulus by a secondary task reduces the strength of adaptation. However, the method used to show the effect of distraction (the duration of the aftereffect) is potentially susceptible to bias. The experiments reported here show no effect in genuinely naïve subjects, or in experienced observers using a variety of cancellation procedures to measure the effect.
Motion; Adaptation; Attention; 2AFC; Bias
This review is concerned primarily with psychophysical and physiological evidence relevant to the question of the existence of spatial features or spatial primitives in human vision. The review will be almost exclusively confined to features defined in the luminance domain. The emphasis will be on the experimental and computational methods that have been used for revealing features, rather than on a detailed comparison between different models of feature extraction. Color and texture fall largely outside the scope of the review, though the principles may be similar. Stereo matching and motion matching are also largely excluded because they are covered in other contributions to this volume, although both have addressed the question of the spatial primitives involved in matching. Similarities between different psychophysically-based models will be emphasized rather than minor differences. All the models considered in the review are based on the extraction of directional spatial derivatives of the luminance profile, typically the first and second, but in one case the third order, and all have some form of non-linearity, be it rectification or thresholding.
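The derivative-plus-nonlinearity scheme common to these models can be illustrated in one dimension. A sketch in Python/NumPy; the blurred step stimulus and its space constant are arbitrary choices for the demonstration:

```python
import numpy as np

def derivative_features(profile):
    """First and second spatial derivatives of a 1-D luminance profile,
    with half-wave rectification of the first derivative into separate
    'on' (dark-to-light) and 'off' (light-to-dark) edge channels."""
    d1 = np.gradient(profile)
    d2 = np.gradient(d1)
    return np.maximum(d1, 0.0), np.maximum(-d1, 0.0), d2

# A blurred luminance step (edge) centered at x = 0.
x = np.linspace(-1.0, 1.0, 201)
step = 1.0 / (1.0 + np.exp(-x / 0.05))
on, off, d2 = derivative_features(step)
edge_location = int(np.argmax(on))  # peak of the rectified 1st derivative
```

In such schemes an edge is signalled by a peak in the rectified first derivative (equivalently, a zero crossing of the second derivative), while bars and lines are signalled by peaks in the second derivative.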
A wealth of studies has found that adapting to second-order visual stimuli has little effect on the perception of first-order stimuli. This is physiologically and psychologically troubling, since many cells show similar tuning to both classes of stimuli, and since adapting to first-order stimuli leads to aftereffects that do generalize to second-order stimuli. Focusing on high-level visual stimuli, we recently proposed the novel explanation that the lack of transfer arises partially from the characteristically different backgrounds of the two stimulus classes. Here, we consider the effect of stimulus backgrounds in the far more prevalent, lower-level, case of the orientation tilt aftereffect. Using a variety of first- and second-order oriented stimuli, we show that we could increase or decrease both within- and cross-class adaptation aftereffects by increasing or decreasing the similarity of the otherwise apparently uninteresting or irrelevant backgrounds of adapting and test patterns. Our results suggest that similarity between background statistics of the adapting and test stimuli contributes to low-level visual adaptation, and that these backgrounds are thus not discarded by visual processing but provide contextual modulation of adaptation. Null cross-adaptation aftereffects must also be interpreted cautiously. These findings reduce the apparent inconsistency between psychophysical and neurophysiological data about first- and second-order stimuli.
aftereffect transfer; contingent aftereffects; cross-order adaptation; subjective contours; illusory contours; 1/f noise
When a light and also its surrounding context slowly oscillate in chromaticity over time, the color appearance of the light depends on the relative phase of center and surround. The influence of the surround is generally accounted for by retinotopic center-surround organization with the surround inhibiting signals from the center. The traditional neural account, however, cannot rule out lateral inhibition due to cortical mechanisms sensitive to object segmentation cues. Experiments here reveal that illusory contours are sufficient to separate a center from its surround. Observers adjusted the Michelson contrast of a matching disk to equal the perceived modulation depth of a central area within a surround. Both the central test and matching disk were maintained at constant luminance and modulated in-phase at 2 Hz along one chromatic axis (L/(L+M) or S/(L+M)). The center was perceptually segmented from the surround by either a physical (retinotopic separation) or illusory (cortically represented) triangle contour. Segmentation of center from surround by the illusory contour strongly attenuated the perceived modulation depth for both chromatic axes. Further, the strength of attenuation was consistently greater with the illusory than the physically segmenting triangle. This cannot be accounted for by retinal center-surround antagonism; instead it points to a cortical neural representation of contours, with lateral inhibition following neural mechanisms sensitive to object segmentation cues.
color appearance; perceptual segmentation; object representation; illusory contours; center-surround antagonism
Attending to a feature (e.g., color or motion direction) can enhance the early visual processing of that feature. However, it is not known whether one can simultaneously enhance multiple features. We examined people's ability to attend to multiple features in a feature cueing paradigm. Each trial contained two intervals consisting of a random dot motion stimulus. One interval (noise) had 0% coherence (no net motion), while the other interval (signal) moved in a particular direction with varying levels of coherence. Participants reported which interval contained the signal in one of three cueing conditions. In the one-cue condition, a line segment preceded the stimuli indicating the direction of the signal with 100% validity. In the two-cue condition, two lines preceded the stimuli, indicating the signal would move in one of the two cued directions. In the no-cue condition, no line segment appeared before the dot stimuli. In several experiments, we consistently observed a lower detection threshold in the one-cue condition than the no-cue condition, showing that participants can enhance processing of a single feature. However, detection threshold was consistently higher for the two-cue than one-cue condition, indicating that participants could not simultaneously enhance two motion directions as effectively as one direction. This finding revealed a severe capacity limit in our ability to enhance early visual processing for multiple features.
attention; feature; capacity; motion; coherence
Environmental context learned without awareness can facilitate visual processing of goal-relevant information. According to one view, the benefit of implicitly learned context relies on the neural systems involved in spatial attention and hippocampus-mediated memory. While this view has received empirical support, it contradicts traditional models of hippocampal function. The purpose of the present work was to clarify the influence of spatial context on visual search performance and on brain structures involved in memory and attention. Event-related functional magnetic resonance imaging revealed that activity in the hippocampus as well as in visual and parietal cortex was modulated by learned visual context even though participants’ subjective reports and performance on a post-experiment recognition task indicated no explicit knowledge of the learned context. Moreover, the magnitude of the initial selective hippocampus response predicted the magnitude of the behavioral benefit due to context observed at the end of the experiment. The results suggest that implicit contextual learning is mediated by attention and memory and that these systems interact to support search of our environment.
Visual attention; functional magnetic resonance imaging; visual search; memory; learning; context
What you see depends not only on where you are looking but also on where you will look next. The pre-saccadic attention shift is an automatic enhancement of visual sensitivity at the target of the next saccade. We investigated whether and how perceptual factors independent of the oculomotor plan modulate pre-saccadic attention within and across trials. Observers made saccades to one (the target) of six patches of moving dots and discriminated a brief luminance pulse (the probe) that appeared at an unpredictable location. Sensitivity to the probe was always higher at the target’s location (spatial attention), and this attention effect was stronger if the previous probe appeared at the previous target’s location. Furthermore, sensitivity was higher for probes moving in directions similar to the target’s direction (feature-based attention), but only when the previous probe moved in the same direction as the previous target. Therefore, implicit cognitive processes permeate pre-saccadic attention, so that, contingent on recent experience, it flexibly distributes resources to potentially relevant locations and features.
pre-saccadic attention; spatial attention; feature-based attention; eye movements; motion perception; intertrial effects
Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas.
attention; fMRI; eccentricity; retinotopy
Visual attention is commonly studied by using visuo-spatial cues indicating probable locations of a target and assessing the effect of the validity of the cue on perceptual performance and its neural correlates. Here, we adapt a cueing task to measure spatial cueing effects on the decisions of honeybees and compare their behavior to that of humans and monkeys in a similarly structured two-alternative forced-choice perceptual task. Unlike the typical cueing paradigm in which the stimulus strength remains unchanged within a block of trials, for the monkey and human studies we randomized the contrast of the signal to simulate more real world conditions in which the organism is uncertain about the strength of the signal. A Bayesian ideal observer that weights sensory evidence from cued and uncued locations based on the cue validity to maximize overall performance is used as a benchmark of comparison against the three animals and other suboptimal models: probability matching, ignore the cue, always follow the cue, and an additive bias/single decision threshold model. We find that the cueing effect is pervasive across all three species but is smaller in size than that shown by the Bayesian ideal observer. Humans show a larger cueing effect than monkeys and bees show the smallest effect. The cueing effect and overall performance of the honeybees allows rejection of the models in which the bees are ignoring the cue, following the cue and disregarding stimuli to be discriminated, or adopting a probability matching strategy. Stimulus strength uncertainty also reduces the theoretically predicted variation in cueing effect with stimulus strength of an optimal Bayesian observer and diminishes the size of the cueing effect when stimulus strength is low. 
A more biologically plausible model that includes an additive bias to the sensory response from the cued location, although not mathematically equivalent to the optimal observer for the case of stimulus strength uncertainty, can approximate the benefits of the more computationally complex optimal Bayesian model. We discuss the implications of our findings on the field’s common conceptualization of covert visual attention in the cueing task and what aspects, if any, might be unique to humans.
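For unit-variance Gaussian responses, the optimal cue-weighting rule described above reduces to adding a log-prior offset, derived from the cue validity, to the decision variable at the cued location. The following Python/NumPy simulation is a generic sketch of that comparison under assumed parameters (signal strength d, 80% cue validity), not the fitted model from the study:

```python
import numpy as np

def cueing_accuracy(p_valid, d, n, rng, use_cue=True):
    """Two-location detection: a signal of strength d (unit-variance
    Gaussian responses) appears at the cued location with probability
    p_valid.  The Bayesian rule adds log(p_valid / (1 - p_valid)) / d
    to the cued response; use_cue=False is the 'ignore the cue' model."""
    signal_at_cued = rng.random(n) < p_valid
    x_cued = rng.normal(0.0, 1.0, n) + d * signal_at_cued
    x_uncued = rng.normal(0.0, 1.0, n) + d * ~signal_at_cued
    bias = np.log(p_valid / (1.0 - p_valid)) / d if use_cue else 0.0
    chose_cued = (x_cued + bias) > x_uncued
    return float(np.mean(chose_cued == signal_at_cued))

rng = np.random.default_rng(1)
acc_bayes = cueing_accuracy(0.8, 1.0, 200_000, rng)
acc_ignore = cueing_accuracy(0.8, 1.0, 200_000, rng, use_cue=False)
```

The accuracy gap between the two rules is the maximal cueing effect for a given signal strength; suboptimal rules such as probability matching or always following the cue can be scored the same way and compared against observed performance, as in the species comparison above.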