Mutations in the key rod phototransduction enzyme phosphodiesterase 6 (PDE6) are known to cause recessive retinitis pigmentosa in humans. Mouse models of mutant PDE6 represent a common approach to understanding the mechanisms of visual disorders related to PDE6 defects. The N605S mutation in the PDE6B subunit is linked to atypical retinal degeneration 3 (atrd3) in mice. We examined PDE6 in atrd3 mice and an atrd3-mutant counterpart of human cone PDE6C expressed in rods of transgenic Xenopus laevis. These animal models revealed remarkably different phenotypes. In contrast to the dramatic downregulation of mutant rod PDE6 protein and activity levels in mice, the expression and localization of cone PDE6C in X. laevis were essentially unaffected by this mutation. Examination of PDE6B mRNA in the atrd3 retina showed that the mutation-carrying exon 14 was spliced out in the majority of transcripts. Thus, retinal degeneration in atrd3 mice is caused by low levels of PDE6 protein due to defective processing of PDE6B pre-mRNA, rather than by deleterious effects of the N605S mutation on PDE6 folding, stability, or function.
retina; rods; cones; PDE6; phototransduction
We report three experiments investigating the effect of rigid facial motion on face processing. Specifically, we used the face composite effect to examine whether rigid facial motion influences primarily featural or holistic processing of faces. In Experiments 1, 2, and 3, participants were first familiarized with dynamic displays in which a target face turned from one side to the other; at test, participants judged whether the top half of a composite face (the top half of the target face aligned or misaligned with the bottom half of a foil face) belonged to the target face. Across the three experiments, we compared performance in the dynamic condition to various static control conditions, which differed from each other in the display order of the multiple static images or the inter-stimulus interval (ISI) between them. We found that the face composite effect was significantly smaller in the dynamic condition than in the static conditions. In other words, the dynamic face display led participants to process the target faces in a part-based manner, so that their recognition of the upper portion of the composite face at test suffered less interference from the aligned lower part of the foil face. The findings from the present experiments provide the strongest evidence to date that rigid facial motion mainly influences featural, rather than holistic, face processing.
face processing; face recognition; rigid facial motion; dynamic faces; featural processing; holistic processing; face composite effect
Current vision science adaptive optics systems use near-infrared wavefront sensor ‘beacons’ that appear as red spots in the visual field. Colored fixation targets are known to influence the perceived color of macroscopic visual stimuli (Jameson, D. and Hurvich, L. M., 1967. Fixation-light bias: an unwanted by-product of fixation control. Vis. Res. 7, 805–809), suggesting that the wavefront sensor beacon may also influence perceived color for stimuli displayed with adaptive optics. Despite its importance for proper interpretation of adaptive optics experiments on the fine-scale interaction of the retinal mosaic and spatial and color vision, this potential bias has not yet been quantified or addressed. Here we measure the impact of the wavefront sensor beacon on color appearance for dim, monochromatic point sources in five subjects. The presence of the beacon altered color reports both when it was used as a fixation target and when it was displaced in the visual field with a chromatically neutral fixation target. This influence must be taken into account when interpreting previous experiments, and new methods of adaptive correction should be used in future experiments using adaptive optics to study color.
Color appearance; fixation light bias; adaptive optics psychophysics; color naming
Vernier thresholds are known to be elevated when a target pair has opposite contrast polarity. Polarity reversal is used to assess the role of luminance and chromatic pathways in hyperacuity performance. Psychophysical hyperacuity thresholds were measured for pairs of gratings with various combinations of luminance (Lum) and chromatic (Chr) contrast polarities, at different ratios of luminance to chromatic contrast. With two red-green gratings of matched luminance and chromatic polarity (+Lum+Chr), there was an elevation of threshold at isoluminance. When both luminance and chromatic polarity were mismatched (−Lum−Chr), thresholds were substantially elevated under all conditions. With the same luminance contrast polarity and opposite chromatic polarity (+Lum−Chr), thresholds were elevated only close to isoluminance; in the reverse condition (−Lum+Chr), thresholds were elevated as in the −Lum−Chr condition except close to equiluminance. Similar data were obtained for gratings isolating the short-wavelength cone mechanism. Further psychophysical measurements assessed the role of target separation with matched or mismatched contrast polarity; similar results were found for luminance and chromatic gratings. For comparison, physiological data were collected from parafoveal ganglion cells of the macaque retina. Positional precision of ganglion cell signals was assessed under conditions related to the psychophysical measurements. On the basis of these combined observations, it is argued that the magnocellular, parvocellular, and koniocellular pathways all have access to cortical positional mechanisms associated with vernier acuity.
hyperacuity; vernier; contrast; polarity; magnocellular; parvocellular; luminance; chromatic
Age-related macular degeneration (AMD) is the leading cause of blindness among the elderly. While excellent treatment has emerged for neovascular disease, treatment for early AMD is lacking due to an incomplete understanding of the early molecular events. Cigarette smoking is the strongest epidemiologic risk factor, yet we do not understand how smoking contributes to AMD. Smoking-related oxidative damage during the early phases of AMD may play an important role. This review explores how cigarette smoking and oxidative stress to the retinal pigmented epithelium (RPE) might contribute to AMD, and how the transcription factor Nrf2 can activate a cytoprotective response.
Age-related Macular Degeneration; Cigarette smoking; Nrf2; Oxidative stress; Retinal pigmented epithelium
In a modified reflexive spatial attention paradigm, when the cue and the target are at the same spatial location, processing of the target is faster when the cue and the target have different shapes than when they have the same shape (the shape effect). Recent physiological findings suggest distinct population-level encoding of shape in the ventral versus dorsal cortical visual streams in monkeys. In human observers, we tested whether the effect of shape on reflexive spatial attention could be attributed to ventral and/or dorsal stream encoding of shape. In the modified reflexive spatial attention paradigm, we varied the shapes of the cue and target. Based on data from monkey physiology (Lehky & Sereno, 2007), we selected four pairs of cue and target shapes. In some pairs, cue and target were similarly encoded (a small encoding distance) by a population of cells in the lateral intraparietal cortex (LIP), a dorsal stream area, but more dissimilarly encoded (a greater encoding distance) by a population of cells in the anterior inferotemporal cortex (AIT), a ventral stream area. In other pairs, cue and target were similarly encoded in AIT and more dissimilarly encoded in LIP. We found that pairs of cue and target with greater dissimilarity in LIP encoding produced larger and more consistent shape effects up to a cue-to-target onset asynchrony (CTOA) of 450 ms. The shape effects for cue and target pairs with greater dissimilarity in AIT encoding were smaller and inconsistent, suggesting that shape effects in reflexive spatial attention are largely driven by the dorsal stream.
Neural encoding; Shape selectivity; Lateral intraparietal; Anterior inferotemporal
Little is known about the systematic impact of blur on reading performance. The purpose of this study was to quantify the effect of dioptric blur on reading performance in a group of normally sighted young adults. We measured monocular reading performance and visual acuity for 19 observers with normal vision, at five levels of optical blur (no blur and 0.5, 1, 2, and 3 D). Dioptric blur was induced using convex trial lenses placed in front of the testing eye, with the pupil dilated and in the presence of a 3 mm artificial pupil. Reading performance was assessed using eight versions of the MNREAD Acuity Chart. For each level of dioptric blur, observers read aloud sentences on one of these charts, from large to small print. Reading time for each sentence and the number of errors made were recorded and converted to reading speed in words per minute. Visual acuity was measured using four-orientation Landolt C stimuli. For all levels of dioptric blur, reading speed increased with print size up to a certain print size and then remained constant at the maximum reading speed. By fitting nonlinear mixed-effects models, we found that the maximum reading speed was minimally affected by blur up to 2 D, but was ~23% slower for 3 D of blur. When the amount of blur increased from 0 (no blur) to 3 D, the threshold print size (the print size corresponding to 80% of the maximum reading speed) increased from 0.01 to 0.88 logMAR, reading acuity worsened from −0.16 to 0.58 logMAR, and visual acuity worsened from −0.19 to 0.64 logMAR. The similar rates of change with blur for threshold print size, reading acuity, and visual acuity imply that visual acuity is a good predictor of threshold print size and reading acuity. Like visual acuity, reading performance is susceptible to the degrading effect of optical blur. For increasing amounts of blur, larger print sizes are required to attain the maximum reading speed.
reading; blur; defocus
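The relationship described above (reading speed rising with print size, then plateauing, with threshold print size defined at 80% of maximum speed) can be sketched with a broken-stick model. This is an illustrative sketch: the parameter names and values are assumptions, not the study's fitted mixed-effects estimates.

```python
import math

def two_limb_log_speed(print_size, max_log_speed, critical_size, slope):
    """Broken-stick model of log10 reading speed vs print size (logMAR):
    constant at max_log_speed for print sizes above critical_size,
    falling linearly (with the given slope) for smaller print."""
    if print_size >= critical_size:
        return max_log_speed
    return max_log_speed - slope * (critical_size - print_size)

def threshold_print_size(critical_size, slope, fraction=0.8):
    """Print size at which reading speed drops to `fraction` of maximum
    (the 80% criterion used above to define threshold print size)."""
    drop = -math.log10(fraction)   # log-units below the plateau
    return critical_size - drop / slope
```

In practice the two parameters would be fitted to each observer's measured speeds; the threshold print size then follows directly from the fitted knee and slope.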
Do the target-distractor and distractor-distractor similarity relationships known to exist for simple stimuli extend to real-world objects, and are these effects expressed in search guidance or target verification? Parts of photorealistic distractors were replaced with target parts to create four levels of target-distractor similarity under heterogeneous and homogeneous conditions. We found that increasing target-distractor similarity and decreasing distractor-distractor similarity impaired search guidance and target verification, but that target-distractor similarity and heterogeneity/homogeneity interacted only in measures of guidance; distractor homogeneity lessens the effects of target-distractor similarity by causing gaze to fixate the target sooner, not by speeding target detection following its fixation.
Visual search; eye movements; visual similarity; object parts; target-distractor similarity; distractor homogeneity
► Second-order cues normally support layer decomposition at long but not short presentation times. ► Perceptual learning is used to train observers to perform the task at short presentation times. ► The transfer of learning is consistent with low level changes in perceptual processing.
Luminance variations are ambiguous: they can signal changes in surface reflectance or changes in illumination. Layer decomposition—the process of distinguishing between reflectance and illumination changes—is supported by a range of secondary cues including colour and texture. For an illuminated corrugated, textured surface, the shading pattern comprises modulations of luminance (first-order, LM) and local luminance amplitude (second-order, AM). The phase relationship between these two signals enables layer decomposition, predicts the perception of reflectance and illumination changes, and has been modelled based on early, fast, feed-forward visual processing (Schofield et al., 2010). However, while inexperienced viewers appreciate this scission at long presentation times, they cannot do so for short presentation durations (250 ms). This might suggest the action of slower, higher-level mechanisms. Here we consider how training attenuates this delay, and whether the resultant learning occurs at a perceptual level. We trained observers over a period of 5 days to discriminate the components of plaid stimuli that mixed in-phase and anti-phase LM/AM signals. After training, the strength of the AM signal needed to differentiate the plaid components fell dramatically, indicating learning. We tested for transfer of learning using stimuli with different spatial frequencies, in-plane orientations, and acutely angled plaids. We report that learning transfers only partially when the stimuli are changed, suggesting that benefits accrue from tuning specific mechanisms rather than from general interpretative processes. We suggest that the mechanisms which support layer decomposition using second-order cues are relatively early, and not inherently slow.
Second-order; Layer-decomposition; Perceptual-learning
Word reading speed in peripheral vision is slower when words are in close proximity to other words (Chung, 2004). This word crowding effect could arise from interactions of low-level letter features between words, or from interactions between high-level holistic representations of words. We evaluated these two hypotheses by examining how word crowding changes for five configurations of flanking words: the control condition — flanking words were oriented upright; scrambled — letters in each flanking word were scrambled in order; horizontal-flip — each flanking word was the left-right mirror image of the original; letter-flip — each letter of the flanking word was the left-right mirror image of the original; and vertical-flip — each flanking word was the up-down mirror image of the original. The low-level letter feature interaction hypothesis predicts a similar word crowding effect for all the different flanker configurations, while the high-level holistic representation hypothesis predicts a weaker word crowding effect for all the alternative flanker conditions compared with the control condition. We found that oral reading speed for words flanked above and below by other words, measured at 10° eccentricity in the nasal field, showed the same dependence on the vertical separation between the target and its flanking words for the various flanker configurations. The result was also similar when we rotated the flanking words by 90° to disrupt the periodic vertical pattern, which presumably is the main structure in words. The remarkably similar word crowding effect irrespective of the flanker configurations suggests that word crowding arises from interactions of low-level letter features.
crowding; word recognition; peripheral vision; features; holistic representation
Performance in visual tasks is limited by the low-level mechanisms that sample the visual field. It is well documented that contrast sensitivity and spatial resolution decrease as a function of eccentricity and that those factors impair performance in “higher level” tasks, such as visual search. Performance also varies consistently at isoeccentric locations in the visual field. Specifically, at a fixed eccentricity, performance is better along the horizontal meridian than the vertical meridian, and along the lower than the upper vertical meridian. Whether these asymmetries in visual performance fields are confined to the vertical meridian or extend across the whole upper versus lower visual hemifield has been a matter of debate. Here, we measure the extent of the upper versus lower asymmetry. Results reveal that this asymmetry is most pronounced at the vertical meridian and that it decreases gradually as the angular distance (polar angle) from the vertical meridian increases, with eccentricity held constant. Beyond 30° of polar angle from the vertical meridian, the upper to lower asymmetry is no longer reliable. Thus, the vertical meridian is uniquely asymmetric and uniquely insensitive. This pattern of results is consistent with early anatomical properties of the visual system and reflects constraints that are critical to our understanding of visual information processing.
Performance fields; Vertical meridian asymmetry; Upper versus lower asymmetry; Horizontal vertical anisotropy; Spatial vision; Contrast sensitivity
In order to investigate differences in the effects of spatial attention between the left visual field (LVF) and the right visual field (RVF), we employed a full/poor attention paradigm using stimuli presented in the LVF vs. RVF. In addition, to investigate differences in the effects of spatial attention between the dorsal and ventral processing streams, we obtained motion thresholds (motion coherence thresholds and fine direction discrimination thresholds) and orientation thresholds, respectively. The results of this study showed negligible effects of attention on the orientation task in either the LVF or the RVF. In contrast, for both motion tasks there was a significant effect of attention in the LVF, but not in the RVF. These data provide psychophysical evidence for greater effects of spatial attention in the LVF/right hemisphere, specifically for motion processing in the dorsal stream.
spatial attention; laterality; visual field asymmetries; dorsal/ventral; motion; orientation
Sensory eye dominance (SED) reflects an imbalance of interocular inhibition in the binocular network. Extending earlier work (Ooi & He, 2001) that measured global SED within the central 6°, the current study measured SED locally at 17 locations within the central 8° of the binocular visual field. The eccentricities (radii) chosen for this “binocular perimetry” study were 0° (fovea), 2°, and 4°. At each eccentricity, eight concentric locations (polar angles: 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°) were tested. The outcome, an SED map, permits comparison between local SED and other visual functions [monocular contrast threshold, binocular disparity threshold, reaction time to detect depth, the dynamics of binocular rivalry, and motor eye dominance]. Our analysis shows that an observer’s SED varies gradually across the binocular visual field in both its sign and magnitude. The strong eye channel revealed by the SED measurement does not always have a lower monocular contrast threshold, and need not be the motor dominant eye. There are significant correlations between SED and binocular disparity threshold, and between SED and the response time to detect depth in a random-dot stereogram. A significant correlation is also found between SED and the eye that predominates when viewing an extended-duration binocular rivalry stimulus. While it is difficult to attribute causal factors on the basis of correlation analyses, these observations agree with the notion that an imbalance of interocular inhibition, which is largely revealed as SED, is a significant factor impeding binocular visual perception.
binocular disparity threshold; binocular perimetry; binocular rivalry; monocular contrast threshold; motor eye dominance; sensory eye dominance; stereo reaction time
We explored perceptual factors that might account for the other-race classification advantage (ORCA) in classifying faces by race. Testing Chinese participants in China and Israeli participants in Israel, we show that: (a) the distinction between Chinese and Israeli faces is highly accurate even on the basis of isolated eyes or faces with the eyes concealed, but full faces are categorized faster; (b) the ORCA is similarly robust for full faces and for face parts; and (c) the ORCA was larger when the configuration of the inner-face components was distorted, reflecting delayed categorization of own-race distorted faces relative to own-race normally configured faces but no conspicuous distortion effect on other-race faces. These data demonstrate that perceptual factors can account for the ORCA independently of social bias. We suggest that one source of the ORCA in race categorization is the configural analysis applied by default while processing own-race but not other-race faces.
face perception; other-race faces; configural processing; featural processing
Estimation of time-to-arrival for moving objects is critical to obstacle interception and avoidance, as well as to timing actions such as reaching for and grasping moving objects. The source of motion information that conveys arrival time varies with the trajectory of the object, raising the question of whether multiple context-dependent mechanisms are involved in this computation. To address this question we conducted a series of psychophysical studies to measure observers’ performance on time-to-arrival estimation when the object trajectory was specified by angular motion (“gap closure” trajectories in the frontoparallel plane), by looming (colliding trajectories; time-to-collision, TTC), or by both (passage courses; time-to-passage, TTP). We measured performance on time-to-arrival judgments in the presence of irrelevant motion, in which a perpendicular motion vector was added to the object trajectory. Data were compared to models of expected performance based on the use of different components of optical information. Our results demonstrate that for gap closure, performance depended only on the angular motion, whereas for TTC and TTP, both angular and looming motion affected performance. This dissociation of inputs suggests that gap closures are mediated by a separate mechanism from that used for the detection of time-to-collision and time-to-passage. We show that existing models of TTC and TTP estimation make systematic errors in predicting subject performance, and suggest that a model that weights motion cues by their relative time-to-arrival provides a better account of performance.
time-to-collision; time-to-passage; looming; motion
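The distinction drawn above between looming-based and gap-closure-based estimates can be illustrated with the classic optical variables. This is a generic sketch of tau-style estimation under small-angle assumptions, not the authors' cue-weighting model.

```python
def tau_from_looming(theta, theta_dot):
    """Time-to-collision from looming: angular size (rad) divided by its
    rate of expansion (rad/s) -- the optical variable tau."""
    return theta / theta_dot

def gap_closure_time(gap_angle, gap_rate):
    """Time-to-arrival from angular motion alone: the remaining angular
    gap to the destination (rad) divided by its closing rate (rad/s)."""
    return gap_angle / gap_rate

# For an object of physical size s at distance d approaching at speed v,
# theta ~= s/d and theta_dot ~= s*v/d**2, so tau ~= d/v (the true TTC).
```

The point of the sketch is that the two quantities are computed from different optical inputs, which is why adding an irrelevant perpendicular motion component can dissociate them behaviorally.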
In order to identify candidate cation channels important for retinal physiology, 28 TRP channel genes were surveyed for expression in the mouse retina. Transcripts for all TRP channels were detected by RT-PCR and sequencing. Northern blotting revealed that mRNAs for 12 TRP channel genes are enriched in the retina. The strongest signals were observed for TRPC1, TRPC3, TRPM1, TRPM3, and TRPML1, and clear signals were obtained for TRPC4, TRPM7, TRPP2, TRPV2, and TRPV4. In situ hybridization and immunofluorescence revealed widespread expression throughout multiple retinal layers for TRPC1, TRPC3, TRPC4, TRPML1, PKD1, and TRPP2. Striking localization of enhanced mRNA expression was observed for TRPC1 in the photoreceptor inner segment layer, for TRPM1 in the inner nuclear layer (INL), for TRPM3 in the INL, and for TRPML1 in the outer plexiform and nuclear layers. Strong immunofluorescence signal in cone outer segments was observed for TRPM7 and TRPP2. TRPC5 immunostaining was largely confined to INL cells immediately adjacent to the inner plexiform layer. TRPV2 antibodies stained photoreceptor axons in the outer plexiform layer. Expression of TRPM1 splice variants was strong in the ciliary body, whereas TRPM3 was strongly expressed in the retinal pigmented epithelium.
retina; TRP channels; gene expression; mouse; ion channels
Natural scenes are explored by combinations of saccadic eye movements and shifts of attention. The mechanisms that coordinate attention and saccades during ordinary viewing are not well understood because studies linking saccades and attention have focused mainly on single saccades made in isolation. This study used an orientation discrimination task to examine attention during sequences of saccades made through an array of targets and distractors. Perceptual measures showed that attention was distributed along saccadic paths when the paths were marked by color cues. When paths were followed from memory, attention rarely spread beyond the goal of the upcoming saccade. These different distributions of attention suggest the involvement of separate processes of attentional control during saccadic planning, one triggered by top-down selection of the saccadic target, and the other by activation linked to visual mechanisms not tied directly to saccadic planning. The concurrent activity of both processes extends the effective attentional field without compromising the accuracy, precision or timing of saccades.
saccades; attention; eye movements; sequences; motor control; orientation discrimination
Photoreceptor guanylate cyclase (GC1) is a transmembrane protein responsible for synthesis of cGMP, the second messenger of phototransduction. It consists of an extracellular domain, a single transmembrane domain, and an intracellular domain. It is unknown how GC1 targets to the outer segments (OS) where it resides. To identify a putative GC1 targeting signal, we generated a series of peripheral membrane and transmembrane constructs encoding extracellular and intracellular mouse GC1 fragments fused to eGFP. The constructs were expressed in X. laevis rod photoreceptors under the control of the rhodopsin promoter. We examined the localization of GFP-GC1 fusion proteins containing the complete GC1 sequence, or partial GC1 sequences, which were membrane-associated via either the GC1 transmembrane domain or the rhodopsin C-terminal palmitoyl chains. Full-length GFP-GC1 targeted to the rod outer segment disk rims. As a group, fusion proteins containing the entire cytoplasmic domain of GC1 targeted to the OS, whereas other fusion proteins containing portions of the cytoplasmic or extracellular domains did not. We conclude that GC1 likely has no single linear peptide-based OS targeting signal. Our results suggest that targeting is due either to multiple weak signals in the cytoplasmic domain of GC1, or to co-transport to the OS with an accessory protein.
Guanylate cyclase; transgenic Xenopus; targeting signals; membrane associated EGFP-fusion protein; membrane protein transport
The first physiological process influencing visual perception is the optics of the eye. The retinal image is affected by diffraction at the pupil and several kinds of optical imperfections. A model of the eye (Thibos & Bradley, 1999), which takes account of pupil aperture, chromatic aberration and wavefront aberrations, was used to determine wavelength-dependent point-spread functions, which can be convolved with any stimulus specified by its spectral distribution of light at each point. The resulting retinal spectral distribution of light was used to determine the spatial distribution of stimulation for each cone type (S, M and L). In addition, individual differences in retinal-image quality were assessed using a statistical model (Thibos, Bradley & Hong, 2002) for population values of Zernike coefficients, which characterize imperfections of the eye's optics. The median and relatively extreme (5th and 95th percentile) modulation transfer functions (MTFs) for the S, M and L cones were determined for equal-energy-spectrum (EES) ‘white’ light. The typical MTF for S cones was more similar to the MTF for L and M cones after taking wavefront aberrations into account but even with aberrations the S-cone MTF typically was below the M- or L-cone MTF by a factor of at least 10 (one log unit). More generally, the model presented here provides a technique for estimating retinal image quality for the S, M and L cones for any stimulus presented to the eye. The model is applied to some informative examples.
Chromatic aberration; wave aberrations; optics; retinal image; spread light; point spread function
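The chain described above (pupil aperture plus wavefront aberration → point-spread function → modulation transfer function) can be sketched numerically. The grid size and the single defocus term below are illustrative assumptions, not the cited model's full Zernike description.

```python
import numpy as np

N = 256                               # samples across the computational grid
x = np.linspace(-1.5, 1.5, N)         # pupil-plane coordinates, pupil radius = 1
X, Y = np.meshgrid(x, x)
rho2 = X**2 + Y**2
aperture = (rho2 <= 1.0).astype(float)

defocus_waves = 0.25                  # assumed aberration: Zernike defocus only
wavefront = defocus_waves * (2.0 * rho2 - 1.0)     # Z(2,0), in waves
pupil = aperture * np.exp(2j * np.pi * wavefront)  # generalized pupil function

psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2  # point-spread function
otf = np.fft.fft2(np.fft.ifftshift(psf))              # OTF = Fourier transform of PSF
mtf = np.abs(otf) / np.abs(otf[0, 0])                 # normalize to 1 at zero frequency
```

In the full model this computation would be repeated per wavelength (adding chromatic defocus) and per cone type, and the PSF convolved with the stimulus to obtain the retinal light distribution.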
This study quantified normal age-related changes to the photoreceptor axons in the central macula using the birefringent properties of the Henle fiber layer. A scanning laser polarimeter was used to acquire 15 × 15 deg macular images in 120 clinically normal subjects, ranging in age from the third decade to the eighth. Raw image data of the macular cross were used to compute phase retardation maps associated with the Henle fiber layer. Annular regions of interest ranging from 0.25 to 3 deg eccentricity and centered on the fovea were used to generate intensity profiles from the phase retardation data, which were then analyzed using sine-curve fitting and the fast Fourier transform (FFT). The amplitude of a 2f sine curve was used as a measure of macular phase retardation magnitude. For the FFT analysis, the 2f amplitude, as well as the 4f amplitude, was normalized by the remaining FFT components. The amplitude of the 2f curve fit and the normalized 2f FFT component decreased as a function of age, while the eccentricity of the maximum value of the normalized 2f FFT component increased. The phase retardation changes in the central macula indicate structural alterations in the cone photoreceptor axons near the fovea as a function of age. These changes reflect either fewer cone photoreceptors in the central macula or a change in the orientation of their axons. This large-sample study demonstrates systematic changes in central cone photoreceptor morphology using scanning laser polarimetry.
Henle fiber layer; polarimetry; phase retardation; birefringence; fovea; aging; cones
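The 2f sine-curve amplitude described above can be extracted from an annular intensity profile by least squares, which for evenly sampled polar angles reduces to a Fourier projection. A minimal sketch follows; the study's normalization of the 2f and 4f components by the remaining FFT components is omitted.

```python
import math

def harmonic_amplitude(profile, f):
    """Least-squares amplitude of a sinusoid completing `f` cycles around
    an annulus, from intensities sampled at evenly spaced polar angles.
    Equivalent to twice the magnitude of the f-th DFT coefficient over n."""
    n = len(profile)
    a = sum(v * math.cos(2 * math.pi * f * i / n) for i, v in enumerate(profile))
    b = sum(v * math.sin(2 * math.pi * f * i / n) for i, v in enumerate(profile))
    return 2.0 * math.hypot(a, b) / n
```

The macular phase-retardation magnitude in the study corresponds to `harmonic_amplitude(profile, 2)` evaluated on each annulus's intensity profile.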
We evaluated the relationship between the size of the peripapillary crescent and the axial length (AL) of the eye, as well as the fine structure of the peripapillary crescent in selected eyes. Infrared fundus imaging and spectral domain optical coherence tomography (SDOCT) (Spectralis HRA+OCT, Heidelberg Engineering, Germany) centered at the fovea were performed on 72 healthy adults. On the infrared fundus images, we measured (a) the distance between the foveola and the temporal edge of the optic disc (FOD) and (b) the distance between the foveola and the temporal edge of the peripapillary crescent (FOC), if present. A peripapillary crescent was present at the nasal margin of the disc in 64% of the subjects. The FOD and FOC were 4.22 ± 0.46 mm and 3.97 ± 0.25 mm, respectively. Only the FOD was significantly correlated with axial length: as AL increased by 10%, the FOD increased by 13%, whereas the outer neural retina expanded by only 4% (as indicated by the FOC). This result emphasizes that retinal stretching may not mirror scleral growth, and the existence in some eyes of a difference between the photoreceptor margin and the RPE margin suggests that there could be slippage within the retina during eye growth.
myopia; optic disc; optic disc crescent; spectral domain optical coherence tomography; adaptive optics scanning laser ophthalmoscope
It is well known that object recognition requires spatial frequencies exceeding some critical cutoff value. People with central scotomas who rely on peripheral vision have substantial difficulty with reading and face recognition. Deficiencies of pattern recognition in peripheral vision might result in higher cutoff requirements, and may contribute to the functional problems of people with central-field loss. Here we asked about differences in spatial-cutoff requirements in central and peripheral vision for letter and face recognition.
The stimuli were the 26 letters of the English alphabet and 26 celebrity faces. Each image was blurred using a low-pass filter in the spatial frequency domain. Critical cutoffs (defined as the minimum low-pass filter cutoff yielding 80% accuracy) were obtained by measuring recognition accuracy as a function of cutoff (in cycles per object).
Our data showed that critical cutoffs increased from central to peripheral vision by 20% for letter recognition and by 50% for face recognition. We asked whether these differences could be accounted for by central/peripheral differences in the contrast sensitivity function (CSF). We addressed this question by implementing an ideal-observer model which incorporates empirical CSF measurements and tested the model on letter and face recognition. The success of the model indicates that central/peripheral differences in the cutoff requirements for letter and face recognition can be accounted for by the information content of the stimulus limited by the shape of the human CSF, combined with a source of internal noise and followed by an optimal decision rule.
Pattern recognition; Peripheral vision; Letters; Faces; Spatial-frequency bandwidth; Ideal observer
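Low-pass filtering in cycles per object, as used above to blur the letter and face images, can be sketched as follows. An ideal (sharp-cutoff) filter is assumed for illustration, and the staircase search for the 80%-accuracy cutoff is omitted.

```python
import numpy as np

def lowpass_cycles_per_image(img, cutoff):
    """Remove all spatial-frequency components above `cutoff` cycles per
    image from a 2-D grayscale image, via an ideal filter in the FFT domain."""
    h, w = img.shape
    fy = np.fft.fftfreq(h) * h   # vertical frequency, cycles per image
    fx = np.fft.fftfreq(w) * w   # horizontal frequency, cycles per image
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    mask = radius <= cutoff      # keep only frequencies at or below the cutoff
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))
```

Since the letters and faces fill the image, "cycles per image" here stands in for the cycles-per-object units in which the critical cutoffs were expressed.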
To identify unbiased methods for estimating the target vergence required to maximize visual acuity based on wavefront aberration measurements, experiments were designed to minimize the impact of confounding factors that have hampered previous research. Objective wavefront refractions and subjective acuity refractions were obtained for the same monochromatic wavelength. Accommodation and pupil fluctuations were eliminated by cycloplegia. Unbiased subjective refractions that maximize visual acuity for high-contrast letters were performed with a computer-controlled forced-choice staircase procedure, using 0.125 diopter steps of defocus. All experiments were performed for two pupil diameters (3 mm and 6 mm). As reported in the literature, subjective refractive error did not change appreciably when the pupil dilated. For 3 mm pupils, most metrics yielded objective refractions that were about 0.1 D more hyperopic than subjective acuity refractions. When pupil diameter increased to 6 mm, this bias changed in the myopic direction and the variability between metrics also increased. These inaccuracies were small compared to the precision of the measurements, which implies that most metrics provide unbiased estimates of refractive state for medium and large pupils. A variety of image quality metrics may thus be used to determine ocular refractive state for monochromatic (635 nm) light, achieving accurate results without the need for empirical correction factors.
We investigated how the perceptual visibility of a target influences the pattern of microsaccadic eye movements expressed during generalized flash suppression. We found that the microsaccade rate was highly dependent on the reported visibility of the target. On visible trials, the microsaccade rate promptly rebounded to the pre-onset level, whereas on invisible trials the rate remained low, reaching pre-onset levels hundreds of milliseconds later. In addition, the directional distributions of microsaccades were biased toward the target positions in the visible condition. The present findings indicate that microsaccade behavior is highly correlated with the perceptual state of target visibility, and suggest that measured microsaccade rate and direction are reliable indicators of perception.
Microsaccade; Generalized flash suppression; Multistable perception; Fixation; Visual attention
Previous studies have provided conflicting evidence regarding whether the magnocellular (M) or parvocellular (P) visual pathway is primarily responsible for triggering involuntary orienting. Here, we used event-related potentials (ERPs) to provide new evidence that both the M and P pathways can trigger attentional capture and bias visual processing at multiple levels. Specifically, cued-location targets elicited enhanced activity, relative to uncued-location targets, at both early sensory processing levels (indexed by the P1 component) and later higher-order processing stages (indexed by the P300 component). Furthermore, the present results show that these effects of attentional capture were not contingent on the feature congruency between the cue and the expected target, providing evidence that this biasing of visual processing was not dependent on top-down expectations or within-pathway priming.
involuntary; attention capture; ERP; P1; P3; P300