
Projected disparity, not horizontal disparity, predicts stereo depth of 1-D patterns

Abstract

Binocular disparities have a straightforward geometric relation to object depth, but the computation that humans use to turn disparity signals into depth percepts is neither straightforward nor well understood. One seemingly solid result, which came out of Wheatstone’s work in the 1830’s, is that the sign and magnitude of horizontal disparity predict the perceived depth of an object: ‘Positive’ horizontal disparities yield the perception of ‘far’ depth, ‘negative’ horizontal disparities yield the perception of ‘near’ depth, and variations in the magnitude of horizontal disparity monotonically increase or decrease the perceived extent of depth. Here we show that this classic link between horizontal disparity and the perception of ‘near’ versus ‘far’ breaks down when the stimuli are one-dimensional. For these stimuli, horizontal is not a privileged disparity direction. Instead of relying on horizontal disparities to determine their depth relative to that of two-dimensional stimuli, the visual system uses a disparity calculation that is non-veridical yet well suited to deal with the joint coding of disparity and orientation.

Keywords: Stereopsis, depth perception, disparity, intersection-of-constraints

Introduction

Charles Wheatstone showed in 1838 that binocular disparities suffice for the perception of stereoscopic depth and that horizontal disparities play a special role in depth perception. Indeed, perceived depth increases monotonically with horizontal disparity from threshold to (and even somewhat beyond) the limit of binocular fusion (Ogle, 1952). The horizontal disparity axis has a fundamental polarity: Objects whose horizontal disparities have one sign (‘positive’ or ‘uncrossed’) are seen on the ‘far’ side of the fixation plane and those having the opposite sign (‘negative’ or ‘crossed’) are seen on the ‘near’ side. Thus, depth polarity depends on whether the local retinal image in the left eye is to the left or the right of the corresponding image in the right eye. Many primate visual neurons respond best to stimuli having specific combinations of disparity magnitude and direction, forming a distribution of preferred disparity directions that can be described as essentially isotropic (e.g., Barlow, Blakemore & Pettigrew, 1967; von der Heydt, Adorjani, Hänny & Baumgartner, 1978; Ohzawa & Freeman, 1986; Anzai, Ohzawa & Freeman, 1999; Prince, Pointon, Cumming & Parker, 2002). However, there are uncertainties in these measures (Serrano-Pedraza & Read, 2009), as well as deviations from the general finding (e.g., DeAngelis, Ohzawa & Freeman, 1991) and some contrary evidence (Cumming, 2002). Moreover, it is uncertain whether or how the visual system uses disparities with directions other than horizontal to estimate object depth. In an attempt to reduce this uncertainty, we examined psychophysically the calculation of the depth of a one-dimensional (1-D) stimulus relative to a two-dimensional (2-D) stimulus.

1-D stimuli (gratings, lines, edges) have an ambiguous disparity direction. This is because the component of the disparity that is parallel to the stimulus is undetectable (the ‘aperture problem’) (Morgan & Castet, 1997; Farell, 1998). However, even if the effective disparity direction were known, the question of how the visual system uses disparity parameters to compute the depth of 1-D stimuli would remain unresolved. Previous studies have assumed, reasonably, that 1-D stimuli have some intrinsic disparity direction and a particular stereoscopic depth that depends on the sign and magnitude of the horizontal component of this disparity, just as for 2-D stimuli (van Ee & Schor, 2000; van Dam & van Ee, 2004; Ito, 2005). However, testing these assumptions requires independently varying the magnitude and the direction of the relative disparity of the 1-D stimulus and a reference stimulus, and this has not been done. Moreover, it is not clear that computations that work for 2-D stimuli generalize to the 1-D case; the stereo properties of 1-D stimuli might be processed differently from, and as a precursor to, those of 2-D stimuli (Farell, 1998; Patel et al., 2003; Patel, Bedell & Sampat, 2006; Qian & Zhu, 1997). We looked at suprathreshold disparities to find out how disparity determines depth for 1-D stimuli. We measured the disparity that gave a grating the same apparent depth as a plaid, whose role was that of a reference stimulus. The plaid’s disparity could take on any of several different magnitudes and directions (not just horizontal). Because the plaid is 2-D, we could independently vary the direction of its disparity and the orientations of its 1-D components, allowing us to dissociate effects of these two variables.

We find that the relative-depth calculation for these stimuli relies on horizontal disparities when the disparity of the 2-D reference stimulus is horizontal. We also find that this is a special instance of a more general rule. By this rule, the perceived relative depth of these stimuli can be predicted from a simple disparity-vector calculation in which relative horizontal disparity plays no privileged role. This finding leads to the unique prediction that two stimuli can have equivalent perceived depths even though their horizontal disparities have opposite signs. We tested this prediction and confirmed it.

Methods

Stimuli

The central stimulus was a Gabor patch; surrounding this grating was a plaid annulus that served as a reference stimulus. The stimuli were achromatic. The sinusoids had a contrast of 0.1 and a spatial frequency of 2 cycles/degree, and the plaid was composed of two such sinusoids with differing orientations; in a control condition, the contrast of the central Gabor was doubled to 0.2 to match the contrast of the plaid. The standard deviation (σ) of the Gaussian envelope of the central patch was 0.53° of visual angle in the horizontal and vertical directions. The windowing of the annulus was Gaussian along the radial direction, with σ = 0.34°. The peaks of the center and surround Gaussian envelopes were separated by a distance of 2° of visual angle. The grating-plaid pair was presented alone; there was no separate fixation stimulus (which might serve as an undesired reference stimulus).

Across all conditions, the orientation of the central grating ranged from 45° to 165° in 15° steps, where 0° and 180° are horizontal. The components of the plaids had orientations of 60° and 120° or of 30° and 150°.

The disparities of the contrast envelopes were fixed at zero; the only non-zero disparities were interocular phase shifts of the carriers. As a result, the disparity direction of the envelope was equally different from the disparity directions of all the carriers, regardless of what these directions might be; this neutral status could be obtained only by giving envelopes a fixed disparity (or by randomizing the envelope disparities). The envelopes’ zero disparity also avoided uncertainty about whether observers judged the relative depth of envelopes or carriers, an uncertainty arising when envelope and carrier disparities are correlated.
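
To make the carrier/envelope manipulation concrete, the sketch below (MATLAB, with parameter names of our own choosing; it is a minimal illustration, not the experimental code) renders the two half-images of a Gabor patch whose carrier carries an interocular phase disparity while its Gaussian contrast envelope is identical in the two eyes, and converts the phase shift into the equivalent spatial disparity at 2 cycles/degree.

  % Minimal sketch (assumed parameter names; not the original stimulus code):
  % a Gabor whose carrier has an interocular phase disparity while the
  % contrast envelope has zero disparity, as in the Methods.
  sf_cpd    = 2;        % carrier spatial frequency, cycles/degree
  contrast  = 0.1;      % Michelson contrast
  theta_deg = 90;       % carrier orientation, degrees (0 = horizontal)
  sigma_deg = 0.53;     % envelope standard deviation, degrees
  phase_dsp = 15;       % interocular phase disparity, degrees of phase (illustrative value)

  % Equivalent spatial disparity, measured perpendicular to the carrier:
  % one period = 60/sf_cpd arcmin, so 15 deg of phase is 1.25 arcmin here.
  dsp_arcmin = (phase_dsp/360) * (60/sf_cpd);

  % Render the two half-images on a grid of visual-degree coordinates.
  pix_per_deg = 40;
  [x, y] = meshgrid(-2 : 1/pix_per_deg : 2);            % degrees
  u = x*cosd(theta_deg - 90) + y*sind(theta_deg - 90);  % axis perpendicular to the carrier
  envelope = exp(-(x.^2 + y.^2) / (2*sigma_deg^2));     % same in both eyes: zero envelope disparity
  gaborL = 0.5 + 0.5*contrast .* envelope .* sin(2*pi*sf_cpd*u + deg2rad(+phase_dsp/2));
  gaborR = 0.5 + 0.5*contrast .* envelope .* sin(2*pi*sf_cpd*u + deg2rad(-phase_dsp/2));

Splitting the phase shift symmetrically between the eyes is one of several equivalent conventions; only the interocular difference matters.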

The center and surround patterns appeared simultaneously for 150 ms, with abrupt onsets and offsets, on a pair of large calibrated monitors viewed through a mirror stereoscope; the viewing distance was 125 cm. Ocular alignment via nonius lines preceded stimulus presentation. The outer visible limit of the contours of the annulus was separated vertically by approximately 3.8° from the nearest visual non-uniformity (the horizontal edges of the monitors’ screens). The comparable separation in the horizontal direction would have been 5.8°, but we occluded the vertical edges of the monitors to prevent observers from fusing them and using them as reference stimuli for depth judgments. The two occluders were vertical, one located to the right of the right eye and one to the left of the left eye, at a distance of approximately 3 cm from the eye. The remaining objects in the visual field were either very low-contrast and non-salient or far (meters) from the depth of the experimental stimuli and hence quite useless as reference stimuli. The absolute phases of the central grating and the components of the plaid were independently randomized (identically in the two eyes) on each trial to prevent the learning of stimulus position cues.

The background luminance was 21 cd/m2, which was also the patterns’ mean luminance. Observers used a chin rest and viewed the displays with natural pupils in a moderately lit room. The monitors were driven by a Macintosh G5 computer via attenuators to boost resolution (Pelli & Zhang, 1991); consequently, only the monitors’ green guns were used. Stimuli were generated and controlled by a Matlab (Mathworks, Inc.) program incorporating elements of Psychtoolbox software (Brainard, 1997; Pelli, 1997).

In separate studies, we have found that the only plaid variables that affect perceived depth in experiments like those reported here are the magnitude and direction of the disparity of the plaid as a whole; varying the orientation of the components (either the absolute orientation or the orientation relative to the central grating) or the distribution of disparities across components had no effect, provided these changes conserved the plaid’s disparity vector (Chai & Farell, 2009). In addition, the separation between the stimuli (here 2°) has only minor effects on stereoacuity for pairs of Gabor patches (Farell & Fernandez, 2008); stimulus eccentricity, rather than separation, seems to limit stereo performance. These results contributed to the choice of parameters used in this study.

Procedure

To obtain psychometric functions for perceived depth, we varied the disparity of the central grating from trial to trial by drawing randomly from a set of six linearly spaced, preselected values. These constant-stimulus values were selected on the basis of preliminary data to provide an approximately symmetrical bracketing of the point of subjective equality. The disparity of the plaid was held constant within a block of trials, as were all other parameters of the plaid and the grating except absolute phase (these constraints were changed in the control experiment described below). The disparity of the plaid had a direction of 0°, 30°, or 60° (measured from right eye to left eye) and a magnitude of 1.28 or 2.56 arcmin. [The disparities of the plaid’s components were quite different in some conditions (for example, positive for one component and zero for the other). However, superimposed sinusoidal gratings cohere in depth (Adelson & Movshon, 1984; Farell, 1998; Farell & Li, 2004; Delicato & Qian, 2005); the differing component disparities do not lead to the perception of transparency.] We also obtained psychometric functions when the disparity of the plaid was zero.

After aligning the nonius lines at the start of each trial, observers initiated the stimulus presentation with a click of the mouse and classified the central grating as appearing ‘near’ or ‘far’ relative to the surrounding plaid. They signaled their classification by clicking on-screen buttons that appeared after stimulus presentation. Because perceived-depth judgments are subjective, no feedback regarding response accuracy was possible. Data were gathered in runs of 60 trials, preceded by 6–8 warm-up trials, with at least 4 runs contributing to each data point. A bootstrap procedure (Wichmann & Hill, 2001) fit a cumulative Gaussian to the proportion of ‘far’ judgments for each grating disparity.
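
As an illustration of how a PSE can be read off such data, the sketch below fits a cumulative Gaussian to the proportion of 'far' judgments by probit regression (MATLAB's glmfit); this is a simplified stand-in for the Wichmann & Hill (2001) bootstrap procedure that was actually used, and the response counts are invented for the example.

  % Sketch only: probit fit to proportion-'far' data (invented counts),
  % standing in for the bootstrap-based fit used in the study.
  grating_dsp = [-12 -8 -4 0 4 8];     % grating phase disparity, deg (six constant-stimulus values)
  n_far       = [  2  5 11 19 27 30];  % number of 'far' judgments at each disparity
  n_trials    = 32 * ones(1, 6);       % trials per disparity level

  % Cumulative Gaussian Phi((x - PSE)/sigma) fit by maximum likelihood.
  b     = glmfit(grating_dsp', [n_far' n_trials'], 'binomial', 'link', 'probit');
  PSE   = -b(1)/b(2);                  % 50%-'far' point: the disparity of the perceived depth match
  sigma = 1/b(2);                      % spread of the fitted cumulative Gaussian
  fprintf('PSE = %.2f deg of phase disparity\n', PSE);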

Control Experiment

A control experiment evaluated possible effects of eye-position strategies. The previous description of methods applies to this control experiment, with the following differences. Two alternative plaids appeared in random order within a block of trials. Both plaids contained components oriented at 45° and 135°. In the ‘positive disparity’ condition, one of the plaid’s components had a phase disparity of +15° and the other had a disparity of zero. The two plaids differed in their assignment of disparities to components: in one plaid the component oriented at 45° had zero disparity; in the other, the component oriented at 135° had zero disparity. This gave one plaid a disparity direction of −45° and the other plaid a disparity direction of +45°. The ‘negative disparity’ condition was identical, except for the sign of the non-zero plaid component disparity (−15° phase instead of +15°). These two plaids had disparity directions of +135° and −135°.

Test gratings were oriented at 30°, 150°, or 90°. These gratings were selectively paired with the plaids in order to generate stimulus pairs whose predicted depth-matching disparities differed in sign. Specifically, the oblique gratings had an orientation that lay between the horizontal and the disparity direction of the plaid with which they were paired: The 30° grating—equivalent to a grating oriented at −150°—appeared with a plaid whose disparity direction was 45° or −135°; the 150° grating—equivalent to a grating oriented at −30°—appeared with a plaid whose disparity direction was 135° or −45°. The 90° grating could appear with any of these plaids. Trial sequences were random within separate blocks for the positive and negative plaid disparity conditions. Thus, from trial to trial, the disparity direction of the plaid took on either of two values that differed by 90°; all three test grating orientations appeared within each block. The trial-to-trial variation in the disparity of the test grating was again under the control of a constant-stimulus procedure and other methodological details were as described above. Psychometric functions were based on 48 trials per point.
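
Under the sign conventions adopted in the sketch below (our own; they are an assumption, not taken from the experimental code), the disparity vector of a control-experiment plaid follows from its two component disparities by an intersection-of-constraints solve, and projecting that vector onto each test grating's perpendicular disparity axis reproduces the predicted depth-matching phase disparities quoted in the Results (−3.9° for the paired oblique grating, +10.6° for the vertical grating).

  % Sketch: disparity vector of a 'positive disparity' control plaid from its
  % component disparities, then the projection-rule prediction for the
  % depth-matching disparity of a test grating. Sign conventions are ours.
  sf_cpd = 2;                            % cycles/degree
  ph2min = @(ph) (ph/360)*(60/sf_cpd);   % phase (deg) -> spatial disparity (arcmin)

  comp_ori = [45 135];                   % component orientations, deg
  comp_dsp = ph2min([0 15]);             % perpendicular component disparities, arcmin

  % Intersection of constraints: the plaid disparity d satisfies n_i'*d = comp_dsp(i),
  % where n_i is the unit vector perpendicular to component i.
  N = [cosd(comp_ori' - 90), sind(comp_ori' - 90)];
  d = N \ comp_dsp';                     % plaid disparity vector, arcmin
  plaid_dir = atan2d(d(2), d(1));        % disparity direction (+/-45 deg, depending on convention)

  % Projection rule: at the depth match, the grating's perpendicular disparity
  % equals the projection of d onto the grating's perpendicular axis.
  for theta = [30 90]                    % test-grating orientations, deg
    n_g = [cosd(theta - 90); sind(theta - 90)];
    dsp = n_g' * d;                      % predicted matching perpendicular disparity, arcmin
    fprintf('theta = %2d deg: %+.1f deg of phase\n', theta, dsp/(60/sf_cpd)*360);
  end

Under these conventions, the 150° grating paired with the other positive-disparity plaid (disparity direction −45°) yields the same −3.9° prediction by symmetry.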

Observers

Data were recorded from 3 observers (2 in the control condition); the data of two of the three are displayed and those of the third observer closely resembled those of the others. All observers had normal acuity (after correction, if needed) and normal stereo vision; all were well practiced in stereo tasks, but one (S2) did not know the specific purposes of the experiments. Observers gave their informed written consent before participating in the experiments, the protocol for which conforms to the Declaration of Helsinki and was approved by the Institutional Review Board of Syracuse University.

Results

We paired a sinusoidally modulated luminance grating and a plaid made of two superimposed gratings and presented them in a center-surround configuration (Fig. 1A) for 150 ms on each trial. For each plaid disparity, we determined the point of subjective equality (PSE), the 50%-point on the psychometric function. This identifies the grating disparity at which the two stimuli are perceived as having the same depth. The main result is a comparison of the depth-matching functions of two grating-plaid pairs in which the grating orientations differ, one grating oriented at 90° and the other at 45°; these data appear later (Fig. 3). To show where our predictions for these two stimuli come from, we first present PSE data for many combinations of grating orientation and plaid disparity direction and magnitude (Fig. 1C). The grating orientations ranged from 45° to 165° and the plaid disparity directions were 0°, 30°, and 60°, with magnitudes of 1.28 and 2.56 arcmin.

Figure 1
Center/surround stereogram and depth-matching functions
Figure 3
Test of the disparity-projection prediction

The general effect of orientation on depth matches

At the PSE, the horizontal disparities of the grating and the plaid were approximately equal in one condition, that in which the disparity direction of the plaid was horizontal. In general, the similarity of the horizontal disparities of the two stimuli did not predict a depth match. Rather, in order to match the plaid in depth, the grating had to have a perpendicular disparity magnitude that varied with the angular difference between the plaid’s disparity direction, φ, and the grating’s orientation, θ. This quantity is illustrated in Figure 1B. As the angular difference |φ − θ| increased, the disparity magnitude of the grating needed to maintain a depth match between the two stimuli also increased. This is shown in Figure 1C, where the difference between the plaid’s disparity direction and the grating’s orientation, |φ − θ|, appears on the x-axis and the relative disparity magnitude of the grating at the depth match appears on the y-axis. The depth-matching disparity of the grating is a sinusoidal function of the orientation-direction difference, approximating Dp·sin(|φ − θ|), where Dp is the plaid’s disparity magnitude.
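
In code, this empirical rule amounts to a one-line prediction; the function below (a sketch, with names of our own choosing) returns the grating disparity magnitude expected to produce a depth match, given the plaid's disparity magnitude and the orientation-direction difference.

  % Sketch: predicted depth-matching grating disparity magnitude (same units
  % as Dp) under the fitted rule Dp*sin(|phi - theta|).
  match_dsp = @(Dp, phi_deg, theta_deg) Dp .* abs(sind(phi_deg - theta_deg));

  % Example: a plaid disparity of 2.56 arcmin at direction 30 deg and a grating
  % oriented at 120 deg give |phi - theta| = 90 deg, so a predicted match at 2.56 arcmin.
  match_dsp(2.56, 30, 120)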

The perception of a depth match between the stimuli varies with the angular difference |φ − θ| in a similar way whether the disparity of the plaid is in the horizontal direction or as far as 60° from horizontal (Fig. 1C). The correlation between the raw depth-matching disparities and the quantity Dp·sin(|φ − θ|) was 0.97 and 0.98 for the two observers shown in Figure 1C (and 0.97 for the third), and averaged 0.96, 0.98, and 0.99 for the three plaid disparity directions, 0°, 30°, and 60°, respectively (p < 0.001 in all cases). Because the disparity of the grating at the depth match varies with the angular difference |φ − θ| and does not change with the plaid’s disparity direction, horizontal disparity does not enter into the relationship shown in Figure 1C.

The traditional hypothesis, that horizontal disparity determines perceived depth, makes quite different predictions. This is shown in Figures 2A and 2B, where the plaid’s disparity vector appears as a red arrow superimposed on schematized gratings, whose disparity vectors appear as blue arrows. Whether the disparity of the plaid is horizontal (Fig. 2A) or oblique (Fig. 2B), one might use either of two measures of horizontal disparity as a reasonable predictor of the grating’s depth-matching disparity. At the depth match, the grating’s disparity measured horizontally (broken blue arrows in Figs. 2A and B) would be the same as the horizontal component of the plaid’s disparity; alternatively, the horizontal component of the grating’s perpendicular disparity vector (solid blue arrow) would be the same as the horizontal component of the plaid’s disparity. The predicted depth-matching phase disparities differ between these two cases, as is clear from Figures 2A and 2B. But in both cases the gratings’ depth-matching disparities are predicted to be a function of the disparity direction of the plaid, |φ|, and the orientation of the grating, |θ|, but not of their difference. It is easy to find examples where changes to φ or θ should have no effect on the depth matches predicted by the horizontal-disparity hypothesis, because these changes have no effect on the horizontal disparity of either stimulus. One example is a flip of either φ or θ about the horizontal axis. But the flip does have an effect on actual depth matches, as shown by the effect it would have on the difference |φ − θ| in Figure 1C.
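
For comparison, the two horizontal-disparity readings of Figures 2A and 2B can be written out as follows (our own formulation of the hypothesis, with θ measured from horizontal and φ the plaid's disparity direction); both are expressed as the grating's perpendicular disparity magnitude at the predicted match, and neither depends on the difference |φ − θ|.

  % Sketch: depth-matching grating disparities predicted by the two
  % horizontal-disparity measures of Figs. 2A-B (our formulation; angles in deg).
  Hp = @(Dp, phi) Dp .* cosd(phi);                        % plaid's horizontal disparity component

  % (i) The grating's disparity measured horizontally equals Hp; expressed as a
  %     perpendicular disparity magnitude, that is Hp*|sin(theta)|.
  pred_horiz_measure = @(Dp, phi, theta) Hp(Dp, phi) .* abs(sind(theta));
  % (ii) The horizontal component of the grating's perpendicular disparity
  %      vector equals Hp, giving a magnitude of Hp/|sin(theta)|.
  pred_perp_vector   = @(Dp, phi, theta) Hp(Dp, phi) ./ abs(sind(theta));

  % Flipping theta about horizontal (45 -> 135 deg) leaves both predictions
  % unchanged, whereas the data of Fig. 1C follow Dp*sin(|phi - theta|), which changes.
  [pred_horiz_measure(2.56, 60, 45),  pred_perp_vector(2.56, 60, 45); ...
   pred_horiz_measure(2.56, 60, 135), pred_perp_vector(2.56, 60, 135)]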

Figure 2
Predicted depth-matching disparities

In geometrical terms, the sinusoidal relationship seen in Figure 1C predicts a depth match when the disparity vectors of the grating and the plaid project equally onto the grating’s perpendicular disparity axis (Fig. 2C). Equivalently, it predicts a depth match when the two stimuli have equal disparity magnitudes in the direction of the plaid’s disparity. In effect, one of the stimulus disparity axes, not the horizontal axis, serves as a reference direction along which the two disparity magnitudes are compared. In the case of a vertical grating, as in Figure 2C, the horizontal disparity and disparity projections predict the same depth-matching disparity. Yet any rotation of Fig. 2C would change the horizontal disparities of the stimulus pair (both the relative and the absolute values). But the rotation leaves the projected disparities unchanged, so it should conserve the perceptual depth match between the stimuli.1
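
The rotation argument can be checked numerically: rotating the entire configuration (both disparity vectors and the grating's perpendicular axis) changes the horizontal disparities but not the projections that the rule compares. The values in the sketch below are chosen only for illustration.

  % Sketch: a rigid rotation of the stimulus pair changes horizontal disparities
  % but leaves the projected disparities, and hence the predicted depth match, intact.
  Dp = 2.56; phi = 60; theta = 90;                 % plaid disparity (arcmin, deg); grating orientation (deg)
  d_plaid = Dp * [cosd(phi); sind(phi)];           % plaid disparity vector
  n       = [cosd(theta-90); sind(theta-90)];      % grating's perpendicular disparity axis
  d_grat  = (d_plaid.' * n) * n;                   % grating disparity vector at the predicted match

  for rot = [0 25 90]                              % rotate the whole configuration by rot degrees
    R  = [cosd(rot) -sind(rot); sind(rot) cosd(rot)];
    dp = R*d_plaid;  dg = R*d_grat;  nr = R*n;
    fprintf('rot %3d deg: horiz. %+5.2f / %+5.2f arcmin; projections %5.2f / %5.2f arcmin\n', ...
            rot, dp(1), dg(1), nr.'*dp, nr.'*dg);
  end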

Same depth from horizontal disparities of different signs

Figure 1C shows that the magnitude of horizontal disparities does not reliably predict the depth of a grating relative to a plaid. We next show that what applies to magnitude applies also to polarity: The sign of horizontal disparity does not predict the most basic depth distinction, that between ‘near’ and ‘far’. Gratings whose orientation, θ, lies between the horizontal and the plaid’s disparity direction provide the critical prediction—gratings for which 0° < θ < φ when |φ| < 90°, or ±180° > θ > φ when |φ| > 90°. Applying the disparity-projection prediction Dg = Dp·sin(|φ − θ|), one finds that in these cases the grating and the plaid should have the same perceived depth when their horizontal disparities have opposite signs. This is shown in Figure 2D. The disparity-projection hypothesis predicts that the grating, whose horizontal disparity component is negative, should have the same apparent depth as the plaid, whose horizontal disparity component is positive. The grating’s predicted negative disparity at the depth match holds whether one measures the disparity in the horizontal or the perpendicular direction.
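
A worked example of the sign prediction, with a disparity magnitude chosen for illustration: for a plaid with disparity direction φ = 60° and gratings oriented at 45° and 90° (the configuration tested next), the projected disparity that matches the plaid has a negative horizontal component for the 45° grating and a positive one for the 90° grating, while the plaid's horizontal component is positive in both cases.

  % Sketch (illustrative magnitude): horizontal-disparity signs at the predicted
  % depth match for a plaid whose disparity direction is phi = 60 deg.
  Dp = 2.56; phi = 60;
  d_plaid = Dp * [cosd(phi); sind(phi)];           % plaid disparity vector, arcmin
  for theta = [45 90]                              % grating orientations
    n  = [cosd(theta-90); sind(theta-90)];         % grating's perpendicular disparity axis
    dg = (d_plaid.' * n) * n;                      % grating disparity vector at the predicted match
    fprintf('theta = %2d deg: grating horiz. %+.2f arcmin, plaid horiz. %+.2f arcmin\n', ...
            theta, dg(1), d_plaid(1));
  end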

To test the prediction, we again measured the perceived depth of a grating relative to a positive-disparity plaid (disparity direction: 60°). We did this using a grating having an orientation of 90° (vertical) and, separately, using a grating having an orientation of 45°. The 90° grating should have a positive disparity when it is perceived to have the same depth as the plaid (Fig. 2C), whereas the 45° grating should have a negative disparity (Fig. 2D). The psychometric functions of Figure 3 show the proportion of trials in which the 90° grating (red curve) and the 45° grating (blue curve) were judged as ‘far’ relative to the plaid. The PSEs (50% “far” judgments, marked by arrows) show that when these gratings were seen at the same depth as the plaid, their phase disparities had opposite signs, as predicted.

Thus, two gratings, one with negative horizontal disparity, one with positive horizontal disparity, are both seen at the same depth as the plaid. The horizontal component of the plaid’s disparity was +0.72 arcmin. At its depth match with the plaid, the 45° grating had a horizontal spatial disparity of −1.17 arcmin averaged across observers; the horizontal component of the grating’s perpendicular disparity was −0.58 arcmin. The corresponding horizontal disparities for the 90° grating were positive, +1.0 arcmin for both measures.

Test for eye-position strategies

Figure 3 gives accurate measures of retinal disparities only if observers’ lines of sight converged at the depth of the screen. Otherwise, the nominal and retinal disparities would differ. More to the point, while relative disparities are conserved across changes in eye position, the disparities of individual stimuli, such as those plotted in Figure 3, are not conserved. Because of this, an alternative account of the data might rest on observers’ ocular-position strategies. The reasoning is that observers might position their eyes before the 150 ms presentation of the stimuli so as to transform the nominal disparity values into values useful for some particular purpose (such as processing horizontal disparities). As one example of such a strategy, observers might induce a vertical phoria. A strategy of this kind might be difficult to implement (because observers get only brief glimpses of the experimental stimuli and have no stimulus they can fuse to give them the desired eye positions). In addition, it has unclear implications for perceived depth (because a side-effect of the strategy would be to give the stimulus envelopes non-zero disparities). Nonetheless, we ran a control experiment to test the idea that some sort of eye-positioning strategy might be behind the data of Figure 3. The control was simply to present alternative reference plaids with very different disparities within a single randomized trial sequence. Uncertainty about the plaid’s disparity magnitude and direction should thwart an eye-positioning strategy on at least half the trials. To extend the generality of the results shown in Figure 3, we changed the orientations of the test gratings and the plaid’s components, and the direction of the plaid’s disparity. We also tested both ‘near’ and ‘far’ plaid depths (see Methods).

We combined data from the two oblique test grating orientations (30° and 150°) and from the positive and negative plaid disparities. The signs of the disparities in the negative-disparity condition were notionally reversed to make the positive- and negative-disparity data sets commensurate. As a result, the oblique gratings should yield results similar to those of the 45° grating condition seen in Figure 3 if eye-positioning strategies played no role. The 90° grating conditions, too, should be similar.

The two observers whose data appear in Figure 3 ran in this experiment, doing so over a year after the other data had been collected. Despite the trial-by-trial variation of the plaid’s disparity direction and the grating’s orientation, the data, shown in Figure 4, are very similar to those of Figure 3. Oblique gratings with a mean phase disparity equivalent to −3.9° were seen to match the plaids in depth; the vertical grating with a phase disparity of +11.1° was seen to match the same plaids. These PSEs are in good agreement with the predictions (−3.9° and +10.6°).

Figure 4
Psychometric functions generated under reference disparity uncertainty

Discussion

Our data describe the link between disparity and the perception of the relative depth between 1-D and 2-D stimuli. Gratings having disparities with different magnitudes and opposite horizontal polarities can have the same perceived depth as a plaid presented as a reference stimulus (Figs. 3 and 4). Such a many-to-one relation between disparity and perceived depth is a general property of 1-D patterns and is not restricted to disparities yielding depth matches. For example, when the vertical grating had a disparity of zero, it was almost always seen as ‘near’ relative to the plaid, whereas the oblique gratings were almost always seen as ‘far’ when their disparity was zero (Figs. 3 and 4). It is not by comparing horizontal disparity values that the visual system computes the depth between a grating and a plaid. Rather, what matters is the relation between the disparity vectors of the two stimuli.

The disparity projection shown in Figures 2C and 2D is a variant of the intersection-of-constraints construction that has proved useful in velocity-space accounts of object-motion perception (Adelson & Movshon, 1982). As used here, the construction is applied to distinct and spatially separate stimuli, rather than to the components of a single stimulus. If the plaid’s disparity in our experiments had been constrained to be horizontal, as in Figure 2A, the resulting disparity projections would have been consistent with the traditional hypothesis that a grating’s perceived depth depends on the magnitude of its disparity measured horizontally. In fact, such a dependence simply reflects the horizontal disparity of the reference stimulus, not the underlying computational strategy.

Figures 3 and 4 show that very different disparities give a vertical grating and an oblique grating the same perceived depth as a plaid. These disparities, and the fact that the sign of one is positive and the other is negative, are stimulus properties. As such, they express the results of a measurement and say little about how the visual system matches left-eye and right-eye image information. But they do show that grating disparities have no inherent depth value; if the plaid’s disparity were to change, the gratings would no longer be seen as having similar depths relative to the plaid, though their disparities are unchanged. We verified this expectation by setting the plaid’s disparity to zero and again determining PSEs for gratings oriented at 90° and 45°. When these two gratings had the disparities that previously resulted in a depth match with the original plaid (grating phase disparities of approximately +12° and −10°, respectively, in Fig. 3), they appeared at opposing depths—one near and one far—relative to the zero-disparity plaid (PSEs marked by color-coded boxes on the ordinate of Fig. 3). For 1-D stimuli, then, transitivity in depth does not hold: Gratings that have the same apparent depth as one plaid will, without changing their disparities, appear to flank a different plaid in depth (Fig. 5).2

Figure 5
Perceived depth is a relational, not an intrinsic, function of disparity

Prior evidence on the horizontal disparity hypothesis

Two previous studies have measured depth matches between 1-D and 2-D stimuli (van Dam & van Ee, 2004; van Ee & Schor, 2000). The stimuli were a line and a disk; the disparity of the line was constant, while the disparity of the disk varied from trial to trial. One study (van Dam & van Ee, 2004) found that the line and the disk appeared at the same depth when their horizontal disparities were approximately the same, provided the line’s ends were effectively obscured. This agrees with our results, because the disparity of the disk in this study was strictly horizontal.

The disk in the other study (van Ee & Schor, 2000) could take on any of 12 disparity directions. If the disparity magnitude yielding a depth match had been measured for each of these disparity directions, the data of that study and ours could be compared directly. Instead, the disk had the same disparity magnitude in all 12 disparity directions; hence, its horizontal disparity differed from direction to direction.

The horizontal disparity of the disk at the depth match was somewhat less than that of the line, ranging from approximately 12 to 14 arcmin (versus 15 arcmin for the line). A horizontal disparity of 15 arcmin would be most consistent with our results. This is because a 15 arcmin horizontal disparity is the common disparity vector of the disk and the line. The discrepancy is not large and might be accounted for by the vertical disparity of the reference disk. Similar disks were used in the study of Friedman, Kaye and Richards (1978), who found that adding a vertical disparity component reduced the apparent depth of disks with constant horizontal disparities. Despite their lack of independence between disparity direction and magnitude, these disk-and-line studies (van Ee & Schor, 2000; van Dam & van Ee, 2004) produced results reasonably consistent with our data. Yet our data are not consistent with the interpretation given to the results of these studies, an interpretation that conforms to the traditional assumption that equal horizontal disparities produce equal perceived depths.

Patel et al. (2003) measured perceived depth using 2-D stimuli, but they manipulated the disparities of 1-D components. They measured the depth seen between two random-dot surfaces, one of which had non-zero disparity confined to two symmetrically oriented bands of spatial-frequency components. Components within these bands, whatever their frequency, had a constant (90°) phase disparity. Thus, the two bands had vertical disparities with opposing signs, giving the pattern an overall disparity direction that was horizontal. Changing the bands’ orientations shifted the distribution of horizontal disparities. The expected variation of perceived depth with band orientation was confirmed for orientations within roughly 60° of vertical. Of particular interest was the simulation Patel et al. (2003) ran to explore the contribution of mechanisms tuned to vertical orientations and horizontal disparities. These mechanisms could not make use of the disparity energy of oblique stimulus components oriented far from vertical. However, simulated mechanisms tuned to oblique orientations and strongly non-horizontal disparities could make use of them. The depth that humans see in similar stimuli presumably depends on mechanisms of this latter kind, much as depth discriminations at threshold depend on them (Farell, 2006; Patel, Bedell & Sampat, 2006).

Vertical disparity matters for perceived depth, but other factors seem to determine how much it matters. In the study just mentioned, for example, horizontal disparity predicted perceived depth when the center of the component orientation band was within approximately 60° of vertical (Patel et al., 2003). The prediction began to break down at shallower orientations, but the exact point of the break varied with orientation bandwidth. We found no clear relation between vertical disparities and either PSEs or psychometric function slopes in our data, despite the wide range of plaid disparity directions (from horizontal to within 30° of vertical) and grating orientations (from vertical to as close as 15° from horizontal). A possible contributing factor is the envelope disparities in our stimuli, which were zero. This might have allowed full expression of the tolerance to the vertical component of carrier disparities. Zero-disparity envelopes raise other issues, to which we turn next.

A note on zero-disparity envelopes

There are alternatives to the zero disparities of the contrast envelopes used in our study. However, there also are reasons for regarding envelope disparities that vary with the carrier disparities as problematic for answering the questions posed by our study. One problem with variable envelope disparities is deciding on their direction. What disparity direction should a grating’s envelope have? Horizontal? The same direction as the plaid’s disparity? The same direction as the grating’s? And which direction is that?

However this problem is resolved, the result would be envelopes and carriers with correlated disparities. The depth matches subsequently measured might then be matches between the envelopes, the carriers, or some combination of the two. In two of these three cases, our original aim of obtaining matches that varied with the carrier disparities would go unmet.

With zero-disparity envelopes, carrier and envelope disparities are uncorrelated in magnitude. As noted in Methods, zero-disparity envelopes also result in an equal difference between the disparity directions of envelope and carrier regardless of what the carrier disparity directions are. And whatever depth-cue conflicts the envelopes might contribute to, they are the same, when averaged across trials, for the grating and the plaid, just as is the case for depth-cue conflicts contributed by other objects that might be visible at the time of stimulus presentation. Finally, there is the challenge of explaining how isotropic envelopes with fixed disparity interact with differences in carrier disparity directions to account for the depth-match data.

1-D stimuli and 1-D components

Positive horizontal disparities are associated with ‘far’ depth, and negative disparities with ‘near’ depth, for both geometrical and perceptual reasons. Geometrically, a stimulus placed behind the point of fixation will cast retinal images that have a disparity with the opposite sign from those of a stimulus placed on the near side. Perceptually, an artificially created disparity characteristically gives rise to a percept whose depth polarity, ‘near’ versus ‘far’, depends on the sign of the horizontal disparity, negative versus positive, respectively.

Despite this close association, there are indications that the visual system makes use of the sign of disparity as a cue to ‘near’ versus ‘far’ depth only at rather late stages of the depth computation. Consider a ‘near’ plaid composed of one sinusoidal component with zero disparity and another with positive disparity. Such a plaid is easily constructed using components of the proper orientations—those that give the 2-D features of the plaid (the ‘blobs’) a negative disparity—so the ‘near’ depth percept is expected (Farell, 1998). However, rather than detecting the negative disparity of the plaid directly, the visual system could construct it from the disparities of the components, though neither component disparity is negative. Effects of adaptation suggest that this is in fact what happens. Adapting observers to a stimulus in depth affects subsequent judgments of the plaid’s depth. However, this occurs if the adapting stimulus has a positive disparity near that of the component, not a negative disparity near that of the plaid. Thus, an effective adaptor can be located in depth on the other side of the fixation plane from the plaid whose depth it influences (Farell, 1998). The implication is that the visual system uses the disparities of 1-D components to calculate the depth of 2-D patterns. The present data show that this computational strategy of using combined disparity and orientation information extends to relative depth judgments between spatially separate 1-D and 2-D patterns.

As is clear from Figures 3 and 4, the disparity-projection algorithm can give rise to non-veridical estimates of depth. This could happen when we view natural scenes. Though generally two-dimensional, natural-object images often possess 1-D-like features. These occur in the local texture of object surfaces, at the borders of objects, and within apertures. The disparity properties of these features are similar to those of gratings and other 1-D stimuli (Farell, 1998; Malik, Anderson & Charowhas, 1999). Thus, we can expect depth estimates of these 1-D image features to vary systematically with the features’ orientations whenever the disparity of the reference stimulus is non-horizontal. There are a number of sources of non-horizontal disparities in natural-image viewing (Howard & Rogers, 2002) in addition to those giving rise to the 1-D-like features mentioned above.

Conclusions

Our results show that depth percepts for 1-D stimuli are relational, not intrinsic, functions of disparity. This applies even to depth order: Whether the horizontal disparity of a 1-D stimulus is positive or negative does not in general predict whether it will appear ‘near’ or ‘far’ relative to another stimulus having a known disparity. Of course, horizontal disparities are particularly salient cues for calculating the depth of the 2-D patterns that dominate our visual landscape. Therefore, it may seem odd that when calculating the depths of 1-D patterns, human stereo vision engages a computation that uses non-horizontal as well as horizontal disparities and yields non-veridical, orientation-specific depth estimates, as seen in the psychometric functions of Figures 3 and 4. Yet this computation can be carried out using a well-documented physiological substrate (Barlow, Blakemore & Pettigrew, 1967; von der Heydt, Adorjani, Hänny & Baumgartner, 1978; Ohzawa & Freeman, 1986; Anzai, Ohzawa & Freeman, 1999; Qian & Zhu, 1997; Ohzawa, DeAngelis & Freeman, 1990) and may underlie the processing of stereo depth generally. The initial coding of disparities appears not to be sensitive to the disparities of 2-D retinal images directly. Instead, the disparities of 2-D patterns are derived by combining the disparities of the multiple 1-D components of these patterns (Farell, 1998; Patel et al., 2003; Patel, Bedell & Sampat, 2006). These component disparities differ from the overall disparity of the pattern itself and are initially encoded jointly with component orientations (Farell, 2006). As a result, it is only at a later stage of analysis that 2-D pattern disparities, which are primarily horizontal in natural scenes and give 3-D spatial structure and position to visual objects, can be represented separately from orientation. Yet it is now clear from the results of the present experiments that individual 1-D patterns retain this joint coding of disparity and orientation through all stages leading to perception.

Acknowledgments

We thank Robert Barlow, Suzanne McKee, Fred Kingdom, Denis Pelli, Katharine Tillman, Hong Xu, and an anonymous reviewer for their helpful comments. We are also indebted to our subjects for their time and effort. Supported by NIH Grant EYR01-012286 (B.F.).

Footnotes

1. A uniform vertical disparity by itself does not support stereoscopic depth perception, and vertical disparity gradients only modulate the stereo depth of stimuli many times larger than those used here (Rogers & Bradshaw, 1993). Therefore, we would expect in practice that depth matches will be rotationally invariant provided the rotation avoids creating disparities that are effectively vertical.

2. This suggests that a 2-D stimulus with zero disparity might have an implicit disparity axis that is horizontal. The colored boxes on the ordinate of Figure 3 give the direction of perceived depth for gratings with zero disparity relative to a plaid with oblique disparity. They suggest that the implicit disparity direction for 1-D stimuli with zero disparity is perpendicular to the stimulus orientation, not horizontal.


References

  • Adelson EH, Movshon JA. Phenomenal coherence of moving visual patterns. Nature. 1982;300:523–525.
  • Adelson EH, Movshon JA. Binocular disparity and the computation of two-dimensional motion. Journal of the Optical Society of America A. 1984;1:1266.
  • Anzai A, Ohzawa I, Freeman RD. Neural mechanisms for processing binocular information. II. Complex cells. Journal of Neurophysiology. 1999;82:909–924.
  • Barlow HB, Blakemore C, Pettigrew JD. The neural mechanism of binocular depth discrimination. Journal of Physiology. 1967;193:327–342.
  • Brainard DH. The Psychophysics Toolbox. Spatial Vision. 1997;10:433–436.
  • Chai YC, Farell B. From disparity to depth: How to make a grating and a plaid appear in the same depth plane. Journal of Vision. 2009, in press.
  • Cumming BG. An unexpected specialization for horizontal disparity in primate visual cortex. Nature. 2002;418:633–636.
  • DeAngelis GC, Ohzawa I, Freeman RD. Depth is encoded in the visual cortex by a specialized receptive field structure. Nature. 1991;352:156–159.
  • Delicato LS, Qian N. Is depth perception of stereo plaids predicted by intersection of constraints, vector average or second-order feature? Vision Research. 2005;45:75–89.
  • Farell B, Fernandez JF. Orientation difference, spatial separation, intervening stimuli: What degrades stereoacuity and what doesn’t. Journal of Vision. 2008;8:91a.
  • Farell B, Li S. Perceiving depth coherence and transparency. Journal of Vision. 2004;4:209–223.
  • Farell B. Two-dimensional matches from one-dimensional stimulus components in human stereopsis. Nature. 1998;395:689–693.
  • Farell B. Orientation-specific computation in stereoscopic vision. Journal of Neuroscience. 2006;26:9098–9106.
  • Friedman RB, Kaye MG, Richards W. Effect of vertical disparity upon stereoscopic depth. Vision Research. 1978;18:351–352.
  • Howard IP, Rogers BJ. Seeing in Depth. Vol. 2: Depth Perception. Toronto: I. Porteus; 2002. pp. 233–237.
  • Ito H. Illusory depth perception of oblique lines produced by overlaid vertical disparity. Vision Research. 2005;45:931–942.
  • Malik J, Anderson BL, Charowhas CE. Stereoscopic occlusion junctions. Nature Neuroscience. 1999;2:840–843.
  • Morgan M, Castet JE. The aperture problem in stereopsis. Vision Research. 1997;39:2737–2744.
  • Ogle KN. On the limits of stereoscopic vision. Journal of Experimental Psychology. 1952;44:253–259.
  • Ohzawa I, Freeman RD. The binocular organization of complex cells in the cat’s visual cortex. Journal of Neurophysiology. 1986;56:243–259.
  • Ohzawa I, DeAngelis GC, Freeman RD. Stereoscopic depth discrimination in the visual cortex: neurons ideally suited as disparity detectors. Science. 1990;249:1037–1041.
  • Patel SS, Ukwade MT, Stevenson SB, Bedell HE, Sampath V, Ogmen H. Stereoscopic depth perception from oblique phase disparities. Vision Research. 2003;43:2479–2792.
  • Patel SS, Bedell HE, Sampat P. Pooling signals from vertically and non-vertically orientation-tuned disparity mechanisms in human stereopsis. Vision Research. 2006;46:1–13.
  • Pelli DG, Zhang L. Accurate control of contrast on microcomputer displays. Vision Research. 1991;31:1337–1350.
  • Pelli DG. The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spatial Vision. 1997;10:437–442.
  • Prince SJD, Pointon AD, Cumming BG, Parker AJ. Quantitative analysis of the responses of V1 neurons to horizontal disparity in dynamic random-dot stereograms. Journal of Neurophysiology. 2002;87:191–208.
  • Qian N, Zhu Y. Physiological computation of binocular disparity. Vision Research. 1997;37:1811–1827.
  • Rogers BJ, Bradshaw MF. Vertical disparities, differential perspective and binocular stereopsis. Nature. 1993;361:253–255.
  • Serrano-Pedraza I, Read JCA. Stereo vision requires an explicit encoding of vertical disparity. Journal of Vision. 2009;9:1–13.
  • van Dam LCJ, van Ee R. Stereoscopic matching and the aperture problem. Perception. 2004;33:769–787.
  • van Ee R, Schor CM. Unconstrained stereoscopic matching of lines. Vision Research. 2000;40:151–162.
  • von der Heydt R, Adorjani C, Hänny P, Baumgartner G. Disparity sensitivity and receptive field incongruity of units in the cat striate cortex. Experimental Brain Research. 1978;31:523–545.
  • Wheatstone C. Contributions to the physiology of vision. Part the first. On some remarkable, and hitherto unobserved, phenomena of binocular vision. Philosophical Transactions of the Royal Society of London. 1838;128:371–394.
  • Wichmann FA, Hill NJ. The psychometric function: I. Fitting, sampling and goodness-of-fit. Perception and Psychophysics. 2001;63:1293–1313.