Definition of slant and tilt. A Slant is the angle between the surface normal (black vertical line segment) and the frontoparallel plane. Here the slant is varied while the tilt remains at 90°. B Tilt is the orientation of the vector formed by projection of the surface normal onto the frontoparallel plane. Here the tilt is varied while the slant remains at 45°.


Source publication
Preprint
Full-text available
Binocular stereo cues are important for discriminating 3D surface orientation, especially at near distances. We devised a single-interval task where observers discriminated the slant of a densely textured planar test surface relative to a textured planar surround reference surface. Although surfaces were rendered with correct perspective, the stimu...

Contexts in source publication

Context 1
... surface orientation is often specified in terms of slant and tilt (Stevens, 1983). Slant is the angle between the surface normal (the unit vector perpendicular to the surface) and the frontoparallel plane (Figure 1A). Tilt is the orientation of the vector formed by projection of the surface normal onto the frontoparallel plane (Figure 1B). ...
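The two definitions can be made concrete with a small helper. This is a minimal sketch, not code from the paper; it assumes the common convention in which slant is measured from the line of sight (the z-axis), so that a frontoparallel surface has slant 0°, and tilt is the direction of the normal's projection onto the frontoparallel plane:

```python
import math

def slant_tilt(nx, ny, nz):
    """Slant and tilt of a surface from its normal vector.

    Convention (an assumption, not the paper's notation): slant is the
    angle between the normal and the line of sight (z-axis), so a
    frontoparallel surface has slant 0; tilt is the orientation of the
    normal's projection onto the frontoparallel (x, y) plane, measured
    counterclockwise from the x-axis.
    """
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / norm, ny / norm, nz / norm
    slant = math.degrees(math.acos(abs(nz)))          # 0 deg = frontoparallel
    tilt = math.degrees(math.atan2(ny, nx)) % 360.0   # undefined when slant = 0
    return slant, tilt
```

For example, a surface whose normal is tipped back toward the vertical, n = (0, 1, 1), has a slant of 45° and a tilt of 90°.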
Context 2
... is the angle between the surface normal (the unit vector perpendicular to the surface) and the frontoparallel plane (Figure 1A). Tilt is the orientation of the vector formed by projection of the surface normal onto the frontoparallel plane (Figure 1B). ...
Context 3
... then picks the distance and slant that best explain the difference between the two images. Figure 3 illustrates this mapping for the more general case of an arbitrary 3D surface orientation (see also Appendix Figure A1). Figure 3B shows an image region in the right eye (red square) and its back-projection to the image plane of the left eye (blue trapezoid), for the correct distance and surface orientation (60° slant and 45° tilt). ...
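The mapping described here, predicting where an image region in one eye lands in the other eye for a hypothesized distance and surface orientation, can be sketched with the textbook plane-induced homography. This is generic projective geometry, not the paper's Figure A1 derivation, and all names are illustrative:

```python
import numpy as np

def plane_homography(n, d, t, R=None, f=1.0):
    """Plane-induced homography between two pinhole cameras.

    Maps homogeneous image points in camera 1 to camera 2 for scene points
    lying on the plane n . X = d (n a unit normal, d the plane's distance
    from camera 1). t is the translation between cameras (e.g. the
    interocular vector); R is the relative rotation (identity for parallel
    "eyes"). This is the standard H = K (R - t n^T / d) K^{-1}, not the
    paper's equations.
    """
    if R is None:
        R = np.eye(3)
    K = np.diag([f, f, 1.0])           # simple intrinsics: focal length f
    H = K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)
    return H
```

For a frontoparallel plane the homography reduces to a uniform horizontal shift (a constant disparity); slanted planes add the scale, shear, and orientation differences discussed later in the text.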
Context 4
... computations for each patch are basically the same as described above, except that the estimates are expressed as the slant ŝᵢ and distance ẑᵢ of the i-th patch, rather than the slant ŝ and intercept distance (see Figure 5). Directly using Equation 2 for the i-th image patch gives ... Figure A1 gives a formula equivalent to Equation 3, but with the minimization taken over local slant and distance: ...
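The per-patch minimization can be sketched as a brute-force grid search. Here `cost_fn` is a hypothetical stand-in for the paper's Equation 3 evaluated on a single image patch, and the candidate grids are illustrative:

```python
import math

def fit_patch(cost_fn, slants_deg, distances_cm):
    """Pick the (slant, distance) pair minimizing a per-patch cost.

    cost_fn(slant, distance) is a placeholder for a function comparing
    the right-eye patch with the back-projected left-eye patch; the
    grids of candidate slants and distances are assumptions.
    """
    best = (None, None, math.inf)
    for s in slants_deg:
        for z in distances_cm:
            c = cost_fn(s, z)
            if c < best[2]:
                best = (s, z, c)
    return best   # (best slant, best distance, minimum cost)
```

In practice one would evaluate the cost on a coarse grid and then refine around the minimum.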
Context 5
... 10A and 10B plot goodness-of-fit measures (RMSE and negative log-likelihood respectively) as a function of patch width. The arrows in Figure 10A indicate the patch widths of the predictions shown in Figure 9. They correspond to the best fits in terms of RMSE. ...
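The two goodness-of-fit measures named here can be written down directly. This is a generic sketch (independent Gaussian noise for the likelihood), not the paper's fitting code:

```python
import math

def rmse(pred, obs):
    """Root-mean-square error between predicted and observed values."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def gaussian_nll(pred, obs, sigma):
    """Negative log-likelihood of the observations under independent
    Gaussian noise of standard deviation sigma around the predictions."""
    n = len(obs)
    sse = sum((p - o) ** 2 for p, o in zip(pred, obs))
    return 0.5 * sse / sigma ** 2 + n * math.log(sigma * math.sqrt(2.0 * math.pi))
```

Note that RMSE and negative log-likelihood need not pick the same best-fitting patch width, since the likelihood also penalizes the assumed noise level.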
Context 6
... plots show that including the estimation-noise parameter greatly improves model predictions (shaded regions are 68% confidence intervals). Figure 10C shows the maximum-likelihood parameter estimates of each model (symbol color) for each patch width (symbol size). The estimated noise and scalar parameters are largest for the SCC model, smaller for the LPCC model, and smallest for the PCC model. ...
Context 7
... estimated noise and scalar parameters are largest for the SCC model, smaller for the LPCC model, and smallest for the PCC model. Figure 10. Results of the model fitting procedure. ...
Context 8
... conclude that the human slant discrimination thresholds were based entirely on binocular (stereo) cues. Figure 11A shows the depth discrimination thresholds of the three observers (dotted curves) and their average (solid curve), and Figure 11B shows the biases. The individual differences were smaller in this experiment than in the slant-discrimination experiment, and hence there is no scaling of the thresholds for the least and most sensitive participants. ...
Context 10
... it is possible to compare absolute efficiencies in the two tasks. The efficiency scale factors for aligning the PCC thresholds with each participant's thresholds in the high noise conditions are shown in Figure 12A. For two participants (P1 and P2), there is a trend for human efficiency to be higher in the slant discrimination experiment. ...
Context 11
... two participants (P1 and P2), there is a trend for human efficiency to be higher in the slant discrimination experiment. Figure 12. The efficiency scale factors estimated from two different experiments. ...
Context 12
... is also possible to estimate the efficiency scale factors for the SCC model. For all participants, the efficiency is higher in the slant experiment, and the confidence intervals do not overlap (Figure 12B). Overall, humans appear to be more efficient at estimating surface slant than surface distance. ...
Context 13
... second description most naturally leads to the hypothesis that early binocular receptive fields are explicitly coding the spatially structured patterns of binocular differences that are produced by back-projection of planar surfaces. For example, Figure 13 shows the binocular receptive fields that would respond best to a sine-wave textured surface at 100 cm, with a slant of 45 deg, for five different tilts. To emphasize the shape differences between the left and right receptive fields, the imaging plane was set to the same distance as the surface (100 cm). ...
Context 14
... emphasize the shape differences between the left and right receptive fields, the imaging plane was set to the same distance as the surface (100 cm). If the imaging plane were instead set at plus or minus 17 mm (the location of the retinas), then the left and right receptive fields would also differ in position. When the surface tilt is 0 deg (see Fig. 1B), the left and right receptive fields differ primarily in scale/frequency. When the surface tilt is 90 deg, the left and right receptive fields differ primarily in orientation. For other tilts there are scale, orientation, and shear differences (see also Figure 3). The third column shows the differences between the left and right ...
Context 15
... third column shows the differences between the left and right receptive fields. Figure 12B shows how the total energy of the difference of the left and right receptive fields varies with slant and tilt. The energy tends ...
Context 16
... modeling with generalized versions of the disparity energy model originally introduced by Ohzawa et al. (1990) showed that most of the useful disparity information is carried by standard horizontal disparity detectors (Bridge & Cumming, 2001; Bridge et al., 2001; Sanada & Ohzawa, 2006). Nonetheless, these models do not consider all of the structured disparity patterns associated with planar surfaces (see Figure 13). Also, the possible benefits of including information about structured disparity patterns may better emerge in models of population decoding that pool efficiently over all the relevant neurons (Bridge & Cumming, 2008; Greenwald & Knill, 2009; Kato et al., 2016). ...
Context 17
... equations in Appendix Figure A1 are based on projecting the scene onto an image plane. This is the common framework for representing camera images and leads to the simple equations used here. ...
Context 18
... vision science, it is also common to represent images in spherical coordinates, which is equivalent to projecting the scene onto spherical surfaces centered on the nodal point of each eye. If desired, it is straightforward to convert the equations in Figure A1 into spherical coordinates (azimuth and elevation) by substitution. It is also common in vision science to consider cases where the eyes are not in primary position (pointing straight ahead). ...
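The substitution described here, re-expressing an image-plane point as azimuth and elevation about the nodal point, can be sketched as follows; the helper name and focal-distance parameter are assumptions, not the paper's notation:

```python
import math

def image_to_fick(x, y, f=1.0):
    """Convert image-plane coordinates (x, y), at focal distance f from
    the nodal point, into Fick spherical coordinates in degrees:
    azimuth-longitude (rotation about the vertical axis first), then
    elevation-latitude. A hypothetical helper, not from the paper.
    """
    azimuth = math.degrees(math.atan2(x, f))
    elevation = math.degrees(math.atan2(y, math.hypot(x, f)))
    return azimuth, elevation
```

A point on the optic axis maps to (0°, 0°), and a point at x = f, y = 0 maps to an azimuth of 45°, as expected for this geometry.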
Context 19
... In the Appendix we provide formulas for generalizing the closed-form expressions in Figure A1 to the case where each eye is rotated arbitrarily about the three axes. ...
Context 20
... we derive the exact equations for mapping a point in the right image to the corresponding point in the left image, given a planar surface at a specified intercept distance, with slant s and a specified tilt, for the imaging geometry illustrated in Figure 2. The equations are shown in Figure A1. These equations hold for arbitrary slant, tilt, and distance, but for the current experiments the tilt was set to zero. ...
Context 21
... for the LPCC and SCC models, the equations are expressed in terms of distance z rather than intercept distance (see Figure 5). The equations in Figure A1 are standard projective geometry, but are derived here to provide compact equations that are easy to apply. To derive the equations in Figure A1, we first note that if the nodal point is at the origin in 3D Euclidean space (as in Figure 2), then the standard equations for perspective projection are ...
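A minimal sketch of the projection and back-projection steps this passage refers to, using generic pinhole-camera symbols rather than the paper's compact Figure A1 equations:

```python
def project(X, Y, Z, f=1.0):
    """Pinhole perspective projection with the nodal point at the origin
    and the image plane at distance f along the z-axis:
    x = f X / Z, y = f Y / Z. Symbol names are illustrative."""
    return f * X / Z, f * Y / Z

def back_project_to_plane(x, y, n, d, f=1.0):
    """Intersect the ray through image point (x, y) with the plane
    n . P = d, recovering the 3D point that projected there."""
    nx, ny, nz = n
    # Scale the ray direction (x, y, f) so the point lands on the plane.
    t = d / (nx * x + ny * y + nz * f)
    return t * x, t * y, t * f
```

Projection followed by back-projection onto the correct plane is a round trip: a point at depth 50 on the plane z = 50 projects into the image and back-projects to itself.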
Context 24
... A3 and A4 are the equations in the right panel of Figure A1. If the nodal point is shifted to the right, then the point in the image plane is given by ...
Context 25
... A13, A16 and A17 are the back-projection equations in Figure A1. ...
Context 26
... binocular equations in Figure A1 can be expressed in spherical coordinates (azimuth-longitude and elevation-latitude e; Fick coordinates) with respect to the nodal point of each eye by substituting for the image-plane coordinates ...
Context 27
... the human visual system, the eyes can rotate around three axes (and may even translate by a small amount when rotated). The possible rotations are largely constrained by the modified Listing's law (e.g., see Howard, 2012), but for the purpose of generalizing the expressions in Figure A1 to different eye positions, we can allow each eye to rotate arbitrarily (Figure 2A). Specifically, suppose that the right eye is rotated from the primary position by angles ...
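Arbitrary rotation about the three axes can be sketched by composing axis rotations. The rotation order and angle names here are assumptions; the paper's Appendix fixes its own convention:

```python
import math

def rotation_matrix(rx, ry, rz):
    """Compose rotations about the x, y, and z axes (angles in degrees),
    applied in the order Rz @ Ry @ Rx. Order and naming are assumptions,
    not the paper's convention."""
    ax, ay, az = (math.radians(a) for a in (rx, ry, rz))
    cx, sx = math.cos(ax), math.sin(ax)
    cy, sy = math.cos(ay), math.sin(ay)
    cz, sz = math.cos(az), math.sin(az)
    Rx = [[1, 0, 0], [0, cx, -sx], [0, sx, cx]]
    Ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    Rz = [[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    return matmul(Rz, matmul(Ry, Rx))
```

A 90° rotation about the vertical (y) axis, for instance, carries the line-of-sight direction (0, 0, 1) onto the horizontal axis (1, 0, 0).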

Similar publications

Article
Full-text available
When exploring the surrounding environment with the eyes, humans and primates need to interpret three-dimensional (3D) shapes in a fast and invariant way, exploiting highly variant and gaze-dependent visual information. Because these species have front-facing eyes, binocular disparity is a prominent cue for depth perception. Specifically, it serves as compu...
Article
Full-text available
We present a simple model which can account for the stereoscopic sensitivity of praying mantis predatory strikes. The model consists of a single “disparity sensor”: a binocular neuron sensitive to stereoscopic disparity and thus to distance from the animal. The model is based closely on the known behavioural and neurophysiological properties of man...
Article
Full-text available
Binocular stereo cues are important for discriminating 3D surface orientation, especially at near distances. We devised a single-interval task where observers discriminated the slant of a densely textured planar test surface relative to a textured planar surround reference surface. Although surfaces were rendered with correct perspective, the stimu...