Depth Discrimination from Optic Flow

Perception ◽  
1988 ◽  
Vol 17 (4) ◽  
pp. 497-512 ◽  
Author(s):  
William A Simpson

A simple scheme for deriving relative depth (time-to-collision, or TTC) from optic flow is developed in which the total flow is first sensed by unconnected motion (imperfect filter) sensors and then the rotational component is subtracted to yield the translational component. Only the latter component yields depth information. This scheme is contrasted with one where the TTC sensors respond only to the translational component at the initial registration of the flow (perfect filter sensors or looming detectors). The simple scheme predicts the results of three experiments on discrimination of TTC: discrimination thresholds are elevated if the objects withdraw from rather than approach the observer, thresholds are elevated if a rotational component is added to the flow, and the amount of threshold elevation resulting from the addition of a rotational component is reduced by prior adaptation to a pure rotational flow. These results confirm the simple model and disconfirm predictions based on the looming detector scheme.
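The subtraction scheme described in the abstract can be sketched numerically. This is an illustrative reconstruction, not the paper's implementation: it assumes 2D image points with the focus of expansion at the origin, and the function name and rotational-field model are hypothetical.

```python
import numpy as np

def ttc_from_flow(points, total_flow, rotational_flow):
    """Estimate time-to-collision via the 'simple scheme': the total
    sensed flow has its rotational component subtracted, and only the
    residual translational component is used for depth (TTC)."""
    # Subtract the rotational component to recover translational flow.
    trans_flow = total_flow - rotational_flow
    # For approach toward a surface, translational flow expands radially
    # from the focus of expansion (assumed at the origin). TTC at each
    # point is eccentricity over radial expansion speed: tau = r / r_dot.
    r = np.linalg.norm(points, axis=1)
    radial_dir = points / r[:, None]
    r_dot = np.sum(trans_flow * radial_dir, axis=1)
    return r / r_dot
```

Because the rotational field is perpendicular to the radial direction for rotation about the viewing axis, failing to subtract it corrupts the radial speed estimate, which is consistent with the threshold elevation the paper reports when a rotational component is added to the flow.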

2021 ◽  
Author(s):  
Richard Rzeszutek

This dissertation proposes a novel framework for recovering relative depth maps from a video. The framework is composed of two parts: a depth estimator and a sparse label interpolator. These parts are completely separate from one another and can operate independently. Prior methods have tended to couple the interpolation stage tightly with the depth estimation, which can aid automation at the expense of flexibility, and the loss of that flexibility can outweigh any advantage gained by coupling the two stages. This dissertation shows that by treating the two stages separately, the quality of the results can be changed with little effort, and room is left for other adjustments. The depth estimator is based upon well-established computer vision principles and has only one restriction: the camera must be moving in order to obtain depth estimates. Starting from first principles, this dissertation develops a new approach for quickly estimating relative depth; that is, it can answer the question, “Is this feature closer than another?”, with relatively little computational overhead. The estimator is designed as a pipeline so that it produces sparse depth estimates in an online fashion, i.e., a depth estimate is automatically available for each new frame presented to the estimator. Finally, the interpolator applies an existing method based upon edge-aware filtering to generate the final depth maps. When temporal filters are used, the interpolation stage easily handles frames without any depth information, such as when the camera was stationary. Unlike the prior work, however, this dissertation establishes the theoretical background for this type of interpolation, addresses some of the associated numerical problems, and provides strategies for dealing with these issues.
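The dissertation's estimator is not reproduced here, but the ordinal question it answers can be illustrated under a simplifying assumption (a laterally translating camera, so image motion scales inversely with depth); the function name is hypothetical:

```python
def relative_depth_rank(flow_magnitudes):
    """Rank sparse tracked features from nearest to farthest, assuming
    a laterally translating camera so that larger image motion implies
    smaller depth. Returns feature indices sorted nearest-first."""
    return sorted(range(len(flow_magnitudes)),
                  key=lambda i: flow_magnitudes[i],
                  reverse=True)

# Feature 1 moves fastest across the image, so it ranks as nearest.
ranks = relative_depth_rank([2.0, 5.0, 1.5])
```

Only the ordering is recovered, not metric depth, which matches the "is this feature closer than another" formulation; a separate interpolation stage would then densify such sparse ordinal labels into full depth maps.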


1955 ◽  
Vol 68 (2) ◽  
pp. 193 ◽  
Author(s):  
Warren H. Teichner ◽  
John L. Kobrick ◽  
Robert F. Wehrkamp

2020 ◽  
Vol 16 (5) ◽  
pp. 20200046
Author(s):  
Carlos Ruiz ◽  
Jamie C. Theobald

Flies and other insects use incoherent motion (parallax) to the front and sides to measure distances and identify obstacles during translation. Although additional depth information could be drawn from below, there is no experimental proof that they use it. The finding that blowflies encode motion disparities in their ventral visual fields suggests this may be an important region for depth information. We used a virtual flight arena to measure fruit fly responses to optic flow. The stimuli appeared below (n = 51) or above the fly (n = 44), at different speeds, with or without parallax cues. Dorsal parallax does not affect responses, and similar motion disparities in rotation have no effect anywhere in the visual field. But responses to strong ventral sideslip (206° s⁻¹) change drastically depending on the presence or absence of parallax. Ventral parallax could help resolve ambiguities in cluttered motion fields, and enhance corrective responses to nearby objects.

