Mis-perception of motion in depth originates from an incomplete transformation of retinal signals

2018
Author(s): T. Scott Murdison, Guillaume Leclercq, Philippe Lefèvre, Gunnar Blohm

Depth perception requires the use of an internal model of the eye-head geometry to infer distance from binocular retinal images and extraretinal 3D eye-head information, particularly ocular vergence. Similarly for motion in depth perception, gaze angle is required to correctly interpret the spatial direction of motion from retinal images; however, it is unknown whether the brain can make adequate use of extraretinal version and vergence information to correctly interpret binocular retinal motion for spatial motion in depth perception. Here, we tested this by asking participants to reproduce the perceived spatial trajectory of an isolated point stimulus moving on different horizontal-depth paths either peri-foveally or peripherally while participants’ gaze was oriented at different vergence and version angles. We found large systematic errors in the perceived motion trajectory that reflected an intermediate reference frame between a purely retinal interpretation of binocular retinal motion (ignoring vergence and version) and the spatially correct motion. A simple geometric model could capture the behavior well, revealing that participants tended to underestimate their version by as much as 17%, overestimate their vergence by as much as 22%, and underestimate the overall change in retinal disparity by as much as 64%. Since such large perceptual errors are not observed in everyday viewing, we suggest that other monocular and/or contextual cues are required for accurate real-world motion in depth perception.
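The incomplete-transformation idea above can be sketched numerically. In the toy model below (an illustrative reconstruction, not the authors' published code), the observer converts a spatially defined motion direction into retinal coordinates using the true gaze (version) angle, but inverts the transformation with a *scaled* version signal, using a gain matching the reported ~17% underestimate; the function name and model form are assumptions for illustration.

```python
# Toy gain model of an incomplete retinal-to-spatial transformation.
# Gains are taken from the abstract's fitted values: version underestimated
# by up to 17% (vergence and disparity gains would be handled analogously).
G_VERSION = 1.0 - 0.17  # fraction of the version signal actually used

def perceived_trajectory_angle(spatial_angle_deg, version_deg):
    """Predicted perceived direction (deg) of a motion path.

    The retina sees the spatial direction minus the gaze rotation;
    the brain then adds back only G_VERSION of the gaze angle, so the
    residual rotation is the predicted perceptual error.
    """
    retinal_angle = spatial_angle_deg - version_deg      # what the retina encodes
    return retinal_angle + G_VERSION * version_deg       # incomplete inverse transform

# A horizontal (0 deg) spatial path viewed at 20 deg version appears
# rotated by the unaccounted-for 17% of the gaze angle (about -3.4 deg).
error_deg = perceived_trajectory_angle(0.0, 20.0)
```

With zero version, the prediction is veridical; the misperception grows linearly with eccentric gaze, which is the qualitative pattern the abstract describes.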

2016
Vol 371 (1697), pp. 20150254
Author(s): Holly Bridge

Stereoscopic depth perception requires considerable neural computation, including the initial correspondence of the two retinal images, comparison across local regions of the visual field and integration with other cues to depth. The most common cause of loss of stereoscopic vision is amblyopia, in which one eye has failed to form an adequate input to the visual cortex, usually due to strabismus (a deviating eye) or anisometropia. However, the significant cortical processing required to produce the percept of depth means that, even when the retinal input from both eyes is intact, brain damage or dysfunction can interfere with stereoscopic vision. In this review, I examine the evidence for impairment of binocular vision and depth perception that can result from insults to the brain, including discrete damage such as temporal lobectomy, and more systemic diseases such as posterior cortical atrophy. This article is part of the themed issue ‘Vision in our three-dimensional world’.


2004
Vol 4 (8), pp. 464-464
Author(s): Y. Watanabe, M. Tomita, K. Harasawa, M. Usui, S. Shioiri, ...

Perception, 10.1068/p2955
2000
Vol 29 (4), pp. 437-452
Author(s): Justin O'Brien, Alan Johnston

Perception
2019
Vol 48 (4), pp. 338-345
Author(s): Soyogu Matsushita, Hiroshi Ono

We examined whether the thresholds of motion and depth perception produced by motion parallax could be specified by the concept of a disparity gradient. We manipulated both the motion parallax amplitude and the angular separation of two dots and calculated the percentage of trials in which participants perceived motion or depth. The results showed that the parallax amplitude at threshold increased as the separation became larger, with gradients of 0.023, 0.072, and 0.430 for the lower depth, lower motion, and upper depth thresholds, respectively. These findings indicate that the gradient, rather than parallax amplitude alone, is a useful concept for specifying the motion and depth thresholds together.
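The constant-gradient account above implies a simple linear relation: the parallax amplitude at threshold is the gradient times the dot separation. A minimal sketch, using the gradient values reported in the abstract (the function name and units-in-degrees convention are assumptions for illustration):

```python
# Gradients reported for the three thresholds
# (units: parallax amplitude per unit of angular separation).
LOWER_DEPTH, LOWER_MOTION, UPPER_DEPTH = 0.023, 0.072, 0.430

def threshold_amplitude(separation_deg, gradient):
    """Under a constant disparity gradient, the motion-parallax amplitude
    at threshold grows linearly with the separation between the two dots:
    amplitude = gradient * separation."""
    return gradient * separation_deg

# For a 2-degree dot separation, the lower motion threshold is predicted
# at a parallax amplitude of 0.072 * 2 = 0.144 degrees.
amp = threshold_amplitude(2.0, LOWER_MOTION)
```

This makes the abstract's point concrete: a single gradient value predicts the threshold at every separation, which a single amplitude value cannot.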


2019
Vol 222 (11), pp. jeb198614
Author(s): Vivek Nityananda, Coline Joubier, Jerry Tan, Ghaith Tarawneh, Jenny C. A. Read

2015
Vol 28 (3-4), pp. 253-283
Author(s): Irene Sperandio, Philippe A. Chouinard

Size constancy is the result of cognitive scaling operations that enable us to perceive an object as having the same size when presented at different viewing distances. In this article, we review the literature on size and distance perception to form an overarching synthesis of how the brain might combine retinal images and distance cues of retinal and extra-retinal origin to produce a perceptual visual experience of a world where objects have a constant size. A convergence of evidence from visual psychophysics, neurophysiology, neuropsychology, electrophysiology and neuroimaging highlights the primary visual cortex (V1) as an important node in mediating size–distance scaling. It is now evident that this brain area is involved in the integration of multiple signals for the purposes of size perception and does much more than fulfil the role of an entry position in a series of hierarchical cortical events. We also discuss how information from other sensory modalities can contribute to size–distance scaling and shape our perceptual visual experience.
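The size–distance scaling the review describes has a standard textbook form (often associated with Emmert's law, not taken from the review itself): the perceived linear size of an object is its retinal angular size rescaled by the perceived distance. A minimal sketch, with the function name chosen for illustration:

```python
import math

def perceived_size(retinal_angle_deg, perceived_distance_m):
    """Size-distance scaling: an image subtending a fixed retinal angle
    is attributed a linear size proportional to its judged distance,
    size = 2 * D * tan(theta / 2)."""
    theta = math.radians(retinal_angle_deg)
    return 2.0 * perceived_distance_m * math.tan(theta / 2.0)

# The same 1-degree retinal image is scaled to a larger object when
# attributed to a farther distance -- the basis of size constancy.
near = perceived_size(1.0, 1.0)  # image judged to lie 1 m away
far = perceived_size(1.0, 2.0)   # same image judged 2 m away: twice the size
```

Errors in the distance estimate therefore propagate directly into perceived size, which is why the review treats distance cues as central to size constancy.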


2016
Author(s): Long Luu, Alan A. Stocker

Illusions provide a great opportunity to study how perception is affected by both the observer's expectations and the way sensory information is represented [1–6]. Recently, Jazayeri and Movshon [7] reported a new and interesting perceptual illusion, demonstrating that the perceived motion direction of a dynamic random dot stimulus is systematically biased when preceded by a motion discrimination judgment. The authors hypothesized that these biases emerge because the brain predominantly relies on those neurons that are most informative for solving the discrimination task [8], but then uses the same neural weighting profile for generating the percept. In other words, they argue that these biases are “mistakes” of the brain, resulting from using inappropriate neural read-out weights. While we were able to replicate the illusion for a different visual stimulus (orientation), our new psychophysical data suggest that the above interpretation is likely incorrect: biases are not caused by a read-out profile optimized for solving the discrimination task but rather by the specific choices subjects make in the discrimination task on any given trial. We formulate this idea as a conditioned Bayesian observer model and show that it can explain the new as well as the original psychophysical data. In this framework, the biases are not caused by mistake but rather by the brain's attempt to remain ‘self-consistent’ in its inference process. Our model establishes a direct connection between the current perceptual illusion and the well-known phenomena of cognitive consistency and dissonance [9,10].
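The core mechanism of a conditioned ("self-consistent") Bayesian observer can be illustrated with a toy simulation (an illustrative sketch under simplifying assumptions, not the authors' actual model): the observer first makes the binary discrimination judgment, then forms the estimate from the posterior conditioned on that choice, i.e. with all posterior mass on the rejected side of the boundary removed. Conditioning on the choice repels estimates away from the decision boundary, producing the bias.

```python
import math

def conditional_estimate(measurement, sigma=5.0, n=2001):
    """Toy self-consistent observer for an orientation task.

    The observer decides clockwise (>0) vs counter-clockwise (<0) from
    the noisy measurement, then estimates the stimulus as the mean of a
    Gaussian posterior truncated to the chosen side of the boundary.
    """
    grid = [-30.0 + 60.0 * i / (n - 1) for i in range(n)]
    post = [math.exp(-0.5 * ((x - measurement) / sigma) ** 2) for x in grid]
    choice_cw = measurement > 0  # the discrimination judgment
    # Zero out posterior mass inconsistent with the choice ("self-consistency").
    post = [p if (x > 0) == choice_cw else 0.0 for x, p in zip(grid, post)]
    z = sum(post)
    return sum(x * p for x, p in zip(grid, post)) / z  # conditional posterior mean

# A measurement just clockwise of the boundary yields an estimate pushed
# further clockwise -- the choice-induced repulsive bias.
biased = conditional_estimate(1.0)
```

An unconditioned observer with a flat prior would simply report the measurement; the difference between the two is the choice-dependent bias the abstract attributes to self-consistent inference.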

