retinal disparity
Recently Published Documents


TOTAL DOCUMENTS: 44 (five years: 1)
H-INDEX: 11 (five years: 0)

2021, Vol 11 (1)
Author(s): Arvind Chandna, Jeremy Badler, Devashish Singh, Scott Watamaniuk, Stephen Heinen

Abstract: To clearly view approaching objects, the eyes rotate inward (vergence) and the intraocular lenses focus (accommodation). Current ocular control models assume both eyes are driven by unitary vergence and unitary accommodation commands that causally interact. The models typically describe discrete gaze shifts to non-accommodative targets performed under laboratory conditions. We probe these unitary signals using a physical stimulus moving in depth on the midline while recording vergence and accommodation simultaneously from both eyes in normal observers. Under monocular viewing, retinal disparity is removed, leaving only monocular cues for interpreting the object’s motion in depth. The viewing eye always followed the target’s motion. However, the occluded eye did not follow the target and, surprisingly, rotated out of phase with it. In contrast, accommodation in both eyes was synchronized with the target under monocular viewing. The results challenge existing theories of a unitary vergence command and of a causal accommodation–vergence linkage.
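The vergence and accommodation demands the abstract refers to follow from simple viewing geometry. The sketch below is a generic illustration, not the authors' analysis; the 62 mm interocular distance is an assumed typical value:

```python
import math

def vergence_deg(dist_m, ipd_m=0.062):
    """Vergence demand (degrees) for a midline target at dist_m metres,
    given an assumed interocular distance ipd_m."""
    return math.degrees(2 * math.atan(ipd_m / (2 * dist_m)))

def accommodation_D(dist_m):
    """Accommodative demand in diopters (reciprocal of distance)."""
    return 1.0 / dist_m

# As a target approaches along the midline from 1 m to 0.25 m,
# both demands increase roughly fourfold:
print(round(vergence_deg(1.0), 2), accommodation_D(1.0))    # ≈ 3.55 deg, 1.0 D
print(round(vergence_deg(0.25), 2), accommodation_D(0.25))  # ≈ 14.14 deg, 4.0 D
```

Because both demands are driven by the same target distance, the two responses are normally tightly coupled, which is what makes the occluded eye's out-of-phase rotation notable.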


2020, Vol 14
Author(s): Marc M. Himmelberg, Federico G. Segala, Ryan T. Maloney, Julie M. Harris, Alex R. Wade

Two stereoscopic cues that underlie the perception of motion-in-depth (MID) are changes in retinal disparity over time (CD) and interocular velocity differences (IOVD). These cues have independent spatiotemporal sensitivity profiles, depend upon different low-level stimulus properties, and are potentially processed along separate cortical pathways. Here, we ask whether these MID cues code for different motion directions: do they give rise to discriminable patterns of neural signals, and is there evidence for their convergence onto a single “motion-in-depth” pathway? To answer this, we use a decoding algorithm to test whether, and when, patterns of electroencephalogram (EEG) signals measured across the full scalp, generated in response to CD- and IOVD-isolating stimuli moving toward or away in depth, can be distinguished. We find that both MID cue type and 3D-motion direction can be decoded at different points in the EEG timecourse and that direction decoding cannot be accounted for by static disparity information. Remarkably, we find evidence for late processing convergence: IOVD motion direction can be decoded relatively late in the timecourse based on a decoder trained on CD stimuli, and vice versa. We conclude that early CD and IOVD direction decoding performance is dependent upon fundamentally different low-level stimulus features, but that later stages of decoding performance may be driven by a central, shared pathway that is agnostic to these features. Overall, these data are the first to show that neural responses to CD and IOVD cues that move toward and away in depth can be decoded from EEG signals, and that different aspects of MID cues contribute to decoding performance at different points along the EEG timecourse.
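The cross-cue decoding logic can be illustrated with a toy simulation. Everything below is synthetic and the nearest-centroid classifier is a stand-in, not the authors' decoding pipeline: the point is only that a direction signal shared across cues lets a decoder trained on one cue classify the other.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels = 200, 64

# Synthetic scalp patterns: each cue (CD, IOVD) has its own fixed
# topography, plus a direction signal (toward = +1, away = -1) that is
# shared across cues, mimicking a late convergent stage.
shared_direction = rng.normal(size=n_channels)
cue_pattern = {"CD": rng.normal(size=n_channels),
               "IOVD": rng.normal(size=n_channels)}

def make_trials(cue, direction):
    mean = cue_pattern[cue] + direction * shared_direction
    return mean + 0.5 * rng.normal(size=(n_trials, n_channels))

train = {d: make_trials("CD", d) for d in (+1, -1)}    # train on CD trials
test = {d: make_trials("IOVD", d) for d in (+1, -1)}   # test on IOVD trials

centroids = {d: train[d].mean(axis=0) for d in (+1, -1)}

def decode(trial):
    """Nearest-centroid classifier for motion direction."""
    return min((+1, -1), key=lambda d: np.linalg.norm(trial - centroids[d]))

correct = sum(decode(t) == d for d in (+1, -1) for t in test[d])
print(correct / (2 * n_trials))  # well above chance despite the cue swap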


2019
Author(s): Marc M. Himmelberg, Federico G. Segala, Ryan T. Maloney, Julie M. Harris, Alex R. Wade



2018
Author(s): Ronny Rosner, Joss von Hadeln, Ghaith Tarawneh, Jenny C. A. Read

A puzzle for both neuroscience and robotics is how insects achieve surprisingly complex behaviours with such tiny brains [1,2]. One example is depth perception via binocular stereopsis in the praying mantis, a predatory insect. Praying mantids use stereopsis, the computation of distances from disparities between the two retinas, to trigger a raptorial strike of their forelegs [3,4] when prey is within reach. The neuronal basis of this ability is entirely unknown. From behavioural evidence, one view is that the mantis brain must measure retinal disparity locally across a range of distances and eccentricities [4–7], very like disparity-tuned neurons in vertebrate visual cortex [8]. Sceptics argue that this “retinal disparity hypothesis” implies far too many specialised neurons for such a tiny brain [9]. Here we show the first evidence that individual neurons in the praying mantis brain are indeed tuned to specific disparities and eccentricities, and thus locations in 3D-space. This disparity information is transmitted to the central brain by neurons connecting peripheral visual areas in both hemispheres, as well as by a unilateral neuron type. Like disparity-tuned cortical cells in vertebrates, the responses of these mantis neurons are consistent with linear summation of binocular inputs followed by an output nonlinearity [10]. Additionally, centrifugal neurons project disparity information back from the central brain to early visual areas, possibly for gain modulation or 3D spatial attention. Thus, our study not only proves the retinal disparity hypothesis for insects, it also reveals feedback connections hitherto undiscovered in any animal species.
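The response model cited here, linear summation of the two monocular inputs followed by an output nonlinearity, can be sketched as a toy unit. The Gaussian receptive fields, stimulus, and half-squaring nonlinearity below are illustrative assumptions, not the recorded neurons' actual parameters; shifting the right eye's field relative to the left gives the unit a preferred disparity.

```python
import numpy as np

x = np.arange(-20, 21, dtype=float)  # retinal position, arbitrary units
preferred = 4                        # built-in receptive-field shift

w_left = np.exp(-x**2 / 18.0)                 # left-eye receptive field
w_right = np.exp(-(x - preferred)**2 / 18.0)  # same field, shifted

def response(disparity):
    """Linear binocular summation followed by half-squaring."""
    img_left = np.exp(-x**2 / 8.0)                 # a bright bar at 0
    img_right = np.exp(-(x - disparity)**2 / 8.0)  # same bar, displaced
    drive = w_left @ img_left + w_right @ img_right
    return max(drive, 0.0) ** 2

tuning = {d: response(d) for d in range(-8, 9)}
print(max(tuning, key=tuning.get))  # prints 4: the unit prefers its shift
```

The unit's response peaks when the stimulus disparity matches the receptive-field shift, which is the signature of disparity tuning the study reports.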


2018, Vol 18 (6), pp. 17
Author(s): Eric S. Seemiller, Bruce G. Cumming, T. Rowan Candy

2014, Vol 281 (1776), pp. 20132118
Author(s): Arthur J. Lugtigheid, Laurie M. Wilcox, Robert S. Allison, Ian P. Howard

The brain receives disparate retinal input owing to the separation of the eyes, yet we usually perceive a single fused world. This is because of complex interactions between sensory and oculomotor processes that quickly act to reduce excessive retinal disparity. This implies a strong link between depth perception and fusion, but it is well established that stereoscopic depth percepts are also obtained from stimuli that produce double images. Surprisingly, the nature of depth percepts from such diplopic stimuli remains poorly understood. Specifically, despite long-standing debate it is unclear whether depth under diplopia is owing to the retinal disparity (directly), or whether the brain interprets signals from fusional vergence responses to large disparities (indirectly). Here, we addressed this question using stereoscopic afterimages, for which fusional vergence cannot provide retinal feedback about depth. We showed that observers could reliably recover depth sign and magnitude from diplopic afterimages. In addition, measuring vergence responses to large disparity stimuli revealed that the sign and magnitude of vergence responses are not systematically related to the target disparity, thus ruling out an indirect explanation of our results. Taken together, our research provides the first conclusive evidence that stereopsis is a direct process, even for diplopic targets.


2010, Vol 10 (7), pp. 331
Author(s): Z.-L. Zhang, C. Cantor, C. Schor

2010, Vol 20 (13), pp. 1176-1181
Author(s): Zhi-Lei Zhang, Christopher R.L. Cantor, Clifton M. Schor
