Depth percept from motion parallax by backward/forward head movements

2013 ◽  
Vol 13 (9) ◽  
pp. 1180-1180
Author(s):  
M. Ishii ◽  
M. Fujii

Perception ◽ 
1998 ◽  
Vol 27 (8) ◽  
pp. 937-949 ◽  
Author(s):  
Takanao Yajima ◽  
Hiroyasu Ujike ◽  
Keiji Uchikawa

The two main questions addressed in this study were (a) what effect does yoking the relative expansion and contraction (EC) of retinal images to forward and backward head movements have on the magnitude and stability of perceived depth, and (b) how does this relative EC image motion interact with the depth cues of motion parallax? Relative EC image motion was produced by moving a small CCD camera toward and away from the stimulus, two random-dot surfaces separated in depth, in synchrony with the observers' forward and backward head movements. Observers viewed the stimuli monocularly, on a helmet-mounted display, while moving their heads at various velocities, including zero velocity. The results showed that (a) the magnitude of perceived depth was smaller with smaller head velocities (<10 cm s−1), including the zero-head-velocity condition, than with a larger velocity (10 cm s−1), and (b) perceived depth, when the motion parallax and EC image motion cues were presented simultaneously, was equal to the greater of the two depths produced by either cue alone. The results suggest a role for nonvisual information about self-motion in perceiving depth.
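The geometric basis of the EC cue can be made concrete in a few lines of Python (an illustration of the underlying optics, not the authors' stimulus code; the surface sizes, depths, and speed below are assumed values): a disc of radius r at depth Z subtends θ = 2·atan(r/Z), so approaching it at speed v changes its angular size at rate 2rv/(Z² + r²). Two surfaces at different depths therefore expand at different rates, and that difference is the relative EC image motion.

```python
import math

def angular_size(radius_m: float, depth_m: float) -> float:
    """Visual angle (radians) subtended by a disc of given radius at given depth."""
    return 2.0 * math.atan(radius_m / depth_m)

def looming_rate(radius_m: float, depth_m: float, head_speed_mps: float) -> float:
    """Rate of change of angular size (rad/s) while approaching at head_speed.
    d/dt[2*atan(r/Z)] with dZ/dt = -v  ->  2*r*v / (Z**2 + r**2)."""
    return 2.0 * radius_m * head_speed_mps / (depth_m**2 + radius_m**2)

# Two random-dot surfaces separated in depth (illustrative values, not the paper's).
near, far = 0.5, 0.7   # metres
v = 0.10               # 10 cm/s, the paper's "larger" head velocity
rel_ec = looming_rate(0.05, near, v) - looming_rate(0.05, far, v)
print(f"relative EC motion: {rel_ec:.4f} rad/s")
```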


2021 ◽  
Author(s):  
Philip R L Parker ◽  
Eliott T T Abe ◽  
Natalie T Beatie ◽  
Emmalyn S P Leonard ◽  
Dylan M Martins ◽  
...  

In natural contexts, sensory processing and motor output are closely coupled, which is reflected in the fact that many brain areas contain both sensory and movement signals. However, standard reductionist paradigms decouple sensory decisions from their natural motor consequences, and head fixation prevents the natural sensory consequences of self-motion. In particular, movement through the environment provides a number of depth cues beyond stereo vision that are poorly understood. To study the integration of visual processing and motor output in a naturalistic task, we investigated distance estimation in freely moving mice. We found that mice use vision to accurately jump across a variable gap, thus directly coupling a visual computation to its corresponding ethological motor output. Monocular eyelid suture did not affect performance, indicating that mice can use cues that do not depend on binocular disparity and stereo vision. Under monocular conditions, mice performed more vertical head movements, consistent with the use of motion parallax cues, and optogenetic suppression of primary visual cortex impaired task performance. Together, these results show that mice can use monocular cues, relying on visual cortex, to accurately judge distance. Furthermore, this behavioral paradigm provides a foundation for studying how neural circuits convert sensory information into ethological motor output.
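A back-of-envelope sketch of the parallax cue implicated here (this is not the paper's analysis; the numbers are hypothetical): for a target at distance d, a small vertical head translation h shifts the target's image by roughly φ ≈ h/d radians, so an animal that senses its own head displacement can recover absolute distance as d ≈ h/φ.

```python
import math

def distance_from_parallax(head_disp_m: float, image_shift_rad: float) -> float:
    """Absolute distance implied by motion parallax: a head translation of
    head_disp_m that shifts a target's image by image_shift_rad implies
    d ~= translation / angular shift (small-angle approximation)."""
    return head_disp_m / image_shift_rad

# Hypothetical numbers: a 1 cm head bob that shifts the far ledge's image
# by 0.5 degrees implies a gap of about 1.15 m.
h = 0.01                 # metres of vertical head movement
phi = math.radians(0.5)  # observed angular displacement of the target
print(f"estimated gap distance: {distance_from_parallax(h, phi):.2f} m")
```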


Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 163-163
Author(s):  
H Ujike ◽  
S Saida

Motion parallax has been shown to be a principal cue for depth perception under monocular viewing. The simulated depth of the stimuli in previous studies was constant in both magnitude and direction. In the present study we addressed the question of how the visual system detects parallactic depth change. To answer this we investigated the temporal characteristics of parallactic depth change and the effect of a motion signal on them. The stimulus consisted of four bands of 15-cycle sinusoidal gratings, with parallactic depth simulated between the bands. In experiment 1, we measured the amount of perceived depth change at different frequencies (0.125 to 10 Hz) of simulated depth change and at different velocities (2.5 to 40 cm s−1) of head movement. The results showed that perceived depth change decreased with the frequency of depth change and, at a constant frequency, increased with head velocity. In experiment 2, we measured the motion threshold at different head velocities; the threshold was constant across head velocities. In experiment 3, we measured the amount of perceived depth using apparent-motion stimuli while the head was moving. Perceived depth decreased with the SOA of the apparent-motion stimuli, but there was no effect of head velocity. The results of these three experiments indicate that parallactic depth change is determined by the duration of simulated depth, which corresponds to the integration time of motion, as well as by the extent of head movement. We conclude that parallactic depth is integrated in two stages: first, integration of motion and, second, integration of motion parallax.
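Displays of this kind are typically generated by yoking on-screen shifts to head position, in the spirit of classic parallax stimuli. A minimal sketch with assumed gains and amplitudes (not the authors' code): each band is displaced in proportion to instantaneous head position, and sinusoidally modulating that gain simulates depth *change* at a chosen frequency.

```python
import math

def band_offset(head_pos_m: float, depth_gain: float) -> float:
    """On-screen shift of one grating band, yoked to instantaneous head
    position; depth_gain encodes the band's simulated relative depth
    (larger gain = larger equivalent disparity)."""
    return depth_gain * head_pos_m

def depth_gain(t_s: float, mod_hz: float, peak_gain: float) -> float:
    """Sinusoidally modulated gain: simulates parallactic depth change at
    mod_hz, the 0.125-10 Hz manipulation of experiment 1."""
    return peak_gain * math.sin(2.0 * math.pi * mod_hz * t_s)

# Head oscillating laterally at 0.5 Hz while simulated depth changes at 2 Hz
# (all amplitudes are illustrative, assumed values).
for t_s in (0.0, 0.1, 0.2, 0.3):
    head = 0.05 * math.sin(2.0 * math.pi * 0.5 * t_s)   # head position (m)
    print(f"t={t_s:.1f}s  offset={band_offset(head, depth_gain(t_s, 2.0, 0.3)):+.5f} m")
```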


Perception ◽  
10.1068/p5221 ◽  
2005 ◽  
Vol 34 (4) ◽  
pp. 477-490 ◽  
Author(s):  
Hiroshi Ono ◽  
Hiroyasu Ujike

Yoking the movement of the stimulus on the screen to the movement of the head, we examined visual stability and depth perception as a function of head-movement velocity and parallax. In experiment 1, for different head velocities, observers adjusted the parallax to find (a) the depth threshold and (b) the concomitant-motion threshold. Between these thresholds, depth was seen with no perceived motion. In experiment 2, for different head velocities, observers adjusted the parallax to produce the same perceived depth. A slower head movement required a greater parallax to produce the same perceived depth as a faster head movement. In experiment 3, observers reported the perceived depth for different parallax magnitudes. Perceived depth covaried with parallax at smaller magnitudes, with no motion perceived; at larger magnitudes perceived depth began to decrease and concomitant motion was seen. With the largest parallax, only motion was seen.
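The trade-off in experiment 2 falls out of small-angle parallax geometry, sketched below (standard geometry, not the authors' analysis; the viewing distance and excursions are assumed values): a point at depth offset ΔZ behind a fixated point at distance D yields a relative angular displacement p ≈ T·ΔZ/(D·(D+ΔZ)) for a lateral head translation T, so halving the head excursion roughly doubles the parallax needed to signal the same depth.

```python
def parallax_angle(head_shift_m, fix_dist_m, depth_offset_m):
    """Relative angular displacement (rad) between a fixated point at
    fix_dist_m and a point depth_offset_m behind it, for a lateral head
    translation of head_shift_m (small-angle geometry)."""
    d, dz = fix_dist_m, depth_offset_m
    return head_shift_m * dz / (d * (d + dz))

def depth_from_parallax(parallax_rad, head_shift_m, fix_dist_m):
    """Invert the relation above: solving p = T*dz/(D*(D+dz)) for dz
    gives the depth implied by a given parallax and head excursion."""
    p, t, d = parallax_rad, head_shift_m, fix_dist_m
    return p * d * d / (t - p * d)

# Illustrative numbers: the parallax that signals a 5 cm depth step at a
# 57 cm viewing distance for a 10 cm head excursion implies roughly double
# the depth if the head excursion is halved.
p = parallax_angle(0.10, 0.57, 0.05)
print(depth_from_parallax(p, 0.10, 0.57))   # ~0.05 m
print(depth_from_parallax(p, 0.05, 0.57))   # ~0.11 m, larger implied depth
```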


2007 ◽  
Vol 16 (4) ◽  
pp. 414-438 ◽  
Author(s):  
Michael Cohen ◽  
Noor Alamshah Bolhassan ◽  
Owen Noel Newton Fernando

To support multiperspective and stereographic image display systems intended for multiuser applications, we have developed two integrated multiuser multiperspective stereographic browsers, respectively featuring IBR-generated egocentric and CG exocentric perspectives. The first, “VR4U2C” (‘virtual reality for you to see’), uses Apple's QuickTime VR technology and the Java programming language together with the support of the QuickTime for Java library. This unique QTVR browser allows coordinated display of multiple views of a scene or object, limited only by the size and number of monitors or projectors assembled around or among users (for panoramas or turnoramas) in various viewing locations. The browser also provides a novel solution to limitations associated with display of QTVR imagery: its multinode feature provides interactive stereographic QTVR (dubbed SQTVR) to display dynamically selected pairs of images exhibiting binocular parallax, the stereoscopic depth percept enhanced by motion parallax from displacement of the viewpoint through space coupled with rotation of the view through a 360° horizontal panorama. This navigable approach to SQTVR allows proper occlusion/disocclusion as the virtual standpoint shifts, as well as natural looming of closer objects compared to more distant ones. We have integrated this stereographic panoramic browsing application in a client/server architecture with a sibling client, named “Just Look at Yourself!”, which is built with Java3D and allows real-time visualization of the dollying and viewpoint adjustment as well as juxtaposition and combination of stereographic CG and IBR displays. “Just Look at Yourself!” visualizes and emulates VR4U2C, embedding avatars associated with cylinder pairs wrapped around the stereo standpoints texture-mapped with a set of panoramic scenes into a 3D CG model of the same space as that captured by the set of panoramas. The transparency of the 3D CG polygon space and the photorealistic stereographic 360° scenes, as well as the size of the stereo goggles through which the CG space is conceptually viewed and upon which the 360° scenes are texture-mapped, can be adjusted at runtime to clarify the relationship between the two spaces.
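The core of the "dynamically selected pairs" idea can be sketched as follows (a speculative illustration of SQTVR-style pair selection, written in Python rather than the authors' Java, with hypothetical node names and positions; the real browser's policy may differ): given panorama nodes captured at known standpoints, pick the two nodes whose positions best approximate the viewer's left- and right-eye locations for the current gaze direction.

```python
import math

# Hypothetical node layout: panoramas captured at known floor positions (m).
NODES = {"n0": (0.00, 0.00), "n1": (0.065, 0.00), "n2": (0.00, 0.065)}

def stereo_pair(eye_center, gaze_deg, ipd=0.065):
    """Pick the panorama pair whose standpoints best match the left/right
    eye positions: the interocular axis is perpendicular to the gaze
    direction (measured counterclockwise from +x)."""
    g = math.radians(gaze_deg)
    half = ((ipd / 2.0) * math.sin(g), -(ipd / 2.0) * math.cos(g))
    left = (eye_center[0] - half[0], eye_center[1] - half[1])
    right = (eye_center[0] + half[0], eye_center[1] + half[1])
    nearest = lambda p: min(NODES, key=lambda n: math.dist(NODES[n], p))
    return nearest(left), nearest(right)

# Viewer standing between n0 and n1, looking along +y: the pair straddles
# the standpoint, giving binocular parallax from two real captures.
print(stereo_pair((0.03, 0.0), gaze_deg=90.0))   # ('n0', 'n1')
```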


2011 ◽  
Vol 11 (11) ◽  
pp. 924-924
Author(s):  
M. Aytekin ◽  
M. Rucci

eLife ◽  
2015 ◽  
Vol 4 ◽  
Author(s):  
Adhira Sunkara ◽  
Gregory C DeAngelis ◽  
Dora E Angelaki

As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues.
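Why retinal velocity alone suffices can be shown with the standard flow equations (a computer-vision sketch of the cue, not the neural mechanism; translation, rotation, and depths below are assumed values): rotational flow is depth-independent while translational flow scales with inverse depth, so subtracting the flows of two points along the same line of sight at different depths (local motion parallax) cancels rotation exactly, and the residual vectors radiate from the focus of expansion, i.e. the heading.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow(x, y, inv_depth, T, w):
    """Retinal flow (Longuet-Higgins & Prazdny, focal length 1) at image
    point (x, y): the translational part scales with inverse depth, the
    rotational part does not."""
    Tx, Ty, Tz = T
    wx, wy, wz = w
    u = inv_depth * (-Tx + x * Tz) + (x * y * wx - (1 + x**2) * wy + y * wz)
    v = inv_depth * (-Ty + y * Tz) + ((1 + y**2) * wx - x * y * wy - x * wz)
    return u, v

T = np.array([0.02, 0.0, 1.0])      # heading mostly forward, slightly rightward
w = np.array([0.01, -0.02, 0.005])  # eye/head rotation to be discounted

# Local motion parallax: two depths along each line of sight. Their flow
# difference cancels the depth-independent rotational field exactly.
pts = rng.uniform(-0.5, 0.5, size=(200, 2))
d_near, d_far = 1.0, 0.2            # inverse depths (1/m)
diffs, locs = [], []
for x, y in pts:
    un, vn = flow(x, y, d_near, T, w)
    uf, vf = flow(x, y, d_far, T, w)
    diffs.append((un - uf, vn - vf))
    locs.append((x, y))

# Each difference vector lies along (x - fx, y - fy); recover the focus of
# expansion (fx, fy) from the constraint du*(y - fy) = dv*(x - fx), i.e.
# dv*fx - du*fy = dv*x - du*y, by least squares.
A = np.array([[dv, -du] for du, dv in diffs])
b = np.array([dv * x - du * y for (du, dv), (x, y) in zip(diffs, locs)])
foe = np.linalg.lstsq(A, b, rcond=None)[0]
print("recovered heading (FOE):", foe, "true:", T[0] / T[2], T[1] / T[2])
```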


Author(s):  
R. John Leigh ◽  
David S. Zee

This chapter reviews the relationship between the ocular motor and cephalomotor systems, summarizing the mechanical properties of head and neck tissues, head stability, and the roles of the cervico-ocular reflex (COR), vestibulo-collic reflex (VCR), and cervico-collic reflex. The visual consequences of head translation, including motion parallax, are discussed. Behavioral properties of eye-head saccades and smooth eye-head tracking are summarized, along with their interactions with the vestibulo-ocular reflex (VOR). The neural substrate for rapid and smooth eye-head movements is discussed, including the nucleus reticularis gigantocellularis, superior colliculus, cerebellar vermis, and fastigial nucleus. Mathematical models of eye-head behavior are presented. Clinical and laboratory evaluation of eye-head movements is outlined, including the geometric corrections required when measuring eye and head movements. Discussion of the pathophysiology of abnormal eye-head movements covers vestibular hypofunction, progressive supranuclear palsy, spasmodic torticollis (cervical dystonia), spasmus nutans, epilepsy, ocular motor apraxia, and abnormal smooth eye-head tracking in parkinsonian syndromes and cerebellar disorders.
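One quantitative point behind the "visual consequences of head translation" is easy to make concrete (a worked-geometry sketch, not taken from the chapter; the speeds and distances are assumed values): to keep a target at distance Z fixated during lateral head translation at speed v, the eyes must counter-rotate at roughly ω ≈ v/Z, so the demanded compensation grows steeply as viewing distance shrinks.

```python
import math

def required_eye_velocity(head_speed_mps: float, target_dist_m: float) -> float:
    """Angular eye velocity (deg/s) needed to hold a target at target_dist_m
    fixated during lateral head translation at head_speed_mps (small-angle
    approximation: omega ~ v / Z)."""
    return math.degrees(head_speed_mps / target_dist_m)

# The same 15 cm/s head translation demands ~4x the eye rotation for a
# target at 0.5 m as for one at 2 m (illustrative numbers).
for z in (2.0, 1.0, 0.5):
    print(f"target at {z} m -> {required_eye_velocity(0.15, z):.1f} deg/s")
```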


Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 32-32 ◽  
Author(s):  
V Cornilleau-Pérès ◽  
E Marin ◽  
J Droulez

Under polar projection (the natural projection for visual scenes) motion parallax is a powerful cue specifying relative depth. For small-field stimuli it is ambiguous, in the sense that a concave surface can be perceived as convex and deforming. By contrast, the concavity/convexity of wide-field surfaces is unambiguously perceived. This led us to hypothesise a critical role for the 3-D rigidity constraint in large visual scenes in motion (Dijkstra et al, 1995 Vision Research 35 453-462). To examine this hypothesis, we exposed subjects to planes inclined in space and asked them to report the tilt (direction of inclination). Depth was specified either by motion parallax (MP: the surface oscillated around a frontoparallel axis) or by static perspective cues (SP: orthogonal square grids drawn on the plane). At ECVP '95 we reported a predominance of SP over MP when the tilts specified by these two cues (tMP and tSP, respectively) differed (1995 Perception 24 Supplement, 137). Since those results were obtained for fast movements (oscillation frequency for MP: 3.6 Hz), we extended our investigation to a slower frequency (0.5 Hz), which is more likely to be involved in natural head movements. We found that: (i) errors in tilt reports were larger for MP than for SP, and decreased with increasing field size; (ii) in the case of conflict (tMP = tSP ± 90°), the reported tilt was either tMP or tSP, rather than an average of the two values; (iii) in this case, tilt was most often reported according to SP rather than MP cues; this effect occurred even when the accuracies for the two individual cues were similar. Therefore, in a conflict situation between MP and SP, surface orientation is reported according to a winner-take-all rule that largely favours the static grid cues. Hence, even for wide-field movements, the image contrast distribution can lead the visual system to prefer a nonrigid, rather than rigid, solution to the 3-D shape-from-motion problem.
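The two combination rules the abstract distinguishes can be contrasted in a few lines (an illustrative sketch, not the authors' model; the tilts, weights, and choice probability are assumed values): averaging predicts intermediate tilt reports under a ±90° conflict, which the data rule out, whereas winner-take-all reproduces the observed bimodal reports biased toward SP.

```python
import math
import random

def average_rule(t_mp_deg, t_sp_deg, w_sp=0.5):
    """Weighted circular average of the two tilt estimates (the rule the
    data reject): predicts an intermediate tilt under conflict."""
    x = w_sp * math.cos(math.radians(t_sp_deg)) + (1 - w_sp) * math.cos(math.radians(t_mp_deg))
    y = w_sp * math.sin(math.radians(t_sp_deg)) + (1 - w_sp) * math.sin(math.radians(t_mp_deg))
    return math.degrees(math.atan2(y, x)) % 360

def winner_take_all(t_mp_deg, t_sp_deg, p_sp=0.8):
    """Report one cue's tilt outright; p_sp is the probability of choosing
    static perspective (an assumed value, biased toward SP as observed)."""
    return t_sp_deg if random.random() < p_sp else t_mp_deg

# A +/-90 deg conflict: averaging yields a tilt never reported in the data,
# while winner-take-all yields one of the two cue-specified tilts.
print(average_rule(30, 120))      # 75.0 -- intermediate tilt
print(winner_take_all(30, 120))   # 120 (usually) or 30
```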

