Gaze-Accuracy during Monocular and Binocular Viewing

Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 371-371
Author(s):  
R M Steinman ◽  
T I Forofonova ◽  
J Epelboim ◽  
M R Stepanov

Epelboim et al (1996, Vision Research 35, 3401–3422) reported that cyclopean gaze errors were smaller than either eye's during tapping and looking-only tasks. This raised two questions: (i) does cyclopean gaze accuracy require binocular input, and (ii) when only one eye sees, is its gaze more accurate than the patched eye's? Most oculomotorists probably expect an affirmative answer to both. Neither expectation was fulfilled. The Maryland Revolving Field Monitor recorded, with exceptional accuracy, eye movements of two unrestrained subjects tapping or only looking, in a specified order, at four randomly positioned LEDs, with monocular or binocular viewing. Subjects either tapped with their fingertips naturally, or unnaturally via a rod (2 mm diameter, 1.5 cm long), glued to a sewing thimble. Instructions were to be fast, but make no order errors. With binocular viewing, cyclopean gaze accuracy was best during looking-only. During natural tapping, gaze errors increased, becoming no smaller than success required. Both tasks were learned equally fast, but as expected, the younger subject (aged 27 years) performed ∼ 40% faster than the older subject (aged 69 years). Unnatural, monocular viewing produced odd results, e.g. cyclopean gaze error was smallest when only one eye could see in some conditions. Only the older subject served in the unnatural tapping task because the younger's errors were too close to his gaze control limit. The older subject, who was suitable, reduced his cyclopean gaze error by 56%, from 1.4 to 0.9 deg. These results support our claim that the gaze error allowed is adjusted to the visuomotor demands of different tasks.

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Tobias Wibble ◽  
Tony Pansell

Abstract Vertical vergence is generally associated with one of three mechanisms: vestibular activation during a head tilt, induced by vertical visual disparity, or as a by-product of ocular torsion. However, vertical vergence can also be induced by seemingly unrelated visual conditions, such as optokinetic rotations. This study aims to investigate the effect of vision on this latter form of vertical vergence. Eight subjects (4m/4f) viewed a visual scene in a head-erect position under two different viewing conditions (monocular and binocular). The scene, containing white lines angled at 45° against a black background, was projected at an eye-screen distance of 2 m, and rotated 28° at an acceleration of 56°/s². Eye movements were recorded using a Chronos Eye-Tracker, and eye occlusions were carried out by placing an infrared-translucent cover in front of the left eye during monocular viewing. Results revealed vergence amplitudes during binocular viewing to be significantly lower than those seen for monocular conditions (p = 0.003), while torsion remained unaffected. This indicates that vertical vergence to optokinetic stimulation, though visually induced, is visually suppressed during binocular viewing. Considering that vertical vergence is generally viewed as a vestibular signal, the findings may reflect a visually induced activation of a vestibular pathway.


2018 ◽  
pp. 186-199

Background: Coincidence-anticipation timing (CAT) responses require individuals to determine the time at which an approaching object will arrive at (time to collision) or pass by (time to passage) the observer and to then make a response coincident with this time. Previous studies suggest that under some conditions time to collision estimates are more accurate when binocular and monocular cues are combined. The purpose of this study was to compare binocular and monocular coincidence-anticipation timing responses with the Bassin Anticipation Timer, a device for testing and training CAT responses. Methods: Useable data were obtained from 20 participants. Coincidence-anticipation timing responses were determined using a Bassin Anticipation Timer over a range of approaching stimulus linear velocities of 5 to 40 mph. Participants stood to the left side of the Bassin Anticipation track. The track was below eye height. The participants’ task was to push a button to coincide with arrival of the approaching stimulus at a location immediately adjacent to the participant. CAT responses were made under three randomized conditions: binocular viewing, monocular dominant eye viewing, and monocular non-dominant eye viewing. Results: Signed (constant), unsigned (absolute), and variable (standard deviation) CAT response errors were determined and compared across viewing conditions at each stimulus velocity. There were no significant differences in CAT errors between the conditions at any stimulus velocity, although the differences in signed and unsigned errors approached significance at 40 mph. Conclusions: The addition of binocular cues did not result in a reduction in coincidence-anticipation timing response errors compared to the monocular viewing conditions. There were no differences in CAT response errors between the monocular dominant eye viewing and monocular non-dominant eye viewing conditions.
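The three CAT error measures named above (signed/constant, unsigned/absolute, and variable error) are standard summary statistics over trial errors. A minimal sketch, assuming trial errors are stored as signed differences in milliseconds between the button press and the stimulus arrival (the function name and example values are hypothetical, not from the study):

```python
import statistics

def cat_errors(errors_ms):
    """Compute the three coincidence-anticipation timing (CAT) error
    measures from signed trial errors (response time minus arrival
    time, in ms; negative = early, positive = late)."""
    signed = statistics.mean(errors_ms)                     # constant error
    unsigned = statistics.mean(abs(e) for e in errors_ms)   # absolute error
    variable = statistics.stdev(errors_ms)                  # variability (sample SD)
    return signed, unsigned, variable

# Five hypothetical trials at one stimulus velocity
trials = [-20.0, 15.0, -5.0, 30.0, -10.0]
signed, unsigned, variable = cat_errors(trials)
```

The signed error captures directional bias (consistently early or late), the unsigned error overall accuracy, and the variable error response consistency, which is why all three are reported separately per viewing condition.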


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5178
Author(s):  
Sangbong Yoo ◽  
Seongmin Jeong ◽  
Seokyeon Kim ◽  
Yun Jang

Gaze movement and visual stimuli have been utilized to analyze human visual attention intuitively. Gaze behavior studies mainly show statistical analyses of eye movements and human visual attention. During these analyses, eye movement data and the saliency map are presented to the analysts as separate views or merged views. However, the analysts become frustrated when they need to memorize all of the separate views or when the eye movements obscure the saliency map in the merged views. Therefore, it is not easy to analyze how visual stimuli affect gaze movements since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior using saliency features as visual clues to express the visual attention of an observer. The visual clues that represent visual attention are analyzed to reveal which saliency features are prominent for the visual stimulus analysis. We visualize the gaze data with the saliency features to interpret the visual attention. We analyze gaze behavior with the proposed visualization to evaluate whether our approach of embedding saliency features within the visualization helps us understand the visual attention of an observer.
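A core step in embedding saliency features into a gaze visualization is attaching, to each fixation, the saliency value of the stimulus at that location. A minimal sketch of that lookup, assuming a precomputed 2-D saliency map and pixel-coordinate fixations (the function name, array layout, and toy values are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def saliency_at_fixations(saliency_map, fixations):
    """Sample a 2-D saliency map at each gaze fixation point.
    saliency_map: (H, W) array of saliency values in [0, 1].
    fixations: iterable of (x, y) pixel coordinates.
    Returns one saliency value per fixation, which can be attached
    to the gaze points as a visual clue when rendering."""
    h, w = saliency_map.shape
    values = []
    for x, y in fixations:
        # Clamp to the map bounds before indexing (row = y, col = x)
        xi = min(max(int(round(x)), 0), w - 1)
        yi = min(max(int(round(y)), 0), h - 1)
        values.append(float(saliency_map[yi, xi]))
    return values

# Toy example: a 4x4 map with one salient pixel at (x=1, y=2)
smap = np.zeros((4, 4))
smap[2, 1] = 0.9
vals = saliency_at_fixations(smap, [(1, 2), (3, 3)])
```

Encoding these per-fixation values (e.g. as point color or size) lets the gaze data carry the saliency information directly, instead of forcing the analyst to cross-reference a separate saliency view.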


2018 ◽  
Vol 71 (9) ◽  
pp. 1860-1872 ◽  
Author(s):  
Stephen RH Langton ◽  
Alex H McIntyre ◽  
Peter JB Hancock ◽  
Helmut Leder

Research has established that a perceived eye gaze produces a concomitant shift in a viewer’s spatial attention in the direction of that gaze. The two experiments reported here investigate the extent to which the nature of the eye movement made by the gazer contributes to this orienting effect. On each trial in these experiments, participants were asked to make a speeded response to a target that could appear in a location toward which a centrally presented face had just gazed (a cued target) or in a location that was not the recipient of a gaze (an uncued target). The gaze cues consisted of either fast saccadic eye movements or slower smooth pursuit movements. Cued targets were responded to faster than uncued targets, and this gaze-cued orienting effect was found to be equivalent for each type of gaze shift both when the gazes were unpredictive of target location (Experiment 1) and counterpredictive of target location (Experiment 2). The results offer no support for the hypothesis that motion speed modulates gaze-cued orienting. However, they do suggest that motion of the eyes per se, regardless of the type of movement, may be sufficient to trigger an orienting effect.


2019 ◽  
Vol 121 (5) ◽  
pp. 1967-1976 ◽  
Author(s):  
Niels Gouirand ◽  
James Mathew ◽  
Eli Brenner ◽  
Frederic R. Danion

Adapting hand movements to changes in our body or the environment is essential for skilled motor behavior. Although eye movements are known to assist hand movement control, how eye movements might contribute to the adaptation of hand movements remains largely unexplored. To determine to what extent eye movements contribute to visuomotor adaptation of hand tracking, participants were asked to track a visual target that followed an unpredictable trajectory with a cursor using a joystick. During blocks of trials, participants were either allowed to look wherever they liked or required to fixate a cross at the center of the screen. Eye movements were tracked to ensure gaze fixation as well as to examine free gaze behavior. The cursor initially responded normally to the joystick, but after several trials, the direction in which it responded was rotated by 90°. Although fixating the eyes had a detrimental influence on hand tracking performance, participants exhibited a rather similar time course of adaptation to rotated visual feedback in the gaze-fixed and gaze-free conditions. More importantly, there was extensive transfer of adaptation between the gaze-fixed and gaze-free conditions. We conclude that although eye movements are relevant for the online control of hand tracking, they do not play an important role in the visuomotor adaptation of such tracking. These results suggest that participants do not adapt by changing the mapping between eye and hand movements, but rather by changing the mapping between hand movements and the cursor’s motion independently of eye movements. NEW & NOTEWORTHY Eye movements assist hand movements in everyday activities, but their contribution to visuomotor adaptation remains largely unknown. We compared adaptation of hand tracking under free gaze and fixed gaze. 
Although our results confirm that following the target with the eyes increases the accuracy of hand movements, they unexpectedly demonstrate that gaze fixation does not hinder adaptation. These results suggest that eye movements make distinct contributions to the online control and the visuomotor adaptation of hand movements.
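The visuomotor perturbation described above, where the cursor's response to the joystick is rotated by 90°, is a standard rotation of the 2-D input vector. A minimal sketch of such a mapping (the function name is hypothetical; the study's sign convention and screen coordinate frame are not specified, so a counter-clockwise rotation in standard math coordinates is assumed):

```python
import math

def rotate_joystick(dx, dy, angle_deg=90.0):
    """Rotate a joystick displacement (dx, dy) by angle_deg before
    applying it to the cursor, as in a visuomotor rotation paradigm.
    Positive angles rotate counter-clockwise in standard coordinates."""
    a = math.radians(angle_deg)
    rx = dx * math.cos(a) - dy * math.sin(a)
    ry = dx * math.sin(a) + dy * math.cos(a)
    return rx, ry

# Under a 90° rotation, a rightward push (1, 0) maps to (0, 1):
rx, ry = rotate_joystick(1.0, 0.0)
```

Adapting to this perturbation requires participants to learn the new hand-to-cursor mapping, which is what makes the transfer between gaze-fixed and gaze-free conditions informative: the learned remapping evidently does not depend on where the eyes were pointing.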


1999 ◽  
Vol 81 (6) ◽  
pp. 3105-3109 ◽  
Author(s):  
T. Belton ◽  
R. A. McCrea

Contribution of the cerebellar flocculus to gaze control during active head movements. The flocculus and ventral paraflocculus are adjacent regions of the cerebellar cortex that are essential for controlling smooth pursuit eye movements and for altering the performance of the vestibulo-ocular reflex (VOR). The question addressed in this study is whether these regions of the cerebellum are more globally involved in controlling gaze, regardless of whether eye or active head movements are used to pursue moving visual targets. Single-unit recordings were obtained from Purkinje (Pk) cells in the floccular region of squirrel monkeys that were trained to fixate and pursue small visual targets. Cell firing rate was recorded during smooth pursuit eye movements, cancellation of the VOR, combined eye-head pursuit, and spontaneous gaze shifts in the absence of targets. Pk cells were found to be much less sensitive to gaze velocity during combined eye-head pursuit than during ocular pursuit. They were not sensitive to gaze or head velocity during gaze saccades. Temporary inactivation of the floccular region by muscimol injection compromised ocular pursuit but had little effect on the ability of monkeys to pursue visual targets with head movements or to cancel the VOR during active head movements. Thus the signals produced by Pk cells in the floccular region are necessary for controlling smooth pursuit eye movements but not for coordinating gaze during active head movements. The results imply that individual functional modules in the cerebellar cortex are less involved in the global organization and coordination of movements than in the parametric control of movements produced by a specific part of the body.


1998 ◽  
Vol 79 (6) ◽  
pp. 3060-3076 ◽  
Author(s):  
Martin Paré ◽  
Daniel Guitton

Paré, Martin and Daniel Guitton. Brain stem omnipause neurons and the control of combined eye-head gaze saccades in the alert cat. J. Neurophysiol. 79: 3060–3076, 1998. When the head is unrestrained, rapid displacements of the visual axis—gaze shifts (eye-re-space)—are made by coordinated movements of the eyes (eye-re-head) and head (head-re-space). To address the problem of the neural control of gaze shifts, we studied and contrasted the discharges of omnipause neurons (OPNs) during a variety of combined eye-head gaze shifts and head-fixed eye saccades executed by alert cats. OPNs discharged tonically during intersaccadic intervals and at a reduced level during slow perisaccadic gaze movements sometimes accompanying saccades. Their activity ceased for the duration of the saccadic gaze shifts the animal executed, either by head-fixed eye saccades alone or by combined eye-head movements. This was true for all types of gaze shifts studied: active movements to visual targets; passive movements induced by whole-body rotation or by head rotation on a stationary body; and movements electrically evoked by stimulation of the caudal part of the superior colliculus (SC), a central structure for gaze control. For combined eye-head gaze shifts, the OPN pause was therefore not correlated to the eye-in-head trajectory. For instance, in active gaze movements, the end of the pause was better correlated with the gaze end than with either the eye saccade end or the time of eye counterrotation. The hypothesis that cat OPNs participate in controlling gaze shifts is supported by these results, and also by the observation that the movements of both the eyes and the head were transiently interrupted by stimulation of OPNs during gaze shifts. However, we found that the OPN pause could be dissociated from the gaze-motor-error signal producing the gaze shift. 
First, OPNs resumed discharging when perturbation of head motion briefly interrupted a gaze shift before its intended amplitude was attained. Second, stimulation of caudal SC sites in head-free cat elicited large head-free gaze shifts consistent with the creation of a large gaze-motor-error signal. However, stimulation of the same sites in head-fixed cat produced small “goal-directed” eye saccades, and OPNs paused only for the duration of the latter; neither a pause nor an eye movement occurred when the same stimulation was applied with the eyes at the goal location. We conclude that OPNs can be controlled by neither a simple eye control system nor an absolute gaze control system. Our data cannot be accounted for by existing models describing the control of combined eye-head gaze shifts and therefore put new constraints on future models, which will have to incorporate all the various signals that act synergistically to control gaze shifts.


Author(s):  
Csaba Antonya ◽  
Florin Barbuceanu ◽  
Zoltán Rusák ◽ 
Doru Talaba ◽  
Silviu Butnariu ◽  
...  

The paper investigates the relationship between human eye movements, correlated with the visual perception of a computer-generated scene, on the one hand and obstacle avoidance strategies on the other, during the process of driving a computer game-like car. Several issues were investigated regarding how the gaze fixation point of the driver moves during obstacle avoidance maneuvers. The relevance of each issue in making a decision was assessed. The main goal is to establish a correlation (mapping) system between gaze fixation parameters and obstacle avoidance strategies in order to be able to develop cognitive algorithms for driver assistance in real-world driving conditions, to monitor the driver’s vigilance, and ultimately to enable progress towards autonomous vehicles that can avoid possible obstacles or resolve hazardous traffic situations just by monitoring the eye movements of the driver.


2003 ◽  
Vol 90 (4) ◽  
pp. 2770-2776 ◽  
Author(s):  
Julio C. Martinez-Trujillo ◽  
Eliana M. Klier ◽  
Hongying Wang ◽  
J. Douglas Crawford

Most of what we know about the neural control of gaze comes from experiments in head-fixed animals, but several “head-free” studies have suggested that fixing the head dramatically alters the apparent gaze command. We directly investigated this issue by quantitatively comparing head-fixed and head-free gaze trajectories evoked by electrically stimulating 52 sites in the superior colliculus (SC) of two monkeys and 23 sites in the supplementary eye fields (SEF) of two other monkeys. We found that head movements made a significant contribution to gaze shifts evoked from both neural structures. In the majority of the stimulated sites, average gaze amplitude was significantly larger and individual gaze trajectories were significantly less convergent in space with the head free to move. Our results are consistent with the hypothesis that head-fixed stimulation only reveals the oculomotor component of the gaze shift, not the true, planned goal of the movement. One implication of this finding is that when comparing stimulation data against popular gaze control models, freeing the head shifts the apparent coding of gaze away from a “spatial code” toward a simpler visual model in the SC and toward an eye-centered or fixed-vector model representation in the SEF.

