Accurate Model-Based Point of Gaze Estimation on Mobile Devices

Vision ◽  
2018 ◽  
Vol 2 (3) ◽  
pp. 35 ◽  
Author(s):  
Braiden Brousseau ◽  
Jonathan Rose ◽  
Moshe Eizenman

The most accurate remote Point of Gaze (PoG) estimation methods that allow free head movements use infrared light sources and cameras together with gaze estimation models. Current gaze estimation models were developed for desktop eye-tracking systems and assume that the relative roll between the system and the subject's eyes (the 'R-Roll') is roughly constant during use. This assumption does not hold for hand-held mobile-device-based eye-tracking systems. We present an analysis showing that the accuracy of estimating the PoG on screens of hand-held mobile devices depends on the magnitude of the R-Roll angle and on the angular offset between the visual and optical axes of the individual viewer. We also describe a new method to determine the PoG that compensates for the effects of R-Roll on PoG accuracy. Experimental results on a prototype infrared smartphone show that for an R-Roll angle of 90°, the new method achieves an accuracy of approximately 1°, while a gaze estimation method that assumes a constant R-Roll angle achieves an accuracy of 3.5°. The manner in which the experimental PoG estimation errors increase with the R-Roll angle is consistent with the analysis. The method presented in this paper can significantly improve the performance of eye-tracking systems on hand-held mobile devices.
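The roll-compensation idea can be illustrated with a short sketch: the subject-specific offset between the visual and optical axes, calibrated at zero roll, is rotated by the measured R-Roll angle before it is applied to the reconstructed optic axis. This is only a minimal sketch; the function names, the small-angle treatment of the offset, and the axis conventions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def roll_compensated_gaze(optic_axis_dev, alpha_deg, beta_deg, r_roll_deg):
    """Rotate the calibrated visual/optic-axis offset by the measured R-Roll
    angle before applying it to the optic axis (all in the device frame).

    optic_axis_dev : unit 3-vector of the reconstructed optic axis (device coords)
    alpha_deg, beta_deg : subject-specific horizontal/vertical offsets between
                          the visual and optic axes, calibrated at zero roll
    r_roll_deg : relative roll between the device and the subject's eye
    """
    a, b = np.radians([alpha_deg, beta_deg])
    r = np.radians(r_roll_deg)

    # Rotate the 2-D offset by the R-Roll angle so it stays fixed in the
    # eye's frame rather than the device's frame.
    rot = np.array([[np.cos(r), -np.sin(r)],
                    [np.sin(r),  np.cos(r)]])
    a_c, b_c = rot @ np.array([a, b])

    # Apply the compensated offsets as small rotations of the optic axis.
    def rot_y(t):  # rotation about the device y-axis (horizontal offset)
        return np.array([[np.cos(t), 0, np.sin(t)],
                         [0, 1, 0],
                         [-np.sin(t), 0, np.cos(t)]])

    def rot_x(t):  # rotation about the device x-axis (vertical offset)
        return np.array([[1, 0, 0],
                         [0, np.cos(t), -np.sin(t)],
                         [0, np.sin(t), np.cos(t)]])

    visual_axis = rot_x(b_c) @ rot_y(a_c) @ optic_axis_dev
    return visual_axis / np.linalg.norm(visual_axis)
```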


2013 ◽  
pp. 1062-1083
Author(s):  
Fiona Mulvey ◽  
Arantxa Villanueva ◽  
David Sliney ◽  
Robert Lange ◽  
Michael Donegan

Infrared light is the most common choice for illumination of the eye in current eye trackers, usually produced via IR light-emitting diodes (LEDs). This chapter provides an overview of the potential hazards of over-exposure to infrared light, the safety standards currently in place, configurations and lighting conditions employed by various eye tracking systems, the basics of measurement of IR light sources in eye trackers, and special considerations associated with continuous exposure in the case of gaze control for communication and disabled users. It should be emphasised that any eye tracker intended for production should undergo testing by qualified professionals at a recognised test house, in a controlled laboratory setting. However, some knowledge of the measurement procedures and issues involved should be useful to designers and users of eye tracking systems.
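As a rough illustration of the measurement reasoning the chapter covers, the sketch below estimates corneal irradiance from an LED's radiant intensity with the inverse-square law and compares it against a configurable exposure limit. The limit value, the point-source assumption, and the example numbers are placeholders; as the chapter stresses, real safety assessment belongs with qualified professionals at a recognised test house.

```python
# Back-of-the-envelope screening check for a single IR LED, treating the LED
# as a point source. Not a substitute for measurement in a test house.

def corneal_irradiance_w_m2(radiant_intensity_w_sr: float, distance_m: float) -> float:
    """Irradiance (W/m^2) at distance d from an LED modelled as a point source."""
    return radiant_intensity_w_sr / (distance_m ** 2)

# Example: LED with 25 mW/sr radiant intensity viewed from 0.5 m.
e = corneal_irradiance_w_m2(0.025, 0.5)     # -> 0.1 W/m^2
EXPOSURE_LIMIT_W_M2 = 100.0                 # placeholder chronic IR-A limit (~10 mW/cm^2)
print(f"irradiance: {e:.3f} W/m^2, within limit: {e < EXPOSURE_LIMIT_W_M2}")
```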


Vision ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 41
Author(s):  
Fabricio Batista Narcizo ◽  
Fernando Eustáquio Dantas dos Santos ◽  
Dan Witzner Hansen

This study investigates the influence of the eye-camera location on the accuracy and precision of interpolation-based eye-tracking methods. Several factors can negatively influence gaze estimation methods when building a commercial or off-the-shelf eye tracker, including the eye-camera location in uncalibrated setups. Our experiments show that the eye-camera location, combined with the non-coplanarity of the eye plane, deforms the eye feature distribution when the eye-camera is far from the eye's optical axis. This paper proposes geometric transformation methods that reshape the eye feature distribution based on a virtual alignment of the eye-camera with the center of the eye's optical axis. The data analysis uses eye-tracking data from a simulated environment and from an experiment with 83 volunteer participants (55 males and 28 females). We evaluate the improvements achieved with the proposed methods using Gaussian analysis, which defines a range for high-accuracy gaze estimation between −0.5° and 0.5°. Compared to traditional polynomial-based and homography-based gaze estimation methods, the proposed methods increase the number of gaze estimations in the high-accuracy range.
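One way to realize the virtual alignment described above, sketched here under assumptions, is to estimate a homography from eye features recorded at a few reference targets to a canonical, axis-aligned frame and to push every eye feature through it before gaze fitting. The function names and the choice of OpenCV's findHomography are illustrative; this is not the paper's exact transformation.

```python
import numpy as np
import cv2  # used only for homography estimation and application

def virtually_align_features(eye_feats, eye_feats_at_targets, canonical_targets):
    """Reshape the eye-feature (e.g. pupil-center) distribution by mapping it
    through the homography that sends the features observed while fixating
    four reference targets onto a canonical, axis-aligned square.

    eye_feats            : (N, 2) raw eye features in image coordinates
    eye_feats_at_targets : (4, 2) eye features recorded at the reference targets
    canonical_targets    : (4, 2) the same targets in the canonical (aligned) frame
    """
    H, _ = cv2.findHomography(
        eye_feats_at_targets.astype(np.float32),
        canonical_targets.astype(np.float32),
    )
    pts = eye_feats.astype(np.float32).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```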


Author(s):  
Arantxa Villanueva ◽  
Rafael Cabeza ◽  
Javier San Agustin

The main objective of gaze trackers is to provide an accurate estimate of the user's gaze from the eye tracking information. Gaze, in its most general form, can be described either as the line of sight or line of gaze, i.e., imaginary 3D lines expressed with respect to the camera, or as the point of regard (also termed the point of gaze). This chapter introduces different gaze estimation techniques, including geometry-based and interpolation methods. Issues related to both remote and head-mounted trackers are discussed, and different fixation estimation methods are briefly introduced. It is assumed that the reader is familiar with basic 3D geometry concepts as well as advanced mathematics such as matrix manipulation and vector calculus.
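A minimal example of the interpolation family mentioned above is the classic second-order polynomial mapping from pupil-glint vectors to screen coordinates, fitted by least squares during calibration. The sketch below assumes NumPy and the usual six-term polynomial; it is illustrative, not code from the chapter.

```python
import numpy as np

def fit_poly_gaze_mapping(pg_vectors, screen_points):
    """Fit a second-order polynomial mapping from pupil-glint vectors (x, y)
    to screen coordinates, one least-squares fit per screen axis.

    pg_vectors    : (N, 2) pupil-center minus glint-center vectors (calibration)
    screen_points : (N, 2) corresponding on-screen calibration target positions
    """
    x, y = pg_vectors[:, 0], pg_vectors[:, 1]
    # Design matrix with the usual six terms: 1, x, y, xy, x^2, y^2
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, screen_points, rcond=None)
    return coeffs  # shape (6, 2): one column of coefficients per screen axis

def estimate_gaze(coeffs, pg_vector):
    """Map a single pupil-glint vector to an on-screen point of regard."""
    x, y = pg_vector
    features = np.array([1.0, x, y, x * y, x**2, y**2])
    return features @ coeffs  # (screen_x, screen_y)
```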


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 26
Author(s):  
David González-Ortega ◽  
Francisco Javier Díaz-Pernas ◽  
Mario Martínez-Zarzuela ◽  
Míriam Antón-Rodríguez

Driver gaze information can be crucial in driving research because of its relation to driver attention. In particular, the inclusion of gaze data in driving simulators broadens the scope of research studies, as drivers' gaze patterns can be related to their characteristics and performance. In this paper, we present two gaze region estimation modules integrated in a driving simulator: one uses the 3D Kinect device and the other uses the virtual reality Oculus Rift device. In every processed frame of the route, the modules detect which of the seven regions into which the driving scene was divided the driver is gazing at. Four gaze estimation methods, which learn the relation between gaze displacement and head movement, were implemented and compared: two simpler, point-based methods that try to capture this relation directly, and two based on classifiers (MLP and SVM). Experiments were carried out with 12 users who drove the same scenario twice, each time with a different visualization display: first a big screen and later the Oculus Rift. On the whole, the Oculus Rift outperformed the Kinect as hardware for gaze estimation. The Oculus-based gaze region estimation method with the highest performance achieved an accuracy of 97.94%. The information provided by the Oculus Rift module enriches the driving simulator data and makes a multimodal analysis of driving performance possible, in addition to the immersion and realism provided by the virtual reality experience.
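The classifier-based variant can be sketched as follows, assuming per-frame head-pose angles (yaw, pitch, roll) as features and one of the seven scene regions as the label. The feature set, file names, and hyperparameters are illustrative assumptions, not the implementation described in the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-frame training data: head-pose features and region labels.
X_train = np.load("head_pose_features.npy")   # shape (N, 3): yaw, pitch, roll
y_train = np.load("gaze_region_labels.npy")   # shape (N,): region id in 0..6

# Standardize features, then train an RBF-kernel SVM region classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)

# Per-frame prediction of the region the driver is gazing at (angles in degrees).
region = clf.predict([[12.5, -4.0, 1.3]])[0]
print(f"predicted gaze region: {region}")
```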


2019 ◽  
Vol 35 (14) ◽  
pp. i417-i426 ◽  
Author(s):  
Erin K Molloy ◽  
Tandy Warnow

Motivation: At RECOMB-CG 2018, we presented NJMerge and showed that it could be used within a divide-and-conquer framework to scale computationally intensive methods for species tree estimation to larger datasets. However, NJMerge has two significant limitations: it can fail to return a tree and, when used within the proposed divide-and-conquer framework, has O(n⁵) running time for datasets with n species. Results: Here we present a new method called 'TreeMerge' that improves on NJMerge in two ways: it is guaranteed to return a tree, and it runs dramatically faster within the same divide-and-conquer framework, in only O(n²) time. We use a simulation study to evaluate TreeMerge in the context of multi-locus species tree estimation with two leading methods, ASTRAL-III and RAxML. We find that the divide-and-conquer framework using TreeMerge has a minor impact on species tree accuracy, dramatically reduces running time, and enables both ASTRAL-III and RAxML to complete on datasets that they would otherwise fail on, when given 64 GB of memory and a maximum running time of 48 h. Thus, TreeMerge is a step toward a larger vision of enabling researchers with limited computational resources to perform large-scale species tree estimation, which we call Phylogenomics for All. Availability and implementation: TreeMerge is publicly available on GitHub (http://github.com/ekmolloy/treemerge). Supplementary information: Supplementary data are available at Bioinformatics online.
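The divide-and-conquer framework itself can be sketched as a three-step skeleton, shown below. The decompose, estimate_subtree, and merge_trees callables are placeholders: in the paper's pipeline the subsets come from a guide-tree decomposition, the subtrees from ASTRAL-III or RAxML, and the merging step from TreeMerge. The parameter names are illustrative.

```python
def divide_and_conquer_species_tree(taxa, alignment,
                                    decompose, estimate_subtree, merge_trees,
                                    max_subset_size=120):
    """Estimate a species tree on a large taxon set by (1) splitting the taxa
    into overlapping subsets, (2) running the base method on each subset, and
    (3) merging the resulting subtrees into a single tree on all taxa."""
    # 1. Decompose the taxon set into subsets small enough for the base method.
    subsets = decompose(taxa, alignment, max_subset_size)
    # 2. Run the (computationally intensive) base method on each subset.
    subtrees = [estimate_subtree(subset, alignment) for subset in subsets]
    # 3. Merge the subset trees into one tree on the full taxon set.
    return merge_trees(subtrees, alignment)
```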


2015 ◽  
Vol 15 (7) ◽  
pp. 88-98
Author(s):  
J. Dezert ◽  
A. Tchamova ◽  
P. Konstantinova

The main purpose of this paper is to apply and test the performance of a new method, based on belief functions and proposed by Dezert et al., for evaluating the quality of the individual association pairings in the optimal data association solution, with the aim of improving the performance of multisensor-multitarget tracking systems. The advantages of its implementation are discussed in an illustrative, realistic surveillance context in which some of the association decisions are unreliable and doubtful and can lead to potentially critical mistakes. A comparison is made with the results obtained on the basis of Generalized Data Association.
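For context, the sketch below obtains the optimal track-to-measurement association with the Hungarian algorithm (via SciPy) and attaches a crude quality score to each individual pairing. The cost-gap heuristic is only a stand-in for the belief-function evaluation proposed by Dezert et al.; it is included to illustrate what "quality of individual pairings" refers to, under assumed data structures.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_with_quality(cost):
    """Solve the minimum-cost track-to-measurement association and attach a
    simple quality score to each pairing.

    cost : (n_tracks, n_measurements) association cost matrix.
    The score is the gap between a pairing's cost and the next-best cost in
    its row; a small gap marks the pairing as doubtful. This heuristic is a
    placeholder for the belief-function evaluation described in the paper.
    """
    rows, cols = linear_sum_assignment(cost)
    quality = []
    for r, c in zip(rows, cols):
        others = np.delete(cost[r], c)
        gap = (others.min() - cost[r, c]) if others.size else np.inf
        quality.append(gap)
    return list(zip(rows, cols)), quality

# Example: three tracks, three measurements.
pairs, scores = associate_with_quality(np.array([[1.0, 4.0, 6.0],
                                                 [3.0, 1.2, 5.0],
                                                 [4.5, 4.4, 0.8]]))
print(pairs, scores)
```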

