Fuzzy-System-Based Detection of Pupil Center and Corneal Specular Reflection for a Driver-Gaze Tracking System Based on the Symmetrical Characteristics of Face and Facial Feature Points

Symmetry ◽  
2017 ◽  
Vol 9 (11) ◽  
pp. 267 ◽  
Author(s):  
Dong Lee ◽  
Hyo Yoon ◽  
Hyung Hong ◽  
Kang Park

1997 ◽  
Vol 06 (02) ◽  
pp. 193-209 ◽  
Author(s):  
Rainer Stiefelhagen ◽  
Jie Yang ◽  
Alex Waibel

In this paper we present a non-intrusive, model-based gaze tracking system. The system estimates the 3-D pose of a user's head by tracking as few as six facial feature points. It locates the face using a statistical color model and then finds and tracks facial features such as the eyes, nostrils and lip corners. A full perspective model maps these feature points onto the 3-D pose. Several techniques have been developed to track the feature points and to recover from tracking failures. We currently achieve a frame rate of 15+ frames per second on an HP 9000 workstation with a framegrabber and a Canon VC-C1 camera. The system has been demonstrated with a gaze-driven panorama image viewer; potential applications include multimodal interfaces, virtual reality and video-teleconferencing.
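The statistical color model used for face localization can be sketched as a simple Gaussian skin-chromaticity classifier. This is an illustrative assumption of how such a model works; the sample pixels, per-channel Gaussian form, and threshold below are hypothetical, not the authors' actual parameters.

```python
# Hypothetical sketch of a statistical (Gaussian) skin-color model for face
# localization. Training samples and the distance threshold are illustrative
# values, not taken from the paper.

def train_color_model(skin_pixels):
    """Fit a per-channel Gaussian (mean, variance) to (r, g) chromaticity samples."""
    n = len(skin_pixels)
    means = [sum(p[c] for p in skin_pixels) / n for c in range(2)]
    variances = [
        sum((p[c] - means[c]) ** 2 for p in skin_pixels) / n for c in range(2)
    ]
    return means, variances

def is_skin(pixel, model, threshold=9.0):
    """Classify a normalized (r, g) chromaticity pixel by its squared
    Mahalanobis distance under a diagonal-covariance Gaussian."""
    means, variances = model
    d2 = sum((pixel[c] - means[c]) ** 2 / variances[c] for c in range(2))
    return d2 < threshold

# Chromaticities r = R/(R+G+B), g = G/(R+G+B) of hand-labeled skin pixels.
samples = [(0.45, 0.31), (0.47, 0.30), (0.44, 0.32), (0.46, 0.29)]
model = train_color_model(samples)
print(is_skin((0.46, 0.31), model))  # True: close to the training cluster
print(is_skin((0.20, 0.40), model))  # False: far from the skin distribution
```

Normalized chromaticity is a common choice here because it discounts overall brightness, which helps the classifier tolerate lighting changes between frames.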


2010 ◽  
Vol 22 (03) ◽  
pp. 185-192 ◽  
Author(s):  
Jin-Yu Chu ◽  
Jian-De Sun ◽  
Xiao-Hui Yang ◽  
Ju Liu ◽  
Wei Liu

Gaze tracking has become an active research field in recent years, for handicapped users as well as the general population, and a precise mapping method plays an important role in such systems. In this paper, a novel infrared gaze tracking system based on nonuniform interpolation is proposed. The eye images to be analyzed are captured under two infrared light sources with a charge-coupled device (CCD) camera, so users do not need to wear any device. First, the integral projection algorithm and Canny edge detection are applied to extract the pupil boundary points from the captured eye images, and the pupil center is then computed with an efficient and accurate ellipse-fitting algorithm. Finally, to estimate where the user is looking, a novel mapping method based on nonuniform interpolation is proposed. This mapping method requires neither a complicated geometric eyeball model nor an explicit nonlinear mapping between pupil-center coordinates and monitor-screen coordinates. Experimental results show that the proposed mapping method is simple, fast, and more accurate than comparable approaches. Moreover, the system is a remote eye-gaze tracker: because users wear no device, it is more comfortable to use.
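The nonuniform-interpolation mapping can be sketched as piecewise-linear interpolation between calibration points whose observed pupil-center coordinates are not evenly spaced, applied separately per axis. The calibration grid below is a made-up example, not the paper's data, and the per-axis separable form is an assumption for illustration.

```python
# Hypothetical sketch of a nonuniform-interpolation gaze mapping: pupil-center
# coordinates recorded while fixating known screen targets are interpolated
# piecewise linearly to screen coordinates. Calibration values are illustrative.
import bisect

# Pupil coordinates observed at a 3x3 calibration grid (nonuniformly spaced).
pupil_x = [102.0, 160.0, 231.0]      # pupil x at screen columns 0, 640, 1280
pupil_y = [88.0, 131.0, 169.0]       # pupil y at screen rows 0, 384, 768
screen_x = [0.0, 640.0, 1280.0]
screen_y = [0.0, 384.0, 768.0]

def interp1d(xs, ys, x):
    """Piecewise-linear interpolation on a nonuniform knot sequence."""
    i = bisect.bisect_right(xs, x) - 1
    i = max(0, min(i, len(xs) - 2))  # clamp to the outermost segments
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

def gaze_point(px, py):
    """Map a pupil center (px, py) to an estimated screen coordinate."""
    return interp1d(pupil_x, screen_x, px), interp1d(pupil_y, screen_y, py)

gx, gy = gaze_point(160.0, 131.0)    # pupil at the central calibration point
print(round(gx), round(gy))          # 640 384: the screen-center target
```

Because the mapping is anchored directly at measured calibration samples, no eyeball geometry or global nonlinear model needs to be fitted, which matches the simplicity claim in the abstract.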


2010 ◽  
Vol 36 (8) ◽  
pp. 1051-1061 ◽  
Author(s):  
Chuang ZHANG ◽  
Jian-Nan CHI ◽  
Zhao-Hui ZHANG ◽  
Zhi-Liang WANG

2021 ◽  
Vol 11 (2) ◽  
pp. 851 ◽  
Author(s):  
Wei-Liang Ou ◽  
Tzu-Ling Kuo ◽  
Chin-Chieh Chang ◽  
Chih-Peng Fan

In this study, a pupil tracking methodology based on deep learning is developed for visible-light wearable eye trackers. By applying object detection based on the You Only Look Once (YOLO) model, the proposed method effectively estimates and predicts the pupil center in visible-light mode. With the developed YOLOv3-tiny-based model, the detection accuracy reaches 80% and the recall is close to 83%. In addition, the average visible-light pupil tracking errors of the proposed YOLO-based design are smaller than 2 pixels in the training mode and 5 pixels in the cross-person test, much smaller than those of a previous ellipse-fitting design without deep learning under the same visible-light conditions. After combination with a calibration process, the average gaze tracking errors of the proposed YOLOv3-tiny-based pupil tracking models are smaller than 2.9 and 3.5 degrees in the training and testing modes, respectively, and the proposed visible-light wearable gaze tracking system runs at up to 20 frames per second (FPS) on a GPU-based embedded software platform.
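The reported pixel errors can be understood as the Euclidean distance between the annotated pupil center and the center of the detected bounding box. The evaluation sketch below assumes that reading; the bounding box and ground-truth values are made up for illustration, not taken from the paper's dataset.

```python
# Hypothetical sketch of evaluating a YOLO-style pupil detector: the pupil
# center is taken as the center of the predicted bounding box, and the error
# is its Euclidean distance to the annotated center. All values are
# illustrative, not the paper's measurements.
import math

def box_center(box):
    """Center of an (x_min, y_min, x_max, y_max) bounding box."""
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def pixel_error(pred_box, true_center):
    """Euclidean pupil-center error in pixels."""
    cx, cy = box_center(pred_box)
    return math.hypot(cx - true_center[0], cy - true_center[1])

# A detection within the abstract's ~2-pixel training-mode error bound.
pred = (310.0, 238.0, 330.0, 258.0)   # predicted pupil bounding box
truth = (321.0, 249.0)                # annotated pupil center
err = pixel_error(pred, truth)
print(round(err, 2))                  # 1.41 px: sqrt(1^2 + 1^2)
```

Averaging this error over a held-out set of frames gives per-mode figures comparable to the 2-pixel (training) and 5-pixel (cross-person) numbers quoted above.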


2009 ◽  
Vol 30 (12) ◽  
pp. 1144-1150 ◽  
Author(s):  
Diego Torricelli ◽  
Michela Goffredo ◽  
Silvia Conforto ◽  
Maurizio Schmid
