Wing Loss for Robust Facial Landmark Localisation with Convolutional Neural Networks

Author(s):  
Zhen-Hua Feng ◽  
Josef Kittler ◽  
Muhammad Awais ◽  
Patrik Huber ◽  
Xiao-Jun Wu
2019 ◽  
Vol 128 (8-9) ◽  
pp. 2126-2145 ◽  

Abstract
Efficient and robust facial landmark localisation is crucial for the deployment of real-time face analysis systems. This paper presents a new loss function, namely the Rectified Wing (RWing) loss, for regression-based facial landmark localisation with Convolutional Neural Networks (CNNs). We first systematically analyse different loss functions, including L2, L1 and smooth L1. The analysis suggests that the training of a network should pay more attention to small-medium errors. Motivated by this finding, we design a piece-wise loss that amplifies the impact of the samples with small-medium errors. In addition, we rectify the loss function for very small errors to mitigate the impact of manual annotation inaccuracies. The use of our RWing loss boosts the performance significantly for regression-based CNNs in facial landmarking, especially for lightweight network architectures. To address the problem of under-representation of samples with large pose variations, we propose a simple but effective boosting strategy, referred to as pose-based data balancing. In particular, we deal with the data imbalance problem by duplicating the minority training samples and perturbing them with random image rotation, bounding box translation and other data augmentation strategies. Finally, the proposed approach is extended to create a coarse-to-fine framework for robust and efficient landmark localisation, which is also able to deal effectively with the small sample size problem. The experimental results obtained on several well-known benchmarking datasets demonstrate the merits of our RWing loss and prove the superiority of the proposed method over the state-of-the-art approaches.
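The piece-wise loss described in the abstract can be sketched as follows. This is an illustrative NumPy implementation of a rectified wing-style loss, not the authors' released code: a rectified region that zeroes out very small errors (below a threshold `r`), a logarithmic region that amplifies small-medium errors, and a linear region for large errors. The parameter values `w`, `epsilon` and `r` here are hypothetical defaults, not the paper's tuned settings.

```python
import numpy as np

def rwing_loss(x, w=10.0, epsilon=2.0, r=0.5):
    """Sketch of a rectified wing-style loss on landmark regression errors.

    x       : array of signed landmark regression errors
    w       : width of the non-linear (log) region (hypothetical default)
    epsilon : curvature of the log region (hypothetical default)
    r       : rectification threshold below which errors are ignored
    """
    x = np.abs(x)
    # Constant chosen so the log and linear pieces meet continuously at |x| = w
    C = w - w * np.log(1.0 + (w - r) / epsilon)
    return np.where(
        x < r,
        0.0,                                      # rectified region: tiny errors cost nothing
        np.where(
            x < w,
            w * np.log(1.0 + (x - r) / epsilon),  # log region: amplifies small-medium errors
            x - C,                                # linear region: large errors grow linearly
        ),
    )
```

Note the design choice the abstract motivates: unlike L2, whose gradient vanishes near zero, the log region keeps the gradient large for small-medium errors, while the rectified region prevents annotation noise from dominating training.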


2018 ◽  
Vol 40 (12) ◽  
pp. 3067-3074 ◽  
Author(s):  
Yue Wu ◽  
Tal Hassner ◽  
KangGeon Kim ◽  
Gerard Medioni ◽  
Prem Natarajan

Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6847
Author(s):  
Andoni Larumbe-Bergera ◽  
Gonzalo Garde ◽  
Sonia Porta ◽  
Rafael Cabeza ◽  
Arantxa Villanueva

Remote eye tracking technology has seen increasing growth in recent years due to its applicability in many research areas. In this paper, a video-oculography method based on convolutional neural networks (CNNs) for pupil center detection over webcam images is proposed. As the first contribution of this work, and in order to train the model, a manual pupil center labeling procedure has been performed on a facial landmark dataset. The model has been tested over both real and synthetic databases and outperforms state-of-the-art methods, achieving pupil center estimation errors below the size of a constricted pupil in more than 95% of the images, while reducing computing time by a factor of 8. The results show the importance of using high-quality training data and well-known architectures to achieve outstanding performance.


2018 ◽  
Vol 78 (3) ◽  
pp. 3239-3239
Author(s):  
Hyungjoon Kim ◽  
Jisoo Park ◽  
HyeonWoo Kim ◽  
Eenjun Hwang ◽  
Seungmin Rho

2018 ◽  
Vol 78 (3) ◽  
pp. 3221-3238 ◽  
Author(s):  
Hyungjoon Kim ◽  
Jisoo Park ◽  
HyeonWoo Kim ◽  
Eenjun Hwang ◽  
Seungmin Rho
