An Efficient Skin Illumination Compensation Model for Efficient Face Detection

Author(s):  
C. N. Ravi Kumar ◽  
A. Bindu
2009 ◽  
Vol 30 (9) ◽  
pp. 856-860 ◽  
Author(s):  
Jae-Ung Yun ◽  
Hyung-Jin Lee ◽  
Anjan Kumar Paul ◽  
Joong-Hwan Baek

2014 ◽  
Vol 945-949 ◽  
pp. 1880-1884
Author(s):  
Hua Zhang ◽  
Li Jia Wang ◽  
Zhen Jie Wang ◽  
Wei Yi Yuan

To overcome illumination changes and pose variations, a pose-invariant face detection method is presented. First, an illumination compensation method based on a reference white is introduced to counter lighting variations; the reference white is derived from the Y component of the YCbCr color space. Then, a mixture face model built from the Cb and Cr components of YCbCr and the H component of HSV is used to extract faces from color images. Finally, an eye model locates the eyes in the candidate face regions, which distinguishes the face from the neck and arms. The method was evaluated on the CASIA face database. The experimental results show that it is robust to pose and illumination variations and achieves good performance.
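The pipeline above (reference-white compensation, then a CbCr + H skin test) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Cb/Cr ranges, the 50° hue cutoff, and the "top 5% brightest pixels as reference white" heuristic are common choices from the skin-detection literature and are assumptions here.

```python
import colorsys

# Illustrative skin thresholds (assumptions, not the paper's values).
CB_RANGE = (77, 127)
CR_RANGE = (133, 173)
H_MAX_DEG = 50.0

def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 conversion of 8-bit RGB to Y'CbCr."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def compensate_reference_white(pixels, top_fraction=0.05):
    """Scale the RGB channels so the brightest pixels map to white.

    `pixels` is a list of (r, g, b) tuples; the top `top_fraction`
    of pixels ranked by luma Y serve as the reference white.
    """
    ranked = sorted(pixels, key=lambda p: rgb_to_ycbcr(*p)[0], reverse=True)
    n_ref = max(1, int(len(ranked) * top_fraction))
    ref = ranked[:n_ref]
    means = [sum(p[c] for p in ref) / n_ref for c in range(3)]
    gains = [255.0 / m if m > 0 else 1.0 for m in means]
    return [tuple(min(255.0, p[c] * gains[c]) for c in range(3)) for p in pixels]

def is_skin(rgb):
    """Classify one compensated pixel with the mixed CbCr + H rule."""
    r, g, b = rgb
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return (CB_RANGE[0] <= cb <= CB_RANGE[1]
            and CR_RANGE[0] <= cr <= CR_RANGE[1]
            and h * 360.0 <= H_MAX_DEG)
```

In a full detector, `is_skin` would be applied to every compensated pixel to produce a binary skin map, on which the eye model then discriminates the face from skin-colored regions such as the neck and arms.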


2003 ◽  
Vol 03 (03) ◽  
pp. 461-479 ◽  
Author(s):  
JUN MIAO ◽  
HONG LIU ◽  
WEN GAO ◽  
HONGMING ZHANG ◽  
GANG DENG ◽  
...  

This paper presents a system for locating human faces and facial features such as the pupils, eyes, nose and mouth. The kernel of the system integrates several algorithms, including a human-face center-of-gravity template and illumination compensation. A false-face removal algorithm is also proposed, specifically to distinguish cartoon faces from real faces. Testing of the system produced good results, with average accuracy rates of 97.8% for face detection and 87.5% for facial feature location.
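The paper does not detail the center-of-gravity template, but the geometric primitive it builds on is straightforward. As a minimal illustration (the binary face-candidate mask and its interpretation are assumptions, not the authors' template):

```python
def center_of_gravity(mask):
    """Centroid (x, y) of the nonzero cells of a 2D 0/1 mask.

    Returns None for an empty mask. In a template-based detector,
    this centroid anchors the face template over a candidate region.
    """
    count = sx = sy = 0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                count += 1
                sx += x
                sy += y
    if count == 0:
        return None
    return sx / count, sy / count
```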


2010 ◽  
Vol 130 (11) ◽  
pp. 2031-2038
Author(s):  
Kohki Abiko ◽  
Hironobu Fukai ◽  
Yasue Mitsukura ◽  
Minoru Fukumi ◽  
Masahiro Tanaka
Keyword(s):  

2020 ◽  
Vol 64 (4) ◽  
pp. 40404-1-40404-16
Author(s):  
I.-J. Ding ◽  
C.-M. Ruan

Abstract With rapid developments in techniques related to the Internet of Things, smart service applications such as voice-command-based speech recognition and smart care applications such as context-aware emotion recognition will gain much attention and potentially become a requirement in smart home or office environments. In such intelligent applications, identity recognition of a specific member in an indoor space is a crucial issue. In this study, a combined audio-visual identity recognition approach was developed. Visual information obtained from face detection is incorporated into acoustic Gaussian likelihood calculations to construct speaker classification trees, significantly enhancing the Gaussian mixture model (GMM)-based speaker recognition method. The design also considers the privacy of the monitored person by reducing the degree of surveillance. The popular Kinect sensor, which contains a microphone array, is adopted to acquire the person's voice data. The proposed approach deploys only two cameras in a specific indoor space to perform face detection and quickly determine the total number of people present; this head count is then used to regulate the design of an accurate GMM speaker classification tree. Two face-detection-regulated speaker classification tree schemes are presented for the GMM speaker recognition method: the binary speaker classification tree (GMM-BT) and the non-binary speaker classification tree (GMM-NBT). The proposed GMM-BT and GMM-NBT methods achieve identity recognition rates of 84.28% and 83%, respectively; both exceed the rate of the conventional GMM approach (80.5%). Moreover, because the computationally expensive face recognition step of typical audio-visual speaker recognition tasks is not required, the proposed approach is fast, adding only 0.051 s to the average recognition time.
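The coupling between the camera-derived head count and the acoustic model can be sketched as follows. This is a simplified illustration, not the GMM-BT/GMM-NBT algorithm: each speaker is modeled by a single diagonal-covariance Gaussian rather than a full GMM, and the face count merely truncates the candidate list before scoring (the speaker names and the pruning order are assumptions).

```python
import math

def diag_gaussian_loglik(x, mean, var):
    """Log-density of feature vector x under a diagonal Gaussian."""
    ll = 0.0
    for xi, mi, vi in zip(x, mean, var):
        ll += -0.5 * (math.log(2.0 * math.pi * vi) + (xi - mi) ** 2 / vi)
    return ll

def recognize(feature, models, n_people_from_faces):
    """Pick the best-scoring speaker among the pruned candidates.

    `models` maps speaker name -> (mean, var). The face-detection
    head count restricts the candidate set before acoustic scoring,
    which is the role the paper assigns to the camera information.
    """
    candidates = list(models.items())[:n_people_from_faces]
    best_name, best_ll = None, -math.inf
    for name, (mean, var) in candidates:
        ll = diag_gaussian_loglik(feature, mean, var)
        if ll > best_ll:
            best_name, best_ll = name, ll
    return best_name
```

The design point this illustrates is the paper's efficiency claim: pruning by head count narrows the acoustic search without running per-frame face recognition, so the visual channel adds almost no cost to the recognition loop.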

