Face Recognition Performance in Facing Pose Variation

Author(s):  
Alexander Agung Santoso Gunawan ◽  
Reza A Prasetyo

There are many real-world applications of face recognition that require good performance in uncontrolled environments, such as social networking and environment surveillance. However, much face recognition research is conducted in controlled situations. Compared to controlled environments, face recognition in uncontrolled environments involves more variation, for example in pose, light intensity, and expression. Therefore, face recognition in uncontrolled conditions is more challenging than in controlled settings. In this research, we discuss handling pose variation in face recognition. We address the representation issue using multi-pose face detection based on the yaw angle of the head, as an extension of existing frontal face recognition using Principal Component Analysis (PCA). The matching issue is then solved using Euclidean distance. This combination is known as the Eigenfaces method. The experiment is run with different yaw angles and different threshold values to find the optimal settings. The experimental results show that: (i) the more pose variation in the face images used as training data, the better the recognition results, although processing time also increases; and (ii) the lower the threshold value, the harder it is to accept a face image as a match, but the higher the accuracy.
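The pipeline this abstract describes — PCA eigenfaces, Euclidean-distance matching, and a rejection threshold — can be sketched in a few lines of NumPy. This is a minimal illustration under our own naming and synthetic data assumptions, not the authors' implementation:

```python
import numpy as np

def train_eigenfaces(faces, n_components=3):
    """Mean face plus the top principal axes (the eigenfaces).
    faces: (n_samples, n_pixels) array of flattened training images."""
    mean = faces.mean(axis=0)
    # SVD of the centered data yields the principal axes directly.
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, eigenfaces):
    """Coordinates of a face in eigenface space."""
    return eigenfaces @ (face - mean)

def recognize(face, mean, eigenfaces, gallery, labels, threshold=np.inf):
    """Nearest neighbour by Euclidean distance; reject the probe if the
    best match is farther than the threshold (lower = stricter)."""
    q = project(face, mean, eigenfaces)
    dists = np.linalg.norm(gallery - q, axis=1)
    best = int(np.argmin(dists))
    return labels[best] if dists[best] <= threshold else None
```

A lower `threshold` rejects more probes, trading recall for accuracy, which mirrors the trade-off reported in the experiments.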

2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Tai-Xiang Jiang ◽  
Ting-Zhu Huang ◽  
Xi-Le Zhao ◽  
Tian-Hui Ma

We propose a patch-based principal component analysis (PCA) method for face recognition. Many PCA-based face recognition methods utilize the correlation between pixels, columns, or rows, but the local spatial information is not fully exploited. We believe that patches are more meaningful basic units for face recognition than pixels, columns, or rows, since faces are discerned by patches containing the eyes and nose. To calculate the correlation between patches, face images are divided into patches, and these patches are converted to column vectors that are combined into a new “image matrix.” By replacing the images with this new “image matrix” in the two-dimensional PCA framework, we directly calculate the correlation of the divided patches by computing the total scatter. By optimizing the total scatter of the projected samples, we obtain the projection matrix for feature extraction. Finally, we use the nearest neighbor classifier. Extensive experiments on the ORL and FERET face databases are reported to illustrate the performance of the patch-based PCA. Our method improves accuracy compared to one-dimensional PCA, two-dimensional PCA, and two-directional two-dimensional PCA.
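The patch-to-column construction is easy to make concrete. A minimal NumPy sketch under our own naming (non-overlapping patches; total scatter maximized via an eigendecomposition):

```python
import numpy as np

def to_patch_matrix(img, ph, pw):
    """Divide an image into non-overlapping ph x pw patches and stack
    each vectorized patch as a column of the new 'image matrix'."""
    h, w = img.shape
    cols = [img[i:i + ph, j:j + pw].ravel()
            for i in range(0, h - ph + 1, ph)
            for j in range(0, w - pw + 1, pw)]
    return np.stack(cols, axis=1)               # (ph*pw, n_patches)

def patch_2dpca(images, ph, pw, k):
    """Plug the patch matrices into the 2D-PCA framework: compute the
    total scatter over patch columns and keep the top-k eigenvectors
    as the projection matrix."""
    mats = [to_patch_matrix(im, ph, pw) for im in images]
    mean = np.mean(mats, axis=0)
    scatter = sum((m - mean).T @ (m - mean) for m in mats) / len(mats)
    vals, vecs = np.linalg.eigh(scatter)
    return vecs[:, np.argsort(vals)[::-1][:k]]  # (n_patches, k)
```

Features for the nearest neighbor classifier would then be `to_patch_matrix(img, ph, pw) @ W`.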


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Muhammad Sajid ◽  
Nouman Ali ◽  
Saadat Hanif Dar ◽  
Naeem Iqbal Ratyal ◽  
Asif Raza Butt ◽  
...  

Recently, face datasets containing celebrity photos with facial makeup have been growing at an exponential rate, making their recognition very challenging. Existing face recognition methods rely on feature extraction and reference reranking to improve performance. However, face images with facial makeup carry inherent ambiguity due to artificial colors, shading, contouring, and varying skin tones, making the recognition task more difficult. The problem is further confounded because makeup alters the bilateral size and symmetry of certain face components, such as the eyes and lips, affecting the distinctiveness of faces. The ambiguity becomes even worse when celebrities wear different facial makeup on different days, owing to interpersonal context and current societal makeup trends. To cope with these artificial effects, we propose to use a deep convolutional neural network (dCNN) trained on an augmented face dataset to extract discriminative features from face images containing synthetic makeup variations. The augmented dataset, containing original face images and those with synthetic makeup variations, allows the dCNN to learn face features under a variety of facial makeup. We also evaluate the role of partial and full makeup in face images in improving recognition performance. The experimental results on two challenging face datasets show that the proposed approach can compete with the state of the art.
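The augmentation step can be illustrated with a toy synthetic-makeup routine that tints fixed face regions. This is a crude stand-in of our own design (the region coordinates and tint colours are invented), not the authors' synthesis method:

```python
import numpy as np

def synthetic_makeup(img, region, tint, alpha=0.5):
    """Blend a tint colour into a rectangular face region (a toy stand-in
    for lipstick/eye-shadow synthesis). img: (H, W, 3) floats in [0, 1]."""
    out = img.copy()
    y0, y1, x0, x1 = region
    out[y0:y1, x0:x1] = (1 - alpha) * out[y0:y1, x0:x1] + alpha * np.asarray(tint)
    return out

def augment_dataset(images, regions, tints, rng):
    """Pair each original image with a randomly tinted copy, doubling the
    training set so the network sees both makeup states of each face."""
    aug = []
    for img in images:
        aug.append(img)
        region = regions[rng.integers(len(regions))]
        tint = tints[rng.integers(len(tints))]
        aug.append(synthetic_makeup(img, region, tint))
    return aug
```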


2013 ◽  
Vol 8 (2) ◽  
pp. 787-795
Author(s):  
Sasi Kumar Balasundaram ◽  
J. Umadevi ◽  
B. Sankara Gomathi

This paper aims to achieve the best color face recognition performance. The newly introduced feature selection method takes advantage of a novel boosting-based learning scheme to find the optimal set of color-component features for achieving the best face recognition result. The proposed color face recognition method consists of two parts: color-component feature selection with boosting, and a color face recognition solution using the selected color-component features. This method outperforms existing color face recognition methods on face images with illumination variation, pose variation, and low resolution. The system selects the best color-component features from various color models using the novel boosting learning framework. The selected color-component features are then combined into a single concatenated color feature using weighted feature fusion. The effectiveness of the color face recognition method has been evaluated on public face databases.
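The select-then-fuse structure can be sketched as follows. For brevity we replace the boosting learner with a simple Fisher-style two-class separability score — an assumption of ours, not the paper's criterion — and keep the weighted concatenation:

```python
import numpy as np

def channel_score(feats, labels):
    """Fisher-style score for a two-class problem: between-class distance
    over within-class spread (a simple stand-in for the boosting-based
    selection criterion)."""
    classes = np.unique(labels)
    means = np.array([feats[labels == c].mean(axis=0) for c in classes])
    within = np.mean([feats[labels == c].std(axis=0).mean() for c in classes])
    between = np.linalg.norm(means[0] - means[1])
    return between / (within + 1e-12)

def select_and_fuse(channels, labels, k=2):
    """Rank colour-component channels by score, keep the top-k, and
    concatenate them weighted by their normalized scores."""
    scores = np.array([channel_score(f, labels) for f in channels])
    top = np.argsort(scores)[::-1][:k]
    w = scores[top] / scores[top].sum()
    fused = np.hstack([w[i] * channels[c] for i, c in enumerate(top)])
    return fused, top
```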


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Tongxin Wei ◽  
Qingbao Li ◽  
Jinjin Liu ◽  
Ping Zhang ◽  
Zhifeng Chen

In the process of face recognition, acquired face data is often seriously distorted: many collected face images are blurred or even have missing regions. Traditional image inpainting is structure-based, while currently popular inpainting methods are based on deep convolutional neural networks and generative adversarial nets. In this paper, we propose a 3D face image inpainting method based on generative adversarial nets. We identify two parallels of the vector to locate the planar positions. Compared with previous approaches, the edge information of the missing region is detected, and edge-aware fuzzy inpainting achieves a better visual match. The method dramatically boosts face recognition performance.


2013 ◽  
Vol 2013 ◽  
pp. 1-9
Author(s):  
Zhifei Wang ◽  
Zhenjiang Miao ◽  
Yanli Wan ◽  
Zhen Tang

Low resolution (LR) in face recognition (FR) surveillance applications causes a dimensional mismatch between an LR image and its high-resolution (HR) template. In this paper, a novel method called kernel coupled cross-regression (KCCR) is proposed to deal with this problem. Instead of processing directly in the original observation space, KCCR projects LR and HR face images into a unified nonlinear embedding feature space using kernel coupled mappings and graph embedding. Spectral regression is further employed to improve the generalization performance and reduce the time complexity. Meanwhile, cross-regression is developed to fully exploit the HR embedding to enrich the LR space, thereby improving recognition performance. Experiments on the FERET and CMU PIE face databases show that KCCR outperforms existing structure-based methods in terms of recognition rate as well as time complexity.
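The core idea of coupled mappings — projecting LR and HR features into one common space — can be sketched with a CCA-flavoured linear core based on the SVD of the cross-covariance. This is our own simplification: the kernel mapping, graph embedding, spectral regression, and cross-regression of KCCR are all omitted here:

```python
import numpy as np

def coupled_mapping(lr, hr, k=2):
    """Learn a pair of projections that place centered LR and HR
    features into a shared k-dimensional space, via the SVD of their
    cross-covariance (a linear stand-in for KCCR's coupled mappings)."""
    lr_c = lr - lr.mean(axis=0)
    hr_c = hr - hr.mean(axis=0)
    # Left/right singular vectors give maximally cross-correlated axes.
    u, _, vt = np.linalg.svd(lr_c.T @ hr_c, full_matrices=False)
    return u[:, :k], vt[:k].T        # P_lr: (d_lr, k), P_hr: (d_hr, k)
```

A probe would be matched by projecting the LR image with `P_lr`, the HR gallery with `P_hr`, and comparing in the shared space.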


2015 ◽  
Vol 2015 ◽  
pp. 1-7
Author(s):  
Rong Wang

In real-world applications, face images vary with illumination, facial expression, and pose. More training samples can reveal more of the possible appearances of a face. Though minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from a limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We generate the mirror faces from the original training samples and combine both kinds of samples into a new training set. The face recognition experiments show that our method achieves high classification accuracy.
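Mirror-face augmentation plus a minimum squared error classifier fits in a short NumPy sketch. The ridge-regularized one-hot regression below is a common MSE classifier formulation, used here under our own naming as an illustration rather than as the authors' exact variant:

```python
import numpy as np

def add_mirror_faces(images, labels):
    """Augment the training set with horizontally mirrored copies
    (the 'virtual training samples' of the abstract)."""
    mirrored = [img[:, ::-1] for img in images]
    return images + mirrored, labels + labels

def msec_fit(X, y, n_classes, lam=1e-3):
    """Minimum squared error classifier: regress one-hot class targets
    on flattened faces, with a small ridge term for stability."""
    T = np.eye(n_classes)[y]
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ T)
    return W

def msec_predict(X, W):
    """Assign each sample to the class with the largest regressed score."""
    return np.argmax(X @ W, axis=1)
```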


2015 ◽  
Vol 734 ◽  
pp. 562-567 ◽  
Author(s):  
En Zeng Dong ◽  
Yan Hong Fu ◽  
Ji Gang Tong

This paper proposes a theoretically efficient approach for face recognition based on principal component analysis (PCA) and rotation-invariant uniform local binary pattern (LBP) texture features, in order to weaken the effects of varying illumination conditions and facial expressions. Firstly, the rotation-invariant uniform LBP operator is adopted to extract the local texture features of the face images. Then PCA is used to reduce the dimensionality of the extracted features and obtain the eigenfaces. Finally, nearest-distance classification is used to distinguish each face. The method has been assessed on the Yale and ATR-Jaffe face databases. Results demonstrate that the proposed method achieves a higher recognition rate than standard PCA and has strong robustness against illumination changes, pose, rotation, and expression.
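The rotation-invariant uniform LBP operator (often written LBP^riu2) can be implemented directly for the 8-neighbour, radius-1 case: uniform patterns (at most two 0/1 transitions around the ring) map to their count of ones, all others to a single bin. A dependency-free NumPy sketch of this standard operator:

```python
import numpy as np

def lbp_riu2(img):
    """Rotation-invariant uniform LBP, 8 neighbours at radius 1.
    Uniform patterns map to their bit count 0..8; the rest map to 9."""
    c = img[1:-1, 1:-1]
    # Circular ring of the 8 neighbours, in clockwise order.
    neigh = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:], img[1:-1, 2:],
             img[2:, 2:], img[2:, 1:-1], img[2:, 0:-2], img[1:-1, 0:-2]]
    bits = np.stack([(n >= c).astype(int) for n in neigh])   # (8, H-2, W-2)
    # Number of 0/1 transitions around the circular pattern.
    trans = np.abs(bits - np.roll(bits, 1, axis=0)).sum(axis=0)
    ones = bits.sum(axis=0)
    return np.where(trans <= 2, ones, 9)

def lbp_histogram(img):
    """Normalized 10-bin histogram of riu2 codes (the texture feature
    that would then be fed to PCA for dimensionality reduction)."""
    codes = lbp_riu2(img)
    return np.bincount(codes.ravel(), minlength=10) / codes.size
```

Because each code is invariant to cyclic shifts of the neighbour ring, the histogram is unchanged by a 90° rotation of the image.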


Author(s):  
Zhonghua Liu ◽  
Jiexin Pu ◽  
Yong Qiu ◽  
Moli Zhang ◽  
Xiaoli Zhang ◽  
...  

Sparse representation has become a popular technique in recent years. The two-phase test sample sparse representation method (TPTSSR) achieves excellent performance in face recognition. In this paper, a kernel two-phase test sample sparse representation method (KTPTSSR) is proposed. Firstly, the input data are mapped into an implicit high-dimensional feature space by a nonlinear mapping function. Secondly, the data are analyzed by means of the TPTSSR method in that feature space. If an appropriate kernel function and corresponding kernel parameter are selected, a test sample can be accurately represented as a linear combination of the training data sharing the test sample's label. Therefore, the proposed method can achieve better recognition performance than TPTSSR. Experiments on face databases demonstrate the effectiveness of our methods.
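The two-phase structure in a kernel space can be sketched roughly as follows. This is a simplified reading of ours: the representation is solved through the Gram matrix with an RBF kernel, the second phase re-represents the test sample over the top-m training samples, and the class decision uses coefficient magnitudes rather than the per-class reconstruction residual of the original TPTSSR:

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """RBF (Gaussian) kernel matrix between rows of X and Y."""
    d = ((X[:, None] - Y[None]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def ktptssr(train, labels, test, m=4, gamma=1.0, lam=1e-2):
    """Simplified kernel two-phase representation.
    Phase 1: represent the test sample over all training samples in the
    kernel-induced space. Phase 2: re-represent it over the m samples
    with the largest coefficients and pick the dominant class."""
    K = rbf(train, train, gamma)                 # Gram matrix
    k = rbf(train, test[None], gamma)[:, 0]
    a = np.linalg.solve(K + lam * np.eye(len(train)), k)   # phase 1
    top = np.argsort(np.abs(a))[::-1][:m]
    K2 = K[np.ix_(top, top)]
    b = np.linalg.solve(K2 + lam * np.eye(m), k[top])      # phase 2
    classes = np.unique(labels[top])
    contrib = {c: np.abs(b[labels[top] == c]).sum() for c in classes}
    return max(contrib, key=contrib.get)
```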


2020 ◽  
Vol 3 (2) ◽  
pp. 222-235
Author(s):  
Vivian Nwaocha ◽  
◽  
Ayodele Oloyede ◽  
Deborah Ogunlana ◽  
Michael Adegoke ◽  
...  

Face images undergo a considerable amount of variation in pose, facial expression, and illumination condition. This large variation in the facial appearance of the same individual leaves most existing face recognition systems (E-FRS) with weak discrimination ability and makes them time-inefficient for face representation, due to the holistic feature extraction techniques they use. In this paper, a novel face recognition framework is proposed that extends standard Principal Component Analysis (PCA) and Independent Component Analysis (ICA) to their two-dimensional counterparts, denoted two-dimensional Principal Component Analysis (2D-PCA) and two-dimensional Independent Component Analysis (2D-ICA), respectively. The choice of 2D is advantageous, as the image covariance matrix can be constructed directly from the original image matrices. The face images used in this study were acquired from the publicly available ORL and AR face databases. The features belonging to similar classes were grouped and their correlation calculated in the same order. Each technique was decomposed into different components by employing multi-dimensional grouped empirical mode decomposition using a Gaussian function. The nearest neighbor (NN) classifier is used for classification. The evaluation showed that on the ORL database, 2D-PCA produced a recognition accuracy (RA) of 92.5%, PCA 75.00%, ICA 77.5%, and 2D-ICA 96.00%; on the AR database, 2D-PCA produced an RA of 73.56%, PCA 62.41%, ICA 66.20%, and 2D-ICA 77.45%. This study revealed that the developed face recognition framework achieves improvements of 18.5% and 11.25% on the ORL and AR databases respectively over the PCA and ICA feature extraction techniques. Keywords: computer vision, dimensionality reduction techniques, face recognition, pattern recognition
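The 2D-PCA component — building the image covariance matrix directly from the original image matrices, then classifying projected feature matrices with a nearest neighbor rule — can be sketched as below. This shows only the standard 2D-PCA/NN core, under our own naming; the ICA and empirical mode decomposition parts of the framework are not reproduced:

```python
import numpy as np

def twod_pca(images, k):
    """2D-PCA: the image covariance matrix is built directly from the
    2-D image matrices (no vectorization); keep the top-k eigenvectors
    as projection axes."""
    mean = np.mean(images, axis=0)
    G = sum((im - mean).T @ (im - mean) for im in images) / len(images)
    vals, vecs = np.linalg.eigh(G)
    return vecs[:, np.argsort(vals)[::-1][:k]]   # (width, k)

def nn_classify(probe, gallery_feats, labels, W):
    """Nearest neighbor over projected feature matrices (Frobenius norm)."""
    f = probe @ W
    d = [np.linalg.norm(f - g) for g in gallery_feats]
    return labels[int(np.argmin(d))]
```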


2020 ◽  
Author(s):  
Bilal Salih Abed Alhayani ◽  
Milind Rane

A wide variety of systems require reliable person recognition schemes to either confirm or determine the identity of an individual requesting their services. The purpose of such schemes is to ensure that only a legitimate user, and no one else, accesses the rendered services. Examples of such applications include secure access to buildings, computer systems, laptops, cellular phones, and ATMs. The face can be used as a biometric for person verification. A face is a complex multidimensional structure and needs good computing techniques for recognition. We treat face recognition as a two-dimensional recognition problem. The well-known technique of Principal Component Analysis (PCA) is used for face recognition. Face images are projected onto a face space that best encodes the variation among known face images. The face space is defined by the eigenfaces, which are the eigenvectors of the set of faces; these may not correspond to general facial features such as the eyes, nose, and lips. The system works by projecting a pre-extracted face image onto a set of face-space axes that represent significant variation among known face images. The variable-reduction property of PCA accounts for the face space being smaller than the training set of faces. A multiresolution feature-based pattern recognition system is also used for face recognition, based on the combination of Radon and wavelet transforms: the Radon transform is invariant to rotation, and the wavelet transform provides multiple resolutions. This technique is robust for face recognition. It computes Radon projections in different orientations and captures the directional features of face images. Further, the wavelet transform applied to the Radon space provides multiresolution features of the facial images. Being a line integral, the Radon transform improves the low-frequency components that are useful in face recognition.
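The Radon-plus-wavelet feature idea can be sketched without external dependencies. The sketch below is limited, by our own choice, to axis-aligned projection angles (0° and 90°, via simple column/row sums) and a single-level Haar wavelet; a real implementation would compute projections at many orientations:

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal 1-D Haar wavelet transform:
    returns (approximation, detail) coefficients."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def radon_wavelet_features(img, angles=(0, 90)):
    """Directional features: a crude Radon projection (column sums of
    the suitably rotated image) at each angle, decomposed by the Haar
    wavelet; all coefficients are concatenated into one vector."""
    feats = []
    for ang in angles:
        proj = img.sum(axis=0) if ang == 0 else np.rot90(img, ang // 90).sum(axis=0)
        a, d = haar_dwt(proj)
        feats.extend([a, d])
    return np.concatenate(feats)
```

The Haar step is orthonormal, so each projection's energy is preserved across the (approximation, detail) split, matching the claim that the wavelet stage adds resolution levels without discarding the Radon information.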

