A Robust Face Recognition Algorithm Based on an Improved Generative Confrontation Network

2021 ◽  
Vol 11 (24) ◽  
pp. 11588
Author(s):  
Huilin Ge ◽  
Yuewei Dai ◽  
Zhiyu Zhu ◽  
Biao Wang

Objective: In practical applications, a face image is often partially occluded, which reduces both the recognition rate and robustness. In response to this situation, an effective face recognition model based on an improved generative adversarial network (GAN) is proposed. Methods: First, a generator composed of an autoencoder, trained adversarially against two discriminators (a local discriminator and a global discriminator), fills in and repairs the occluded face image; a generator alone can only capture the rough shape of the missing facial components or generate wrong pixels, so the two discriminators are used to obtain a clearer and more realistic image. On this basis, a ResNet-50 network performs image restoration on the face, and the recognition framework introduces a classification loss function that quantifies the distance between classes. In experiments, face images under different occlusion conditions are compared before and after the occluded regions are filled, and the recognition rates of different algorithms are compared. Results: The images generated by the proposed method are truly coherent and have little impact on facial expression recognition. When the occlusion area is less than 50%, the overall recognition rate of the model is above 80%, which is close to the recognition rate on non-occluded images. Conclusions: The experimental results show that the proposed method achieves a better restoration effect and a higher recognition rate for face images with different occlusion types and regions. It can therefore be used for face recognition in everyday occlusion environments and achieves good recognition performance.
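The abstract describes a generator trained against both a local and a global discriminator alongside a reconstruction objective. A minimal numpy sketch of how such a combined generator loss might be weighted is shown below; the lambda weights, the L1 reconstruction term, and the non-saturating adversarial form are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def generator_loss(generated, target, d_local_score, d_global_score,
                   lambda_rec=1.0, lambda_local=0.5, lambda_global=0.5):
    """Combine a pixel-wise reconstruction loss with adversarial terms from
    a local (occluded-patch) discriminator and a global (whole-face)
    discriminator. Weights and loss forms are hypothetical."""
    rec = np.mean(np.abs(generated - target))        # L1 reconstruction
    # Non-saturating adversarial terms: push discriminator scores toward 1
    adv_local = -np.log(d_local_score + 1e-8)
    adv_global = -np.log(d_global_score + 1e-8)
    return lambda_rec * rec + lambda_local * adv_local + lambda_global * adv_global
```

With perfect reconstruction and both discriminators fully fooled (scores of 1.0), the loss approaches zero; lower discriminator scores or larger pixel errors increase it.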

2013 ◽  
Vol 380-384 ◽  
pp. 4057-4060
Author(s):  
Lang Guo ◽  
Jian Wang

Analyzing the defects of two-dimensional facial expression recognition algorithms, this paper proposes a new three-dimensional facial expression recognition algorithm. The algorithm is tested on the JAFFE facial expression database. The results show that the proposed algorithm dynamically determines the size of the local neighborhood according to the manifold structure, effectively handles the facial expression recognition problem, and achieves a good recognition rate.
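The abstract's key idea is choosing the local neighborhood size dynamically from the manifold structure rather than fixing k. One plausible heuristic for this is sketched below: grow the neighborhood until the next neighbor is much farther away than the current ones. The stopping rule (a distance-ratio threshold) is a hypothetical stand-in, not the paper's actual criterion.

```python
import numpy as np

def adaptive_neighborhood(points, i, k_min=3, k_max=10, ratio_thresh=2.0):
    """Pick a neighborhood size for point i by growing k until the next
    neighbor is much farther than the current farthest one (a crude
    local-density cue). The ratio test is an illustrative heuristic."""
    d = np.linalg.norm(points - points[i], axis=1)
    order = np.argsort(d)[1:]          # nearest neighbors, excluding i itself
    dists = d[order]
    k = k_min
    while k < min(k_max, len(dists)) and dists[k] < ratio_thresh * dists[k - 1]:
        k += 1
    return order[:k]
```

For a point inside a tight cluster, the neighborhood stops growing before it would jump across the gap to a distant cluster.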


2018 ◽  
Vol 173 ◽  
pp. 03066
Author(s):  
He Binghua ◽  
Chen Zengzhao ◽  
Li Gaoyang ◽  
Jiang Lang ◽  
Zhang Zhao ◽  
...  

To address the unstable recognition performance of 2D facial expression recognition under complex illumination and posture changes, a facial expression recognition algorithm based on RGB-D dynamic sequence analysis is proposed. The algorithm uses LBP features, which are robust to illumination, and adds depth information for facial expression recognition. It first extracts 3D texture features from the preprocessed RGB-D facial expression sequence, and then trains a CNN on the dataset. To verify the performance of the algorithm, a comprehensive facial expression library including 2D images, video, and 3D depth information was constructed with the help of Intel RealSense technology. The experimental results show that the proposed algorithm has advantages over other RGB-D facial expression recognition algorithms in training time and recognition rate, and offers a useful reference for future research in facial expression recognition.
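The LBP features the abstract relies on owe their illumination robustness to encoding only the sign of local gray-level differences, so any monotonic brightness change leaves the codes unchanged. A basic 8-neighbor LBP on a grayscale image can be sketched as follows (the paper extends this to 3D texture features over RGB-D sequences, which is not shown here):

```python
import numpy as np

def lbp_3x3(img):
    """Basic 8-neighbor local binary pattern: each interior pixel is
    encoded by thresholding its 8 neighbors against the center value.
    Adding a constant to the image leaves all codes unchanged, which is
    the illumination robustness the abstract refers to."""
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]                     # interior (center) pixels
    # Clockwise from top-left; bit weights 1, 2, 4, ..., 128
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code
```

A histogram of these codes over face regions then serves as the texture descriptor fed to the classifier.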


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2250
Author(s):  
Leyuan Liu ◽  
Rubin Jiang ◽  
Jiao Huo ◽  
Jingying Chen

Facial expression recognition (FER) is a challenging problem due to the intra-class variation caused by subject identities. In this paper, a self-difference convolutional network (SD-CNN) is proposed to address the intra-class variation issue in FER. First, the SD-CNN uses a conditional generative adversarial network to generate the six typical facial expressions for the same subject in the testing image. Second, six compact and lightweight difference-based CNNs, called DiffNets, are designed for classifying facial expressions. Each DiffNet extracts a pair of deep features from the testing image and one of the six synthesized expression images, and compares the difference between the deep feature pair. In this way, any potential facial expression in the testing image has an opportunity to be compared with the synthesized “Self”—an image of the same subject with the same facial expression as the testing image. As most of the self-difference features of images with the same facial expression gather tightly in the feature space, the intra-class variation issue is significantly alleviated. The proposed SD-CNN is extensively evaluated on two widely used facial expression datasets: CK+ and Oulu-CASIA. Experimental results demonstrate that the SD-CNN achieves state-of-the-art performance with accuracies of 99.7% on CK+ and 91.3% on Oulu-CASIA. Moreover, the model size of the online processing part of the SD-CNN is only 9.54 MB (1.59 MB ×6), which enables the SD-CNN to run on low-cost hardware.
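The self-difference idea above can be sketched in a few lines: compare the test image's feature with the features of the six expressions synthesized for the same subject, and pick the expression whose "Self" is closest. In the actual SD-CNN each difference is fed through a learned DiffNet classifier; the plain difference-norm decision below is a hypothetical simplification.

```python
import numpy as np

EXPRESSIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def classify_by_self_difference(test_feat, synth_feats):
    """Compare the test feature against features of the six expressions
    synthesized for the same subject. The synthesized "Self" with the same
    expression should yield the smallest difference, so we pick the
    expression with the minimal difference norm. (The paper learns this
    decision with six DiffNets; the norm here is a stand-in.)"""
    diffs = [np.linalg.norm(test_feat - f) for f in synth_feats]
    return EXPRESSIONS[int(np.argmin(diffs))]
```

Because the comparison is always against the same identity, subject-specific appearance largely cancels out in the difference, which is what alleviates the intra-class variation.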

