Classification of Face Images for Gender, Age, Facial Expression, and Identity

Author(s): Torsten Wilhelm, Hans-Joachim Böhme, Horst-Michael Gross
Sensors, 2021, Vol. 21 (6), p. 2003
Author(s): Xiaoliang Zhu, Shihao Ye, Liang Zhao, Zhicheng Dai

The AFEW (Acted Facial Expressions in the Wild) dataset, the basis of a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), is a popular benchmark for emotion recognition under real-world constraints, including uneven illumination, head deflection, and varying facial posture. In this paper, we propose a convenient cascade network for facial expression recognition comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, faces are detected in each frame of a video sequence, and the corresponding face ROI (region of interest) is extracted to obtain face images; the face images in each frame are then aligned using the positions of facial landmark points. Second, the aligned face images are fed into a residual neural network to extract the spatial features of the facial expressions, and these spatial features pass through the hybrid attention module to produce fused expression features. Finally, the fused features are fed into a gated recurrent unit (GRU) to extract the temporal features of the facial expressions, and a fully connected layer classifies the expressions. Experiments on the CK+ (Extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences), and AFEW datasets yield recognition accuracies of 98.46%, 87.31%, and 53.44%, respectively. The proposed method thus achieves not only performance competitive with state-of-the-art methods but also a more than 2% improvement on AFEW, demonstrating its advantage for facial expression recognition in unconstrained environments.
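The temporal stage of the pipeline above (per-frame spatial features aggregated by a GRU, then classified by a fully connected layer) can be sketched in NumPy. The feature dimensions, random weights, and seven-class output are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: gates decide how much of the previous hidden
    state to keep versus overwrite with the current frame's features."""
    z = sigmoid(x @ Wz + h @ Uz)               # update gate
    r = sigmoid(x @ Wr + h @ Ur)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)   # candidate state
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
T, d_in, d_h = 16, 512, 128                    # frames, feature dim, hidden dim (assumed)
frames = rng.standard_normal((T, d_in))        # per-frame spatial features (e.g. from a ResNet)
params = [rng.standard_normal(s) * 0.05        # Wz, Uz, Wr, Ur, Wh, Uh
          for s in [(d_in, d_h), (d_h, d_h)] * 3]

h = np.zeros(d_h)
for x in frames:                               # run the GRU over the frame sequence
    h = gru_step(h, x, *params)

logits = h @ rng.standard_normal((d_h, 7))     # FC layer over 7 expression classes
print(logits.shape)                            # (7,)
```

The final hidden state summarizes the whole clip, which is why a single FC layer over it suffices for per-video classification.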


2017, Vol. 6 (2), pp. 58-69
Author(s): Dao Nam Anh, Trinh Minh Duc

This article describes how facial expression detection and adjustment, a complex psychological aspect of vision, is central to a number of visual and cognitive computing applications. It presents an algorithm for automatically estimating the happiness expression of face images whose demographic attributes, such as race, gender, and eye direction, may vary. The method also extends to altering the level of happiness expression in a face image, and a weighted-modification scheme is proposed for enhancing the happiness expression. The authors employ a robust face representation that combines color patch similarity with the self-resemblance of image patches. A large set of face images exhibiting these properties is used to learn a statistical model for interpreting the facial expression of happiness. The authors present experiments with this model, using face features learned by an SVM, and analyze its performance.
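The self-resemblance cue mentioned above can be illustrated with a minimal NumPy sketch: each image patch is scored by its similarity to the other patches of the same face. The patch size, cosine similarity measure, and mean pooling are assumptions for illustration, not the authors' exact descriptor:

```python
import numpy as np

def patch_self_resemblance(img, patch=8):
    """Split a grayscale image into non-overlapping patches and score
    each patch by its mean cosine similarity to all other patches."""
    h, w = img.shape
    ph, pw = h // patch, w // patch
    patches = (img[:ph * patch, :pw * patch]
               .reshape(ph, patch, pw, patch)
               .transpose(0, 2, 1, 3)
               .reshape(ph * pw, patch * patch))
    unit = patches / (np.linalg.norm(patches, axis=1, keepdims=True) + 1e-8)
    sim = unit @ unit.T                        # pairwise cosine similarities
    np.fill_diagonal(sim, 0.0)                 # exclude self-comparison
    return sim.sum(axis=1) / (len(patches) - 1)

rng = np.random.default_rng(1)
face = rng.random((64, 64))                    # stand-in for an aligned face image
desc = patch_self_resemblance(face)
print(desc.shape)                              # (64,) one score per 8x8 patch
```

A descriptor like this could then be concatenated with color-patch features and passed to an SVM, in the spirit of the representation the article describes.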


2020, Vol. ahead-of-print
Author(s): Basma Abd El-Rahiem, Ahmed Sedik, Ghada M. El Banby, Hani M. Ibrahem, Mohamed Amin, ...

Purpose: The objective of this paper is to perform infrared (IR) face recognition efficiently with convolutional neural networks (CNNs). The proposed model has several advantages, such as automatic feature extraction through convolutional and pooling layers and the ability to distinguish between faces without visual details.
Design/methodology/approach: A model comprising five convolutional layers in addition to five max-pooling layers is introduced for the recognition of IR faces.
Findings: The experimental results and analysis reveal high recognition rates for IR faces with the proposed model.
Originality/value: A designed CNN model is presented for IR face recognition. Both the feature extraction and classification tasks are incorporated into this model, which overcomes the problems of low contrast and absence of details in IR images. The recognition accuracy reaches 100% in experiments on the Terravic Facial IR Database (TFIRDB).
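The five conv / five max-pool design above can be sanity-checked with a quick spatial-size trace. The 128×128 input, 'same'-padded convolutions, and 2×2 pooling are assumptions, since the abstract specifies only the layer counts:

```python
def feature_map_sizes(size, n_blocks=5, pool=2):
    """Trace the spatial size through alternating conv / max-pool blocks.
    A 'same'-padded convolution keeps the spatial size; each 2x2
    max-pool halves it (integer division)."""
    sizes = [size]
    for _ in range(n_blocks):
        size = size // pool   # conv leaves size unchanged; pool halves it
        sizes.append(size)
    return sizes

print(feature_map_sizes(128))  # [128, 64, 32, 16, 8, 4]
```

After five blocks a 128×128 IR face shrinks to a 4×4 feature map, small enough to flatten into a fully connected classifier head.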

