Multiparameter Space Decision Voting and Fusion Features for Facial Expression Recognition

2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Yan Wang ◽  
Ming Li ◽  
Xing Wan ◽  
Congxuan Zhang ◽  
Yue Wang

Developing an effective facial expression recognition (FER) method remains a research hotspot in artificial intelligence. In this paper, we propose a multiparameter fusion feature space and decision-voting-based classification for facial expression recognition. First, the parameters of the fusion feature space are determined according to the cross-validation recognition accuracy of the Multiscale Block Local Binary Pattern Uniform Histogram (MB-LBPUH) descriptor over the training samples. Based on these parameters, we build various fusion feature spaces by employing multiclass linear discriminant analysis (LDA). In these spaces, fusion features composed of MB-LBPUH and Histogram of Oriented Gradients (HOG) features represent the different facial expressions. Finally, to resolve the hard-to-classify patterns caused by similar expression classes, a nearest-neighbor-based decision-voting strategy predicts the classification results. In experiments on the JAFFE, CK+, and TFEID datasets, the proposed model clearly outperformed existing algorithms.
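The abstract's fusion feature (MB-LBPUH concatenated with HOG) can be sketched in plain numpy. This is a minimal illustration, not the paper's implementation: the block size, cell size, and the 48x48 input are assumptions, and the multiclass-LDA projection and nearest-neighbor decision voting described above would be applied on top of this vector.

```python
import numpy as np

def mb_lbp_uniform_hist(img, block=3):
    """Multiscale Block LBP: average each block x block cell, apply an
    8-neighbour LBP to the cell-average image, and pool the codes into a
    uniform-pattern histogram (58 uniform bins + 1 non-uniform bin)."""
    h, w = img.shape
    h, w = h - h % block, w - w % block
    cells = img[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    centre = cells[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(centre.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        nb = cells[1 + dy:cells.shape[0] - 1 + dy, 1 + dx:cells.shape[1] - 1 + dx]
        codes |= (nb >= centre).astype(int) << bit
    def transitions(p):  # circular 0/1 transitions in the 8-bit code
        bits = [(p >> i) & 1 for i in range(8)]
        return sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    uniform = [p for p in range(256) if transitions(p) <= 2]  # 58 patterns
    index = {p: i for i, p in enumerate(uniform)}
    hist = np.zeros(len(uniform) + 1)
    for p in codes.ravel():
        hist[index.get(p, len(uniform))] += 1
    return hist / max(hist.sum(), 1.0)

def hog_features(img, cell=8, bins=9):
    """Plain HOG: per-cell gradient-orientation histograms, L2-normalised."""
    gy, gx = np.gradient(img.astype(float))
    mag, ang = np.hypot(gx, gy), np.rad2deg(np.arctan2(gy, gx)) % 180
    feats = []
    for y in range(0, img.shape[0] - cell + 1, cell):
        for x in range(0, img.shape[1] - cell + 1, cell):
            hist, _ = np.histogram(ang[y:y + cell, x:x + cell], bins=bins,
                                   range=(0, 180), weights=mag[y:y + cell, x:x + cell])
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))
    return np.concatenate(feats)

# fusion feature for one (synthetic stand-in) 48x48 face image
rng = np.random.default_rng(0)
face = rng.random((48, 48))
fusion = np.concatenate([mb_lbp_uniform_hist(face), hog_features(face)])
print(fusion.shape)  # 59 MB-LBPUH bins + 6*6*9 HOG bins = (383,)
```

In the paper the MB-LBPUH block parameter itself is chosen by cross-validated recognition accuracy, so this extraction would be repeated over several block sizes to build the different fusion feature spaces.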

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2003 ◽  
Author(s):  
Xiaoliang Zhu ◽  
Shihao Ye ◽  
Liang Zhao ◽  
Zhicheng Dai

The AFEW (Acted Facial Expressions in the Wild) dataset, a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), is a popular benchmark for emotion recognition under real-world constraints, including uneven illumination, head deflection, and varying facial posture. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, faces are detected in each frame of a video sequence, and the corresponding face ROI (region of interest) is extracted to obtain the face images; the face images in each frame are then aligned based on the positions of the facial feature points. Second, the aligned face images are fed into a residual neural network to extract the spatial features of the facial expressions, and these spatial features are passed to the hybrid attention module to obtain fusion features. Finally, the fusion features are fed into a gated recurrent unit (GRU) to extract temporal features, which a fully connected layer classifies into facial expressions. Experiments on the CK+ (extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences), and AFEW datasets yielded recognition accuracies of 98.46%, 87.31%, and 53.44%, respectively, demonstrating that the proposed method not only achieves performance competitive with state-of-the-art methods but also improves accuracy on AFEW by more than 2%, confirming its effectiveness for facial expression recognition in natural environments.
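The temporal stage of this cascade is a gated recurrent unit rolled over the per-frame fusion features. As a minimal numpy sketch (the face detector, ResNet, and hybrid attention module are replaced by random stand-in feature vectors, and all dimensions are assumptions), one GRU step computes an update gate, a reset gate, and a gated interpolation of the hidden state:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU: update gate z, reset gate r, candidate state h_tilde."""
    def __init__(self, d_in, d_hid, rng):
        s = 1.0 / np.sqrt(d_hid)
        def mat(rows, cols):
            return rng.uniform(-s, s, (rows, cols))
        self.Wz, self.Uz = mat(d_hid, d_in), mat(d_hid, d_hid)
        self.Wr, self.Ur = mat(d_hid, d_in), mat(d_hid, d_hid)
        self.Wh, self.Uh = mat(d_hid, d_in), mat(d_hid, d_hid)

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)           # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h)           # reset gate
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h))
        return (1 - z) * h + z * h_tilde                 # gated interpolation

# roll the GRU over T per-frame fusion features (random stand-ins here)
rng = np.random.default_rng(0)
T, d_in, d_hid = 16, 512, 128
cell = GRUCell(d_in, d_hid, rng)
h = np.zeros(d_hid)
for x in rng.standard_normal((T, d_in)):
    h = cell.step(x, h)
print(h.shape)  # (128,) temporal feature passed to the fully connected classifier
```

The final hidden state `h` plays the role of the sequence-level temporal feature that the fully connected layer classifies; a trained network would of course learn these weight matrices rather than sample them randomly.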


2018 ◽  
Vol 27 (08) ◽  
pp. 1850121 ◽  
Author(s):  
Zhe Sun ◽  
Zheng-Ping Hu ◽  
Raymond Chiong ◽  
Meng Wang ◽  
Wei He

Recent research has demonstrated the effectiveness of deep subspace learning networks, including the principal component analysis network (PCANet) and linear discriminant analysis network (LDANet), since they can extract high-level features and better represent the abstract semantics of given data. However, their representation does not consider the nonlinear relationships in data and limits the use of features with nonlinear metrics. In this paper, we propose a novel architecture combining kernel collaborative representation with deep subspace learning based on the PCANet and LDANet for facial expression recognition. First, the PCANet and LDANet are employed to learn abstract features. These features are then mapped to a kernel space to effectively capture their nonlinear similarities. Finally, we develop a simple yet effective classification method with squared ℓ2-regularization, which improves recognition accuracy and reduces time complexity. Comprehensive experimental results on the JAFFE, CK+, KDEF, and CMU Multi-PIE datasets confirm that the proposed approach is superior not only in accuracy but also in robustness to block occlusion and varying parameter configurations.
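The classifier described above, kernel collaborative representation with squared ℓ2-regularization, has a closed-form ridge solution in kernel space and assigns a query to the class whose training samples reconstruct it with the smallest kernel-space residual. A sketch under assumptions (RBF kernel, toy 2-D features standing in for PCANet/LDANet outputs, arbitrary λ and γ):

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """RBF kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kcrc_predict(X, labels, q, lam=1e-2, gamma=0.5):
    """Kernel collaborative representation classification:
    solve (K + lam*I) a = k(X, q), then pick the class whose coefficients
    give the smallest kernel-space reconstruction residual."""
    K = rbf(X, X, gamma)
    k_q = rbf(X, q[None, :], gamma).ravel()
    a = np.linalg.solve(K + lam * np.eye(len(X)), k_q)  # closed-form ridge
    best, best_r = None, np.inf
    for c in np.unique(labels):
        idx = labels == c
        ac = a[idx]
        # ||phi(q) - Phi_c a_c||^2 expanded via the kernel trick (k(q,q)=1 for RBF)
        r = 1.0 - 2 * ac @ k_q[idx] + ac @ K[np.ix_(idx, idx)] @ ac
        if r < best_r:
            best, best_r = c, r
    return best

# toy demo: two well-separated 2-D clusters standing in for deep features
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (10, 2)), rng.normal(4.0, 0.3, (10, 2))])
labels = np.array([0] * 10 + [1] * 10)
print(kcrc_predict(X, labels, np.array([0.1, -0.1])))  # -> 0
```

Because the ridge system has a closed-form solution, the per-query cost is a single linear solve rather than an iterative sparse-coding loop, which is consistent with the reduced time complexity claimed in the abstract.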


2013 ◽  
Vol 427-429 ◽  
pp. 1963-1967 ◽  
Author(s):  
Shu Yi Wang ◽  
Jing Ling Wang ◽  
Chuan Zhen Li

This paper presents a facial expression recognition algorithm based on multi-channel integration of Gabor features. First, a Gabor wavelet filter bank extracts facial features at 5 scales and 8 orientations, and the resulting 40 channels are transformed into 13 channels according to the maximum rule presented in this paper. Second, the dimensionality of the expression features is reduced by PCA+LDA. Finally, the expression features are classified with the nearest neighbor method. Experiments on two databases show that the proposed algorithm recognizes facial expressions at a high rate.
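The 5-scale, 8-orientation Gabor bank can be sketched in numpy. The paper's exact "maximum rule" is not reproduced here; one plausible reading that yields exactly 13 channels (a per-pixel max over the 8 orientations at each scale, plus a max over the 5 scales at each orientation, 5 + 8 = 13) is shown as an assumption, as are the wavelengths, the sigma/wavelength ratio, and the kernel size:

```python
import numpy as np

def gabor_kernel(lambd, theta, size=15, gamma=0.5):
    """Real Gabor kernel with wavelength lambd and orientation theta
    (sigma tied to lambd for a roughly one-octave bandwidth; assumed values)."""
    sigma = 0.56 * lambd
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * xr / lambd))

def filter2(img, k):
    """'Same'-size 2-D convolution via FFT."""
    s = (img.shape[0] + k.shape[0] - 1, img.shape[1] + k.shape[1] - 1)
    full = np.fft.irfft2(np.fft.rfft2(img, s) * np.fft.rfft2(k, s), s)
    ph, pw = k.shape[0] // 2, k.shape[1] // 2
    return full[ph:ph + img.shape[0], pw:pw + img.shape[1]]

# 5 scales x 8 orientations = 40 response magnitudes on a stand-in face image
rng = np.random.default_rng(0)
face = rng.random((48, 48))
scales = [4, 6, 8, 10, 12]                        # assumed wavelengths
thetas = [np.pi * i / 8 for i in range(8)]
resp = np.array([[np.abs(filter2(face, gabor_kernel(l, t))) for t in thetas]
                 for l in scales])                # shape (5, 8, 48, 48)

# assumed "maximum rule": max over orientations per scale (5 maps) plus
# max over scales per orientation (8 maps) -> 13 channels; PCA+LDA follows.
channels = np.concatenate([resp.max(axis=1), resp.max(axis=0)])
print(channels.shape)  # (13, 48, 48)
```

The 13 channels would then be flattened and passed through PCA+LDA for dimensionality reduction before nearest-neighbor classification, as the abstract describes.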

