Face Identification Using LBP-Based Improved Directional Wavelet Transform

2020 ◽  
Author(s):  
Mohd. Abdul Muqeet ◽  
Qazi Mateenuddin Hameeduddin

Face identification is among the most active areas of research in computer vision and biometric authentication. Various face identification methods have been developed over time, yet numerous facial appearance variations still need to be handled, such as facial expression, pose, and illumination changes. Moreover, faces captured in unconstrained situations impose further challenges on the design of effective face identification methods. It is desirable to extract robust local descriptive features that effectively characterize such facial variations in both constrained and unconstrained settings. This chapter discusses a face identification method that incorporates a popular local descriptor, local binary patterns (LBP), based on the improved directional wavelet transform (IDW) to extract facial features. The method is applied to complex face databases such as CASIA-WebFace and LFW, which consist of large numbers of face images collected in unconstrained environments with extreme variations in expression, pose, and illumination. Experiments and comparisons with various methods, including not only local descriptive methods but also multiresolution analysis (MRA) methods based on local descriptors, demonstrate the efficacy of the LBP-based IDW method.
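The LBP descriptor named above thresholds each pixel's 3×3 neighbourhood against the centre pixel and histograms the resulting byte codes. A minimal NumPy sketch of the basic operator follows; the function names and the toy image are illustrative, not from the chapter, and the directional-wavelet stage is omitted.

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 LBP: threshold the 8 neighbours against the centre pixel
    and pack the resulting bits into a single byte (0-255)."""
    center = patch[1, 1]
    # Neighbour order: clockwise from the top-left corner.
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= center:
            code |= 1 << bit
    return code

def lbp_histogram(image):
    """Slide the 3x3 operator over a grayscale image and return the
    normalised 256-bin histogram used as the texture descriptor."""
    h, w = image.shape
    codes = [lbp_code(image[i - 1:i + 2, j - 1:j + 2])
             for i in range(1, h - 1) for j in range(1, w - 1)]
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

# Toy example: a perfectly uniform patch maps every pixel to code 255.
img = np.full((8, 8), 100, dtype=np.uint8)
hist = lbp_histogram(img)
```

In practice the histogram is computed per image block and the block histograms are concatenated, so that the descriptor retains spatial layout as well as texture.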

2019 ◽  
Vol 9 (21) ◽  
pp. 4678 ◽  
Author(s):  
Daniel Canedo ◽  
António J. R. Neves

Emotion recognition has attracted major attention in numerous fields because of its relevant applications in the contemporary world: marketing, psychology, surveillance, and entertainment are some examples. An emotion can be recognized in several ways; this paper focuses on facial expressions and presents a systematic review on the matter. In total, 112 papers published in ACM, IEEE, BASE, and Springer between January 2006 and April 2019 on this topic were extensively reviewed. The most frequently used methods and algorithms are first introduced and summarized for better understanding, such as face detection, smoothing, Principal Component Analysis (PCA), Local Binary Patterns (LBP), Optical Flow (OF), and Gabor filters, among others. This review identified a clear difficulty in translating the high facial expression recognition (FER) accuracy achieved in controlled environments to uncontrolled and pose-variant environments. Future efforts in the FER field should be directed toward multimodal systems robust enough to face the adversities of real-world scenarios. A thorough analysis of the research done on FER in computer vision, based on the selected papers, is presented. This review aims not only to become a reference for future research on emotion recognition, but also to provide an overview of the work done on this topic for potential readers.
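Of the classical methods the review lists, PCA (the "eigenfaces" approach) reduces flattened face crops to a small set of principal directions. A minimal sketch under stated assumptions follows; the random matrix stands in for real face crops, and the variable names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 50 flattened 16x16 face crops, one per row.
X = rng.normal(size=(50, 256))

# Eigenfaces via PCA: centre the data, then keep the top-k components.
mean_face = X.mean(axis=0)
Xc = X - mean_face
# SVD of the centred matrix; the rows of Vt are the principal directions.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10
components = Vt[:k]            # the k "eigenfaces"
features = Xc @ components.T   # k-dimensional code per face image
```

The k-dimensional codes, rather than the raw pixels, are what a downstream expression classifier would consume; the same projection (mean subtraction plus `components`) must be reused at test time.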


Author(s):  
Chowdhury Mohammad Masum Refat ◽  
Norsinnira Zainul Azlan

Sensor-based facial expression recognition (FER) is an attractive research topic. Nowadays, FER is used for different applications such as smart environments and healthcare solutions. Machines can learn human emotion using FER technology, which is primary and essential for the quantitative analysis of human sentiment. Traditionally, FER is an image recognition problem within the broader field of computer vision, where face detection, tracking, and reliable face recognition still present considerable challenges for researchers in computer vision and pattern recognition. First, data processing and analytics are intensive and require large amounts of computational resources and memory. Second, a fundamental technical limitation is robustness to changes in the environment. Finally, illumination variation further complicates the design of robust algorithms because of changes in cast shadows. Sensor-based FER, however, overcomes these limitations. Sensor technologies, especially low-power wireless communication, high-capacity storage, and on-device data processing, have made substantial progress, making it possible for sensors to evolve from low-level data collection and transmission to high-level inference. This study aims to develop a stretchable sensor-based FER system. A random forest machine learning algorithm is used to train the FER model, with a commercial stretchable facial expression dataset processed in the Anaconda software environment. In this research, our stretch-sensor FER dataset achieved around 95% accuracy for four different emotions (Neutral, Happy, Sad, and Disgust).
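Before a random forest can be trained on stretch-sensor signals, the raw time series are typically segmented into windows and summarized as statistical features. The sketch below shows one plausible feature pipeline in NumPy; the sampling rate, window size, and the synthetic signal are assumptions, not values from the paper, and the forest itself (e.g. scikit-learn's `RandomForestClassifier`) is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical raw data: 4 stretch-sensor channels sampled for 2 s at 50 Hz.
signal = rng.normal(size=(100, 4))

def window_features(signal, win=25, step=25):
    """Segment each channel into fixed-length windows and compute simple
    per-channel statistics (mean, std, min, max) -- a common feature set
    for tree-based classifiers such as random forests."""
    feats = []
    for start in range(0, signal.shape[0] - win + 1, step):
        w = signal[start:start + win]
        feats.append(np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)]))
    return np.array(feats)

# One feature row per window: 4 channels x 4 statistics = 16 values.
X = window_features(signal)
```

Each row of `X`, paired with the emotion label for that window, would form one training example for the forest.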


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2003 ◽  
Author(s):  
Xiaoliang Zhu ◽  
Shihao Ye ◽  
Liang Zhao ◽  
Zhicheng Dai

As a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), improving performance on the AFEW (Acted Facial Expressions in the Wild) dataset is a popular benchmark for emotion recognition under various adverse conditions, including uneven illumination, head deflection, and facial posture. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, faces are detected in each frame of a video sequence, and the corresponding face ROI (region of interest) is extracted to obtain the face images. The face images in each frame are then aligned based on the positions of the facial feature points. Second, the aligned face images are input to a residual neural network to extract the spatial features of the facial expressions, and these spatial features are passed through the hybrid attention module to obtain fused expression features. Finally, the fused features are input to a gated recurrent unit (GRU) to extract the temporal features of the facial expressions, and the temporal features are fed to a fully connected layer to classify and recognize the expressions. Experiments on the CK+ (extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences), and AFEW datasets yielded recognition accuracy rates of 98.46%, 87.31%, and 53.44%, respectively. This demonstrates that the proposed method not only achieves performance competitive with state-of-the-art methods but also improves performance on the AFEW dataset by more than 2%, confirming its effectiveness for facial expression recognition in natural environments.
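The temporal stage described above folds a sequence of per-frame spatial features into one summary vector via a GRU. A minimal NumPy sketch of a single-layer GRU doing that aggregation follows; the dimensions, random weights, and synthetic frame features are illustrative stand-ins, not the paper's architecture, and bias terms are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One gated-recurrent-unit step: update gate z, reset gate r,
    candidate state h_tilde, then the blended new hidden state."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(x @ Wz + h @ Uz)
    r = sigmoid(x @ Wr + h @ Ur)
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)
    return (1 - z) * h + z * h_tilde

d_in, d_h, T = 8, 4, 5          # spatial-feature size, hidden size, frames
params = [rng.normal(scale=0.1, size=s)
          for s in [(d_in, d_h), (d_h, d_h)] * 3]
frames = rng.normal(size=(T, d_in))   # per-frame fused spatial features

h = np.zeros(d_h)
for x in frames:                       # temporal aggregation over the clip
    h = gru_step(x, h, *params)
# h now summarises the whole sequence and would feed the final FC layer.
```

Because the new state is always a convex blend of the previous state and a tanh candidate, the hidden values stay bounded in (-1, 1) regardless of sequence length.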


Author(s):  
Matti Pietikäinen ◽  
Abdenour Hadid ◽  
Guoying Zhao ◽  
Timo Ahonen

Author(s):  
Zakia Hammal

This chapter addresses recent advances in computer vision for facial expression classification. The authors present the different processing steps in automatic facial expression recognition, describe the advances at each stage, and review the remaining challenges in applying such systems to everyday life. They also highlight the importance of taking advantage of human strategies by reviewing advances in psychological research, pointing towards a multidisciplinary approach to facial expression classification. Finally, the authors describe one contribution that aims to address some of the discussed challenges.

