Automated Facial Expression Recognition Using Gradient-Based Ternary Texture Patterns

2013 ◽  
Vol 2013 ◽  
pp. 1-8 ◽  
Author(s):  
Faisal Ahmed ◽  
Emam Hossain

Recognition of human expression from facial images is an interesting research area, which has received increasing attention in recent years. A robust and effective facial feature descriptor is the key to designing a successful expression recognition system. Although much progress has been made, deriving a face feature descriptor that performs consistently under changing environments is still a difficult and challenging task. In this paper, we present the gradient local ternary pattern (GLTP)—a discriminative local texture feature for representing facial expression. The proposed GLTP operator encodes the local texture of an image by computing the gradient magnitudes of the local neighborhood and quantizing those values into three discrimination levels. The location and occurrence information of the resulting micropatterns is then used as the face feature descriptor. The performance of the proposed method has been evaluated for the person-independent face expression recognition task. Experiments with prototypic expression images from the Cohn-Kanade (CK) face expression database validate that the GLTP feature descriptor can effectively encode the facial texture and thus achieves better recognition performance than some well-known appearance-based facial features.
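The core of the GLTP operator is a three-level quantization of local gradient magnitudes. The abstract does not give the exact formulation, so the sketch below is a minimal illustration assuming a Sobel gradient and a user-chosen threshold `t`; the function names and neighborhood handling are assumptions, not the authors' implementation.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel filters (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def ternary_code(center, neighbors, t):
    """Quantize neighbor gradient magnitudes into three levels:
    +1 if above center + t, -1 if below center - t, 0 otherwise."""
    return np.where(neighbors > center + t, 1,
                    np.where(neighbors < center - t, -1, 0))
```

The resulting per-pixel ternary micropatterns would then be histogrammed over the face image to form the descriptor.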

2019 ◽  
Vol 8 (2S11) ◽  
pp. 4047-4051

The automatic detection of facial expressions is an active research topic, owing to its wide range of applications in human-computer interaction, games, security, and education. However, most recent studies have been conducted in controlled laboratory environments, which do not reflect real-world scenarios. For that reason, a real-time Facial Expression Recognition System (FERS) is proposed in this paper, in which a deep learning approach is applied to enhance the detection of six basic emotions (happiness, sadness, anger, disgust, fear, and surprise) in real-time video streams. This system is composed of three main components: face detection, face preparation, and face expression classification. The proposed FERS, trained on 35,558 face images, achieves 65% accuracy.


2019 ◽  
Vol 16 (04) ◽  
pp. 1941002 ◽  
Author(s):  
Jing Li ◽  
Yang Mi ◽  
Gongfa Li ◽  
Zhaojie Ju

Facial expression recognition has been widely used in human computer interaction (HCI) systems. Over the years, researchers have proposed different feature descriptors, implemented different classification methods, and carried out a number of experiments on various datasets for automatic facial expression recognition. However, most of them used 2D static images or 2D video sequences for the recognition task. The main limitations of 2D-based analysis are problems associated with variations in pose and illumination, which reduce the recognition accuracy. Therefore, an alternative way is to incorporate depth information acquired by a 3D sensor, because it is invariant to both pose and illumination. In this paper, we present a two-stream convolutional neural network (CNN)-based facial expression recognition system and test it on our own RGB-D facial expression dataset collected by Microsoft Kinect for XBOX in unspontaneous scenarios, since Kinect is an inexpensive and portable device to capture both RGB and depth information. Our fully annotated dataset includes seven expressions (i.e., neutral, sadness, disgust, fear, happiness, anger, and surprise) for 15 subjects (9 males and 6 females) aged from 20 to 25. The two individual CNNs are identical in architecture but do not share parameters. To combine the detection results produced by these two CNNs, we propose the late fusion approach. The experimental results demonstrate that the proposed two-stream network using RGB-D images is superior to using only RGB images or only depth images.
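Late fusion combines the class scores of the RGB and depth streams after each CNN has produced its own output, rather than merging features earlier in the network. The abstract does not specify the fusion rule, so the sketch below assumes a weighted average of the two streams' softmax outputs; the weight `w` is a hypothetical parameter.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def late_fusion(logits_rgb, logits_depth, w=0.5):
    """Fuse two streams' class logits after each stream's own forward pass.

    w weights the RGB stream and (1 - w) the depth stream; the fused
    prediction is the argmax of the weighted average of softmax outputs.
    """
    p = w * softmax(logits_rgb) + (1 - w) * softmax(logits_depth)
    return p.argmax(axis=-1), p
```

A weighted average keeps the fused scores a valid probability distribution, which makes the two streams' confidences directly comparable.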


In many face recognition systems, face detection is a crucial step. Detecting faces is complex due to the variability across human faces in color, pose, expression, position, and orientation, so various modeling techniques are needed to recognize facial expressions reliably. The proposed system consists of three phases: the facial expression database, pre-processing, and classification. To simulate and assess recognition efficiency based on different variables (network composition, learning patterns, and pre-processing), we use both the Japanese Female Facial Expression Database (JAFFE) and the Extended Cohn-Kanade Dataset (CK+). The pre-processing approaches compared include face detection, translation, global contrast normalization, and histogram equalization. Significant results were obtained, with 85.52% accuracy, particularly in comparison with single pre-processing steps and raw data. The results indicate that the ANN classifier produces satisfactory, more accurate results.
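Two of the pre-processing steps compared, global contrast normalization and histogram equalization, can be sketched directly. The implementations below are standard textbook versions, not necessarily the parameters used in the paper.

```python
import numpy as np

def global_contrast_normalize(img, eps=1e-8):
    """Zero-mean, unit-variance scaling of the whole image."""
    img = img.astype(float)
    return (img - img.mean()) / (img.std() + eps)

def histogram_equalize(img, levels=256):
    """Spread the intensity histogram of an 8-bit image over the full range."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    scale = max(cdf[-1] - cdf_min, 1)  # guard against constant images
    lut = np.clip(np.round((cdf - cdf_min) / scale * (levels - 1)),
                  0, levels - 1)
    return lut.astype(np.uint8)[img]
```

Both steps reduce sensitivity to lighting before the images reach the classifier: normalization removes global brightness and contrast offsets, while equalization redistributes intensities to use the full dynamic range.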


2020 ◽  
Vol 24 (6) ◽  
pp. 1455-1476 ◽  
Author(s):  
Xuejian Wang ◽  
Michael C. Fairhurst ◽  
Anne M.P. Canuto

Although several automatic computer systems have been proposed to address facial expression recognition problems, the majority of them still fail to cope with the requirements of many practical application scenarios. In this paper, head pose variation, one of the most influential and common issues raised when applying automatic facial expression recognition systems in practice, is comprehensively explored and investigated. To this end, two novel texture feature representations are proposed for implementing multi-view facial expression recognition systems in practical environments. These representations combine block-based techniques with Local Ternary Pattern-based features, providing a more informative and efficient feature representation of the facial images. In addition, an in-house multi-view facial expression database has been designed and collected to allow a detailed study of the effect of out-of-plane pose angles on the performance of a multi-view facial expression recognition system. Along with the proposed in-house dataset, the proposed system is tested on two well-known facial expression databases, the CK+ and BU-3DFE datasets. The obtained results show that the proposed system outperforms current state-of-the-art 2D facial expression systems in the presence of pose variations.
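Block-based texture representations of this kind typically divide the per-pixel pattern-code map into a spatial grid and concatenate per-block histograms, so the feature vector retains location information alongside texture statistics. The sketch below illustrates that general scheme; the grid size, bin count, and normalization are assumptions, not the paper's exact configuration.

```python
import numpy as np

def block_histograms(code_map, grid=(4, 4), n_bins=16):
    """Divide a per-pixel pattern-code map into a grid of blocks and
    concatenate each block's code histogram into one feature vector."""
    h, w = code_map.shape
    bh, bw = h // grid[0], w // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = code_map[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
            feats.append(hist / max(block.size, 1))  # per-block normalization
    return np.concatenate(feats)
```

Because each block is histogrammed separately, a change around the mouth and an identical change around the eyes produce different feature vectors, which a single global histogram could not distinguish.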


2019 ◽  
Vol 2019 ◽  
pp. 1-13 ◽  
Author(s):  
Ying Tong ◽  
Rui Chen

To overcome the shortcomings of inaccurate textural direction representation and the high computational complexity of Local Binary Patterns (LBPs), we propose a novel feature descriptor named Local Dominant Directional Symmetrical Coding Patterns (LDDSCPs). Inspired by the directional sensitivity of the human visual system, we partition eight convolution masks into two symmetrical groups according to their directions and adopt these two groups to compute the convolution values of each pixel. Then, we encode the dominant direction information of facial expression texture by comparing each pixel's convolution values with the average strength of its group, obtaining LDDSCP-1 and LDDSCP-2 codes, respectively. Finally, in view of the symmetry of the two groups of direction masks, we stack the corresponding histograms of the LDDSCP-1 and LDDSCP-2 codes into the ultimate LDDSCP feature vector, which yields a more precise facial feature description at lower computational complexity. Experimental results on the JAFFE and Cohn-Kanade databases demonstrate that the proposed LDDSCP feature descriptor achieves superior performance in recognition rate and computational complexity compared with LBP, Gabor, and other traditional operators. Furthermore, it is also competitive with state-of-the-art local descriptors such as LDP, LDNP, es-LBP, and GDP.
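The group-wise coding step can be sketched as follows: given the eight directional convolution responses at a pixel, each symmetric group of four is coded by comparing its responses against that group's mean strength. The grouping order and bit assignment below are illustrative assumptions; the paper's exact masks and partition are not given in the abstract.

```python
import numpy as np

def dominant_direction_code(responses):
    """responses: eight directional convolution values for one pixel.

    Split them into two symmetric groups of four; within each group,
    set a bit wherever the response exceeds that group's mean. This
    yields the two 4-bit codes (LDDSCP-1 and LDDSCP-2 style)."""
    g1, g2 = responses[:4], responses[4:]
    code1 = sum(1 << i for i, r in enumerate(g1) if r > g1.mean())
    code2 = sum(1 << i for i, r in enumerate(g2) if r > g2.mean())
    return code1, code2
```

Coding each group against its own mean (rather than against the center pixel, as in LBP) is what makes the dominant direction, not raw intensity, drive the pattern.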


2018 ◽  
Vol 28 (2) ◽  
pp. 399-409 ◽  
Author(s):  
Faisal Ahmed ◽  
Md. Hasanul Kabir

Abstract In recent years, research in automated facial expression recognition has attracted significant attention for its potential applicability in human-computer interaction, surveillance systems, animation, and consumer electronics. However, recognition in uncontrolled environments under the presence of illumination and pose variations, low-resolution video, occlusion, and random noise is still a challenging research problem. In this paper, we investigate recognition of facial expressions in difficult conditions by means of an effective facial feature descriptor, namely the directional ternary pattern (DTP). Given a face image, the DTP operator describes the facial features by quantizing the eight-directional edge response values, capturing essential texture properties such as the presence of edges, corners, points, and lines. We also present an enhancement of the basic DTP encoding method, namely the compressed DTP (cDTP), that can describe the local texture more effectively with fewer features. The recognition performances of the proposed DTP and cDTP descriptors are evaluated using the Cohn-Kanade (CK) and Japanese female facial expression (JAFFE) databases. In our experiments, we simulate difficult conditions using original database images with lighting variations, low-resolution images obtained by down-sampling the originals, and images corrupted with Gaussian noise. In all cases, the proposed method outperforms some of the well-known face feature descriptors.
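Eight-directional edge responses are commonly computed with Kirsch compass masks, each a 45-degree rotation of its neighbor. The sketch below uses that standard construction plus a symmetric ternary threshold; the threshold rule is an assumption, since the abstract does not specify how DTP quantizes the responses.

```python
import numpy as np

# Clockwise positions of a 3x3 mask's outer ring.
RING = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]

def kirsch_masks():
    """Eight 3x3 Kirsch compass masks, each a 45-degree rotation of the last."""
    masks = [np.array([[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]], dtype=float)]
    for _ in range(7):
        ring = [masks[-1][r, c] for r, c in RING]
        ring = ring[1:] + ring[:1]          # rotate the outer ring one step
        nxt = np.zeros((3, 3))
        for (r, c), v in zip(RING, ring):
            nxt[r, c] = v
        masks.append(nxt)
    return masks

def dtp_code(patch, t):
    """Ternary-quantize the eight edge responses of a 3x3 patch:
    +1 above +t, -1 below -t, 0 otherwise (threshold rule assumed)."""
    resp = np.array([(patch * m).sum() for m in kirsch_masks()])
    return np.where(resp > t, 1, np.where(resp < -t, -1, 0))
```

The three-level code gives DTP some robustness to noise: small responses near zero fall into the middle level instead of flipping a binary bit, which is exactly the failure mode the abstract's Gaussian-noise experiments probe.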


2021 ◽  
Vol 24 (2) ◽  
pp. 144-148 ◽  
Author(s):  
Alaa Nabeel Haj Najeb ◽  
Nasser Nasser

Facial expressions are a form of non-verbal communication; they appear as changes on the surface of the facial skin according to one's inner emotional states, aims, or social communications. Classification of these expressions is a natural process for humans, but it is a challenging task for machines. Lately, interest in facial expression recognition has grown, and many systems have been developed to classify expressions from facial images. Any expression recognition system comprises three steps: first face acquisition, then feature extraction, and finally classification. The classification accuracy depends primarily on the feature extraction step. Therefore, in this research we study several texture feature extraction descriptors and compare their results under the same preprocessing conditions; moreover, we propose two improvements to one of these descriptors, which give better results than the original. We validate the results on two commonly used expression recognition databases using the MATLAB programming language, and hope this work will be of interest to researchers in the field.

