Facial Expressions Classification with Ensembles of Convolutional Neural Networks and Smart Voting

2018 ◽  
Author(s):  
Rodrigo C. Moraes ◽  
Elloá B. Guedes ◽  
Carlos Maurício S. Figueiredo

Facial expression is a very important factor in human social interaction, and technologies that can automatically interpret and respond to facial expression stimuli already find a wide variety of applications, from antidepressant drug testing to fatigue analysis of drivers and pilots. In this context, this work presents a model for automatic facial expression classification trained on the Challenges in Representation Learning dataset (FER2013), which is characterized by examples of spontaneous facial expressions in uncontrolled environments. The presented method is an ensemble of convolutional neural networks with a non-trivial voting system based on a learned model, Extreme Gradient Boosting (XGBoost). K-fold cross-validation and the micro-averaged F1 score were used as performance criteria to guarantee the robustness and reliability of the results, which are competitive with state-of-the-art works.
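The contrast between trivial voting and the "smart" XGBoost-based voting described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the probability values are made up, and the XGBoost meta-classifier itself is not included; the sketch only shows the majority-vote baseline and the stacked feature vector that such a meta-classifier would consume.

```python
import numpy as np

# Hypothetical class probabilities from 3 ensemble members for one image
# (7 classes, as in FER2013). Values are illustrative only.
probs = np.array([
    [0.10, 0.60, 0.05, 0.05, 0.10, 0.05, 0.05],  # model 1 votes class 1
    [0.20, 0.50, 0.05, 0.05, 0.10, 0.05, 0.05],  # model 2 votes class 1
    [0.55, 0.20, 0.05, 0.05, 0.05, 0.05, 0.05],  # model 3 votes class 0
])

def hard_majority_vote(probs):
    """Trivial voting: each model casts one vote for its argmax class."""
    votes = probs.argmax(axis=1)
    return int(np.bincount(votes, minlength=probs.shape[1]).argmax())

def stacking_features(probs):
    """'Smart' voting input: concatenate all models' probability vectors
    into one feature vector for a learned meta-classifier (XGBoost in
    the paper's case), which is trained to output the final class."""
    return probs.reshape(-1)

print(hard_majority_vote(probs))       # 1 (two of three models agree)
print(stacking_features(probs).shape)  # (21,) fed to the meta-classifier
```

The design point is that the meta-classifier sees the full probability distributions, so it can learn, for example, that one member is unreliable on a particular class and weigh it down, which a plain majority vote cannot do.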

Human feelings are mental states that arise spontaneously rather than through cognitive effort. Among the basic emotions are happiness, anger, neutrality, sadness, and surprise. These internal feelings are reflected on the face as facial expressions. This paper presents a novel methodology for facial expression analysis that aids the development of a facial expression recognition system capable of classifying five basic emotions in real time. Recognizing facial expressions is important because of its applications in many domains, such as artificial intelligence, security, and robotics. Many different approaches can be applied to facial expression recognition (FER), but the technique best suited to automated FER is the convolutional neural network (CNN). Thus, a novel CNN architecture is proposed, and a combination of multiple datasets, namely FER2013, FER+, JAFFE, and CK+, is used for training and testing, which helps to improve accuracy and yields a robust real-time system. The proposed methodology gives quite good results, and the accuracy obtained may encourage and support researchers in building better models for automated facial expression recognition systems.
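Merging FER2013, FER+, JAFFE, and CK+ into one five-emotion training set requires harmonising their differing label schemes. The sketch below shows one plausible way to do this; the label names, integer codes, and five-class target set are illustrative assumptions, not the paper's exact mapping.

```python
# Hypothetical label harmonisation for combining several FER datasets
# into one 5-emotion training set. The mapping below is an assumption
# for illustration (FER2013's real codes differ per release).
TARGET = ["happy", "angry", "neutral", "sad", "surprise"]

FER2013_MAP = {0: "angry", 3: "happy", 4: "sad", 5: "surprise", 6: "neutral"}
# Classes with no counterpart in the 5-emotion target (e.g. disgust,
# fear) are simply dropped from the merged set.

def harmonise(samples, label_map):
    """Keep only samples whose label maps into TARGET, re-indexed 0..4."""
    out = []
    for image, label in samples:
        name = label_map.get(label)
        if name in TARGET:
            out.append((image, TARGET.index(name)))
    return out

merged = harmonise([("img0", 3), ("img1", 1), ("img2", 6)], FER2013_MAP)
print(merged)  # [('img0', 0), ('img2', 2)] -- label 1 had no mapping
```

Each remaining dataset would get its own `*_MAP`, after which the harmonised lists can simply be concatenated before the train/test split.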


2021 ◽  
Vol 11 (24) ◽  
pp. 11738
Author(s):  
Thomas Teixeira ◽  
Éric Granger ◽  
Alessandro Lameiras Koerich

Facial expressions are one of the most powerful ways to depict specific patterns in human behavior and to describe the human emotional state. However, despite the impressive advances of affective computing over the last decade, automatic video-based systems for facial expression recognition still cannot correctly handle variations in facial expression among individuals, nor cross-cultural and demographic aspects. Indeed, recognizing facial expressions is a difficult task even for humans. This paper investigates the suitability of state-of-the-art deep learning architectures based on convolutional neural networks (CNNs) for continuous emotion recognition from long video sequences captured in the wild. To this end, several 2D CNN models designed to encode spatial information are extended to allow spatiotemporal representation learning from videos, considering a complex, multi-dimensional emotion space in which continuous values of valence and arousal must be predicted. We developed and evaluated convolutional recurrent neural networks, which combine 2D CNNs with long short-term memory units, and inflated 3D CNN models, which are built by inflating the weights of a pre-trained 2D CNN model during fine-tuning on application-specific videos. Experimental results on the challenging SEWA-DB dataset show that these architectures can effectively be fine-tuned to encode spatiotemporal information from successive raw pixel images and achieve state-of-the-art results on that dataset.
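The weight-inflation step mentioned above (turning a pre-trained 2D CNN into a 3D one) can be sketched in a few lines. This is a minimal illustration of the standard I3D-style bootstrapping trick, not the authors' code; the kernel shapes are assumptions for the example.

```python
import numpy as np

def inflate_2d_kernel(k2d, t):
    """Inflate a pre-trained 2D conv kernel of shape (h, w, c_in, c_out)
    into a 3D kernel of shape (t, h, w, c_in, c_out) by repeating it t
    times along a new temporal axis and dividing by t, so that a
    temporally constant input produces the same activations as the
    original 2D network before fine-tuning begins."""
    return np.repeat(k2d[np.newaxis, ...], t, axis=0) / t

# Example: inflate one hypothetical 3x3, 64->64 channel kernel over 5 frames.
k2d = np.random.default_rng(0).normal(size=(3, 3, 64, 64))
k3d = inflate_2d_kernel(k2d, t=5)
print(k3d.shape)                          # (5, 3, 3, 64, 64)
print(np.allclose(k3d.sum(axis=0), k2d))  # True: responses on static clips match
```

Dividing by `t` is what preserves the pre-trained responses: summing the temporal slices recovers the original 2D kernel exactly, so the inflated network starts from a sensible initialization rather than from scratch.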


2021 ◽  
Vol 4 (2) ◽  
pp. 192-201
Author(s):  
Denys Valeriiovych Petrosiuk ◽  
Olena Oleksandrivna Arsirii ◽  
Oksana Yurievna Babilunha ◽  
Anatolii Oleksandrovych Nikolenko

The application of deep convolutional neural networks to automated facial expression recognition and the determination of human emotions is analyzed. It is proposed to exploit the advantages of transfer learning when training deep convolutional neural networks, in order to overcome the insufficient volume of labeled image sets with different facial expressions. Most of these datasets are labeled according to a facial coding system based on the units of human facial movement. The developed transfer learning technology for the publicly available DenseNet and MobileNet families of deep convolutional networks, with subsequent fine-tuning of the network parameters, reduced training time and computational resources for the facial expression recognition task without losing the reliability of action unit recognition. During the development of this technology, the following tasks were solved. First, the choice of publicly available DenseNet and MobileNet convolutional networks pre-trained on the ImageNet dataset was substantiated, taking into account the peculiarities of transfer learning for recognizing facial expressions and determining emotions. Second, a deep convolutional neural network model and a method for training it were developed for recognizing facial expressions and determining human emotions, taking into account the specifics of the selected pre-trained networks. Third, the developed deep learning technology was tested; finally, its resource intensity and the reliability of action unit recognition were assessed on the DISFA dataset.
The proposed deep learning technology for convolutional neural networks can be used to develop systems for automatic facial expression recognition and determination of human emotions on both stationary and mobile devices. Further modification of systems for recognizing the units of human facial activity, in order to increase recognition reliability, is possible through data augmentation techniques.
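The two-phase transfer learning scheme described above (train a new head on frozen pre-trained features, then fine-tune) can be illustrated without a deep learning framework. In this sketch a fixed random projection stands in for the frozen DenseNet/MobileNet backbone, the data is synthetic, and the "head" is a logistic regression trained by gradient descent; everything here is an illustrative assumption, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained backbone: a fixed random projection
# with a ReLU. In real transfer learning these weights come from
# ImageNet pre-training and are NOT updated during phase 1.
W_backbone = rng.normal(size=(16, 8))
features = lambda X: np.maximum(X @ W_backbone, 0.0)

# Toy binary "expression" data, constructed to be separable in feature space.
X = rng.normal(size=(200, 16))
F = features(X)
w_true = rng.normal(size=8)
y = (F @ w_true > np.median(F @ w_true)).astype(float)

# Phase 1: train ONLY the new classification head on the frozen features.
# Phase 2 ("fine tuning") would then unfreeze the top backbone layers
# and continue training with a much smaller learning rate.
w, b, lr = np.zeros(8), 0.0, 0.05
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # sigmoid head
    g = p - y                                 # logistic-loss gradient
    w -= lr * F.T @ g / len(X)
    b -= lr * g.mean()

acc = ((p > 0.5) == y).mean()
print(acc)  # high training accuracy from the head alone
```

The point of the two phases is exactly what the abstract claims: because the backbone is frozen, phase 1 trains only a small number of parameters, which is cheap and works with small facial-expression datasets; fine-tuning then adapts the backbone without destroying its pre-trained features.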


2021 ◽  
Vol 9 (5) ◽  
pp. 1141-1152
Author(s):  
Muazu Abdulwakil Auma ◽  
Eric Manzi ◽  
Jibril Aminu

Facial recognition is integral and essential in today's society, and the recognition of emotions based on facial expressions is becoming commonplace. This paper provides an analytical overview of databases of facial expression video data and of several approaches to recognizing emotions from facial expressions, covering the three main image analysis stages: pre-processing, feature extraction, and classification. The paper presents both approaches based on deep neural networks and traditional means of recognizing human emotions from visual facial features, along with current results of some existing algorithms. In reviewing the scientific and technical literature, the focus was mainly on sources containing theoretical and research information on the methods under consideration, and on comparisons between traditional techniques and deep neural network methods supported by experimental research. An analysis of the scientific and technical literature describing methods and algorithms for analyzing and recognizing facial expressions, together with worldwide research results, has shown that traditional methods of classifying facial expressions are inferior in speed and accuracy to artificial neural networks. This review's main contributions provide a general understanding of modern approaches to facial expression recognition, allowing new researchers to understand its main components and trends. A comparison of worldwide research results shows that combining traditional approaches with deep neural network approaches yields better classification accuracy; nonetheless, the best individual classification methods are artificial neural networks.


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Andry Chowanda

Social interactions are important for us humans, as social creatures, and emotions play an important part in them: they usually convey meaning, along with spoken utterances, to the interlocutor. Automatic facial expression recognition is one technique to automatically capture, recognize, and understand emotions from an interlocutor's face. Many techniques have been proposed to increase the accuracy of emotion recognition from facial cues, and architectures such as convolutional neural networks demonstrate promising results. However, most current convolutional neural network models require enormous computational power to train and to perform emotion recognition. This research aims to build compact networks with depthwise separable layers while maintaining performance. The proposed architecture was compared against three similar architectures on three datasets and performed best among them: it achieved up to 13% better accuracy while being 6–71% smaller and more compact than the other architectures. The best testing accuracy achieved by the architecture was 99.4%.
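The parameter savings that make depthwise separable layers attractive for compact networks can be shown with simple arithmetic. The layer sizes below (3×3 kernels, 64 input and 128 output channels) are illustrative assumptions, not the paper's actual configuration.

```python
def conv_params(h, w, c_in, c_out):
    """Weights of a standard convolution (biases ignored)."""
    return h * w * c_in * c_out

def depthwise_separable_params(h, w, c_in, c_out):
    """Depthwise step (one h*w filter per input channel) followed by a
    1x1 pointwise convolution that mixes channels."""
    return h * w * c_in + c_in * c_out

std = conv_params(3, 3, 64, 128)                  # 73,728 weights
sep = depthwise_separable_params(3, 3, 64, 128)   # 576 + 8,192 = 8,768
print(std, sep, round(std / sep, 1))              # 73728 8768 8.4
```

For these shapes the separable layer uses roughly 8.4× fewer weights than the standard convolution, which is the mechanism behind the "6–71% smaller" models reported above: the factorization cost grows as h·w·c_in + c_in·c_out instead of h·w·c_in·c_out.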

