The current challenges of automatic recognition of facial expressions: A systematic review

2020
Vol 33 (3-6)
pp. 113-138
Author(s):  
Audrey Masson ◽  
Guillaume Cazenave ◽  
Julien Trombini ◽  
Martine Batt

In recent years, due to its great economic and social potential, the recognition of facial expressions linked to emotions has become one of the most flourishing applications in the field of artificial intelligence, and has been the subject of many developments. However, despite significant progress, this field is still subject to many theoretical debates and technical challenges. It therefore seems important to make a general inventory of the different lines of research and to present a synthesis of recent results in this field. To this end, we have carried out a systematic review of the literature according to the guidelines of the PRISMA method. A search of 13 documentary databases identified a total of 220 references over the period 2014–2019. After a global presentation of the current systems and their performance, we grouped and analyzed the selected articles in the light of the main problems encountered in the field of automated facial expression recognition. The conclusion of this review highlights the strengths, limitations and main directions for future research in this field.

2015
Vol 29 (1)
pp. 121-141
Author(s):  
Neus Feliu ◽  
Isabel C. Botero

Philanthropy in family enterprises operates at the crossroads of family, business, and society. Most of the research in this area is approached from the business or the individual level; thus, we have a fragmented understanding of philanthropy in family enterprises. This article presents a systematic review of the literature on the subject. Based on 55 sources published between 1988 and 2014, we explain the drivers of this behavior, the vehicles used to practice it, and the outcomes tied to the practice of philanthropy in family enterprises. We identify gaps in our understanding and provide ideas for future research.


2019
Vol 9 (21)
pp. 4678
Author(s):  
Daniel Canedo ◽  
António J. R. Neves

Emotion recognition has attracted major attention in numerous fields because of its relevant applications in the contemporary world: marketing, psychology, surveillance, and entertainment are some examples. An emotion can be recognized in several ways; this paper focuses on facial expressions and presents a systematic review on the matter. A total of 112 papers published in ACM, IEEE, BASE and Springer between January 2006 and April 2019 regarding this topic were extensively reviewed. The most frequently used methods and algorithms are first introduced and summarized for a better understanding, such as face detection, smoothing, Principal Component Analysis (PCA), Local Binary Patterns (LBP), Optical Flow (OF), and Gabor filters, among others. This review identified a clear difficulty in translating the high facial expression recognition (FER) accuracy achieved in controlled environments to uncontrolled and pose-variant environments. Future efforts in the FER field should be directed toward multimodal systems that are robust enough to face the adversities of real-world scenarios. A thorough analysis of the research done on FER in computer vision, based on the selected papers, is presented. This review aims not only to become a reference for future research on emotion recognition, but also to provide potential readers with an overview of the work done on this topic.
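Among the classical techniques this review surveys, Local Binary Patterns are simple enough to sketch. Below is a minimal NumPy illustration (not taken from any of the reviewed papers) of the basic 3×3 LBP operator and the normalized histogram descriptor commonly built from it:

```python
import numpy as np

def lbp_3x3(img):
    """Basic 3x3 Local Binary Patterns for a grayscale image.

    Each interior pixel is compared with its 8 neighbours; every
    neighbour greater than or equal to the centre sets one bit of
    an 8-bit code.
    """
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Neighbour offsets in clockwise order starting at top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes, a common texture descriptor."""
    codes = lbp_3x3(img)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()  # normalize so faces of any size compare
```

In a typical FER pipeline the histogram (often computed per grid cell and concatenated) is fed to a classifier such as an SVM.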


2011
Vol 268-270
pp. 471-475
Author(s):  
Sungmo Jung ◽  
Seoksoo Kim

Many 3D films use facial expression recognition technologies. With the existing techniques, a large number of markers must be attached to the face, a camera is fixed in front of the face, and the movements of the markers are calculated. However, the markers capture only the changes in the regions where they are attached, which makes realistic recognition of facial expressions difficult. Therefore, this study extracted a preliminary eye region from a 320×240 image by defining specific location values for the eye, and the final eye region was then selected from the preliminary region. This study suggests an improved method of detecting the eye region that reduces errors arising from noise.


2019
Vol 8 (2S11)
pp. 1076-1079

Automated facial expression recognition can greatly improve the human–machine interface. Many deep learning approaches have been applied in recent years due to their outstanding recognition accuracy after training with large amounts of data. In this research, we enhanced a Convolutional Neural Network (CNN) method to recognize the six basic emotions and compared several preprocessing methods to show their influence on CNN performance: resizing, mean subtraction, normalization, standard-deviation normalization, scaling, and edge detection. Face detection as the single preprocessing phase achieved the most significant result, with 100% accuracy, compared with the other preprocessing phases and raw data.
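The preprocessing steps compared above can be sketched in a few lines. This is an illustrative NumPy version, not the authors' actual pipeline; it uses nearest-neighbour resizing for brevity and combines mean subtraction with standard-deviation normalization:

```python
import numpy as np

def nn_resize(img, size):
    """Nearest-neighbour resize of a 2-D array to the given (h, w)."""
    h, w = img.shape
    th, tw = size
    rows = np.arange(th) * h // th   # source row for each target row
    cols = np.arange(tw) * w // tw   # source column for each target column
    return img[rows[:, None], cols]

def preprocess(img, size=(48, 48)):
    """Typical CNN input preparation for a grayscale face crop."""
    img = nn_resize(np.asarray(img, dtype=np.float32), size)
    img = img - img.mean()           # mean subtraction (zero-centre)
    img = img / (img.std() + 1e-8)   # standard-deviation normalization
    return img
```

After these steps the input has roughly zero mean and unit variance, which generally stabilizes CNN training.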


Webology
2020
Vol 17 (2)
pp. 804-816
Author(s):  
Elaf J. Al Taee ◽  
Qasim Mohammed Jasim

A facial expression is a visual impression of a person's situation, emotions, cognitive activity, personality, intention, and psychopathology; it plays an active and vital role in the exchange of information and communication between people. In machines and robots dedicated to communicating with humans, facial expression recognition plays an important role in communication and in reading what a person implies, especially in the field of health, so research in this area leads to improved communication with robots. This topic has been discussed extensively, and the progress of deep learning and the use of Convolutional Neural Networks (CNNs) in image processing, where they have widely proven their efficiency, has led to the use of CNNs for the recognition of facial expressions. An automatic system for facial expression recognition (FER) must perform detection and localization of faces in a cluttered scene, feature extraction, and classification. In this research, a CNN is used to perform FER. The target is to label each facial image with one of the seven facial emotion categories considered in the JAFFE database: sad, happy, fear, surprise, anger, disgust, and neutral. We trained CNNs of different depths using gray-scale images from the JAFFE database. The accuracy of the proposed system was 100%.
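The feature-extraction step of such a CNN rests on 2-D convolution. The minimal NumPy sketch below (a generic illustration, not the paper's network) shows a single-channel "valid" convolution followed by ReLU, applied with an edge-detecting kernel:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Single-channel 'valid' 2-D convolution (strictly, cross-correlation,
    as in most deep-learning libraries)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear activation, applied after each convolution."""
    return np.maximum(x, 0.0)
```

A trained CNN stacks many such filtered-and-rectified maps, with the kernels learned from data rather than fixed like the Sobel kernel used in the test below.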


eLife
2020
Vol 9
Author(s):  
Gilles Vannuscorps ◽  
Michael Andres ◽  
Alfonso Caramazza

What mechanisms underlie facial expression recognition? A popular hypothesis holds that efficient facial expression recognition cannot be achieved by visual analysis alone but additionally requires a mechanism of motor simulation — an unconscious, covert imitation of the observed facial postures and movements. Here, we first discuss why this hypothesis does not necessarily follow from extant empirical evidence. Next, we report experimental evidence against the central premise of this view: we demonstrate that individuals can achieve normotypical efficient facial expression recognition despite a congenital absence of relevant facial motor representations and, therefore, unaided by motor simulation. This underscores the need to reconsider the role of motor simulation in facial expression recognition.


2021
Vol 8 (11)
Author(s):  
Shota Uono ◽  
Wataru Sato ◽  
Reiko Sawada ◽  
Sayaka Kawakami ◽  
Sayaka Yoshimura ◽  
...  

People with schizophrenia or subclinical schizotypal traits exhibit impaired recognition of facial expressions. However, it remains unclear whether the detection of emotional facial expressions is impaired in people with schizophrenia or high levels of schizotypy. The present study examined whether the detection of emotional facial expressions would be associated with schizotypy in a non-clinical population after controlling for the effects of IQ, age, and sex. Participants were asked to respond to whether all faces were the same as quickly and as accurately as possible following the presentation of angry or happy faces or their anti-expressions among crowds of neutral faces. Anti-expressions contain a degree of visual change that is equivalent to that of normal emotional facial expressions relative to neutral facial expressions and are recognized as neutral expressions. Normal expressions of anger and happiness were detected more rapidly and accurately than their anti-expressions. Additionally, the degree of overall schizotypy was negatively correlated with the effectiveness of detecting normal expressions versus anti-expressions. An emotion–recognition task revealed that the degree of positive schizotypy was negatively correlated with the accuracy of facial expression recognition. These results suggest that people with high levels of schizotypy experienced difficulties detecting and recognizing emotional facial expressions.


2021
Vol 2021
pp. 1-8
Author(s):  
Junhuan Wang

Recognizing facial expressions accurately and effectively is of great significance to medicine and other fields. To address the low accuracy of face recognition in traditional methods, an improved facial expression recognition method is proposed. The proposed method conducts continuous adversarial training between the discriminator and generator structures of a generative adversarial network (GAN) to enhance the extraction of image features from the detected data set, thereby realizing high-accuracy recognition of facial expressions. To reduce the amount of computation, the GAN generator is improved based on the idea of residual networks: the image is first reduced in dimension and then processed, which preserves the high accuracy of the recognition method while improving real-time performance. The experimental part of this work uses the JAFFE, CK+, and FER2013 datasets for simulation verification. The proposed recognition method shows clear advantages on data sets of different sizes, with average recognition accuracy rates of 96.6%, 95.6%, and 72.8%, respectively, demonstrating that the proposed method has generalization ability.
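The adversarial "confrontation" between generator and discriminator can be illustrated with the standard GAN objectives. The sketch below is a generic illustration, not the paper's implementation; the discriminator outputs are hypothetical numbers standing in for a real forward pass:

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy for discriminator probabilities p against labels y."""
    eps = 1e-12  # guard against log(0)
    return float(-np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))

# Hypothetical discriminator outputs for one training step.
d_real = np.array([0.9, 0.8])   # discriminator's scores on real faces
d_fake = np.array([0.2, 0.1])   # discriminator's scores on generated faces

# Discriminator objective: push d_real toward 1 and d_fake toward 0.
d_loss = bce(d_real, np.ones(2)) + bce(d_fake, np.zeros(2))

# Generator objective (non-saturating form): push d_fake toward 1.
g_loss = bce(d_fake, np.ones(2))
```

Training alternates gradient steps on these two losses; here the generator loss is large because the discriminator currently rejects the fakes, which is exactly the signal that drives the generator's features to improve.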


2020 ◽  
pp. 103-140
Author(s):  
Yakov A. Bondarenko ◽  
Galina Ya. Menshikova

Background. The study explores two main processes in the perception of facial expression: analytical (perception based on individual facial features) and holistic (holistic, non-additive perception of all features). The relative contribution of each process to facial expression recognition is still an open question.
Objective. To identify the role of holistic and analytical mechanisms in the process of facial expression recognition.
Methods. A method was developed and tested for studying analytical and holistic processes in the task of evaluating subjective differences between expressions, using composite and inverted facial images. A distinctive feature of the work is the use of multidimensional scaling, by which the contribution of holistic and analytical processes to the perception of facial expressions is judged from the subjective space of expression similarity obtained when presenting upright and inverted faces.
Results. First, when perceiving upright faces, a characteristic clustering of expressions is observed in the subjective similarity space, which we interpret as a predominance of holistic processes. Second, under face inversion there is a change in the spatial configuration of expressions that may reflect a strengthening of analytical processes. Overall, multidimensional scaling proved effective for addressing the relation between holistic and analytical processes in the recognition of facial expressions.
Conclusion. The analysis of subjective similarity spaces of emotional faces is productive for studying the ratio of analytical to holistic processes in the recognition of facial expressions.
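Classical (Torgerson) multidimensional scaling, the family of techniques behind such subjective similarity spaces, can be sketched in a few lines of NumPy. This is a generic implementation, not the authors' analysis code:

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) multidimensional scaling.

    d : (n, n) symmetric matrix of pairwise dissimilarities.
    Returns an (n, k) configuration of points whose Euclidean
    distances approximate d.
    """
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n    # centring matrix
    b = -0.5 * j @ (d ** 2) @ j            # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(b)         # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]     # keep the k largest
    scale = np.sqrt(np.clip(vals[order], 0, None))
    return vecs[:, order] * scale          # coordinates in k dimensions
```

Plotting the rows of the returned configuration for expression-similarity judgments yields exactly the kind of low-dimensional map in which the clustering of expressions can be inspected.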


2010
Vol 197 (2)
pp. 156-157
Author(s):  
Katie M. Douglas ◽  
Richard J. Porter

Summary
Facial emotion processing was examined in patients with severe depression (n = 68) and a healthy control group (n = 50), using the Facial Expression Recognition Task. A negative interpretation bias was observed in the depression group: neutral faces were more likely to be interpreted as sad and less likely to be interpreted as happy, compared with controls. The depression group also displayed a specific deficit in the recognition of facial expressions of disgust, compared with controls. This may relate to impaired functioning of frontostriatal structures, particularly the basal ganglia.

