Assessing Automated Facial Action Unit Detection Systems for Analyzing Cross-Domain Facial Expression Databases

Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4222
Author(s):  
Shushi Namba ◽  
Wataru Sato ◽  
Masaki Osumi ◽  
Koh Shimokawa

In the field of affective computing, achieving accurate automatic detection of facial movements is an important issue, and great progress has already been made. However, a systematic evaluation of the systems now available against dynamic facial databases remains an unmet need. This study compared the performance of three systems (FaceReader, OpenFace, AFARtoolbox) that detect facial movements corresponding to action units (AUs) derived from the Facial Action Coding System. All three systems detected the presence of AUs in the dynamic facial database at above-chance levels. Moreover, OpenFace and AFAR yielded higher area under the receiver operating characteristic curve (AUC) values than FaceReader. In addition, several confusion biases between facial components (e.g., AU12 and AU14) were observed for each automated AU detection system, and static mode outperformed dynamic mode when analyzing the posed facial database. These findings characterize the prediction patterns of each system and provide guidance for research on facial expressions.
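The study's headline metric, area under the ROC curve per AU, can be sketched with the rank-based definition of AUC below. The detector names, scores, and AU12 labels are purely illustrative, not the study's data.

```python
# Hedged sketch: comparing per-AU detectors by area under the ROC curve,
# as in the FaceReader/OpenFace/AFAR comparison. Data are illustrative.

def roc_auc(labels, scores):
    """Rank-based AUC: probability a positive frame outscores a negative one."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need both positive and negative frames")
    # Count pairwise wins; ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Ground-truth AU12 presence per frame, plus hypothetical detector outputs.
au12_truth = [1, 1, 0, 0, 1, 0]
detectors = {
    "system_a": [0.9, 0.8, 0.3, 0.2, 0.7, 0.4],
    "system_b": [0.6, 0.7, 0.5, 0.4, 0.5, 0.6],
}
for name, scores in detectors.items():
    print(name, round(roc_auc(au12_truth, scores), 3))
```

A higher AUC means the system ranks AU-present frames above AU-absent ones more reliably, which is what the cross-system comparison above measures.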

2021 ◽  
Vol 11 (23) ◽  
pp. 11171
Author(s):  
Shushi Namba ◽  
Wataru Sato ◽  
Sakiko Yoshikawa

Automatic facial action detection is important, but no previous studies have evaluated how accurately pre-trained models detect facial actions as the face rotates from frontal toward profile. Using static facial images obtained at various angles (0°, 15°, 30°, and 45°), we investigated the performance of three automated facial action detection systems (FaceReader, OpenFace, and Py-Feat). Overall performance was best for OpenFace, followed by FaceReader and Py-Feat. The performance of FaceReader decreased significantly at 45° compared with the other angles, while the performance of Py-Feat did not differ among the four angles. The performance of OpenFace decreased as the target face turned sideways. Prediction accuracy and robustness to angle changes varied with the target facial components and the detection system.


CNS Spectrums ◽  
2019 ◽  
Vol 24 (1) ◽  
pp. 204-205
Author(s):  
Mina Boazak ◽  
Robert Cotes

Abstract

Introduction: Facial expressivity in schizophrenia has been a topic of clinical interest for the past century. Besides having difficulty decoding the facial expressions of others, schizophrenia sufferers often have difficulty encoding facial expressions. Traditionally, evaluations of facial expressions have been conducted by trained human observers using the Facial Action Coding System. The process was slow and subject to intra- and inter-observer variability. In the past decade, the traditional facial action coding system developed by Ekman has been adapted for use in affective computing. Here we assess the applications of this adaptation for schizophrenia, the findings of current groups, and the future role of this technology.

Materials and Methods: We review the applications of computer vision technology in schizophrenia using PubMed and Google Scholar with the search criteria "computer vision" AND "schizophrenia" from January 2010 to June 2018.

Results: Five articles were selected for inclusion, representing one case series and four case-control analyses. Authors assessed variations in facial action unit presence, intensity, various measures of length of activation, action unit clustering, congruence, and appropriateness. Findings point to variations in each of these areas, except action unit appropriateness, between control and schizophrenia patients. Computer vision techniques were also demonstrated to have high accuracy in classifying schizophrenia from control patients, reaching an AUC just under 0.9 in one study, and to predict psychometric scores, reaching Pearson correlation values of under 0.7.

Discussion: Our review of the literature demonstrates agreement between the findings of traditional and contemporary assessment techniques of facial expressivity in schizophrenia. It also demonstrates that current computer vision techniques can differentiate schizophrenia from control populations and predict psychometric scores. Nevertheless, the predictive accuracy of these technologies leaves room for growth. On analysis, our group found two modifiable areas that may contribute to improving algorithm accuracy: assessment protocol and feature inclusion. Based on our review, we recommend assessing facial expressivity during a period of silence in addition to an assessment during a clinically structured interview utilizing emotionally evocative questions. Furthermore, where underfit is a problem, we recommend progressive inclusion of features including action unit activation, intensity, rate of onset and offset, clustering (including richness, distribution, and typicality), and congruence. Inclusion of each of these features may improve algorithm predictive accuracy.

Conclusion: We review current applications of computer vision in the assessment of facial expressions in schizophrenia. We present the results of current innovative works in the field and discuss areas for continued development.
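The recommended feature set (activation, intensity, and onset/offset rates) could be pooled per subject along the lines below. The frame format, AU names, and data are illustrative assumptions, not the reviewed pipelines.

```python
# Hedged sketch of pooling per-frame AU readings into the kinds of
# expressivity features the review recommends. Data are illustrative.

def expressivity_features(frames):
    """frames: list of dicts mapping AU name -> intensity per frame."""
    aus = sorted({au for frame in frames for au in frame})
    feats = {}
    for au in aus:
        series = [frame.get(au, 0.0) for frame in frames]
        active = [v > 0 for v in series]
        denom = max(len(series) - 1, 1)
        feats[f"{au}_activation"] = sum(active) / len(series)      # fraction of active frames
        feats[f"{au}_mean_intensity"] = sum(series) / len(series)
        # Onset/offset rates: inactive->active and active->inactive transitions.
        feats[f"{au}_onset_rate"] = sum(
            1 for a, b in zip(active, active[1:]) if not a and b) / denom
        feats[f"{au}_offset_rate"] = sum(
            1 for a, b in zip(active, active[1:]) if a and not b) / denom
    return feats

frames = [{"AU12": 0.0}, {"AU12": 2.0}, {"AU12": 3.0}, {"AU12": 0.0}, {"AU12": 1.0}]
f = expressivity_features(frames)
print(f["AU12_activation"], f["AU12_onset_rate"])
```

A vector of such features per subject is the kind of input a downstream classifier or psychometric-score regressor would consume.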


2021 ◽  
Vol 7 (1) ◽  
pp. 13-24
Author(s):  
Matahari Bhakti Nendya ◽  
Lailatul Husniah ◽  
Hardianto Wibowo ◽  
Eko Mulyanto Yuniarno

Facial expressions on 3D virtual characters play an important role in producing an animated film. Obtaining the desired facial expression can be difficult and time-consuming for an animator. This study was conducted to obtain facial expressions by combining several Action Units from FACS and implementing them on the face of a 3D virtual character. FACS Action Units were chosen because they correspond to the structure of human facial muscles. The experiments produced Action Unit combinations that form expressions such as a joy expression from the combination AU 12+26 and a surprise expression from the combination AU 4+5+26. For the sadness and disgust expressions, some AUs were not represented in the 3D model, so the resulting expressions were less than optimal.
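The AU combinations reported above can be expressed as a simple lookup that drives rig weights. The uniform-intensity API below is a hypothetical illustration, not the paper's implementation.

```python
# Hedged sketch: FACS action-unit combinations mapped to target expressions
# for a 3D character rig, following the abstract's joy (AU 12+26) and
# surprise (AU 4+5+26) combinations. The weighting scheme is hypothetical.

EXPRESSIONS = {
    "joy": (12, 26),        # lip corner puller + jaw drop
    "surprise": (4, 5, 26), # per the abstract's combination
}

def au_weights(expression, intensity=1.0):
    """Return an AU -> weight mapping to drive the character's blendshapes."""
    if expression not in EXPRESSIONS:
        raise KeyError(f"no AU combination defined for {expression!r}")
    return {au: intensity for au in EXPRESSIONS[expression]}

print(au_weights("joy", 0.8))  # {12: 0.8, 26: 0.8}
```

Missing entries for sadness and disgust mirror the paper's finding that some required AUs were not represented in the 3D model.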


2022 ◽  
Vol 15 ◽  
Author(s):  
Chongwen Wang ◽  
Zicheng Wang

Facial action unit (AU) detection is an important task in affective computing and has attracted extensive attention in computer vision and artificial intelligence. Previous studies of AU detection usually encode complex regional feature representations with manually defined facial landmarks and learn to model the relationships among AUs via graph neural networks. Although some progress has been achieved, existing methods still struggle to capture the exclusive and concurrent relationships among different combinations of facial AUs. To circumvent this issue, we propose a new progressive multi-scale vision transformer (PMVT) to capture the complex relationships among different AUs for a wide range of expressions in a data-driven fashion. PMVT is based on a multi-scale self-attention mechanism that can flexibly attend to a sequence of image patches to encode the critical cues for AUs. Compared with previous AU detection methods, PMVT has two benefits: (i) it does not rely on manually defined facial landmarks to extract regional representations, and (ii) it encodes facial regions with adaptive receptive fields, facilitating flexible representation of different AUs. Experimental results show that PMVT improves AU detection accuracy on the popular BP4D and DISFA datasets and obtains consistent improvements over other state-of-the-art AU detection methods. Visualization results show that PMVT automatically perceives the discriminative facial regions for robust AU detection.
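At PMVT's core is self-attention over a sequence of patch embeddings. A minimal single-head sketch, without the multi-scale pyramid or learned query/key/value projections of the real model, looks like this; the shapes are chosen for illustration.

```python
# Hedged sketch of scaled dot-product self-attention over image patches:
# each patch mixes in cues from every other patch, weighted by affinity.
import numpy as np

def self_attention(x):
    """x: (num_patches, dim) patch embeddings -> attended features."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                  # pairwise patch affinities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
    return weights @ x                             # mix patches by attention

patches = np.random.default_rng(0).normal(size=(16, 32))
out = self_attention(patches)
print(out.shape)  # (16, 32)
```

Because the attention weights are data-dependent, the effective receptive field of each patch adapts to the input, which is the property the abstract credits for flexible AU representation.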


SINERGI ◽  
2016 ◽  
Vol 20 (1) ◽  
pp. 74
Author(s):  
Puji Aswari ◽  
Nova Eka Diana

Facial expression is a universal language, and changes in facial expression can even support decision-making. In 1972, Paul Ekman classified basic human emotions into six types: happiness, sadness, surprise, anger, fear, and disgust. Ekman and Wallace Friesen later developed a tool for measuring facial movements called the Facial Action Coding System (FACS). FACS determines facial expressions based on movements of the facial muscles, termed Action Units (AUs). This study aims to identify the emotion of interest experienced by a person, based on the AUs defined by Paul Ekman, by comparing two images: a neutral face image and an expressive face image. The result of this research is an application that can identify the emotion of interest with an accuracy of 80%, a True Positive Rate of 80%, and a True Negative Rate of 80%. This research is expected to reveal the characteristic action units that form the emotion of interest, as well as to inform the evaluation of teaching and learning in a programming course.
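The paper's two-image comparison can be sketched as a per-AU intensity difference between the neutral and expressive images. The AU set chosen for "interest" and the threshold here are assumptions for illustration, not the paper's exact rule.

```python
# Hedged sketch: flag the "interest" emotion when the relevant AUs
# strengthen from a neutral image to an expressive one. The AU set and
# threshold are illustrative assumptions.

INTEREST_AUS = ("AU01", "AU02", "AU05")  # hypothetical AU set for interest

def detect_interest(neutral, expressive, threshold=0.5):
    """Both args: dicts AU -> intensity. True if all interest AUs increase."""
    return all(
        expressive.get(au, 0.0) - neutral.get(au, 0.0) >= threshold
        for au in INTEREST_AUS
    )

neutral = {"AU01": 0.1, "AU02": 0.0, "AU05": 0.2}
expressive = {"AU01": 1.0, "AU02": 0.8, "AU05": 0.9}
print(detect_interest(neutral, expressive))  # True
```

Running such a detector over labeled image pairs and counting hits among positives and negatives yields the accuracy, True Positive Rate, and True Negative Rate the abstract reports.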


PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0244647
Author(s):  
Michal Kawulok ◽  
Jakub Nalepa ◽  
Jolanta Kawulok ◽  
Bogdan Smolka

Applying computer vision techniques to distinguish between spontaneous and posed smiles is an active research topic in affective computing. Although many works addressing this problem have been published, and a couple of excellent benchmark databases created, the existing state-of-the-art approaches do not exploit the action units defined within the Facial Action Coding System, which has become a standard in facial expression analysis. In this work, we explore the possibility of extracting discriminative features directly from the dynamics of facial action units to differentiate between genuine and posed smiles. We report the results of an experimental study showing that the proposed features offer performance competitive with features based on facial landmark analysis and on textural descriptors extracted from spatio-temporal blocks. We make these features publicly available for the UvA-NEMO and BBC databases, which will allow other researchers to further improve classification scores while preserving the interpretability afforded by facial action units. Moreover, we have developed a new technique for identifying the smile phases, which is robust to noise and allows for continuous analysis of facial videos.
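Smile-phase identification can be sketched as splitting an AU12 intensity signal into onset (rising), apex (near-peak plateau), and offset (falling) phases. The peak-relative threshold below is an illustrative assumption, not the paper's method.

```python
# Hedged sketch: label each frame of an AU12 intensity signal as
# 'onset', 'apex', or 'offset' relative to the peak. The apex threshold
# is an illustrative assumption.

def smile_phases(intensity, apex_frac=0.9):
    """Label each frame of a smile by its phase around the intensity peak."""
    peak = max(intensity)
    peak_i = intensity.index(peak)
    labels = []
    for i, v in enumerate(intensity):
        if peak > 0 and v >= apex_frac * peak:
            labels.append("apex")       # within apex_frac of the peak
        elif i < peak_i:
            labels.append("onset")      # rising toward the peak
        else:
            labels.append("offset")     # falling after the peak
    return labels

signal = [0.0, 0.2, 0.6, 1.0, 0.95, 0.5, 0.1]
print(smile_phases(signal))
```

Phase boundaries found this way are what dynamics features (e.g., onset speed, apex duration) would then be computed over.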


2018 ◽  
Vol 9 (2) ◽  
pp. 31-38
Author(s):  
Fransisca Adis ◽  
Yohanes Merci Widiastomo

Facial expression is one of the aspects that can convey story and a character's emotion in 3D animation. To achieve that, we need to plan the character's face from the very beginning of production. At an early stage, the character designer needs to think about the expressions after they have finished the character design. The rigger needs to create a flexible rig to realize the design, so the animator gets a clear picture of how to animate the face. The Facial Action Coding System (FACS), originally developed by Carl-Herman Hjortsjö and adopted by Paul Ekman and Wallace V. Friesen, can be used to identify emotion in a person generally. This paper explains how the writer uses FACS to help design facial expressions for 3D characters. FACS is used to determine the basic shapes of the face when showing emotion, compared against an actual face reference. Keywords: animation, facial expression, non-dialog


2009 ◽  
Vol 35 (2) ◽  
pp. 198-201 ◽  
Author(s):  
Lei WANG ◽  
Bei-Ji ZOU ◽  
Xiao-Ning PENG

Author(s):  
Dakai Ren ◽  
Xiangmin Wen ◽  
Jiazhong Chen ◽  
Yu Han ◽  
Shiqi Zhang
