Two-Stage Recognition and Beyond for Compound Facial Emotion Recognition

Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2847
Author(s):  
Dorota Kamińska ◽  
Kadir Aktas ◽  
Davit Rizhinashvili ◽  
Danila Kuklyanov ◽  
Abdallah Hussein Sham ◽  
...  

Facial emotion recognition is an inherently complex problem due to individual diversity in facial features and racial and cultural differences. Moreover, facial expressions typically reflect a mixture of emotional states, which can be expressed as compound emotions. Compound facial emotion recognition makes the problem even harder because the discrimination between dominant and complementary emotions is usually weak. To address compound emotion recognition, we created a database of 31,250 facial images depicting different emotions of 115 subjects with a nearly uniform gender distribution. In addition, we organized a competition based on the proposed dataset, held at the FG 2020 workshop. This paper analyzes the winner's approach, a two-stage recognition method (first stage, coarse recognition; second stage, fine recognition) that enhances the classification of symmetrical emotion labels.
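The two-stage idea can be stated simply: a coarse classifier first picks the dominant emotion group, and a fine classifier then separates the complementary emotions within that group. A minimal sketch of that control flow, with hypothetical compound-emotion labels and hand-picked scores standing in for the paper's actual network outputs:

```python
# Illustrative sketch only: the winning method uses trained deep networks;
# the group names and score dictionaries below are hypothetical stand-ins.

COARSE_GROUPS = {
    "positive": ["happily_surprised", "happily_disgusted"],
    "negative": ["sadly_fearful", "sadly_angry"],
}

def coarse_stage(coarse_scores):
    """Stage 1: pick the dominant (coarse) emotion group."""
    return max(coarse_scores, key=coarse_scores.get)

def fine_stage(group, fine_scores):
    """Stage 2: discriminate complementary emotions within the chosen group."""
    candidates = {label: fine_scores[label] for label in COARSE_GROUPS[group]}
    return max(candidates, key=candidates.get)

def two_stage_predict(coarse_scores, fine_scores):
    group = coarse_stage(coarse_scores)
    return fine_stage(group, fine_scores)

# Toy scores as a model might emit them (invented values).
coarse = {"positive": 0.7, "negative": 0.3}
fine = {"happily_surprised": 0.6, "happily_disgusted": 0.4,
        "sadly_fearful": 0.5, "sadly_angry": 0.5}
print(two_stage_predict(coarse, fine))  # happily_surprised
```

Restricting the second stage to one group is what sharpens the weak dominant-vs-complementary discrimination the abstract describes: the fine classifier never has to compare labels across groups.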

Author(s):  
Wang Xiaohua ◽  
Peng Muzi ◽  
Pan Lijuan ◽  
Hu Min ◽  
Jin Chunhua ◽  
...  


PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0260814
Author(s):  
Nazire Duran ◽  
Anthony P. Atkinson

Certain facial features provide useful information for recognition of facial expressions. In two experiments, we investigated whether foveating informative features of briefly presented expressions improves recognition accuracy and whether these features are targeted reflexively when not foveated. Angry, fearful, surprised, and sad or disgusted expressions were presented briefly at locations which would ensure foveation of specific features. Foveating the mouth of fearful, surprised and disgusted expressions improved emotion recognition compared to foveating an eye or cheek or the central brow. Foveating the brow led to equivocal results in anger recognition across the two experiments, which might be due to the different combination of emotions used. There was no consistent evidence suggesting that reflexive first saccades targeted emotion-relevant features; instead, they targeted the closest feature to initial fixation. In a third experiment, angry, fearful, surprised and disgusted expressions were presented for 5 seconds. Duration of task-related fixations in the eyes, brow, nose and mouth regions was modulated by the presented expression. Moreover, longer fixation at the mouth positively correlated with anger and disgust accuracy both when these expressions were freely viewed (Experiment 2b) and when briefly presented at the mouth (Experiment 2a). Finally, an overall preference to fixate the mouth across all expressions correlated positively with anger and disgust accuracy. These findings suggest that foveal processing of informative features is functional/contributory to emotion recognition, but they are not automatically sought out when not foveated, and that facial emotion recognition performance is related to idiosyncratic gaze behaviour.


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2026
Author(s):  
Jung Hwan Kim ◽  
Alwin Poulose ◽  
Dong Seog Han

Facial emotion recognition (FER) systems play a significant role in identifying driver emotions. Accurate facial emotion recognition of drivers in autonomous vehicles can reduce road rage. However, training even an advanced FER model without proper datasets causes poor performance in real-time testing. FER system performance is affected more by the quality of the datasets than by the quality of the algorithms. To improve FER system performance for autonomous vehicles, we propose a facial image threshing (FIT) machine that uses advanced features of pre-trained facial recognition and training from the Xception algorithm. The FIT machine involves removing irrelevant facial images, collecting facial images, correcting misplaced face data, and merging original datasets on a massive scale, in addition to data augmentation. The final FER results of the proposed method improved validation accuracy by 16.95% over the conventional approach on the FER 2013 dataset. A confusion-matrix evaluation on an unseen private dataset shows a 5% improvement over the original approach with the FER 2013 dataset, confirming the real-time testing results.
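The curation steps listed above (removing irrelevant images, correcting misplaced labels, merging datasets) can be sketched as a toy pipeline. This is a hypothetical reconstruction, not the paper's implementation: `detect_face` stands in for the pre-trained face detector, and the record fields are invented:

```python
# Hypothetical sketch of FIT-style dataset curation; a real system would run
# a pre-trained face detector instead of reading a "has_face" flag.

def detect_face(record):
    """Placeholder for a pre-trained face detector."""
    return record.get("has_face", False)

def thresh(records):
    """Drop records with no detectable face (irrelevant images)."""
    return [r for r in records if detect_face(r)]

def correct_labels(records, corrections):
    """Fix misplaced labels via a mapping of image id -> corrected label."""
    return [{**r, "label": corrections.get(r["id"], r["label"])} for r in records]

def merge(*datasets):
    """Merge datasets, keeping one record per image id."""
    seen, merged = set(), []
    for ds in datasets:
        for r in ds:
            if r["id"] not in seen:
                seen.add(r["id"])
                merged.append(r)
    return merged

# Invented records standing in for FER 2013 plus an additional dataset.
fer = [{"id": 1, "label": "angry", "has_face": True},
       {"id": 2, "label": "happy", "has_face": False}]   # no face: removed
extra = [{"id": 3, "label": "sad", "has_face": True}]
clean = correct_labels(thresh(merge(fer, extra)), {1: "disgust"})
print([r["label"] for r in clean])  # ['disgust', 'sad']
```

The ordering matters in practice: filtering before label correction avoids wasting reviewer or model effort on images that will be discarded anyway.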


2021 ◽  
Vol 12 ◽  
Author(s):  
Sindhu Nair Mohan ◽  
Firdaus Mukhtar ◽  
Laura Jobson

While culture and depression influence the way humans process emotion, these two areas of investigation are rarely combined. Therefore, the aim of this study was to investigate differences in facial emotion recognition between Malaysian Malays and Australians of European heritage with and without depression. A total of 88 participants took part in this study (Malays n = 47, Australians n = 41). All participants were screened using the Structured Clinical Interview for DSM-5, Clinician Version (SCID-5-CV) to assess Major Depressive Disorder (MDD) diagnosis, and they also completed the Beck Depression Inventory (BDI). The study included a facial emotion recognition (FER) task in which participants viewed facial images and identified the emotion depicted by each expression. Depression status and cultural group did not significantly influence overall FER accuracy. Malaysian participants without MDD and Australian participants with MDD responded faster on the FER task than Australian participants without MDD. Malaysian participants also recognized fear more accurately than Australian participants. Future studies could examine the extent of the influence of other aspects of culture and participant condition on facial emotion recognition.


2021 ◽  
Author(s):  
Michael J. Spilka ◽  
William R. Keller ◽  
Robert W. Buchanan ◽  
James Gold ◽  
James I. Koenig ◽  
...  

Objective: Difficulties in social cognition are common in individuals with schizophrenia (SZ) and are not ameliorated by antipsychotic treatment. Intranasal oxytocin (OT) administration has been explored as a potential intervention to improve social cognition; however, results are inconsistent, suggesting individual difference variables that may influence treatment response. Less is known about the relationship between endogenous OT and social cognition in SZ, knowledge of which may improve the development of OT-focused therapies. We examined plasma OT in relation to facial emotion recognition and visual attention to salient facial features in SZ and controls. Methods: Forty-two individuals with SZ and 23 healthy controls viewed photographs of facial expressions of varying emotional intensity and identified the emotional expression displayed. Participants' gaze behavior during the task was recorded via eye tracking. Plasma oxytocin concentrations were determined by radioimmunoassay. Results: Individuals with SZ were less accurate than controls at identifying high-intensity fearful facial expressions and low-intensity sad expressions. Lower facial emotion recognition accuracy was associated with lower plasma OT levels in SZ but not in controls. The SZ group showed reduced visual attention to the nose region compared to controls; however, OT was not associated with gaze behavior. Conclusion: Individual differences in endogenous OT predict facial emotion recognition ability in SZ but are not associated with visual attention to salient facial features. Increased understanding of the association between endogenous OT and social cognitive abilities in SZ may help improve the design and interpretation of OT-focused clinical trials in SZ.
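The reported link between plasma OT and recognition accuracy is an individual-differences correlation. A sketch of that kind of analysis with a hand-rolled Pearson coefficient and invented per-participant values (not the study's data):

```python
# Toy correlation analysis; the OT and accuracy values below are invented
# solely to illustrate the statistic, not results from the study.
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-participant plasma OT (pg/mL) and recognition accuracy (%).
ot = [2.1, 3.4, 1.8, 4.0, 2.9]
acc = [55, 70, 50, 78, 66]
print(round(pearson_r(ot, acc), 3))
```

In the study's framing, a positive coefficient in the SZ group but not in controls is what suggests endogenous OT relates to recognition ability specifically in SZ.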


2020 ◽  
Vol 32 (10) ◽  
pp. 3243
Author(s):  
Szu-Yin Lin ◽  
Chao-Ming Wu ◽  
Shih-Lun Chen ◽  
Ting-Lan Lin ◽  
Yi-Wen Tseng
