Confidence of emotion expression recognition recruits brain regions outside the face perception network

2018 ◽  
Vol 14 (1) ◽  
pp. 81-95 ◽  
Author(s):  
Indrit Bègue ◽  
Maarten Vaessen ◽  
Jeremy Hofmeister ◽  
Marice Pereira ◽  
Sophie Schwartz ◽  
...  
2014 ◽  
Vol 28 (3) ◽  
pp. 148-161 ◽  
Author(s):  
David Friedman ◽  
Ray Johnson

A cardinal feature of aging is a decline in episodic memory (EM). Nevertheless, there is evidence that some older adults may be able to “compensate” for failures in recollection-based processing by recruiting brain regions and cognitive processes not normally recruited by the young. We review the evidence suggesting that age-related declines in EM performance and recollection-related brain activity (left-parietal EM effect; LPEM) are due to altered processing at encoding. We describe results from our laboratory on differences in encoding- and retrieval-related activity between young and older adults. We then show that, relative to the young, in older adults brain activity at encoding is reduced over a brain region believed to be crucial for successful semantic elaboration in a 400–1,400-ms interval (left inferior prefrontal cortex, LIPFC; Johnson, Nessler, & Friedman, 2013 ; Nessler, Friedman, Johnson, & Bersick, 2007 ; Nessler, Johnson, Bersick, & Friedman, 2006 ). This reduced brain activity is associated with diminished subsequent recognition-memory performance and the LPEM at retrieval. We provide evidence for this premise by demonstrating that disrupting encoding-related processes during this 400–1,400-ms interval in young adults affords causal support for the hypothesis that the reduction over LIPFC during encoding produces the hallmarks of an age-related EM deficit: normal semantic retrieval at encoding, reduced subsequent episodic recognition accuracy, free recall, and the LPEM. Finally, we show that the reduced LPEM in young adults is associated with “additional” brain activity over similar brain areas as those activated when older adults show deficient retrieval. 
Hence, rather than supporting the compensation hypothesis, these data are more consistent with the scaffolding hypothesis, in which the recruitment of additional cognitive processes is an adaptive response across the life span in the face of momentary increases in task demand due to poorly-encoded episodic memories.


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3046
Author(s):  
Shervin Minaee ◽  
Mehdi Minaei ◽  
Amirali Abdolrashidi

Facial expression recognition has been an active area of research over the past few decades, and it remains challenging due to high intra-class variation. Traditional approaches rely on hand-crafted features such as SIFT, HOG, and LBP, followed by a classifier trained on a database of images or videos. Most of these methods perform reasonably well on datasets captured under controlled conditions but fail to perform as well on more challenging datasets with greater image variation and partial faces. In recent years, several works have proposed end-to-end frameworks for facial expression recognition using deep learning models. Despite their better performance, there is still much room for improvement. In this work, we propose a deep learning approach based on an attentional convolutional network that is able to focus on important parts of the face, achieving significant improvement over previous models on multiple datasets, including FER-2013, CK+, FERG, and JAFFE. We also use a visualization technique to find the facial regions important for detecting different emotions, based on the classifier's output. Our experimental results show that different emotions are sensitive to different parts of the face.
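As a toy illustration of the attention idea (not the paper's actual architecture, whose details are not given in this abstract), the sketch below applies a softmax spatial attention map to a feature tensor in plain NumPy, so channels are reweighted toward the most active facial region:

```python
import numpy as np

def spatial_attention(features):
    """Compute a spatial attention map from a (C, H, W) feature tensor
    by averaging across channels and taking a softmax over positions."""
    avg = features.mean(axis=0)                  # (H, W) channel average
    weights = np.exp(avg - avg.max())
    weights /= weights.sum()                     # softmax over spatial positions
    return weights

def apply_attention(features, weights):
    """Reweight every channel by the spatial attention map."""
    return features * weights[None, :, :]

# Toy 4-channel feature map: one region (e.g. a mouth area) is more active.
feat = np.zeros((4, 8, 8))
feat[:, 5:7, 2:6] = 3.0
attn = spatial_attention(feat)
out = apply_attention(feat, attn)
# Most of the attention mass falls on the active region.
```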


2021 ◽  
Vol 11 (4) ◽  
pp. 1428
Author(s):  
Haopeng Wu ◽  
Zhiying Lu ◽  
Jianfeng Zhang ◽  
Xin Li ◽  
Mingyue Zhao ◽  
...  

This paper addresses the problem of Facial Expression Recognition (FER), focusing on subtle facial movements. Traditional methods often overfit or lose information because of insufficient data and manual feature selection. Our proposed network, the Multi-features Cooperative Deep Convolutional Network (MC-DCN), instead attends both to the overall features of the face and to the trends of its key parts. Video data are processed first: an ensemble of regression trees (ERT) extracts the overall contour of the face, and an attention model then picks out the parts of the face most susceptible to expression change. The combined effect of these two methods yields what we call a local feature map. The video data are then fed to MC-DCN, which contains parallel sub-networks: while the overall spatiotemporal characteristics of facial expressions are obtained from the image sequence, the selected key parts allow better learning of the expression changes caused by subtle facial movements. By combining local and global features, the proposed method acquires more information, leading to better performance. Experimental results show that MC-DCN achieves recognition rates of 95%, 78.6%, and 78.3% on the SAVEE, MMI, and edited GEMEP datasets, respectively.
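The local/global combination step can be sketched generically; the per-stream normalization and concatenation below are illustrative assumptions, not the paper's exact fusion rule:

```python
import numpy as np

def fuse_features(global_feat, local_feat):
    """Late fusion: L2-normalize each stream, then concatenate, so neither
    the global-face branch nor the key-part branch dominates by scale."""
    g = global_feat / (np.linalg.norm(global_feat) + 1e-8)
    l = local_feat / (np.linalg.norm(local_feat) + 1e-8)
    return np.concatenate([g, l])

g = np.array([3.0, 4.0])        # stand-in for a global-face descriptor
l = np.array([0.05, 0.0, 0.1])  # stand-in for a key-part descriptor
fused = fuse_features(g, l)
```

Normalizing before concatenation is one common way to keep a weak branch from being drowned out; a learned weighting would be another.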


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2003 ◽  
Author(s):  
Xiaoliang Zhu ◽  
Shihao Ye ◽  
Liang Zhao ◽  
Zhicheng Dai

Improving performance on the AFEW (Acted Facial Expressions in the Wild) dataset, a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), is a popular benchmark for emotion recognition under various real-world constraints, including uneven illumination, head deflection, and facial posture. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, faces are detected in each frame of a video sequence, and the corresponding face ROI (region of interest) is extracted to obtain the face images; the face images in each frame are then aligned using the positions of the facial feature points. Second, the aligned face images are fed to a residual neural network to extract the spatial features of the corresponding facial expressions, and these spatial features are passed through the hybrid attention module to obtain fused expression features. Finally, the fused features are fed to a gated recurrent unit to extract the temporal features of facial expressions, and the temporal features are passed to a fully connected layer to classify and recognize the expressions. Experiments on the CK+ (extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences), and AFEW datasets yielded recognition accuracy rates of 98.46%, 87.31%, and 53.44%, respectively, demonstrating that the proposed method is not only competitive with state-of-the-art methods but also improves performance on AFEW by more than 2%, a significant gain for facial expression recognition in natural environments.
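The temporal stage can be illustrated with a single GRU step in NumPy; the weight shapes and random inputs below are stand-ins for the per-frame spatial features described above, not the paper's trained parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: the update gate z and reset gate r decide how much of
    the running expression summary h to keep versus overwrite with frame x."""
    z = sigmoid(Wz @ x + Uz @ h)            # update gate
    r = sigmoid(Wr @ x + Ur @ h)            # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
D, H = 4, 3   # per-frame feature size, hidden size
params = [rng.normal(scale=0.1, size=s) for s in [(H, D), (H, H)] * 3]
h = np.zeros(H)
for frame_feat in rng.normal(size=(10, D)):  # 10 frames of spatial features
    h = gru_step(frame_feat, h, *params)
# h now summarizes the temporal dynamics and would feed the classifier.
```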


2021 ◽  
Author(s):  
Thomas Murray ◽  
Justin O'Brien ◽  
Veena Kumari

The recognition of negative emotions from facial expressions is shown to decline across the adult lifespan, with some evidence that this decline begins around middle age. While some studies have suggested ageing may be associated with changes in neural response to emotional expressions, it is not known whether ageing is associated with changes in the network connectivity associated with processing emotional expressions. In this study, we examined the effect of participant age on whole-brain connectivity to various brain regions that have been associated with connectivity during emotion processing: the left and right amygdalae, medial prefrontal cortex (mPFC), and right posterior superior temporal sulcus (rpSTS). The study involved healthy participants aged 20-65 who viewed facial expressions displaying anger, fear, happiness, and neutral expressions during functional magnetic resonance imaging (fMRI). We found effects of age on connectivity between the left amygdala and voxels in the occipital pole and cerebellum, between the right amygdala and voxels in the frontal pole, and between the rpSTS and voxels in the orbitofrontal cortex, but no effect of age on connectivity with the mPFC. Furthermore, ageing was more strongly associated with a decline in connectivity to the left amygdala and rpSTS for negative expressions than for happy and neutral expressions, consistent with the literature suggesting a specific age-related decline in the recognition of negative emotions. These results add to the literature surrounding ageing and expression recognition by suggesting that changes in underlying functional connectivity might contribute to changes in recognition of negative facial expressions across the adult lifespan.
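A minimal stand-in for seed-based connectivity (the study's actual pipeline, e.g. its task regressors and preprocessing, is not specified in this abstract) is the Pearson correlation between a seed region's time series and every voxel:

```python
import numpy as np

def seed_connectivity(seed_ts, voxel_ts):
    """Pearson correlation between a seed time series (T,) and each voxel's
    time series (T, V): a simple whole-brain connectivity map."""
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    vox = (voxel_ts - voxel_ts.mean(axis=0)) / voxel_ts.std(axis=0)
    return (vox * seed[:, None]).mean(axis=0)   # r per voxel

rng = np.random.default_rng(1)
T, V = 200, 5
seed = rng.normal(size=T)            # e.g. an amygdala seed time series
voxels = rng.normal(size=(T, V))
voxels[:, 0] += 2 * seed             # voxel 0 genuinely tracks the seed
conn = seed_connectivity(seed, voxels)
# conn[0] is large; the unrelated voxels correlate only weakly.
```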


2021 ◽  
Author(s):  
◽  
Ella Macaskill

<p>Face recognition is a fundamental cognitive function that is essential for social interaction – yet not everyone has it. Developmental prosopagnosia is a lifelong condition in which people have severe difficulty recognising faces but have normal intellect and no brain damage. Despite much research, the component processes of face recognition that are impaired in developmental prosopagnosia are not well understood. Two core processes are face perception, being the formation of visual representations of a currently seen face, and face memory, being the storage, maintenance, and retrieval of those representations. Most studies of developmental prosopagnosia focus on face memory deficits, but a few recent studies indicate that face perception deficits might also be important. Characterising face perception in developmental prosopagnosia is crucial for a better understanding of the condition. In this thesis, I addressed this issue in a large-scale experiment with 108 developmental prosopagnosics and 136 matched controls. I assessed face perception abilities with multiple measures and ran a broad range of analyses to establish the severity, scope, and nature of face perception deficits in developmental prosopagnosia. Three major results stand out. First, face perception deficits in developmental prosopagnosia were severe, and could be comparable in size to face memory deficits. Second, the face perception deficits were widespread, affecting the whole sample rather than a subset of individuals. Third, the deficits were mainly driven by impairments to mechanisms specialised for processing upright faces. Further analyses revealed several other features of the deficits, including the use of atypical and inconsistent strategies for perceiving faces, difficulties matching the same face across different pictures, equivalent impact of lighting and viewpoint variations in face images, and atypical perceptual and non-perceptual components of test performance. 
Overall, my thesis shows that face perception deficits are more central to developmental prosopagnosia than previously thought and motivates further research on the issue.</p>


2014 ◽  
Vol 543-547 ◽  
pp. 2350-2353
Author(s):  
Xiao Yan Wan

To extract the facial expressions of critically ill patients and enable computer-assisted intelligent nursing, an improved facial expression recognition method based on the active appearance model (AAM) is proposed, with a support vector machine (SVM) used for classification. An AAM-based face model is constructed, and an attribute reduction algorithm from rough set theory, combined with affine transformation, removes invalid and redundant feature points. The expressions of critically ill patients are then classified and recognized by the SVM. Face image poses are adjusted, improving the method's adaptability to the varied postures of critically ill patients; the new method thus reduces the effect of patient posture on the recognition rate to a certain extent, raising the highest average recognition rate by about 7%. The approach enables computer-vision-based intelligent monitoring and nursing of critically ill patients, enhancing nursing quality and supporting timely rescue and treatment.
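The final SVM classification step can be sketched as a linear decision rule; the weights, bias, feature vectors, and class labels below are hypothetical placeholders for illustration, not trained values from the paper:

```python
import numpy as np

def svm_predict(X, w, b, labels=("distress", "calm")):
    """Linear SVM decision rule: the sign of w.x + b selects the class.
    In practice w and b would be learned from labelled patient-expression
    features (e.g. reduced AAM feature points)."""
    scores = X @ w + b
    return [labels[0] if s > 0 else labels[1] for s in scores]

# Hypothetical 2-D feature vectors for two patient face images.
w = np.array([1.0, -1.0])
b = 0.0
X = np.array([[2.0, 0.5],    # score  1.5 -> first label
              [0.1, 3.0]])   # score -2.9 -> second label
preds = svm_predict(X, w, b)
```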


2021 ◽  
pp. 104-117
Author(s):  
Mari Fitzduff

This chapter looks at the importance of understanding the many cultural differences that exist between different groups and in different contexts around the world. Without a sensitivity to such differences, wars can be lost and positive influences minimized. These differences include the existence of high-context versus low-context societies, differing hierarchical approaches to power and authority, collectivist versus individualist societies, differing emotion expression/recognition, gender differences, differing evidencing of empathy, face preferences, and communication styles. Lack of cultural attunement to these issues can exacerbate misunderstandings and conflicts, unless understood and factored into difficult strategies and dialogues.


2011 ◽  
pp. 5-44 ◽  
Author(s):  
Daijin Kim ◽  
Jaewon Sung

Face detection is the most fundamental step in research on image-based automated face analysis, such as face tracking, face recognition, face authentication, facial expression recognition, and facial gesture recognition. Given a novel face image, we must determine where the face is located and how large it is, so that we can restrict attention to the face patch in the image and normalize its scale and orientation. Face detection results are usually not stable: the detected face rectangle can be larger or smaller than the real face in the image. Therefore, many researchers use eye detectors to obtain stably normalized face images. Because the eyes form salient patterns in the human face image, they can be located reliably and used for face image normalization. Eye detection becomes even more important when applying model-based face image analysis approaches.
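The eye-based normalization described above reduces to an affine alignment; a minimal sketch, assuming the two eye centres have already been detected:

```python
import numpy as np

def eyes_to_alignment(left_eye, right_eye, target_dist=60.0):
    """From detected eye centres (x, y), derive the rotation angle (degrees)
    and scale factor that make the eyes horizontal and a fixed canonical
    distance apart. target_dist is an illustrative choice, not a standard."""
    dx, dy = np.subtract(right_eye, left_eye)
    angle = np.degrees(np.arctan2(dy, dx))   # rotation needed to level the eyes
    scale = target_dist / np.hypot(dx, dy)   # scale to the canonical eye distance
    return angle, scale

# Eyes already level and 60 px apart: no rotation, unit scale.
angle, scale = eyes_to_alignment((100, 120), (160, 120))
```

The resulting angle and scale would typically be fed into an affine warp (e.g. a rotation matrix about the eye midpoint) to produce the normalized face patch.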


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Yifeng Zhao ◽  
Deyun Chen

Aiming at facial expression recognition under unconstrained conditions, a method based on an improved capsule network model is proposed. First, the illumination of the expression image is normalized using an improved Weber-face method, and the key points of the face are detected with a Gaussian process regression tree. Then, a 3D morphable model (3DMM) is introduced: a 3D face shape consistent with the face in the image is estimated iteratively, further improving image quality for face pose standardization. We take the view that the convolutional features used in facial expression recognition need to be trained from scratch, with as many varied samples as possible included during training. Finally, combining traditional deep learning techniques with a capsule configuration, we add an attention layer after the primary capsule layer of the capsule network and propose an improved capsule structure suitable for expression recognition. Experimental results on the JAFFE and BU-3DFE datasets show recognition rates of 96.66% and 80.64%, respectively.
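Capsule layers conventionally use the squash nonlinearity from the original capsule-network literature (a background detail, not something stated in this abstract); it preserves a capsule vector's direction while mapping its length into [0, 1) so that length can act as a presence probability:

```python
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    """Capsule-network squash: short vectors shrink toward zero, long
    vectors approach unit length, direction is unchanged."""
    sq = np.sum(v ** 2, axis=axis, keepdims=True)
    norm = np.sqrt(sq + eps)
    return (sq / (1.0 + sq)) * (v / norm)

caps = np.array([[0.0, 0.0, 0.0],     # inactive capsule
                 [10.0, 0.0, 0.0]])   # strongly active capsule
out = squash(caps)
# Row 0 stays near zero length; row 1 ends up just under unit length.
```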

