Emotion expression and affective computing on international cyber languages

2015 ◽  
pp. 173-177
Author(s):  
Xuan Zhou ◽  
Shuang Huang ◽  
Xueer Yu ◽  
Zhenyi Yang ◽  
Weihui Dai
2015 ◽  
Vol 14 (3) ◽  
pp. 153-162 ◽  
Author(s):  
Andrea Fischbach ◽  
Philipp W. Lichtenthaler ◽  
Nina Horstmann

Abstract. People believe women are more emotional than men, but it remains unclear to what extent such emotion stereotypes affect leadership perceptions. Extending the think manager-think male paradigm (Schein, 1973), we examined the similarity of emotion expression descriptions of women, men, and managers. In a field-based online experiment, 1,098 participants (male and female managers and employees) rated one of seven target groups on 17 emotions: men or women (in general, as managers, or as successful managers), or successful managers. Men in general are described as more similar to successful managers in emotion expression than are women in general. Only with the label manager or successful manager do women-successful manager similarities in emotion expression increase. These emotion stereotypes might hinder women's leadership success.


2021 ◽  
Author(s):  
Intissar Khalifa ◽  
Ridha Ejbali ◽  
Raimondo Schettini ◽  
Mourad Zaied

Abstract. Affective computing is a key research topic in artificial intelligence, applied to psychology and machines, that concerns the estimation and measurement of human emotions. A person's body language is one of the most significant sources of information during a job interview, and it reflects a deep psychological state that is often missing from other data sources. In our work, we combine the two tasks of pose estimation and emotion classification for emotional body gesture recognition, proposing a deep multi-stage architecture that is able to deal with both tasks. Our deep pose decoding method detects and tracks the candidate's skeleton in a video using a combination of a depthwise convolutional network and a detection-based method for 2D pose reconstruction. Moreover, we propose a representation technique based on the superposition of skeletons that generates, for each video sequence, a single image synthesizing the different poses of the subject. We call this image a 'history pose image', and it is used as input to a convolutional neural network model based on the Visual Geometry Group (VGG) architecture. We demonstrate the effectiveness of our method in comparison with state-of-the-art methods on the standard Common Objects in Context (COCO) keypoint dataset and the Face and Body gesture video database.
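The superposition idea behind the 'history pose image' can be sketched as follows. This is a minimal illustration, not the authors' implementation: the bone list, canvas size, and the choice of weighting later frames more brightly (so motion order survives the collapse into one image) are all illustrative assumptions.

```python
import numpy as np

# Illustrative bone connectivity: pairs of joint indices forming a
# simple chain (a real skeleton would use COCO's 17-keypoint layout).
BONES = [(0, 1), (1, 2), (2, 3)]

def draw_skeleton(canvas, joints, intensity):
    """Rasterize one skeleton's bones onto the canvas at a given intensity."""
    for a, b in BONES:
        x0, y0 = joints[a]
        x1, y1 = joints[b]
        n = max(abs(x1 - x0), abs(y1 - y0), 1)
        for t in np.linspace(0.0, 1.0, n + 1):
            x = int(round(x0 + t * (x1 - x0)))
            y = int(round(y0 + t * (y1 - y0)))
            if 0 <= y < canvas.shape[0] and 0 <= x < canvas.shape[1]:
                canvas[y, x] = max(canvas[y, x], intensity)

def history_pose_image(skeleton_sequence, size=(64, 64)):
    """Superpose the per-frame skeletons of a video into one image.

    Later frames are drawn brighter, so the single output image still
    encodes the temporal order of the subject's poses (an assumption
    about the weighting scheme, chosen here for illustration).
    """
    canvas = np.zeros(size, dtype=np.float32)
    n_frames = len(skeleton_sequence)
    for i, joints in enumerate(skeleton_sequence):
        draw_skeleton(canvas, joints, intensity=(i + 1) / n_frames)
    return canvas
```

The resulting single-channel image could then be fed (after resizing and channel replication) to a VGG-style classifier, which is the role the abstract assigns to the history pose image.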


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4222
Author(s):  
Shushi Namba ◽  
Wataru Sato ◽  
Masaki Osumi ◽  
Koh Shimokawa

In the field of affective computing, accurate automatic detection of facial movements is an important issue, and great progress has already been made. However, a systematic evaluation of current systems on a dynamic facial database remains an unmet need. This study compared the performance of three systems (FaceReader, OpenFace, AFARtoolbox) that detect facial movements corresponding to action units (AUs) derived from the Facial Action Coding System. All three systems detected the presence of AUs in the dynamic facial database at above-chance levels. Moreover, OpenFace and AFAR yielded higher area under the receiver operating characteristic curve (AUC) values than FaceReader. In addition, several confusion biases between facial components (e.g., AU12 and AU14) were observed for each automated AU detection system, and the static mode was superior to the dynamic mode for analyzing the posed facial database. These findings characterize the prediction patterns of each system and provide guidance for research on facial expressions.
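The AUC comparison described above can be reproduced from frame-level predictions with a small amount of code. The sketch below computes AUC via the rank-sum (Mann-Whitney) formulation; the labels and scores are hypothetical stand-ins for per-frame AU annotations and detector confidences, not data from the study.

```python
import numpy as np

def auc_score(labels, scores):
    """AUC as the probability that a randomly chosen positive frame
    receives a higher detector score than a randomly chosen negative
    frame (ties counted as half), per the Mann-Whitney formulation."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    if len(pos) == 0 or len(neg) == 0:
        return float("nan")  # AUC undefined without both classes
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

Computing this per AU and per system (e.g., one score column each for FaceReader, OpenFace, and AFAR against the same ground-truth AU labels) yields exactly the kind of system-by-AU comparison the abstract reports.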

