Building an Emotionally Responsive Avatar with Dynamic Facial Expressions in Human–Computer Interactions

2021 ◽  
Vol 5 (3) ◽  
pp. 13
Author(s):  
Heting Wang ◽  
Vidya Gaddy ◽  
James Ross Beveridge ◽  
Francisco R. Ortega

The role of affect has long been studied in human–computer interaction. Unlike previous studies that focused on the seven basic emotions, this work introduced an avatar named Diana who expresses a higher level of emotional intelligence. To adapt to the user's various affects during interaction, Diana simulates emotions with dynamic facial expressions. When two people collaborated to build blocks, their affects were recognized and labeled using the Affdex SDK, and a descriptive analysis was provided. When participants turned to collaborate with Diana, their subjective responses were collected and the time to completion was recorded. Three modes of Diana were involved: a flat-faced Diana, a Diana that used mimicry facial expressions, and a Diana that used emotionally responsive facial expressions. Twenty-one responses were collected through a five-point Likert scale questionnaire and the NASA TLX. Results from the questionnaires were not statistically different. However, the emotionally responsive Diana obtained more positive responses, and people spent the longest time with the mimicry Diana. In post-study comments, most participants perceived the facial expressions on Diana's face as natural, though four mentioned uncomfortable feelings caused by the Uncanny Valley effect.

2018 ◽  
Vol 115 (43) ◽  
pp. E10013-E10021 ◽  
Author(s):  
Chaona Chen ◽  
Carlos Crivelli ◽  
Oliver G. B. Garrod ◽  
Philippe G. Schyns ◽  
José-Miguel Fernández-Dols ◽  
...  

Real-world studies show that the facial expressions produced during pain and orgasm—two different and intense affective experiences—are virtually indistinguishable. However, this finding is counterintuitive, because facial expressions are widely considered to be a powerful tool for social interaction. Consequently, debate continues as to whether the facial expressions of these extreme positive and negative affective states serve a communicative function. Here, we address this debate from a novel angle by modeling the mental representations of dynamic facial expressions of pain and orgasm in 40 observers in each of two cultures (Western, East Asian) using a data-driven method. Using a complementary approach of machine learning, an information-theoretic analysis, and a human perceptual discrimination task, we show that mental representations of pain and orgasm are physically and perceptually distinct in each culture. Cross-cultural comparisons also revealed that pain is represented by similar face movements across cultures, whereas orgasm showed distinct cultural accents. Together, our data show that mental representations of the facial expressions of pain and orgasm are distinct, which challenges claims of their nondiagnosticity and instead suggests they could be used for communicative purposes. Our results also highlight the potential role of cultural and perceptual factors in shaping the mental representation of these facial expressions. We discuss new research directions to further explore their relationship to the production of facial expressions.


2017 ◽  
Vol 7 (2) ◽  
pp. 177-202
Author(s):  
James A. Clinton ◽  
Stephen W. Briner ◽  
Andrew M. Sherrill ◽  
Thomas Ackerman ◽  
Joseph P. Magliano

Filmmakers must rely on cinematic devices of perspective (close-ups and point-of-view shot sequencing) to emphasize facial expressions associated with affective states. This study explored the extent to which differences in the use of these devices across two films with the same content lead to differences in the understanding of characters' affective states. Participants viewed one of two versions of the films and made affective judgments about how characters felt about one another with respect to sadness and anger. The extent to which the auditory and visual contexts were present when making the judgments was varied across four experiments. The results of the study showed that judgments about sadness differed across the two films, but only when the entire context (sound and visual input) was present. The results are discussed in the context of the role of facial expressions and context in inferring basic emotions.


Author(s):  
Jenni Anttonen ◽  
Veikko Surakka ◽  
Mikko Koivuluoma

The aim of the present paper was to study heart rate changes during video stimulation depicting two actors (male and female) producing dynamic facial expressions of happiness, sadness, and a neutral expression. We measured ballistocardiographic emotion-related heart rate responses with an unobtrusive measurement device called the EMFi chair. Ratings of subjective responses to the video stimuli were also collected. The results showed that the video stimuli evoked significantly different ratings of emotional valence and arousal. Heart rate decelerated in response to all stimuli, and the deceleration was strongest during negative stimulation. Furthermore, stimuli from the male actor evoked significantly larger arousal ratings and heart rate responses than stimuli from the female actor. The results also showed differential responding between female and male participants. The present results support the hypothesis that heart rate decelerates in response to films depicting dynamic negative facial expressions. They also support the idea that the EMFi chair can be used to detect emotional responses from people while they are interacting with technology.


2020 ◽  
Author(s):  
Chaona Chen ◽  
Daniel Messinger ◽  
Yaocong Duan ◽  
Robin A A Ince ◽  
Oliver G. B. Garrod ◽  
...  

Facial expressions support effective social communication by dynamically transmitting complex, multi-layered messages, such as emotion categories and their intensity. How facial expressions achieve this signalling task remains unknown. Here, we address this question by identifying the specific facial movements that convey two key components of emotion communication – emotion classification (such as ‘happy,’ ‘sad’) and intensification (such as ‘very strong’) – in the six classic emotions (happy, surprise, fear, disgust, anger and sad). Using a data-driven, reverse correlation approach and an information-theoretic analysis framework, we identified in 60 Western receivers three communicative functions of face movements: those used to classify the emotion (classifiers), those used to perceive emotional intensity (intensifiers), and those serving the dual role of classifier and intensifier. We then validated the communicative functions of these face movements in a broader set of 18 complex facial expressions of emotion (including excited, shame, anxious, hate). We find that the timings of emotion classifier and intensifier face movements are distinct, with intensifiers peaking earlier or later than classifiers. Together, these results reveal the complexities of facial expressions as a signalling system, in which individual face movements serve specific communicative functions with a clear temporal structure.
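The information-theoretic step in this kind of analysis typically asks how much knowing whether a given face movement was present reduces uncertainty about the emotion a receiver perceives. As an illustrative sketch only (the toy data, the movement name, and the plug-in estimator are assumptions, not the authors' actual pipeline), mutual information between a binary movement indicator and categorical emotion labels can be estimated like this:

```python
from collections import Counter
import math

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired samples."""
    n = len(xs)
    px = Counter(xs)          # marginal counts of X
    py = Counter(ys)          # marginal counts of Y
    pxy = Counter(zip(xs, ys))  # joint counts of (X, Y)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

# Toy data: presence of a hypothetical "lip-corner-puller" movement in each
# stimulus, paired with the emotion label the receiver chose.
movement = [1, 1, 1, 0, 0, 0, 1, 0]
label = ["happy", "happy", "happy", "sad", "sad", "fear", "happy", "sad"]
print(mutual_information(movement, label))  # 1.0 bit: movement fully predicts happy vs. not
```

A movement whose presence carries high mutual information with the chosen category would act as a classifier in the abstract's terminology; repeating the estimate against intensity ratings instead of category labels would probe the intensifier role.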


2021 ◽  
Vol 14 (4) ◽  
pp. 4-22
Author(s):  
O.A. Korolkova ◽  
E.A. Lobodinskaya

In an experimental study, we explored the role of the natural or artificial character of expression and the speed of its exposure in the recognition of emotional facial expressions during stroboscopic presentation. In Series 1, participants identified emotions represented as sequences of frames from a video of a natural facial expression; in Series 2 participants were shown sequences of linear morph images. The exposure speed was varied. The results showed that at any exposure speed, the expressions of happiness and disgust were recognized most accurately. Longer presentation increased the accuracy of assessments of happiness, disgust, and surprise. Expression of surprise, demonstrated as a linear transformation, was recognized more efficiently than frames of natural expression of surprise. Happiness was perceived more accurately on video frames. The accuracy of the disgust recognition did not depend on the type of images. The qualitative nature of the stimuli and the speed of their presentation did not affect the accuracy of sadness recognition. The categorical structure of the perception of expressions was stable in any type of exposed images. The obtained results suggest a qualitative difference in the perception of natural and artificial images of expressions, which can be observed under extreme exposure conditions.


SAGE Open ◽  
2020 ◽  
Vol 10 (2) ◽  
pp. 215824402092335
Author(s):  
Rong Shi

Previous research has focused on documenting the perceptual mechanisms of facial expressions of so-called basic emotions; however, little is known about eye movement in recognizing crying expressions. The present study aimed to clarify the visual pattern and the role of face gender in recognizing smiling and crying expressions. Behavioral reactions and fixation durations were recorded, and proportions of fixation counts and viewing time directed at facial features (eyes, nose, and mouth area) were calculated. Results indicated that crying expressions could be processed and recognized faster than smiling expressions. Across these expressions, the eyes and nose area received more attention than the mouth area, but in smiling facial expressions, participants fixated longer on the mouth area. It seems that proportional gaze allocation at facial features was quantitatively modulated by different expressions, but overall gaze distribution was qualitatively similar across crying and smiling facial expressions. Moreover, eye movements showed that visual attention was modulated by the gender of faces: Participants looked longer at female faces with smiling expressions relative to male faces. Findings are discussed around the perceptual mechanisms underlying facial expression recognition and the interaction between gender and expression processing.


2021 ◽  
Vol 2 ◽  
Author(s):  
C. Martin Grewe ◽  
Tuo Liu ◽  
Christoph Kahl ◽  
Andrea Hildebrandt ◽  
Stefan Zachow

A high realism of avatars is beneficial for virtual reality experiences such as avatar-mediated communication and embodiment. Previous work, however, suggested that the usage of realistic virtual faces can lead to unexpected and undesired effects, including phenomena like the uncanny valley. This work investigates the role of photographic and behavioral realism of avatars with animated facial expressions on perceived realism and congruence ratings. More specifically, we examine ratings of photographic and behavioral realism and their mismatch in differently created avatar faces. Furthermore, we utilize these avatars to investigate the effect of behavioral realism on perceived congruence between video-recorded physical person’s expressions and their imitations by the avatar. We compared two types of avatars, both with four identities that were created from the same facial photographs. The first type of avatars contains expressions that were designed by an artistic expert. The second type contains expressions that were statistically learned from a 3D facial expression database. Our results show that the avatars containing learned facial expressions were rated more photographically and behaviorally realistic and possessed a lower mismatch between the two dimensions. They were also perceived as more congruent to the video-recorded physical person’s expressions. We discuss our findings and the potential benefit of avatars with learned facial expressions for experiences in virtual reality and future research on enfacement.


2012 ◽  
Vol 110 (1) ◽  
pp. 338-350 ◽  
Author(s):  
Mariano Chóliz ◽  
Enrique G. Fernández-Abascal

Recognition of emotional facial expressions is a central area in the psychology of emotion. This study presents two experiments. The first experiment analyzed recognition accuracy for basic emotions including happiness, anger, fear, sadness, surprise, and disgust. Thirty pictures (five for each emotion) were displayed to 96 participants to assess recognition accuracy. The results showed that recognition accuracy varied significantly across emotions. The second experiment analyzed the effects of contextual information on recognition accuracy. Information either congruent or incongruent with a facial expression was displayed before presenting pictures of facial expressions. The results of the second experiment showed that congruent information improved facial expression recognition, whereas incongruent information impaired such recognition.

