Estimating the Intensity of Facial Expressions Accompanying Feedback Responses in Multiparty Video-Mediated Communication

Author(s):  
Ryosuke Ueno ◽  
Yukiko I. Nakano ◽  
Jie Zeng ◽  
Fumio Nihei


2021 ◽  
Vol 2 ◽  
Author(s):  
C. Martin Grewe ◽  
Tuo Liu ◽  
Christoph Kahl ◽  
Andrea Hildebrandt ◽  
Stefan Zachow

High avatar realism benefits virtual reality experiences such as avatar-mediated communication and embodiment. Previous work, however, suggests that the use of realistic virtual faces can lead to unexpected and undesired effects, including phenomena like the uncanny valley. This work investigates how the photographic and behavioral realism of avatars with animated facial expressions affects perceived realism and congruence ratings. More specifically, we examine ratings of photographic and behavioral realism, and the mismatch between them, in differently created avatar faces. Furthermore, we use these avatars to investigate the effect of behavioral realism on the perceived congruence between a video-recorded physical person's expressions and their imitation by the avatar. We compared two types of avatars, each with four identities created from the same facial photographs. The first type contains expressions designed by an artistic expert; the second contains expressions statistically learned from a 3D facial expression database. Our results show that the avatars with learned facial expressions were rated as more photographically and behaviorally realistic and showed a lower mismatch between the two dimensions. They were also perceived as more congruent with the video-recorded person's expressions. We discuss these findings and the potential benefit of avatars with learned facial expressions for virtual reality experiences and future research on enfacement.


Author(s):  
Zhe Xu ◽  
David John ◽  
Anthony C. Boucouvalas

As the popularity of the Internet has grown, an increasing number of people spend time online: reading news, searching for new technologies, and chatting with others. Although the Internet was designed as a tool for computation, it has become a social environment built on computer-mediated communication (CMC). Picard and Healey (1997) demonstrated the potential and importance of emotion in human-computer interaction, and Bates (1992) illustrated the roles that emotion plays in user interactions with synthetic agents. Is emotion communication important for human-computer interaction? Scott and Nass (2002) demonstrated that humans extrapolate their interpersonal interaction patterns onto computers: humans talk to computers, get angry with them, and even make friends with them. In our previous research, we demonstrated that the social norms of daily life remain valid for human-computer interaction, and showed that visualising emotion in the human-computer interface can significantly influence users' perceived performance and feelings. For example, in an online quiz environment, participants answered questions while a software agent judged the answers and presented either a positive (happy) or negative (sad) expression. Even when two participants performed identically and achieved the same number of correct answers, the perceived performance of the one in the positive-expression environment was significantly higher than that of the one in the negative-expression environment (Xu, 2005). Although human emotional processes are far more complex than in this example and a complete computational model is difficult to build, various models and applications have been developed for human-agent interaction environments, such as the OZ project (Bates, 1992), the Cathexis model (Velasquez, 1997), and Elliot's (1992) affective reasoner.
We are interested in the influence of emotions not only on human-agent communication, but also on online human-human communication. The first question is: can we detect a human's emotional state automatically and intelligently? Previous work has shown that emotions can be detected in various ways: in speech, in facial expressions, and in text. Examples include investigations focusing on the synthesis of facial and acoustic expression, such as Kaiser and Wehrle (2000), Wehrle, Kaiser, Schmidt, and Scherer (2000), and Zentner and Scherer (1998). As text still dominates online communication, we believe that emotion detection in textual messages is particularly important.
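As a concrete illustration of text-based emotion detection, the simplest family of approaches is lexicon-based: a message is scored by the balance of positive and negative words it contains. The sketch below is illustrative only; the word lists and the `text_valence` function are hypothetical and are not the detection engine described in this work.

```python
# Minimal lexicon-based valence sketch. A real system would use a large
# affective lexicon (and handle negation, intensifiers, emoticons, etc.).
POSITIVE = {"happy", "great", "love", "glad", "wonderful"}
NEGATIVE = {"sad", "angry", "hate", "terrible", "awful"}

def text_valence(message: str) -> str:
    """Classify a message as 'positive', 'negative', or 'neutral'."""
    words = [w.strip(".,!?;:").lower() for w in message.split()]
    # Score = (# positive words) - (# negative words)
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(text_valence("I am so happy and glad today!"))  # positive
print(text_valence("This is terrible, I hate it."))   # negative
```

Even this crude scheme captures the coarse positive/negative distinction that drives the happy/sad agent feedback described above; the research challenge lies in the many textual cues such a keyword match misses.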



2019 ◽  
Author(s):  
Brittney O'Neill

The effects of emoticons in textual computer-mediated communication (CMC) remain relatively unexplored. CMC researchers have suggested that emoticons behave much as facial expressions do in face-to-face interaction (e.g. Danet, Ruedenberg-Wright, & Rosenbaum-Tamari, 1997; Rezabek & Cochenour, 1998; Thompson & Foulger, 1996). Some fMRI research suggests, however, that there is no direct neural correspondence between emoticons and facial expressions, but that emoticons play an important role in determining the positive or negative valence of an utterance (Yuasa, Saito, & Mukawa, 2011). Following the affective priming paradigm developed by Fazio, Sanbonmatsu, Powell, and Kardes (1986), this study explores the priming effects of emoticons, compared with photographs of facial expressions and emotional words, on valence judgements of emotionally charged words. Significant main effects of age, prime valence, and target valence were found, along with significant interactions between these three factors. Overall, the results suggest that younger and older participants experience emoticons differently: for younger participants the effect of emoticons resembles that of facial expressions, while older adults seem to process emoticons more like textual information, or even textual nonsense.


2003 ◽  
Vol 17 (3) ◽  
pp. 113-123 ◽  
Author(s):  
Jukka M. Leppänen ◽  
Mirja Tenhunen ◽  
Jari K. Hietanen

Abstract Several studies have shown faster choice-reaction times to positive than to negative facial expressions. The present study examined whether this effect is exclusively due to faster cognitive processing of positive stimuli (i.e., processes leading up to, and including, response selection), or whether it also involves faster motor execution of the selected response. In two experiments, response selection (onset of the lateralized readiness potential, LRP) and response execution (LRP onset-response onset) times for positive (happy) and negative (disgusted/angry) faces were examined. Shorter response selection times for positive than for negative faces were found in both experiments but there was no difference in response execution times. Together, these results suggest that the happy-face advantage occurs primarily at premotoric processing stages. Implications that the happy-face advantage may reflect an interaction between emotional and cognitive factors are discussed.
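The chronometric decomposition used in this abstract can be sketched in a few lines: total reaction time is split into a response-selection stage (stimulus onset to LRP onset) and a response-execution stage (LRP onset to overt response onset). The timestamps and the `decompose_rt` helper below are hypothetical, for illustration only, and are not the authors' analysis code.

```python
def decompose_rt(lrp_onset_ms: float, response_onset_ms: float):
    """Split one trial's reaction time into selection and execution stages.

    Both timestamps are in milliseconds relative to stimulus onset (t = 0).
    Returns (selection_time, execution_time).
    """
    selection = lrp_onset_ms                      # premotoric processing
    execution = response_onset_ms - lrp_onset_ms  # motor execution
    return selection, execution

# Hypothetical trial: LRP onset 250 ms, key press 410 ms after stimulus.
sel, exe = decompose_rt(250.0, 410.0)
print(sel, exe)  # 250.0 160.0
```

Under this decomposition, the reported happy-face advantage appears in the first interval (shorter selection times for positive faces) while the second interval is unchanged, which is what locates the effect at premotoric stages.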


2010 ◽  
Vol 24 (3) ◽  
pp. 186-197 ◽  
Author(s):  
Sandra J. E. Langeslag ◽  
Jan W. Van Strien

It has been suggested that emotion regulation improves with aging. Here, we investigated age differences in emotion regulation by studying modulation of the late positive potential (LPP) by emotion regulation instructions. The electroencephalogram of younger (18–26 years) and older (60–77 years) adults was recorded while they viewed neutral, unpleasant, and pleasant pictures and while they were instructed to increase or decrease the feelings that the emotional pictures elicited. The LPP was enhanced when participants were instructed to increase their emotions. No age differences were observed in this emotion regulation effect, suggesting that emotion regulation abilities are unaffected by aging. This contradicts studies that measured emotion regulation by self-report, yet accords with studies that measured emotion regulation by means of facial expressions or psychophysiological responses. More research is needed to resolve the apparent discrepancy between subjective self-report and objective psychophysiological measures.


Crisis ◽  
2020 ◽  
pp. 1-8
Author(s):  
Chao S. Hu ◽  
Jiajia Ji ◽  
Jinhao Huang ◽  
Zhe Feng ◽  
Dong Xie ◽  
...  

Abstract. Background: High school and university teachers need to advise students against attempting suicide, the second leading cause of death among 15–29-year-olds. Aims: To investigate the role of reasoning and emotion in advising against suicide. Method: We conducted a study with 130 students at a university specializing in teacher education. Participants were videotaped while giving advice against suicide. Three raters scored the transcribed advice for "wise reasoning" (i.e., expert forms of reasoning: considering a variety of conditions, awareness of the limits of one's knowledge, and taking others' perspectives). Four registered psychologists experienced in suicide prevention techniques rated the transcripts on their potential for suicide prevention. Finally, using the software FaceReader 7.1, we analyzed participants' micro-facial expressions during advice-giving. Results: Wiser reasoning and less disgust predicted higher potential for suicide prevention. Moreover, higher potential for suicide prevention was associated with more surprise. Limitations: The actual efficacy of the advice for suicide prevention was not assessed. Conclusion: Wise reasoning and counter-stereotypic ideas that trigger surprise probably contribute to the potential for suicide prevention. This advising paradigm may help train teachers to advise students against suicide, measure wise reasoning, and monitor a harmful emotional reaction, namely disgust.


2016 ◽  
Vol 37 (1) ◽  
pp. 16-23 ◽  
Author(s):  
Chit Yuen Yi ◽  
Matthew W. E. Murry ◽  
Amy L. Gentzler

Abstract. Past research suggests that transient mood influences the perception of facial expressions of emotion, but relatively little is known about how trait-level emotionality (i.e., temperament) may influence emotion perception or interact with mood in this process. Consequently, we extended earlier work by examining how the temperamental dimensions of negative emotionality and extraversion were associated with perception accuracy and perceived intensity for three basic emotions, and how these trait-level temperamental effects interacted with state-level self-reported mood, in a sample of 88 adults (27 men, 18–51 years of age). The results indicated that higher levels of negative mood were associated with higher perception accuracy for angry and sad facial expressions and with higher perceived intensity of anger. For sadness, negative mood was associated with lower perceived intensity, whereas trait negative emotionality was associated with higher perceived intensity. Overall, our findings add to the limited literature on adult temperament and emotion perception.


1998 ◽  
Vol 74 (1) ◽  
pp. 272-279 ◽  
Author(s):  
Chris L. Kleinke ◽  
Thomas R. Peterson ◽  
Thomas R. Rutledge