Association of Game Events with Facial Animations of Computer-Controlled Virtual Characters Based on Probabilistic Human Reaction Modeling

2020 · Vol 10 (16) · pp. 5636
Author(s): Wafaa Alsaggaf, Georgios Tsaramirsis, Norah Al-Malki, Fazal Qudus Khan, Miadah Almasry, et al.

Computer-controlled virtual characters are essential parts of most virtual environments, especially computer games. Interaction between these virtual agents and human players has a direct impact on the believability of, and immersion in, the application, and the facial animations of these characters are a key part of these interactions. The player expects the elements of the virtual world to act as they would in the real world: in a board game, for example, if the human player wins, he/she would expect the computer-controlled character to be sad. However, the reactions, and more specifically the facial expressions, of virtual characters in most games are not linked to game events. Instead, characters have pre-programmed or random behaviors without any understanding of what is really happening in the game. In this paper, we propose a probabilistic decision model for virtual character facial expressions that determines when various facial animations should be played. The model was developed by studying the facial expressions of human players while they played a computer videogame that was also developed as part of this research. The model is represented as a set of trees with 15 extracted game events as roots and 10 associated facial expression animations, each with its probability of occurrence. Results indicated that only 1 out of the 15 game events had a probability of producing an unexpected facial expression. The “win, lose, tie” game events were found to have the most dominant associations with facial expressions, followed by the “surprise” game events, which occurred rarely, and finally the “damage dealing” events.
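A minimal sketch of how such a model might be queried at runtime: each game event acts as a tree root whose leaves are facial animations weighted by probability of occurrence. The event names, animation names, and probabilities below are invented for illustration; the paper derives its tables from recorded human players.

```python
import random

# Hypothetical event-to-animation probability tables (invented values);
# the paper builds these from observed human player reactions.
EXPRESSION_MODEL = {
    "player_wins": [("sad", 0.7), ("angry", 0.2), ("neutral", 0.1)],
    "player_loses": [("happy", 0.8), ("neutral", 0.2)],
    "damage_dealt": [("angry", 0.5), ("surprised", 0.3), ("neutral", 0.2)],
}

def pick_animation(event: str) -> str:
    """Sample a facial animation for a game event from its probability table."""
    animations, weights = zip(*EXPRESSION_MODEL[event])
    return random.choices(animations, weights=weights, k=1)[0]

print(pick_animation("player_wins"))  # e.g. "sad"
```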

2008 · Vol 23 (4) · pp. 389-397
Author(s): Daniel Torres

Abstract: This paper presents the material from our keynote at the Intelligent Virtual Environments and Virtual Agents Conference, where we analyzed future trends in virtual agent and environment technologies as applied to videogames and interactive multimedia. We begin by observing current tendencies in game development projects, both in the market and in academia, outlining their respective strengths and weaknesses. We then describe the aspects of game development that will shape the next generation of games, and discuss the requirements for academic projects to reach the level of quality needed to be competitive in this field. In the rest of the paper we analyze the roles, importance, challenges, and research opportunities that a new generation of virtual characters and environments will bring to the world of interactive entertainment.


PLoS ONE · 2021 · Vol 16 (5) · pp. e0251057
Author(s): Miquel Mascaró, Francisco J. Serón, Francisco J. Perales, Javier Varona, Ramon Mas

Laughter and smiling are significant facial expressions in human-to-human communication. We present a computational model for generating the facial expressions associated with laughter and smiling, in order to facilitate the synthesis of such expressions in virtual characters. In addition, a new method to reproduce these types of laughter is proposed and validated using databases of generic and specific facial smile expressions. In particular, a proprietary database of laugh and smile expressions is presented, listing the different types of laughs classified and generated in this work. The generated expressions were validated through a user study with 71 subjects, which concluded that virtual character expressions built using the presented model are perceptually acceptable in quality and facial expression fidelity. Finally, for generalization purposes, an additional analysis shows that the results are independent of the virtual character’s appearance.
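As a rough illustration of how a generated smile or laugh can be driven over time, the sketch below uses an onset-apex-offset intensity envelope (a common parameterization in facial animation) to weight a blendshape. The timings, easing functions, and blendshape naming are assumptions for illustration, not the authors' published model.

```python
import numpy as np

# Illustrative onset-apex-offset envelope; all timings are assumed values.
def expression_envelope(t, onset=0.4, apex=1.0, offset=0.8, peak=1.0):
    """Return a blendshape weight in [0, peak] at time t (seconds)."""
    if t < onset:                        # ease in toward the apex
        return peak * (t / onset) ** 2
    if t < onset + apex:                 # hold at the apex
        return peak
    if t < onset + apex + offset:        # ease out back to rest
        u = (t - onset - apex) / offset
        return peak * (1.0 - u) ** 2
    return 0.0

times = np.linspace(0.0, 2.4, 7)
weights = [round(expression_envelope(t), 2) for t in times]
print(list(zip(np.round(times, 2), weights)))  # drives e.g. a "smile" blendshape
```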


2017 · Vol 08 (03) · pp. 719-730
Author(s): Filiz İşleyen, Buket Cinemre, Mehmet Kemal Samur, Semiha Şen Kaya, Neşe Zayim, et al.

Summary
Background: Recognizing facial expressions is an important social skill. In some psychological disorders, such as schizophrenia, loss of this skill may complicate the patient’s daily life. Prior research has shown that information technology may help develop facial expression recognition skills through educational software and games.
Objectives: To examine whether a computer game designed for teaching facial expressions would improve the facial expression recognition skills of patients with schizophrenia.
Methods: We developed a website composed of eight serious games. Thirty-two patients were given a pre-test composed of 21 facial expression photographs; 18 patients were in the study group and 14 were in the control group. Patients in the study group were asked to play the games on the website. After a period of one month, we administered a post-test to all patients.
Results: On the pre-test, the median score of correct answers (out of 21) was 17.5 in the control group and 16.5 in the study group. The median post-test score was 18 in the control group (p=0.052) and 20 in the study group (p<0.001).
Conclusions: Computer games may be used to educate people who have difficulty recognizing facial expressions.
Citation: Gülkesen KH, Isleyen F, Cinemre B, Samur MK, Sen Kaya S, Zayim N. A web-based game for teaching facial expressions to schizophrenic patients. Appl Clin Inform 2017; 8: 719–730. https://doi.org/10.4338/ACI-2016-10-RA-0172
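To make the Results concrete, here is a hedged sketch of how such a paired pre/post comparison could be computed. The scores below are invented, and the Wilcoxon signed-rank test is an assumption (a common choice for paired ordinal scores out of 21), not necessarily the authors' exact analysis.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired scores for the 18 study-group patients (invented data).
pre_study  = np.array([16, 17, 15, 18, 16, 17, 14, 19, 16, 17, 15, 18, 16, 17, 18, 15, 16, 17])
post_study = np.array([19, 20, 19, 21, 20, 20, 18, 21, 19, 20, 19, 21, 20, 20, 21, 19, 20, 20])

# Wilcoxon signed-rank test on the paired differences.
stat, p = wilcoxon(pre_study, post_study)
print(f"median pre={np.median(pre_study)}, post={np.median(post_study)}, p={p:.4f}")
```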


Author(s): Barry Ip, Martin Capey, Andrew Baker, John Carroll

This paper explores a relatively new area in the design and development of assessment procedures for evaluating coursework and student performance on computer and video games degrees. Emphasis is placed on an assessment that involves the development of, and interaction in, a virtual world in which lecturers and students are represented as virtual characters. The establishment of a new assessment framework for games degrees is outlined, followed by findings from in-depth interviews with students and lecturers concerning its implementation. The results provide a self-reflective but critical insight into how existing assessment techniques may be adapted to the growing demands and popularity of games courses, as well as issues surrounding the appraisal of undergraduate work within virtual environments. Recommendations are provided on how similar assessments may be conducted in the future.


Author(s): Jorge Peña, Jannath Ghaznavi, Nicholas Brody, Rui Prada, Carlos Martinho, et al.

Abstract. This study explored how group identification, avatar similarity identification, and social presence mediated the effect of character type (avatars or agents) and social identity cues (presence or absence of avatars wearing participants’ school colors) on game enjoyment. Playing with teammate avatars increased enjoyment indirectly by enhancing group identification. In addition, the presence of social identity cues increased enjoyment indirectly by augmenting identification with one’s avatar. Unexpectedly, playing in multiplayer mode in the presence of social identity cues decreased enjoyment, whereas playing in multiplayer mode in the absence of social identity cues increased enjoyment. Social presence was not a reliable mediator. The findings supported media enjoyment and social identity theories, and highlighted how virtual character type and identification processes influence enjoyment.
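For readers unfamiliar with mediation, the sketch below shows how an indirect effect (path a × path b) can be estimated from two regressions on simulated data. The variable names, effect sizes, and estimation approach are illustrative assumptions, not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated data (all values invented): x = social identity cues present (0/1),
# m = avatar similarity identification (mediator), y = game enjoyment.
x = rng.integers(0, 2, n).astype(float)
m = 0.5 * x + rng.normal(size=n)                 # path a: cues -> identification
y = 0.4 * m + 0.1 * x + rng.normal(size=n)       # path b plus direct path c'

def ols_coefs(predictors, outcome):
    """Ordinary least squares with an intercept; returns coefficients."""
    X = np.column_stack([np.ones(n)] + predictors)
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta

a = ols_coefs([x], m)[1]       # effect of cues on identification
b = ols_coefs([x, m], y)[2]    # effect of identification on enjoyment, given cues
print(f"indirect effect a*b = {a * b:.3f}")      # should be near 0.5 * 0.4
```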


2020 ◽  
Author(s):  
Jonathan Yi ◽  
Philip Pärnamets ◽  
Andreas Olsson

Responding appropriately to others’ facial expressions is key to successful social functioning. Despite the large body of work on face perception and spontaneous responses to static faces, little is known about responses to faces in dynamic, naturalistic situations, and no study has investigated how goal-directed responses to faces are influenced by learning during dyadic interactions. To experimentally model such situations, we developed a novel method based on online integration of electromyography (EMG) signals from the participants’ face (corrugator supercilii and zygomaticus major) during facial expression exchange with dynamic faces displaying happy and angry facial expressions. Fifty-eight participants learned by trial and error to avoid receiving aversive stimulation by either reciprocating (congruent) or responding opposite to (incongruent) the expression of the target face. Our results validated the method, showing that participants learned to optimize their facial behavior, and replicated earlier findings of faster and more accurate responses in congruent versus incongruent conditions. Moreover, participants performed better on trials in which they were confronted with smiling, as compared to frowning, faces, suggesting that it might be easier to adapt facial responses to positively associated expressions. Finally, we applied drift diffusion and reinforcement learning models to provide a mechanistic explanation for our findings, which helped clarify the decision-making processes underlying our experimental manipulation. Our results introduce a new method for studying learning and decision-making in facial expression exchange, in which facial expression selection must be gradually adapted to both social and non-social reinforcements.
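The reinforcement learning component can be illustrated with a simple simulation of trial-and-error learning of facial responses. Everything below (the congruent rule, learning rate, softmax temperature) is an assumption for illustration, not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical simulation: a target face is happy or angry, and one of two
# facial responses avoids the aversive stimulation (assumed congruent rule).
actions = ["smile", "frown"]
correct = {"happy": "smile", "angry": "frown"}
q = {face: np.zeros(2) for face in ("happy", "angry")}  # action values per face
alpha, temp = 0.3, 3.0  # learning rate, softmax inverse temperature (assumed)

for trial in range(200):
    face = rng.choice(["happy", "angry"])
    p = np.exp(temp * q[face])
    p /= p.sum()                                      # softmax action selection
    a = rng.choice(2, p=p)
    reward = 1.0 if actions[a] == correct[face] else 0.0  # 1 = shock avoided
    q[face][a] += alpha * (reward - q[face][a])       # Rescorla-Wagner update

print({face: np.round(v, 2) for face, v in q.items()})  # learned action values
```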


2020
Author(s): Joshua W. Maxwell, Eric Ruthruff, Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic – the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (as used by Tomasik et al.) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free – identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.


2021 · pp. 174702182199299
Author(s): Mohamad El Haj, Emin Altintas, Ahmed A. Moustafa, Abdel Halim Boudoukha

Future thinking, the ability to project oneself forward in time to pre-experience an event, is intimately associated with emotions. We investigated whether emotional future thinking can activate emotional facial expressions. We invited 43 participants to imagine future scenarios cued by the words “happy,” “sad,” and “city.” Future thinking was video recorded and analysed with facial analysis software to classify participants’ facial expressions as neutral or emotional (happy, sad, angry, surprised, scared, or disgusted). The analysis demonstrated higher levels of happy facial expressions during future thinking cued by the word “happy” than by “sad” or “city.” In contrast, higher levels of sad facial expressions were observed during future thinking cued by the word “sad” than by “happy” or “city.” Higher levels of neutral facial expressions were observed during future thinking cued by the word “city” than by “happy” or “sad.” In all three conditions, levels of neutral facial expressions were high compared with happy and sad facial expressions. Together, these results suggest that emotional future thinking, at least for scenarios cued by “happy” and “sad,” triggers the corresponding facial expression. Our study provides an original physiological window into the subjective emotional experience during future thinking.
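A hedged sketch of the kind of aggregation such a facial-coding pipeline implies: frame-level expression labels per cue condition are tallied into per-condition proportions, which can then be compared across conditions. The labels and data below are invented for illustration.

```python
from collections import Counter

# Frame-level expression labels per cue condition, invented for illustration
# (a real run would take these from the facial analysis software's output).
frames = {
    "happy": ["happy", "neutral", "happy", "neutral", "neutral"],
    "sad":   ["sad", "neutral", "neutral", "sad", "neutral"],
    "city":  ["neutral", "neutral", "neutral", "happy", "neutral"],
}

for cue, labels in frames.items():
    counts = Counter(labels)
    shares = {expr: round(counts[expr] / len(labels), 2)
              for expr in ("happy", "sad", "neutral")}
    print(cue, shares)  # per-condition proportion of each expression
```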


2021 · Vol 11 (4) · pp. 1428
Author(s): Haopeng Wu, Zhiying Lu, Jianfeng Zhang, Xin Li, Mingyue Zhao, et al.

This paper addresses the problem of Facial Expression Recognition (FER), focusing on unobvious facial movements. Traditional methods often overfit or capture incomplete information owing to insufficient data and manual feature selection. Instead, our proposed network, called the Multi-features Cooperative Deep Convolutional Network (MC-DCN), maintains focus on both the overall features of the face and the trends of its key parts. The first stage is the processing of video data: the ensemble of regression trees (ERT) method is used to obtain the overall contour of the face, and an attention model then picks out the parts of the face that are more susceptible to expression changes. The combination of these two methods yields an image that can be called a local feature map. After that, the video data are sent to the MC-DCN, which contains parallel sub-networks. While the overall spatiotemporal characteristics of facial expressions are obtained from the image sequence, the selected key parts better capture the changes in facial expression brought about by subtle facial movements. By combining local and global features, the proposed method acquires more information, leading to better performance. The experimental results show that MC-DCN achieves recognition rates of 95%, 78.6% and 78.3% on the SAVEE, MMI, and edited GEMEP datasets, respectively.
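The global/local fusion idea can be sketched as a two-branch network whose feature vectors are concatenated before classification. The sketch below is a minimal stand-in, with layer sizes and structure assumed for illustration rather than taken from the MC-DCN paper.

```python
import torch
import torch.nn as nn

# Minimal two-branch sketch of global/local feature fusion (assumed sizes).
class TwoBranchFER(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.global_branch = branch()   # whole-face image
        self.local_branch = branch()    # attention-selected key-part map
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, face, local_map):
        # Concatenate global and local feature vectors, then classify.
        fused = torch.cat([self.global_branch(face),
                           self.local_branch(local_map)], dim=1)
        return self.classifier(fused)

model = TwoBranchFER()
logits = model(torch.randn(4, 1, 48, 48), torch.randn(4, 1, 48, 48))
print(logits.shape)  # torch.Size([4, 7])
```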


Sensors · 2021 · Vol 21 (6) · pp. 2003
Author(s): Xiaoliang Zhu, Shihao Ye, Liang Zhao, Zhicheng Dai

As a sub-challenge of EmotiW (the Emotion Recognition in the Wild challenge), improving performance on the AFEW (Acted Facial Expressions in the Wild) dataset is a popular benchmark for emotion recognition tasks under various constraints, including uneven illumination, head deflection, and facial posture. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, in a video sequence, faces in each frame are detected, and the corresponding face ROI (region of interest) is extracted to obtain the face images; the face images in each frame are then aligned based on the positions of the facial feature points. Second, the aligned face images are input to a residual neural network to extract the spatial features of the facial expressions, and the spatial features are input to the hybrid attention module to obtain fused facial expression features. Finally, the fused features are input to a gated recurrent unit (GRU) to extract the temporal features of the facial expressions, and the temporal features are input to a fully connected layer to classify and recognize the expressions. Experiments on the CK+ (Extended Cohn-Kanade), Oulu-CASIA (Institute of Automation, Chinese Academy of Sciences), and AFEW datasets yielded recognition accuracy rates of 98.46%, 87.31%, and 53.44%, respectively. This demonstrates that the proposed method not only achieves performance competitive with state-of-the-art methods but also improves on the AFEW dataset by more than 2%, a significant gain for facial expression recognition in natural environments.
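A condensed sketch of the cascade described above, assuming illustrative layer sizes: per-frame spatial features from a small CNN (standing in for the residual network), a simple per-frame attention weighting (standing in for the hybrid attention module), a GRU over time, and a fully connected classifier.

```python
import torch
import torch.nn as nn

# Condensed cascade sketch: spatial CNN -> attention -> GRU -> classifier.
class CascadeFER(nn.Module):
    def __init__(self, feat_dim=64, n_classes=7):
        super().__init__()
        self.spatial = nn.Sequential(            # stand-in for the residual CNN
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.attn = nn.Linear(feat_dim, 1)       # per-frame attention score
        self.gru = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.fc = nn.Linear(feat_dim, n_classes)

    def forward(self, clips):                    # clips: (batch, time, 1, H, W)
        b, t = clips.shape[:2]
        feats = self.spatial(clips.flatten(0, 1)).view(b, t, -1)
        weights = torch.softmax(self.attn(feats), dim=1)
        out, _ = self.gru(feats * weights)       # attention-reweighted sequence
        return self.fc(out[:, -1])               # classify from the last state

model = CascadeFER()
print(model(torch.randn(2, 8, 1, 48, 48)).shape)  # torch.Size([2, 7])
```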

