Rapid facial mimicry in orangutan play

2007 ◽  
Vol 4 (1) ◽  
pp. 27-30 ◽  
Author(s):  
Marina Davila Ross ◽  
Susanne Menzler ◽  
Elke Zimmermann

Emotional contagion enables individuals to experience the emotions of others. This important empathic phenomenon is closely linked to facial mimicry, whereby facial displays evoke the same facial expressions in social partners. In humans, facial mimicry can be voluntary or involuntary; the latter mode can occur as rapidly as within 1 s. Thus far, studies have not provided evidence of rapid involuntary facial mimicry in animals. This study assessed whether rapid involuntary facial mimicry is present in orangutans (Pongo pygmaeus; N = 25) for their open-mouth faces (OMFs) during everyday dyadic play. Results clearly indicated that orangutans rapidly mimicked the OMFs of their playmates within 1 s. Our study provides the first evidence of rapid involuntary facial mimicry in non-human mammals. This finding suggests that fundamental building blocks of positive emotional contagion and empathy, which are linked to rapid involuntary facial mimicry in humans, have homologues in non-human primates.

Autism ◽  
2020 ◽  
pp. 136236132095169 ◽  
Author(s):  
Roser Cañigueral ◽  
Jamie A Ward ◽  
Antonia F de C Hamilton

Communication with others relies on coordinated exchanges of social signals, such as eye gaze and facial displays. However, this can only happen when partners are able to see each other. Although previous studies report that autistic individuals have difficulties in planning eye gaze and making facial displays during conversation, evidence from real-life dyadic tasks is scarce and mixed. Across two studies, we investigate how the eye gaze and facial displays of typical and high-functioning autistic individuals are modulated by the belief in being seen and by the potential to show true gaze direction. Participants were recorded with an eye-tracking and video-camera system while they completed a structured Q&A task with a confederate under three social contexts: pre-recorded video, video-call and face-to-face. Typical participants gazed less at the confederate and produced more facial displays when they were being watched and when they were speaking. Contrary to our hypotheses, eye gaze and facial motion patterns in autistic participants were overall similar to those of the typical group. This suggests that high-functioning autistic participants are able to use eye gaze and facial displays as social signals. Future studies will need to investigate to what extent this reflects spontaneous behaviour or the use of compensation strategies.

Lay abstract: When we are communicating with other people, we exchange a variety of social signals through eye gaze and facial expressions. However, coordinated exchanges of these social signals can only happen when the people involved in the interaction are able to see each other. Although previous studies report that autistic individuals have difficulties in using eye gaze and facial expressions during social interactions, evidence from tasks that involve real face-to-face conversations is scarce and mixed. Here, we investigate how the eye gaze and facial expressions of typical and high-functioning autistic individuals are modulated by the belief in being seen by another person, and by being in a face-to-face interaction. Participants were recorded with an eye-tracking and video-camera system while they completed a structured Q&A task with a confederate under three social contexts: pre-recorded video (no belief in being seen, no face-to-face), video-call (belief in being seen, no face-to-face) and face-to-face (belief in being seen and face-to-face). Typical participants gazed less at the confederate and made more facial expressions when they were being watched and when they were speaking. Contrary to our hypotheses, eye gaze and facial expression patterns in autistic participants were overall similar to those of the typical group. This suggests that high-functioning autistic participants are able to use eye gaze and facial expressions as social signals. Future studies will need to investigate to what extent this reflects spontaneous behaviour or the use of compensation strategies.


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Chun-Ting Hsu ◽  
Wataru Sato ◽  
Sakiko Yoshikawa

Abstract Facial expression is an integral aspect of non-verbal communication of affective information. Earlier psychological studies have reported that the presentation of prerecorded photographs or videos of emotional facial expressions automatically elicits divergent responses, such as emotions and facial mimicry. However, such highly controlled experimental procedures may lack the vividness of real-life social interactions. This study incorporated a live image relay system that delivered models' real-time performance of positive (smiling) and negative (frowning) dynamic facial expressions, or prerecorded videos of the same expressions, to participants. We measured subjective ratings of valence and arousal, as well as facial electromyography (EMG) activity in the zygomaticus major and corrugator supercilii muscles. Subjective ratings showed that, in the positive emotion conditions, live facial expressions elicited higher valence and arousal than the corresponding videos. Facial EMG data showed that, compared with the videos, live facial expressions more effectively elicited facial muscular activity congruent with the models' positive facial expressions. The findings indicate that emotional facial expressions in live social interactions are more evocative of emotional reactions and facial mimicry than earlier experimental data have suggested.


Author(s):  
Jenni Anttonen ◽  
Veikko Surakka ◽  
Mikko Koivuluoma

The aim of the present paper was to study heart rate changes during video stimulation depicting two actors (male and female) producing dynamic facial expressions of happiness, sadness, and a neutral expression. We measured ballistocardiographic emotion-related heart rate responses with an unobtrusive measurement device called the EMFi chair. Ratings of subjective responses to the video stimuli were also collected. The results showed that the video stimuli evoked significantly different ratings of emotional valence and arousal. Heart rate decelerated in response to all stimuli, and the deceleration was strongest during negative stimulation. Furthermore, stimuli from the male actor evoked significantly larger arousal ratings and heart rate responses than stimuli from the female actor. The results also showed differential responding between female and male participants. The present results support the hypothesis that heart rate decelerates in response to films depicting dynamic negative facial expressions. They also support the idea that the EMFi chair can be used to detect emotional responses from people while they are interacting with technology.


2021 ◽  
Author(s):  
Arianna Schiano Lomoriello ◽  
Antonio Maffei ◽  
Sabrina Brigadoi ◽  
Paola Sessa

Simulation models of facial expressions suggest that posterior visual areas and brain areas underpinning sensorimotor simulation might interact to improve facial expression processing. According to these models, facial mimicry, a manifestation of sensorimotor simulation, may contribute to the visual processing of facial expressions by influencing its early stages. The aim of this study was to assess whether and how sensorimotor simulation influences early stages of face processing, also investigating its relationship with alexithymic traits, given that previous studies have suggested that individuals with high levels of alexithymic traits (vs. individuals with low levels) tend to use sensorimotor simulation to a lesser extent. We monitored the P1 and N170 components of the event-related potential (ERP) in participants performing a fine discrimination task on facial expressions and, as a control condition, animals. In half of the experiment, participants could freely use facial mimicry, whereas in the other half their facial mimicry was blocked by a gel. Our results revealed that only individuals with low (vs. high) alexithymic traits showed a larger modulation of the P1 amplitude as a function of the mimicry manipulation, selectively for facial expressions (but not for animals), while we did not observe any modulation of the N170. Given the null results at the behavioural level, we interpret the P1 modulation as compensatory visual processing in individuals with low levels of alexithymia under conditions of interference with sensorimotor processing, providing preliminary evidence in favour of sensorimotor simulation models.


2021 ◽  
Vol 12 ◽  
Author(s):  
Yuko Yamashita ◽  
Tetsuya Yamamoto

Emotional contagion is a phenomenon by which one individual's emotions directly trigger similar emotions in others. We explored the possibility that perceiving others' emotional facial expressions affects mood in people with subthreshold depression (sD). Forty-nine participants were divided into four groups: participants with no depression (ND) presented with happy faces; ND participants presented with sad faces; sD participants presented with happy faces; and sD participants presented with sad faces. Participants completed an inventory about their emotional states before and after viewing the emotional faces, to investigate the influence of emotional contagion on their mood. Regardless of depressive tendency, the groups presented with happy faces exhibited a slight increase in happy mood score and a decrease in sad mood score. The groups presented with sad faces exhibited an increased sad mood score and a decreased happy mood score. These results demonstrate that emotional contagion affects mood in people with sD, as well as in individuals with ND, and indicate that emotional contagion could relieve depressive mood in people with sD. From the viewpoint of emotional contagion, this demonstrates the importance of the emotional facial expressions of those around people with sD, such as family and friends.


2019 ◽  
Vol 44 (1) ◽  
pp. 133-152 ◽  
Author(s):  
Tanja Lischetzke ◽  
Michael Cugialy ◽  
Tanja Apt ◽  
Michael Eid ◽  
Michael Niedeggen

2016 ◽  
Vol 30 (3) ◽  
pp. 114-123 ◽  
Author(s):  
Tokiko Harada ◽  
Akiko Hayashi ◽  
Norihiro Sadato ◽  
Tetsuya Iidaka

Abstract. Facial expressions play a significant role in displaying feelings. A person's facial expression automatically induces a similar emotional feeling in an observer; this phenomenon is known as emotional contagion. However, little is known about the neural mechanisms underlying such emotional responses. We conducted an event-related functional magnetic resonance imaging (fMRI) study to examine the neural substrates involved in automatic responses and emotional feelings induced by movies of another person's happy and sad facial expressions. The fMRI data revealed that observing happiness (vs. sadness) evoked activity in the left anterior cingulate gyrus, which is known to be responsible for positive emotional processing and fear inhibition. Conversely, observing sadness (vs. happiness) increased activity in the right superior temporal sulcus and bilateral inferior parietal lobes, which have been reported to be involved in negative emotional processing and the representation of facial movements. In addition, both expressions evoked activity in the right inferior frontal gyrus. These patterns of activity suggest that the observation of dynamic facial expressions automatically elicits dissociable and partially overlapping responses for happy and sad emotions.


Author(s):  
Karthik R. ◽  
Nandana B. ◽  
Mayuri Patil ◽  
Chandreyee Basu ◽  
Vijayarajan R.

Facial expressions are an important means of communication among human beings, as they convey different meanings in a variety of contexts. All human facial expressions, whether voluntary or involuntary, are formed as a result of the movement of different facial muscles. Despite their variety and complexity, certain expressions are universally recognized as representing specific emotions: for instance, raised eyebrows in combination with an open mouth are associated with surprise, whereas a smiling face is generally interpreted as happy. Deep learning-based implementations of expression synthesis have demonstrated their ability to preserve essential features of input images, which is desirable. However, one limitation of deep learning networks is their dependence on the data distribution and the quality of the images used for training. The variation in performance can be studied by changing the optimizer and loss functions, and their effectiveness is analysed based on the quality of the output images obtained.
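The abstract's idea of studying performance variation by swapping the optimizer and the loss function can be illustrated with a deliberately tiny sketch. The real study trains deep expression-synthesis networks; here, as a stand-in under stated assumptions, a single scalar "pixel" is fit to a target value, comparing an L1 vs. an L2 loss and plain SGD vs. SGD with momentum. All function names and hyperparameters are illustrative, not the authors' actual setup.

```python
# Toy comparison of optimizer/loss combinations on a 1-D reconstruction
# problem: fit parameter w to target 1.0 by gradient descent.

def l2_loss(pred, target):
    return (pred - target) ** 2

def l2_grad(pred, target):
    return 2.0 * (pred - target)

def l1_loss(pred, target):
    return abs(pred - target)

def l1_grad(pred, target):
    # Subgradient of |pred - target|.
    return 1.0 if pred > target else -1.0

def train(grad_fn, lr=0.1, momentum=0.0, steps=50, start=0.0, target=1.0):
    """Classic momentum SGD on one parameter (momentum=0.0 is plain SGD)."""
    w, v = start, 0.0
    for _ in range(steps):
        g = grad_fn(w, target)
        v = momentum * v - lr * g  # velocity update
        w = w + v
    return w

# Grid over loss functions and momentum settings, reporting final error;
# in the paper's setting the analogous metric is output image quality.
for name, grad_fn in [("L2", l2_grad), ("L1", l1_grad)]:
    for mom in (0.0, 0.9):
        w = train(grad_fn, momentum=mom)
        print(f"{name} loss, momentum={mom}: final error {abs(w - 1.0):.4f}")
```

The same grid structure applies when the "model" is a synthesis network and the quality measure is, say, reconstruction error on held-out faces; only the training step and the metric change.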

