Perception of Photographic-Quality Caricatures of Emotional Facial Expressions

Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 28-28
Author(s):  
A J Calder ◽  
A W Young ◽  
D Rowland ◽  
D R Gibbenson ◽  
B M Hayes ◽  
...  

G Rhodes, S E Brennan, S Carey (1987 Cognitive Psychology 19 473 – 497) and P J Benson and D I Perrett (1991 European Journal of Cognitive Psychology 3 105 – 135) have shown that computer-enhanced (caricatured) representations of familiar faces are named faster and rated as better likenesses than veridical (undistorted) representations. Here we have applied Benson and Perrett's graphic technique to examine subjects' perception of enhanced representations of photographic-quality facial expressions of basic emotions. To enhance a facial expression, the target face is compared with a norm or prototype face and the differences between the two are exaggerated, producing a caricatured image; reducing the differences produces an anticaricatured image. In experiment 1 we examined the effect of degree of caricature and type of norm on subjects' ratings of ‘intensity of expression’. Three facial expressions (fear, anger, and sadness) were caricatured at seven levels (−50%, −30%, −15%, 0%, +15%, +30%, and +50%) relative to three different norms: (1) an average norm prepared by blending pictures of six different emotional expressions; (2) a neutral-expression norm; and (3) a different-expression norm (eg anger caricatured relative to a happy expression). Irrespective of norm, the caricatured expressions were rated as significantly more intense than the veridical images. Furthermore, for the average-norm and neutral-norm sets, the anticaricatures were rated as significantly less intense. We also examined subjects' reaction times to recognise caricatured (−50%, 0%, and +50%) representations of six emotional facial expressions. The results showed that the caricatured images were identified fastest, followed by the veridical images, and then the anticaricatured images. Hence the perception of facial expression, like that of facial identity, is facilitated by caricaturing; this has important implications for the mental representation of facial expressions.
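
As a point of reference for the enhancement procedure described above, Benson and Perrett's technique represents each face by a set of delineated feature points, compares the target's points with the corresponding points on the norm face, scales the differences by the chosen caricature level, and then warps the photograph to the new point positions. The sketch below illustrates only that point-exaggeration step, assuming a simple landmark-array representation; the function name, array shapes, and toy coordinates are illustrative and not taken from the original papers.

```python
import numpy as np

def caricature_landmarks(target: np.ndarray, norm: np.ndarray, level: float) -> np.ndarray:
    """Exaggerate (or reduce) a face's deviation from a norm face.

    target, norm : (n_points, 2) arrays of corresponding landmark coordinates.
    level        : caricature level, e.g. +0.50 for a +50% caricature,
                   0.0 for the veridical face, -0.50 for a -50% anticaricature.
    """
    # Move each landmark along the line joining the norm and the target,
    # in proportion to the caricature level.
    return norm + (1.0 + level) * (target - norm)

# Toy example: three landmarks of a target expression and of an average norm.
target = np.array([[10.0, 20.0], [30.0, 22.0], [20.0, 40.0]])
norm = np.array([[10.0, 21.0], [30.0, 21.0], [20.0, 38.0]])

for level in (-0.50, -0.30, -0.15, 0.0, 0.15, 0.30, 0.50):
    print(f"{level:+.2f}", caricature_landmarks(target, norm, level).round(2).tolist())
```

At level 0 the function simply returns the target's own landmark positions, which is why the 0% condition serves as the veridical baseline in both the intensity-rating and reaction-time experiments.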

2018 ◽  
Vol 32 (4) ◽  
pp. 160-171 ◽  
Author(s):  
Léonor Philip ◽  
Jean-Claude Martin ◽  
Céline Clavel

Abstract. People react with Rapid Facial Reactions (RFRs) when presented with human emotional facial expressions. Recent studies show that RFRs are not always congruent with the emotional cues, and the processes underlying RFRs are still being debated. In the study reported here, we manipulated the context of perception and examined its influence on RFRs using a subliminal affective priming task with emotional labels. Facial electromyography (EMG) was recorded from the frontalis, corrugator, zygomaticus, and depressor muscles while participants observed static facial expressions (joy, fear, anger, sadness, and a neutral expression) that were or were not preceded by a subliminal word (JOY, FEAR, ANGER, SADNESS, or NEUTRAL). For the negative facial expressions, when the priming word was congruent with the facial expression, participants displayed congruent RFRs (mimicry); when the priming word was incongruent, mimicry was suppressed. Happiness was not affected by the priming word. RFRs thus appear to be modulated by the context and by the type of emotion presented via facial expressions.


2018 ◽  
Vol 11 (2) ◽  
pp. 16-33 ◽  
Author(s):  
A.V. Zhegallo

The study investigates the recognition of emotional facial expressions presented in peripheral vision, with exposure times shorter than the latency of a saccade towards the exposed image. Recognition under peripheral presentation reproduced the characteristic pattern of incorrect-response choices: expressions of fear, anger, and surprise were commonly mistaken for one another. When recognition conditions worsened, expressions of calmness and grief also entered this set of mutually confused expressions. The expression of happiness deserves special attention: it could be mistaken for other expressions, but other expressions were never recognized as happiness. Individual recognition accuracy varied from 0.29 to 0.80. A sufficient condition for high recognition accuracy was recognition of the facial expressions by peripheral vision alone, without a saccade towards the exposed face image.


2012 ◽  
Vol 110 (1) ◽  
pp. 338-350 ◽  
Author(s):  
Mariano Chóliz ◽  
Enrique G. Fernández-Abascal

Recognition of emotional facial expressions is a central area in the psychology of emotion. This study presents two experiments. The first experiment analyzed recognition accuracy for six basic emotions: happiness, anger, fear, sadness, surprise, and disgust. Thirty pictures (five for each emotion) were displayed to 96 participants to assess recognition accuracy. The results showed that recognition accuracy varied significantly across emotions. The second experiment analyzed the effects of contextual information on recognition accuracy: information either congruent or incongruent with a facial expression was displayed before the pictures of facial expressions were presented. The results of the second experiment showed that congruent information improved facial expression recognition, whereas incongruent information impaired it.


2021 ◽  
Vol 15 ◽  
Author(s):  
E. Darcy Burgund

Major theories of hemisphere asymmetries in facial expression processing predict right hemisphere dominance for negative facial expressions of disgust, fear, and sadness; however, some studies observe left hemisphere dominance for one or more of these expressions. Research suggests that tasks requiring the identification of six basic emotional facial expressions (angry, disgusted, fearful, happy, sad, and surprised) are more likely to produce left hemisphere involvement than tasks that do not require expression identification. The present research investigated this possibility in two experiments that presented six basic emotional facial expressions to the right or left hemisphere using a divided-visual-field paradigm. In Experiment 1, participants identified emotional expressions by pushing a key corresponding to one of six labels. In Experiment 2, participants detected emotional expressions by pushing a key corresponding to whether an expression was emotional or not. In line with predictions, fearful facial expressions exhibited a left hemisphere advantage during the identification task but not during the detection task. In contrast to predictions, sad expressions exhibited a left hemisphere advantage during both identification and detection tasks. In addition, happy facial expressions exhibited a left hemisphere advantage during the detection task but not during the identification task. Only angry facial expressions exhibited a right hemisphere advantage, and this was observed only when data from both experiments were combined. Together, the results highlight the influence of task demands on hemisphere asymmetries in facial expression processing and suggest a greater role for the left hemisphere in processing negative expressions than predicted by previous theories.


2021 ◽  
Author(s):  
Wataru Sato ◽  
Naotaka Usui ◽  
Reiko Sawada ◽  
Akihiko Kondo ◽  
Motomi Toichi ◽  
...  

Abstract. Detecting emotional facial expressions is an initial and indispensable step in face-to-face communication. Neuropsychological studies of the neural substrates of this process have shown that bilateral amygdala lesions impaired the detection of emotional facial expressions. However, the findings have been inconsistent, possibly due to the limited number of patients examined. Furthermore, whether this processing is based on the emotional or the visual factors of facial expressions remains unknown. To investigate this issue, we tested a group of patients (n = 23) with unilateral resection of anterior temporal lobe structures, including the amygdala, and compared their performance under resected- and intact-hemisphere stimulation conditions. The patients were asked to detect normal facial expressions of anger and happiness, and artificially created anti-expressions, among a crowd of neutral expressions. Reaction times were shorter for the detection of normal versus anti-expressions when the target faces were presented to the contralateral visual field (i.e., stimulation of the intact hemisphere) than to the ipsilateral visual field (i.e., stimulation of the resected hemisphere). Our findings suggest that the amygdala plays an essential role in the detection of emotional facial expressions, according to the emotional significance of the expressions.


2020 ◽  
Author(s):  
Joshua W Maxwell ◽  
Eric Ruthruff ◽  
Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic – the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (as used by Tomasik et al.) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free – identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when they are sufficiently unambiguous.


2021 ◽  
pp. 174702182199299
Author(s):  
Mohamad El Haj ◽  
Emin Altintas ◽  
Ahmed A Moustafa ◽  
Abdel Halim Boudoukha

Future thinking, the ability to project oneself forward in time to pre-experience an event, is intimately associated with emotions. We investigated whether emotional future thinking can activate emotional facial expressions. We invited 43 participants to imagine future scenarios cued by the words “happy,” “sad,” and “city.” Future thinking was video recorded and analysed with facial analysis software to classify participants' facial expressions (happy, sad, angry, surprised, scared, disgusted, or neutral) as emotional or neutral. The analysis demonstrated higher levels of happy facial expressions during future thinking cued by the word “happy” than by “sad” or “city.” In contrast, higher levels of sad facial expressions were observed during future thinking cued by the word “sad” than by “happy” or “city.” Higher levels of neutral facial expressions were observed during future thinking cued by the word “city” than by “happy” or “sad.” In all three conditions, levels of neutral facial expressions were high compared with those of happy and sad facial expressions. Taken together, these results suggest that emotional future thinking, at least for future scenarios cued by “happy” and “sad,” triggers the corresponding facial expression. Our study provides an original physiological window into the subjective emotional experience during future thinking.


Author(s):  
Peggy Mason

Tracts descending from motor control centers in the brainstem and cortex target motor interneurons and, in select cases, motoneurons. The mechanisms and constraints of postural control are elaborated, and the effect of body mass on posture is discussed. Feed-forward reflexes that maintain posture during standing and other conditions of self-motion are described. The roles of descending tracts in postural control and in pathological posturing are described. Pyramidal (corticospinal and corticobulbar) and extrapyramidal control of body and face movements is contrasted. Special emphasis is placed on cortical regions and tracts involved in the deliberate control of facial expression; these pathways are contrasted with the mechanisms for generating emotional facial expressions. The signs associated with lesions of either motoneurons or motor control centers are clearly detailed. The mechanisms and presentation of cerebral palsy are described. Finally, an understanding of how pre-motor cortical regions generate actions is used to introduce apraxia, a disorder of action.


Author(s):  
Izabela Krejtz ◽  
Krzysztof Krejtz ◽  
Katarzyna Wisiecka ◽  
Marta Abramczyk ◽  
Michał Olszanowski ◽  
...  

Abstract. The enhancement hypothesis suggests that deaf individuals are more vigilant to visual emotional cues than hearing individuals. The present eye-tracking study examined ambient–focal visual attention when encoding affect from dynamically changing emotional facial expressions. Deaf (n = 17) and hearing (n = 17) individuals watched emotional facial expressions that in 10-s animations morphed from a neutral expression to one of happiness, sadness, or anger. The task was to recognize emotion as quickly as possible. Deaf participants tended to be faster than hearing participants in affect recognition, but the groups did not differ in accuracy. In general, happy faces were more accurately and more quickly recognized than faces expressing anger or sadness. Both groups demonstrated longer average fixation duration when recognizing happiness in comparison to anger and sadness. Deaf individuals directed their first fixations less often to the mouth region than the hearing group. During the last stages of emotion recognition, deaf participants exhibited more focal viewing of happy faces than negative faces. This pattern was not observed among hearing individuals. The analysis of visual gaze dynamics, switching between ambient and focal attention, was useful in studying the depth of cognitive processing of emotional information among deaf and hearing individuals.
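
As background for the ambient–focal analysis mentioned above, studies in this line of work commonly quantify ambient versus focal attention with coefficient K, which contrasts each standardized fixation duration with the standardized amplitude of the saccade that follows it (positive values indicate focal processing, negative values ambient processing). The abstract does not spell out the authors' exact computation, so the following is only a minimal sketch of that conventional measure under those assumptions; the function and variable names are hypothetical.

```python
import numpy as np

def coefficient_k(fix_durations_ms: np.ndarray, next_saccade_amps_deg: np.ndarray) -> np.ndarray:
    """Per-fixation coefficient K: K_i = z(d_i) - z(a_{i+1}).

    d_i is the duration of fixation i; a_{i+1} is the amplitude of the saccade
    that follows it. K > 0 suggests focal processing (long fixations, short
    saccades); K < 0 suggests ambient processing (short fixations, long saccades).
    """
    d = np.asarray(fix_durations_ms, dtype=float)
    a = np.asarray(next_saccade_amps_deg, dtype=float)
    z_d = (d - d.mean()) / d.std()
    z_a = (a - a.mean()) / a.std()
    return z_d - z_a

# Toy example: five fixations and the amplitudes of the saccades following them.
durations = np.array([180, 220, 650, 700, 150])    # ms
amplitudes = np.array([8.2, 6.5, 1.1, 0.9, 7.4])   # degrees of visual angle
print(coefficient_k(durations, amplitudes).round(2))
```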


2007 ◽  
Vol 38 (10) ◽  
pp. 1475-1483 ◽  
Author(s):  
K. S. Kendler ◽  
L. J. Halberstadt ◽  
F. Butera ◽  
J. Myers ◽  
T. Bouchard ◽  
...  

Background: While the role of genetic factors in self-report measures of emotion has been frequently studied, we know little about the degree to which genetic factors influence emotional facial expressions. Method: Twenty-eight pairs of monozygotic (MZ) and dizygotic (DZ) twins from the Minnesota Study of Twins Reared Apart were shown three emotion-inducing films and their facial responses recorded. These recordings were blindly scored by trained raters. Ranked correlations between twins were calculated controlling for age and sex. Results: Twin pairs were significantly correlated for facial expressions of general positive emotions, happiness, surprise, and anger, but not for general negative emotions, sadness, or disgust, or for average emotional intensity. MZ pairs (n = 18) were more correlated than DZ pairs (n = 10) for most but not all emotional expressions. Conclusions: Since these twin pairs had minimal contact with each other prior to testing, these results support significant genetic effects on the facial display of at least some human emotions in response to standardized stimuli. The small sample size resulted in estimated twin correlations with very wide confidence intervals.

