The Perception of Facial Expressions from Two-Frame Apparent Motion

Perception ◽  
10.1068/p5769 ◽  
2008 ◽  
Vol 37 (10) ◽  
pp. 1560-1568 ◽  
Author(s):  
Naoyuki Matsuzaki ◽  
Takao Sato

We examined the contribution of motion information to the perception of facial expressions using point-light displays of faces. First, we established the minimum number of feature points necessary for the perception of facial expression from a single image. Next, we examined the effects of motion with a stimulus using an insufficient number of dots, under two conditions. In the motion condition, apparent motion was induced by a preceding neutral face image followed by an emotional face image. In the repetition condition, the same emotional face image was presented twice. Performance was higher in the motion condition than in the repetition condition. This advantage was reduced by inserting a white blank field between the neutral and emotional faces, confirming that the improvement was due to motion.

2020 ◽  
Vol 34 (5) ◽  
pp. 585-594
Author(s):  
Shivangi Anthwal ◽  
Dinesh Ganotra

Facial expressions are the most preeminent means of conveying one's emotions and play a significant role in interpersonal communication. Researchers are in pursuit of endowing machines with the ability to interpret emotions from facial expressions, as that will make human-computer interaction more efficient. With the objective of effective affect cognition from visual information, we present two dynamic descriptors that can recognise seven principal emotions. The variables of the appearance-based descriptor, FlowCorr, indicate intra-class similarity and inter-class difference by quantifying the degree of correlation between the optical flow associated with the image pair and each pre-designed template describing the motion pattern associated with a different expression. The second, shape-based descriptor, dyn-HOG, computes HOG values on the difference image derived by subtracting the neutral face from the emotional face, and is demonstrated to be more discriminative than previously used static HOG descriptors for classifying facial expressions. Recognition accuracies obtained with a multi-class support vector machine on the CK+ and KDEF-dyn datasets are competitive with the results of state-of-the-art techniques and with empirical analyses of human emotion recognition.
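The abstract does not give implementation details, but the core of a dyn-HOG-style descriptor (HOG on the neutral-to-emotional difference image) can be sketched in plain NumPy. This is our own minimal toy version, not the authors' code; a real implementation would use a tuned HOG (e.g., block normalisation) from a vision library.

```python
import numpy as np

def dyn_hog(neutral, emotional, cell=8, bins=9):
    """Toy dyn-HOG sketch: gradient-orientation histograms computed on
    the difference image (emotional face minus neutral face)."""
    diff = emotional.astype(float) - neutral.astype(float)
    gy, gx = np.gradient(diff)                     # row/col gradients
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    h, w = diff.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))  # L2 per cell
    return np.concatenate(feats)
```

The resulting vector would then be fed to a multi-class SVM, as the abstract describes.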


Author(s):  
Michela Balconi

Neuropsychological studies have underlined the presence of distinct brain correlates dedicated to analysing facial expressions of emotion. Some cerebral circuits appear to be specific to emotional face comprehension as a function of conscious vs. unconscious processing of emotional information. Moreover, the emotional content of faces (i.e., positive vs. negative; more or less arousing) may activate specific cortical networks. Among other findings, recent studies have clarified the contribution of each hemisphere to face comprehension as a function of the type of emotion (mainly the positive vs. negative distinction) and of the specific task (comprehending vs. producing facial expressions). An overview of ERP (event-related potential) analyses is proposed in order to understand how a face may be processed by an observer and how the observer can make the face a meaningful construct even in the absence of awareness. Finally, brain oscillations are considered in order to explain the synchronization of neural populations in response to emotional faces under conscious vs. unconscious processing.


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Mohammad Rafayet Ali ◽  
Taylor Myers ◽  
Ellen Wagner ◽  
Harshil Ratnu ◽  
E. Ray Dorsey ◽  
...  

A prevalent symptom of Parkinson's disease (PD) is hypomimia, reduced facial expressiveness. In this paper, we present a method for diagnosing PD that utilizes the study of micro-expressions. We analyzed the facial action units (AUs) from 1812 videos of 604 individuals (61 with PD and 543 without PD; mean age 63.9 years, SD 7.8) collected online through a web-based tool (www.parktest.net). In these videos, participants were asked to make three facial expressions (a smiling, disgusted, and surprised face) followed by a neutral face. Using techniques from computer vision and machine learning, we objectively measured the variance of the facial muscle movements and used it to distinguish between individuals with and without PD. The prediction accuracy using facial micro-expressions was comparable to that of methodologies that utilize motor symptoms. Logistic regression analysis revealed that participants with PD had less variance in AU6 (cheek raiser), AU12 (lip corner puller), and AU4 (brow lowerer) than non-PD individuals. An automated classifier using a Support Vector Machine was trained on the variances and achieved 95.6% accuracy. Using facial expressions as a future digital biomarker for PD could be potentially transformative for patients in need of remote diagnoses due to physical separation (e.g., due to COVID) or immobility.
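The feature-extraction step described above (per-video variance of AU intensities) is simple enough to sketch. The code below is our illustration, with synthetic data standing in for real AU time series (tools such as OpenFace emit frame-wise AU intensities); the actual study trained an SVM on these features.

```python
import numpy as np

def au_variance_features(au_series, au_names=("AU04", "AU06", "AU12")):
    """Per-video feature vector: the variance over time of each facial
    action unit's intensity. au_series maps an AU name to a 1-D array
    of frame-wise intensities."""
    return np.array([np.var(au_series[name]) for name in au_names])

# Hypothetical illustration: a hypomimia-like video has flatter
# (lower-variance) AU traces than an expressive control video.
rng = np.random.default_rng(0)
pd_like = {n: 0.1 * rng.standard_normal(100) for n in ("AU04", "AU06", "AU12")}
control = {n: 1.0 * rng.standard_normal(100) for n in ("AU04", "AU06", "AU12")}
f_pd, f_ctrl = au_variance_features(pd_like), au_variance_features(control)
```

Here `f_pd` comes out much smaller than `f_ctrl` component-wise, which is the separation the classifier exploits.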


2020 ◽  
Vol 11 ◽  
Author(s):  
Jan N. Schneider ◽  
Timothy R. Brick ◽  
Isabel Dziobek

Arousal is one of the dimensions of core affect and is frequently used to describe experienced or observed emotional states. While arousal ratings of facial expressions are collected in many studies, it is not well understood how arousal is displayed in, or interpreted from, facial expressions. In the context of socioemotional disorders such as Autism Spectrum Disorder, this poses the question of a differential use of facial information for arousal perception. In this study, we demonstrate how automated face-tracking tools can be used to extract predictors of arousal judgments. We find moderate to strong correlations among all measures of static information on one hand and all measures of dynamic information on the other. Based on these results, we tested two measures, average distance to the neutral face and average facial movement speed, within and between neurotypical individuals (N = 401) and individuals with autism (N = 19). Distance to the neutral face was predictive of arousal in both groups. Lower mean arousal ratings were found for the autistic group, but no difference in the correlation between the measures and arousal ratings could be found between groups. Results were replicated in a high-autistic-traits group. The findings suggest a qualitatively similar perception of arousal for individuals with and without autism. No correlations between valence ratings and any of the measures could be found, emphasizing the specificity of our tested measures. Distance and speed predictors share variability, and thus speed should not be discarded as a predictor of arousal ratings.
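Both predictors named above can be computed directly from face-tracker landmark output. The sketch below is our own minimal version under the assumption that the tracker returns 2-D landmark coordinates per frame; the study's exact preprocessing (alignment, normalisation) is not specified in the abstract.

```python
import numpy as np

def arousal_predictors(landmarks, neutral):
    """landmarks: (T, K, 2) array of K tracked 2-D facial landmarks
    over T frames; neutral: (K, 2) landmarks of the neutral face.
    Returns (average distance to the neutral face,
             average facial movement speed)."""
    # Per-frame mean Euclidean distance of each landmark to its
    # neutral-face position, averaged over the clip.
    dist = np.linalg.norm(landmarks - neutral, axis=2).mean(axis=1)
    # Per-frame mean landmark displacement between consecutive frames.
    speed = np.linalg.norm(np.diff(landmarks, axis=0), axis=2).mean(axis=1)
    return dist.mean(), speed.mean()
```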


Perception ◽  
2016 ◽  
Vol 46 (5) ◽  
pp. 624-631 ◽  
Author(s):  
Andreas M. Baranowski ◽  
H. Hecht

Almost a hundred years ago, the Russian filmmaker Lev Kuleshov conducted his now famous editing experiment in which different objects were added to a given film scene featuring a neutral face. It is said that the audience interpreted the unchanged facial expression as a function of the added object (e.g., an added bowl of soup made the face express hunger). This interaction effect has been dubbed the "Kuleshov effect." In the current study, we explored the role of sound in the evaluation of facial expressions in films. Thirty participants watched different clips of faces that were intercut with neutral scenes, featuring either happy music, sad music, or no music at all. This was crossed with facial expressions that were happy, sad, or neutral. We found that the music significantly influenced participants' emotional judgments of facial expression. Thus, the intersensory effects of music are more specific than previously thought. They alter the evaluation of film scenes and can give meaning to ambiguous situations.


Author(s):  
Guojun Lin ◽  
Meng Yang ◽  
Linlin Shen ◽  
Mingzhong Yang ◽  
Mei Xie

For face recognition, conventional dictionary learning (DL) methods have some disadvantages. First, face images of the same person vary with facial expression, pose, illumination, and disguises, so it is hard to obtain a robust dictionary for face recognition. Second, they do not cover important components (e.g., particularity and disturbance) completely, which limits their performance. In this paper, we propose a novel robust and discriminative DL (RDDL) model. The proposed model uses sample diversities of the same face image to learn a robust dictionary, which includes class-specific dictionary atoms and disturbance dictionary atoms. These atoms can well represent the data from different classes. Discriminative regularizations on the dictionary and the representation coefficients are used to exploit discriminative information, which effectively improves the classification capability of the dictionary. The proposed RDDL is extensively evaluated on benchmark face image databases, and it shows superior performance to many state-of-the-art dictionary learning methods for face recognition.
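The classification principle behind such class-specific dictionaries can be illustrated with a stripped-down sketch: code the test sample over each class's dictionary and pick the class with the smallest reconstruction residual. This toy version is ours and omits everything that makes RDDL distinctive (disturbance atoms, discriminative regularisers, sparsity), using plain least squares instead.

```python
import numpy as np

def classify_by_residual(y, dicts):
    """Representation-based classification sketch: y is a (d,) test
    sample, dicts a list of (d, n_atoms) class-specific dictionaries.
    Returns the index of the class whose dictionary reconstructs y
    with the smallest residual."""
    residuals = []
    for D in dicts:
        x, *_ = np.linalg.lstsq(D, y, rcond=None)   # code y over D
        residuals.append(np.linalg.norm(y - D @ x)) # reconstruction error
    return int(np.argmin(residuals))
```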


2012 ◽  
Vol 25 (0) ◽  
pp. 46-47
Author(s):  
Kazumichi Matsumiya

Adaptation to a face belonging to a facial category, such as expression, causes a subsequently neutral face to be perceived as belonging to an opposite facial category. This is referred to as the face aftereffect (FAE) (Leopold et al., 2001; Rhodes et al., 2004; Webster et al., 2004). The FAE is generally thought of as being a visual phenomenon. However, recent studies have shown that humans can haptically recognize a face (Kilgour and Lederman, 2002; Lederman et al., 2007). Here, I investigated whether FAEs could occur in haptic perception of faces. Three types of facial expressions (happy, sad and neutral) were generated using a computer-graphics software, and three-dimensional masks of these faces were made from epoxy-cured resin for use in the experiments. An adaptation facemask was positioned on the left side of a table in front of the participant, and a test facemask was placed on the right. During adaptation, participants haptically explored the adaptation facemask with their eyes closed for 20 s, after which they haptically explored the test facemask for 5 s. Participants were then requested to classify the test facemask as either happy or sad. The experiment was performed under two adaptation conditions: (1) with adaptation to a happy facemask and (2) with adaptation to a sad facemask. In both cases, the expression of the test facemask was neutral. The results indicate that adaptation to a haptic face that belongs to a specific facial expression causes a subsequently touched neutral face to be perceived as having the opposite facial expression, suggesting that FAEs can be observed in haptic perception of faces.


2017 ◽  
Vol 77 (15) ◽  
pp. 20177-20206 ◽  
Author(s):  
Priya Saha ◽  
Debotosh Bhattacharjee ◽  
Barin Kumar De ◽  
Mita Nasipuri

2013 ◽  
Vol 2013 ◽  
pp. 1-14 ◽  
Author(s):  
Jicai Zhang ◽  
Haibo Chen

Two strategies for estimating open boundary conditions (OBCs) with the adjoint method are compared by carrying out semi-idealized numerical experiments. In the first strategy, the OBC is assumed to be partly space varying and is generated by linearly interpolating the values at selected feature points. The advantage is that the values at the feature points are taken as control variables, so that the variations of the curves can be reproduced by a minimum number of points. In the second strategy, the OBC is assumed to be fully space varying, and the values at every open boundary point are taken as control variables. A series of semi-idealized experiments is carried out to compare the effectiveness of the two inversion strategies. The results demonstrate that the inversion effect is in inverse proportion to the number of feature points, which characterizes the spatial complexity of the open boundary forcing. The effect of the ill-posedness of the inverse problem is amplified if the observations contain noise. Parameter estimation problems with more control variables are much more sensitive to data noise, and the negative effects of noise can be restricted by reducing the number of control variables. This work provides concrete evidence that the ill-posedness of an inverse problem can generate wrong parameter inversion results and produce an unreal "good data fitting."
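The first strategy's parameterisation is just linear interpolation from a few control values to the full boundary. A minimal sketch (our illustration; the actual study embeds this in an adjoint optimisation loop) shows how a handful of feature-point values generates the OBC at every boundary point:

```python
import numpy as np

def obc_from_feature_points(feature_idx, feature_vals, n_boundary):
    """Strategy 1: the open boundary condition at each of n_boundary
    points is linearly interpolated from the values at a few feature
    points, which are the only control variables of the inversion."""
    return np.interp(np.arange(n_boundary), feature_idx, feature_vals)
```

With, say, 3 feature points controlling a 9-point boundary, the inverse problem has 3 unknowns instead of 9, which is exactly the reduction in control variables the abstract credits with suppressing noise amplification.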


2021 ◽  
Vol 14 (3) ◽  
pp. 91-103
Author(s):  
L.A. Khrisanfova

The aim of this study was to investigate how differences in anxiety levels relate to selective sensitivity to basic emotions (emotional bias) at minimal exposure times. Masked pictures of happy, angry, fearful, disgusted, surprised, sad, and neutral facial expressions were presented to 298 men at exposure times of 16 ms, 34 ms, 49 ms, and 66 ms. After each image, participants selected on screen, by key press, the name of the emotion they judged most suitable. The Taylor Manifest Anxiety Scale (TMAS) was used to measure trait anxiety. Participants came from various professional groups (firefighters, military personnel, athletes, psychologists, mathematicians). We found that selective sensitivity to basic emotions at exposure times up to 49 ms is shaped by the perceiver's personality. Highly anxious men were unconsciously more likely to choose fear, anger, and disgust, and increases in anxiety were accompanied by a decreased preference for anger and happiness. Low-anxiety men unconsciously ignored fear, anger, and disgust and preferred the neutral face. Men of different professions differed in their level of anxiety and in their emotional bias toward basic emotions: firefighters had the lowest level of anxiety, mathematicians the highest.

