Advocating a Componential Appraisal Model to Guide Emotion Recognition

2012 ◽  
Vol 3 (1) ◽  
pp. 18-32 ◽  
Author(s):  
Marcello Mortillaro ◽  
Ben Meuleman ◽  
Klaus R. Scherer

Most models of automatic emotion recognition use a discrete perspective and a black-box approach, i.e., they output an emotion label chosen from a limited pool of candidate terms on the basis of purely statistical methods. Although these models are successful in emotion classification, a number of practical and theoretical drawbacks limit the range of possible applications. In this paper, the authors suggest adopting an appraisal perspective in modeling emotion recognition. The authors propose to use appraisals as an intermediate layer between expressive features (input) and emotion labeling (output). The model would then consist of two parts: first, expressive features would be used to estimate appraisals; second, the resulting appraisals would be used to predict an emotion label. While the second part of the model has already been the object of several studies, the first is unexplored. The authors argue that this model should be built on the basis of both theoretical predictions and empirical results about the link between specific appraisals and expressive features. For this purpose, the authors suggest using the component process model of emotion, which includes detailed predictions of the efferent effects of appraisals on facial expression, voice, and body movements.
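The two-part architecture described in this abstract could be sketched as follows. This is a minimal illustration only: the two expressive features, the stage-1 weights, and the appraisal prototypes are invented placeholders, not values from the paper.

```python
# Hypothetical two-stage recognizer: stage 1 maps expressive features to
# appraisal estimates; stage 2 maps appraisals to an emotion label via the
# nearest appraisal prototype. All numbers below are illustrative.

APPRAISALS = ["novelty", "pleasantness", "goal_conduciveness", "coping_power"]

# Stage 1: illustrative linear weights from two expressive features
# (e.g. brow raise, smile intensity) to each appraisal dimension.
STAGE1_WEIGHTS = {
    "novelty":            (0.9, 0.0),
    "pleasantness":       (0.0, 0.8),
    "goal_conduciveness": (0.1, 0.7),
    "coping_power":       (0.2, 0.1),
}

# Stage 2: illustrative appraisal prototypes for a few emotion labels.
PROTOTYPES = {
    "joy":   (0.3, 0.9, 0.9, 0.6),
    "fear":  (0.8, 0.1, 0.1, 0.2),
    "anger": (0.5, 0.1, 0.1, 0.9),
}

def estimate_appraisals(features):
    """Stage 1: expressive features -> appraisal vector."""
    return tuple(
        sum(w * f for w, f in zip(STAGE1_WEIGHTS[a], features))
        for a in APPRAISALS
    )

def label_from_appraisals(appraisals):
    """Stage 2: nearest appraisal prototype -> emotion label."""
    def dist(proto):
        return sum((a - p) ** 2 for a, p in zip(appraisals, proto))
    return min(PROTOTYPES, key=lambda e: dist(PROTOTYPES[e]))

def recognize(features):
    return label_from_appraisals(estimate_appraisals(features))
```

In practice both stages would be fitted from data (the first stage being the unexplored part the authors highlight); the intermediate appraisal vector is what makes the pipeline interpretable rather than a black box.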

2005 ◽  
Vol 58 (7) ◽  
pp. 1173-1197 ◽  
Author(s):  
Naomi C. Carroll ◽  
Andrew W. Young

Four experiments investigated priming of emotion recognition using a range of emotional stimuli, including facial expressions, words, pictures, and nonverbal sounds. In each experiment, a prime–target paradigm was used with related, neutral, and unrelated pairs. In Experiment 1, facial expression primes preceded word targets in an emotion classification task. Related primes facilitated emotional word targets, with no inhibition by unrelated primes. Experiment 2 reversed these primes and targets and found the same pattern of results, demonstrating bidirectional priming between facial expressions and words. Experiment 2 also found priming of facial expression targets by picture primes. Experiment 3 demonstrated that priming occurs not just between pairs of stimuli that have a high co-occurrence in the environment (for example, nonverbal sounds and facial expressions), but also with stimuli that co-occur less frequently and are linked mainly by their emotional category (for example, nonverbal sounds and printed words). This shows the importance of the prime and target sharing a common emotional category, rather than their previous co-occurrence. Experiment 4 extended the findings by showing that there are category-based effects as well as valence effects in emotional priming, supporting a categorical view of emotion recognition.


2020 ◽  
Author(s):  
Laura Israel ◽  
Felix D. Schönbrodt

Appraisal theories are a prominent approach to the explanation and prediction of emotions. According to these theories, the subjective perception of an emotion results from a series of specific event evaluations. To validate and extend one of the best-known representatives of appraisal theory, the Component Process Model by Klaus Scherer, we implemented four computational appraisal models that predicted emotion labels based on prototype similarity calculations. Different weighting algorithms, mapping the models' input to a distinct emotion label, were integrated into the models. We evaluated the plausibility of the models' structure by assessing their predictive power and comparing their performance to a baseline model and a highly predictive machine learning algorithm. Model parameters were estimated from empirical data and validated out of sample. All models performed notably better than the baseline model and were able to explain part of the variance in the emotion labels. The preferred model, which yielded relatively high performance and stable parameter estimates, predicted the correct emotion label with an accuracy of 40.2% and the correct emotion family with an accuracy of 76.9%. The weighting algorithm of this favored model corresponds to the weighting complexity implied by the Component Process Model, but uses differing weighting parameters.
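The weighted prototype-similarity prediction described here can be sketched in a few lines. The appraisal dimensions, prototype values, and weights below are invented for illustration; they are not the fitted parameters reported in the study.

```python
# Minimal sketch of weighted prototype-similarity prediction: an appraisal
# vector is assigned the emotion whose prototype is closest under a
# per-appraisal weighted distance. All numbers are illustrative placeholders.

import math

PROTOTYPES = {  # emotion -> appraisal prototype (novelty, pleasantness, control)
    "joy":     (0.4, 0.9, 0.7),
    "sadness": (0.2, 0.1, 0.2),
    "fear":    (0.9, 0.1, 0.1),
}

def predict(appraisals, weights=(1.0, 1.0, 1.0)):
    """Return the emotion whose weighted prototype distance is smallest."""
    def wdist(proto):
        return math.sqrt(sum(w * (a - p) ** 2
                             for w, a, p in zip(weights, appraisals, proto)))
    return min(PROTOTYPES, key=lambda e: wdist(PROTOTYPES[e]))

def accuracy(samples, weights=(1.0, 1.0, 1.0)):
    """Fraction of (appraisal_vector, label) samples predicted correctly."""
    hits = sum(predict(a, weights) == y for a, y in samples)
    return hits / len(samples)
```

In the study, the weights were estimated from empirical data and compared out of sample against a baseline and a machine learning benchmark; this sketch only shows the shape of the prediction rule, not the fitting procedure.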


Author(s):  
Klaus Scherer ◽  
Marcello Mortillaro ◽  
Marc Mehu

Emotion researchers generally concur that most emotions in humans and animals are elicited by the appraisals of events that are highly relevant for the organism, generating action tendencies that are often accompanied by changes in expression, autonomic physiology, and feeling. Scherer’s component process model of emotion (CPM) postulates that individual appraisal checks drive the dynamics and configuration of the facial expression of emotion and that emotion recognition is based on appraisal inference with consequent emotion attribution. This chapter outlines the model and reviews the accrued empirical evidence that supports these claims, covering studies that experimentally induced specific appraisals or that used induction of emotions with typical appraisal configurations (measuring facial expression via electromyographic recording) or behavioral coding of facial action units. In addition, recent studies analyzing the mechanisms of emotion recognition are shown to support the theoretical assumptions.


2020 ◽  
Author(s):  
Aishwarya Gupta ◽  
Devashish Sharma ◽  
Shaurya Sharma ◽  
Anushree Agarwal

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5135
Author(s):  
Ngoc-Dau Mai ◽  
Boon-Giin Lee ◽  
Wan-Young Chung

In this research, we develop an affective computing method based on machine learning for emotion recognition using a wireless protocol and a custom-designed wearable electroencephalography (EEG) device. The system collects EEG signals using an eight-electrode placement on the scalp: two electrodes are placed over the frontal lobe and the other six over the temporal lobe. We performed experiments on eight subjects while they watched emotive videos. Six entropy measures were employed to extract suitable features from the EEG signals. Next, we evaluated our proposed models using three popular classifiers for emotion classification: a support vector machine (SVM), a multi-layer perceptron (MLP), and a one-dimensional convolutional neural network (1D-CNN); both subject-dependent and subject-independent strategies were used. Our experimental results showed that the highest average accuracies in the subject-dependent and subject-independent cases were 85.81% and 78.52%, respectively; both were achieved using a combination of the sample entropy measure and the 1D-CNN. Moreover, through electrode selection, our study identifies the T8 position (above the right ear) in the temporal lobe as the most critical channel among the proposed measurement positions for emotion classification. Our results demonstrate the feasibility and efficiency of the proposed EEG-based affective computing method for emotion recognition in real-world applications.
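Sample entropy, the best-performing feature in this abstract, quantifies signal irregularity: SampEn(m, r) = -ln(A/B), where B counts pairs of length-m templates within tolerance r (Chebyshev distance) and A counts such pairs of length m+1. A straightforward quadratic-time sketch (the defaults m=2 and r=0.2·SD are common conventions, not necessarily the study's settings):

```python
# Self-contained sketch of sample entropy; regular signals yield low values,
# irregular signals high values. Quadratic in signal length, so real EEG
# pipelines would use a vectorized or optimized implementation.

import math

def sample_entropy(signal, m=2, r=None):
    """SampEn(m, r) = -ln(A / B): B counts template pairs of length m within
    tolerance r (Chebyshev distance), A counts pairs of length m + 1."""
    n = len(signal)
    if r is None:
        mean = sum(signal) / n
        sd = math.sqrt(sum((x - mean) ** 2 for x in signal) / n)
        r = 0.2 * sd  # common choice: 20% of the signal's standard deviation

    def count_matches(length):
        templates = [signal[i:i + length] for i in range(n - length + 1)]
        count = 0
        for i in range(len(templates)):          # exclude self-matches
            for j in range(i + 1, len(templates)):
                if max(abs(a - b)
                       for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

For a perfectly alternating signal such as 1, 2, 1, 2, ... the value is small (high regularity), which is the kind of contrast the entropy features exploit when separating emotional states in EEG.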

