Fusion of Facial Expressions and EEG for Multimodal Emotion Recognition

2017
Vol 2017
pp. 1-8
Author(s):
Yongrui Huang
Jianhao Yang
Pengkai Liao
Jiahui Pan

This paper proposes two multimodal fusion methods between brain and peripheral signals for emotion recognition. The input signals are electroencephalogram (EEG) and facial expression. The stimuli are based on a subset of movie clips that correspond to four specific areas of the valence-arousal emotional space (happiness, neutral, sadness, and fear). For facial expression detection, four basic emotion states (happiness, neutral, sadness, and fear) are detected by a neural network classifier. For EEG detection, four basic emotion states and three emotion intensity levels (strong, ordinary, and weak) are detected by two support vector machine (SVM) classifiers, respectively. Emotion recognition is based on two decision-level fusion methods that combine the EEG and facial expression detections using a sum rule or a product rule. Twenty healthy subjects attended two experiments. The results show that the accuracies of the two multimodal fusion detections are 81.25% and 82.75%, respectively, both higher than that of facial expression detection (74.38%) or EEG detection (66.88%). Combining facial expression and EEG information for emotion recognition compensates for the shortcomings of each as a single information source.
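The sum and product decision-level fusion rules described above can be sketched as follows. This is an illustrative sketch, not the authors' code: `fuse_decisions` and the toy score vectors are made-up names, and real inputs would be the per-class probabilities emitted by the facial-expression and EEG classifiers.

```python
import numpy as np

def fuse_decisions(p_face, p_eeg, rule="sum"):
    """Decision-level fusion of two classifiers' per-class score vectors.

    p_face, p_eeg: per-class probabilities from the facial-expression and
    EEG classifiers. Returns the index of the winning emotion class.
    """
    p_face = np.asarray(p_face, dtype=float)
    p_eeg = np.asarray(p_eeg, dtype=float)
    if rule == "sum":
        combined = p_face + p_eeg      # sum rule: add the scores
    elif rule == "product":
        combined = p_face * p_eeg      # product rule: multiply the scores
    else:
        raise ValueError("rule must be 'sum' or 'product'")
    return int(np.argmax(combined))

# Example: four classes (happiness, neutral, sadness, fear).
face_scores = [0.50, 0.20, 0.20, 0.10]
eeg_scores  = [0.30, 0.40, 0.20, 0.10]
print(fuse_decisions(face_scores, eeg_scores, "sum"))      # class 0 wins
print(fuse_decisions(face_scores, eeg_scores, "product"))  # class 0 wins
```

The product rule penalizes classes on which either modality assigns a near-zero probability, while the sum rule is more tolerant of one modality being uncertain.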

2021
Vol 0 (0)
Author(s):
Bahar Hatipoglu Yilmaz
Cemal Kose

Emotion is one of the most complex and difficult expressions to predict. Nowadays, many recognition systems that use classification methods have focused on different types of emotion recognition problems. In this paper, we propose a multimodal fusion method between electroencephalography (EEG) and electrooculography (EOG) signals for emotion recognition. Before the feature extraction stage, we applied different angle-amplitude transformations to the EEG–EOG signals. These transformations take arbitrary time-domain signals and convert them into two-dimensional images named Angle-Amplitude Graphs (AAGs). We then extracted image-based features using the scale-invariant feature transform (SIFT) method, fused the features originating from the EEG–EOG signals, and finally classified them with support vector machines. To verify the validity of the proposed methods, we performed experiments on the multimodal DEAP dataset, a benchmark widely used for emotion analysis with physiological signals. In the experiments, we applied the proposed emotion recognition procedures on the arousal-valence dimensions. We achieved 91.53% accuracy for the arousal space and 90.31% for the valence space after fusion. Experimental results showed that combining AAG image features of the EEG–EOG signals in the baseline angle-amplitude transformation approaches enhanced classification performance on the DEAP dataset.
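The abstract does not give the exact angle-amplitude mapping, so the sketch below shows one plausible polar-style mapping from a 1-D signal to a 2-D image, purely to illustrate the idea of turning a time-domain signal into an image that image-feature extractors (such as SIFT) can consume. The radius-from-amplitude and angle-from-sample-index choices are assumptions, not the paper's definition.

```python
import numpy as np

def angle_amplitude_image(signal, size=64):
    """Illustrative angle-amplitude mapping of a 1-D signal to a 2-D image.

    Each sample n is placed at polar coordinates (theta_n, r_n): theta_n
    sweeps the sample index over [0, 2*pi) and r_n is the normalized
    amplitude. Occupied cells of a size x size raster are set to 1.
    (Assumed mapping for illustration only.)
    """
    x = np.asarray(signal, dtype=float)
    r = np.abs(x) / (np.abs(x).max() + 1e-12)        # amplitude -> radius in [0, 1]
    theta = 2 * np.pi * np.arange(len(x)) / len(x)   # sample index -> angle
    # Polar -> Cartesian, then into integer image coordinates.
    col = ((r * np.cos(theta) + 1) / 2 * (size - 1)).astype(int)
    row = ((r * np.sin(theta) + 1) / 2 * (size - 1)).astype(int)
    img = np.zeros((size, size))
    img[row, col] = 1.0
    return img

img = angle_amplitude_image(np.sin(np.linspace(0, 8 * np.pi, 512)))
print(img.shape)  # (64, 64)
```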


2019
Vol 11 (5)
pp. 105
Author(s):
Yongrui Huang
Jianhao Yang
Siyu Liu
Jiahui Pan

Emotion recognition plays an essential role in human–computer interaction. Previous studies have investigated the use of facial expression and electroencephalogram (EEG) signals from a single modality for emotion recognition separately, but few have paid attention to a fusion between them. In this paper, we adopted a multimodal emotion recognition framework combining facial expression and EEG, based on a valence-arousal emotional model. For facial expression detection, we followed a transfer learning approach with multi-task convolutional neural network (CNN) architectures to detect the states of valence and arousal. For EEG detection, the two learning targets (valence and arousal) were detected by separate support vector machine (SVM) classifiers. Finally, two decision-level fusion methods, based on an enumerate-weight rule or an adaptive boosting technique, were used to combine facial expression and EEG. In the experiment, the subjects were instructed to watch clips designed to elicit an emotional response and then reported their emotional state. We used two emotion datasets, the Database for Emotion Analysis using Physiological Signals (DEAP) and the MAHNOB human-computer interface (MAHNOB-HCI) dataset, to evaluate our method. In addition, we performed an online experiment to make our method more robust. We experimentally demonstrated that our method produces state-of-the-art results in terms of binary valence/arousal classification on the DEAP and MAHNOB-HCI datasets. Moreover, in the online experiment, we achieved 69.75% accuracy for the valence space and 70.00% accuracy for the arousal space after fusion, each surpassing the best-performing single modality (69.28% for the valence space and 64.00% for the arousal space). The results suggest that combining facial expression and EEG information for emotion recognition compensates for their defects as single information sources. The novelty of this work is as follows.
To begin with, we combined facial expression and EEG to improve the performance of emotion recognition. Furthermore, we used transfer learning techniques to tackle the problem of limited data and achieve higher accuracy for facial expression. Finally, in addition to implementing the widely used fusion method based on enumerating different weights between the two models, we also explored a novel fusion method applying a boosting technique.
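The enumerate-weight fusion rule mentioned above amounts to grid-searching a single mixing weight on validation data. A minimal sketch, with illustrative names and a toy validation set (the real inputs would be the CNN's and SVM's per-sample class probabilities):

```python
import numpy as np

def best_fusion_weight(p_face, p_eeg, labels, steps=101):
    """Enumerate-weight decision fusion: search w in [0, 1] so that
    w * p_face + (1 - w) * p_eeg maximizes validation accuracy.

    p_face, p_eeg: (n_samples, n_classes) probability arrays from the two
    modalities; labels: true class indices. Returns (best_w, best_acc).
    """
    p_face = np.asarray(p_face, dtype=float)
    p_eeg = np.asarray(p_eeg, dtype=float)
    labels = np.asarray(labels)
    best_w, best_acc = 0.0, -1.0
    for w in np.linspace(0.0, 1.0, steps):
        pred = np.argmax(w * p_face + (1 - w) * p_eeg, axis=1)
        acc = float(np.mean(pred == labels))
        if acc > best_acc:          # keep the first weight reaching the best accuracy
            best_w, best_acc = w, acc
    return best_w, best_acc

# Toy validation set: two classes (e.g. low/high valence), three samples.
p_face = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]]
p_eeg  = [[0.6, 0.4], [0.7, 0.3], [0.4, 0.6]]
labels = [0, 1, 0]
w, acc = best_fusion_weight(p_face, p_eeg, labels)
```

The chosen weight is then fixed and applied to test data; the boosting variant instead learns how much to trust each modality from its training error.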


Author(s):  
Abozar Atya Mohamed Atya ◽  
Khalid Hamid Bilal

The advent of artificial intelligence technology has narrowed the gap between humans and machines, equipping people to create near-perfect humanoids. Facial expression is an important non-verbal channel for communicating emotion, and this paper provides an overview of emotion recognition using facial expressions. A remarkable advantage of such techniques is that they have recently improved public security through tracking and recognition, which has drawn considerable research attention to the field. The approaches used for facial expression recognition include classifiers such as the Support Vector Machine (SVM), Artificial Neural Network (ANN), Convolutional Neural Network (CNN), Active Appearance Models, and other machine learning methods, all of which classify emotions based on regions of interest on the face such as the lips, lower jaw, eyebrows, and cheeks. By comparison, the reviewed studies show that average accuracy ranged from 51% up to 100% for the basic emotions but only 7% to 13% for compound emotions, indicating that the basic emotions are much easier to recognize.


2021
Vol 5 (10)
pp. 57
Author(s):
Vinícius Silva
Filomena Soares
João Sena Esteves
Cristina P. Santos
Ana Paula Pereira

Facial expressions are of utmost importance in social interactions, providing communicative prompts for speaking turns and feedback. Nevertheless, not everyone has the ability to express themselves socially and emotionally through verbal and non-verbal communication. In particular, individuals with Autism Spectrum Disorder (ASD) are characterized by impairments in social communication, repetitive patterns of behaviour, and restricted activities or interests. In the literature, robotic tools are reported to promote social interaction with children with ASD. The main goal of this work is to develop a system capable of automatically detecting emotions through facial expressions and interfacing with a robotic platform (the Zeno R50 Robokind® robotic platform, named ZECA) in order to allow social interaction with children with ASD. ZECA was used as a mediator in social communication activities. The experimental setup and methodology for a real-time facial expression (happiness, sadness, anger, surprise, fear, and neutral) recognition system were based on the Intel® RealSense™ 3D sensor, facial feature extraction, and a multiclass Support Vector Machine classifier. The results obtained allow us to infer that the proposed system is suitable for support sessions with children with ASD, giving a strong indication that it may be used to foster emotion recognition and imitation skills.
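The classification stage of such a pipeline can be sketched with scikit-learn's `SVC` as a stand-in for the paper's multiclass SVM. The synthetic feature vectors and the 20-dimensional feature size below are placeholders: in the actual system the features come from the RealSense-based facial feature extraction, which is not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC  # stand-in for the paper's multiclass SVM

EMOTIONS = ["happiness", "sadness", "anger", "surprise", "fear", "neutral"]

rng = np.random.default_rng(0)
# Placeholder training data: 120 synthetic 20-dim facial feature vectors.
X_train = rng.normal(size=(120, 20))
y_train = rng.integers(0, len(EMOTIONS), size=120)

clf = SVC(kernel="rbf", probability=True)  # one-vs-one multiclass by default
clf.fit(X_train, y_train)

# Features extracted from one incoming camera frame (placeholder vector).
frame_features = rng.normal(size=(1, 20))
pred = EMOTIONS[int(clf.predict(frame_features)[0])]
print(pred in EMOTIONS)  # True
```

In a real-time loop, the predicted label would be forwarded to the robot controller so that ZECA can mirror or name the detected emotion.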


2020
Vol 7 (9)
pp. 190699
Author(s):
Sarah A. H. Alharbi
Katherine Button
Lingshan Zhang
Kieran J. O'Shea
Vanessa Fasolt
...

Evidence that affective factors (e.g. anxiety, depression, affect) are significantly related to individual differences in emotion recognition is mixed. Palermo et al. (2018, J. Exp. Psychol. Hum. Percept. Perform. 44, 503–517) reported that individuals who scored lower in anxiety performed significantly better on two measures of facial-expression recognition (emotion-matching and emotion-labelling tasks), but not a third measure (the multimodal emotion recognition test). By contrast, facial-expression recognition was not significantly correlated with measures of depression, positive or negative affect, empathy, or autistic-like traits. Because the range of affective factors considered in this study and its use of multiple expression-recognition tasks make it a relatively comprehensive investigation of the role of affective factors in facial-expression recognition, we carried out a direct replication. In common with Palermo et al. (2018), scores on the DASS anxiety subscale negatively predicted performance on the emotion recognition tasks across multiple analyses, although these correlations were only consistently significant for performance on the emotion-labelling task. However, and by contrast with Palermo et al. (2018), other affective factors (e.g. those related to empathy) often also significantly predicted emotion-recognition performance. Collectively, these results support the proposal that affective factors predict individual differences in emotion recognition, but these correlations are not necessarily specific to measures of general anxiety, such as the DASS anxiety subscale.


2019
Vol 9 (11)
pp. 2218
Author(s):
Maria Grazia Violante
Federica Marcolin
Enrico Vezzetti
Luca Ulrich
Gianluca Billia
...

This study proposes a novel quality function deployment (QFD) design methodology based on customers' emotions conveyed by facial expressions. Current advances in pattern recognition related to face recognition techniques have fostered cross-fertilization between this context and other fields, such as product design and human-computer interaction. In particular, current technologies for monitoring human emotions have supported the birth of advanced emotional design techniques, whose main focus is to convey users' emotional feedback into the design of novel products. As quality function deployment aims at transforming the voice of the customer into the engineering features of a product, it appears to be an appropriate and promising nest in which to embed users' emotional feedback through new emotional design methodologies such as facial expression recognition. The present methodology consists of interviewing the user while acquiring his/her face with a depth camera (providing three-dimensional (3D) data), clustering the face information into different emotions with a support vector machine classifier, and assigning weights to customers' needs based on the detected facial expressions. The proposed method has been applied to a case study in the context of agriculture and validated by a consortium. The approach appears sound and capable of collecting the unconscious feedback of the interviewee.
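The final step, translating the expressions detected while a customer discusses each need into a QFD weight, can be sketched as follows. The per-emotion valence values and the linear scaling are assumptions made for illustration, not the paper's calibration.

```python
# Assumed valence scores per detected expression (illustrative only).
VALENCE = {"happiness": 1.0, "surprise": 0.5, "neutral": 0.0,
           "sadness": -0.5, "anger": -1.0}

def need_weight(detected, base=1.0, gain=0.5):
    """Scale a base QFD need weight by the mean valence of the facial
    expressions detected while the interviewee discussed that need."""
    if not detected:
        return base  # no expressions observed: leave the weight unchanged
    mean_valence = sum(VALENCE[e] for e in detected) / len(detected)
    return base * (1 + gain * mean_valence)

# A need discussed with mostly positive expressions gets boosted.
w = need_weight(["happiness", "happiness", "neutral"])
```

The boosted or attenuated weights would then feed the usual QFD house-of-quality matrix in place of purely self-reported importance ratings.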

