Emotion Recognition using EEG and Physiological Data for Robot-Assisted Rehabilitation Systems

Author(s):  
Elif Gümüslü ◽  
Duygun Erol Barkana ◽  
Hatice Köse


Sensors ◽ 
2020 ◽  
Vol 20 (17) ◽  
pp. 4723
Author(s):  
Patrícia Bota ◽  
Chen Wang ◽  
Ana Fred ◽  
Hugo Silva

Emotion recognition based on physiological data classification has been a topic of growing interest for more than a decade. However, the literature lacks a systematic analysis of, among other things, which classifiers, sensor modalities and features to use, and what range of accuracy to expect. In this work, we evaluate emotion in terms of low/high arousal and valence classification through Supervised Learning (SL), Decision Fusion (DF) and Feature Fusion (FF) techniques using multimodal physiological data, namely Electrocardiography (ECG), Electrodermal Activity (EDA), Respiration (RESP) and Blood Volume Pulse (BVP). The main contribution of our work is a systematic study across five public datasets commonly used in the Emotion Recognition (ER) state-of-the-art, namely: (1) classification performance analysis of ER benchmarking datasets in the arousal/valence space; (2) a summary of the ranges of classification accuracy reported across the existing literature; (3) a characterisation of the results for diverse classifiers, sensor modalities and feature-set combinations for ER using accuracy and F1-score; (4) an exploration of an extended feature set for each modality; (5) a systematic analysis of multimodal classification with DF and FF approaches. The experimental results show that FF is the most competitive technique in terms of classification accuracy and computational complexity. We obtain results superior or comparable to those reported in the state-of-the-art for the selected datasets.
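To make the FF/DF distinction concrete, here is a minimal Python sketch, assuming hypothetical ECG and EDA feature matrices and an SVM back-end via scikit-learn; it illustrates the two fusion schemes in general, not the paper's exact pipeline.

```python
# Minimal sketch contrasting Feature Fusion (FF) and Decision Fusion (DF) for
# binary arousal/valence classification. Feature matrices, sizes and the SVM
# choice are illustrative assumptions, not the paper's exact configuration.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical per-modality feature matrices (n_windows x n_features).
X_ecg, X_eda = rng.normal(size=(200, 12)), rng.normal(size=(200, 8))
y = rng.integers(0, 2, size=200)          # stand-in low/high arousal labels
train, test = slice(0, 150), slice(150, 200)

# Feature Fusion: concatenate modality features, train one classifier.
X_ff = np.hstack([X_ecg, X_eda])
ff = SVC(probability=True).fit(X_ff[train], y[train])
ff_pred = ff.predict(X_ff[test])

# Decision Fusion: one classifier per modality, average their probabilities.
clf_ecg = SVC(probability=True).fit(X_ecg[train], y[train])
clf_eda = SVC(probability=True).fit(X_eda[train], y[train])
proba = (clf_ecg.predict_proba(X_ecg[test]) + clf_eda.predict_proba(X_eda[test])) / 2
df_pred = proba.argmax(axis=1)

print("FF accuracy:", (ff_pred == y[test]).mean())
print("DF accuracy:", (df_pred == y[test]).mean())
```

FF trains a single model over the concatenated feature space, while DF trains one model per modality and merges their outputs, which is why FF tends to be cheaper at inference time when the per-modality models are of comparable size.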


2021 ◽  
Vol 15 ◽  
Author(s):  
Ruixin Li ◽  
Yan Liang ◽  
Xiaojian Liu ◽  
Bingbing Wang ◽  
Wenxin Huang ◽  
...  

Emotion recognition plays an important role in intelligent human–computer interaction, but related research still faces the problems of low accuracy and subject dependence. In this paper, an open-source software toolbox called MindLink-Eumpy is developed to recognize emotions by integrating electroencephalogram (EEG) and facial expression information. MindLink-Eumpy first applies a series of tools to automatically obtain physiological data from subjects, then analyzes the facial expression data and the EEG data separately, and finally fuses the two signals at the decision level. For facial expression detection, MindLink-Eumpy uses a multitask convolutional neural network (CNN) based on transfer learning. For EEG detection, it provides two algorithms: a subject-dependent model based on a support vector machine (SVM) and a subject-independent model based on a long short-term memory (LSTM) network. In the decision-level fusion, a weight enumerator and the AdaBoost technique are applied to combine the predictions of the SVM and the CNN. We conducted two offline experiments, on the Database for Emotion Analysis Using Physiological Signals (DEAP) and the Multimodal Database for Affect Recognition and Implicit Tagging (MAHNOB-HCI), and an online experiment on 15 healthy subjects. The results show that multimodal methods outperform single-modal methods in both offline and online experiments. In the subject-dependent condition, the multimodal method achieved accuracies of 71.00% in the valence dimension and 72.14% in the arousal dimension. In the subject-independent condition, the LSTM-based method achieved accuracies of 78.56% in the valence dimension and 77.22% in the arousal dimension. The feasibility and efficiency of MindLink-Eumpy for emotion recognition are thus demonstrated.
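As an illustration of the weight-enumeration step in the decision-level fusion, the following minimal sketch searches a grid of weights for combining per-class probabilities from an EEG classifier and a facial-expression CNN; the probability arrays and the weight grid are assumptions for illustration and do not reproduce MindLink-Eumpy's actual API.

```python
# Minimal sketch of weighted decision-level fusion: enumerate a weight w that
# mixes per-class probabilities from two classifiers and keep the w that
# maximises validation accuracy. All arrays below are stand-ins.
import numpy as np

def fuse(p_eeg, p_face, w):
    """Weighted sum of two (n_samples x n_classes) probability matrices."""
    return w * p_eeg + (1.0 - w) * p_face

rng = np.random.default_rng(1)
y_val = rng.integers(0, 2, size=100)         # hypothetical validation labels
p_eeg = rng.dirichlet([1, 1], size=100)      # stand-in SVM (EEG) outputs
p_face = rng.dirichlet([1, 1], size=100)     # stand-in CNN (face) outputs

best_w, best_acc = 0.0, 0.0
for w in np.linspace(0.0, 1.0, 21):          # enumerate candidate weights
    acc = (fuse(p_eeg, p_face, w).argmax(axis=1) == y_val).mean()
    if acc > best_acc:
        best_w, best_acc = w, acc
print(f"best weight={best_w:.2f}, validation accuracy={best_acc:.2f}")
```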


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7854
Author(s):  
Luz Santamaria-Granados ◽  
Juan Francisco Mendoza-Moreno ◽  
Angela Chantre-Astaiza ◽  
Mario Munoz-Organero ◽  
Gustavo Ramirez-Gonzalez

The mass adoption of cheap wearable devices has made it easy to collect physiological data from people. Although their accuracy is low compared to specialized healthcare devices, such wearables can be widely applied in other contexts. This study proposes the architecture of a tourist experiences recommender system (TERS) based on the emotional states of users who wear these devices. The challenge lies in detecting emotion from the Heart Rate (HR) measurements these wearables provide. Unlike most state-of-the-art studies, which elicit emotions in controlled experiments with high-accuracy sensors, this research tackles emotion recognition (ER) in the daily-life context of users from gathered HR data. A further objective was to generate tourist recommendations that take the device wearer's emotional state into account. The method comprises three main phases: first, the collection of HR measurements and the labeling of emotions through mobile applications; second, emotion detection using deep learning algorithms; and third, the design and validation of the TERS-ER. As a result, a dataset of HR measurements labeled with emotions was obtained. Among the algorithms tested for ER, a hybrid of Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) networks showed promising results. Moreover, for the TERS, Collaborative Filtering (CF) using a CNN showed the best performance.
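A minimal sketch of a hybrid CNN-LSTM over HR windows is given below, assuming a univariate 120-sample window and four emotion classes (both hypothetical); it shows the architecture family the study reports as promising, not the authors' exact model.

```python
# Minimal sketch of a hybrid CNN-LSTM for emotion classification from HR
# windows. Window length, filter sizes and the class count are illustrative
# assumptions, not the study's reported hyperparameters.
import tensorflow as tf

WINDOW = 120    # hypothetical: 120 HR samples per window
N_EMOTIONS = 4  # hypothetical number of emotion classes

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),         # univariate HR series
    tf.keras.layers.Conv1D(32, 5, activation="relu"), # local temporal features
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(64),                         # longer-range dynamics
    tf.keras.layers.Dense(N_EMOTIONS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would then be: model.fit(hr_windows, emotion_labels, epochs=...)
```

The convolutional front end extracts short-term HR patterns, and the LSTM models how those patterns evolve over the window, which is the usual rationale for this hybrid on noisy wearable signals.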


Author(s):  
Philip Gouverneur ◽  
Joanna Jaworek-Korjakowska ◽  
Lukas Köping ◽  
Kimiaki Shirahama ◽  
Pawel Kleczek ◽  
... 


2013 ◽ 
Vol 4 (3) ◽  
pp. 11-25 ◽  
Author(s):  
Imen Tayari Meftah ◽  
Nhan Le Thanh ◽  
Chokri Ben Amar

Emotions play a crucial role in human-computer interaction. They are generally expressed and perceived through multiple modalities such as speech, facial expressions and physiological signals. Indeed, the complexity of emotions makes their acquisition very difficult and makes unimodal systems (i.e., systems observing only one source of emotion) unreliable and often unfeasible in applications of high complexity. Moreover, the lack of a standard for modeling human emotions hinders the sharing of affective information between applications. In this paper, the authors present a multimodal approach to emotion recognition from many sources of information. The paper aims to provide a multimodal system for emotion recognition and exchange that facilitates inter-system exchanges and improves the credibility of emotional interaction between users and computers. The authors elaborate a multimodal emotion recognition method from physiological data based on signal processing algorithms. The method can recognize emotions composed of several aspects, such as simulated and masked emotions, and uses a new multidimensional model that represents emotional states through an algebraic representation. The experimental results show that the proposed multimodal emotion recognition method improves recognition rates over the unimodal approach. Compared to state-of-the-art multimodal techniques, the proposed method gives good results, with 72% correct recognition.
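The following minimal sketch illustrates the general idea of an algebraic, multidimensional emotion state, assuming six basic-emotion axes and hand-picked fusion weights (both hypothetical); the authors' actual representation may differ.

```python
# Minimal sketch of a multidimensional emotion state: each modality yields a
# vector over basic-emotion axes, and fusion is a normalised weighted sum.
# Axes, weights and the example vectors are illustrative assumptions.
import numpy as np

AXES = ["joy", "sadness", "anger", "fear", "surprise", "disgust"]

def fuse(states, weights):
    """Combine per-modality emotion vectors into one normalised state."""
    v = sum(w * s for s, w in zip(states, weights))
    return v / v.sum()

speech = np.array([0.6, 0.1, 0.1, 0.1, 0.05, 0.05])  # stand-in speech output
face   = np.array([0.1, 0.5, 0.1, 0.1, 0.1, 0.1])    # stand-in facial output
physio = np.array([0.1, 0.1, 0.1, 0.5, 0.1, 0.1])    # stand-in physiological output

state = fuse([speech, face, physio], weights=[0.3, 0.3, 0.4])
# A state with several strong components could flag a composed emotion,
# e.g. a masked emotion where the face disagrees with the physiology.
top = sorted(zip(AXES, state), key=lambda kv: -kv[1])[:2]
print("dominant components:", top)
```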

