Automatic Emotion Recognition Based on EEG and ECG Signals While Listening to Quranic Recitation Compared with Listening to Music

Author(s):  
Sabaa Ahmed Yahya Al-Galal ◽  
Imad Fakhri Taha Alshaikhli ◽  
Abdul Wahab Bin Abdul Rahman

Author(s):  
Wenwen He ◽  
Yalan Ye ◽  
Tongjie Pan ◽  
Qianhe Meng ◽  
Yunxia Li

Author(s):  
Kanlaya Rattanyu ◽  
Makoto Mizukawa

This paper presents our approach to emotion recognition based on electrocardiogram (ECG) signals. We propose to use the ECG's inter-beat features together with within-beat features in our recognition system. To reduce the feature space, post hoc tests in the analysis of variance (ANOVA) were employed to select the eleven most significant features. We conducted experiments on twelve subjects using the International Affective Picture System (IAPS) database. RF-ECG sensors were attached to the subjects' skin to monitor the ECG signal over a wireless connection. Results showed that our eleven-feature approach outperforms the conventional three-feature approach: for simultaneous classification of six emotional states (anger, fear, disgust, sadness, neutral, and joy), the Correct Classification Ratio (CCR) improved significantly, from 37.23% to 61.44%. Our system was able to monitor human emotion wirelessly without affecting the subjects' activities; it is therefore suitable for integration with service robots that provide assistive and healthcare services.
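As an illustration of the feature-selection step described above, the sketch below selects the eleven highest-scoring features with an ANOVA F-test and feeds them to a simple classifier. This is a minimal sketch, not the authors' implementation: the paper applies ANOVA post hoc tests, scikit-learn's `f_classif` is used here as a simpler stand-in, and the data, classifier choice, and all variable names are placeholder assumptions.

```python
# Hypothetical sketch: ANOVA-based selection of the 11 most significant
# ECG features, followed by a simple classifier. Extraction of inter-beat
# and within-beat features from raw ECG is assumed to have happened already.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: (n_samples, n_features) matrix of ECG features (placeholder data);
# y: emotion labels (0=anger, 1=fear, 2=disgust, 3=sadness, 4=neutral, 5=joy).
rng = np.random.default_rng(0)
X = rng.normal(size=(72, 30))          # e.g. 12 subjects x 6 states
y = rng.integers(0, 6, size=72)

pipeline = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=11),      # keep the 11 highest-scoring features
    KNeighborsClassifier(n_neighbors=3),
)
scores = cross_val_score(pipeline, X, y, cv=4)
print(f"mean CV accuracy: {scores.mean():.3f}")
```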


2021 ◽  
Vol 11 (11) ◽  
pp. 4945
Author(s):  
Axel Sepúlveda ◽  
Francisco Castillo ◽  
Carlos Palma ◽  
Maria Rodriguez-Fernandez

Affect detection combined with a system that dynamically responds to a person's emotional state allows for an improved user experience with computers, systems, and environments, and has a wide range of applications, including entertainment and health care. Previous studies on this topic have used a variety of machine learning algorithms and inputs such as audio, visual, or physiological signals. Recently, much interest has focused on the latter, as speech or video recording is impractical for some applications. There is therefore a need for human–computer interface systems capable of recognizing emotional states from noninvasive and nonintrusive physiological signals. Typically, the recognition task is carried out on electroencephalogram (EEG) signals with good accuracy; however, EEGs are difficult to record without interfering with daily activities, and recent studies have shown that electrocardiogram (ECG) signals can be used for this purpose. This work improves the performance of emotion recognition from ECG signals by using the wavelet transform for signal analysis. Features are extracted from the ECG signals of the AMIGOS database using a wavelet scattering algorithm, which yields features of the signal at different time scales; these are then used as inputs to different classifiers to evaluate their performance. The results show that the proposed feature-extraction and classification algorithm achieves an accuracy of 88.8% in the valence dimension, 90.2% in arousal, and 95.3% in two-dimensional classification, which is better than the performance reported in previous studies. This algorithm is expected to be useful for classifying emotions using wearable devices.
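The abstract's pipeline (wavelet scattering features at several time scales, then a classifier) can be sketched as follows. The paper does not name a library, so this assumes the kymatio package's `Scattering1D` transform; the segment length, scattering parameters (J, Q), classifier, and data here are illustrative assumptions, not the authors' settings.

```python
# Illustrative sketch: wavelet scattering features from ECG segments,
# fed to an SVM classifier. All parameters and data are assumptions.
import numpy as np
from kymatio.numpy import Scattering1D
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

T = 2 ** 13                                     # samples per ECG segment (assumed)
scattering = Scattering1D(J=8, shape=T, Q=8)    # averaging scale 2**8, 8 wavelets/octave

def extract_features(segments):
    """Scattering coefficients averaged over time -> one vector per segment."""
    feats = []
    for x in segments:
        Sx = scattering(x.astype(np.float64))   # shape: (n_coeffs, T / 2**J)
        feats.append(Sx.mean(axis=-1))          # pool over the time axis
    return np.stack(feats)

# Placeholder data standing in for AMIGOS ECG segments and arousal labels.
rng = np.random.default_rng(1)
segments = rng.normal(size=(40, T))
labels = rng.integers(0, 2, size=40)            # low/high arousal

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(extract_features(segments), labels)
```

Averaging the scattering coefficients over time is one common way to turn the multi-scale representation into a fixed-length feature vector; other pooling choices are equally plausible here.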


2010 ◽  
Vol 27 (1) ◽  
pp. 8-14 ◽  
Author(s):  
Ya Xu ◽  
Guangyuan Liu ◽  
Min Hao ◽  
Wanhui Wen ◽  
Xiting Huang

2013 ◽  
Vol 61 (1) ◽  
pp. 7-15 ◽  
Author(s):  
Daniel Dittrich ◽  
Gregor Domes ◽  
Susi Loebel ◽  
Christoph Berger ◽  
Carsten Spitzer ◽  
...  

The present study examines the hypothesis of an alexithymia-associated deficit in recognizing emotional facial expressions in a clinical population. In addition, hypotheses concerning the role of specific emotion qualities and gender differences are tested. 68 psychiatric outpatients and inpatients (44 women and 24 men) were assessed with the Toronto Alexithymia Scale (TAS-20), the Montgomery–Åsberg Depression Rating Scale (MADRS), the Symptom Checklist (SCL-90-R), and the Emotional Expression Multimorph Task (EEMT). The stimuli of the face-recognition paradigm were facial expressions of basic emotions according to Ekman and Friesen, arranged into sequences of gradually increasing expression intensity. Using multiple regression analysis, we examined the association between TAS-20 score and facial emotion recognition (FER). While no significant relationship between TAS-20 score and FER emerged for the total sample or the male subsample, in the female subsample the TAS-20 score significantly predicted the total number of errors (β = .38, t = 2.055, p < 0.05) and the errors in recognizing the emotions anger and disgust (anger: β = .40, t = 2.240, p < 0.05; disgust: β = .41, t = 2.214, p < 0.05). For angry faces, the TAS-20 score explained 13.3% of the variance; for disgusted faces, 19.7%. There was no relationship between alexithymia and the time after which participants stopped the emotional sequences to give their rating (response latency). The results support the existence of an alexithymia-associated deficit in the recognition of emotional facial expressions in female participants in a heterogeneous clinical sample. This deficit could at least partly account for the difficulties highly alexithymic individuals have in social interactions and thus explain a predisposition to psychiatric and psychosomatic disorders.
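For readers who want to reproduce this kind of analysis, the following is a minimal sketch of regressing FER error counts on TAS-20 scores with statsmodels; the data frame, column names, and sample values are hypothetical. Z-standardizing both variables makes the OLS slope a standardized β, comparable to the values quoted above.

```python
# Hypothetical sketch: regression of facial-emotion-recognition errors on
# TAS-20 alexithymia scores, mirroring the analysis reported in the abstract.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 44                                     # e.g. the female subsample
df = pd.DataFrame({
    "tas20": rng.normal(55, 10, n),        # TAS-20 total score (placeholder)
    "fer_errors": rng.poisson(8, n),       # total recognition errors (placeholder)
})

# z-standardize so the slope is a standardized beta, as reported in the text
z = (df - df.mean()) / df.std()
X = sm.add_constant(z[["tas20"]])
model = sm.OLS(z["fer_errors"], X).fit()

print(model.params["tas20"])               # standardized beta
print(model.tvalues["tas20"], model.pvalues["tas20"])
print(model.rsquared)                      # variance explained (R^2)
```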

