Emotion Recognition Using Smart Watch Sensor Data: A Mixed-Design Study


JMIR Mental Health ◽  
10.2196/10153 ◽
2018 ◽  
Vol 5 (3) ◽  
pp. e10153 ◽  
Author(s):  
Juan Carlos Quiroz ◽  
Elena Geangu ◽  
Min Hooi Yong

Background Research in psychology has shown that the way a person walks reflects that person’s current mood (or emotional state). Recent studies have used mobile phones to detect emotional states from movement data. Objective The objective of our study was to investigate the use of movement sensor data from a smart watch to infer an individual’s emotional state. We present our findings of a user study with 50 participants. Methods The experimental design is a mixed-design study: within-subjects (emotions: happy, sad, and neutral) and between-subjects (stimulus type: audiovisual “movie clips” and audio “music clips”). Each participant experienced both emotions in a single stimulus type. All participants walked 250 m while wearing a smart watch on one wrist and a heart rate monitor strap on the chest. They also had to answer a short questionnaire (20 items; Positive Affect and Negative Affect Schedule, PANAS) before and after experiencing each emotion. The data obtained from the heart rate monitor served as supplementary information to our data. We performed time series analysis on data from the smart watch and a t test on questionnaire items to measure the change in emotional state. Heart rate data was analyzed using one-way analysis of variance. We extracted features from the time series using sliding windows and used features to train and validate classifiers that determined an individual’s emotion. Results Overall, 50 young adults participated in our study; of them, 49 were included for the affective PANAS questionnaire and 44 for the feature extraction and building of personal models. Participants reported feeling less negative affect after watching sad videos or after listening to sad music, P<.006. For the task of emotion recognition using classifiers, our results showed that personal models outperformed personal baselines and achieved median accuracies higher than 78% for all conditions of the design study for binary classification of happiness versus sadness. Conclusions Our findings show that we are able to detect changes in the emotional state as well as in behavioral responses with data obtained from the smart watch. Together with high accuracies achieved across all users for classification of happy versus sad emotional states, this is further evidence for the hypothesis that movement sensor data can be used for emotion recognition.
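The sliding-window pipeline described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' code: the synthetic accelerometer traces, the window length and step, the per-window statistics, and the random forest classifier are all assumptions standing in for the study's actual choices.

```python
# Minimal sketch of sliding-window feature extraction for a "personal"
# emotion classifier. Synthetic data and all parameters are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(signal, win=128, step=64):
    """Slide a window over an (n_samples, 3) accelerometer trace and
    compute simple statistics per window."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append(np.hstack([w.mean(axis=0), w.std(axis=0),
                                np.abs(np.diff(w, axis=0)).mean(axis=0)]))
    return np.array(feats)

rng = np.random.default_rng(0)
happy_walk = rng.normal(0, 1.2, (2000, 3))   # placeholder traces
sad_walk = rng.normal(0, 0.8, (2000, 3))

X = np.vstack([window_features(happy_walk), window_features(sad_walk)])
y = np.array([1] * (len(X) // 2) + [0] * (len(X) - len(X) // 2))

# A per-participant ("personal") model for happy vs sad windows.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```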


2021 ◽  
Author(s):  
Talieh Seyed Tabtabae

Automatic Emotion Recognition (AER) is an emerging research area in the Human-Computer Interaction (HCI) field. As computers become more popular every day, the study of interaction between humans (users) and computers is attracting more attention. In order to have a more natural and friendly interface between humans and computers, it would be beneficial to give computers the ability to recognize situations the same way a human does. Equipped with an emotion recognition system, computers would be able to recognize their users' emotional state and react appropriately. In today's HCI systems, machines can recognize the speaker and the content of the speech using speech recognition and speaker identification techniques. If machines are also equipped with emotion recognition techniques, they can know "how it is said" and react more appropriately, making the interaction more natural. One of the most important human communication channels is the auditory channel, which carries speech and vocal intonation. In fact, people can perceive each other's emotional state by the way they talk. Therefore, in this work the speech signal is analyzed in order to build an automatic system that recognizes the human emotional state. Six discrete emotional states are considered in this research: anger, happiness, fear, surprise, sadness, and disgust. A set of novel spectral features is proposed in this contribution. Two approaches are applied and their results compared. In the first approach, all the acoustic features are extracted from consecutive frames along the speech signals, and the statistics of these features constitute the feature vectors. A Support Vector Machine (SVM), a relatively recent approach in the field of machine learning, is used to classify the emotional states. In the second approach, spectral features are extracted from non-overlapping, logarithmically spaced frequency sub-bands. In order to make use of all the extracted information, sequence-discriminant SVMs are adopted. The empirical results show that the employed techniques are very promising.
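The first approach (frame-level acoustic features summarized by statistics, then an SVM) can be sketched roughly as follows. This is an illustrative reconstruction: the spectral centroid and log-energy features, the frame sizes, and the synthetic signals are assumptions, not the thesis's actual feature set.

```python
# Sketch: per-frame spectral features, utterance-level statistics, SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

SR = 16000  # assumed sampling rate

def utterance_features(signal, frame=512, hop=256):
    """Spectral centroid and log-energy per frame, summarized by
    mean/std/min/max over the whole utterance."""
    per_frame = []
    freqs = np.fft.rfftfreq(frame, d=1.0 / SR)
    for start in range(0, len(signal) - frame + 1, hop):
        spec = np.abs(np.fft.rfft(signal[start:start + frame]))
        centroid = (freqs * spec).sum() / (spec.sum() + 1e-9)
        energy = np.log((spec ** 2).sum() + 1e-9)
        per_frame.append([centroid, energy])
    per_frame = np.array(per_frame)
    return np.hstack([per_frame.mean(0), per_frame.std(0),
                      per_frame.min(0), per_frame.max(0)])

rng = np.random.default_rng(1)
# Placeholder "utterances": noise with different energies per class.
X = np.array([utterance_features(rng.normal(0, 1, SR) * (1 + 0.5 * label))
              for label in (0, 1) for _ in range(20)])
y = np.repeat([0, 1], 20)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print("train accuracy:", clf.score(X, y))
```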


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Mehmet Akif Ozdemir ◽  
Murside Degirmenci ◽  
Elif Izci ◽  
Aydin Akan

The emotional state of people plays a key role in physiological and behavioral human interaction. Emotional state analysis spans many fields, such as neuroscience, cognitive science, and biomedical engineering, because the parameters of interest reflect the complex neuronal activity of the brain. Electroencephalogram (EEG) signals are processed to communicate brain signals to external systems and make predictions about emotional states. This paper proposes a novel method for emotion recognition based on deep convolutional neural networks (CNNs) that are used to classify Valence, Arousal, Dominance, and Liking emotional states. Hence, a novel approach is proposed for emotion recognition with time series of multi-channel EEG signals from the Database for Emotion Analysis using Physiological Signals (DEAP). We propose a new approach to emotional state estimation utilizing CNN-based classification of multi-spectral topology images obtained from EEG signals. In contrast to most EEG-based approaches, which discard the spatial information of EEG signals, converting EEG signals into a sequence of multi-spectral topology images preserves their temporal, spectral, and spatial information. The deep recurrent convolutional network is trained to learn important representations from a sequence of three-channel topographical images. We achieved test accuracies of 90.62% for negative and positive Valence, 86.13% for high and low Arousal, 88.48% for high and low Dominance, and finally 86.23% for like–unlike. The evaluation of this method on the emotion recognition problem revealed significant improvements in classification accuracy compared with other studies using deep neural networks (DNNs) and one-dimensional CNNs.
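A rough sketch of the core idea (band power per electrode projected onto a 2D scalp grid, one image channel per frequency band, fed to a CNN) might look like the following. The electrode grid coordinates, band definitions, and the toy network are placeholders, not the authors' architecture.

```python
# Sketch: multichannel EEG -> multi-spectral topographic image -> CNN.
import numpy as np
import torch
import torch.nn as nn

FS = 128          # DEAP preprocessed sampling rate
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
# Hypothetical 2D grid coordinates for a handful of electrodes.
GRID = {0: (0, 1), 1: (0, 3), 2: (2, 0), 3: (2, 4), 4: (4, 2)}

def band_power(x, lo, hi):
    """Mean power of one channel in [lo, hi) Hz via the FFT."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x), 1 / FS)
    return spec[(f >= lo) & (f < hi)].mean()

def eeg_to_image(eeg):  # eeg: (n_channels, n_samples)
    img = np.zeros((len(BANDS), 5, 5), dtype=np.float32)
    for b, (lo, hi) in enumerate(BANDS.values()):
        for ch, (r, c) in GRID.items():
            img[b, r, c] = band_power(eeg[ch], lo, hi)
    return img

# One fake 5-channel, 4-second epoch -> one 3x5x5 "multi-spectral" image.
epoch = np.random.default_rng(2).normal(size=(5, 4 * FS))
x = torch.from_numpy(eeg_to_image(epoch)).unsqueeze(0)

# A deliberately tiny CNN standing in for the deep recurrent model.
cnn = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                    nn.Flatten(), nn.Linear(8 * 5 * 5, 2))
print(cnn(x).shape)  # logits for, e.g., low vs high Valence
```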


2020 ◽  
Vol 13 (4) ◽  
pp. 4-24 ◽  
Author(s):  
V.A. Barabanschikov ◽  
E.V. Suvorova

The article is devoted to the results of approbation of the Geneva Emotion Recognition Test (GERT), a Swiss method for assessing dynamic emotional states, on a Russian sample. The identification accuracy and the categorical field structure of the emotional expressions of a “living” face are analyzed. Similarities and differences in the perception of affective groups of dynamic emotions in the Russian and Swiss samples are considered. A number of patterns in the recognition of multi-modal expressions under changes in the valence and arousal of emotions are described. Differences in the perception of dynamic and static emotional expressions are revealed. The GERT method confirmed its high potential for solving a wide range of academic and applied problems.


2021 ◽  
Vol 15 ◽  
Author(s):  
Jing Cai ◽  
Ruolan Xiao ◽  
Wenjie Cui ◽  
Shang Zhang ◽  
Guangda Liu

Emotion recognition has become increasingly prominent in the medical field and in human-computer interaction. When people’s emotions change under external stimuli, various physiological signals of the human body fluctuate. Electroencephalography (EEG) is closely related to brain activity, making it possible to judge a subject’s emotional changes through EEG signals. Meanwhile, machine learning algorithms, which are good at extracting data features from a statistical perspective and making judgments on them, have developed by leaps and bounds. Therefore, using machine learning to extract feature vectors related to emotional states from EEG signals and constructing a classifier that separates emotions into discrete states has broad development prospects. Following the processing chain of EEG-based machine learning algorithms for emotion recognition, this paper reviews the acquisition, preprocessing, feature extraction, and classification of EEG signals in sequence, and it may help beginners who intend to use EEG-based machine learning algorithms for emotion recognition to understand the state of this field. The journal articles we selected were all retrieved from the Web of Science platform, and the publication dates of most of the selected articles are concentrated in 2016–2021.
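As a companion to the review's pipeline (acquisition, preprocessing, feature extraction, classification), here is a compact sketch of one common instantiation: band-pass filtering followed by differential entropy features and an SVM. The band choices, the Gaussianity assumption behind the differential entropy formula, and the synthetic data are illustrative assumptions.

```python
# Sketch of a typical EEG emotion pipeline: filter -> features -> classifier.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC

FS = 128  # assumed sampling rate (Hz)

def bandpass(x, lo, hi, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def diff_entropy(x):
    """Differential entropy of a Gaussian signal: 0.5*log(2*pi*e*var)."""
    return 0.5 * np.log(2 * np.pi * np.e * x.var() + 1e-12)

rng = np.random.default_rng(3)
X, y = [], []
for label in (0, 1):                                 # e.g., low vs high arousal
    for _ in range(30):
        eeg = rng.normal(0, 1 + label, (4, 4 * FS))  # 4 fake channels, 4 s
        feats = [diff_entropy(bandpass(ch, lo, hi))
                 for ch in eeg
                 for lo, hi in [(4, 8), (8, 13), (13, 30)]]  # theta/alpha/beta
        X.append(feats)
        y.append(label)

X, y = np.array(X), np.array(y)
clf = SVC().fit(X, y)
print("train accuracy:", clf.score(X, y))
```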


Author(s):  
Kanlaya Rattanyu ◽  
Makoto Mizukawa

This paper presents our approach to emotion recognition based on electrocardiogram (ECG) signals. We propose to use the ECG’s inter-beat features together with within-beat features in our recognition system. In order to reduce the feature space, post hoc tests in the analysis of variance (ANOVA) were employed to select the set of the eleven most significant features. We conducted experiments on twelve subjects using the International Affective Picture System (IAPS) database. RF-ECG sensors were attached to the subjects’ skin to monitor the ECG signal over a wireless connection. Results showed that our eleven-feature approach outperforms the conventional three-feature approach: for simultaneous classification of six emotional states (anger, fear, disgust, sadness, neutral, and joy), the correct classification ratio (CCR) showed a significant improvement from 37.23% to 61.44%. Our system is able to monitor human emotion wirelessly without interfering with the subject’s activities; therefore, it is suitable for integration with service robots to provide assistive and healthcare services.
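The ANOVA-based feature selection step can be illustrated with scikit-learn's univariate F-test, which scores each candidate feature with a one-way ANOVA and keeps the top k (eleven in the paper). The synthetic ECG features and the k-nearest-neighbors classifier below are placeholders, not the authors' setup.

```python
# Sketch of ANOVA-driven selection of the most significant ECG features.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
n, n_feats = 120, 30                     # trials x candidate ECG features
y = rng.integers(0, 6, n)                # six emotional states
X = rng.normal(size=(n, n_feats))
X[:, :5] += y[:, None] * 0.8             # make a few features informative

selector = SelectKBest(f_classif, k=11)  # one-way ANOVA F-test per feature
X_sel = selector.fit_transform(X, y)
print("kept features:", np.flatnonzero(selector.get_support()))

clf = KNeighborsClassifier().fit(X_sel, y)
print("train accuracy:", clf.score(X_sel, y))
```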


Entropy ◽  
2019 ◽  
Vol 21 (7) ◽  
pp. 646 ◽  
Author(s):  
Tomasz Sapiński ◽  
Dorota Kamińska ◽  
Adam Pelikant ◽  
Gholamreza Anbarjafari

Automatic emotion recognition has become an important trend in many artificial intelligence (AI) based applications and has been widely explored in recent years. Most research in the area of automated emotion recognition is based on facial expressions or speech signals. Although the influence of the emotional state on body movements is undeniable, this source of expression is still underestimated in automatic analysis. In this paper, we propose a novel method to recognise seven basic emotional states, namely happy, sad, surprise, fear, anger, disgust, and neutral, utilising body movement. We analyse motion capture data recorded under the seven basic emotional states by professional actors and actresses using a Microsoft Kinect v2 sensor. We propose a new representation of affective movements based on sequences of body joints. The proposed algorithm creates a sequential model of affective movement based on low-level features inferred from the spatial location and the orientation of joints within the tracked skeleton. In the experiments, different deep neural networks were employed and compared to recognise the emotional state of the acquired motion sequences. The results show the feasibility of automatic emotion recognition from sequences of body gestures, which can serve as an additional source of information in multimodal emotion recognition.
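One plausible reading of such a sequential model is a recurrent network over per-frame joint vectors. The sketch below is a hedged illustration: the LSTM, the sequence length, and the use of raw 3D joint positions (rather than the paper's low-level location and orientation features) are assumptions.

```python
# Sketch: classify emotion from sequences of skeleton joint positions.
import torch
import torch.nn as nn

N_JOINTS, SEQ_LEN, N_EMOTIONS = 25, 60, 7   # Kinect v2 tracks 25 joints

class EmotionLSTM(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(N_JOINTS * 3, hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_EMOTIONS)

    def forward(self, x):                    # x: (batch, seq, joints*3)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])              # logits per emotional state

model = EmotionLSTM()
fake_batch = torch.randn(8, SEQ_LEN, N_JOINTS * 3)  # placeholder mocap
print(model(fake_batch).shape)               # -> torch.Size([8, 7])
```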


PeerJ ◽  
2016 ◽  
Vol 4 ◽  
pp. e2364 ◽  
Author(s):  
Shun Li ◽  
Liqing Cui ◽  
Changye Zhu ◽  
Baobin Li ◽  
Nan Zhao ◽  
...  

Automatic emotion recognition is of great value in many applications; however, to fully realize its application value, more portable, non-intrusive, and inexpensive technologies need to be developed. Human gait can reflect the walker’s emotional state and could serve as an information source for emotion recognition. This paper proposes a novel method to recognize emotional state through human gait using the Microsoft Kinect, a low-cost, portable, camera-based sensor. Fifty-nine participants’ gaits under a neutral state, induced anger, and induced happiness were recorded by two Kinect cameras, and the original data were processed through joint selection, coordinate system transformation, sliding-window Gaussian filtering, differential operation, and data segmentation. Features of gait patterns were extracted from the 3-dimensional coordinates of 14 main body joints by Fourier transformation and principal component analysis (PCA). The classifiers NaiveBayes, RandomForests, LibSVM, and SMO (Sequential Minimal Optimization) were trained and evaluated, and the accuracies of recognizing anger and happiness from the neutral state reached 80.5% and 75.4%, respectively. Although the results of distinguishing the angry and happy states were not ideal in the current study, the work shows the feasibility of automatically recognizing emotional states from gait, with characteristics that meet the application requirements.
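The described processing chain (Gaussian smoothing, differencing, Fourier transformation, PCA, then a classifier) can be sketched as follows. The filter width, the number of retained frequency bins and principal components, and the synthetic trajectories are assumptions rather than the paper's exact parameters.

```python
# Sketch of a gait feature pipeline: smooth -> diff -> FFT -> PCA -> SVM.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

N_JOINTS, T = 14, 256                       # 14 joints, T frames

def gait_features(traj):                    # traj: (T, N_JOINTS * 3)
    smoothed = gaussian_filter1d(traj, sigma=2, axis=0)  # Gaussian filtering
    velocity = np.diff(smoothed, axis=0)                 # differential operation
    spectrum = np.abs(np.fft.rfft(velocity, axis=0))[:10]  # low-frequency bins
    return spectrum.ravel()

rng = np.random.default_rng(5)
X = np.array([gait_features(rng.normal(0, 1 + 0.3 * lab, (T, N_JOINTS * 3)))
              for lab in (0, 1) for _ in range(25)])   # neutral vs angry
y = np.repeat([0, 1], 25)

clf = make_pipeline(PCA(n_components=20), SVC()).fit(X, y)
print("train accuracy:", clf.score(X, y))
```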


Medicine ◽  
2019 ◽  
Vol 98 (33) ◽  
pp. e16863 ◽  
Author(s):  
Yi-Chun Chen ◽  
Chun-Chieh Hsiao ◽  
Wen-Dian Zheng ◽  
Ren-Guey Lee ◽  
Robert Lin

Author(s):  
Apichart Jaratrotkamjorn

Emotions are very important in human daily life. Enabling a machine to recognize the human emotional state, and to respond to human needs intelligently, is very important in human-computer interaction. The majority of existing work concentrates on the classification of six basic emotions only. This research work proposes an emotion recognition system based on a multimodal approach that integrates information from both facial and speech expressions. The database contains eight basic emotions (neutral, calm, happy, sad, angry, fearful, disgust, and surprised). Emotions are classified using a deep belief network. The experimental results show that the bimodal emotion recognition system achieves a clear improvement in performance, with an overall accuracy rate of 97.92%.
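A minimal way to picture bimodal fusion is feature-level concatenation of facial and speech descriptors followed by a single classifier. In the sketch below, an MLP stands in for the deep belief network used in the paper, and all feature dimensions and data are placeholders.

```python
# Sketch: early (feature-level) fusion of facial and speech features.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(6)
n, face_dim, speech_dim, n_classes = 200, 64, 32, 8

y = rng.integers(0, n_classes, n)
face = rng.normal(size=(n, face_dim)) + y[:, None] * 0.3     # placeholder
speech = rng.normal(size=(n, speech_dim)) + y[:, None] * 0.3  # placeholder
X = np.hstack([face, speech])            # bimodal fusion by concatenation

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(X, y)
print("train accuracy:", clf.score(X, y))
```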

