Fast Emotion Recognition Based on Single Pulse PPG Signal with Convolutional Neural Network

2019 ◽  
Vol 9 (16) ◽  
pp. 3355 ◽  
Author(s):  
Min Seop Lee ◽  
Yun Kyu Lee ◽  
Dong Sung Pae ◽  
Myo Taeg Lim ◽  
Dong Won Kim ◽  
...  

Physiological signals contain considerable information regarding emotions. This paper investigated the ability of photoplethysmogram (PPG) signals to recognize emotion, adopting a two-dimensional emotion model based on valence and arousal to represent human feelings. The main purpose was to recognize short-term emotion from a single PPG pulse. We used a one-dimensional convolutional neural network (1D CNN) to extract PPG signal features for classifying valence and arousal. We split the PPG signal into single 1.1 s pulses and normalized them for input to the neural network based on each subject's maximum and minimum values. We chose the Database for Emotion Analysis using Physiological signals (DEAP) for the experiment and tested the 1D CNN as a binary classifier (high or low valence and arousal), achieving short-term (1.1 s) emotion recognition with valence and arousal accuracies of 75.3% and 76.2%, respectively, on the DEAP data.
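The per-subject min-max normalization described above can be sketched as follows. The sampling rate, amplitude values, and subject extrema here are hypothetical, and the paper's exact preprocessing may differ:

```python
def normalize_pulse(pulse, person_min, person_max):
    """Scale one PPG pulse to [0, 1] using the subject's own signal extrema."""
    span = person_max - person_min
    return [(x - person_min) / span for x in pulse]

# Hypothetical single 1.1 s pulse (11 samples at an assumed 10 Hz, for brevity)
pulse = [512, 530, 580, 640, 600, 560, 540, 525, 515, 510, 508]
normalized = normalize_pulse(pulse, person_min=500, person_max=700)
```

Normalizing against each subject's own extrema, rather than global ones, removes inter-subject amplitude differences before the pulses reach the 1D CNN.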

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 124928-124938 ◽  
Author(s):  
Simin Wang ◽  
Junhuai Li ◽  
Ting Cao ◽  
Huaijun Wang ◽  
Pengjia Tu ◽  
...  

Sensors ◽  
2019 ◽  
Vol 19 (24) ◽  
pp. 5533 ◽  
Author(s):  
Heekyung Yang ◽  
Jongdae Han ◽  
Kyungha Min

Visual stimuli from photographs and artworks elicit corresponding emotional responses, but whether the emotions aroused by photographs differ from those aroused by artworks is difficult to establish. We address this question using electroencephalogram (EEG)-based biosignals and a deep convolutional neural network (CNN)-based emotion recognition model. We employ Russell's emotion model, which maps emotion keywords such as happy, calm, or sad onto a coordinate system whose axes are valence and arousal. We collected photographs and artwork images matching the emotion keywords and built eighteen one-minute video clips, covering nine emotion keywords each for photographs and artworks. We recruited forty subjects and tested their emotional responses to the video clips. From a t-test on the results, we concluded that valence responses differ between photographs and artworks, while arousal responses do not.
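The t-test comparison above can be illustrated with a minimal sketch. The ratings below are invented for illustration, and Welch's unequal-variance variant is assumed; the abstract does not state which t-test the authors used:

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical valence ratings for the same emotion keyword
photo_valence   = [6.1, 5.8, 6.4, 6.0, 5.9]
artwork_valence = [5.2, 5.0, 5.5, 5.1, 4.9]
t = welch_t(photo_valence, artwork_valence)
```

A large |t| (compared against the t distribution's critical value for the chosen significance level) indicates the group means differ, which is the form of evidence behind the valence finding.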


Circulation ◽  
2018 ◽  
Vol 138 (Suppl_2) ◽  
Author(s):  
Tetsuo Hatanaka ◽  
Hiroshi Kaneko ◽  
Aki Nagase ◽  
Seishiro Marukawa

Introduction: An interruption of chest compressions during CPR adversely affects patient outcome. Currently, however, periodic interruptions are unavoidable to assess the ECG rhythm and to deliver shocks for defibrillation if indicated. Evidence suggests that a 5-second interruption immediately before a shock may translate into a ~15% reduction in the chance of survival. The objective of this study was to build, train and validate a convolutional neural network (artificial intelligence) for detecting shock-indicated rhythms in ECG signals corrupted by chest compression artifacts during CPR. Methods: Our convolutional neural network consisted of 7 convolutional layers, 3 pooling layers and 3 fully-connected layers for binary classification (shock-indicated vs. non-shock-indicated). The input was a spectrogram of 56 frequency bins by 80 time segments transformed from a 12.16-second ECG signal. From AEDs used for 236 patients with out-of-hospital cardiac arrest, 1,223 annotated ECG strips were extracted. Ventricular fibrillation and wide-QRS ventricular tachycardia with HR > 180 beats/min were annotated as shock-indicated, and the others as non-shock-indicated. The total length of the strips was 8:49:57 (hr:min:sec) for shock-indicated and 8:02:07 for non-shock-indicated rhythms. The strips were converted into 465,102 spectrograms, allowing partial overlaps, and fed into the neural network for training. The validation data set was obtained from a separate group of 225 patients, from which annotated ECG strips (total duration 62:11:28) were extracted, yielding 43,800 spectrograms. Results: After training, both the sensitivity and specificity of detecting shock-indicated rhythms on the training data set were 99.7%-100% (varying with training instances). The sensitivity and specificity on the validation data set were 99.3%-99.7% and 99.3%-99.5%, respectively.
Conclusions: The convolutional neural network accurately and continuously evaluated the ECG rhythms during CPR, potentially obviating the need for rhythm checks for defibrillation during CPR.
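The 465,102 training spectrograms come from overlapping 12.16-second windows over roughly 17 hours of strips. A minimal sketch of the window count, assuming a 250 Hz sampling rate (so 12.16 s = 3040 samples) and a hypothetical 0.5 s hop, neither of which is stated in the abstract:

```python
def count_spectrograms(strip_samples, window=3040, hop=125):
    """Number of overlapping windows in an ECG strip.

    window: 12.16 s at an assumed 250 Hz sampling rate (3040 samples).
    hop:    hypothetical 0.5 s stride (125 samples) between window starts.
    """
    if strip_samples < window:
        return 0
    return (strip_samples - window) // hop + 1
```

With a small hop, each strip yields many partially overlapping spectrograms, which is how a modest number of strips can expand into hundreds of thousands of training examples.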


Author(s):  
V. B. Savinov ◽  
S. A. Botman ◽  
V. V. Sapunov ◽  
V. A. Petrov ◽  
I. G. Samusev ◽  
...  

Existing emotion recognition techniques based on analysis of the tone of voice or facial expressions do not possess sufficient specificity and accuracy. These parameters can be significantly improved by employing physiological signals that escape the filters of human consciousness. The aim of this work was to carry out EEG-based binary classification of emotional valence using a convolutional neural network and to compare its performance to that of a random forest algorithm. A healthy 30-year-old male was recruited for the experiment. The experiment included 10 two-hour sessions of watching videos that the participant had selected according to his personal preferences. During the sessions, an electroencephalogram was recorded. The signal was then cleaned of artifacts, segmented and fed to the model. Using the neural network, we achieved an F1 score of 87%, significantly higher than the F1 score of a random forest model (67%). The results of our experiment suggest that convolutional neural networks in general, and the proposed architecture in particular, hold great promise for emotion recognition based on electrophysiological signals. Further refinement of the proposed approach may involve optimizing the network architecture to include more emotion classes and improving the network's generalization capacity when working with a large number of participants.
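The F1 score used to compare the two models is the harmonic mean of precision and recall. A minimal sketch with toy labels (invented here; not the study's data):

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy binary valence labels (1 = positive valence), for illustration only
y_true = [1, 1, 1, 0, 0]
y_pred = [1, 1, 0, 1, 0]
```

Unlike raw accuracy, F1 penalizes both missed positives and false alarms, which makes it a reasonable metric when valence classes are imbalanced.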


Author(s):  
P.L. Nikolaev

This article deals with a method for binary classification of images containing small text. The classification rests on the fact that the text can have two orientations: it is either positioned horizontally and read from left to right, or rotated 180 degrees, so that the image must be turned to read it. Such text is found on the covers of a variety of books, so when recognizing covers it is necessary first to determine the orientation of the text before recognizing the text itself. The article proposes a deep neural network for determining text orientation in the context of book cover recognition. The results of training and testing a convolutional neural network on synthetic data, as well as examples of the network operating on real data, are presented.
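The downstream use of such a binary classifier can be sketched as follows: if the network predicts the cover is upside-down, rotate the image 180 degrees before passing it to OCR. The classifier itself is represented here only by its boolean output, and the row-major grid stands in for a real image:

```python
def rotate_180(image):
    """Rotate a row-major image grid by 180 degrees (reverse rows and columns)."""
    return [list(reversed(row)) for row in reversed(image)]

def correct_orientation(image, is_upside_down):
    """Apply the binary classifier's decision before text recognition.

    is_upside_down is assumed to come from the orientation network.
    """
    return rotate_180(image) if is_upside_down else image
```

A 180-degree rotation needs no interpolation, so this correction is lossless, unlike arbitrary-angle deskewing.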


2020 ◽  
Vol 14 ◽  
Author(s):  
Lahari Tipirneni ◽  
Rizwan Patan

Abstract: Breast cancer causes millions of deaths worldwide every year and has become the most common type of cancer in women. Early detection enables better prognosis and increases the chance of survival. Automating classification with Computer-Aided Diagnosis (CAD) systems can make the diagnosis less prone to errors. Both multi-class and binary classification of breast cancer are challenging problems. A single convolutional neural network architecture extracts specific feature descriptors from images, which cannot represent all the different types of breast cancer; this leads to false positives in classification, which is undesirable in disease diagnosis. The current paper presents an ensemble convolutional neural network for multi-class and binary classification of breast cancer. The feature descriptors from each network are combined to produce the final classification. In this paper, histopathological images are taken from the publicly available BreakHis dataset and classified into eight classes. The proposed ensemble model performs better than the methods proposed in the literature. The results showed that the proposed model could be a viable approach for breast cancer classification.
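The descriptor-combination step can be sketched as simple concatenation of each member network's feature vector into one fused representation for the final classifier. Concatenation is one common fusion scheme, assumed here for illustration; the paper's exact combination rule may differ:

```python
def combine_descriptors(descriptor_lists):
    """Fuse per-network feature descriptors into one vector by concatenation.

    descriptor_lists: one feature vector per ensemble member, possibly of
    different lengths, since each CNN may emit a different descriptor size.
    """
    combined = []
    for descriptor in descriptor_lists:
        combined.extend(descriptor)
    return combined
```

The fused vector then feeds a single classification head, letting complementary descriptors cover tumor types that any one network's features miss.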


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 52
Author(s):  
Tianyi Zhang ◽  
Abdallah El Ali ◽  
Chen Wang ◽  
Alan Hanjalic ◽  
Pablo Cesar

Recognizing user emotions while they watch short-form videos anytime and anywhere is essential for facilitating video content customization and personalization. However, most works either classify a single emotion per video stimulus or are restricted to static, desktop environments. To address this, we propose a correlation-based emotion recognition algorithm (CorrNet) that recognizes the valence and arousal (V-A) of each instance (a fine-grained segment of signals) using only wearable physiological signals (e.g., electrodermal activity, heart rate). CorrNet takes advantage of features both inside each instance (intra-modality features) and between different instances for the same video stimulus (correlation-based features). We first test our approach on an indoor-desktop affect dataset (CASE), and thereafter on an outdoor-mobile affect dataset (MERCA), which we collected using a smart wristband and a wearable eye tracker. Results show that for subject-independent binary classification (high-low), CorrNet yields promising recognition accuracies: 76.37% and 74.03% for V-A on CASE, and 70.29% and 68.15% for V-A on MERCA. Our findings show that: (1) instance segment lengths between 1-4 s result in the highest recognition accuracies; (2) accuracies of laboratory-grade and wearable sensors are comparable, even at low sampling rates (≤64 Hz); and (3) large amounts of neutral V-A labels, an artifact of continuous affect annotation, result in varied recognition performance.
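The correlation-based features relate instances from the same video stimulus; a natural building block is the Pearson correlation between two instance segments. The sketch below uses invented electrodermal-activity values, and Pearson correlation is assumed here as the similarity measure for illustration; CorrNet's actual feature construction is more involved:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length signals."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical EDA instances (fine-grained segments) from one video stimulus
instance_a = [0.1, 0.3, 0.5, 0.4, 0.2]
instance_b = [0.2, 0.4, 0.6, 0.5, 0.3]
r = pearson(instance_a, instance_b)
```

Because correlation is invariant to offset and scale, it captures whether two segments co-vary over the stimulus rather than whether their raw amplitudes match.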

