Multidimensional Emotion Recognition Based on Semantic Analysis of Biomedical EEG Signal for Knowledge Discovery in Psychological Healthcare

2021 · Vol 11 (3) · pp. 1338
Author(s): Ling Wang, Hangyu Liu, Tiehua Zhou, Wenlong Liang, Minglei Shan

Electroencephalogram (EEG), as a biomedical signal, is widely applied in the medical field, for example in the detection of Alzheimer's disease and Parkinson's disease. Moreover, by analyzing EEG-based emotions, the mental status of an individual can be revealed for further analysis of the psychological causes of certain diseases, such as cancer, for which emotional state is considered a vital factor in disease induction. Therefore, once emotional status can be correctly analyzed from the EEG signal, more healthcare-oriented applications can be carried out. Currently, in order to achieve efficiency and accuracy, most EEG-based emotion recognition methods extract features by analyzing the overall characteristics of the signal, along with channel-selection optimization strategies to minimize information redundancy. These methods have proved effective; however, a big challenge remains when only single-channel information is available for the emotion recognition task. Therefore, in order to recognize multidimensional emotions from single-channel information, an emotion quantification analysis (EQA) method is proposed to objectively analyze the semantic similarity between emotions in the valence-arousal domain, and a multidimensional emotion recognition (EMER) model is proposed to recognize multidimensional emotions according to partial fluctuation pattern (PFP) features extracted from single-channel information. Results show that even though semantically similar emotions are proved to have similar change patterns in EEG signals, each single channel of the four frequency bands can efficiently recognize 20 different emotions with an average accuracy above 93%.

Sensors · 2019 · Vol 19 (23) · pp. 5218
Author(s): Muhammad Adeel Asghar, Muhammad Jamil Khan, Fawad, Yasar Amin, Muhammad Rizwan, ...

Much attention has been paid to the recognition of human emotions with the help of electroencephalogram (EEG) signals based on machine learning technology. Recognizing emotions is a challenging task due to the non-linear property of the EEG signal. This paper presents an advanced signal processing method using a deep neural network (DNN) for emotion recognition based on EEG signals. The spectral and temporal components of the raw EEG signal are first retained in a 2D spectrogram before feature extraction. The pre-trained AlexNet model is used to extract raw features from the 2D spectrogram for each channel. To reduce the feature dimensionality, a spatial- and temporal-based bag of deep features (BoDF) model is proposed. A vocabulary consisting of 10 cluster centers per class is computed using the k-means clustering algorithm. Lastly, the emotion of each subject is represented as a histogram over the vocabulary set, built from the raw features of a single channel. Features extracted with the proposed BoDF model have considerably smaller dimensions. The proposed model achieves better classification accuracy than recently reported work when validated on the SJTU SEED and DEAP data sets. For optimal classification performance, we use a support vector machine (SVM) and k-nearest neighbor (k-NN) to classify the extracted features for the different emotional states of the two data sets. The BoDF model achieves 93.8% accuracy on the SEED data set and 77.4% on the DEAP data set, more accurate than other state-of-the-art methods of human emotion recognition.
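The vocabulary-and-histogram step of the BoDF model can be sketched as follows. This is a minimal illustration assuming per-trial deep features have already been extracted; the function names, array shapes, and the plain Lloyd's k-means are illustrative choices, not the authors' code.

```python
import numpy as np

def kmeans(feats, k, iters=50, seed=0):
    """Plain Lloyd's k-means; returns k cluster centers for one class's features."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        # assign each feature vector to its nearest center
        labels = np.linalg.norm(feats[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean(axis=0)
    return centers

def build_vocabulary(features_per_class, k=10):
    """Stack each class's k cluster centers into one vocabulary of 'words'."""
    return np.vstack([kmeans(f, k) for f in features_per_class])

def bodf_histogram(features, vocabulary):
    """Represent a trial as a normalized histogram of nearest vocabulary words."""
    nearest = np.linalg.norm(features[:, None] - vocabulary[None], axis=2).argmin(axis=1)
    hist = np.bincount(nearest, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()
```

With 20 classes and 10 centers each, every trial collapses to a 200-dimensional histogram regardless of how many feature vectors it contains, which is the dimensionality reduction the abstract describes.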


2020 · Vol 14
Author(s): Qinghua Zhong, Yongsheng Zhu, Dongli Cai, Luwei Xiao, Han Zhang

In human-computer interaction (HCI), electroencephalogram (EEG) access for automatic emotion recognition is an effective way for robot brains to perceive human behavior. In order to improve the accuracy of emotion recognition, a method of EEG access for emotion recognition based on a deep hybrid network was proposed in this paper. Firstly, the collected EEG was decomposed into four frequency-band signals, and the multiscale sample entropy (MSE) features of each frequency band were extracted. Secondly, the constructed 3D MSE feature matrices were fed into a deep hybrid network for autonomous learning. The deep hybrid network was composed of a continuous convolutional neural network (CNN) and hidden Markov models (HMMs). Lastly, HMMs trained with multiple observation sequences were used to replace the artificial-neural-network classifier in the CNN, and the emotion recognition task was completed by the HMM classifiers. The proposed method was applied to the DEAP dataset for emotion recognition experiments, achieving average accuracies of 79.77% on arousal, 83.09% on valence, and 81.83% on dominance. Compared with the latest related methods, the accuracy was improved by 0.99% on valence and 14.58% on dominance, which verifies the effectiveness of the proposed method.
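Multiscale sample entropy, the feature used above, coarse-grains the signal at increasing scales and computes sample entropy at each scale. A minimal numpy sketch of the standard definition follows; the defaults m=2 and r=0.2 (tolerance as a fraction of the signal's standard deviation) are common conventions, not values taken from the paper.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) of a 1-D signal; tolerance is r * std(x)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def count_matches(mm):
        # all length-mm template vectors, compared with Chebyshev distance
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return (d <= tol).sum() - len(templates)  # exclude self-matches

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_sample_entropy(x, scales=(1, 2, 3), m=2, r=0.2):
    """Coarse-grain the signal at each scale, then compute SampEn per scale."""
    out = []
    for s in scales:
        n = len(x) // s
        coarse = np.asarray(x[:n * s], dtype=float).reshape(n, s).mean(axis=1)
        out.append(sample_entropy(coarse, m, r))
    return out
```

Computing this per channel and per frequency band yields the kind of MSE feature matrix the abstract feeds to the hybrid network.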


2021 · Vol 11 (1)
Author(s): Ajay Kumar Maddirala, Kalyana C Veluvolu

Abstract: In recent years, portable electroencephalogram (EEG) devices have become popular for both clinical and non-clinical applications. In order to provide more comfort to the subject and measure EEG signals for several hours, these devices usually consist of few EEG channels, or even a single EEG channel. However, the electrooculogram (EOG) signal, also known as the eye-blink artifact and produced by involuntary movement of the eyelids, always contaminates the EEG signal. Very few techniques are available to remove these artifacts from single-channel EEG, and most of them modify the uncontaminated regions of the EEG signal. In this paper, we develop a new framework that combines an unsupervised machine learning algorithm (k-means) with the singular spectrum analysis (SSA) technique to remove the eye-blink artifact without modifying the actual EEG signal. The novelty of the work lies in extracting the eye-blink artifact from the time-domain features of the EEG signal with the unsupervised machine learning algorithm. The extracted eye-blink artifact is further processed by the SSA method and finally subtracted from the contaminated single-channel EEG signal to obtain the corrected EEG signal. Results with synthetic and real EEG signals demonstrate the superiority of the proposed method over existing methods. Moreover, frequency-based measures [the power spectrum ratio (Γ) and the mean absolute error (MAE)] also show that the proposed method does not modify the uncontaminated regions of the EEG signal while removing the eye-blink artifact.
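The SSA step can be illustrated with a basic numpy sketch: embed the signal into a Hankel trajectory matrix, take the SVD, and reconstruct each component series by diagonal averaging. This is generic SSA under assumed window and component choices, not the paper's full k-means-plus-SSA pipeline; in the framework above, the components capturing the blink would be subtracted from the contaminated signal.

```python
import numpy as np

def ssa_components(x, window, n_components):
    """Decompose a 1-D signal via singular spectrum analysis (SSA):
    embed -> SVD -> rank-1 grouping -> diagonal averaging."""
    N = len(x)
    K = N - window + 1
    # trajectory (Hankel) matrix: each column is a lagged window of the signal
    X = np.column_stack([x[i:i + window] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for k in range(n_components):
        Xk = s[k] * np.outer(U[:, k], Vt[k])
        # diagonal averaging (Hankelization) back to a 1-D series:
        # time index i corresponds to the anti-diagonal r + c = i
        comp = np.array([np.mean(Xk[::-1].diagonal(i - window + 1))
                         for i in range(N)])
        comps.append(comp)
    return comps
```

Because the decomposition is exact, the components sum back to the original signal; artifact removal amounts to grouping the blink-dominated components and subtracting their sum.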


2021 · pp. 147715352110026
Author(s): Y Mao, S Fotios

Obstacle detection and facial emotion recognition are two critical visual tasks for pedestrians. In previous studies, the effect of changes in lighting was tested for these as individual tasks, where the task to be performed next in a sequence was known. In natural situations, a pedestrian is required to attend to multiple tasks, perhaps simultaneously, or at least does not know which of several possible tasks would next require their attention. This multi-tasking might impair performance on any one task and affect evaluation of optimal lighting conditions. In two experiments, obstacle detection and facial emotion recognition tasks were performed in parallel under different illuminances. Comparison of these results with previous studies, where these same tasks were performed individually, suggests that multi-tasking impaired performance on the peripheral detection task but not the on-axis facial emotion recognition task.


2021 · pp. 1-11
Author(s): Sara A. Heyn, Collin Schmit, Taylor J. Keding, Richard Wolf, Ryan J. Herringa

Abstract: Despite broad evidence suggesting that adversity-exposed youth experience an impaired ability to recognize emotion in others, the underlying biological mechanisms remain elusive. This study uses a multimethod approach to target the neurological substrates of this phenomenon in a well-phenotyped sample of youth meeting diagnostic criteria for posttraumatic stress disorder (PTSD). Twenty-one PTSD-afflicted youth and 23 typically developing (TD) controls completed clinical interview schedules, an emotion recognition task with eye-tracking, and an implicit emotion processing task during functional magnetic resonance imaging (fMRI). PTSD was associated with decreased accuracy in identification of angry, disgust, and neutral faces compared to TD youth. Of note, these impairments occurred despite normal deployment of visual attention in youth with PTSD relative to TD youth. Correlation with a related fMRI task revealed a group-by-accuracy interaction for amygdala–hippocampus functional connectivity (FC) for angry expressions, where TD youth showed a positive relationship between anger accuracy and amygdala–hippocampus FC; this relationship was reversed in youth with PTSD. These findings are a novel characterization of impaired threat recognition within a well-phenotyped population of severe pediatric PTSD. Further, the differential amygdala–hippocampus FC identified in youth with PTSD may imply aberrant efficiency of emotional contextualization circuits.


2011 · Vol 198 (4) · pp. 302-308
Author(s): Ian M. Anderson, Clare Shippen, Gabriella Juhasz, Diana Chase, Emma Thomas, ...

Background: Negative biases in emotional processing are well recognised in people who are currently depressed but are less well described in those with a history of depression, where such biases may contribute to vulnerability to relapse.
Aims: To compare accuracy, discrimination and bias in face emotion recognition in those with current and remitted depression.
Method: The sample comprised a control group (n = 101), a currently depressed group (n = 30) and a remitted depression group (n = 99). Participants provided valid data after receiving a computerised face emotion recognition task following standardised assessment of diagnosis and mood symptoms.
Results: In the control group women were more accurate in recognising emotions than men owing to greater discrimination. Among participants with depression, those in remission correctly identified more emotions than controls owing to increased response bias, whereas those currently depressed recognised fewer emotions owing to decreased discrimination. These effects were most marked for anger, fear and sadness but there was no significant emotion × group interaction, and a similar pattern tended to be seen for happiness although not for surprise or disgust. These differences were confined to participants who were antidepressant-free, with those taking antidepressants having similar results to the control group.
Conclusions: Abnormalities in face emotion recognition differ between people with current depression and those in remission. Reduced discrimination in depressed participants may reflect withdrawal from the emotions of others, whereas the increased bias in those with a history of depression could contribute to vulnerability to relapse. The normal face emotion recognition seen in those taking medication may relate to the known effects of antidepressants on emotional processing and could contribute to their ability to protect against depressive relapse.


2021 · Vol 168 · pp. S130
Author(s): Xiaodan Zhang, Yawen Zhai, Nan Zhang, Junwei Kang, Tao Li, ...

Author(s): Tie Hua Zhou, Wen Long Liang, Hang Yu Liu, Wei Jian Pu, Ling Wang

2022 · Vol 12 (2) · pp. 807
Author(s): Huafei Xiao, Wenbo Li, Guanzhong Zeng, Yingzhang Wu, Jiyong Xue, ...

With the development of intelligent automotive human-machine systems, driver emotion detection and recognition has become an emerging research topic. Facial expression-based emotion recognition approaches have achieved outstanding results on laboratory-controlled data. However, these studies cannot represent the environment of real driving situations. To address this, this paper proposes a facial expression-based on-road driver emotion recognition network called FERDERnet. This method divides the on-road driver facial expression recognition task into three modules: a face detection module that detects the driver's face, an augmentation-based resampling module that performs data augmentation and resampling, and an emotion recognition module that adopts a deep convolutional neural network pre-trained on the FER and CK+ datasets and then fine-tuned as a backbone for driver emotion recognition. The method adopts five different backbone networks as well as an ensemble method. Furthermore, to evaluate the proposed method, this paper collected an on-road driver facial expression dataset containing various road scenarios and the corresponding driver facial expressions during the driving task, and experiments were performed on it. In terms of both efficiency and accuracy, the proposed FERDERnet with an Xception backbone was effective in identifying on-road driver facial expressions and obtained superior performance compared to the baseline networks and some state-of-the-art networks.

