Personality first in emotion: a deep neural network based on electroencephalogram channel attention for cross-subject emotion recognition

2021 ◽  
Vol 8 (8) ◽  
pp. 201976
Author(s):  
Zhihang Tian ◽  
Dongmin Huang ◽  
Sijin Zhou ◽  
Zhidan Zhao ◽  
Dazhi Jiang

In recent years, more and more researchers have focused on emotion recognition methods based on electroencephalogram (EEG) signals. However, most studies consider only the spatio-temporal characteristics of EEG and model on that basis, without considering personality factors, let alone the potential correlations between different subjects. Given the particularity of emotions, different individuals may have different subjective responses to the same physical stimulus; emotion recognition methods based on EEG signals should therefore tend towards personalization. This paper models personalized EEG emotion recognition at the macro and micro levels. At the macro level, we use personality characteristics to group individuals with similar personalities, following the perspective of ‘birds of a feather flock together’. At the micro level, we employ deep learning models to extract the spatio-temporal feature information of EEG. To evaluate the effectiveness of our method, we conduct an EEG emotion recognition experiment on the ASCERTAIN dataset. The results demonstrate that the recognition accuracy of the proposed method is 72.4% and 75.9% on valence and arousal, respectively, which is 10.2% and 9.1% higher than the same model without personalization.
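The macro-level grouping can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: it assumes each subject's personality is available as a trait vector (e.g., Big-Five scores, which ASCERTAIN provides) and uses a simple k-means grouping; the per-group EEG classifiers of the micro level are only alluded to in a comment.

```python
import numpy as np

# Hypothetical setup: each subject is described by a 5-dim personality vector.
# "Birds of a feather": group subjects by personality similarity (macro level),
# then train a separate EEG emotion classifier per group (micro level, stubbed).

rng = np.random.default_rng(0)
personalities = rng.random((12, 5))        # 12 subjects x 5 traits

def kmeans(X, k=3, iters=20, seed=0):
    r = np.random.default_rng(seed)
    centroids = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each subject to the nearest centroid
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels

groups = kmeans(personalities, k=3)
# A new subject would be routed to the EEG classifier of its personality group.
```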

2021 ◽  
Vol 15 ◽  
Author(s):  
Yanling An ◽  
Shaohai Hu ◽  
Xiaoying Duan ◽  
Ling Zhao ◽  
Caiyun Xie ◽  
...  

As one of the key technologies of affective computing, emotion recognition has received great attention. Electroencephalogram (EEG) signals are spontaneous and difficult to camouflage, so they are used for emotion recognition in academic and industrial circles. To overcome the drawback that traditional machine-learning-based emotion recognition relies too heavily on manual feature extraction, we propose an EEG emotion recognition algorithm based on 3D feature fusion and a convolutional autoencoder (CAE). First, the differential entropy (DE) features of different frequency bands of the EEG signals are fused to construct 3D features that retain the spatial information between channels. Then, the constructed 3D features are input into the CAE for emotion recognition. Extensive experiments on the public DEAP dataset show recognition accuracies of 89.49% and 90.76% for the valence and arousal dimensions, respectively. The proposed method is therefore well suited to emotion recognition tasks.
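The 3D feature construction described above can be sketched as follows. The band count, toy 3×3 electrode grid, and random stand-in signals are assumptions for illustration (DEAP uses 32 channels on a larger, sparse grid); the DE formula for a Gaussian band-passed signal is standard.

```python
import numpy as np

BANDS = ["theta", "alpha", "beta", "gamma"]

def differential_entropy(x):
    # For a Gaussian band-passed signal: DE = 0.5 * ln(2*pi*e*var)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

rng = np.random.default_rng(1)
n_channels, n_samples = 9, 512
# band_signals[b, c] = band-passed EEG of channel c in band b (random stand-in)
band_signals = rng.standard_normal((len(BANDS), n_channels, n_samples))

# 2D electrode layout (toy 3x3 grid) preserving spatial adjacency between channels
grid_index = np.arange(n_channels).reshape(3, 3)

feature_3d = np.zeros((len(BANDS), 3, 3))
for b in range(len(BANDS)):
    for c in range(n_channels):
        r, col = np.argwhere(grid_index == c)[0]
        feature_3d[b, r, col] = differential_entropy(band_signals[b, c])
# feature_3d (bands x height x width) is the 3D input fed to the CAE.
```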


Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4543 ◽  
Author(s):  
Heekyung Yang ◽  
Jongdae Han ◽  
Kyungha Min

Visual contents such as movies and animation evoke various human emotions. We examine the argument that the emotion evoked by visual contents may vary with the contrast of the scenes they contain. We consider three emotion categories, namely positive, neutral and negative, sample several scenes of these emotions from visual contents, and manipulate the contrast of the scenes. We then measure the change of valence and arousal in human participants who watch the contents, using a deep emotion recognition module based on electroencephalography (EEG) signals. We conclude that enhancing contrast increases valence, while reducing contrast decreases it; contrast control affects arousal only on a very minute scale.
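The contrast manipulation itself can be sketched as a simple scaling of pixel values about the scene's mean luminance; the exact manipulation used in the study is not specified, so this linear-gain form is an assumption.

```python
import numpy as np

# Contrast control: scale pixel values about the mean luminance.
# factor > 1 enhances contrast, factor < 1 reduces it.
def adjust_contrast(frame, factor):
    mean = frame.mean()
    return np.clip(mean + factor * (frame - mean), 0.0, 1.0)

rng = np.random.default_rng(2)
scene = rng.random((4, 4))                 # toy grayscale frame in [0, 1]
enhanced = adjust_contrast(scene, 1.5)
reduced = adjust_contrast(scene, 0.5)
```

Reducing contrast with a factor below 1 preserves the mean luminance exactly while shrinking the spread of pixel values, so the same scene content is shown at a lower dynamic range.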


2021 ◽  
Vol 2078 (1) ◽  
pp. 012044
Author(s):  
Lingzhi Chen ◽  
Wei Deng ◽  
Chunjin Ji

Abstract Pattern recognition is the most important part of a brain-computer interface (BCI) system. More and more deep learning methods have been applied in BCI to improve pattern recognition accuracy, especially in BCIs based on the electroencephalogram (EEG) signal. Convolutional neural networks (CNNs) hold great promise and have been extensively employed for feature classification in BCI. This paper reviews the application of the CNN method in BCIs based on various EEG signals.


2021 ◽  
Vol 15 ◽  
Author(s):  
Jing-Shan Huang ◽  
Wan-Shan Liu ◽  
Bin Yao ◽  
Zhan-Xiang Wang ◽  
Si-Fang Chen ◽  
...  

The classification of electroencephalogram (EEG) signals is of significant importance in brain-computer interface (BCI) systems. Aiming to achieve intelligent classification of motor imagery EEG types with high accuracy, a classification methodology using wavelet packet decomposition (WPD) and the proposed deep residual convolutional networks (DRes-CNN) is presented. Firstly, EEG waveforms are segmented into sub-signals. Then the EEG signal features are obtained through the WPD algorithm, and selected wavelet coefficients are retained and reconstructed into EEG signals in their respective frequency bands. Subsequently, the reconstructed EEG signals are utilized as input to the proposed deep residual convolutional networks for classification. Finally, motor imagery EEG types are classified intelligently by the DRes-CNN classifier. Datasets from the BCI Competition were used to test the performance of the proposed deep learning classifier. Classification experiments show that the average recognition accuracy of this method reaches 98.76%. The proposed method can be further applied to motor-imagery-controlled BCI systems.
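The WPD stage can be sketched with a full wavelet packet tree. The paper does not state the mother wavelet, so the Haar wavelet is used here purely for illustration; because the Haar transform is orthonormal, signal energy is preserved across the packet leaves.

```python
import numpy as np

# Sketch of wavelet packet decomposition (WPD); Haar is an assumed mother wavelet.
def haar_step(x):
    low = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    high = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail coefficients
    return low, high

def wpd(x, level):
    # Full wavelet packet tree: every node is split, yielding 2**level leaves,
    # each covering one sub-band of the signal.
    nodes = [x]
    for _ in range(level):
        nodes = [half for n in nodes for half in haar_step(n)]
    return nodes

rng = np.random.default_rng(3)
eeg_segment = rng.standard_normal(256)
leaves = wpd(eeg_segment, level=3)           # 8 sub-bands of 32 coefficients each
# Energy is preserved across the orthogonal packet tree:
energy_in = np.sum(eeg_segment ** 2)
energy_out = sum(np.sum(leaf ** 2) for leaf in leaves)
```

Selected leaves would then be reconstructed per band and fed to the DRes-CNN classifier.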


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7103
Author(s):  
Heekyung Yang ◽  
Jongdae Han ◽  
Kyungha Min

Electroencephalogram (EEG) biosignals are widely used to measure human emotional reactions. The recent progress of deep learning-based classification models has improved the accuracy of emotion recognition in EEG signals. We apply a deep learning-based emotion recognition model to EEG biosignals to prove that illustrated surgical images reduce the negative emotional reactions that photographic surgical images generate. The strong negative emotional reactions caused by surgical images, which show the internal structure of the human body (including blood, flesh, muscle, fatty tissue, and bone), act as an obstacle when explaining the images to patients or communicating about them with non-professional people. We claim that the negative emotional reactions generated by illustrated surgical images are less severe than those caused by raw surgical images. To demonstrate the difference, we produce several illustrated surgical images from photographs and measure the emotional reactions they engender using EEG biosignals; a deep learning-based emotion recognition model is applied to extract the emotional reactions. Through this experiment, we show that the negative emotional reactions associated with photographic surgical images are much stronger than those caused by illustrated versions of identical images. We further conduct a self-assessed user survey to prove that the emotions recognized from EEG signals effectively represent user-annotated emotions.


2020 ◽  
Vol 65 (4) ◽  
pp. 393-404
Author(s):  
Ali Momennezhad

Abstract In this paper, we present an efficient, accurate and user-friendly brain-computer interface (BCI) system for recognizing and distinguishing different emotion states. For this, we used the multimodal dataset entitled “MAHNOB-HCI”, which can be freely obtained through an email request. This research is based on electroencephalogram (EEG) signals carrying emotions and excludes other physiological features, as we find EEG signals more reliable for extracting deep and true emotions than other physiological features. EEG signals have low information content and low signal-to-noise ratios (SNRs), so proposing a robust and dependable emotion recognition algorithm is a considerable challenge. We address this imperfection with a new method based on the matching pursuit (MP) algorithm, applying MP to increase the quality and SNRs of the original signals. To obtain a signal of high quality, we created a new dictionary of 5000 Gabor atoms over five scales. For feature extraction, we used a 9-scale wavelet algorithm. A 32-electrode configuration was used for signal collection, but we used only eight of those electrodes; our method is therefore highly user-friendly and convenient. To evaluate the results, we compared our algorithm with other similar works. The suggested algorithm outperforms the same algorithm without MP by 2.8% in average accuracy and by 0.03 in F-score. Compared with corresponding works, the accuracy and F-score of the proposed algorithm are better by 10.15% and 0.1, respectively. Our method thus improves upon past works in accuracy, F-score and user-friendliness despite using just eight electrodes.
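The greedy MP loop described above can be sketched over a generic normalized dictionary. The 5000-atom, five-scale Gabor dictionary is stood in for by random unit-norm atoms here; the iterative atom selection and residual update are the standard matching-pursuit procedure.

```python
import numpy as np

# Matching pursuit: greedily approximate a signal with dictionary atoms.
def matching_pursuit(signal, dictionary, n_iter=10):
    residual = signal.astype(float).copy()
    approx = np.zeros_like(residual)
    for _ in range(n_iter):
        corr = dictionary @ residual           # inner product with each atom
        k = np.argmax(np.abs(corr))            # best-matching atom
        approx += corr[k] * dictionary[k]      # add its contribution
        residual -= corr[k] * dictionary[k]    # shrink the residual
    return approx, residual

rng = np.random.default_rng(4)
atoms = rng.standard_normal((200, 64))
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)  # unit-norm atoms
signal = rng.standard_normal(64)
approx, residual = matching_pursuit(signal, atoms, n_iter=20)
```

Each iteration removes the best-correlated atom's contribution, so the residual (treated as noise) shrinks while the sparse approximation retains the signal structure, which is how MP raises the effective SNR.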


2021 ◽  
Vol 2078 (1) ◽  
pp. 012028
Author(s):  
Huiping Shi ◽  
Hong Xie ◽  
Mengran Wu

Abstract Emotion recognition is a key technology of human-computer emotional interaction, which plays an important role in various fields and has attracted the attention of many researchers. However, the interactivity and correlation between multi-channel EEG signals have not attracted much attention. For this reason, an EEG emotion recognition method based on 2DCNN-BiGRU and an attention mechanism is proposed. The method first arranges the channels into a two-dimensional matrix according to electrode position; the pre-processed two-dimensional feature matrix is then fed into a two-dimensional convolutional neural network (2DCNN) and a bidirectional gated recurrent unit (BiGRU) with an attention mechanism layer to extract spatial and time-domain features, and classification is finally performed by a softmax function. The experimental results show that the average classification accuracies of this model are 93.66% and 94.32% for valence and arousal, respectively.
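The first step, arranging channel values into a 2D matrix by electrode position, can be sketched as below. The toy 3×3 montage is an assumption for illustration; real EEG caps map more channels onto a larger, partially empty grid.

```python
import numpy as np

# Toy montage: electrode name -> (row, col) position in a 3x3 scalp grid.
montage = {
    "F3": (0, 0), "Fz": (0, 1), "F4": (0, 2),
    "C3": (1, 0), "Cz": (1, 1), "C4": (1, 2),
    "P3": (2, 0), "Pz": (2, 1), "P4": (2, 2),
}

def to_matrix(channel_values, montage, shape=(3, 3)):
    # Place each channel's value at its electrode position; unused cells stay 0.
    m = np.zeros(shape)
    for name, (r, c) in montage.items():
        m[r, c] = channel_values[name]
    return m

values = {name: float(i) for i, name in enumerate(montage)}
feature_matrix = to_matrix(values, montage)   # 2DCNN input after preprocessing
```

Preserving electrode adjacency in this way lets the 2DCNN's convolutions capture the spatial correlation between neighbouring channels.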


2020 ◽  
Vol 6 (3) ◽  
pp. 255-287
Author(s):  
Wanrou Hu ◽  
Gan Huang ◽  
Linling Li ◽  
Li Zhang ◽  
Zhiguo Zhang ◽  
...  

Emotions, formed in the process of perceiving the external environment, directly affect human daily life, such as social interaction, work efficiency, physical wellness, and mental health. In recent decades, emotion recognition has become a promising research direction with significant application value. Taking advantage of electroencephalogram (EEG) signals (i.e., high time resolution) and video-based external emotion evoking (i.e., rich media information), video-triggered emotion recognition with EEG signals has proven to be a useful tool for conducting emotion-related studies in a laboratory environment, providing constructive technical support for establishing real-time emotion interaction systems. In this paper, we focus on video-triggered EEG-based emotion recognition and present a systematic introduction to the currently available video-triggered EEG-based emotion databases with the corresponding analysis methods. First, current video-triggered EEG databases for emotion recognition (e.g., the DEAP, MAHNOB-HCI, and SEED series databases) are presented in full detail. Then, the commonly used EEG feature extraction, feature selection, and modeling methods in video-triggered EEG-based emotion recognition are systematically summarized, and a brief review of the current state of video-triggered EEG-based emotion studies is provided. Finally, the limitations and possible prospects of the existing video-triggered EEG-emotion databases are fully discussed.


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1870
Author(s):  
Tianjiao Kong ◽  
Jie Shao ◽  
Jiuyuan Hu ◽  
Xin Yang ◽  
Shiyiling Yang ◽  
...  

Emotion recognition, as a challenging and active research area, has received considerable attention in recent years. In this study, an attempt was made to extract complex network features from electroencephalogram (EEG) signals for emotion recognition. We propose a novel method of constructing forward weighted horizontal visibility graphs (FWHVG) and backward weighted horizontal visibility graphs (BWHVG) based on angle measurement. The two types of complex networks are used to extract network features, and the two feature matrices are then fused into a single feature matrix to classify the EEG signals. The average emotion recognition accuracies based on the complex network features of the proposed method were 97.53% and 97.75% in the valence and arousal dimensions, and reached 98.12% and 98.06%, respectively, when combined with time-domain features.
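A weighted horizontal visibility graph can be sketched as follows. The horizontal visibility criterion is standard; the specific angle-based weight `arctan((x_j - x_i)/(j - i))` is an illustrative assumption, not necessarily the paper's exact formula, and only the forward direction is shown.

```python
import numpy as np

# Weighted horizontal visibility graph: nodes are time samples; i and j (i < j)
# are linked when every intermediate sample lies strictly below both endpoints.
# The edge weight here is the view angle between the two samples.
def whvg(x):
    edges = {}
    n = len(x)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                edges[(i, j)] = np.arctan((x[j] - x[i]) / (j - i))
    return edges

series = [1.0, 0.5, 2.0, 0.3, 0.4, 1.5]
graph = whvg(series)

# Degree sequence, a typical network feature for downstream classification:
degrees = np.zeros(len(series))
for (i, j) in graph:
    degrees[i] += 1
    degrees[j] += 1
```

Running the same construction on the time-reversed series would give the backward graph, and features from both are fused into the final feature matrix.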

