Expression EEG Multimodal Emotion Recognition Method Based on the Bidirectional LSTM and Attention Mechanism

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Yifeng Zhao ◽  
Deyun Chen

Due to the complexity of human emotions, different emotion features share some similarities, and existing emotion recognition methods suffer from difficult feature extraction and low accuracy. An expression–EEG multimodal emotion recognition method based on bidirectional LSTM and an attention mechanism is therefore proposed. First, facial expression features are extracted with a bilinear convolutional network (BCN), EEG signals are transformed into three groups of frequency-band image sequences, and the BCN fuses the image features to obtain multimodal expression–EEG emotion features. Then, an LSTM with an attention mechanism extracts the important data during temporal modeling, which effectively avoids the randomness or blindness of sampling methods. Finally, a feature fusion network with a three-layer bidirectional LSTM structure is designed to fuse the expression and EEG features, which helps improve the accuracy of emotion recognition. The proposed method is tested on the MAHNOB-HCI and DEAP datasets using the MATLAB simulation platform. Experimental results show that the attention mechanism enhances the visual effect of the image and that, compared with other methods, the proposed method extracts emotion features from expressions and EEG signals more effectively and achieves higher emotion recognition accuracy.
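The attention step described above can be illustrated with a minimal numpy sketch of attention pooling over a sequence of (Bi)LSTM hidden states: a learned scoring function weights each time step, and the context vector is the weighted sum. The weight shapes and names (`W`, `v`) are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_pool(H, W, v):
    """Attention pooling over hidden states.

    H : (T, d) hidden states from a (bidirectional) LSTM
    W : (d, k) projection weights, v : (k,) scoring vector -- hypothetical
    Returns the context vector (d,) and the attention weights (T,).
    """
    scores = np.tanh(H @ W) @ v      # (T,) unnormalized relevance per time step
    alpha = softmax(scores)          # (T,) attention weights, sum to 1
    context = alpha @ H              # (d,) weighted sum of hidden states
    return context, alpha

rng = np.random.default_rng(0)
T, d, k = 8, 16, 4                   # sequence length, hidden size, attention size
H = rng.standard_normal((T, d))
W = rng.standard_normal((d, k))
v = rng.standard_normal(k)
context, alpha = attention_pool(H, W, v)
```

In a full model the context vector would feed the downstream fusion network; here it simply demonstrates how attention emphasizes the most informative time steps rather than sampling them at random.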

2021 ◽  
Vol 2078 (1) ◽  
pp. 012028
Author(s):  
Huiping Shi ◽  
Hong Xie ◽  
Mengran Wu

Abstract Emotion recognition is a key technology of human-computer emotional interaction; it plays an important role in various fields and has attracted the attention of many researchers. However, the interactivity and correlation between multi-channel EEG signals have not received much attention. For this reason, an EEG emotion recognition method based on 2DCNN-BiGRU and an attention mechanism is tentatively proposed. The method first arranges the signals into a two-dimensional matrix according to the electrode positions and takes the pre-processed two-dimensional feature matrix as input: a two-dimensional convolutional neural network (2DCNN) extracts spatial features, a bidirectional gated recurrent unit (BiGRU) with an attention layer extracts time-domain features, and a softmax function performs the final classification. Experimental results show that the average classification accuracies of the model are 93.66% and 94.32% on the valence and arousal dimensions, respectively.
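The electrode-to-matrix step can be sketched as follows: per-channel values (e.g. band power) are scattered into a 2-D grid whose cells mirror the scalp layout, so a 2DCNN can exploit spatial structure. The grid coordinates below are an assumed subset of the 10-20 system, not the paper's actual mapping.

```python
import numpy as np

# Hypothetical subset of 10-20 electrode positions on a 9x9 grid
# (row, col); the layout used in the paper is assumed here.
ELECTRODE_GRID = {
    "Fp1": (0, 3), "Fp2": (0, 5),
    "F3":  (2, 2), "Fz": (2, 4), "F4": (2, 6),
    "C3":  (4, 2), "Cz": (4, 4), "C4": (4, 6),
    "P3":  (6, 2), "Pz": (6, 4), "P4": (6, 6),
    "O1":  (8, 3), "O2": (8, 5),
}

def to_2d_matrix(channel_values, grid=ELECTRODE_GRID, shape=(9, 9)):
    """Scatter per-channel EEG features into a 2-D matrix by electrode position.

    channel_values : dict mapping channel name -> scalar feature.
    Cells with no electrode stay at zero, preserving the spatial layout.
    """
    m = np.zeros(shape)
    for ch, val in channel_values.items():
        r, c = grid[ch]
        m[r, c] = val
    return m

values = {ch: i + 1.0 for i, ch in enumerate(ELECTRODE_GRID)}
matrix = to_2d_matrix(values)
```

Stacking such matrices over time (or over frequency bands) yields the image-like input tensor the 2DCNN consumes.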


2021 ◽  
Vol 70 ◽  
pp. 103029
Author(s):  
Ying Tan ◽  
Zhe Sun ◽  
Feng Duan ◽  
Jordi Solé-Casals ◽  
Cesar F. Caiafa

2018 ◽  
Vol 174 ◽  
pp. 33-42 ◽  
Author(s):  
Dung Nguyen ◽  
Kien Nguyen ◽  
Sridha Sridharan ◽  
David Dean ◽  
Clinton Fookes

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Bahar Hatipoglu Yilmaz ◽  
Cemal Kose

Abstract Emotion is one of the most complex and difficult expressions to predict. Nowadays, many recognition systems based on classification methods address different types of emotion recognition problems. In this paper, we propose a multimodal fusion method between electroencephalography (EEG) and electrooculography (EOG) signals for emotion recognition. Before the feature extraction stage, we apply angle-amplitude transformations to the EEG–EOG signals; these transformations take arbitrary time-domain signals and convert them into two-dimensional images called Angle-Amplitude Graphs (AAGs). We then extract image-based features using the scale-invariant feature transform (SIFT), fuse the features originating from EEG and EOG, and finally classify them with support vector machines. To verify the validity of the proposed methods, we performed experiments on the multimodal DEAP dataset, a benchmark widely used for emotion analysis with physiological signals, applying the proposed emotion recognition procedure on the arousal-valence dimensions. After fusion, we achieved 91.53% accuracy for the arousal space and 90.31% for the valence space. Experimental results show that combining AAG image features from EEG–EOG signals in the baseline angle-amplitude transformation approach enhances classification performance on the DEAP dataset.
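The signal-to-image idea can be sketched in numpy. The abstract does not define the exact AAG transform, so the mapping below is an assumed interpretation: each sample's position along the signal gives an angle in [0, 2π), its normalized amplitude gives a radius, and the resulting polar points are rasterized onto a binary image.

```python
import numpy as np

def angle_amplitude_graph(x, size=64):
    """Rasterize a 1-D signal into a 2-D angle-amplitude image.

    Assumed interpretation (not the paper's exact definition): sample index
    -> angle in [0, 2*pi), normalized |amplitude| -> radius in [0, 1], then
    the polar points are scattered onto a size x size binary image.
    """
    x = np.asarray(x, dtype=float)
    amp = np.abs(x) / (np.abs(x).max() + 1e-12)    # radius in [0, 1]
    ang = 2 * np.pi * np.arange(len(x)) / len(x)   # angle in [0, 2*pi)
    # polar -> Cartesian, then shift/scale into pixel coordinates
    px = ((amp * np.cos(ang) + 1) / 2 * (size - 1)).astype(int)
    py = ((amp * np.sin(ang) + 1) / 2 * (size - 1)).astype(int)
    img = np.zeros((size, size), dtype=np.uint8)
    img[py, px] = 1
    return img

t = np.linspace(0, 1, 256, endpoint=False)
signal = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
img = angle_amplitude_graph(signal)
```

In the pipeline described above, images like `img` (one per channel) would then be fed to a SIFT-style keypoint extractor before fusion and SVM classification.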


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 59844-59861 ◽  
Author(s):  
Baixi Xing ◽  
Hui Zhang ◽  
Kejun Zhang ◽  
Lekai Zhang ◽  
Xinda Wu ◽  
...  