Emotion Recognition Model Based on SOM Algorithm

Author(s): Yi Liao ◽ Youyi Jiang
2020 ◽ Vol 140 ◽ pp. 358-365

Author(s): Zijiang Zhu ◽ Weihuang Dai ◽ Yi Hu ◽ Junshan Li
IEEE Access ◽ 2021 ◽ Vol 9 ◽ pp. 25278-25290
Author(s): Bo Li ◽ Hui Ren ◽ Xuekun Jiang ◽ Fang Miao ◽ Feng Feng ◽ ...
2021

Author(s): Jian Zhao ◽ ZhiWei Zhang ◽ Jinping Qiu ◽ Lijuan Shi ◽ Zhejun Kuang ◽ ...

Abstract With the rapid development of deep learning in recent years, automatic electroencephalography (EEG) emotion recognition has attracted wide attention. At present, most deep learning methods neither normalize EEG data properly nor fully extract time- and frequency-domain features, which affects the accuracy of EEG emotion recognition. To solve these problems, we propose GTSception, a deep learning EEG emotion recognition model. In pre-processing, the EEG data, including all channels, are sliced in time and normalized. In our model, global convolution kernels extract overall semantic features; three kinds of temporal convolution kernels, representing different emotional periods, extract temporal features; two kinds of spatial convolution kernels, highlighting differences between the brain hemispheres, extract spatial features; finally, emotions are binary-classified by a fully connected layer. The experiments are based on the DEAP dataset, and our model can effectively normalize the data and fully extract features. For Arousal, our accuracy is 8.76% higher than that of the current best Inception-based emotion recognition model. For Valence, the best accuracy of our model reaches 91.51%.
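The multi-branch kernel design described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the averaging kernels (standing in for learned filters), and the specific kernel lengths and spans are all illustrative assumptions; only the overall idea (temporal kernels of several lengths along time, spatial kernels of different spans across channels) follows the abstract.

```python
import numpy as np

def temporal_branch(eeg, kernel_len):
    """1-D 'valid' convolution along time, applied per channel.

    An averaging kernel stands in for a learned temporal filter; the
    kernel length loosely corresponds to an 'emotional period'.
    """
    k = np.ones(kernel_len) / kernel_len
    return np.stack([np.convolve(ch, k, mode="valid") for ch in eeg])

def spatial_branch(eeg, span):
    """1-D 'valid' convolution across channels at each time point.

    span == n_channels covers the whole scalp; span == n_channels // 2
    loosely mimics a hemisphere-sized spatial kernel.
    """
    k = np.ones(span) / span
    return np.stack([np.convolve(col, k, mode="valid") for col in eeg.T]).T

# toy input: 32 channels x 128 samples (DEAP records 32 EEG channels)
rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 128))

# three temporal kernel lengths, standing in for different emotional periods
temporal_feats = [temporal_branch(eeg, n) for n in (16, 32, 64)]
# two spatial kernel spans: whole-scalp vs. hemisphere-sized
spatial_feats = [spatial_branch(eeg, s) for s in (32, 16)]
```

In a real model these branches would be learned convolution layers whose outputs are concatenated and fed to a fully connected classifier; the sketch only shows how the different kernel shapes slice the same channels-by-time array.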


Measurement ◽ 2020 ◽ Vol 164 ◽ pp. 108047
Author(s): Tian Chen ◽ Sihang Ju ◽ Fuji Ren ◽ Mingyan Fan ◽ Yu Gu

2021 ◽ Vol 2021 ◽ pp. 1-10
Author(s): Mingyong Li ◽ Xue Qiu ◽ Shuang Peng ◽ Lirong Tang ◽ Qiqi Li ◽ ...

With the rapid development of deep learning and wireless communication technology, emotion recognition has received growing attention from researchers. Computers can only be truly intelligent when they understand human emotions, and emotion recognition is the first step toward that goal. This paper proposes a multimodal emotion recognition model based on a multiobjective optimization algorithm. The model combines voice information and facial information and can optimize the accuracy and uniformity of recognition at the same time. The speech modality is based on an improved deep convolutional neural network (DCNN); the video modality is based on an improved deep separable convolutional network (DSCNN). After single-modality recognition, a multiobjective optimization algorithm fuses the two modalities at the decision level. The experimental results show that the proposed model improves on every evaluation metric, and its emotion recognition accuracy is 2.88% higher than that of the ISMS_ALA model. The results show that the multiobjective optimization algorithm can effectively improve the performance of the multimodal emotion recognition model.
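Decision-level fusion of the kind described above can be sketched in a few lines. This is a deliberately simplified stand-in: a single-objective grid search over one fusion weight replaces the paper's multiobjective optimizer (which also balances uniformity across classes), and all function names and the toy data are assumptions, not the authors' code.

```python
import numpy as np

def fuse(p_speech, p_face, w):
    """Weighted decision-level fusion of two per-sample class probabilities."""
    return w * p_speech + (1.0 - w) * p_face

def search_fusion_weight(p_speech, p_face, labels):
    """Grid-search the fusion weight that maximizes binary accuracy."""
    grid = np.linspace(0.0, 1.0, 101)
    accs = [((fuse(p_speech, p_face, w) > 0.5).astype(int) == labels).mean()
            for w in grid]
    best = int(np.argmax(accs))
    return grid[best], accs[best]

# toy validation set: binary labels with noisy per-modality probabilities
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=200)
p_speech = np.clip(labels + rng.normal(0.0, 0.45, 200), 0.0, 1.0)
p_face = np.clip(labels + rng.normal(0.0, 0.35, 200), 0.0, 1.0)

w_best, acc_best = search_fusion_weight(p_speech, p_face, labels)
acc_speech = ((p_speech > 0.5).astype(int) == labels).mean()
acc_face = ((p_face > 0.5).astype(int) == labels).mean()
```

Because the grid includes w = 0 and w = 1, the fused accuracy can never fall below either single-modality accuracy on the data used for the search, which is the basic reason decision-level fusion of complementary modalities helps.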

