Optimizing Residual Networks and VGG for Classification of EEG Signals: Identifying Ideal Channels for Emotion Recognition

2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Kit Hwa Cheah ◽  
Humaira Nisar ◽  
Vooi Voon Yap ◽  
Chen-Yi Lee ◽  
G. R. Sinha

Emotion is a crucial aspect of human health, and emotion recognition systems serve important roles in the development of neurofeedback applications. Most emotion recognition methods proposed in previous research take predefined EEG features as input to the classification algorithms. This paper investigates the less studied approach of using plain EEG signals as the classifier input, with residual networks (ResNet) as the classifier of interest. ResNet, having excelled at automated hierarchical feature extraction in raw-data domains with vast numbers of samples (e.g., image processing), is potentially promising for EEG as the amount of publicly available EEG data keeps increasing. The architecture of the original ResNet, designed for image processing, is restructured for optimal performance on EEG signals, and the arrangement of the convolutional kernel dimensions is shown to largely determine the model’s performance on EEG signal processing. The study is conducted on the Shanghai Jiao Tong University Emotion EEG Dataset (SEED), where our proposed ResNet18 architecture achieves 93.42% accuracy on 3-class emotion classification, compared with 87.06% for the original ResNet18, while also reducing the number of model parameters by 52.22%. We have also compared the importance of different subsets of EEG channels, out of a total of 62 channels, for emotion recognition. The channels placed near the anterior poles of the temporal lobes appeared to be the most emotionally relevant, which agrees with the location of emotion-processing brain structures such as the insular cortex and amygdala.
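
As a rough illustration of the restructuring this abstract describes, the following is a minimal PyTorch sketch of a residual block adapted for raw EEG, assuming the 62 SEED electrodes are treated as the input channels of 1-D convolutions over time. The kernel sizes and channel widths are illustrative assumptions, not the paper's exact configuration.

```python
# A residual block for raw EEG: 1-D convolutions over the time axis,
# with the electrodes as input channels. Kernel size is an assumption.
import torch
import torch.nn as nn

class EEGResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=9, stride=1):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size, stride, pad)
        self.bn1 = nn.BatchNorm1d(out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size, 1, pad)
        self.bn2 = nn.BatchNorm1d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # 1x1 projection so the skip path matches shape when needed
        self.shortcut = nn.Sequential()
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv1d(in_ch, out_ch, 1, stride),
                nn.BatchNorm1d(out_ch),
            )

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))

# e.g., a batch of 8 trials, 62 channels, 1000 time samples
x = torch.randn(8, 62, 1000)
block = EEGResidualBlock(62, 64, kernel_size=9, stride=2)
print(block(x).shape)  # torch.Size([8, 64, 500])
```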

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5135
Author(s):  
Ngoc-Dau Mai ◽  
Boon-Giin Lee ◽  
Wan-Young Chung

In this research, we develop an affective computing method based on machine learning for emotion recognition using a wireless protocol and a wearable, custom-designed electroencephalography (EEG) device. The system collects EEG signals using an eight-electrode placement on the scalp: two of these electrodes are placed on the frontal lobe and the other six on the temporal lobe. We performed experiments on eight subjects while they watched emotive videos. Six entropy measures were employed to extract suitable features from the EEG signals. Next, we evaluated our proposed models using three popular classifiers, a support vector machine (SVM), a multi-layer perceptron (MLP), and a one-dimensional convolutional neural network (1D-CNN), for emotion classification; both subject-dependent and subject-independent strategies were used. Our experimental results showed that the highest average accuracies achieved in the subject-dependent and subject-independent cases were 85.81% and 78.52%, respectively; both were achieved using a combination of the sample entropy measure and the 1D-CNN. Moreover, through electrode selection, our study identifies the T8 position (above the right ear) in the temporal lobe as the most critical channel among the proposed measurement positions for emotion classification. Our results demonstrate the feasibility and efficiency of the proposed EEG-based affective computing method for emotion recognition in real-world applications.
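
For reference, below is a minimal NumPy sketch of the sample entropy feature the authors pair with the 1D-CNN. The parameters m and r follow common defaults (m=2, r=0.2 times the signal's standard deviation) and are assumptions, not the paper's exact settings.

```python
# Sample entropy: -ln(A/B), where B counts template matches of length m
# and A counts matches of length m+1 (Chebyshev distance, no self-matches).
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n = len(x)

    def count_matches(dim):
        # Build overlapping templates of length `dim`
        templates = np.array([x[i:i + dim] for i in range(n - dim)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to all later templates (no self-matches)
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
print(sample_entropy(rng.standard_normal(500)))  # high for white noise
```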


2021 ◽  
Vol 15 ◽  
Author(s):  
Jing Chen ◽  
Haifeng Li ◽  
Lin Ma ◽  
Hongjian Bo ◽  
Frank Soong ◽  
...  

Recently, emotion classification from electroencephalogram (EEG) data has attracted much attention. Because EEG is an unsteady, rapidly changing voltage signal, the features extracted from it usually change dramatically, whereas emotional states change gradually; most existing feature extraction approaches do not consider this difference between EEG and emotion. Microstate analysis can capture important spatio-temporal properties of EEG signals while reducing the fast-changing signal to a sequence of prototypical topographical maps. Although microstate analysis has been widely used to study brain function, few studies have used it to analyze how the brain responds to emotional auditory stimuli. In this study, we propose a novel feature extraction method based on EEG microstates for emotion recognition. Determining the optimal number of microstates automatically is a challenge when applying microstate analysis to emotion, so we propose dual-threshold-based atomize and agglomerate hierarchical clustering (DTAAHC) to determine the optimal number of microstate classes automatically. By using the proposed method to model the temporal dynamics of the auditory emotion process, we extracted microstate characteristics as novel temporospatial features to improve the performance of emotion recognition from EEG signals. We evaluated the proposed method on two datasets. For the public music-evoked EEG Dataset for Emotion Analysis using Physiological signals, the microstate analysis identified 10 microstates, which together explained around 86% of the data at global field power peaks; emotion recognition accuracy reached 75.8% for valence and 77.1% for arousal using microstate sequence characteristics as features, outperforming the feature sets of previous studies. For the speech-evoked EEG dataset, the microstate analysis identified nine microstates, which together explained around 85% of the data, and accuracy reached 74.2% for valence and 72.3% for arousal. The experimental results indicate that microstate characteristics can effectively improve the performance of emotion recognition from EEG signals.
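
As a sketch of the first steps of such a microstate pipeline, the code below locates global field power (GFP) peaks in an EEG array of shape (n_channels, n_samples) and clusters the topographies at those peaks. Plain k-means is used here as a simple stand-in for the authors' DTAAHC algorithm, and the number of states is fixed by hand rather than determined automatically.

```python
# GFP peaks -> topographies -> clustering into prototypical maps.
import numpy as np
from scipy.signal import find_peaks
from sklearn.cluster import KMeans

def microstate_maps(eeg, n_states=4, seed=0):
    # Global field power: spatial std across channels at each sample
    gfp = eeg.std(axis=0)
    peaks, _ = find_peaks(gfp)
    maps = eeg[:, peaks].T                 # one topography per GFP peak
    # Normalize each topography so clustering reflects shape, not power
    maps = maps / np.linalg.norm(maps, axis=1, keepdims=True)
    km = KMeans(n_clusters=n_states, n_init=10, random_state=seed).fit(maps)
    return km.cluster_centers_, gfp, peaks

eeg = np.random.default_rng(1).standard_normal((32, 5000))
centers, gfp, peaks = microstate_maps(eeg, n_states=4)
print(centers.shape)  # (4, 32): one prototypical map per microstate
```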


2021 ◽  
Vol 15 ◽  
Author(s):  
Jing Cai ◽  
Ruolan Xiao ◽  
Wenjie Cui ◽  
Shang Zhang ◽  
Guangda Liu

Emotion recognition has become increasingly prominent in the medical field and in human-computer interaction. When people’s emotions change under external stimuli, various physiological signals of the human body fluctuate. Electroencephalography (EEG) is closely related to brain activity, making it possible to judge a subject’s emotional changes from EEG signals. Meanwhile, machine learning algorithms, which are good at extracting data features from a statistical perspective and making judgments, have developed by leaps and bounds. Therefore, using machine learning to extract feature vectors related to emotional states from EEG signals and constructing a classifier that separates emotions into discrete states has broad development prospects. Following the progress of EEG-based machine learning algorithms for emotion recognition, this paper introduces the acquisition, preprocessing, feature extraction, and classification of EEG signals in sequence, and it may help beginners who will use such algorithms understand the development status of this field. The articles we selected were all retrieved from the Web of Science platform, and the publication dates of most of them are concentrated in 2016–2021.
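
To make the pipeline this review walks through concrete, here is a minimal end-to-end sketch using synthetic data in place of a real recording: band-pass preprocessing, band-power feature extraction, and classification. All parameter values (sampling rate, bands, filter order) are illustrative assumptions.

```python
# Minimal EEG-emotion pipeline: preprocess -> features -> classify.
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

fs = 128  # sampling rate (Hz), illustrative

def preprocess(trial):
    # Band-pass 1-45 Hz to suppress drift and line noise
    b, a = butter(4, [1, 45], btype="band", fs=fs)
    return filtfilt(b, a, trial, axis=-1)

def band_power_features(trial):
    # Mean PSD in the classic theta/alpha/beta/gamma bands, per channel
    bands = [(4, 8), (8, 13), (13, 30), (30, 45)]
    f, psd = welch(trial, fs=fs, nperseg=fs)
    return np.hstack([psd[:, (f >= lo) & (f < hi)].mean(axis=1)
                      for lo, hi in bands])

rng = np.random.default_rng(0)
trials = rng.standard_normal((60, 14, fs * 5))   # 60 trials, 14 ch, 5 s
labels = rng.integers(0, 2, 60)                  # binary emotion labels
X = np.array([band_power_features(preprocess(t)) for t in trials])
print(cross_val_score(SVC(), X, labels, cv=5).mean())
```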


2021 ◽  
Vol 14 ◽  
Author(s):  
Yinfeng Fang ◽  
Haiyang Yang ◽  
Xuguang Zhang ◽  
Han Liu ◽  
Bo Tao

Due to the rapid development of human–computer interaction, affective computing has attracted more and more attention in recent years. In emotion recognition, electroencephalogram (EEG) signals are easier to record than other physiological signals and are not easily camouflaged. Because of the high-dimensional nature of EEG data and the diversity of human emotions, it is difficult to extract effective EEG features and recognize emotion patterns. This paper proposes a multi-feature deep forest (MFDF) model to identify human emotions. The EEG signals are first divided into several frequency bands, and the power spectral density (PSD) and differential entropy (DE) are then extracted from each band and from the original signal as features. A five-class emotion model is used to label five emotions: neutral, angry, sad, happy, and pleasant. With either the original features or dimension-reduced features as input, a deep forest is constructed to classify the five emotions. The experiments are conducted on a public dataset for emotion analysis using physiological signals (DEAP), and the results are compared with traditional classifiers, including K-nearest neighbors (KNN), random forest (RF), and support vector machine (SVM). The MFDF achieves an average recognition accuracy of 71.05%, which is 3.40%, 8.54%, and 19.53% higher than RF, KNN, and SVM, respectively. By contrast, the accuracies with dimension-reduced features and with the raw EEG signal as input are only 51.30% and 26.71%, respectively. These results show that the method can effectively contribute to EEG-based emotion classification tasks.
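
Below is a sketch of the two per-band features the MFDF model consumes, under the common assumption that band-filtered EEG is approximately Gaussian, so differential entropy reduces to 0.5·log(2πe·σ²). Band edges, filter order, and sampling rate are illustrative, not the paper's settings.

```python
# Per-band PSD and differential entropy for one EEG channel.
import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 128
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_psd_de(signal):
    feats = {}
    for name, (lo, hi) in BANDS.items():
        b, a = butter(4, [lo, hi], btype="band", fs=fs)
        x = filtfilt(b, a, signal)
        f, p = welch(x, fs=fs, nperseg=fs)
        psd = p[(f >= lo) & (f < hi)].mean()           # power spectral density
        de = 0.5 * np.log(2 * np.pi * np.e * x.var())  # differential entropy
        feats[name] = (psd, de)
    return feats

x = np.random.default_rng(0).standard_normal(fs * 10)  # 10 s of one channel
for band, (psd, de) in band_psd_de(x).items():
    print(f"{band}: PSD={psd:.4f}, DE={de:.3f}")
```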


Sensors ◽  
2020 ◽  
Vol 20 (12) ◽  
pp. 3491 ◽  
Author(s):  
Jungchan Cho ◽  
Hyoseok Hwang

Emotion recognition plays an important role in the field of human–computer interaction (HCI). The electroencephalogram (EEG) is widely used to estimate human emotion owing to its convenience and mobility, and deep neural network (DNN) approaches using EEG for emotion recognition have recently shown remarkable improvements in recognition accuracy. However, most studies in this field still require a separate process for extracting handcrafted features, despite the ability of a DNN to extract meaningful features by itself. In this paper, we propose a novel method for recognizing emotion based on three-dimensional convolutional neural networks (3D CNNs) with an efficient spatio-temporal representation of EEG signals. First, we spatially reconstruct raw EEG signals, represented as stacks of one-dimensional (1D) time series, into two-dimensional (2D) EEG frames according to the original electrode positions. We then form a 3D EEG stream by concatenating the 2D EEG frames along the time axis. These 3D reconstructions of the raw EEG signals can be efficiently combined with 3D CNNs, which have shown remarkable feature representation on spatio-temporal data. We demonstrate the accuracy of the emotional classification of the proposed method through extensive experiments on the DEAP (Dataset for Emotion Analysis using EEG, Physiological, and video signals) dataset. Experimental results show that the proposed method achieves classification accuracies of 99.11% and 99.74% in the binary classification of valence and arousal, respectively, and 99.73% in four-class classification. We investigate the spatio-temporal effectiveness of the proposed method by comparing it to several types of input methods with 2D/3D CNNs, and we experimentally determine the best-performing shapes of both the kernel and the input data. We verify that an efficient representation of the EEG and a network that fully exploits the data characteristics can outperform methods based on handcrafted features.
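
The spatial reconstruction step can be sketched as follows, assuming a hypothetical 9×9 scalp grid and a small electrode-to-(row, column) map; the paper uses the full DEAP montage, so the positions below are illustrative only.

```python
# Drop each electrode's 1-D time series into its grid cell, producing a
# (time, rows, cols) stream that a 3D CNN can consume after windowing.
import numpy as np

# Hypothetical subset of electrodes placed on a 9x9 scalp grid
POSITIONS = {"Fp1": (0, 3), "Fp2": (0, 5), "F3": (2, 2), "F4": (2, 6),
             "C3": (4, 2), "Cz": (4, 4), "C4": (4, 6),
             "P3": (6, 2), "P4": (6, 6), "O1": (8, 3), "O2": (8, 5)}

def to_3d_stream(raw, channel_names, grid=(9, 9)):
    """raw: (n_channels, n_samples) -> (n_samples, rows, cols) stream."""
    n_samples = raw.shape[1]
    frames = np.zeros((n_samples, *grid))
    for idx, name in enumerate(channel_names):
        r, c = POSITIONS[name]
        frames[:, r, c] = raw[idx]   # place each series at its scalp site
    return frames                    # 2D frames concatenated along time

names = list(POSITIONS)
raw = np.random.default_rng(0).standard_normal((len(names), 512))
stream = to_3d_stream(raw, names)
print(stream.shape)  # (512, 9, 9)
```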


2022 ◽  
Vol 12 ◽  
Author(s):  
Mingxing Liu

This paper presents an in-depth study and analysis of the emotional classification of EEG neurofeedback interactive electronic music compositions using a multi-brain collaborative brain-computer interface (BCI). Building on previous research, this paper explores the design and performance of sound visualization in an interactive format from the perspectives of visual performance design and the psychology of participating users, drawing on knowledge from disciplines such as psychology, acoustics, aesthetics, neurophysiology, and computer science. Based on the phenomenon of audiovisual association, this paper proposes a specific mapping model for converting sound into visual expression, grounded in people’s perception and aesthetics of sound, which provides a theoretical basis for the subsequent research. Building on this mapping pattern between audio and visuals, the paper investigates the realization path of interactive sound visualization, its visual expression forms and formal composition, and its aesthetic style, and it forms a design method for the visualization of interactive sound to benefit practice. In response to the neglect of the brain’s real-time, dynamic nature in traditional brain network research, dynamic brain networks are proposed for analyzing the EEG signals induced by long-time music appreciation, during which the connectivity of the brain changes continuously. We used mutual information on different frequency bands of the EEG signals to construct dynamic brain networks, observed the changes in the networks over time, and used them for emotion recognition. Using the brain networks for emotion classification, we achieved an emotion recognition rate of 67.3% under four classifications, exceeding the highest recognition rate previously reported.
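
A sketch of the dynamic brain-network construction follows, assuming histogram-binned mutual information between channel pairs inside a sliding window; the window length, step, and bin count are illustrative assumptions.

```python
# One mutual-information adjacency matrix per sliding window.
import numpy as np
from sklearn.metrics import mutual_info_score

def mi(x, y, bins=16):
    # Histogram-based mutual information between two 1-D signals
    cxy = np.histogram2d(x, y, bins=bins)[0]
    return mutual_info_score(None, None, contingency=cxy)

def dynamic_networks(eeg, win, step):
    """eeg: (n_channels, n_samples) -> list of (n_ch, n_ch) MI matrices."""
    n_ch, n = eeg.shape
    nets = []
    for start in range(0, n - win + 1, step):
        seg = eeg[:, start:start + win]
        adj = np.zeros((n_ch, n_ch))
        for i in range(n_ch):
            for j in range(i + 1, n_ch):
                adj[i, j] = adj[j, i] = mi(seg[i], seg[j])
        nets.append(adj)
    return nets

eeg = np.random.default_rng(0).standard_normal((8, 4000))
nets = dynamic_networks(eeg, win=1000, step=500)
print(len(nets), nets[0].shape)  # 7 (8, 8)
```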


2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Ayan Seal ◽  
Puthi Prem Nivesh Reddy ◽  
Pingali Chaithanya ◽  
Arramada Meghana ◽  
Kamireddy Jahnavi ◽  
...  

Human emotion recognition has been a major field of research in the last decades owing to its noteworthy academic and industrial applications. However, most state-of-the-art methods identify emotions by analyzing facial images, and emotion recognition using electroencephalogram (EEG) signals has received less attention, even though EEG signals can capture real emotion. Moreover, very few EEG signal databases are publicly available for affective computing. In this work, we present a database consisting of EEG signals of 44 volunteers, 23 of whom are female. A 32-channel CLARITY EEG traveler sensor is used to record four emotional states, namely happy, fear, sad, and neutral, elicited by showing 12 videos, with 3 videos devoted to each emotion. Participants are mapped to the emotion they felt after watching each video. The recorded EEG signals are then used to classify the four emotions based on the discrete wavelet transform and an extreme learning machine (ELM), providing an initial benchmark classification performance. The ELM algorithm is used for channel selection followed by subband selection. The proposed method performs best when features are extracted from the gamma subband of the FP1-F7 channel, with 94.72% accuracy. The presented database will be made available to researchers for affective-computing applications.
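
The following sketch pairs a discrete wavelet decomposition (via PyWavelets) with a from-scratch extreme learning machine, i.e., a random hidden layer whose output weights are solved in closed form by least squares. The wavelet choice, decomposition level, and hidden-layer size are assumptions, not the paper's settings.

```python
# DWT subband energies as features, classified by a minimal ELM.
import numpy as np
import pywt

def dwt_band_features(signal, wavelet="db4", level=4):
    # wavedec returns [approx, detail_level, ..., detail_1]; the finest
    # detail coefficients sit in the highest-frequency (gamma-like) band
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

class ELM:
    def __init__(self, n_hidden=64, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        H = np.tanh(X @ self.W)                      # random projection
        Y = np.eye(int(y.max()) + 1)[y]              # one-hot targets
        self.beta = np.linalg.pinv(H) @ Y            # closed-form readout
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W) @ self.beta).argmax(axis=1)

rng = np.random.default_rng(0)
X = np.array([dwt_band_features(rng.standard_normal(512)) for _ in range(80)])
y = rng.integers(0, 4, 80)                           # four emotion labels
print(ELM().fit(X, y).predict(X[:5]))
```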


Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 984
Author(s):  
Longxin Yao ◽  
Mingjiang Wang ◽  
Yun Lu ◽  
Heng Li ◽  
Xue Zhang

It is well known that there may be significant individual differences in physiological signal patterns of emotional responses, and emotion recognition based on electroencephalogram (EEG) signals remains a challenging task when developing an individual-independent recognition method. In this paper, from the perspective of the spatial topology and temporal information of brain emotional patterns in an EEG, we exploit complex networks to characterize EEG signals and effectively extract EEG information for emotion recognition. First, we use visibility graphs to construct complex networks from EEG signals. Then, two kinds of network entropy measures (nodal degree entropy and clustering coefficient entropy) are calculated. The effective features, selected by the AUC method, are input into an SVM classifier to perform emotion recognition across subjects. The experimental results showed that, for EEG signals from 62 channels, the features of the 18 channels selected by AUC were significant (p < 0.005). For the classification of positive and negative emotions, the average recognition rate was 87.26%; for the classification of positive, negative, and neutral emotions, it was 68.44%. Our method improves mean accuracy by an average of 2.28% over other existing methods. These results demonstrate that more accurate recognition of emotional EEG signals can be achieved relative to the available related studies, indicating that our method offers better generalizability in practical use.
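
As an illustration, here is a sketch of the natural visibility-graph construction on one EEG segment together with one plausible reading of the degree entropy feature (Shannon entropy of the empirical degree distribution); the segment length is illustrative and the O(n²) loop is fine at this scale.

```python
# Natural visibility graph: samples are nodes; two samples are linked
# if the straight line between them clears every intermediate sample.
import numpy as np

def visibility_degrees(x):
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for a in range(n):
        for b in range(a + 1, n):
            t = np.arange(a + 1, b)
            line = x[b] + (x[a] - x[b]) * (b - t) / (b - a)
            # np.all over an empty array is True: adjacent samples always see
            # each other
            if np.all(x[t] < line):
                deg[a] += 1
                deg[b] += 1
    return deg

def degree_entropy(deg):
    # Shannon entropy of the empirical degree distribution
    _, counts = np.unique(deg, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

seg = np.random.default_rng(0).standard_normal(200)  # one channel, one window
print(degree_entropy(visibility_degrees(seg)))
```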


2019 ◽  
Vol 29 (02) ◽  
pp. 1850044 ◽  
Author(s):  
Jennifer Sorinas ◽  
Maria Dolores Grima ◽  
Jose Manuel Ferrandez ◽  
Eduardo Fernandez

The development of suitable EEG-based emotion recognition systems has become a main target of Brain-Computer Interface (BCI) applications in the last decades. However, algorithms and procedures for real-time classification of emotions are scarce. The present study investigates the feasibility of real-time emotion recognition through the selection of parameters such as the appropriate time-window segmentation, target bandwidths, and cortical regions. We recorded the EEG neural activity of 24 participants while they watched and listened to an audiovisual database composed of positive and negative emotional video clips. We tested 12 different temporal window sizes, 6 frequency-band ranges, and 60 electrodes located across the entire scalp. Our results showed a correct classification rate of 86.96% for positive stimuli; the rate for negative stimuli was slightly lower (80.88%). The best time-window size, among the tested segments of 1 s to 12 s, was 12 s. Although more studies are still needed, these preliminary results provide a reliable way to develop accurate EEG-based emotion classification.
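
The time-window segmentation being evaluated can be sketched as below, assuming a recording array of shape (n_channels, n_samples); each candidate window length yields a set of fixed-length, non-overlapping segments to classify. The sampling rate and recording length are illustrative.

```python
# Split a recording into non-overlapping windows of a candidate length.
import numpy as np

def segment(eeg, fs, window_s):
    win = int(window_s * fs)
    n_seg = eeg.shape[1] // win
    # Trim the tail, then split into (n_segments, n_channels, win)
    return eeg[:, :n_seg * win].reshape(eeg.shape[0], n_seg, win).swapaxes(0, 1)

fs = 256
eeg = np.random.default_rng(0).standard_normal((60, fs * 60))  # 60 ch, 60 s
for window_s in (1, 2, 4, 6, 12):   # a few of the 12 tested sizes
    segs = segment(eeg, fs, window_s)
    print(f"{window_s:>2} s -> {segs.shape[0]} segments of shape {segs.shape[1:]}")
```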


2018 ◽  
Vol 12 (12) ◽  
pp. 2153-2162 ◽  
Author(s):  
Sairamya Nanjappan Jothiraj ◽  
Thomas George Selvaraj ◽  
Balakrishnan Ramasamy ◽  
Narain Ponraj Deivendran ◽  
Subathra M.S.P
