Emotion Recognition of EEG Signals Based on the Ensemble Learning Method: AdaBoost

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Yu Chen ◽  
Rui Chang ◽  
Jifeng Guo

In recent years, with the continuous development of artificial intelligence and brain-computer interface technology, emotion recognition based on physiological signals, especially electroencephalogram (EEG) signals, has become a popular research topic and attracted wide attention. However, extracting effective features from EEG signals and recognizing them accurately with classifiers remains a challenging task. Therefore, in this paper, we propose an emotion recognition method for EEG signals based on the ensemble learning method AdaBoost. First, we extract emotion-related time-domain, time-frequency-domain, and nonlinear features from the preprocessed EEG signals and fuse them into an eigenvector matrix. Then, linear discriminant analysis is used to reduce the dimensionality of the features. Next, we use the optimized feature sets to train a binary classifier based on the ensemble learning method AdaBoost. Finally, the proposed method is tested on the DEAP data set on four emotional dimensions: valence, arousal, dominance, and liking. The proposed method proves effective for emotion recognition, and the best average accuracy reaches 88.70% on the dominance dimension. Compared with other existing methods, the performance of the proposed method is significantly improved.
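The pipeline above (feature fusion, LDA dimensionality reduction, AdaBoost binary classification) can be sketched with scikit-learn. The feature matrix below is random stand-in data with an injected class shift, not real DEAP features:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Random stand-in for the fused eigenvector matrix: 200 trials x 40 features
X = rng.normal(size=(200, 40))
y = rng.integers(0, 2, size=200)   # binary labels, e.g. high/low dominance
X[y == 1] += 0.5                   # inject class separation

# LDA reduces a binary problem to one discriminant dimension,
# then AdaBoost performs the binary classification
pipe = make_pipeline(LinearDiscriminantAnalysis(n_components=1),
                     AdaBoostClassifier(n_estimators=100, random_state=0))
acc = cross_val_score(pipe, X, y, cv=5).mean()
print(f"mean 5-fold accuracy: {acc:.3f}")
```

Putting LDA inside the pipeline keeps the dimensionality reduction fitted on each training fold only, avoiding leakage into the cross-validation estimate.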

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1262
Author(s):  
Fangyao Shen ◽  
Yong Peng ◽  
Wanzeng Kong ◽  
Guojun Dai

Emotion recognition has a wide range of potential applications in the real world. Among emotion recognition data sources, electroencephalography (EEG) signals record neural activity across the human brain, providing a reliable way to recognize emotional states. Most existing EEG-based emotion recognition studies directly concatenate features extracted from all EEG frequency bands for emotion classification. This approach assumes that all frequency bands are equally important by default, yet it cannot always achieve optimal performance. In this paper, we present a novel multi-scale frequency bands ensemble learning (MSFBEL) method to perform emotion recognition from EEG signals. Concretely, we first re-organize all frequency bands into several local scales and one global scale. Then we train a base classifier on each scale. Finally, we fuse the results of all scales with an adaptive weight learning method that automatically assigns larger weights to more important scales, further improving performance. The proposed method is validated on two public data sets. On the “SEED IV” data set, MSFBEL achieves average accuracies of 82.75%, 87.87%, and 78.27% on the three sessions under the within-session experimental paradigm. On the “DEAP” data set, it obtains an average accuracy of 74.22% for four-category classification under 5-fold cross-validation. The experimental results demonstrate that the scale of frequency bands influences the emotion recognition rate, and that the global scale, which directly concatenates all frequency bands, cannot guarantee the best emotion recognition performance. Different scales provide complementary information, and the proposed adaptive weight learning method effectively fuses them to further enhance performance.
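The fusion step can be illustrated with a simplified stand-in for the paper's adaptive weight learning: here, per-scale weights come from a softmax over hypothetical validation accuracies, and the weighted class probabilities of the base classifiers are summed:

```python
import numpy as np

rng = np.random.default_rng(1)
n_scales, n_trials, n_classes = 4, 50, 4
# Hypothetical class-probability outputs of the per-scale base classifiers
probs = rng.random((n_scales, n_trials, n_classes))
probs /= probs.sum(axis=2, keepdims=True)

# Simplified stand-in for adaptive weight learning: softmax over each
# scale's validation accuracy, so better scales get larger fusion weights
val_acc = np.array([0.78, 0.83, 0.75, 0.81])
w = np.exp(val_acc / 0.05)
w /= w.sum()

fused = np.tensordot(w, probs, axes=([0], [0]))  # (n_trials, n_classes)
pred = fused.argmax(axis=1)
```

Because the weights sum to one and each classifier outputs a probability distribution, the fused scores remain valid class probabilities.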


Author(s):  
Murside Degirmenci ◽  
Ebru Sayilgan ◽  
Yalcin Isler

A Brain Computer Interface (BCI) is a system that enables people to communicate with the outside world and control various electronic devices by interpreting brain activity alone (motor movement imagination, emotional state, a focused visual or auditory stimulus, etc.). Visual-stimulation-based recording is one of the most popular electroencephalography (EEG) recording methods. Steady-state visual-evoked potentials (SSVEPs), in which visual objects blink at a fixed frequency, play an important role in BCI applications due to their high signal-to-noise ratio and higher information transfer rate. However, the design of systems with multiple (more than 3) commands in SSVEP-based BCIs is limited, and different approaches have been proposed to overcome this limitation. In this study, a machine learning approach is proposed to determine the stimulation frequency in SSVEP signals. The data set (AVI SSVEP Dataset) is obtained through open access from the internet for simulations. The data set includes EEG signals recorded while subjects looked at stimuli flickering at seven different frequencies (6, 6.5, 7, 7.5, 8.2, 9.3, and 10 Hz). In the machine learning approach, the Wigner-Ville distribution (WVD) is used, and features are extracted from time-frequency (TF) representations of the EEG signals. These features are classified by decision tree, linear discriminant analysis (LDA), k-nearest neighbor (k-NN), support vector machine (SVM), naive Bayes, and ensemble learning classifiers. Simulation results demonstrate that the proposed approach achieves promising accuracy rates for 7-command SSVEP systems; the maximum accuracy of 47.60% is achieved by the ensemble learning classifier.
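A minimal sketch of the frequency-identification idea, using scipy's spectrogram as a stand-in for the Wigner-Ville distribution and synthetic sinusoid-plus-noise trials at the seven AVI stimulus frequencies (the sampling rate and trial length are assumptions); an ensemble classifier, here a random forest, does the 7-class prediction:

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

fs = 256                                   # assumed sampling rate
stim = [6, 6.5, 7, 7.5, 8.2, 9.3, 10]      # AVI SSVEP stimulus frequencies
rng = np.random.default_rng(2)

t = np.arange(fs * 4) / fs                 # 4-second synthetic trials
X, y = [], []
for label, f0 in enumerate(stim):
    for _ in range(20):
        sig = np.sin(2 * np.pi * f0 * t) + rng.normal(scale=2.0, size=t.size)
        # Spectrogram as a stand-in for the Wigner-Ville TF representation;
        # nperseg=512 gives 0.5 Hz bins, enough to resolve 6 vs. 6.5 Hz
        fr, _, S = spectrogram(sig, fs=fs, nperseg=512)
        band = (fr >= 5) & (fr <= 12)      # bins around the stimulus range
        X.append(S[band].mean(axis=1))     # mean TF energy per frequency bin
        y.append(label)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, np.array(X), np.array(y), cv=5).mean()
```

On these clean synthetic trials the classifier does far better than the abstract's 47.60%; real SSVEP recordings are much noisier and less stationary.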


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Ahmet Mert ◽  
Hasan Huseyin Celik

Abstract The feasibility of using time–frequency (TF) ridge estimation on multi-channel electroencephalogram (EEG) signals is investigated for emotion recognition. Multivariate ridge estimation is examined as a way to extract informative components at low computational cost without decreasing the accuracy of valence/arousal recognition. The advanced TF representation technique called the multivariate synchrosqueezing transform (MSST) is used to obtain well-localized components of multi-channel EEG signals. Maximum-energy components in the 2D TF distribution are determined using TF-ridge estimation to extract instantaneous frequency and instantaneous amplitude. Statistical values of the estimated ridges are used as the feature vector fed to machine learning algorithms. Thus, component information in multi-channel EEG signals can be captured and compressed into a low-dimensional space for emotion recognition. Properties of the five maximum-energy ridges in the MSST-based TF distribution in the frequency and energy planes (e.g., mean frequency, frequency deviation, mean energy, and energy deviation over time) are computed to obtain a 20-dimensional feature space. The proposed method is evaluated on the DEAP emotional EEG recordings for benchmarking, and recognition rates reach up to 71.55% and 70.02% for high/low arousal and high/low valence, respectively.
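The ridge-feature idea can be sketched on a toy signal. A plain spectrogram stands in for the MSST, and the signal, sampling rate, and noise level are illustrative assumptions:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 128                                   # DEAP's downsampled EEG rate
rng = np.random.default_rng(3)
t = np.arange(fs * 4) / fs
# Toy stand-in for one EEG channel: a 10 Hz alpha component plus noise
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)

# Spectrogram as a stand-in for the multivariate synchrosqueezing transform
f, tt, S = spectrogram(x, fs=fs, nperseg=128, noverlap=96)

# A ridge is the maximum-energy frequency track over time
ridge = S.argmax(axis=0)
inst_freq = f[ridge]                           # instantaneous frequency
inst_amp = S[ridge, np.arange(S.shape[1])]     # instantaneous amplitude

# Four statistics of one ridge; with the five strongest ridges this
# yields the paper's 20-dimensional feature vector
features = [inst_freq.mean(), inst_freq.std(),
            inst_amp.mean(), inst_amp.std()]
```

For the 10 Hz toy signal the mean instantaneous frequency lands near 10 Hz, which is exactly the compression the method relies on: a whole TF plane reduced to a handful of ridge statistics.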


Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2739 ◽  
Author(s):  
Rami Alazrai ◽  
Rasha Homoud ◽  
Hisham Alwanni ◽  
Mohammad Daoud

Accurate recognition and understanding of human emotions is an essential skill that can improve collaboration between humans and machines. In this vein, electroencephalogram (EEG)-based emotion recognition is an active research field with challenging issues regarding the analysis of nonstationary EEG signals and the extraction of salient features that can achieve accurate emotion recognition. In this paper, an EEG-based emotion recognition approach with a novel time-frequency feature extraction technique is presented. In particular, a quadratic time-frequency distribution (QTFD) is employed to construct a high-resolution time-frequency representation of the EEG signals and capture their spectral variations over time. To reduce the dimensionality of the constructed QTFD-based representation, a set of 13 time- and frequency-domain features is extended to the joint time-frequency domain and employed to quantify the QTFD-based time-frequency representation of the EEG signals. Moreover, to describe different emotion classes, we utilize the 2D arousal-valence plane to develop four emotion labeling schemes, each defining a set of emotion classes. The extracted time-frequency features are used to construct a set of subject-specific support vector machine classifiers that classify each subject's EEG signals into the emotion classes defined by each of the four labeling schemes. The performance of the proposed approach is evaluated using a publicly available EEG dataset, namely the DEAP dataset.
Moreover, we design three performance evaluation analyses, namely channel-based analysis, feature-based analysis, and neutral class exclusion analysis, to quantify how utilizing different groups of EEG channels covering various brain regions, reducing the dimensionality of the extracted time-frequency features, and excluding the EEG signals that correspond to the neutral class affect the capability of the proposed approach to discriminate between emotion classes. The results demonstrate the efficacy of the proposed QTFD-based approach in recognizing different emotion classes. In particular, the average classification accuracies obtained in differentiating between the emotion classes defined by each of the four labeling schemes are within the range of 73.8%–86.2%. Moreover, the emotion classification accuracies achieved by our approach are higher than the results reported in several existing state-of-the-art EEG-based emotion recognition studies.
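The quantification step (joint time-frequency features feeding an SVM) can be sketched as follows. A spectrogram stands in for the QTFD, only three of the thirteen features are shown (mean frequency, bandwidth, joint TF entropy), and the two synthetic classes are an illustrative assumption:

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

fs = 128  # assumed sampling rate
rng = np.random.default_rng(4)

def tf_features(x):
    # Spectrogram as a stand-in for the quadratic TF distribution (QTFD)
    f, _, S = spectrogram(x, fs=fs, nperseg=64)
    p = S / S.sum()
    pf = p.sum(axis=1)                              # frequency marginal
    mean_f = (f * pf).sum()                         # mean frequency
    bw = np.sqrt((((f - mean_f) ** 2) * pf).sum())  # bandwidth
    ent = -(p * np.log(p + 1e-12)).sum()            # joint TF entropy
    return [mean_f, bw, ent]

t = np.arange(fs * 2) / fs
X, y = [], []
for label, freq in ((0, 10), (1, 20)):  # two hypothetical emotion classes
    for _ in range(40):
        x = np.sin(2 * np.pi * freq * t) + 0.5 * rng.normal(size=t.size)
        X.append(tf_features(x))
        y.append(label)

acc = cross_val_score(SVC(), np.array(X), np.array(y), cv=5).mean()
```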


Sensors ◽  
2020 ◽  
Vol 20 (12) ◽  
pp. 3496
Author(s):  
Jiacan Xu ◽  
Hao Zheng ◽  
Jianhui Wang ◽  
Donglin Li ◽  
Xiaoke Fang

Recognition of motor imagery intention is one of the current research focuses of brain-computer interface (BCI) studies. It can help patients with physical dyskinesia convey their movement intentions. In recent years, breakthroughs have been made in recognizing motor imagery tasks using deep learning, but ignoring important features related to motor imagery can degrade the recognition performance of the algorithm. This paper proposes a new deep multi-view feature learning method for classifying motor imagery electroencephalogram (EEG) signals. To obtain more representative motor imagery features, we introduce a multi-view feature representation based on the characteristics of EEG signals and the differences between different features. Different feature extraction methods are used to extract the time-domain, frequency-domain, time-frequency-domain, and spatial features of EEG signals, so that they cooperate and complement one another. Then, a deep restricted Boltzmann machine (RBM) network improved by t-distributed stochastic neighbor embedding (t-SNE) is adopted to learn the multi-view features, removing feature redundancy while taking into account the global characteristics of the multi-view feature sequence, reducing its dimensionality, and enhancing the recognizability of the features. Finally, a support vector machine (SVM) classifies the deep multi-view features. Applying our proposed method to the BCI Competition IV 2a dataset, we obtained excellent classification results. The results show that the deep multi-view feature learning method further improves the classification accuracy of motor imagery tasks.
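The multi-view pipeline can be outlined with scikit-learn. The per-view feature blocks are random stand-ins, and the RBM step is a simplification of the paper's t-SNE-improved deep RBM:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 120
# Hypothetical feature blocks for the four views of each trial
time_view = rng.normal(size=(n, 20))
freq_view = rng.normal(size=(n, 20))
tf_view = rng.normal(size=(n, 30))
spatial_view = rng.normal(size=(n, 6))   # e.g. CSP log-variances
y = rng.integers(0, 2, size=n)
time_view[y == 1] += 1.5                 # inject class separability

# Multi-view representation: concatenate the views
X = np.hstack([time_view, freq_view, tf_view, spatial_view])

# An RBM learns a low-dimensional joint code (the paper refines this
# step with t-SNE); an SVM then classifies the learned features
pipe = make_pipeline(MinMaxScaler(),
                     BernoulliRBM(n_components=16, random_state=0),
                     SVC())
acc = cross_val_score(pipe, X, y, cv=5).mean()
```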


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Dennis Wagner ◽  
Dominik Heider ◽  
Georges Hattab

Abstract Predicting whether a set of mushrooms is edible corresponds to classifying them into two groups—edible or poisonous—on the basis of a classification rule. To support this binary task, we have collected the largest and most comprehensive attribute-based data available. In this work, we detail the creation, curation, and simulation of a data set for binary classification. Thanks to natural language processing, the primary data are based on a textbook for mushroom identification and contain 173 species from 23 families. The secondary data comprise simulated or hypothetical entries that are structurally comparable to the 1987 data and serve as pilot data for classification tasks. We evaluated different machine learning algorithms, namely naive Bayes, logistic regression, linear discriminant analysis (LDA), and random forests (RF). We found that RF provided the best results, with a five-fold cross-validation accuracy and F2-score of 1.0 (μ = 1, σ = 0). The results of our pilot are conclusive and indicate that our data were not linearly separable, unlike the 1987 data, which showed good results using a linear decision boundary with LDA. Our data set contains 23 families and is the largest available. We further provide a fully reproducible workflow and provide the data under the FAIR principles.
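The evaluation protocol (random forest under five-fold cross-validation, scored by accuracy and F2) can be sketched on a synthetic stand-in for the simulated mushroom entries:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the simulated mushroom entries
# (binary target: edible vs. poisonous)
X, y = make_classification(n_samples=500, n_features=20, n_informative=10,
                           random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(rf, X, y, cv=5).mean()
# F2 weights recall over precision, which suits a poisonous-mushroom
# screen where missing a positive is costlier than a false alarm
f2 = cross_val_score(rf, X, y, cv=5,
                     scoring=make_scorer(fbeta_score, beta=2)).mean()
```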


2018 ◽  
Vol 30 (04) ◽  
pp. 1850026 ◽  
Author(s):  
Morteza Zangeneh Soroush ◽  
Keivan Maghooli ◽  
Seyed Kamaledin Setarehdan ◽  
Ali Motie Nasrabadi

These days, emotion recognition has been receiving more attention due to the growth of brain–computer interfaces (BCIs). Moreover, estimating emotions is widely used in fields such as psychology, neuroscience, entertainment, e-learning, etc. This paper aims to classify emotions through EEG signals. When it comes to emotion recognition, participants' opinions toward induced emotions are case-dependent, and thus the corresponding labels may be imprecise and uncertain. Furthermore, it is accepted that mixtures of classifiers lead to higher accuracy and lower uncertainty. This paper introduces new methods, including setting time intervals to process EEG signals, extracting relative values of nonlinear features, and classifying them through the Dempster–Shafer theory (DST) of evidence. In this work, we use EEG signals taken from a reliable database, and the extracted features are classified and combined by DST in order to reduce uncertainty and consequently achieve better results. First, time windows are determined based on signal complexity. Then, nonlinear features are extracted; this paper suggests feature variability over time intervals instead of absolute feature values, and discriminant features are selected using a genetic algorithm (GA). Finally, the data are fed into the classification process, and different classifiers are combined through DST. 10-fold cross-validation is applied, and the results are compared with some basic classifiers. We achieved high classification performance in terms of emotion recognition [Formula: see text]. The results prove that EEG signals can reflect emotional responses of the brain and that the proposed method gives a considerably precise estimation of emotions.
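The DST combination step can be illustrated with Dempster's rule for two classifiers' belief masses. This minimal sketch covers only singleton hypotheses (no unions or "unknown" masses, which full DST also handles):

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the same singleton hypotheses
    via Dempster's rule (unions of hypotheses are omitted here)."""
    conflict = sum(m1[a] * m2[b] for a in m1 for b in m2 if a != b)
    k = 1.0 - conflict          # mass not assigned to conflicting pairs
    return {h: m1[h] * m2[h] / k for h in m1}

# Two classifiers' belief masses over hypothetical emotion classes
m1 = {"positive": 0.7, "negative": 0.3}
m2 = {"positive": 0.6, "negative": 0.4}
combined = dempster_combine(m1, m2)
# Agreement on "positive" is reinforced: 0.42 / (1 - 0.46) ≈ 0.778
```

Because conflicting mass is renormalized away, classifiers that agree reinforce each other, which is the uncertainty-reduction effect the paper exploits.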


Author(s):  
Nikolai Kapralov ◽  
Zhanna Nagornova ◽  
Natalia Shemyakina

The review focuses on the most promising methods for classifying EEG signals for non-invasive BCIs and on theoretical approaches to the successful classification of EEG patterns. The paper provides an overview of articles using Riemannian geometry, deep learning methods, and various options for preprocessing and "clustering" EEG signals, for example, common spatial patterns (CSP). Among other approaches, pre-processing of EEG signals with CSP is often used, both offline and online. The combination of CSP, linear discriminant analysis, a support vector machine, and a neural network (BPNN) made it possible to achieve 91% accuracy for binary classification with exoskeleton control as feedback. There is very little work on the online use of Riemannian geometry, and the best accuracy achieved so far for a binary classification problem is 69.3%. In offline testing, the average percentage of correct classification in the considered articles was 77.5 ± 5.8% for CSP-based approaches, 81.7 ± 4.7% for deep learning networks, and 90.2 ± 6.6% for Riemannian geometry. Thanks to nonlinear transformations, Riemannian geometry-based approaches and complex deep neural networks provide higher accuracy and extract useful information from raw EEG recordings better than the linear CSP transformation. However, in a real-time setup, not only accuracy but also a minimal time delay is important. Therefore, approaches using the CSP transformation and Riemannian geometry with a time delay of less than 500 ms may prove advantageous in the future.
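As context for the CSP preprocessing the review discusses, here is a minimal CSP implementation via a generalized eigendecomposition. The two-class data are a toy construction with class-specific channel variance; channel counts and trial shapes are illustrative:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Common spatial patterns: spatial filters maximizing the variance
    ratio between two classes. trials_*: (n_trials, n_channels, n_samples)."""
    cov_a = np.mean([np.cov(t) for t in trials_a], axis=0)
    cov_b = np.mean([np.cov(t) for t in trials_b], axis=0)
    # Generalized eigenproblem: cov_a w = lambda (cov_a + cov_b) w
    vals, vecs = eigh(cov_a, cov_a + cov_b)
    order = np.argsort(vals)
    idx = np.r_[order[:n_pairs], order[-n_pairs:]]  # both spectrum ends
    return vecs[:, idx].T

rng = np.random.default_rng(7)
n_ch, n_s = 8, 256
# Toy classes: class A has extra variance on channel 0, class B on channel 1
A = rng.normal(size=(30, n_ch, n_s)); A[:, 0] *= 3.0
B = rng.normal(size=(30, n_ch, n_s)); B[:, 1] *= 3.0

W = csp_filters(A, B)
feat = np.log(np.var(W @ A[0], axis=1))  # log-variance CSP features
```

This linear projection is exactly why CSP is cheap online: classifying a new trial only needs a matrix multiply and per-row variances, which keeps the time delay well under the 500 ms the review highlights.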

