A new model for the implementation of positive and negative emotion recognition

2019 ◽  
Author(s):  
Jennifer Sorinas ◽  
Juan C. Fernandez-Troyano ◽  
Mikel Val-Calvo ◽  
Jose Manuel Ferrández ◽  
Eduardo Fernandez

Abstract The wide range of potential applications of affective BCI (aBCI), not only for patients but also for healthy people, makes the need for a commonly accepted protocol for real-time EEG-based emotion recognition increasingly pressing. Using wavelet packets for spectral feature extraction, in keeping with the nature of the EEG signal, we have specified some of the main parameters needed to implement robust positive and negative emotion classification. A sliding window of 12 seconds proved the most appropriate size; from there, a set of 20 target frequency-location variables were proposed as the most relevant features carrying the emotional information. Lastly, QDA and KNN classifiers and a population rating criterion for stimulus labeling were suggested as the most suitable approaches for EEG-based emotion recognition. The proposed model reached a mean accuracy of 98% (s.d. 1.4) and 98.96% (s.d. 1.28) in a subject-dependent approach for the QDA and KNN classifiers, respectively. This new model represents a step forward towards real-time classification. Although results were not conclusive, new insights regarding a subject-independent approach are discussed.
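The pipeline this abstract describes, fixed 12-second windows, spectral features, and a KNN classifier, can be sketched minimally. This is not the authors' implementation: FFT band power stands in for their wavelet-packet features, and the sampling rate, band edges, and k are assumptions the abstract does not supply.

```python
import numpy as np

FS = 128         # assumed sampling rate in Hz; the abstract does not state it
WINDOW_S = 12    # the 12-second window the study found most appropriate

# Illustrative frequency bands; the paper selects 20 specific
# frequency-location features that the abstract does not enumerate.
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def sliding_windows(signal, fs=FS, window_s=WINDOW_S, step_s=WINDOW_S):
    """Split a 1-D EEG channel into fixed-length windows."""
    win, step = int(fs * window_s), int(fs * step_s)
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, step)]

def band_powers(window, fs=FS):
    """Mean spectral power per band (an FFT stand-in for wavelet-packet features)."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(window)) ** 2
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in BANDS.values()])

def knn_predict(train_X, train_y, x, k=3):
    """Minimal k-nearest-neighbour vote over binary labels 0/1."""
    nearest = np.argsort(np.linalg.norm(train_X - x, axis=1))[:k]
    return int(np.round(train_y[nearest].mean()))
```

Each recording would be cut into 12-second windows, each window mapped to a feature vector, and each vector classified against labeled training windows.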

2019 ◽  
Vol 29 (02) ◽  
pp. 1850044 ◽  
Author(s):  
Jennifer Sorinas ◽  
Maria Dolores Grima ◽  
Jose Manuel Ferrandez ◽  
Eduardo Fernandez

The development of suitable EEG-based emotion recognition systems has become a main target of Brain-Computer Interface (BCI) research in recent decades. However, algorithms and procedures for real-time classification of emotions remain scarce. The present study investigates the feasibility of real-time emotion recognition through the selection of parameters such as the appropriate time window segmentation and the target bandwidths and cortical regions. We recorded the EEG neural activity of 24 participants while they watched and listened to an audiovisual database composed of positive and negative emotional video clips. We tested 12 different temporal window sizes, 6 frequency band ranges, and 60 electrodes located across the entire scalp. Our results showed a correct classification of 86.96% for positive stimuli; the correct classification for negative stimuli was slightly lower (80.88%). The best time window size among the tested 1 s to 12 s segments was 12 s. Although more studies are still needed, these preliminary results provide a reliable basis for developing accurate EEG-based emotion classification.
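The parameter search this study describes (12 window sizes by 6 frequency bands, evaluated per electrode) can be sketched as a simple grid. The sampling rate and the band edges here are assumptions, since the abstract counts the bands but does not name them.

```python
import itertools
import numpy as np

FS = 128  # assumed sampling rate in Hz; not given in the abstract

# Six candidate frequency ranges, matching the count tested in the study
# (the exact edges are assumed for illustration).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45), "broad": (1, 45)}
WINDOW_SIZES_S = range(1, 13)  # the twelve tested window lengths, 1 s to 12 s

def segment(signal, fs, window_s):
    """Cut one channel into non-overlapping windows of window_s seconds."""
    win = int(fs * window_s)
    n = len(signal) // win
    return signal[:n * win].reshape(n, win)

def parameter_grid():
    """All (window size, band) combinations a search would score per electrode."""
    return list(itertools.product(WINDOW_SIZES_S, BANDS))
```

A real pipeline would score each (window, band, electrode) combination with a classifier and keep the best-performing configuration.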


2020 ◽  
Author(s):  
Aras Masood Ismael ◽  
Ömer F. Alçin ◽  
Karmand H. Abdalla ◽  
Abdulkadir K. Şengür

Abstract In this paper, a novel approach based on two-step majority voting is proposed for efficient EEG-based emotion classification. Emotion recognition is important for human-machine interaction. Facial-feature- and body-gesture-based approaches have generally been proposed for emotion recognition; recently, EEG-based approaches have become more popular. In the proposed approach, the raw EEG signals are first low-pass filtered for noise removal, and band-pass filters are used for rhythm extraction. For each rhythm, the best-performing EEG channels are determined based on wavelet-based entropy features and fractal-dimension-based features. The k-nearest neighbor (KNN) classifier is used for classification. The best five EEG channels take part in majority voting to obtain the prediction for each EEG rhythm. In the second majority-voting step, the predictions from all rhythms are combined into a final prediction. The DEAP dataset is used in the experiments, with classification accuracy, sensitivity, and specificity as performance evaluation metrics. The experiments classify the emotions into two binary classes: high valence (HV) vs low valence (LV) and high arousal (HA) vs low arousal (LA). They yield 86.3% HV vs LV discrimination accuracy and 85.0% HA vs LA discrimination accuracy. The results are also compared with some existing methods; the comparisons show that the proposed method has potential for EEG-based emotion classification.
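The two-step scheme, a vote across the best five channels within each rhythm followed by a vote across the per-rhythm decisions, reduces to two nested majority votes. A minimal sketch; the rhythm names and labels are placeholders for illustration:

```python
from collections import Counter

def majority(votes):
    """Most common label; ties resolve to the first label encountered."""
    return Counter(votes).most_common(1)[0][0]

def two_step_vote(channel_preds_per_rhythm):
    """Two-step majority voting.

    channel_preds_per_rhythm maps each rhythm to the predictions of its
    best five channels. Step 1 votes within each rhythm; step 2 votes
    across the resulting per-rhythm decisions.
    """
    rhythm_preds = [majority(preds)
                    for preds in channel_preds_per_rhythm.values()]
    return majority(rhythm_preds)
```

With five channel votes per rhythm and an odd number of rhythms, neither voting step can produce an exact tie for binary labels.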


2019 ◽  
Author(s):  
Jacob Israelashvili

Previous research has found that individuals vary greatly in emotion differentiation, that is, the extent to which they distinguish between different emotions when reporting on their own feelings. Building on previous work showing that emotion differentiation is associated with individual differences in intrapersonal functioning, the current study asks whether emotion differentiation is also related to interpersonal skills. Specifically, we examined whether individuals high in emotion differentiation are more accurate in recognizing others’ emotional expressions. We report two studies in which we used an established paradigm tapping negative emotion differentiation and several emotion recognition tasks. In Study 1 (N = 363), we found that individuals high in emotion differentiation were more accurate in recognizing others’ emotional facial expressions. Study 2 (N = 217) replicated this finding using emotion recognition tasks with varying amounts of emotional information. These findings suggest that the knowledge we use to understand our own emotional experience also helps us understand the emotions of others.


2020 ◽  
pp. 1-15
Author(s):  
Wang Wei ◽  
Xinyi Cao ◽  
He Li ◽  
Lingjie Shen ◽  
Yaqin Feng ◽  
...  

Abstract To improve speech emotion recognition, a U-AWED model is proposed based on an acoustic words emotion dictionary (AWED). The method models emotional information at the acoustic-word level across emotion classes. The top-ranked words in each emotion class are selected to generate the AWED vector. Then, the U-AWED model is constructed by combining utterance-level acoustic features with the AWED features. A support vector machine and a convolutional neural network are employed as classifiers in our experiments. The results show that our proposed method provides a significant improvement in unweighted average recall across all four emotion classification tasks.
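The dictionary construction and the AWED vector can be sketched as a bag-of-words count. The `top_n` cutoff and the toy word tokens are assumptions; the abstract does not specify how many top-ranked acoustic words are kept per class.

```python
from collections import Counter

def build_awed(word_lists_by_emotion, top_n=3):
    """Keep the top_n most frequent acoustic words per emotion class
    to form the dictionary (top_n is an assumed parameter)."""
    return {emo: [w for w, _ in Counter(words).most_common(top_n)]
            for emo, words in word_lists_by_emotion.items()}

def awed_vector(utterance_words, awed):
    """Per emotion class (in sorted order), count how many of the
    utterance's acoustic words appear in that class's dictionary."""
    return [sum(1 for w in utterance_words if w in set(words))
            for _, words in sorted(awed.items())]
```

In the full model, this vector would be concatenated with utterance-level acoustic features before classification.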



2021 ◽  
Vol 7 ◽  
pp. e786
Author(s):  
Vaibhav Bhat ◽  
Anita Yadav ◽  
Sonal Yadav ◽  
Dhivya Chandrasekaran ◽  
Vijay Mago

Emotion recognition in conversations is an important step in various virtual chatbots that require opinion-based feedback, as in social media threads, online support, and many other applications. Current emotion-recognition-in-conversation models face issues such as (a) loss of contextual information between two dialogues of a conversation, (b) failure to give appropriate importance to significant tokens in each utterance, and (c) inability to pass on emotional information from previous utterances. The proposed Advanced Contextual Feature Extraction (AdCOFE) model addresses these issues by performing unique feature extraction using knowledge graphs, sentiment lexicons, and natural-language phrases at all levels (word and position embedding) of the utterances. Experiments on emotion-recognition-in-conversation datasets show that AdCOFE is beneficial in capturing emotions in conversations.


2014 ◽  
Vol 25 (4) ◽  
pp. 279-287 ◽  
Author(s):  
Stefan Hey ◽  
Panagiota Anastasopoulou ◽  
André Bideaux ◽  
Wilhelm Stork

Ambulatory assessment of emotional states, as well as of psychophysiological, cognitive, and behavioral reactions, is an approach increasingly used in psychological research. Owing to new developments in information and communication technologies and to improved mobile physiological sensors, various new systems have been introduced. Experience sampling methods make it possible to assess dynamic changes of subjective evaluations in real time, and new sensor technologies permit the measurement of physiological responses. In addition, new technologies facilitate the interactive assessment of subjective, physiological, and behavioral data in real time. Here, we describe these recent developments from the perspective of engineering science and discuss potential applications in the field of neuropsychology.


2021 ◽  
Vol 1757 (1) ◽  
pp. 012021
Author(s):  
Yuqiong Wang ◽  
Zehui Zhao ◽  
Zhiwei Huang

2021 ◽  
Vol 1827 (1) ◽  
pp. 012130
Author(s):  
Qi Li ◽  
Yun Qing Liu ◽  
Yue Qi Peng ◽  
Cong Liu ◽  
Jun Shi ◽  
...  
