Investigating EEG Patterns for Dual-Stimuli Induced Human Fear Emotional State

Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 522 ◽  
Author(s):  
Naveen Masood ◽  
Humera Farooq

Most electroencephalography (EEG) based emotion recognition systems make use of videos and images as stimuli. Few use sounds, and even fewer involve self-induced emotions. Furthermore, most studies rely on a single stimulus to evoke emotions. The question of “whether different stimuli for the same emotion elicitation generate any subject-independent correlations” remains unanswered. This paper introduces a dual-modality emotion elicitation paradigm to investigate whether emotions induced with different stimuli can be classified. A method based on common spatial pattern (CSP) and linear discriminant analysis (LDA) is proposed to analyze human brain signals for fear emotions evoked with two different stimuli. Self-induced emotional imagery is one of the considered stimuli, while audio/video clips are used as the other. Features are extracted with the CSP algorithm, and LDA performs the classification. To investigate the associated EEG correlations, a spectral analysis was performed. To further improve performance, CSP was compared with other regularized techniques. Critical EEG channels are identified based on spatial filter weights. To the best of our knowledge, our work provides the first assessment of EEG correlations in the case of self- versus video-induced emotions captured with a commercial-grade EEG device.
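As an illustration of the kind of CSP-plus-LDA pipeline the abstract describes, the sketch below computes CSP spatial filters and log-variance features for two classes of trials and classifies them with a minimal Fisher LDA. The synthetic data, channel count, and filter settings are illustrative assumptions, not the authors' actual setup.

```python
import numpy as np

def csp_filters(X1, X2, n_filters=4):
    """CSP spatial filters for two classes of EEG trials.
    X1, X2: arrays of shape (trials, channels, samples)."""
    def mean_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)
    C1, C2 = mean_cov(X1), mean_cov(X2)
    # Whiten the composite covariance, then diagonalize class 1 in the
    # whitened space -- equivalent to the generalized eigenproblem of CSP.
    d, U = np.linalg.eigh(C1 + C2)
    P = U @ np.diag(1.0 / np.sqrt(d)) @ U.T
    _, V = np.linalg.eigh(P @ C1 @ P)
    W = (P @ V).T  # rows are filters, sorted by eigenvalue (ascending)
    half = n_filters // 2
    return np.vstack([W[:half], W[-half:]])  # both ends of the spectrum

def csp_features(W, X):
    """Normalized log-variance of spatially filtered trials."""
    Z = np.einsum('fc,tcs->tfs', W, X)
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Synthetic two-class data standing in for fear vs. neutral EEG trials.
rng = np.random.default_rng(0)
X1 = rng.standard_normal((30, 8, 256))
X2 = rng.standard_normal((30, 8, 256))
X2[:, 0, :] *= 3.0  # class 2 carries extra variance on channel 0

W = csp_filters(X1, X2)
F = np.vstack([csp_features(W, X1), csp_features(W, X2)])
y = np.array([0] * 30 + [1] * 30)

# Minimal two-class Fisher LDA on the CSP features.
m0, m1 = F[y == 0].mean(axis=0), F[y == 1].mean(axis=0)
Sw = np.cov(F[y == 0].T) + np.cov(F[y == 1].T)
w = np.linalg.solve(Sw, m1 - m0)
pred = (F @ w > w @ (m0 + m1) / 2).astype(int)
accuracy = float((pred == y).mean())
```

On this toy data the classes separate cleanly on the filtered channel; in practice, accuracy would be estimated by cross-validation, and critical channels would be read off the large entries of `W`, as the paper does.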

Author(s):  
І. Chaykovskyi ◽  
V. Kalnysh ◽  
Т. Yena ◽  
А. Yena ◽  
Yu. Vyrovyi ◽  
...  

This paper presents the development of a method for evaluating a person's emotional state, suitable for use at the workplace, based on analysis of heart rate (HR) variability. Twenty-eight healthy volunteers were examined. Three audiovisual clips were presented consecutively on a personal computer display to each of them. One clip contained information evoking positive emotions, the second negative emotions, and the third was neutral. All possible pairs of emotional states were analysed with one- and multi-dimensional linear discriminant analysis based on HR variability. Showing the emotional video clips (of either valence) causes a reliable slowing of HR and some decrease in HR variability. In addition, negative emotions cause a regularization and simplification of the structural organization of the heart rhythm. Discrimination accuracy was 98% for the pair “emotional – neutral” video clips, 74% for the pair “rest – neutral”, and 91% for the pair “positive – negative”. Analysis of HR variability makes it possible to determine the emotional state of an observed person at the workplace with high reliability.
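The time-domain HR-variability measures underlying such an analysis can be sketched as follows. The RR-interval series are invented stand-ins chosen to mirror the reported effect (slower heart rate and reduced variability under emotional clips), not data from the study.

```python
import math

def hrv_features(rr_ms):
    """Time-domain HRV measures from a list of RR intervals (ms)."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    mean_hr = 60000.0 / mean_rr                      # beats per minute
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / (n - 1))
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {"mean_hr": mean_hr, "sdnn": sdnn, "rmssd": rmssd}

# Hypothetical RR series: "neutral" (faster, more variable) vs.
# "emotional" (slower, reduced variability).
neutral = [800, 820, 790, 810, 830, 795, 805, 825]
emotional = [900, 905, 898, 902, 899, 903, 901, 897]

f_neu, f_emo = hrv_features(neutral), hrv_features(emotional)
```

Feature vectors like these (per subject, per clip) would then be fed to the one- or multi-dimensional discriminant analysis the study uses.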


2013 ◽  
Vol 459 ◽  
pp. 228-231 ◽  
Author(s):  
Hao Yang ◽  
Song Wu

Electroencephalogram (EEG) is generally used in brain-computer interface (BCI) applications to measure brain signals. However, multichannel EEG signals characterized by unrelated and redundant features will deteriorate classification accuracy. This paper presents a method based on common spatial pattern (CSP) for feature extraction and a support vector machine with a genetic algorithm (SVM-GA) as the classifier, where the GA is used to optimize the kernel parameter settings. The proposed algorithm is evaluated on dataset IVa of BCI Competition III. Results show that the proposed method outperforms conventional linear discriminant analysis (LDA) in average classification performance.
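A genetic algorithm over SVM kernel parameters can be sketched as below. To keep the example self-contained, the fitness function is a smooth surrogate with a known optimum near (C, gamma) = (10, 0.01); in the paper's setting it would instead be the cross-validated SVM accuracy for those kernel parameters.

```python
import random

def evolve(fitness, bounds, pop_size=20, generations=30, seed=1):
    """Tiny real-coded GA: elitism, blend crossover, Gaussian mutation.
    `bounds` is a list of (lo, hi) per parameter."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]          # crossover
            child = [min(max(g + rng.gauss(0, 0.1 * (hi - lo)), lo), hi)
                     for g, (lo, hi) in zip(child, bounds)]      # mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Stand-in fitness: a surrogate for cross-validated SVM accuracy as a
# function of the RBF-kernel parameters (C, gamma).
def surrogate(ind):
    C, gamma = ind
    return -((C - 10.0) ** 2 / 100.0 + (gamma - 0.01) ** 2 * 1e4)

best = evolve(surrogate, bounds=[(0.1, 100.0), (1e-4, 1.0)])
```

Replacing `surrogate` with a function that trains an SVM and returns its cross-validation accuracy recovers the SVM-GA scheme the abstract describes.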


Sensors ◽  
2019 ◽  
Vol 19 (10) ◽  
pp. 2218 ◽  
Author(s):  
Sharifa Alghowinem ◽  
Roland Goecke ◽  
Michael Wagner ◽  
Areej Alwabil

With the advancement of technology in both hardware and software, estimating human affective states has become possible. Currently, movie clips are widely used because they are an accepted method of eliciting emotions in a replicable way. However, cultural differences might influence the effectiveness of some video clips in eliciting the target emotions. In this paper, we describe several sensors and techniques to measure, validate and investigate the relationship between cultural acceptance and eliciting universal expressions of affect using movie clips. For emotion elicitation, a standardised list of English-language clips, as well as an initial set of Arabic video clips, are used for comparison. For validation, bio-signal devices measure the physiological and behavioural responses associated with emotional stimuli. Physiological and behavioural responses were measured from 29 subjects of Arabic background while watching the selected clips. For classification of the six emotions, a multiclass (six-class) SVM classifier using the physiological and behavioural measures as input achieves a higher recognition rate for emotions elicited by the Arabic video clips (avg. 60%) than by the English video clips (avg. 52%). These results suggest that using video clips from the subjects’ culture is more likely to elicit the target emotions. Besides measuring the physiological and behavioural responses, an online survey was carried out to evaluate the effectiveness of the selected video clips in eliciting the target emotions. The online survey, with on average 220 respondents per clip, supported the findings.


Author(s):  
Negin Manshouri ◽  
Mesut Melek ◽  
Temel Kayikcioglu

Despite the long and extensive history of 3D technology, it has only recently attracted the attention of researchers. The technology has become a center of interest for young people because of the realistic feelings and sensations it creates. People see their environment in 3D because of the structure of their eyes. In this study, it is hypothesized that people lose their perception of depth during sleepy moments and that there is a sudden transition from 3D to 2D vision. Regarding these transitions, EEG signal analysis was used for a deep and comprehensive analysis of 2D and 3D brain signals. A single-stream anaglyph video of random 2D and 3D segments was prepared. After subjects watched this video, the obtained EEG recordings were considered for two different analyses: the part involving the critical transition (transition state) and the analysis of only the 2D versus 3D or 3D versus 2D parts (steady state). The main objective of this study is to observe the behavioral changes of brain signals in 2D and 3D transitions. To clarify the impact on the human brain’s power spectral density (PSD) of 2D-to-3D (2D_3D) and 3D-to-2D (3D_2D) transitions of anaglyph video, nine visually healthy individuals were tested in this pioneering study. Spectrogram graphs based on the short-time Fourier transform (STFT) were used to evaluate the power spectrum in each EEG channel for the transition or steady state. Thus, in 2D and 3D transition scenarios, important channels representing EEG frequency bands and brain lobes were identified. To classify the 2D and 3D transitions, the dominant bands and time intervals representing the maximum difference in PSD were selected. Afterward, effective features were selected by applying statistical methods such as standard deviation (SD), maximum (max), and Hjorth parameters to epochs indicating transition intervals. Ultimately, k-nearest neighbors (k-NN), support vector machine (SVM), and linear discriminant analysis (LDA) algorithms were applied to classify 2D_3D and 3D_2D transitions. The frontal, temporal, and partially the parietal lobes show 2D_3D and 3D_2D transitions with a good classification success rate. Overall, it was found that Hjorth parameters and the LDA algorithm achieve 71.11% and 77.78% classification success rates for the transition and steady states, respectively.
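Of the statistical features listed, the Hjorth parameters are the least standard; a minimal implementation is sketched below on synthetic signals: a clean 10 Hz sinusoid versus the same sinusoid with added noise. The signals are illustrative, not the study's EEG epochs.

```python
import numpy as np

def hjorth(x):
    """Hjorth activity, mobility, and complexity of a 1-D signal."""
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)                              # signal power
    mobility = np.sqrt(np.var(dx) / np.var(x))        # mean frequency proxy
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

t = np.linspace(0, 1, 512, endpoint=False)
clean = np.sin(2 * np.pi * 10 * t)
noisy = clean + 0.5 * np.random.default_rng(0).standard_normal(512)

a_c, m_c, c_c = hjorth(clean)   # pure sinusoid: complexity near 1
a_n, m_n, c_n = hjorth(noisy)   # added noise raises mobility and complexity
```

In the study's pipeline, such values would be computed per EEG epoch over the transition intervals and passed to k-NN, SVM, or LDA.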


Author(s):  
Petr Kremen ◽  
Miroslav Blaško ◽  
Zdenek Kouba

Compared to traditional ways of annotating multimedia resources (textual documents, photographs, audio/video clips, etc.) with keywords in the form of text fragments, semantic annotations are based on tagging such multimedia resources with the meaning of the objects (such as cultural/historical artifacts) that the resource deals with. Searching for multimedia resources stored in a repository enriched with semantic annotations makes use of an appropriate reasoning algorithm. The knowledge management and Semantic Web communities have developed a number of relevant formalisms and methods. This chapter is motivated by practical experience with authoring semantic annotations of cultural-heritage-related resources/objects. With this experience in mind, the chapter compares various knowledge representation techniques, such as frame-based formalisms, RDF(S), and description-logics-based formalisms, from the viewpoint of their appropriateness for resource annotation and their ability to automatically support the semantic annotation process through advanced inference services, such as error explanation and expressive construct modeling, namely n-ary relations.
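The contrast between keyword tagging and semantic annotation can be made concrete with a toy triple-based example. The resource names, predicates, and one-step subclass lookup below are purely illustrative; a real system would use RDF(S) or OWL with a proper reasoner.

```python
# Keyword annotation: opaque strings attached to a resource; a query for
# "LiturgicalObject" would find nothing here.
keyword_tags = {"photo_0042.jpg": ["chalice", "baroque", "silver"]}

# Semantic annotation: subject-predicate-object triples linking the
# resource to typed domain objects, so structured queries become possible.
triples = {
    ("photo_0042.jpg", "depicts", "artifact_17"),
    ("artifact_17", "type", "Chalice"),
    ("artifact_17", "period", "Baroque"),
    ("artifact_17", "material", "Silver"),
    ("Chalice", "subClassOf", "LiturgicalObject"),
}

def photos_depicting(cls):
    """Resources whose depicted object has class `cls`, directly or via a
    subClassOf edge (a one-step stand-in for real subsumption reasoning)."""
    instances = {s for (s, p, o) in triples if p == "type" and o == cls}
    for sub in {s for (s, p, o) in triples if p == "subClassOf" and o == cls}:
        instances |= {s for (s, p, o) in triples if p == "type" and o == sub}
    return {s for (s, p, o) in triples
            if p == "depicts" and o in instances}

photos = photos_depicting("LiturgicalObject")
```

The subclass inference in `photos_depicting` is the kind of service that the formalisms compared in the chapter provide in a general, sound way.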



2019 ◽  
Vol 2019 ◽  
pp. 1-10 ◽  
Author(s):  
Jian Kui Feng ◽  
Jing Jin ◽  
Ian Daly ◽  
Jiale Zhou ◽  
Yugang Niu ◽  
...  

Background. Due to the redundant information contained in multichannel electroencephalogram (EEG) signals, the classification accuracy of brain-computer interface (BCI) systems may deteriorate to a large extent. Channel selection methods can help to remove task-independent EEG signals and hence improve the performance of BCI systems. However, the brain areas associated with motor imagery are not exactly the same in different frequency bands, so traditional channel selection methods may fail to extract effective EEG features. New Method. To address this problem, this paper proposes a novel method based on common spatial pattern (CSP) rank channel selection for multifrequency-band EEG (CSP-R-MF). It combines multiband signal decomposition filtering with CSP-rank channel selection to select significant channels, and linear discriminant analysis (LDA) is then used to calculate the classification accuracy. Results. The results showed that the proposed CSP-R-MF method significantly improves the average classification accuracy compared with the CSP-rank channel selection method alone.
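The channel-ranking step can be sketched as follows: channels are scored by the magnitude of their CSP spatial-filter weights in each frequency band, and the per-band selections are pooled. The filter matrices and the dominant channels below are invented for illustration.

```python
import numpy as np

def rank_channels(W, n_keep):
    """Rank channels by their largest absolute weight across the CSP
    spatial filters W (n_filters x n_channels); keep the top n_keep."""
    scores = np.abs(W).max(axis=0)
    return list(np.argsort(scores)[::-1][:n_keep])

# Hypothetical filters for two frequency bands of an 8-channel montage;
# in CSP-R-MF each band is filtered and decomposed separately, and the
# selected channels are pooled across bands.
rng = np.random.default_rng(3)
W_mu = rng.standard_normal((4, 8)) * 0.1
W_beta = rng.standard_normal((4, 8)) * 0.1
W_mu[:, 2] = 1.0    # channel 2 dominates in the mu band
W_beta[:, 5] = 1.0  # channel 5 dominates in the beta band

selected = set(rank_channels(W_mu, 2)) | set(rank_channels(W_beta, 2))
```

Pooling across bands is exactly what lets the method keep a channel that matters only in one band, which a single-band ranking would miss.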


Proceedings ◽  
2020 ◽  
Vol 54 (1) ◽  
pp. 53
Author(s):  
Francisco Laport ◽  
Paula M. Castro ◽  
Adriana Dapena ◽  
Francisco J. Vazquez-Araujo ◽  
Daniel Iglesia

A comparison of different machine learning techniques for eye state identification through Electroencephalography (EEG) signals is presented in this paper. (1) Background: We extend our previous work by studying several techniques for the extraction of the features corresponding to the mental states of open and closed eyes and their subsequent classification; (2) Methods: A prototype developed by the authors is used to capture the brain signals. We consider the Discrete Fourier Transform (DFT) and the Discrete Wavelet Transform (DWT) for feature extraction; Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM) for state classification; and Independent Component Analysis (ICA) for preprocessing the data; (3) Results: The results obtained from some subjects show the good performance of the proposed methods; and (4) Conclusion: The combination of several techniques allows us to obtain a high accuracy of eye identification.
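A minimal DFT-based feature of the kind used here is the alpha-band (8-13 Hz) power, which typically rises when the eyes close. The sketch below computes it on synthetic signals; the sampling rate, amplitudes, and noise level are illustrative assumptions, not the prototype's actual characteristics.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean power in the [lo, hi] Hz band via the DFT of the signal."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return float(spectrum[mask].mean())

fs = 128
t = np.arange(fs * 2) / fs
rng = np.random.default_rng(0)
# Synthetic stand-ins: eyes-closed EEG with a strong 10 Hz alpha rhythm,
# eyes-open with alpha attenuated; noise level equal in both.
closed = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
opened = 0.3 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

alpha_closed = band_power(closed, fs, 8, 13)
alpha_open = band_power(opened, fs, 8, 13)
```

Here a simple threshold on alpha power would already separate the two states; the paper instead feeds such DFT (or DWT) features, after ICA preprocessing, to LDA or SVM classifiers.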


Author(s):  
Jingxia Chen ◽  
Dongmei Jiang ◽  
Yanning Zhang

To effectively reduce the day-to-day fluctuations and differences in subjects’ electroencephalogram (EEG) signals and to improve the accuracy and stability of EEG emotion classification, a new EEG feature extraction method based on common spatial pattern (CSP) and wavelet packet decomposition (WPD) is proposed. For five days of emotion-related EEG data from 12 subjects, the CSP algorithm is first used to project the raw EEG data into an optimal subspace, extracting discriminative features by maximizing the Kullback-Leibler (KL) divergence between the two categories of EEG data. The WPD algorithm is then used to decompose the EEG signals into the related time-frequency-domain features. Finally, four state-of-the-art classifiers, including bagging tree, SVM, linear discriminant analysis, and Bayesian linear discriminant analysis, are used for binary emotion classification. The experimental results show that with CSP spatial filtering, emotion classification on the WPD features extracted with the bior3.3 wavelet base achieves the best accuracy of 0.862. This is 29.3% higher than the power spectral density (PSD) feature without CSP preprocessing, 23% higher than the PSD feature with CSP preprocessing, 1.9% higher than the WPD feature extracted with the bior3.3 wavelet base without CSP preprocessing, and 3.2% higher than the WPD feature extracted with the rbio6.8 wavelet base without CSP preprocessing. The proposed method effectively reduces the variance and non-stationarity of cross-day EEG signals, extracts emotion-related features, and improves the accuracy and stability of cross-day EEG emotion classification. It is valuable for the development of robust emotional brain-computer interface applications.
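The WPD step here uses biorthogonal wavelets (bior3.3, rbio6.8), typically via a library such as PyWavelets; to keep this sketch dependency-free, a one-filter Haar wavelet packet decomposition is shown instead. The key point is that a wavelet packet transform splits both the approximation and the detail branches at every level.

```python
import numpy as np

def haar_split(x):
    """One level of a Haar transform: approximation (low-pass) and
    detail (high-pass) coefficients; orthonormal, so energy-preserving."""
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def wavelet_packet(x, depth):
    """Full wavelet-packet tree: split BOTH branches at every level,
    unlike the plain wavelet transform, which only splits approximations."""
    nodes = [x]
    for _ in range(depth):
        nodes = [half for node in nodes for half in haar_split(node)]
    return nodes  # 2**depth subbands (natural order)

x = np.sin(2 * np.pi * 5 * np.arange(64) / 64)
subbands = wavelet_packet(x, depth=3)
energies = [float(np.sum(b ** 2)) for b in subbands]  # subband features
```

Per-subband energies (or coefficients) computed on CSP-filtered trials are the kind of time-frequency features the proposed method feeds to its classifiers.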


2011 ◽  
Vol 2011 ◽  
pp. 1-9 ◽  
Author(s):  
Dieter Devlaminck ◽  
Bart Wyns ◽  
Moritz Grosse-Wentrup ◽  
Georges Otte ◽  
Patrick Santens

Motor-imagery-based brain-computer interfaces (BCIs) commonly use the common spatial pattern (CSP) filter as a preprocessing step before feature extraction and classification. The CSP method is a supervised algorithm and therefore needs subject-specific training data for calibration, which is very time-consuming to collect. In order to reduce the amount of calibration data needed for a new subject, multitask (from now on called multisubject) machine learning techniques can be applied to the preprocessing phase. Here, the goal of multisubject learning is to learn a spatial filter for a new subject based on that subject's own data and the data of other subjects. This paper outlines the details of the multitask CSP algorithm and shows results on two data sets. In certain subjects a clear improvement can be seen, especially when the number of training trials is relatively low.
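One simple way to borrow data from other subjects, in the spirit of the multisubject approach described here (though not the authors' exact formulation), is to shrink the new subject's noisy class-covariance estimate toward a pool of other subjects' estimates before computing CSP:

```python
import numpy as np

def shrunk_covariance(C_subject, C_others, lam):
    """Shrink a new subject's class covariance toward the pooled
    covariance of previously recorded subjects (0 <= lam <= 1)."""
    C_pooled = np.mean(C_others, axis=0)
    return (1 - lam) * C_subject + lam * C_pooled

# Hypothetical 4-channel covariances: a noisy estimate from few
# calibration trials for the new subject, plus estimates from three
# other subjects that are closer to the shared "true" covariance.
rng = np.random.default_rng(7)
true_C = np.diag([1.0, 2.0, 3.0, 4.0])

def noisy_cov(scale):
    A = rng.standard_normal((4, 4)) * scale
    return true_C + (A + A.T) / 2  # symmetric perturbation

C_new = noisy_cov(0.8)                        # poor estimate (few trials)
C_others = [noisy_cov(0.2) for _ in range(3)]  # better-calibrated subjects

C_reg = shrunk_covariance(C_new, C_others, lam=0.5)
err_raw = float(np.linalg.norm(C_new - true_C))
err_reg = float(np.linalg.norm(C_reg - true_C))
```

The shrinkage weight `lam` trades the subject-specific estimate against the pooled prior and would normally be chosen by cross-validation; the resulting covariances feed directly into the standard CSP computation.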

