Multiclass Motor Imagery Recognition of Single Joint in Upper Limb Based on NSGA-II OVO TWSVM

2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Shan Guan ◽  
Kai Zhao ◽  
Fuwang Wang

In the study of brain–computer interface (BCI) systems, electroencephalogram (EEG) signals induced by different movements of the same joint are hard to distinguish. This paper proposes a novel scheme, AF-CSP, which combines the amplitude-frequency (AF) information of intrinsic mode functions (IMFs) with the common spatial pattern (CSP) to extract motor imagery (MI) features. To improve classification performance, the second-generation nondominated sorting genetic algorithm (NSGA-II) is used to tune the hyperparameters of linear- and nonlinear-kernel one-versus-one twin support vector machines (OVO TWSVM). On our dataset, this model outperforms the least squares support vector machine (LS-SVM), back propagation (BP), extreme learning machine (ELM), particle swarm optimization SVM (PSO-SVM), and grid search OVO TWSVM (GS OVO TWSVM), improving recognition accuracy by 5.92%, 22.44%, 22.65%, 8.69%, and 5.75%, respectively. The proposed method helps achieve higher accuracy in BCI systems.
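Several abstracts in this listing rely on the common spatial pattern (CSP) step for MI feature extraction. A minimal two-class CSP sketch in NumPy/SciPy might look like the following; the channel count, trial shapes, and random data are illustrative assumptions, not any paper's actual dataset or pipeline:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=4):
    """Compute CSP spatial filters from two classes of EEG trials.

    trials_* : arrays of shape (n_trials, n_channels, n_samples).
    Returns an (n_filters, n_channels) projection matrix."""
    def mean_cov(trials):
        c = np.mean([np.cov(t) for t in trials], axis=0)
        return c / np.trace(c)  # normalize scale per class

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem: ca w = lambda (ca + cb) w
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    # Filters from both ends of the spectrum are the most discriminative
    pick = np.concatenate([order[: n_filters // 2], order[-(n_filters // 2):]])
    return vecs[:, pick].T

def csp_features(trial, W):
    """Normalized log-variance features of a spatially filtered trial."""
    var = (W @ trial).var(axis=1)
    return np.log(var / var.sum())

rng = np.random.default_rng(0)
a = rng.standard_normal((20, 8, 128))        # fake class-1 trials
b = 2.0 * rng.standard_normal((20, 8, 128))  # fake class-2 trials
W = csp_filters(a, b)
f = csp_features(a[0], W)
```

The log-variance features `f` would then feed a classifier such as the TWSVM or SVM variants compared above.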

2019 ◽  
Vol 9 (12) ◽  
pp. 372
Author(s):  
Mustafa Yazici ◽  
Mustafa Ulutas ◽  
Mukadder Okuyan

Brain–computer interface (BCI) is a technology used to convert brain signals into commands for external devices. Researchers have designed and built many interfaces and applications in the last couple of decades. BCI is used for prevention, detection, diagnosis, rehabilitation, and restoration in healthcare. EEG signals are analyzed in this paper to help paralyzed people in rehabilitation. The electroencephalogram (EEG) signals recorded from five healthy subjects are used in this study. The sensor-level EEG signals are converted to source signals by solving the inverse problem. The cortical sources are then calculated using the sLORETA method at nine regions marked by a neurophysiologist. Features are extracted from the cortical sources using the common spatial pattern (CSP) method and classified by a support vector machine (SVM). Both the sensor signals and the computed cortical signals corresponding to motor imagery of the hand and foot are used to train the SVM. Signals outside the training set are then used to test the classification performance of the classifier. Activity band-pass filtered to 0.1–30 Hz and to the mu rhythm band is also analyzed. Classification performance and recognition of the imagery improved up to 100% under some conditions at the cortical level. The cortical source signals in the regions contributing to motor commands are investigated and used to improve the classification of motor imagery.
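The abstract above mentions band-pass filtering to 0.1–30 Hz and to the mu rhythm band before classification. A minimal illustration of such zero-phase filtering with SciPy (the 250 Hz sampling rate and synthetic 10 Hz signal are made-up assumptions, not the study's recordings) could look like:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(data, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass, e.g. the mu rhythm band (8-13 Hz)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, data, axis=-1)  # forward-backward: no phase lag

fs = 250
t = np.arange(fs * 2) / fs
# A 10 Hz mu-band component buried in broadband noise
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(1).standard_normal(t.size)
mu = bandpass(x, 8, 13, fs)
```

Zero-phase filtering (`filtfilt`) is the usual choice offline because it avoids shifting the EEG features in time; a causal filter would be needed online.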


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7309
Author(s):  
Junhyuk Choi ◽  
Keun Tae Kim ◽  
Ji Hyeok Jeong ◽  
Laehyun Kim ◽  
Song Joo Lee ◽  
...  

This study aimed to develop an intuitive gait-related motor imagery (MI)-based hybrid brain-computer interface (BCI) controller for a lower-limb exoskeleton and to investigate the feasibility of the controller in a practical scenario including stand-up, gait-forward, and sit-down. A filter bank common spatial pattern (FBCSP) and mutual information-based best individual feature (MIBIF) selection were used to decode MI electroencephalogram (EEG) signals and extract a feature matrix as input to a support vector machine (SVM) classifier. A successive eye-blink switch was combined sequentially with the EEG decoder to operate the lower-limb exoskeleton. Ten subjects demonstrated more than 80% accuracy in both offline (training) and online sessions. All subjects successfully completed the gait task wearing the lower-limb exoskeleton driven by the developed real-time BCI controller. The BCI controller achieved a time ratio of 1.45 compared with a manual smartwatch controller. The developed system can potentially benefit people with neurological disorders who may have difficulty operating manual controls.
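The mutual-information-based feature selection step described above can be sketched with scikit-learn; note that `mutual_info_classif` with `SelectKBest` is a generic stand-in for the paper's MIBIF procedure, and the data, feature count, and `k` are arbitrary illustrative choices:

```python
import numpy as np
from functools import partial
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))            # 20 candidate features per trial
y = (X[:, 3] + X[:, 7] > 0).astype(int)       # only features 3 and 7 are informative

clf = make_pipeline(
    # Keep the k individually best features by estimated mutual information
    SelectKBest(partial(mutual_info_classif, random_state=0), k=4),
    SVC(kernel="linear"),
)
clf.fit(X, y)
acc = clf.score(X, y)
```

Ranking features by their individual mutual information with the class label, as here, mirrors the "best individual feature" idea: each feature is scored on its own rather than jointly.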


2020 ◽  
Vol 91 (3) ◽  
pp. 034106 ◽  
Author(s):  
Fei Wang ◽  
Zongfeng Xu ◽  
Weiwei Zhang ◽  
Shichao Wu ◽  
Yahui Zhang ◽  
...  

2014 ◽  
Vol 644-650 ◽  
pp. 1640-1643
Author(s):  
Xiao Peng Hua ◽  
Xian Feng Li

Twin support vector machine (TWSVM), a variant of the generalized eigenvalue proximal support vector machine (GEPSVM), attempts to improve the generalization of GEPSVM. Its solution follows from solving two quadratic programming problems (QPPs), each of which is smaller than the single QPP in a standard SVM. Unfortunately, TWSVM preserves only the global data structure and fails to fully consider the local geometric structure and the local underlying discriminant information inside the samples, both of which may be important for classification performance. In this paper, a novel TWSVM with manifold regularization is proposed by introducing the basic idea of the locality preserving within-class scatter matrix (LPWSM) into TWSVM. We term this method manifold TWSVM (MTWSVM). MTWSVM not only retains the superior characteristics of TWSVM but also preserves the local geometric structure between samples and exploits the local underlying discriminant information. Experimental results confirm the effectiveness of our method.
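To make the twin-hyperplane idea concrete: TWSVM fits one hyperplane close to each class and far from the other, then assigns a test point to the class of the nearer plane. The sketch below uses the least-squares TWSVM variant, which replaces the two QPPs with closed-form linear solves; it is an illustration of the twin-plane principle, not the manifold-regularized MTWSVM of this paper, and the toy data are invented:

```python
import numpy as np

def lstsvm_fit(A, B, c1=1.0, c2=1.0):
    """Least-squares twin SVM: one hyperplane per class, each close to its
    own class and pushed unit distance from the other class."""
    H = np.hstack([A, np.ones((len(A), 1))])  # class +1 samples, bias-augmented
    G = np.hstack([B, np.ones((len(B), 1))])  # class -1 samples
    e1, e2 = np.ones(len(A)), np.ones(len(B))
    reg = 1e-6 * np.eye(H.shape[1])           # small ridge for numerical stability
    # Plane 1: min 0.5||H u||^2 + (c1/2)||G u + e2||^2  (closed form)
    u1 = -np.linalg.solve(H.T @ H / c1 + G.T @ G + reg, G.T @ e2)
    # Plane 2: min 0.5||G u||^2 + (c2/2)||H u - e1||^2
    u2 = np.linalg.solve(G.T @ G / c2 + H.T @ H + reg, H.T @ e1)
    return u1, u2

def lstsvm_predict(X, u1, u2):
    Xa = np.hstack([X, np.ones((len(X), 1))])
    d1 = np.abs(Xa @ u1) / np.linalg.norm(u1[:-1])  # distance to plane 1
    d2 = np.abs(Xa @ u2) / np.linalg.norm(u2[:-1])  # distance to plane 2
    return np.where(d1 <= d2, 1, -1)                # nearer plane wins

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 2)) + [2, 2]   # toy class +1
B = rng.standard_normal((50, 2)) - [2, 2]   # toy class -1
u1, u2 = lstsvm_fit(A, B)
pred = lstsvm_predict(np.vstack([A, B]), u1, u2)
acc = (pred == np.r_[np.ones(50), -np.ones(50)]).mean()
```

Because each solve involves only one class's constraint block, the two subproblems are roughly quarter-sized compared with a single SVM QPP over all samples, which is the source of TWSVM's speed advantage.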


2019 ◽  
Vol 14 (4) ◽  
pp. 475-488
Author(s):  
Benchun Cao ◽  
Yanchun Liang ◽  
Shinichi Yoshida ◽  
Renchu Guan

The analysis of facial expressions is a hot topic in brain-computer interface research. To determine the facial expressions of subjects under the corresponding stimulation, we analyze fMRI images acquired by magnetic resonance imaging. Six kinds of facial expressions are considered: "anger", "disgust", "sadness", "happiness", "joy", and "surprise". We demonstrate that brain decoding is achievable through the parsing of two facial expressions ("anger" and "joy"). A support vector machine and an extreme learning machine are selected to classify these expressions based on time-series features. Experimental results show that the classification performance of the extreme learning machine is better than that of the support vector machine. Among the eight participants in the trials, the classification accuracy of three subjects reached 70–80%, and the remaining five subjects achieved accuracies of 50–60%. Therefore, we conclude that brain decoding can be used to help analyze human facial expressions.
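The extreme learning machine compared above trains in one shot: hidden-layer weights are random and fixed, and only the output weights are solved by least squares. A minimal sketch (the hidden size, tanh activation, and synthetic data are illustrative assumptions, not this study's fMRI features) might be:

```python
import numpy as np

def elm_train(X, y, n_hidden=64, seed=0):
    """Extreme learning machine: random fixed hidden layer, output weights
    solved in closed form via the pseudo-inverse."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden activations
    beta = np.linalg.pinv(H) @ y                     # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
y = np.sign(X[:, 0] * X[:, 1])          # nonlinear toy labels in {-1, +1}
W, b, beta = elm_train(X, y)
acc = (np.sign(elm_predict(X, W, b, beta)) == y).mean()
```

The absence of iterative training is why ELM is often faster to fit than an SVM, though its accuracy depends heavily on the hidden-layer size.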

