Transfer Kernel Common Spatial Patterns for Motor Imagery Brain-Computer Interface Classification

2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
Mengxi Dai ◽  
Dezhi Zheng ◽  
Shucong Liu ◽  
Pengju Zhang

Motor-imagery-based brain-computer interfaces (BCIs) commonly use the common spatial pattern (CSP) as a preprocessing step before classification. The CSP method is a supervised algorithm and therefore requires a large amount of training data, which is time-consuming to collect, to build the model. To address this issue, one promising approach is transfer learning, which generalizes a learning model by extracting discriminative information from other subjects for the target classification task. To this end, we propose a transfer kernel CSP (TKCSP) approach that learns a domain-invariant kernel by directly matching the distributions of source subjects and target subjects. Dataset IVa of BCI Competition III is used to demonstrate the validity of the proposed method. In the experiment, we compare the classification performance of TKCSP against CSP, CSP for subject-to-subject transfer (CSP SJ-to-SJ), regularized CSP (RCSP), stationary subspace CSP (ssCSP), multitask CSP (mtCSP), and the combined mtCSP and ssCSP (ss + mtCSP) method. The results indicate that TKCSP achieves a superior mean classification accuracy of 81.14%, especially when the source subjects provide only a small number of training samples. Comprehensive experimental evidence on the dataset verifies the effectiveness and efficiency of the proposed TKCSP approach over several state-of-the-art methods.
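Since CSP is the preprocessing step shared by most of the methods compared here, a minimal sketch of the core computation may help: for two classes, CSP solves a generalized eigenvalue problem on the average class covariance matrices and keeps the eigenvectors with the most extreme eigenvalues as spatial filters. This is a generic illustration, not the TKCSP implementation; the array shapes and the `n_pairs` parameter are assumptions for the example.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_pairs=2):
    """Compute CSP spatial filters for two classes of EEG trials.

    X1, X2: arrays of shape (trials, channels, samples).
    Returns a (2 * n_pairs, channels) matrix of spatial filters.
    """
    def avg_cov(X):
        # Average per-trial channel covariance matrices
        return np.mean([np.cov(trial) for trial in X], axis=0)

    C1, C2 = avg_cov(X1), avg_cov(X2)
    # Generalized eigenvalue problem: C1 w = lambda (C1 + C2) w
    eigvals, eigvecs = eigh(C1, C1 + C2)
    # Keep filters for the smallest and largest eigenvalues,
    # which maximize variance for one class while minimizing it for the other
    order = np.argsort(eigvals)
    idx = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, idx].T

# Toy data: 20 trials per class, 8 channels, 100 samples
rng = np.random.default_rng(0)
X1 = rng.standard_normal((20, 8, 100))
X2 = rng.standard_normal((20, 8, 100))
W = csp_filters(X1, X2)
print(W.shape)  # (4, 8)
```

A classifier would then typically use the log-variance of each spatially filtered trial, e.g. `np.log(np.var(W @ trial, axis=1))`, as the feature vector.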

2011 ◽  
Vol 2011 ◽  
pp. 1-9 ◽  
Author(s):  
Dieter Devlaminck ◽  
Bart Wyns ◽  
Moritz Grosse-Wentrup ◽  
Georges Otte ◽  
Patrick Santens

Motor-imagery-based brain-computer interfaces (BCIs) commonly use the common spatial pattern (CSP) filter as a preprocessing step before feature extraction and classification. The CSP method is a supervised algorithm and therefore needs subject-specific training data for calibration, which is very time-consuming to collect. To reduce the amount of calibration data needed for a new subject, one can apply multitask (from now on called multisubject) machine learning techniques in the preprocessing phase. Here, the goal of multisubject learning is to learn a spatial filter for a new subject based on that subject's own data and the data of other subjects. This paper outlines the details of the multitask CSP algorithm and reports results on two data sets. A clear improvement can be seen in certain subjects, especially when the number of training trials is relatively low.


Author(s):  
Jing Jin ◽  
Hua Fang ◽  
Ian Daly ◽  
Ruocheng Xiao ◽  
Yangyang Miao ◽  
...  

The common spatial patterns (CSP) algorithm is one of the most frequently used and effective spatial filtering methods for extracting relevant features for use in motor imagery brain–computer interfaces (MI-BCIs). However, an inherent defect of the traditional CSP algorithm is that it is highly sensitive to potential outliers, which adversely affects its performance in practical applications. In this work, we propose a novel feature optimization and outlier detection method for the CSP algorithm. Specifically, we use the minimum covariance determinant (MCD) to detect and remove outliers in the dataset, and then use the Fisher score to evaluate and select features. In addition, to prevent the emergence of new outliers, we propose an iterative minimum covariance determinant (IMCD) algorithm. We evaluate the proposed algorithm in terms of iteration count, classification accuracy, and feature distribution using two BCI competition datasets. The experimental results show that the average classification performance of the proposed method is 12% and 22.9% higher than that of the traditional CSP method on the two datasets ([Formula: see text]), and that it outperforms other competing methods. These results show that our method improves the performance of MI-BCI systems.
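The two building blocks named in the abstract, MCD-based outlier removal and Fisher-score feature ranking, can be sketched as follows. This is a generic illustration of those two techniques (not the authors' IMCD pipeline); the 95% distance quantile used as the outlier cutoff is an assumption for the example.

```python
import numpy as np
from sklearn.covariance import MinCovDet

def remove_outliers_mcd(X, quantile=0.95):
    """Drop samples whose MCD-based Mahalanobis distance exceeds a quantile."""
    mcd = MinCovDet(random_state=0).fit(X)
    d2 = mcd.mahalanobis(X)                     # squared robust distances
    keep = d2 <= np.quantile(d2, quantile)
    return X[keep], keep

def fisher_score(X, y):
    """Fisher score per feature: between-class over within-class variance."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    num = sum((y == c).sum() * (X[y == c].mean(axis=0) - mu) ** 2 for c in classes)
    den = sum((y == c).sum() * X[y == c].var(axis=0) for c in classes)
    return num / den

# Toy feature matrix with 5 injected outliers in the first rows
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 4))
X[:5] += 10.0
y = rng.integers(0, 2, size=100)

X_clean, keep = remove_outliers_mcd(X)
scores = fisher_score(X_clean, y[keep])
print(X_clean.shape, scores.shape)
```

Features would then be ranked by `scores` and the top-scoring ones kept for classification; iterating the outlier-removal step until no new outliers appear is the idea behind the IMCD variant described above.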


2019 ◽  
Vol 29 (03) ◽  
pp. 2050034 ◽  
Author(s):  
Jin Wang ◽  
Qingguo Wei

To improve the classification performance of motor imagery (MI) based brain-computer interfaces (BCIs), a new signal processing algorithm that classifies electroencephalogram (EEG) signals by combining a filter bank with sparse representation is proposed. The broadband EEG signals of 8–30 Hz are segmented into 10 sub-band signals using a filter bank. EEG signals in each sub-band are spatially filtered by common spatial pattern (CSP). Fisher score combined with grid search is used to select the optimal sub-band, whose band power is employed to design a dictionary matrix. A testing signal can then be sparsely represented as a linear combination of some columns of the dictionary. The sparse coefficients are estimated by [Formula: see text] norm optimization, and the residuals of the sparse coefficients are exploited for classification. The proposed classification algorithm was applied to two BCI datasets and compared with two traditional broadband CSP-based algorithms. The results showed that the proposed algorithm provided superior classification accuracies, verifying its efficacy.
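The residual-based classification step described above can be sketched in a few lines: represent the test vector sparsely over the training dictionary, then assign the class whose columns reconstruct it with the smallest residual. As an assumption for this sketch, an l1-regularized least-squares fit (scikit-learn's `Lasso`) stands in for the norm-minimization solver; the toy dictionary and `alpha` value are illustrative, not from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(D, labels, x, alpha=0.01):
    """Sparse-representation classification: represent test vector x over
    dictionary columns D, then pick the class whose atoms give the
    smallest reconstruction residual."""
    coef = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(D, x).coef_
    residuals = {}
    for c in np.unique(labels):
        coef_c = np.where(labels == c, coef, 0.0)   # keep only class-c atoms
        residuals[c] = np.linalg.norm(x - D @ coef_c)
    return min(residuals, key=residuals.get)

# Toy dictionary: 10 atoms per class, 6-dimensional features,
# with the two classes offset in opposite directions
rng = np.random.default_rng(2)
D0 = rng.standard_normal((6, 10)) + 2.0    # class-0 atoms
D1 = rng.standard_normal((6, 10)) - 2.0    # class-1 atoms
D = np.hstack([D0, D1])
labels = np.array([0] * 10 + [1] * 10)

x = D0[:, 0] + 0.05 * rng.standard_normal(6)  # noisy copy of a class-0 atom
pred = src_classify(D, labels, x)
print(pred)
```

In the paper's setting the dictionary columns would be band-power features of the CSP-filtered training trials from the selected sub-band rather than random vectors.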


Author(s):  
S. A. Chitnis ◽  
Z. Huang ◽  
K. Khoshelham

Abstract. Mobile lidar point clouds are commonly used for 3d mapping of road environments as they provide a rich, highly detailed geometric representation of objects on and around the road. However, raw lidar point clouds lack semantic information about the type of objects, which is necessary for various applications. Existing methods for the classification of objects in mobile lidar data, including state-of-the-art deep learning methods, achieve relatively low accuracies, and a primary reason for this under-performance is that the available 3d training samples are inadequate to sufficiently train deep networks. In this paper, we propose a generative model for creating synthetic 3d point segments that can aid in improving the classification performance of mobile lidar point clouds. We train a 3d Adversarial Autoencoder (3dAAE) to generate synthetic point segments that exhibit a high resemblance to, and share similar geometric features with, real point segments. We evaluate the performance of a PointNet-like classifier trained with and without the synthetic point segments. The evaluation results support our hypothesis that training a classifier with training data augmented with synthetic samples leads to significant improvement in the classification performance. Specifically, our model achieves an F1 score of 0.94 for vehicles and pedestrians and 1.00 for traffic signs.


2020 ◽  
Vol 34 (07) ◽  
pp. 11029-11036
Author(s):  
Jiabo Huang ◽  
Qi Dong ◽  
Shaogang Gong ◽  
Xiatian Zhu

Convolutional neural networks (CNNs) have achieved unprecedented success in a variety of computer vision tasks. However, they usually rely on supervised model learning with the need for massive labelled training data, dramatically limiting their usability and deployability in real-world scenarios without any labelling budget. In this work, we introduce a general-purpose unsupervised deep learning approach to deriving discriminative feature representations. It is based on self-discovering semantically consistent groups of unlabelled training samples with the same class concepts through a progressive affinity diffusion process. Extensive experiments on object image classification and clustering show the performance superiority of the proposed method over state-of-the-art unsupervised learning models on six common image recognition benchmarks: MNIST, SVHN, STL10, CIFAR10, CIFAR100 and ImageNet.


Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6995
Author(s):  
Hammad Nazeer ◽  
Noman Naseer ◽  
Aakif Mehboob ◽  
Muhammad Jawad Khan ◽  
Rayyan Azam Khan ◽  
...  

A state-of-the-art brain–computer interface (BCI) system includes brain signal acquisition, noise removal, channel selection, feature extraction, classification, and an application interface. In functional near-infrared spectroscopy-based BCI (fNIRS-BCI), channel selection may enhance classification performance by identifying brain regions that contain task-related activity. In this study, a z-score method for channel selection is proposed to improve fNIRS-BCI performance. The proposed method uses cross-correlation to match the similarity between the desired and recorded brain activity signals, then forms a vector of each channel's maximum correlation coefficient. After that, the z-score is calculated for each value of that vector, and a channel is selected if its z-score is positive. The proposed method is applied to an open-access dataset containing mental arithmetic (MA) and motor imagery (MI) tasks for twenty-nine subjects. The proposed method is compared with the conventional t-value method and with no channel selection, i.e., using all channels. The z-score method yielded significantly improved (p < 0.0167) classification accuracies of 87.2 ± 7.0%, 88.4 ± 6.2%, and 88.1 ± 6.9% for left motor imagery (LMI) vs. rest, right motor imagery (RMI) vs. rest, and mental arithmetic (MA) vs. rest, respectively. The proposed method is also validated on an open-access database of 17 subjects, containing right-hand finger tapping (RFT), left-hand finger tapping (LFT), and dominant side foot tapping (FT) tasks. The study shows an enhanced performance of the z-score method over the t-value method as an advancement in efforts to improve state-of-the-art fNIRS-BCI systems' performance.
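The selection procedure described in the abstract follows directly in code: cross-correlate each channel with the desired activity signal, keep each channel's peak correlation, z-score those peaks across channels, and select channels with a positive z-score. The toy signals below are assumptions for illustration; a real fNIRS pipeline would use a modelled hemodynamic response as the desired signal.

```python
import numpy as np
from scipy.signal import correlate

def zscore_channel_select(signals, desired):
    """Select channels whose peak cross-correlation with the desired
    activity signal has a positive z-score across channels.

    signals: (channels, samples) recorded data
    desired: (samples,) model of the expected brain activity
    Returns indices of the selected channels.
    """
    # Peak of the full cross-correlation for each channel
    peaks = np.array([np.max(correlate(ch, desired, mode="full"))
                      for ch in signals])
    # z-score the peak values across channels
    z = (peaks - peaks.mean()) / peaks.std()
    return np.where(z > 0)[0]

# Toy example: 4 channels, of which channels 0 and 2 carry the waveform
rng = np.random.default_rng(3)
t = np.linspace(0, 10, 500)
desired = np.sin(t)
signals = 0.1 * rng.standard_normal((4, 500))
signals[0] += desired
signals[2] += desired

selected = zscore_channel_select(signals, desired)
print(selected)  # [0 2]
```

Downstream feature extraction and classification would then use only the selected rows of `signals`.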


Author(s):  
Pasquale Arpaia ◽  
Francesco Donnarumma ◽  
Antonio Esposito ◽  
Marco Parvis

A method for selecting electroencephalographic (EEG) channels in motor imagery-based brain-computer interfaces (MI-BCI) is proposed to enhance the online interoperability and portability of BCI systems, as well as user comfort. The method also aims to reduce the variability and noise of MI-BCI, which can be exacerbated by a large number of EEG channels. The relation between the selected channels and MI-BCI performance is therefore analyzed. The proposed method is able to select acquisition channels common to all subjects, while achieving a performance compatible with the use of all the channels. Results are reported with reference to a standard benchmark dataset, the BCI competition IV dataset 2a. They prove that a performance compatible with the best state-of-the-art approaches can be achieved, while adopting a significantly smaller number of channels, in both two-class and four-class classification. In particular, classification accuracy is about 77–83% in binary classification with as few as 6 EEG channels, and above 60% in the four-class case when 10 channels are employed. This contributes to optimizing the EEG measurement while developing non-invasive and wearable MI-based brain-computer interfaces.


2020 ◽  
Vol 34 (07) ◽  
pp. 10542-10550 ◽  
Author(s):  
Jingjing Chen ◽  
Liangming Pan ◽  
Zhipeng Wei ◽  
Xiang Wang ◽  
Chong-Wah Ngo ◽  
...  

Recognizing ingredients for a given dish image is at the core of automatic dietary assessment, attracting increasing attention from both industry and academia. Nevertheless, the task is challenging due to the difficulty of collecting and labeling sufficient training data. On one hand, there are hundreds of thousands of food ingredients in the world, ranging from common to rare; collecting training samples for all of the ingredient categories is difficult. On the other hand, as ingredient appearances exhibit huge visual variance during food preparation, it is necessary to collect training samples under different cooking and cutting methods for robust recognition. Since obtaining sufficient fully annotated training data is not easy, a more practical way of scaling up recognition is to develop models that are capable of recognizing unseen ingredients. Therefore, in this paper, we target the problem of ingredient recognition with zero training samples. More specifically, we introduce a multi-relational GCN (graph convolutional network) that integrates ingredient hierarchy, attributes, and co-occurrence for zero-shot ingredient recognition. Extensive experiments on both Chinese and Japanese food datasets are performed to demonstrate the superior performance of the multi-relational GCN and shed light on zero-shot ingredient recognition.

