Convolutional Neural Networks for P300 Detection with Application to Brain-Computer Interfaces

IEEE Transactions on Pattern Analysis and Machine Intelligence ◽  
2011 ◽  
Vol 33 (3) ◽  
pp. 433-445 ◽  
Author(s):  
H Cecotti ◽  
A Graser
2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Eduardo Carabez ◽  
Miho Sugi ◽  
Isao Nambu ◽  
Yasuhiro Wada

From enabling basic communication to navigating an environment, many efforts in the field of brain-computer interfaces (BCI) aim to assist people who find it difficult or impossible to perform certain activities. Focusing on such people as potential users of BCI, we obtained electroencephalogram (EEG) readings from nine healthy subjects who were presented with auditory stimuli via earphones from six different virtual directions. We presented the stimuli following the oddball paradigm to elicit P300 waves within the subjects' brain activity for later identification and classification using convolutional neural networks (CNN). The CNN models are given a novel single-trial three-dimensional (3D) representation of the EEG data as input, keeping temporal and spatial information as close to the experimental setup as possible; this is relevant because eliciting the P300 has been shown to produce stronger activity in certain brain regions. Here, we present the results of CNN models using the proposed 3D input for three different stimulus presentation time intervals (500, 400, and 300 ms) and compare them to previous studies and other common classifiers. Our results show >80% accuracy for all CNN models using the proposed 3D input in single-trial P300 classification.
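The spatially faithful 3D input described in the abstract can be sketched as follows. This is a minimal illustration, assuming a hypothetical 3×3 electrode grid (the study's actual montage and channel count are not given here): each channel's time series is placed at its electrode's grid position, so spatial neighbours in the cap remain neighbours in the CNN input.

```python
import numpy as np

# Hypothetical 2D grid positions for a small montage; a real study would
# use its actual cap layout (these names and coordinates are assumptions).
GRID = {
    "F3": (0, 0), "Fz": (0, 1), "F4": (0, 2),
    "C3": (1, 0), "Cz": (1, 1), "C4": (1, 2),
    "P3": (2, 0), "Pz": (2, 1), "P4": (2, 2),
}

def to_3d_input(trial, channel_names, grid=GRID, rows=3, cols=3):
    """Map one trial (channels x samples) onto a (rows, cols, samples)
    tensor so neighbouring electrodes stay adjacent in the input."""
    n_samples = trial.shape[1]
    volume = np.zeros((rows, cols, n_samples), dtype=trial.dtype)
    for ch_idx, name in enumerate(channel_names):
        r, c = grid[name]
        volume[r, c, :] = trial[ch_idx]
    return volume

# Example: one simulated single trial, 9 channels x 100 samples.
rng = np.random.default_rng(0)
trial = rng.standard_normal((9, 100))
names = list(GRID)
vol = to_3d_input(trial, names)
print(vol.shape)  # (3, 3, 100)
```

A 3D CNN (or a 2D CNN treating time as the channel axis) can then convolve over this tensor, so its filters see local spatial structure as well as temporal structure.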


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6672
Author(s):  
Ji-Hyeok Jeong ◽  
Jun-Hyuk Choi ◽  
Keun-Tae Kim ◽  
Song-Joo Lee ◽  
Dong-Joo Kim ◽  
...  

Motor imagery (MI) brain–computer interfaces (BCIs) have been used for a wide variety of applications because they intuitively match the user's intentions to the performed task. Applying dry electroencephalography (EEG) electrodes to MI BCI applications can resolve many practical constraints. In this study, we propose a multi-domain convolutional neural network (MD-CNN) model that learns subject-specific and electrode-dependent EEG features using a multi-domain structure to improve the classification accuracy of dry-electrode MI BCIs. The proposed MD-CNN model is composed of learning layers for three domain representations (temporal, spatial, and phase). We first evaluated the proposed MD-CNN model on a public dataset, confirming 78.96% accuracy for multi-class classification (chance-level accuracy: 30%). Then, 10 healthy subjects performed three classes of MI tasks related to lower-limb movement (gait, sitting down, and resting) over two sessions (dry and wet electrodes). The proposed MD-CNN model achieved the highest classification accuracy (dry: 58.44%; wet: 58.66%; chance-level accuracy: 43.33%) with a three-class classifier and the smallest accuracy difference between the two electrode types (0.22%, d = 0.0292) compared with conventional classifiers (FBCSP, EEGNet, ShallowConvNet, and DeepConvNet) that use only a single domain. We expect that the proposed MD-CNN model could be applied to developing robust MI BCI systems with dry electrodes.
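As a rough sketch of the three input branches, the following computes a temporal, spatial, and phase representation from one EEG epoch. The spatial branch is approximated here by the channel covariance and the phase branch by the Hilbert-transform instantaneous phase; these are illustrative assumptions, not the paper's exact feature definitions.

```python
import numpy as np
from scipy.signal import hilbert

def multi_domain_features(epoch):
    """Return (time, spatial, phase) representations of one EEG epoch
    with shape (channels, samples), analogous to three MD-CNN branches."""
    time_repr = epoch                               # raw per-channel time series
    spatial_repr = np.cov(epoch)                    # channel x channel covariance
    phase_repr = np.angle(hilbert(epoch, axis=1))   # instantaneous phase per sample
    return time_repr, spatial_repr, phase_repr

# Example: one simulated epoch, 8 channels x 256 samples.
rng = np.random.default_rng(0)
epoch = rng.standard_normal((8, 256))
t, s, p = multi_domain_features(epoch)
print(t.shape, s.shape, p.shape)  # (8, 256) (8, 8) (8, 256)
```

In a multi-domain model, each representation would feed its own convolutional branch, with the branch outputs concatenated before the final classification layers.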


2021 ◽  
Author(s):  
Thomas Stephens ◽  
Jon Cafaro ◽  
Ryan MacRae ◽  
Stephen B Simons

Chronically implanted brain-computer interfaces (BCIs) offer remarkable opportunities to those living with disability and for the treatment of chronic disorders of the nervous system. However, this potential has yet to be fully realized, in part because measured signals are unstable over time. Signal disruption stems from multiple sources, including mechanical failure of the interface, changes in neuron health, and glial encapsulation of the electrodes that alters their impedance. In this study we present an algorithmic solution to the problem of long-term signal disruption in chronically implanted neural interfaces. Our approach uses a generative adversarial network (GAN), based on the original Unsupervised Image-to-Image Translation (UNIT) algorithm, which learns to recover degraded signals back to their analogous non-disrupted (clean) exemplars measured at the time of implant. We demonstrate that this approach can reliably recover simulated signals in two commonly used types of neural interface: multi-electrode arrays (MEA) and electrocorticography (ECoG). To test the accuracy of signal recovery, we employ a common BCI paradigm wherein a classification algorithm (neural decoder) is trained on the starting (non-disrupted) set of signals. Decoder performance shows the expected failure over time as signal disruption accumulates. In simulated MEA experiments, our approach recovers decoder accuracy to >90% when as many as 13/32 channels are lost, or as many as 28/32 channels have their neural responses altered. In simulated ECoG experiments, our approach stabilizes the neural decoder indefinitely, with decoder accuracies >95% over simulated lifetimes of more than 1 year. Our results suggest that such neural networks can provide a useful tool for improving the long-term utility of chronically implanted neural interfaces.
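The recovery idea can be illustrated with a much simpler stand-in: instead of the UNIT-based GAN, the sketch below fits a least-squares map from disrupted back to clean recordings on simulated paired data (the actual method learns this translation adversarially, without such pairing), then checks that the mapping reduces reconstruction error after 13 of 32 channels go silent. All signal parameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch, n_samples, n_trials = 32, 50, 200

# Simulated "clean" recordings: 4 latent sources mixed into 32 channels.
sources = rng.standard_normal((n_trials, 4, n_samples))
mixing = rng.standard_normal((n_ch, 4))
clean = np.einsum("ck,tks->tcs", mixing, sources)

# Disruption: 13 of 32 channels go silent (as in the simulated MEA case).
lost = rng.choice(n_ch, size=13, replace=False)
disrupted = clean.copy()
disrupted[:, lost, :] = 0.0

# Stand-in for the GAN: least-squares map from disrupted to clean channels.
# Because intact channels still span the latent sources, the silent
# channels are linearly recoverable from the surviving ones.
X = disrupted.transpose(0, 2, 1).reshape(-1, n_ch)   # (trials*samples, channels)
Y = clean.transpose(0, 2, 1).reshape(-1, n_ch)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
recovered = X @ W

err_disrupted = np.mean((X - Y) ** 2)
err_recovered = np.mean((recovered - Y) ** 2)
print(err_recovered < err_disrupted)  # True
```

A decoder trained on the clean signals would then be fed the recovered signals rather than the disrupted ones, which is how the abstract's accuracy-recovery experiments are framed.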

