Decoding Visual Motions from EEG Using Attention-Based RNN

2020 ◽  
Vol 10 (16) ◽  
pp. 5662
Author(s):  
Dongxu Yang ◽  
Yadong Liu ◽  
Zongtan Zhou ◽  
Yang Yu ◽  
Xinbin Liang

The main objective of this paper is to use deep neural networks to decode the electroencephalography (EEG) signals evoked when individuals perceive four types of motion stimuli (contraction, expansion, rotation, and translation). Methods for both single-trial and multi-trial EEG classification are investigated in this study. Attention mechanisms and a variant of recurrent neural networks (RNNs) are incorporated as the decoding model. The attention mechanisms emphasize task-related responses and reduce redundant information in the EEG, whereas the RNN learns feature representations for classification from the processed EEG data. To promote generalization of the decoding model, a novel online data augmentation method that randomly averages EEG sequences to generate artificial signals is proposed for single-trial EEG. On our dataset, the data augmentation method improves the accuracy of our model (based on an RNN) and two benchmark models (based on convolutional neural networks) by 5.60%, 3.92%, and 3.02%, respectively. The attention-based RNN reaches a mean accuracy of 67.18% for single-trial EEG decoding with data augmentation. When performing multi-trial EEG classification, the amount of training data decreases linearly after averaging, which may result in poor generalization. To address this deficiency, we devised three schemes that randomly combine data for network training. The results indicate that the proposed strategies effectively prevent overfitting and improve the correct classification rate compared with fixed EEG averaging (by up to 19.20%). The best of the three strategies achieves 82.92% accuracy for multi-trial EEG classification. The decoding performance of the proposed methods indicates their application potential in brain–computer interface (BCI) systems based on visual motion perception.
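The abstract gives no implementation details for the online random-averaging augmentation; a minimal sketch of the idea, assuming same-class trials stored as channels × time arrays (all names and shapes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_average_augment(trials, n_artificial, k=2, rng=rng):
    """Generate artificial EEG trials by averaging k randomly chosen
    real trials of the same class (each trial: channels x time)."""
    trials = np.asarray(trials)
    out = []
    for _ in range(n_artificial):
        idx = rng.choice(len(trials), size=k, replace=False)
        out.append(trials[idx].mean(axis=0))
    return np.stack(out)

# toy example: 10 trials, 4 channels, 100 time samples
trials = rng.standard_normal((10, 4, 100))
aug = random_average_augment(trials, n_artificial=5, k=3)
print(aug.shape)  # (5, 4, 100)
```

Because the averaging is re-randomized every epoch ("online"), the network rarely sees the exact same artificial trial twice.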

2020 ◽  
Vol 12 (7) ◽  
pp. 1092
Author(s):  
David Browne ◽  
Michael Giering ◽  
Steven Prestwich

Scene classification is an important aspect of image/video understanding and segmentation. However, remote-sensing scene classification is a challenging image recognition task, partly due to the limited training data, which causes deep-learning Convolutional Neural Networks (CNNs) to overfit. Another difficulty is that images often have very different scales and orientations (viewing angles). Yet another is that the resulting networks may be very large, again making them prone to overfitting and unsuitable for deployment on memory- and energy-limited devices. We propose an efficient deep-learning approach to tackle these problems. We use transfer learning to compensate for the lack of data, and data augmentation to handle varying scale and orientation. To reduce network size, we use a novel unsupervised learning approach based on k-means clustering, applied to all parts of the network: most network-reduction methods use computationally expensive supervised learning and apply only to the convolutional or the fully connected layers, but not both. In experiments, we set new standards in classification accuracy on four remote-sensing and two scene-recognition image datasets.
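The abstract does not detail the k-means reduction; one plausible reading, sketched below, is weight sharing: cluster a layer's flattened filters and replace each filter with its cluster centroid, so only k unique filters need to be stored (the k-means implementation and all shapes here are illustrative assumptions, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=20, rng=rng):
    """Plain Lloyd's algorithm: returns cluster centers and assignments."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers, labels

# 64 conv filters of shape 3x3x16, flattened; share weights across 8 clusters
filters = rng.standard_normal((64, 3 * 3 * 16))
centers, labels = kmeans(filters, k=8)
compressed = centers[labels]   # each filter replaced by its centroid
print(compressed.shape)        # (64, 144), but at most 8 unique rows
```

Storing `labels` plus 8 centroids instead of 64 full filters is where the size reduction comes from.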


Energies ◽  
2019 ◽  
Vol 12 (18) ◽  
pp. 3560 ◽  
Author(s):  
Acharya ◽  
Wi ◽  
Lee

Advanced metering infrastructure (AMI) is spreading to households in some countries and could serve as a data source for forecasting residential electricity demand. However, load forecasting for a single household remains a challenging topic because of the high volatility and uncertainty of household electricity demand. Moreover, the use of historical load data is limited by changes in house ownership, changes in lifestyle, the integration of new electric devices, and so on. This paper proposes a novel method to forecast the electricity loads of single residential households. The proposed forecasting method is based on convolutional neural networks (CNNs) combined with a data-augmentation technique that can artificially enlarge the training data. This method can address issues caused by a lack of historical data and improve the accuracy of residential load forecasting. Simulation results demonstrate the validity and efficacy of the proposed method.
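The abstract does not specify the augmentation technique; a common way to artificially enlarge household-load training data is jittering plus random rescaling, sketched below (both are illustrative assumptions, not necessarily the paper's method):

```python
import numpy as np

rng = np.random.default_rng(2)

def augment_load_series(series, n_copies, noise_std=0.02,
                        scale_range=(0.9, 1.1), rng=rng):
    """Enlarge a load-forecasting training set with jittered, rescaled
    copies of a measured load profile."""
    out = []
    for _ in range(n_copies):
        scale = rng.uniform(*scale_range)
        noise = rng.normal(0.0, noise_std * series.std(), size=series.shape)
        # loads must stay non-negative after perturbation
        out.append(np.clip(series * scale + noise, 0.0, None))
    return np.stack(out)

hourly_load = rng.uniform(0.2, 3.0, size=24)   # one day of hourly kW readings
aug = augment_load_series(hourly_load, n_copies=4)
print(aug.shape)  # (4, 24)
```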


Symmetry ◽  
2018 ◽  
Vol 10 (11) ◽  
pp. 648 ◽  
Author(s):  
Ismoilov Nusrat ◽  
Sung-Bong Jang

Artificial neural networks (ANNs) have attracted significant attention from researchers because many complex problems can be solved by training them. If enough data are provided during the training process, ANNs are capable of achieving good performance. However, if training data are insufficient, the predefined neural network model suffers from overfitting and underfitting problems. To solve these problems, several regularization techniques have been devised and widely applied in applications and data analysis. However, it is difficult for developers to choose the most suitable scheme for an application under development because there is no comparative information on the performance of each scheme. This paper describes comparative research on regularization techniques, evaluating the training and validation errors of a deep neural network model on a weather dataset. For the comparisons, each algorithm was implemented using a recent version of the TensorFlow neural-network library. The experimental results showed that the autoencoder had the worst performance among the schemes. When prediction accuracy was compared, data augmentation and batch normalization performed better than the others.
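Batch normalization, one of the best-performing schemes in this comparison, can be sketched in a few lines (training-mode forward pass only; `gamma` and `beta` are the learnable scale and shift, fixed here for simplicity):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch normalization over the batch axis: standardize each feature
    using the batch statistics, then apply a learnable scale and shift."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(3)
batch = rng.normal(5.0, 2.0, size=(32, 8))   # 32 samples, 8 features
normed = batch_norm(batch)
print(normed.mean(axis=0).round(6))           # ~0 for every feature
print(normed.std(axis=0).round(2))            # ~1 for every feature
```

At inference time, running averages of `mu` and `var` collected during training replace the per-batch statistics.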


2020 ◽  
Author(s):  
Kun Chen ◽  
Manning Wang ◽  
Zhijian Song

Background: Deep neural networks have been widely used in medical image segmentation and have achieved state-of-the-art performance in many tasks. However, unlike the segmentation of natural images or video frames, the manual segmentation of anatomical structures in medical images requires high expertise, so the scale of labeled training data is very small, which is a major obstacle to improving deep neural network performance in medical image segmentation. Methods: In this paper, we propose a new end-to-end generation-segmentation framework that integrates a Generative Adversarial Network (GAN) and a segmentation network and trains them simultaneously. The novelty is that during the training of the GAN, the intermediate synthetic images produced by the GAN's generator are used to pre-train the segmentation network. As the training of the GAN advances, the synthetic images evolve gradually from being very coarse to containing more realistic textures, and these images help train the segmentation network gradually. After the training of the GAN, the segmentation network is fine-tuned on the real labeled images. Results: We evaluated the proposed framework on four different datasets: 2D cardiac and lung datasets and 3D prostate and liver datasets. Compared with the original U-net and CE-Net, our framework achieves better segmentation performance. Our framework also obtains better segmentation results than U-net on small datasets, and it is more effective than the usual data augmentation methods. Conclusions: The proposed framework can be used as a pre-training method for segmentation networks, helping to achieve better segmentation results. Our method addresses the shortcomings of current data augmentation methods to some extent.
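The two-phase schedule can be sketched schematically; the step functions below are stand-ins for the real networks and optimizers (all names are illustrative, not the authors' code):

```python
import numpy as np

def train_jointly(generator_step, discriminator_step, seg_pretrain_step,
                  seg_finetune_step, real_images, real_labels, gan_epochs=3):
    """Phase 1: while the GAN trains, its intermediate synthetic images
    pre-train the segmentation network. Phase 2: fine-tune on real data."""
    for epoch in range(gan_epochs):
        synthetic = generator_step(epoch)        # images grow more realistic
        discriminator_step(synthetic, real_images)
        seg_pretrain_step(synthetic)             # pre-train on synthetic batch
    seg_finetune_step(real_images, real_labels)  # phase 2: real labeled data

log = []
train_jointly(
    generator_step=lambda e: np.zeros((2, 8, 8)) + e,
    discriminator_step=lambda fake, real: log.append("D"),
    seg_pretrain_step=lambda fake: log.append("S-pre"),
    seg_finetune_step=lambda x, y: log.append("S-fine"),
    real_images=np.ones((2, 8, 8)),
    real_labels=np.ones((2, 8, 8)),
)
print(log)  # ['D', 'S-pre', 'D', 'S-pre', 'D', 'S-pre', 'S-fine']
```

The key design choice is that pre-training is interleaved with GAN training, so the segmentation network sees a curriculum from coarse to realistic images rather than only the final generator output.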


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Xieyi Chen ◽  
Dongyun Wang ◽  
Jinjun Shao ◽  
Jun Fan

To automatically detect plastic gasket defects, a set of visual detection devices for plastic gasket defects, based on GoogLeNet Inception-V2 transfer learning, was designed and built in this study. The GoogLeNet Inception-V2 deep convolutional neural network (DCNN) was adopted to extract and classify the defect features of plastic gaskets, addressing the large number of surface defect types and the difficulty of extracting and classifying their features. Deep learning applications require a large amount of training data to avoid model overfitting, but few datasets of plastic gasket defects exist. To address this issue, data augmentation was applied to our dataset. Finally, the performance of the three convolutional neural networks was comprehensively compared. The results showed that the GoogLeNet Inception-V2 transfer learning model performed better in less time; that is, it had higher accuracy, reliability, and efficiency on the dataset used in this paper.
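Transfer learning here amounts to freezing a pretrained backbone and training only a small classifier head on the gasket data; a miniature numpy sketch of that split (the frozen random projection stands in for Inception-V2 features, and all shapes, class counts, and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def pretrained_features(x, W_frozen):
    """Frozen 'backbone': a fixed projection with ReLU, standing in for
    Inception-V2 feature extraction (never updated during training)."""
    return np.maximum(0.0, x @ W_frozen)

W_frozen = rng.standard_normal((64, 32))   # frozen backbone weights
W_head = np.zeros((32, 3))                 # trainable head: 3 defect classes

x = rng.standard_normal((16, 64))          # a toy batch of 16 gasket crops
y_true = rng.integers(0, 3, size=16)

for _ in range(50):                        # gradient steps on the head only
    f = pretrained_features(x, W_frozen)
    logits = f @ W_head
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = p.copy()
    grad[np.arange(16), y_true] -= 1.0     # softmax cross-entropy gradient
    W_head -= 0.1 * (f.T @ grad) / 16      # W_frozen is left untouched

print((logits.argmax(axis=1) == y_true).mean())  # training accuracy of the head
```

Training only the head is what lets a small defect dataset suffice: the backbone's parameters were already fitted on a large source dataset.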


Agriculture ◽  
2018 ◽  
Vol 8 (10) ◽  
pp. 147 ◽  
Author(s):  
Chenxiao Zhang ◽  
Peng Yue ◽  
Liping Di ◽  
Zhaoyan Wu

Hailed as the greatest mechanical innovation in agriculture since the replacement of draft animals by the tractor, center pivot irrigation systems irrigate crops with a significant reduction in both labor and water needs compared to traditional irrigation methods, such as flood irrigation. In the last few decades, the deployment of center pivot irrigation systems has increased dramatically throughout the United States. Monitoring the installation and operation of center pivot systems can help: (i) water resource management agencies to objectively assess water consumption and properly allocate water resources, (ii) agro-businesses to locate potential customers, and (iii) researchers to investigate land use change. However, few studies have been carried out on the automatic identification and location of center pivot irrigation systems from satellite images. Machine learning techniques, which have grown rapidly in recent years, have been widely applied to image recognition and provide a possible solution for identifying center pivot systems. In this study, a Convolutional Neural Network (CNN) approach was proposed for the identification of center pivot irrigation systems. CNNs with different structures were constructed and compared for the task. A sampling approach was presented for training data augmentation. The CNN with the best performance and the least training time was used in the testing area. A variance-based approach was proposed to further locate the center of each center pivot system. The experiment was applied to a 30-m resolution Landsat image covering an area of 20,000 km² in northern Colorado. A precision of 95.85% and a recall of 93.33% indicated that the proposed approach performed well in the center pivot irrigation system identification task.
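The abstract does not spell out the variance-based criterion; one plausible sketch, assuming a binary mask of an identified field, picks the pixel that minimizes the variance of distances to the field's edge, since a circle's boundary is equidistant from its center (the edge test and all values here are illustrative assumptions):

```python
import numpy as np

def locate_center(mask):
    """Return the foreground pixel minimizing the variance of distances
    to the field's edge pixels."""
    ys, xs = np.nonzero(mask)
    # crude edge set: foreground pixels with at least one background 4-neighbor
    edge = np.array([(y, x) for y, x in zip(ys, xs)
                     if mask[y - 1, x] == 0 or mask[y + 1, x] == 0
                     or mask[y, x - 1] == 0 or mask[y, x + 1] == 0])
    best, best_var = None, np.inf
    for y, x in zip(ys, xs):                     # candidate centers
        d = np.hypot(edge[:, 0] - y, edge[:, 1] - x)
        if d.var() < best_var:
            best, best_var = (int(y), int(x)), d.var()
    return best

# synthetic 64x64 scene with one circular field centered at (30, 25)
yy, xx = np.mgrid[0:64, 0:64]
mask = ((yy - 30) ** 2 + (xx - 25) ** 2 <= 10 ** 2).astype(int)
print(locate_center(mask))  # (30, 25)
```

Shifting the candidate center off the true center stretches distances on one side and shrinks them on the other, which inflates the variance; the true center is therefore the unique minimum for a circular field.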


2020 ◽  
Author(s):  
Yating Lin ◽  
Haojun Li ◽  
Xu Xiao ◽  
Wenxian Yang ◽  
Rongshan Yu

Understanding the immune-cell abundances of cancer and other disease-related tissues plays an important role in guiding cancer treatments. We propose data augmentation through in silico mixing with deep neural networks (DAISM-DNN), in which highly accurate and unbiased immune-cell proportion estimates are obtained from a DNN trained on dataset-specific training data, created from a subset of samples from the same batch with ground-truth cell proportions. We evaluated the performance of DAISM-DNN on three publicly available real-world datasets; the results showed that DAISM-DNN is robust against platform-specific variations among different datasets and outperforms other existing methods by a significant margin on all the datasets evaluated.
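The in silico mixing step can be sketched as follows, assuming per-cell-type expression profiles are available (the Dirichlet proportion sampling and all shapes are illustrative assumptions, not the authors' exact procedure):

```python
import numpy as np

rng = np.random.default_rng(4)

def in_silico_mix(pure_profiles, n_samples, rng=rng):
    """Create training pairs by mixing per-cell-type expression profiles
    (rows: cell types, cols: genes) at random ground-truth proportions."""
    n_types, n_genes = pure_profiles.shape
    props = rng.dirichlet(np.ones(n_types), size=n_samples)  # rows sum to 1
    mixtures = props @ pure_profiles                         # weighted sums
    return mixtures, props

pure = rng.uniform(0, 10, size=(5, 200))   # 5 cell types, 200 genes
X, props = in_silico_mix(pure, n_samples=100)
print(X.shape, props.shape)                # (100, 200) (100, 5)
print(np.allclose(props.sum(axis=1), 1.0)) # True
```

Each mixture `X[i]` comes with its known proportion vector `props[i]`, giving the DNN a supervised training signal without manual labeling.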


PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0250093
Author(s):  
Fabian Englbrecht ◽  
Iris E. Ruider ◽  
Andreas R. Bausch

Dataset annotation is a time- and labor-intensive task and an integral requirement for training and testing deep learning models. The segmentation of images in life-science microscopy requires annotated image datasets for object detection tasks such as instance segmentation. Although the amount of annotated image data required has been steadily reduced by methods such as data augmentation, manual or semi-automated data annotation remains the most labor- and cost-intensive step in cell nuclei segmentation with deep neural networks. In this work we propose a system to fully automate the annotation process for a custom fluorescent cell nuclei image dataset, reducing nuclei-labelling time by up to 99.5%. The output of our system provides high-quality training data for machine learning applications that identify the positions of cell nuclei in microscopy images. Our experiments have shown that the automatically annotated dataset provides segmentation performance on par with manual data annotation. In addition, we show that our system enables a single workflow from raw data input to the desired nuclei segmentation and tracking results without relying on pre-trained models or third-party training datasets for neural networks.
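A minimal stand-in for such an annotation pipeline is thresholding followed by connected-component labeling; the sketch below (fixed threshold, 4-connectivity, all values illustrative, far simpler than the actual system) produces per-nucleus labels and centroids:

```python
import numpy as np

def auto_annotate(img, thresh):
    """Label bright nuclei: threshold, then 4-connected component labeling.
    Returns a label image and per-nucleus centroids."""
    fg = img > thresh
    labels = np.zeros(img.shape, dtype=int)
    current = 0
    for sy, sx in zip(*np.nonzero(fg)):
        if labels[sy, sx]:
            continue                      # pixel already belongs to a nucleus
        current += 1
        stack = [(sy, sx)]
        while stack:                      # flood fill one connected component
            y, x = stack.pop()
            if not (0 <= y < img.shape[0] and 0 <= x < img.shape[1]):
                continue
            if not fg[y, x] or labels[y, x]:
                continue
            labels[y, x] = current
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    centroids = [tuple(np.mean(np.argwhere(labels == i), axis=0))
                 for i in range(1, current + 1)]
    return labels, centroids

# toy image with two bright "nuclei"
img = np.zeros((20, 20))
img[3:6, 3:6] = 1.0
img[12:15, 10:14] = 1.0
labels, cents = auto_annotate(img, thresh=0.5)
print(labels.max())  # 2
```

The resulting label image and centroids are exactly the kind of per-instance annotation an instance-segmentation network consumes as training data.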

