Instance Transfer Subject-Dependent Strategy for Motor Imagery Signal Classification Using Deep Convolutional Neural Networks

2020, Vol 2020, pp. 1-10
Author(s): Kai Zhang, Guanghua Xu, Longtin Chen, Peiyuan Tian, ChengCheng Han, et al.

In brain-computer interface (BCI) applications, variations across sessions and subjects lead to differences in the properties of brain potentials. This causes the feature distribution of electroencephalogram (EEG) signals to vary across subjects, which greatly reduces the generalization ability of a classifier. Although the subject-dependent (SD) strategy provides a promising way to address personalized classification, it cannot achieve the expected performance because of the limited amount of per-subject data, especially for a deep neural network (DNN) classification model. Herein, we propose an instance transfer subject-dependent (ITSD) framework combined with a convolutional neural network (CNN) to improve the classification accuracy of the model on motor imagery (MI) tasks. The proposed framework consists of the following steps. First, an instance transfer learning method based on a perceptual hash algorithm is proposed to measure the similarity of spectrogram EEG signals between different subjects. Then, a CNN is developed to decode the signals after instance transfer learning. Next, the classification performance of different training strategies (subject-independent (SI-) CNN, SD-CNN, and ITSD-CNN) is compared. To verify the effectiveness of the algorithm, we evaluate it on the BCI Competition IV-2b dataset. Experiments show that instance transfer learning can achieve positive transfer with a CNN classification model. Among the three training strategies, ITSD-CNN achieves an average classification accuracy of 94.7 ± 2.6% and a significant improvement over the contrast models (p < 0.01). Compared with methods proposed in previous research, the ITSD-CNN framework outperforms state-of-the-art classification methods with a mean kappa value of 0.664.
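The instance-selection step measures spectrogram similarity between subjects with a perceptual hash. A minimal NumPy sketch using an average-hash variant (the paper's exact hash algorithm and hash size are not specified here, so both are assumptions):

```python
import numpy as np

def average_hash(spectrogram, hash_size=8):
    """Downsample a spectrogram to hash_size x hash_size by block
    averaging, then threshold at the mean to get a binary hash."""
    h, w = spectrogram.shape
    # Crop so the image tiles evenly into hash_size x hash_size blocks.
    bh, bw = h // hash_size, w // hash_size
    cropped = spectrogram[:bh * hash_size, :bw * hash_size]
    blocks = cropped.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (blocks > blocks.mean()).astype(np.uint8).ravel()

def hamming_similarity(hash_a, hash_b):
    """Fraction of matching bits; 1.0 means identical hashes."""
    return 1.0 - np.count_nonzero(hash_a != hash_b) / hash_a.size
```

Instances from other subjects whose spectrogram hash is close (high Hamming similarity) to the target subject's could then be selected for transfer.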

2020, Vol 10 (1)
Author(s): Young-Gon Kim, Sungchul Kim, Cristina Eunbee Cho, In Hye Song, Hee Jin Lee, et al.

Abstract. Fast and accurate confirmation of metastasis on frozen tissue sections from intraoperative sentinel lymph node biopsy is an essential tool for critical surgical decisions. However, accurate diagnosis by pathologists is difficult within the time limitations. Training a robust and accurate deep learning model is also difficult owing to the limited number of frozen datasets with high-quality labels. To overcome these issues, we validated the effectiveness of transfer learning from CAMELYON16 to improve the performance of a convolutional neural network (CNN)-based classification model on our frozen dataset (N = 297) from Asan Medical Center (AMC). Among the 297 whole-slide images (WSIs), 157 and 40 WSIs were used to train deep learning models with different training dataset ratios of 2, 4, 8, 20, 40, and 100%. The remaining 100 WSIs were used to validate model performance in terms of patch- and slide-level classification. An additional 228 WSIs from Seoul National University Bundang Hospital (SNUBH) were used as an external validation set. Three initial weights, i.e., scratch-based (random initialization), ImageNet-based, and CAMELYON16-based models, were used to assess their effectiveness in external validation. In the patch-level classification results on the AMC dataset, CAMELYON16-based models trained with a small dataset (up to 40%, i.e., 62 WSIs) showed a significantly higher area under the curve (AUC) of 0.929 than the scratch- and ImageNet-based models at 0.897 and 0.919, respectively, while CAMELYON16-based and ImageNet-based models trained with 100% of the training dataset showed comparable AUCs of 0.944 and 0.943, respectively. For the external validation, CAMELYON16-based models showed higher AUCs than the scratch- and ImageNet-based models. These results validate the feasibility of transfer learning for enhancing model performance on frozen-section datasets with limited numbers of samples.
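Slide-level results are typically derived from the patch-level tumour probabilities. A hedged sketch of one common aggregation heuristic (top-k averaging; the paper's exact aggregation rule is not given here):

```python
import numpy as np

def slide_score(patch_probs, top_k=5):
    """Aggregate patch-level tumour probabilities into one slide-level
    score by averaging the top-k most suspicious patches."""
    probs = np.sort(np.asarray(patch_probs, dtype=float))[::-1]
    k = min(top_k, probs.size)  # tolerate slides with few patches
    return float(probs[:k].mean())
```

The slide-level AUC is then computed over these per-slide scores against the slide labels.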


Sensors, 2021, Vol 21 (13), pp. 4520
Author(s): Luis Lopes Chambino, José Silvestre Silva, Alexandre Bernardino

Facial recognition is a method of identifying or authenticating people through their faces. Nowadays, facial recognition systems that use multispectral images achieve better results than those that use only visible-spectrum images. In this work, a novel architecture for facial recognition that uses multiple deep convolutional neural networks and multispectral images is proposed. A domain-specific transfer-learning methodology applied to a deep neural network pre-trained on RGB images is shown to generalize well to the multispectral domain. We also propose a skin detector module for forgery detection. Several experiments were planned to assess the performance of our methods. First, we evaluated the performance of the forgery detection module using face masks and coverings of different materials. A second study was carried out to tune the parameters of our domain-specific transfer-learning methodology, in particular which layers of the pre-trained network should be retrained to obtain good adaptation to multispectral images. A third study evaluated the performance of support vector machine (SVM) and k-nearest neighbor classifiers using the embeddings obtained from the trained neural network. Finally, we compared the proposed method with other state-of-the-art approaches. The experimental results show performance improvements on the Tufts and CASIA NIR-VIS 2.0 multispectral databases, with rank-1 scores of 99.7% and 99.8%, respectively.
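The third study classifies face embeddings with SVM and k-nearest neighbor classifiers. A minimal NumPy sketch of the k-NN route (the embedding dimensionality and k here are illustrative, not the paper's settings):

```python
import numpy as np

def knn_predict(train_emb, train_labels, query_emb, k=3):
    """Classify a query face embedding by majority vote among its
    k nearest training embeddings (Euclidean distance)."""
    dists = np.linalg.norm(train_emb - query_emb, axis=1)
    nearest = np.argsort(dists)[:k]          # indices of the k closest
    values, counts = np.unique(train_labels[nearest], return_counts=True)
    return values[np.argmax(counts)]         # most frequent label wins
```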


2021, Vol 65 (1), pp. 11-22
Author(s): Mengyao Lu, Shuwen Jiang, Cong Wang, Dong Chen, Tian’en Chen

Highlights:
- A classification model for the front and back sides of tobacco leaves was developed for application in industry.
- A tobacco leaf grading method that combines a CNN with double-branch integration was proposed.
- The A-ResNet network was proposed and compared with other classic CNN networks.
- The grading accuracy across eight grades was 91.30% and the testing time was 82.180 ms, showing relatively high classification accuracy and efficiency.

Abstract. Flue-cured tobacco leaf grading is a key step in the production and processing of Chinese-style cigarette raw materials, directly affecting cigarette blend and quality stability. At present, manual grading of tobacco leaves is dominant in China, resulting in unsatisfactory grading quality and consuming considerable material and financial resources. In this study, for fast, accurate, and non-destructive tobacco leaf grading, 2,791 flue-cured tobacco leaves of eight different grades from south Anhui Province, China, were chosen as the study sample, and a tobacco leaf grading method that combines convolutional neural networks and double-branch integration was proposed. First, a classification model for the front and back sides of tobacco leaves was trained by transfer learning. Second, two processing methods (equal-scaled resizing and cropping) were used to obtain global images and local patches from the front sides of tobacco leaves. A global image-based tobacco leaf grading model was then developed using the proposed A-ResNet-65 network, and a local patch-based tobacco leaf grading model was developed using the ResNet-34 network. These two networks were compared with classic deep learning networks, such as VGGNet, GoogLeNet-V3, and ResNet. Finally, the grading results of the two grading models were integrated to realize tobacco leaf grading. The classification accuracy of the final model, across eight grades, was 91.30%, and grading a single tobacco leaf required 82.180 ms. The proposed method achieved relatively high grading accuracy and efficiency. It provides a method for industrial implementation of tobacco leaf grading and offers a new approach for the quality grading of other agricultural products.

Keywords: Convolutional neural network, Deep learning, Image classification, Transfer learning, Tobacco leaf grading
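The double-branch integration step fuses the outputs of the global-image and local-patch models. A minimal sketch under the assumption of a weighted average of the two predicted grade distributions (the paper's exact fusion rule may differ):

```python
import numpy as np

def integrate_branches(global_probs, local_probs, weight=0.5):
    """Fuse the global-image and local-patch grade distributions by a
    weighted average and return the winning grade index plus the fused
    distribution. The equal 0.5 weighting is an assumption."""
    fused = weight * np.asarray(global_probs) + (1 - weight) * np.asarray(local_probs)
    return int(np.argmax(fused)), fused
```

For eight tobacco grades, each model would emit an 8-way softmax vector; the fused argmax is the final grade.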


2021, Vol 11 (12), pp. 2918-2927
Author(s): A. Shankar, S. Muttan, D. Vaithiyanathan

Brain-computer interface (BCI) is a fast-growing area of research that enables communication between our brains and computers. EEG-based motor imagery BCI involves the user imagining movement, the recording and signal processing of the resulting electroencephalogram signals from the brain, and the translation of those signals into specific commands. Ultimately, motor imagery BCI has the potential to help people with motor impairments recover motor control. This paper presents a performance evaluation of an EEG-based motor imagery BCI with a classification accuracy of 80.2%, using features extracted with the Fast Fourier Transform and the Discrete Wavelet Transform and classification by an Artificial Neural Network. It concludes by examining how performance is affected by the particular feature sets and neural network parameters.
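The FFT-based feature extraction can be sketched as band-power computation over standard motor-imagery bands (the 8-12 Hz mu and 12-30 Hz beta ranges are typical choices, assumed here rather than taken from the paper):

```python
import numpy as np

def band_power_features(eeg, fs, bands=((8, 12), (12, 30))):
    """Mean spectral power in each band for one single-channel EEG
    epoch, computed from the FFT power spectrum."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2            # power per bin
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)       # bin frequencies
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(spectrum[mask].mean())
    return np.array(feats)
```

These per-band features (per channel) would then be concatenated and fed to the neural network classifier.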


Sensors, 2020, Vol 20 (17), pp. 4749
Author(s): Shaorong Zhang, Zhibin Zhu, Benxin Zhang, Bao Feng, Tianyou Yu, et al.

The common spatial pattern (CSP) is a very effective feature extraction method in motor imagery based brain-computer interfaces (BCI), but its performance depends on the selection of the optimal frequency band. Although many works have been proposed to improve CSP, most suffer from large computation costs and long feature extraction times. To this end, three new feature extraction methods based on CSP and a new feature selection method based on non-convex log regularization are proposed in this paper. First, EEG signals are spatially filtered by CSP, and then three new feature extraction methods, called CSP-Wavelet, CSP-WPD, and CSP-FB, are applied. For CSP-Wavelet and CSP-WPD, the discrete wavelet transform (DWT) or wavelet packet decomposition (WPD) is used to decompose the spatially filtered signals, and the energy and standard deviation of the wavelet coefficients are extracted as features. For CSP-FB, the spatially filtered signals are filtered into multiple bands by a filter bank (FB), and the logarithm of the variance of each band is extracted as a feature. Second, a sparse optimization method regularized with a non-convex log function, which we call LOG, is proposed for feature selection, and an optimization algorithm for LOG is given. Finally, ensemble learning is used for secondary feature selection and classification model construction. Combining the feature extraction and feature selection methods yields three new EEG decoding methods: CSP-Wavelet+LOG, CSP-WPD+LOG, and CSP-FB+LOG. Four public motor imagery datasets are used to verify the performance of the proposed methods. Compared to existing methods, the proposed methods achieved the highest average classification accuracies of 88.86%, 83.40%, 81.53%, and 80.83% on datasets 1-4, respectively. The feature extraction time of CSP-FB is the shortest.
The experimental results show that the proposed methods can effectively improve classification accuracy and reduce feature extraction time. Considering both classification accuracy and feature extraction time, CSP-FB+LOG has the best performance and can be used in real-time BCI systems.
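The CSP-FB pipeline ends with log-variance features per sub-band. A hedged NumPy sketch, where a crude FFT-mask filter and illustrative band edges stand in for the paper's actual filter bank:

```python
import numpy as np

def fft_bandpass(signal, fs, lo, hi):
    """Crude FFT-mask band-pass filter; a real filter bank would use
    properly designed IIR/FIR filters instead."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[(freqs < lo) | (freqs >= hi)] = 0
    return np.fft.irfft(spec, n=len(signal))

def csp_fb_features(spatially_filtered, fs, bands=((4, 8), (8, 12), (12, 30))):
    """Log-variance of each sub-band of one CSP-filtered signal."""
    return np.array([np.log(np.var(fft_bandpass(spatially_filtered, fs, lo, hi)))
                     for lo, hi in bands])
```

The LOG feature selection step would then pick a sparse subset of these per-band, per-spatial-filter features.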


2019, Vol 20 (1)
Author(s): Jianghui Wen, Yeshu Liu, Yu Shi, Haoran Huang, Bing Deng, et al.

Abstract. Background: Long non-coding RNA (lncRNA) is closely related to many biological activities. Since its sequence structure is similar to that of messenger RNA (mRNA), it is difficult to distinguish the two on sequence features alone. It is therefore particularly important to construct a model that can effectively identify lncRNA and mRNA. Results: First, the difference in k-mer frequency distributions between lncRNA and mRNA sequences is considered, and the sequences are transformed into a k-mer frequency matrix. The k-mers are then screened by relative entropy. A classification model for lncRNA and mRNA sequences is proposed by feeding the k-mer frequency matrix into a convolutional neural network. Finally, the optimal k-mer combination for the classification model is determined and compared with other machine learning methods on human, mouse, and chicken data. The results indicate that the proposed model has the highest classification accuracy. Furthermore, the recognition ability of the model is verified on single sequences. Conclusion: We established a classification model for lncRNA and mRNA based on k-mers and a convolutional neural network. The model combining 1-mers, 2-mers, and 3-mers achieved the highest accuracy: 0.9872 in humans, 0.8797 in mice, and 0.9963 in chickens, better than random forest, logistic regression, decision tree, and support vector machine classifiers.
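The k-mer frequency matrix at the heart of the model can be computed directly from a sequence. A minimal sketch (the lexicographic row order over the ACGT alphabet is an assumption about the matrix layout):

```python
from itertools import product

def kmer_frequencies(seq, k):
    """Relative frequency of every length-k substring over the ACGT
    alphabet, in fixed lexicographic order. Substrings containing
    other characters (e.g. N) are skipped."""
    seq = seq.upper()
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = {km: 0 for km in kmers}
    total = 0
    for i in range(len(seq) - k + 1):
        sub = seq[i:i + k]
        if sub in counts:
            counts[sub] += 1
            total += 1
    return [counts[km] / total if total else 0.0 for km in kmers]
```

Stacking the 1-mer, 2-mer, and 3-mer vectors (lengths 4, 16, and 64) per sequence would give the CNN its input features.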


Author(s): D. A. Gavrilov, N. N. Shchelkunov, A. V. Melerzanov

<p><strong>Abstract.</strong> Melanoma is one of the most aggressive malignant lesions of human skin. The accuracy of visual melanoma diagnosis depends directly on the doctor’s qualification and specialization. State-of-the-art solutions in image processing and machine learning make it possible to create intelligent systems, based on convolutional neural networks, that exceed human performance in object classification, including the classification of malignant skin lesions. This paper presents an algorithm for early melanoma diagnosis based on deep convolutional neural networks. The proposed algorithm reaches a melanoma classification accuracy of at least 91%.</p>


Biosensors, 2022, Vol 12 (1), pp. 22
Author(s): Ghadir Ali Altuwaijri, Ghulam Muhammad

Automatic high-level feature extraction has become possible with the advancement of deep learning, and it has been used to improve efficiency. Recently, classification methods for convolutional neural network (CNN)-based electroencephalography (EEG) motor imagery have been proposed and have achieved reasonably high classification accuracy. These approaches, however, use a single convolution scale in the CNN, whereas the best convolution scale varies from subject to subject. This limits classification precision. This paper proposes multibranch CNN models to address this issue by effectively extracting spatial and temporal features from raw EEG data, where the branches correspond to different filter kernel sizes. The proposed method's promising performance is demonstrated by experimental results on two public datasets, the BCI Competition IV 2a dataset and the High Gamma Dataset (HGD). The results show a 9.61% improvement in classification accuracy for the multibranch EEGNet (MBEEGNet) over the fixed one-branch EEGNet model, and a 2.95% improvement over the variable EEGNet model. In addition, the multibranch ShallowConvNet (MBShallowConvNet) improved the accuracy of the single-scale network by 6.84%. The proposed models outperformed other state-of-the-art EEG motor imagery classification methods.
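The multibranch idea, applying several temporal kernel sizes to the same raw signal and concatenating the results, can be illustrated with fixed kernels; real MBEEGNet branches use learned convolutional filters, so this is only a structural sketch:

```python
import numpy as np

def multibranch_features(eeg, kernel_sizes=(16, 32, 64)):
    """Filter the same raw signal with temporal kernels of several
    lengths (simple moving-average kernels stand in for learned CNN
    filters) and concatenate one pooled feature per branch."""
    feats = []
    for size in kernel_sizes:
        kernel = np.ones(size) / size
        filtered = np.convolve(eeg, kernel, mode="valid")
        feats.append(filtered.var())   # variance pooling per branch
    return np.array(feats)
```

Each branch captures structure at a different temporal scale; concatenating them sidesteps having to pick one best kernel size per subject.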


2021, Vol 8 (3), pp. 601
Author(s): Eko Prasetyo, Rani Purbaningtyas, Raden Dimas Adityo, Enrico Tegar Prabowo, Achmad Irfan Ferdiansyah

<p class="Abstrak">Fish is a source of animal protein in high demand among the Indonesian public; in a survey of preferred foodstuffs, milkfish ranked fourth compared to other food items. Milkfish in particular is one of the six most widely consumed fish, alongside mackerel tuna, Indian mackerel, anchovy, tilapia, and catfish, so consumers' care when buying milkfish is a serious concern in selecting fresh fish. Testing freshness by touching the fish's body can cause unintended damage, so freshness detection should be performed without touching the milkfish, using images of the condition of the eyes. In this research, we experimentally implemented two-class milkfish freshness classification (very fresh and not fresh) based on the eyes, using transfer learning from four CNNs: Xception, MobileNet V1, Resnet50, and VGG16. The results of this two-class freshness classification using 154 images show that VGG16 achieved the best performance among the architectures, with a classification accuracy of 0.97. With higher accuracy than the other architectures, VGG16 is relatively better suited for two-class milkfish freshness classification.</p><p class="Abstrak"> </p><p class="Abstrak"><em><strong>Abstract</strong></em></p><p class="Abstract"><em>Fish, a source of animal protein, is a food in high demand among Indonesia's people. In a survey of foodstuffs in demand, milkfish ranked fourth compared to other foodstuffs. Milkfish in particular is one of the six fish most consumed by Indonesia's people, besides mackerel tuna, Indian mackerel, anchovy, tilapia, and catfish, so consumers' care when buying is a serious concern in choosing fresh milkfish. Detecting freshness by touching the fish's body may cause unintended damage, so the fish's freshness should be detected without touching, using an image of the eye. In this research, we conducted an experimental implementation of milkfish freshness classification (very fresh and not fresh) based on the eyes, using transfer learning from four CNNs: Xception, MobileNet V1, Resnet50, and VGG16. The experimental results of classifying the two milkfish freshness classes using 154 images show that VGG16 achieves the best performance among the architectures, with a classification accuracy of 0.97. With higher accuracy than the other architectures, VGG16 is relatively more appropriate for classifying the two classes of milkfish freshness.</em></p>


Sensors, 2021, Vol 21 (24), pp. 8453
Author(s): Rafia Nishat Toma, Farzin Piltan, Jong-Myon Kim

Fault diagnosis and classification for machines are integral to condition monitoring in the industrial sector. In recent times, as sensor technology and artificial intelligence have developed, data-driven fault diagnosis and classification have been investigated more widely. The data-driven approach requires good-quality features to attain good fault classification accuracy, yet domain expertise and a fair amount of labeled data are needed to obtain such features. This paper proposes a deep auto-encoder (DAE) and convolutional neural network (CNN)-based bearing fault classification model that uses the motor current signals of an induction motor (IM). Motor current signals can be collected easily and non-invasively from the motor. However, current signals collected from industrial sources are highly contaminated with noise, which makes feature calculation very challenging. The DAE is utilized to estimate the nonlinear function of the system from normal-state data, after which the residual signal is obtained. A subsequent CNN model then successfully classifies the types of faults from the residual signals. The proposed semi-supervised approach achieved very high classification accuracy (more than 99%). The inclusion of the DAE was found not only to improve accuracy significantly but also to be potentially useful when the amount of labeled data is small. The experimental outcomes are compared with existing works on the same dataset, and the performance of the proposed combined approach is found to be comparable with theirs. In terms of classification accuracy and other evaluation parameters, the overall method can be considered an effective approach for bearing fault classification using the motor current signal.
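The residual computation that feeds the CNN can be sketched as follows. A trained DAE would supply the reconstruction of normal-state behaviour; the moving average here is only a stand-in so the sketch runs:

```python
import numpy as np

def residual_signal(current, reconstruct):
    """Residual between the measured motor-current signal and a
    model's reconstruction of its normal-state behaviour; fault
    signatures are expected to survive in this residual."""
    return np.asarray(current) - reconstruct(current)

# Hypothetical stand-in for a trained deep auto-encoder's output.
def moving_average_reconstruction(x, window=5):
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")
```

Anomalous transients that the normal-state model cannot reproduce remain prominent in the residual, which is what the CNN classifies.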

