Epileptic Focus Localization Based on iEEG Plot Images by Using Convolutional Neural Network

10.29007/9jmg ◽  
2020 ◽  
Author(s):  
Xuyang Zhao ◽  
Linfeng Sui ◽  
Toshihisa Tanaka ◽  
Jianting Cao ◽  
Qibin Zhao

Patients with epilepsy require the lesion to be located before surgery. Currently, clinical experts diagnose lesions through visual judgment. To reduce the workload of clinical experts, many automatic diagnostic methods have been proposed. Usually, these methods rely on only one feature as the basis for diagnosis, which has certain limitations. In this paper, we use a multiple-feature fusion method for automatic diagnosis. To capture abnormal discharge, the cause of epileptic seizures, we use filtering and entropy to extract the energy features of epileptic discharges. Because epileptic brain waves contain spike and sharp-wave components, the short-time Fourier transform (STFT) is used to analyze time-frequency features. For feature fusion, we plot the entropy color map together with the spectrogram obtained from the STFT to combine the different types of features. After the feature extraction and fusion steps, each brain signal is converted into an image. A convolutional neural network (CNN) then classifies the plot images using its visual recognition ability; in our experiments, we obtained a classification accuracy of 88.77%. By using such automatic diagnostic methods, the workload of clinical experts can be greatly reduced in actual clinical practice.
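The entropy-plus-spectrogram fusion described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the window length, the histogram-based entropy estimate, and the idea of stacking the entropy track as an extra image row are all assumptions.

```python
import numpy as np
from scipy.signal import stft

def entropy_per_window(x, win=256):
    """Shannon entropy of the amplitude histogram in each window
    (a simple stand-in for the entropy feature described above)."""
    ents = []
    for start in range(0, len(x) - win + 1, win):
        seg = x[start:start + win]
        hist, _ = np.histogram(seg, bins=16, density=True)
        p = hist[hist > 0]
        p = p / p.sum()
        ents.append(-np.sum(p * np.log2(p)))
    return np.array(ents)

def signal_to_feature_image(x, fs=512):
    """Fuse the STFT spectrogram with a per-window entropy track;
    the stacked 2-D array can then be rendered as a color-map image."""
    f, t, Z = stft(x, fs=fs, nperseg=256)
    spec = np.log1p(np.abs(Z))              # time-frequency energy
    ent = entropy_per_window(x, win=256)
    # resample the entropy track so it aligns with spectrogram columns
    ent_row = np.interp(np.linspace(0, 1, spec.shape[1]),
                        np.linspace(0, 1, len(ent)), ent)
    return np.vstack([spec, ent_row[None, :]])

sig = np.random.default_rng(0).standard_normal(5120)
img = signal_to_feature_image(sig)   # one image per iEEG segment
```

In the paper the resulting images are what the CNN classifies; here the fused array would still need to be rendered (e.g. with a colormap) before feeding an image-based model.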

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Linfeng Sui ◽  
Xuyang Zhao ◽  
Qibin Zhao ◽  
Toshihisa Tanaka ◽  
Jianting Cao

Epileptic focus localization by analysing the intracranial electroencephalogram (iEEG) plays a critical role in successful surgical resection of the epileptogenic lesion. However, manual analysis and classification of the iEEG signal by clinicians are arduous and time-consuming and depend excessively on experience. Owing to individual differences between patients, iEEG signals from different patients usually show very diverse features even when they belong to the same class. Accordingly, automatic detection of the epileptic focus is required to improve accuracy and to shorten the time to treatment. In this paper, we propose a novel feature fusion-based iEEG classification method, a deep learning model termed the Time-Frequency Hybrid Network (TF-HybridNet), in which a short-time Fourier transform (STFT) and 1D convolution layers are applied to the input iEEG in parallel to extract time-frequency features and feature maps. The time-frequency features and feature maps are then fused and fed to a 2D convolutional neural network (CNN). We used the Bern-Barcelona iEEG dataset to evaluate the performance of TF-HybridNet, and the experimental results show that our approach differentiates focal from nonfocal iEEG signals with an average classification accuracy of 94.3%, an improvement over models using only the STFT or only one-dimensional convolutional layers for feature extraction.
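The parallel two-branch extraction can be sketched as below. Everything here is a placeholder for the real TF-HybridNet: the kernels are random rather than learned, the fusion is simple row-stacking, and the branch sizes are assumptions; in the paper both branches are trainable layers feeding a 2D CNN.

```python
import numpy as np
from scipy.signal import stft

def conv1d_branch(x, kernels):
    """Naive 1-D convolutional branch: one feature map per kernel
    (a stand-in for the learned Conv1d layers in TF-HybridNet)."""
    return np.stack([np.convolve(x, k, mode="same") for k in kernels])

def tf_hybrid_features(x, fs=512, nperseg=128):
    """Run the STFT branch and the 1-D conv branch in parallel,
    align them on a common time axis, and fuse into one 2-D map
    ready for a 2-D CNN (all sizes here are illustrative)."""
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    tf_map = np.log1p(np.abs(Z))              # time-frequency branch
    kernels = np.random.default_rng(0).standard_normal((8, 9))
    fmaps = conv1d_branch(x, kernels)         # feature-map branch
    # downsample the feature maps to the spectrogram's time axis
    idx = np.linspace(0, fmaps.shape[1] - 1, tf_map.shape[1]).astype(int)
    return np.vstack([tf_map, fmaps[:, idx]])

sig = np.random.default_rng(1).standard_normal(2048)
fused = tf_hybrid_features(sig)   # 65 STFT rows + 8 feature-map rows
```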


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 1999 ◽  
Author(s):  
Donghang Yu ◽  
Qing Xu ◽  
Haitao Guo ◽  
Chuan Zhao ◽  
Yuzhun Lin ◽  
...  

Classifying remote sensing images is vital for interpreting image content. Presently, remote sensing image scene classification methods using convolutional neural networks have drawbacks, including excessive parameters and heavy calculation costs. More efficient and lightweight CNNs have fewer parameters and calculations, but their classification performance is generally weaker. We propose a more efficient and lightweight convolutional neural network method to improve classification accuracy with a small training dataset. Inspired by fine-grained visual recognition, this study introduces a bilinear convolutional neural network model for scene classification. First, the lightweight convolutional neural network MobileNetV2 is used to extract deep and abstract image features. Each feature is then transformed into two features with two different convolutional layers. The transformed features are combined by a Hadamard product to obtain an enhanced bilinear feature. Finally, the bilinear feature, after pooling and normalization, is used for classification. Experiments are performed on three widely used datasets: UC Merced, AID, and NWPU-RESISC45. Compared with other state-of-the-art methods, the proposed method has fewer parameters and calculations while achieving higher accuracy. By including feature fusion with bilinear pooling, performance and accuracy for remote scene classification can improve greatly, and the approach could be applied to other remote sensing image classification tasks.
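The Hadamard-product bilinear pooling step can be sketched in NumPy. The feature-map size (a 7×7×32 backbone output flattened to 49×32) and the projection width are placeholders for the MobileNetV2 output and the two 1×1 convolutional layers; the signed square-root and L2 normalisation are the usual bilinear-pooling post-processing, assumed here rather than quoted from the paper.

```python
import numpy as np

def bilinear_pool(features, w1, w2):
    """Hadamard-product bilinear pooling: project the backbone
    features with two learned transforms (plain matrix products
    here, standing in for 1x1 convolutions), multiply element-wise,
    average-pool over spatial positions, then normalise."""
    a = features @ w1                        # first transformed feature
    b = features @ w2                        # second transformed feature
    z = (a * b).mean(axis=0)                 # Hadamard product + avg pool
    z = np.sign(z) * np.sqrt(np.abs(z))      # signed square-root
    return z / (np.linalg.norm(z) + 1e-12)   # L2 normalisation

rng = np.random.default_rng(0)
feat = rng.standard_normal((49, 32))   # e.g. flattened 7x7 feature map
w1 = rng.standard_normal((32, 16))
w2 = rng.standard_normal((32, 16))
vec = bilinear_pool(feat, w1, w2)      # compact descriptor for the classifier
```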


2021 ◽  
Author(s):  
Guofa Li ◽  
Yanbo Wang ◽  
Jialong He ◽  
Yongchao Huo

Tool wear during machining has a great influence on the quality of the machined surface and on dimensional accuracy, so tool wear monitoring is extremely important for improving machining efficiency and workpiece quality. Multidomain features (time domain, frequency domain and time-frequency domain) can accurately characterise the degree of tool wear. However, manual feature fusion is time-consuming and limits the improvement of monitoring accuracy. A new tool wear prediction method based on multidomain feature fusion with an attention-based depth-wise separable convolutional neural network is proposed to solve these problems. In this method, multidomain features of cutting force and vibration signals are extracted and recombined into feature tensors. The proposed hypercomplex position encoding and high-dimensional self-attention mechanism are used to calculate a new representation of the input feature tensor, which emphasises the tool-wear-sensitive information and suppresses large-area background noise. The designed depth-wise separable convolutional neural network adaptively extracts high-level features that characterise tool wear from this new representation, and the tool wear is predicted automatically. The proposed method is verified on three run-to-failure data sets of three-flute ball-nose cemented carbide tools in a machining centre. Experimental results show that the prediction accuracy of the proposed method is remarkably higher than that of other state-of-the-art methods. The proposed method therefore improves prediction accuracy and provides effective guidance for decision making during processing.
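The depth-wise separable convolution at the core of the network can be sketched in one dimension. The channel counts, kernel length and random weights below are purely illustrative; the attention mechanism and hypercomplex position encoding of the actual model are not reproduced here.

```python
import numpy as np

def depthwise_separable_conv1d(x, dw_kernels, pw_weights):
    """Depth-wise step: each input channel is filtered by its own
    kernel. Point-wise step: a 1x1 mixing of channels, which is
    just a matrix product across the channel axis."""
    C, _ = x.shape
    dw = np.stack([np.convolve(x[c], dw_kernels[c], mode="same")
                   for c in range(C)])       # depth-wise, per channel
    return pw_weights @ dw                   # point-wise channel mixing

rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 64))    # 4 channels of multidomain features
dw_k = rng.standard_normal((4, 5))     # one 5-tap kernel per channel
pw_w = rng.standard_normal((8, 4))     # point-wise mixing to 8 channels
out = depthwise_separable_conv1d(feat, dw_k, pw_w)
```

The appeal of this factorisation is the parameter count: 4·5 + 8·4 = 52 weights here, versus 8·4·5 = 160 for a standard convolution with the same input/output channels and kernel length.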


Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1248
Author(s):  
Rafia Nishat Toma ◽  
Cheol-Hong Kim ◽  
Jong-Myon Kim

Condition monitoring is used to track the unavoidable degradation phases of rolling element bearings in an induction motor (IM) to ensure reliable operation in domestic and industrial machinery. The convolutional neural network (CNN) has recently been used as an effective tool to recognize and classify multiple rolling bearing faults. Due to the nonlinear and nonstationary nature of vibration signals, it is quite difficult to achieve high classification accuracy when the original signal is used directly as the input to a convolutional neural network. To expose the fault characteristics, ensemble empirical mode decomposition (EEMD) is implemented in this work to decompose the signal into multiple intrinsic mode functions (IMFs). Then, based on the kurtosis value, insignificant IMFs are filtered out and the signal is reconstructed from the remaining IMFs so that the reconstructed signal retains the fault characteristics. After that, the 1D reconstructed vibration signal is converted into a 2D image using a continuous wavelet transform with information from the damage frequency band. This transfers the signal into the time-frequency domain and reduces the nonstationary effects of the vibration signal. Finally, the generated images of the various fault conditions, which show discriminative patterns for each fault type, are used to train an appropriate CNN model. Additionally, two other image-creation methods are applied to the reconstructed signal for comparison with our proposed approach. The vibration signals are collected from a self-designed testbed containing multiple bearings in different fault conditions, and two conventional CNN architectures are compared with our proposed model. Based on the results obtained, the images generated from fault signatures not only allow the CNN to classify multiple faults accurately but also constitute a reliable and stable method for diagnosing bearing faults.
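The kurtosis-based IMF selection and reconstruction step can be sketched as follows, assuming the IMFs have already been produced by an EEMD library. The kurtosis threshold and the synthetic "IMFs" below are illustrative, not values from the paper.

```python
import numpy as np

def kurtosis(x):
    """Normalised fourth moment of a zero-mean signal; impulsive,
    fault-carrying components score high, smooth ones score low."""
    x = x - x.mean()
    return np.mean(x**4) / (np.mean(x**2) ** 2 + 1e-12)

def reconstruct_from_imfs(imfs, threshold=3.0):
    """Filter out IMFs whose kurtosis is below the threshold and
    sum the rest, so the reconstruction keeps the fault signature
    (the threshold value is an assumption)."""
    keep = [imf for imf in imfs if kurtosis(imf) > threshold]
    return np.sum(keep, axis=0) if keep else np.zeros_like(imfs[0])

t = np.arange(1000)
impulses = np.zeros(1000)
impulses[::100] = 5.0                  # impulsive, fault-like component
smooth = np.sin(2 * np.pi * t / 50)    # smooth, uninformative component
rec = reconstruct_from_imfs([impulses, smooth])
```

With these two mock components, only the impulsive one survives the kurtosis filter, so the reconstruction equals it; the output would then go to the CWT-based image-generation step.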


Entropy ◽  
2021 ◽  
Vol 23 (1) ◽  
pp. 119
Author(s):  
Tao Wang ◽  
Changhua Lu ◽  
Yining Sun ◽  
Mei Yang ◽  
Chun Liu ◽  
...  

Early detection of arrhythmia and effective treatment can prevent deaths caused by cardiovascular disease (CVD). In clinical practice, the diagnosis is made by checking the electrocardiogram (ECG) beat by beat, which is usually time-consuming and laborious. In this paper, we propose an automatic ECG classification method based on the Continuous Wavelet Transform (CWT) and a Convolutional Neural Network (CNN). The CWT is used to decompose ECG signals into their time-frequency components, and the CNN extracts features from the 2D scalogram composed of these components. Since the interval between surrounding R peaks (the RR interval) is also useful for diagnosing arrhythmia, four RR-interval features are extracted and combined with the CNN features as input to a fully connected layer for ECG classification. Tested on the MIT-BIH arrhythmia database, our method achieves an overall positive predictive value, sensitivity, F1-score, and accuracy of 70.75%, 67.47%, 68.76%, and 98.74%, respectively. Compared with existing methods, the overall F1-score of our method is higher by 4.75-16.85%. Because our method is simple and highly accurate, it can potentially be used as a clinical auxiliary diagnostic tool.
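The two fusion ingredients, a CWT scalogram and the concatenation of RR-interval features with CNN features, can be sketched as below. The hand-rolled Morlet transform, the scale set, and the placeholder RR values are assumptions standing in for a full CWT implementation and the paper's four RR-interval features.

```python
import numpy as np

def morlet_scalogram(x, scales, w0=5.0):
    """Minimal Morlet-wavelet scalogram computed by direct
    convolution: one row of |coefficients| per scale."""
    rows = []
    for s in scales:
        t = np.arange(-4 * s, 4 * s + 1)
        wav = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        wav /= np.sqrt(s)                       # scale normalisation
        rows.append(np.abs(np.convolve(x, wav, mode="same")))
    return np.array(rows)

def fuse_with_rr(cnn_features, rr_features):
    """Concatenate flattened CNN features with the RR-interval
    features before the fully connected classifier layer."""
    return np.concatenate([cnn_features.ravel(), rr_features])

beat = np.random.default_rng(0).standard_normal(256)   # one ECG beat
scal = morlet_scalogram(beat, scales=[2, 4, 8])        # 2-D scalogram
rr = np.array([0.80, 0.82, 0.79, 0.81])   # placeholder RR features (s)
fused = fuse_with_rr(scal, rr)            # scalogram used directly as a
                                          # stand-in for CNN features
```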

