Automatic diagnosis of cardiovascular disorders by sub images of the ECG signal using multi-feature extraction methods and randomized neural network

2021 ◽  
Vol 64 ◽  
pp. 102260
Author(s):  
Ömer Faruk Ertuğrul ◽  
Emrullah Acar ◽  
Erdoğan Aldemir ◽  
Abdulkerim Öztekin
Author(s):  
G. Rama Janani

This paper addresses the classification of respiratory illnesses such as COVID-19 and pneumonia using deep learning. The symptoms of COVID-19 and pneumonia are similar, so it is often difficult to identify the cause of a patient's condition without testing for COVID-19 or other respiratory infections. To distinguish COVID-19 from pneumonia, this paper presents a novel convolutional neural network (CNN), implemented in TensorFlow and Keras, for COVID-19/pneumonia classification. The proposed system implements a CNN that uses chest images to classify cases as COVID-19, normal, or pneumonia. The knowledge from these studies can potentially help in diagnosing the disease concerned. It is anticipated that the results will improve further if the CNN is supplemented with additional feature extraction methods for classifying COVID-19 and pneumonia, thereby improving the efficacy and potential of applying deep CNNs to images.
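The abstract above names TensorFlow/Keras but gives no architecture details. Purely as an illustration of the core CNN operations involved (convolution, ReLU, global average pooling, softmax over three classes), the sketch below runs a tiny forward pass in plain NumPy with random, untrained weights; the image size, filter count, and class order are assumptions, not the paper's.

```python
import numpy as np

def conv2d(image, kernels):
    """Valid 2-D convolution of a single-channel image with a bank of kernels."""
    kh, kw = kernels.shape[1:]
    h, w = image.shape
    out = np.zeros((kernels.shape[0], h - kh + 1, w - kw + 1))
    for f, k in enumerate(kernels):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[f, i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
image = rng.random((16, 16))               # stand-in for a chest image patch
kernels = rng.standard_normal((4, 3, 3))   # 4 random 3x3 filters (untrained)
weights = rng.standard_normal((3, 4))      # dense layer: 4 features -> 3 classes

feats = np.maximum(conv2d(image, kernels), 0)   # convolution + ReLU
pooled = feats.mean(axis=(1, 2))                # global average pooling
probs = softmax(weights @ pooled)               # class probabilities

classes = ["covid-19", "normal", "pneumonia"]
print(classes[int(np.argmax(probs))])
```

In a real Keras implementation these operations would be stacked `Conv2D`/pooling layers trained end to end; this untrained sketch only shows the data flow.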


2020 ◽  
Vol 10 (11) ◽  
pp. 781
Author(s):  
Abeer Al-Nafjan ◽  
Khulud Alharthi ◽  
Heba Kurdi

Brain–computer interface (BCI) technology provides a direct interface between the brain and an external device. BCIs have facilitated the monitoring of conscious brain electrical activity via electroencephalogram (EEG) signals and the detection of human emotion. Recently, great progress has been made in the development of novel paradigms for EEG-based emotion detection. These studies have also attempted to apply BCI research findings in varied contexts. Interestingly, advances in BCI technologies have increased the interest of scientists because such technologies' practical applications in human–machine relationships seem promising. This emphasizes the need for a process for building an EEG-based emotion detection system that is lightweight, in terms of both a smaller EEG dataset and the absence of separate feature extraction methods. In this study, we investigated the feasibility of using a spiking neural network to build an emotion detection system from a smaller version of the DEAP dataset, with no feature extraction involved, while maintaining decent accuracy. The results showed that, using a NeuCube-based spiking neural network, we could detect the valence emotion level from only 60 EEG samples with 84.62% accuracy, which is comparable to the accuracy of previous studies.
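NeuCube is a specific spiking-neural-network framework whose pipeline is not reproduced here. As a minimal illustration of the spiking-neuron building block such systems rest on, the sketch below simulates a leaky integrate-and-fire (LIF) neuron in NumPy; the time constant, threshold, and drive current are all illustrative assumptions.

```python
import numpy as np

def lif_simulate(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential decays with
    time constant tau, integrates the input, and emits a spike (then resets)
    whenever it crosses the threshold."""
    v = 0.0
    spikes = []
    for t, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)   # leaky integration
        if v >= v_thresh:
            spikes.append(t)          # record spike time
            v = v_reset               # reset after spike
    return spikes

rng = np.random.default_rng(1)
current = 0.08 + 0.02 * rng.standard_normal(200)  # noisy constant drive
spike_times = lif_simulate(current)
print(len(spike_times), spike_times[:3])
```

In a NeuCube-style system, EEG signals are first encoded into such spike trains and fed into a 3-D reservoir of these neurons; this sketch shows only the single-neuron dynamics.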


2020 ◽  
Vol 10 (9) ◽  
pp. 3304
Author(s):  
Eko Ihsanto ◽  
Kalamullah Ramli ◽  
Dodi Sudiana ◽  
Teddy Surya Gunawan

The electrocardiogram (ECG) is relatively easy to acquire and has been used for reliable biometric authentication. Despite growing interest in ECG authentication, two main problems still need to be tackled: accuracy and processing speed. Therefore, this paper proposes fast and accurate ECG authentication using only two stages: ECG beat detection and classification. By minimizing time-consuming ECG signal pre-processing and feature extraction, the proposed two-stage algorithm can authenticate an ECG signal in around 660 μs. Hamilton's method was used for ECG beat detection, while a Residual Depthwise Separable Convolutional Neural Network (RDSCNN) was used for classification. Between six and eight ECG beats were required for authentication across the different databases. Results showed that the proposed algorithm achieved 100% accuracy when evaluated with 48 patients in the MIT-BIH database and 90 people in the ECG-ID database, outperforming other state-of-the-art methods.
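Hamilton's detector is a published algorithm with adaptive rules that are not reproduced here. The sketch below shows only the general derivative–square–threshold idea that such QRS detectors share, applied to a synthetic spike train; the threshold factor and refractory period are illustrative assumptions.

```python
import numpy as np

def detect_beats(signal, fs, refractory=0.2):
    """Simplified QRS detection in the spirit of derivative-and-threshold
    detectors: differentiate, square, then pick samples above an adaptive
    threshold, enforcing a refractory period between accepted beats."""
    energy = np.diff(signal) ** 2
    thresh = 4.0 * energy.mean()       # adaptive threshold (illustrative)
    min_gap = int(refractory * fs)     # refractory period in samples
    beats, last = [], -min_gap
    for i, e in enumerate(energy):
        if e > thresh and i - last >= min_gap:
            beats.append(i)
            last = i
    return beats

# Synthetic "ECG": flat baseline with a sharp spike every second at fs = 250 Hz
fs = 250
sig = np.zeros(5 * fs)
sig[fs::fs] = 1.0                      # R-peak stand-ins at 1 s, 2 s, 3 s, 4 s
beats = detect_beats(sig, fs)
print(len(beats), beats)
```

The refractory period plays the same role as the minimum RR interval in real detectors: it prevents the trailing edge of one QRS complex from being counted as a second beat.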


Author(s):  
Fan Zhang

With the development of computer technology, the simulation fidelity of virtual reality is becoming ever higher, and accurate recognition of human–computer interaction gestures is a key technology for enhancing that realism. This article briefly introduces three gesture feature extraction methods, scale-invariant feature transform (SIFT), local binary patterns (LBP), and histogram of oriented gradients (HOG), together with a back-propagation (BP) neural network for classifying and recognizing different gestures. The gesture feature vectors obtained by the three extraction methods were used in turn as input to the BP neural network and simulated in MATLAB. The results showed that the feature map extracted by HOG retained information closest to the original gesture image; the BP neural network fed with HOG features converged to stability fastest and had the smallest error once stable; and, for gesture recognition, the BP neural network using HOG features achieved higher accuracy and precision and a lower false alarm rate.
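As a rough illustration of the HOG features that the best-performing pipeline above relies on, the sketch below computes a single-cell histogram of oriented gradients in NumPy (unsigned orientations, 9 bins, L2 normalisation); a full HOG descriptor would tile many such cells and apply block normalisation before feeding the BP network, and the cell size and bin count here are conventional defaults, not values from the article.

```python
import numpy as np

def hog_cell(patch, n_bins=9):
    """Histogram of oriented gradients for one cell: compute image gradients
    by central differences, then accumulate their unsigned orientations into
    bins, weighted by gradient magnitude."""
    gx = np.zeros_like(patch)
    gy = np.zeros_like(patch)
    gx[:, 1:-1] = patch[:, 2:] - patch[:, :-2]     # horizontal gradient
    gy[1:-1, :] = patch[2:, :] - patch[:-2, :]     # vertical gradient
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    hist = np.zeros(n_bins)
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m
    return hist / (np.linalg.norm(hist) + 1e-12)   # L2 normalisation

# A vertical edge: all gradient energy is horizontal, so it lands in the 0° bin
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0
h = hog_cell(patch)
print(h.round(2))
```

Concatenating such per-cell histograms over a grid of cells yields the fixed-length feature vector that serves as the BP network's input layer.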

