ECG Identification For Personal Authentication Using LSTM-Based Deep Recurrent Neural Networks

Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3069 ◽  
Author(s):  
Beom-Hun Kim ◽  
Jae-Young Pyun

Securing personal authentication is an important area of study in the field of security. In particular, fingerprint and face recognition have been widely used for personal authentication. However, these systems suffer from issues such as fingerprint forgery and environmental obstacles. To address forgery and spoofing problems, various alternatives have been considered, including the electrocardiogram (ECG). For ECG identification, linear discriminant analysis (LDA), support vector machines (SVM), principal component analysis (PCA), deep recurrent neural networks (DRNN), and recurrent neural networks (RNN) have conventionally been used. Several studies have shown that the RNN model yields the best ECG identification performance among these models. However, these methods require a lengthy input signal to achieve high accuracy and thus may not be applicable to real-time systems. In this study, we propose a bidirectional long short-term memory (LSTM)-based deep recurrent neural network (DRNN) with late fusion to develop a real-time system for ECG-based biometric identification and classification. We also suggest a preprocessing procedure for quick identification and noise reduction, comprising a derivative filter, a moving average filter, and normalization. We experimentally evaluated the proposed method on two public datasets: MIT-BIH Normal Sinus Rhythm (NSRDB) and MIT-BIH Arrhythmia (MITDB). On NSRDB, the proposed LSTM-based DRNN model achieved an overall precision, recall, and accuracy of 100% and an F1-score of 1. On MITDB, it achieved an overall precision, recall, and accuracy of 99.8% and an F1-score of 0.99. Our experiments demonstrate that the proposed model achieves higher overall classification accuracy and efficiency than the conventional LSTM approach.
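The preprocessing chain named in the abstract (derivative filter, moving average filter, normalization) can be sketched as follows. The filter length and the z-score normalization are assumptions for illustration, not the authors' exact parameters.

```python
import numpy as np

def preprocess_ecg(signal, ma_window=5):
    """Sketch of the abstract's preprocessing chain:
    derivative filter -> moving average filter -> normalization.
    Window length and normalization choice are illustrative assumptions."""
    # Derivative filter: first difference emphasizes the steep QRS slopes.
    deriv = np.diff(signal, prepend=signal[0])
    # Moving average filter: smooths high-frequency noise.
    kernel = np.ones(ma_window) / ma_window
    smoothed = np.convolve(deriv, kernel, mode="same")
    # Normalization: zero mean, unit variance (z-score).
    return (smoothed - smoothed.mean()) / (smoothed.std() + 1e-12)

# Example: a noisy synthetic waveform standing in for a real ECG record.
t = np.linspace(0, 1, 360)  # ~1 s at 360 Hz (the MITDB sampling rate)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)
x = preprocess_ecg(ecg)
```

The normalized output would then be segmented into beats and fed to the bidirectional LSTM classifier.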

Energies ◽  
2020 ◽  
Vol 13 (9) ◽  
pp. 2195 ◽  
Author(s):  
Hasan Rafiq ◽  
Xiaohan Shi ◽  
Hengxu Zhang ◽  
Huimin Li ◽  
Manesh Kumar Ochani

Non-intrusive load monitoring (NILM) is the process of estimating the operational states and power consumption of individual appliances; if implemented in real time, it can provide actionable feedback on energy usage and personalized recommendations to consumers. Intelligent disaggregation algorithms such as deep neural networks can fulfill this objective if they possess high estimation accuracy and low generalization error. To achieve these two goals, this paper presents a disaggregation algorithm based on a deep recurrent neural network using a multi-feature input space and post-processing. First, the mutual information method was used to select the electrical parameters with the most influence on the power consumption of each target appliance. Second, a multi-feature input space (MFS) built from the selected steady-state parameters was used to train a four-layer bidirectional long short-term memory (LSTM) model for each target appliance. Finally, a post-processing technique was applied at the disaggregation stage to eliminate irrelevant predicted sequences, enhancing the classification and estimation accuracy of the algorithm. A comprehensive evaluation was conducted on the 1 Hz-sampled UK-DALE and ECO datasets in a noisy scenario with seen and unseen test cases. The evaluation showed that the MFS-LSTM algorithm is computationally efficient and scalable, achieves better estimation accuracy in a noisy scenario, and generalizes better to unseen loads than benchmark algorithms. The presented results show that the proposed algorithm fulfills practical application requirements and can be deployed in real time.
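The first step, ranking electrical parameters by mutual information with an appliance's power trace, can be sketched with a simple histogram estimator. The estimator and the feature names here are illustrative stand-ins; the paper's exact estimator and parameter set are not specified in this abstract.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X; Y) in nats. A simplified stand-in for
    the mutual-information feature ranking described in the abstract."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Rank hypothetical steady-state features against an appliance's power trace.
rng = np.random.default_rng(0)
appliance_power = rng.normal(size=5000)
features = {
    "current": appliance_power + 0.1 * rng.normal(size=5000),  # informative
    "voltage": rng.normal(size=5000),                          # irrelevant
}
ranking = sorted(features,
                 key=lambda k: mutual_information(features[k], appliance_power),
                 reverse=True)
```

Top-ranked features would then form the MFS used to train the per-appliance bidirectional LSTM.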


2019 ◽  
Vol 9 (16) ◽  
pp. 3391 ◽  
Author(s):  
Santiago Pascual ◽  
Joan Serrà ◽  
Antonio Bonafonte

Conversion from text to speech relies on an accurate mapping from linguistic to acoustic symbol sequences, for which current practice employs recurrent statistical models such as recurrent neural networks. Despite the good performance of such models (in terms of low distortion in the generated speech), their recursive structure with intermediate affine transformations tends to make them slow to train and to sample from. In this work, we explore two different mechanisms that enhance the operational efficiency of recurrent neural networks, and we study their performance-speed trade-off. The first mechanism is based on the quasi-recurrent neural network, where expensive affine transformations are removed from temporal connections and placed only on feed-forward computational directions. The second mechanism includes a module based on the transformer decoder network, designed without recurrent connections but emulating them with attention and positional codes. Our results show that the proposed decoder networks are competitive in terms of distortion when compared to a recurrent baseline, while being significantly faster in CPU and GPU inference time. The best-performing model is the one based on the quasi-recurrent mechanism, reaching the same level of naturalness as the recurrent neural network-based model with a speedup of 11.2x on CPU and 3.3x on GPU.
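The quasi-recurrent idea, removing affine transformations from the temporal connections, can be sketched with the f-pooling recurrence: the candidate activations and gates are computed in parallel across time by feed-forward layers, and the only sequential work is cheap elementwise mixing. This is a minimal sketch of the mechanism, not the paper's decoder.

```python
import numpy as np

def qrnn_f_pooling(z, f, h0=None):
    """Quasi-recurrent f-pooling: the expensive affine maps produce the
    candidate z_t and forget gate f_t for all t in parallel (feed-forward);
    the temporal loop is only elementwise mixing:
        h_t = f_t * h_{t-1} + (1 - f_t) * z_t
    Shapes: z, f are (T, hidden); h0 is (hidden,)."""
    T, H = z.shape
    h = np.zeros(H) if h0 is None else h0
    out = np.empty_like(z)
    for t in range(T):  # note: no matrix multiply inside the loop
        h = f[t] * h + (1.0 - f[t]) * z[t]
        out[t] = h
    return out

# In a real QRNN, z and f come from 1-D convolutions over the input
# sequence; random values stand in for those precomputed activations here.
rng = np.random.default_rng(1)
T, H = 10, 4
z = np.tanh(rng.normal(size=(T, H)))
f = 1.0 / (1.0 + np.exp(-rng.normal(size=(T, H))))  # sigmoid gate
h_seq = qrnn_f_pooling(z, f)
```

Because the per-step work is elementwise, the sequential bottleneck shrinks, which is the source of the reported CPU/GPU speedups.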


2009 ◽  
Vol 21 (11) ◽  
pp. 3214-3227
Author(s):  
James Ting-Ho Lo

By a fundamental neural filtering theorem, a recurrent neural network with fixed weights is known to be capable of adapting to an uncertain environment. This letter reports mathematical results on the performance of such adaptation for series-parallel identification of a dynamical system, as compared with the performance of the best series-parallel identifier possible under the assumption that the precise value of the uncertain environmental process is given. In short, if an uncertain environmental process is either observable (not necessarily constant) from the output of a dynamical system or constant (not necessarily observable), then there exists a recurrent neural network that acts as a series-parallel identifier of the dynamical system and whose output approaches that of an optimal series-parallel identifier using the environmental process as an additional input.
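For readers unfamiliar with the term, a series-parallel identifier predicts the plant's next output from the plant's *measured* output and input (rather than feeding back its own predictions). A minimal illustration with a linear plant and an LMS update follows; the plant parameters and step size are arbitrary, and this is not the letter's neural filter.

```python
import numpy as np

# Plant: y(k+1) = a*y(k) + b*u(k). The series-parallel identifier predicts
# y(k+1) from the measured y(k) and u(k), adapting its estimates by LMS.
rng = np.random.default_rng(2)
a_true, b_true = 0.8, 0.5          # arbitrary illustrative plant
a_hat, b_hat, mu = 0.0, 0.0, 0.05  # identifier estimates and step size

y = 0.0
for _ in range(2000):
    u = rng.normal()                    # excitation input
    y_next = a_true * y + b_true * u    # plant output
    y_pred = a_hat * y + b_hat * u      # series-parallel: uses measured y
    err = y_next - y_pred
    a_hat += mu * err * y               # LMS gradient steps
    b_hat += mu * err * u
    y = y_next
```

In the letter's setting, the recurrent network plays the role of the identifier and, crucially, adapts to the uncertain environmental process without being told its value.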


Author(s):  
PETER STUBBERUD

Unlike feedforward neural networks (FFNNs), which can act as universal approximators for single-valued functions, recursive, or recurrent, neural networks can act as universal approximators for multi-valued functions. In this paper, a real-time recursive backpropagation (RTRBP) algorithm in vector-matrix form is developed for a two-layer globally recursive neural network (GRNN) that has multiple delays in its feedback path. The algorithm has been evaluated on two GRNNs that approximate an analytic and a nonanalytic periodic multi-valued function, neither of which a feedforward neural network is capable of approximating.
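The distinction can be made concrete with a small numerical illustration (not Stubberud's GRNN): on the circle x = cos t, y = sin t, the mapping x to y is two-valued, so no memoryless function y = g(x) fits both branches, while a computation with output feedback can track the correct branch.

```python
import numpy as np

# Samples of the circle: x = cos t, y = sin t over two periods.
dt = 4 * np.pi / 400
t = np.arange(400) * dt
x, y = np.cos(t), np.sin(t)

# Memoryless ambiguity: t = pi/2 and t = 3*pi/2 give the same x
# but opposite y, so a single-valued g(x) cannot fit both points.
ambiguous = abs(x[50] - x[150]) < 1e-9 and abs(y[50] - y[150]) > 1.9

# A rule with output feedback resolves it: on this curve dy/dt = x,
# so the Euler recursion y[k+1] = y[k] + dt*x[k] (the previous output
# acts as the recurrent state) tracks the correct branch.
y_pred = np.empty_like(y)
y_pred[0] = y[0]
for k in range(399):
    y_pred[k + 1] = y_pred[k] + dt * x[k]
```

The feedback state is what lets a recurrent network select among the multiple values, which a feedforward map cannot do.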


1999 ◽  
Vol 11 (5) ◽  
pp. 1069-1077 ◽  
Author(s):  
Danilo P. Mandic ◽  
Jonathon A. Chambers

A relationship between the learning rate η in the learning algorithm and the slope β in the nonlinear activation function is provided for a class of recurrent neural networks (RNNs) trained by the real-time recurrent learning (RTRL) algorithm. It is shown that an arbitrary RNN can be obtained from a referent RNN by imposing deterministic rules on its weights and learning rate. Such relationships reduce the number of degrees of freedom in the nonlinear optimization task of finding the optimal RNN parameters.
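The flavor of such a deterministic rule can be shown on a single sigmoid neuron (a much simpler setting than the paper's RNN/RTRL analysis, so this is only an illustration): a neuron with slope β, weight w, and learning rate η takes exactly the same gradient step as a referent neuron with slope 1, weight βw, and learning rate ηβ².

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def sgd_step(w, beta, eta, x, d):
    """One gradient step on L = 0.5*(d - y)^2 with y = sigmoid(beta*w*x).
    Single-neuron sketch only; the paper derives the analogous rules for
    full RNNs trained by RTRL."""
    y = sigmoid(beta * w * x)
    grad = (d - y) * beta * y * (1 - y) * x  # -dL/dw via the chain rule
    return w + eta * grad

x, d = 0.7, 1.0
beta, eta, w0 = 2.5, 0.1, 0.3

# Network A: slope beta, rate eta, weight w0.
wA = sgd_step(w0, beta, eta, x, d)
# Referent network B: slope 1, weight beta*w0, rate eta*beta**2.
wB = sgd_step(beta * w0, 1.0, eta * beta**2, x, d)
```

After the step, wB equals beta*wA, so the two parameterizations remain equivalent: one of slope and learning rate is a redundant degree of freedom.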

