Real-Time Arrhythmia Classification Algorithm Using Time-Domain ECG Feature Based on FFNN and CNN

2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Jing Cai ◽  
Ge Zhou ◽  
Mengkun Dong ◽  
Xinlei Hu ◽  
Guangda Liu ◽  
...  

To solve the problem of real-time arrhythmia classification, this paper proposes a real-time arrhythmia classification algorithm using deep learning with low latency, high practicality, and high reliability, which can be easily applied to a real-time arrhythmia classification system. In the algorithm, a classifier detects the QRS complex position in real time for heartbeat segmentation. Then, the ECG_RRR feature is constructed from the heartbeat segmentation result. Finally, another classifier classifies the arrhythmia in real time using the ECG_RRR feature. This article uses the MIT-BIH arrhythmia database and divides the 44 qualified records into two groups (DS1 and DS2) for training and evaluation, respectively. The results show that the recall rate, precision rate, and overall accuracy of the algorithm's inter-patient QRS complex position prediction are 98.0%, 99.5%, and 97.6%, respectively. The overall accuracy for 5-class and 13-class inter-patient arrhythmia classification is 91.5% and 75.6%, respectively. Furthermore, the proposed real-time arrhythmia classification algorithm is practical and has low latency: it is easy to deploy, since its input is the raw ECG signal and no feature preprocessing is required, and the classification latency is only the duration of one heartbeat cycle.
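The abstract does not spell out how the ECG_RRR feature is built from the segmentation result. A plausible reading, sketched below as an illustration only, is a window spanning from the previous R peak to the next R peak around the current beat, resampled to a fixed length so every beat yields the same feature dimension; the paper's exact construction may differ.

```python
def segment_rrr(ecg, r_peaks, i, target_len=200):
    """Extract a fixed-length beat segment spanning the previous R peak
    to the next R peak around beat i (an ECG_RRR-style feature).
    `ecg` is a 1-D list of samples; `r_peaks` are sample indices of
    detected R peaks. Hypothetical construction for illustration."""
    start, end = r_peaks[i - 1], r_peaks[i + 1]
    window = ecg[start:end]
    # Naive nearest-neighbour resampling to a fixed length, so every
    # beat produces the same input dimension for the classifier.
    n = len(window)
    return [window[min(n - 1, round(k * n / target_len))]
            for k in range(target_len)]
```

Because the segment ends at the *next* R peak, this construction also explains the stated latency of one heartbeat cycle: the feature for beat i can only be completed once beat i+1 has been detected.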

Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3715
Author(s):  
Ioan Ungurean ◽  
Nicoleta Cristina Gaitan

In the design and development of fog computing solutions for the Industrial Internet of Things (IIoT), we must take into account the characteristics of the industrial environment that must be met: low latency, predictability, bounded response time, and hard real-time operation. A starting point may be the reference fog architecture released by the OpenFog Consortium (now part of the Industrial Internet Consortium), but it has a high abstraction level and does not define how to integrate fieldbuses and devices into the fog system. Therefore, the biggest challenges in the design and implementation of fog solutions for the IIoT are the diversity of fieldbuses and devices used in the industrial field and ensuring compliance with all constraints in terms of real-time operation, low latency, and predictability. Thus, this paper proposes a solution for a fog node that addresses these issues and integrates industrial fieldbuses. For practical implementation, there are specialized systems on chip (SoCs) that provide support for real-time communication with the fieldbuses through specialized coprocessors and peripherals. In this paper, we describe the implementation of the fog node on a system based on the Xilinx Zynq UltraScale+ MPSoC ZU3EG A484 SoC.


Author(s):  
Jianhua He ◽  
Guangheng Zhao ◽  
Lu Wang ◽  
Xue Sun ◽  
Lei Yang

In this paper, we investigate the secrecy performance of short-packet transmissions in ultra-reliable and low-latency communications (URLLC). We consider the scenario where a multi-antenna source communicates with a single-antenna legitimate receiver requiring ultra-high reliability and low latency, in the presence of a single-antenna eavesdropper. In order to safeguard URLLC, the source transmits the artificial noise (AN) signal together with the confidential signal to confuse the eavesdropper. We adopt a lower bound on the maximal secrecy rate as the secrecy performance metric for short-packet transmissions in URLLC, which takes the target decoding error probabilities at the legitimate receiver and the eavesdropper into account. Using this metric, we first derive a compact expression of the generalized secrecy outage probability (SOP). Then, we formally prove that the generalized SOP is a convex function with respect to the power allocation factor between the confidential signal and the AN signal. We further determine the optimal power allocation factor that minimizes the generalized SOP. The results presented in this work can be useful for designing new secure transmission schemes for URLLC.
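The convexity result is what makes the optimization tractable: once the generalized SOP is known to be convex in the power allocation factor, the optimum can be found by any one-dimensional convex search. A minimal sketch using golden-section search, with a stand-in convex curve in place of the paper's actual SOP expression (which is not reproduced in this abstract):

```python
def golden_section_min(f, lo=0.0, hi=1.0, tol=1e-6):
    """Minimize a unimodal (e.g. convex) function f on [lo, hi]."""
    g = (5 ** 0.5 - 1) / 2  # inverse golden ratio
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) < f(d):        # minimum lies in [a, d]
            b, d = d, c
            c = b - g * (b - a)
        else:                  # minimum lies in [c, b]
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2

# Stand-in convex SOP as a function of the power allocation factor phi
# (fraction of transmit power given to the confidential signal); the
# paper's derived generalized-SOP expression would replace this.
sop = lambda phi: (phi - 0.62) ** 2 + 0.1
phi_star = golden_section_min(sop, 0.01, 0.99)
```

Both the placeholder curve and its minimizer at 0.62 are arbitrary; only the search procedure carries over to the real SOP expression.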


Author(s):  
Olivier Jaubert ◽  
Javier Montalt‐Tordera ◽  
Dan Knight ◽  
Gerry J. Coghlan ◽  
Simon Arridge ◽  
...  

Author(s):  
Alexey Shapin ◽  
Kittipong Kittichokechar ◽  
Niklas Andgart ◽  
Marten Sundberg ◽  
Gustav Wikstrom

Electronics ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 689
Author(s):  
Tom Springer ◽  
Elia Eiroa-Lledo ◽  
Elizabeth Stevens ◽  
Erik Linstead

As machine learning becomes ubiquitous, the need to deploy models on real-time, embedded systems will become increasingly critical. This is especially true for deep learning solutions, whose large models pose interesting challenges for resource-constrained target architectures at the "edge". The realization of machine learning, and deep learning, is being driven by the availability of specialized hardware, such as system-on-chip solutions, which provide some alleviation of constraints. Equally important, however, are the operating systems that run on this hardware, and specifically the ability to leverage commercial real-time operating systems which, unlike general-purpose operating systems such as Linux, can provide the low-latency, deterministic execution required for embedded, and potentially safety-critical, applications at the edge. Despite this, studies considering the integration of real-time operating systems, specialized hardware, and machine learning/deep learning algorithms remain limited. In particular, better mechanisms for real-time scheduling in the context of machine learning applications will prove critical as these technologies move to the edge. To address some of these challenges, we present a resource management framework designed to provide a dynamic, on-device approach to the allocation and scheduling of limited resources in a real-time processing environment. Such mechanisms are necessary to support the deterministic behavior required by the control components contained in the edge nodes. To validate the effectiveness of our approach, we applied rigorous schedulability analysis to a large set of randomly generated simulated task sets and then verified the most time-critical applications, such as the control tasks, which maintained low-latency, deterministic behavior even during off-nominal conditions. The practicality of our scheduling framework was demonstrated by integrating it into a commercial real-time operating system (VxWorks) and then running a typical deep learning image processing application to perform simple object detection. The results indicate that our proposed resource management framework can be leveraged to facilitate the integration of machine learning algorithms with real-time operating systems and embedded platforms, including widely used, industry-standard real-time operating systems.
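The abstract does not detail which schedulability test the framework applies to the randomly generated task sets. A common starting point for fixed-priority real-time tasks, shown here purely as an illustration, is the Liu-Layland utilization bound for rate-monotonic scheduling; the framework's actual analysis may be different:

```python
def rm_schedulable(tasks):
    """Liu-Layland sufficient schedulability test for rate-monotonic
    scheduling. `tasks` is a list of (wcet, period) pairs. Returns True
    if total utilization is within the n*(2**(1/n) - 1) bound."""
    n = len(tasks)
    utilization = sum(wcet / period for wcet, period in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)
```

The test is sufficient but not necessary: a task set that fails the bound may still be schedulable, which is why exact response-time analysis is often used as a second stage.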


Author(s):  
Chuyuan Wang ◽  
Linxuan Zhang ◽  
Chongdang Liu

In order to deal with a dynamic production environment with frequent fluctuations in processing time, a robotic cell needs an efficient scheduling strategy that meets real-time requirements. This paper proposes an adaptive scheduling method based on a pattern classification algorithm to guide the online scheduling process. The method extracts scheduling knowledge of the manufacturing system from production data and establishes an adaptive scheduler, which adjusts the scheduling rules according to the current production status. The main difficulty in building the scheduler is choosing the essential attributes. To address the low performance and efficiency of embedded feature selection, the adaptive scheduler is obtained with an Extreme Gradient Boosting (XGBoost) model, and an improved hybrid optimization algorithm that integrates the Gini impurity of the XGBoost model into Particle Swarm Optimization (PSO) is employed to acquire the optimal subset of features. Results on a simulated robotic cell system show that the proposed PSO-XGBoost algorithm outperforms existing pattern classification algorithms, that the learned adaptive model improves on the basic dispatching rules, and that the method meets the demands of real-time scheduling.
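To make the wrapper part of the hybrid search concrete, the sketch below shows a minimal binary PSO for feature-subset selection. The fitness function is a caller-supplied stand-in: in the paper's setting it would score a subset by XGBoost performance combined with a Gini-impurity term, neither of which is reproduced here, so every coefficient below is illustrative.

```python
import math
import random

def binary_pso_select(n_features, fitness, n_particles=12, iters=40, seed=0):
    """Minimal binary PSO for feature-subset selection. `fitness` maps a
    0/1 mask (tuple) to a score to maximize. Inertia and acceleration
    constants (0.7, 1.5, 1.5) are conventional illustrative choices."""
    rng = random.Random(seed)
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    pos = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=lambda m: fitness(tuple(m)))[:]
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(n_features):
                # Velocity pulls toward personal and global bests.
                vel[i][j] = (0.7 * vel[i][j]
                             + 1.5 * rng.random() * (pbest[i][j] - pos[i][j])
                             + 1.5 * rng.random() * (gbest[j] - pos[i][j]))
                # Sigmoid of the velocity gives the probability of bit = 1.
                pos[i][j] = 1 if rng.random() < sigmoid(vel[i][j]) else 0
            if fitness(tuple(pos[i])) > fitness(tuple(pbest[i])):
                pbest[i] = pos[i][:]
        gbest = max(pbest, key=lambda m: fitness(tuple(m)))[:]
    return gbest
```

Folding the XGBoost Gini-impurity scores into `fitness` is what turns this generic binary PSO into the hybrid PSO-XGBoost selection the paper describes.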


2014 ◽  
Vol 24 (07) ◽  
pp. 1450023 ◽  
Author(s):  
LUNG-CHANG LIN ◽  
CHEN-SEN OUYANG ◽  
CHING-TAI CHIANG ◽  
REI-CHENG YANG ◽  
RONG-CHING WU ◽  
...  

Refractory epilepsy often has deleterious effects on an individual's health and quality of life. Early identification of patients whose seizures are refractory to antiepileptic drugs is important in considering the use of alternative treatments. Although idiopathic epilepsy is regarded as carrying a significantly lower risk of developing refractory epilepsy, a subset of patients with idiopathic epilepsy is still refractory to medical treatment. In this study, we developed an effective method to predict the refractoriness of idiopathic epilepsy. Sixteen EEG segments from 12 well-controlled patients and 14 EEG segments from 11 refractory patients were analyzed at the time of the first EEG recordings, before antiepileptic drug treatment. Ten crucial EEG feature descriptors were selected for classification: three related to decorrelation time and four related to the relative power of the delta/gamma bands. These seven feature descriptors had significantly higher values in the well-controlled group than in the refractory group. In contrast, the remaining three feature descriptors, related to spectral edge frequency, kurtosis, and energy of wavelet coefficients, had significantly lower values in the well-controlled group than in the refractory group. The analyses yielded a weighted precision of 94.2% and a recall of 93.3%. The developed method is therefore a useful tool for identifying the possibility of developing refractory epilepsy in patients with idiopathic epilepsy.
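To illustrate one of the spectral descriptors named above, the sketch below computes the spectral edge frequency of an EEG segment: the frequency below which a given fraction (commonly 95%) of the total spectral power lies. The naive DFT is for self-containment only; in practice an FFT would be used, and the study's exact parameterization (edge fraction, windowing) is not given in this abstract.

```python
import math

def spectral_edge_frequency(signal, fs, edge=0.95):
    """Frequency (Hz) below which `edge` of the total spectral power of
    `signal` (a list of samples at rate `fs`) lies. Naive O(n^2) DFT,
    adequate for short EEG segments."""
    n = len(signal)
    half = n // 2
    power = []
    for k in range(half + 1):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power.append(re * re + im * im)
    total = sum(power)
    cum = 0.0
    for k, p in enumerate(power):
        cum += p
        if cum >= edge * total:
            return k * fs / n  # convert bin index to Hz
    return half * fs / n
```

The relative delta/gamma power descriptors would be computed from the same `power` array by summing bins within each band and dividing by the total.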


2021 ◽  
Author(s):  
Luka Vranić ◽  
Tin Nadarević ◽  
Davor Štimac

Background: Barrett's esophagus (BE) requires surveillance to identify potential neoplasia at an early stage. The standard surveillance regimen includes random four-quadrant biopsies according to the Seattle protocol. The main limitations of random biopsies are a high risk of sampling error, difficulties in histology interpretation, frequent inadequate classification of pathohistological changes, an increased risk of bleeding, and the time needed to reach the final diagnosis. Probe-based confocal laser endomicroscopy (pCLE) has emerged as a potential tool to overcome these limitations. Summary: pCLE is a real-time microscopic imaging method that offers evaluation of epithelial and subepithelial structures at 1000-fold magnification. In theory, pCLE has the potential to eliminate the need for biopsy in BE patients. Its main advantages would be real-time diagnosis and decision making, greater diagnostic accuracy, and evaluation of a larger area than random biopsies. Clinical pCLE studies in the esophagus show high diagnostic accuracy, and its high negative predictive value offers high reliability and confidence in excluding dysplastic and neoplastic lesions. However, it still cannot replace histopathology, owing to its lower positive predictive value and sensitivity. Key messages: Despite promising results, its role in routine use in patients with Barrett's esophagus remains questionable, primarily owing to the lack of well-organized, double-blind randomized trials.

