A Deep-Learning Algorithm (ECG12Net) for Detecting Hypokalemia and Hyperkalemia by Electrocardiography: Algorithm Development

10.2196/15931 ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. e15931 ◽  
Author(s):  
Chin-Sheng Lin ◽  
Chin Lin ◽  
Wen-Hui Fang ◽  
Chia-Jung Hsu ◽  
Sy-Jou Chen ◽  
...  

Background The detection of dyskalemias (hypokalemia and hyperkalemia) currently depends on laboratory tests. Since cardiac tissue is very sensitive to dyskalemia, electrocardiography (ECG) may be able to uncover clinically important dyskalemias before laboratory results become available. Objective Our study aimed to develop a deep-learning model, ECG12Net, to detect dyskalemias based on ECG presentations and to evaluate the logic and performance of this model. Methods Spanning May 2011 to December 2016, 66,321 ECG records with corresponding serum potassium (K+) concentrations were obtained from 40,180 patients admitted to the emergency department. ECG12Net is an 82-layer convolutional neural network that estimates serum K+ concentration. Six clinicians, three emergency physicians and three cardiologists, participated in a human-machine competition. Sensitivity, specificity, and balanced accuracy were used to compare the performance of ECG12Net with that of these physicians. Results In a human-machine competition including 300 ECGs of different serum K+ concentrations, the areas under the curve for detecting hypokalemia and hyperkalemia with ECG12Net were 0.926 and 0.958, respectively, significantly better than those of our best clinicians. Moreover, in detecting hypokalemia and hyperkalemia, the sensitivities were 96.7% and 83.3%, respectively, and the specificities were 93.3% and 97.8%, respectively. In a test set including 13,222 ECGs, ECG12Net had similar sensitivity for severe hypokalemia (95.6%) and severe hyperkalemia (84.5%), with a mean absolute error of 0.531. The specificities for detecting hypokalemia and hyperkalemia were 81.6% and 96.0%, respectively. Conclusions A deep-learning model based on a 12-lead ECG may help physicians promptly recognize severe dyskalemias and thereby potentially reduce cardiac events.
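The sensitivity, specificity, and balanced accuracy reported for ECG12Net are standard confusion-matrix quantities; a minimal sketch (the counts below are illustrative, not the paper's data):

```python
def binary_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and balanced accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                     # true positive rate
    specificity = tn / (tn + fp)                     # true negative rate
    balanced_accuracy = (sensitivity + specificity) / 2
    return sensitivity, specificity, balanced_accuracy

# Illustrative counts for a hypokalemia screen on 300 ECGs
sens, spec, bal = binary_metrics(tp=29, fn=1, tn=235, fp=35)
```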

2021 ◽  
Author(s):  
Jae-Seung Yun ◽  
Jaesik Kim ◽  
Sang-Hyuk Jung ◽  
Seon-Ah Cha ◽  
Seung-Hyun Ko ◽  
...  

Objective: We aimed to develop and evaluate a non-invasive deep learning algorithm for screening type 2 diabetes in UK Biobank participants using retinal images. Research Design and Methods: The deep learning model for prediction of type 2 diabetes was trained on retinal images from 50,077 UK Biobank participants and tested on 12,185 participants. We evaluated its performance in terms of predicting traditional risk factors (TRFs) and genetic risk for diabetes. Next, we compared the performance of three models in predicting type 2 diabetes using 1) an image-only deep learning algorithm, 2) TRFs, and 3) the combination of the algorithm and TRFs. Assessing net reclassification improvement (NRI) allowed quantification of the improvement afforded by adding the algorithm to the TRF model. Results: When predicting TRFs with the deep learning algorithm, the areas under the curve (AUCs) obtained with the validation set for age, sex, and HbA1c status were 0.931 (0.928-0.934), 0.933 (0.929-0.936), and 0.734 (0.715-0.752), respectively. When predicting type 2 diabetes, the AUC of the composite logistic model using non-invasive TRFs was 0.810 (0.790-0.830), and that for the deep learning model using only fundus images was 0.731 (0.707-0.756). Upon addition of TRFs to the deep learning algorithm, discriminative performance improved to 0.844 (0.826-0.861). The addition of the algorithm to the TRF model improved risk stratification with an overall NRI of 50.8%. Conclusions: Our results demonstrate that this deep learning algorithm can be a useful tool for stratifying individuals at high risk of type 2 diabetes in the general population.
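The two-category net reclassification improvement used to quantify the benefit of adding the algorithm to the TRF model can be sketched as follows (the threshold and data are illustrative; the paper's NRI calculation may use different risk categories):

```python
def net_reclassification_improvement(old_risk, new_risk, outcome, threshold=0.5):
    """Two-category NRI: net upward reclassification among events plus
    net downward reclassification among non-events when switching models."""
    up_e = down_e = up_n = down_n = 0
    n_events = sum(outcome)
    n_nonevents = len(outcome) - n_events
    for old, new, y in zip(old_risk, new_risk, outcome):
        old_hi, new_hi = old >= threshold, new >= threshold
        if new_hi and not old_hi:        # moved to the high-risk category
            up_e += y
            up_n += 1 - y
        elif old_hi and not new_hi:      # moved to the low-risk category
            down_e += y
            down_n += 1 - y
    return (up_e - down_e) / n_events + (down_n - up_n) / n_nonevents

# Illustrative: the new model reclassifies two of four patients correctly
nri = net_reclassification_improvement(
    old_risk=[0.4, 0.6, 0.3, 0.7],
    new_risk=[0.6, 0.7, 0.2, 0.4],
    outcome=[1, 1, 0, 0])
```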


2021 ◽  
Vol 39 (15_suppl) ◽  
pp. 1556-1556
Author(s):  
Alexander S. Rich ◽  
Barry Leybovich ◽  
Melissa Estevez ◽  
Jamie Irvine ◽  
Nisha Singh ◽  
...  

1556 Background: Identifying patients with a particular cancer and determining the date of that diagnosis from EHR data is important for selecting real-world research cohorts and conducting downstream analyses. However, cancer diagnoses and their dates are often not accurately recorded in the EHR in a structured form. We developed a unified deep learning model for identifying patients with NSCLC and their initial and advanced diagnosis date(s). Methods: The study used a cohort of 52,834 patients with lung cancer ICD codes from the nationwide deidentified Flatiron Health EHR-derived database. For all patients in the cohort, abstractors used an in-house technology-enabled platform to identify an NSCLC diagnosis, advanced disease, and relevant diagnosis date(s) via chart review. Advanced NSCLC was defined as stage IIIB or IV disease at diagnosis or early-stage disease that recurred or progressed. The deep learning model was trained on 38,517 patients, with a separate 14,317-patient test cohort. The model input was a set of sentences containing keywords related to (a)NSCLC, extracted from a patient’s EHR documents. Each sentence was associated with a date, using the document timestamp or, if present, a date mentioned explicitly in the sentence. The sentences were processed by a GRU network, followed by an attentional network that integrated across sentences, outputting a prediction of whether the patient had been diagnosed with (a)NSCLC and, if so, the diagnosis date(s). We measured sensitivity and positive predictive value (PPV) of extracting the presence of initial and advanced diagnoses in the test cohort. Among patients with both model-extracted and abstracted diagnosis dates, we also measured 30-day accuracy, defined as the proportion of patients whose dates match to within 30 days. Real-world overall survival (rwOS) for patients abstracted vs. model-extracted as advanced was calculated using Kaplan-Meier methods (index date: abstracted vs. model-extracted advanced diagnosis date). Results: The Table shows the sensitivity, PPV, and accuracy of the model-extracted diagnoses and dates. rwOS was similar using model-extracted aNSCLC diagnosis dates (median = 13.7 months) versus abstracted diagnosis dates (median = 13.3 months), with a difference of 0.4 months (95% CI = [0.0, 0.8]). Conclusions: Initial and advanced diagnoses of NSCLC and dates of diagnosis can be accurately extracted from unstructured clinical text using a deep learning algorithm. This can further enable the use of EHR data for research on real-world treatment patterns and outcomes, and other applications such as clinical trial matching. Future work should aim to understand the impact of model errors on downstream analyses. [Table: see text]
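The attentional integration across per-sentence encodings described in the Methods can be sketched as softmax-weighted pooling over sentence vectors (the dimensions and query vector are illustrative, not the trained model's parameters):

```python
import numpy as np

def attention_pool(sentence_vecs, query):
    """Softmax attention over per-sentence encodings (rows), e.g. GRU
    final states, returning a single patient-level vector."""
    scores = sentence_vecs @ query            # one relevance score per sentence
    scores = scores - scores.max()            # stabilise the softmax numerically
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights @ sentence_vecs            # attention-weighted sum

# Three 3-d sentence encodings; the query strongly favours the first sentence
pooled = attention_pool(np.eye(3), np.array([10.0, 0.0, 0.0]))
```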


2021 ◽  
Vol 251 ◽  
pp. 04012
Author(s):  
Simon Akar ◽  
Gowtham Atluri ◽  
Thomas Boettcher ◽  
Michael Peters ◽  
Henry Schreiner ◽  
...  

The locations of proton-proton collision points in LHC experiments are called primary vertices (PVs). Preliminary results of a hybrid deep learning algorithm for identifying and locating these vertices, targeting the Run 3 incarnation of LHCb, were described at conferences in 2019 and 2020. In the past year we have made significant progress in a variety of related areas. Using two newer Kernel Density Estimators (KDEs) as input feature sets improves the fidelity of the models, as does using full LHCb simulation rather than the “toy Monte Carlo” originally (and still) used to develop models. We have also built a deep learning model to calculate the KDEs from track information. Connecting a tracks-to-KDE model to a KDE-to-hists model used to find PVs provides a proof-of-concept that a single deep learning model can use track information to find PVs with high efficiency and high fidelity. We have studied a variety of models systematically to understand how variations in their architectures affect performance. While the studies reported here are specific to the LHCb geometry and operating conditions, the results suggest that the same approach could be used by the ATLAS and CMS experiments.
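A kernel density estimate of track origins along the beamline, of the kind used as the model's input features, can be sketched in a few lines (the bandwidth, units, and grid are illustrative, not the LHCb KDE definitions):

```python
import numpy as np

def track_kde(track_z, z_grid, bandwidth=0.5):
    """Summed Gaussian kernel density of track origins along the beam
    axis z (mm); peaks in the result mark primary-vertex candidates."""
    diffs = (z_grid[:, None] - track_z[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs**2) / (bandwidth * np.sqrt(2.0 * np.pi))
    return kernels.sum(axis=1)   # unnormalised density at each grid point

# Two clusters of tracks: candidate PVs near z = 0 and z = 30 mm
z_grid = np.linspace(-10.0, 40.0, 501)
density = track_kde(np.array([0.0, 0.1, -0.1, 30.0, 30.2]), z_grid)
```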


2021 ◽  
pp. svn-2020-000647
Author(s):  
Jia-wei Zhong ◽  
Yu-jia Jin ◽  
Zai-jun Song ◽  
Bo Lin ◽  
Xiao-hui Lu ◽  
...  

Background and purpose: Early haematoma expansion is determinative in predicting the outcome of intracerebral haemorrhage (ICH) patients. The aims of this study were to develop a novel prediction model for haematoma expansion by applying a deep learning model and to validate its prediction accuracy. Methods: Data for this study were obtained from a prospectively enrolled cohort of patients with primary supratentorial ICH from our centre. We developed a deep learning model to predict haematoma expansion and compared its performance with conventional non-contrast CT (NCCT) markers. To evaluate the predictability of this model, it was also compared with a logistic regression model based on haematoma volume and with the BAT score. Results: A total of 266 patients were included for analysis, and 74 (27.8%) of them experienced early haematoma expansion. The deep learning model exhibited the highest C statistic, 0.80, compared with 0.64, 0.65, 0.51, 0.58 and 0.55 for hypodensities, black hole sign, blend sign, fluid level and irregular shape, respectively. The C statistics for swirl sign (0.70; p=0.211) and heterogeneous density (0.70; p=0.141) did not differ significantly from that of the deep learning model. Moreover, the predictive value of the deep learning model was significantly superior to that of the logistic model based on haematoma volume (0.62; p=0.042) and the BAT score (0.65; p=0.042). Conclusions: Compared with the conventional NCCT markers and the BAT predictive model, the deep learning algorithm showed superiority in predicting early haematoma expansion in ICH patients.
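The C statistic used to compare the models above is the probability that a randomly chosen expander receives a higher predicted risk than a randomly chosen non-expander, with ties counted as half; a minimal sketch:

```python
def c_statistic(scores, labels):
    """C statistic (AUC): fraction of (expander, non-expander) pairs in
    which the expander is scored higher; tied scores count as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    concordant = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return concordant / (len(pos) * len(neg))
```

A perfect ranking gives 1.0; a model that ranks every non-expander above every expander gives 0.0, and chance performance is 0.5.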


2021 ◽  
Vol 53 (2) ◽  
Author(s):  
Sen Yang ◽  
Yaping Zhang ◽  
Siu-Yeung Cho ◽  
Ricardo Correia ◽  
Stephen P. Morgan

Abstract Conventional blood pressure (BP) measurement methods have drawbacks such as being invasive, cuff-based or requiring manual operation. There is significant interest in the development of non-invasive, cuff-less and continuous BP measurement based on physiological signals. However, in these methods, extracting features from signals is challenging in the presence of noise or signal distortion. When using machine learning, errors in feature extraction result in errors in BP estimation; therefore, this study explores the use of raw signals as a direct input to a deep learning model. To enable comparison with traditional machine learning models that use features from the photoplethysmogram and electrocardiogram, a hybrid deep learning model that utilises both raw signals and physical characteristics (age, height, weight and gender) is developed. This hybrid model performs best in terms of both diastolic BP (DBP) and systolic BP (SBP), with mean absolute errors of 3.23 ± 4.75 mmHg and 4.43 ± 6.09 mmHg, respectively. DBP and SBP meet the Grade A and Grade B performance requirements of the British Hypertension Society, respectively.
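The British Hypertension Society grades cited above are assigned from the cumulative percentage of absolute errors falling within 5, 10 and 15 mmHg; a sketch assuming the published A/B/C thresholds of 60/85/95, 50/75/90 and 40/65/85 percent:

```python
def bhs_grade(abs_errors_mmhg):
    """BHS grade from absolute BP estimation errors: the cumulative
    percentage of errors within 5/10/15 mmHg must reach A: 60/85/95,
    B: 50/75/90, C: 40/65/85; anything worse is grade D."""
    n = len(abs_errors_mmhg)
    pct = [100.0 * sum(e <= t for e in abs_errors_mmhg) / n for t in (5, 10, 15)]
    for grade, required in (("A", (60, 85, 95)),
                            ("B", (50, 75, 90)),
                            ("C", (40, 65, 85))):
        if all(p >= r for p, r in zip(pct, required)):
            return grade
    return "D"
```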


2020 ◽  
pp. 000313482098255
Author(s):  
Michael D. Watson ◽  
Maria R. Baimas-George ◽  
Keith J. Murphy ◽  
Ryan C. Pickens ◽  
David A. Iannitti ◽  
...  

Background Neoadjuvant therapy may improve survival of patients with pancreatic adenocarcinoma; however, determining response to therapy is difficult. Artificial intelligence allows for novel analysis of images. We hypothesized that a deep learning model can predict tumor response to neoadjuvant therapy. Methods Patients with pancreatic cancer receiving neoadjuvant therapy prior to pancreatoduodenectomy were identified between November 2009 and January 2018. College of American Pathologists Tumor Regression Grades 0-2 were defined as pathologic response (PR) and grade 3 as no response (NR). Axial images from preoperative computed tomography scans were used to create a 5-layer convolutional neural network and LeNet deep learning model to predict PR. The hybrid model additionally incorporated a decrease in carbohydrate antigen 19-9 (CA19-9) of 10%. Accuracy was determined by area under the curve. Results A total of 81 patients were included in the study. Patients were divided between PR (333 images) and NR (443 images). The pure model had an area under the curve (AUC) of .738 (P < .001), whereas the hybrid model had an AUC of .785 (P < .001). CA19-9 decrease alone was a poor predictor of response, with an AUC of .564 (P = .096). Conclusions A deep learning model can predict pathologic tumor response to neoadjuvant therapy for patients with pancreatic adenocarcinoma, and the model is improved by incorporating decreases in serum CA19-9. Further model development is needed before clinical application.
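The abstract does not specify how the CA19-9 marker was fused with the imaging model, so the sketch below simply blends the CNN's output probability with a binary 10%-decrease flag via a weighted average; the function name and weighting are assumptions for illustration, not the authors' method:

```python
def hybrid_response_score(cnn_prob, ca19_9_pre, ca19_9_post, marker_weight=0.2):
    """Blend the CNN's probability of pathologic response with a binary
    CA19-9 marker (decrease of at least 10% from baseline)."""
    ca_decreased = (ca19_9_pre - ca19_9_post) / ca19_9_pre >= 0.10
    return (1.0 - marker_weight) * cnn_prob + marker_weight * float(ca_decreased)
```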


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Xiaoting Yin ◽  
Xiaosha Tao

Online business has grown exponentially during the last decade, and industries are focusing on online business more than before. However, simply setting up an online store and starting to sell may not work; machine learning and data mining techniques are needed to understand users’ preferences and determine what is best for the business. Based on the decision-making needs of online product sales, the factors influencing online product sales in various industries, and the advantages of deep learning algorithms, this paper constructs a sales prediction model suitable for online products and focuses on evaluating the adaptability of the model across different types of online products. In the research process, a fully connected model is compared with the trained CNN, demonstrating the accuracy and generalization ability of the CNN model. By selecting a non-deep-learning model as the comparison baseline, the performance advantages of the CNN model across different product categories are demonstrated. In addition, the experiments show that an unsupervised pretrained CNN model is more effective and adaptable in sales forecasting.


2020 ◽  
Author(s):  
Shaan Khurshid ◽  
Samuel Friedman ◽  
James P. Pirruccello ◽  
Paolo Di Achille ◽  
Nathaniel Diamant ◽  
...  

Abstract Background Cardiac magnetic resonance (CMR) is the gold standard for left ventricular hypertrophy (LVH) diagnosis. CMR-derived LV mass can be estimated using proprietary algorithms (e.g., inlineVF), but their accuracy and availability may be limited. Objective To develop an open-source deep learning model to estimate CMR-derived LV mass. Methods Within participants of the UK Biobank prospective cohort undergoing CMR, we trained two convolutional neural networks to estimate LV mass. The first (ML4Hreg) performed regression informed by manually labeled LV mass (available in 5,065 individuals), while the second (ML4Hseg) performed LV segmentation informed by inlineVF contours. We compared ML4Hreg, ML4Hseg, and inlineVF against manually labeled LV mass within an independent holdout set using Pearson correlation and mean absolute error (MAE). We assessed associations between CMR-derived LVH and prevalent cardiovascular disease using logistic regression adjusted for age and sex. Results We generated CMR-derived LV mass estimates for 38,574 individuals. Among 891 individuals in the holdout set, ML4Hseg reproduced manually labeled LV mass more accurately (r=0.864, 95% CI 0.847-0.880; MAE 10.41 g, 95% CI 9.82-10.99) than ML4Hreg (r=0.843, 95% CI 0.823-0.861; MAE 10.51 g, 95% CI 9.86-11.15, p=0.01) and inlineVF (r=0.795, 95% CI 0.770-0.818; MAE 14.30 g, 95% CI 13.46-11.01, p<0.01). LVH defined using ML4Hseg demonstrated the strongest associations with hypertension (odds ratio 2.76, 95% CI 2.51-3.04), atrial fibrillation (1.75, 95% CI 1.37-2.20), and heart failure (4.53, 95% CI 3.16-6.33). Conclusions ML4Hseg is an open-source deep learning model providing automated quantification of CMR-derived LV mass. Deep learning models characterizing cardiac structure may facilitate broad cardiovascular discovery.
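A segmentation-based LV mass estimate like ML4Hseg's output can be derived from a myocardium mask by multiplying the segmented voxel volume by an assumed myocardial density of about 1.05 g/mL; a sketch (the voxel dimensions and density constant are illustrative assumptions, not values from the paper):

```python
import numpy as np

def lv_mass_grams(myocardium_mask, voxel_dims_mm, density_g_per_ml=1.05):
    """LV mass from a binary myocardium segmentation: voxel count times
    voxel volume (mm^3 converted to mL) times assumed myocardial density."""
    voxel_ml = float(np.prod(voxel_dims_mm)) / 1000.0   # mm^3 -> mL
    return float(myocardium_mask.sum()) * voxel_ml * density_g_per_ml

# 1,000 segmented voxels of 1 mm^3 each -> 1 mL of myocardium
mass = lv_mass_grams(np.ones((10, 10, 10)), (1.0, 1.0, 1.0))
```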


BMJ Open ◽  
2020 ◽  
Vol 10 (9) ◽  
pp. e036423
Author(s):  
Zhigang Song ◽  
Chunkai Yu ◽  
Shuangmei Zou ◽  
Wenmiao Wang ◽  
Yong Huang ◽  
...  

Objectives The microscopic evaluation of slides has been gradually moving towards all-digital in recent years, opening the possibility of computer-aided diagnosis. It is worthwhile to know the similarities between deep learning models and pathologists before we put them into practical scenarios. The simple criteria of colorectal adenoma diagnosis make it a perfect testbed for this study. Design The deep learning model was trained on 177 accurately labelled training slides (156 with adenoma). The detailed labelling was performed on a self-developed annotation system based on iPad. We built the model on DeepLab v2 with ResNet-34. The model performance was tested on 194 test slides and compared with five pathologists. Furthermore, the generalisation ability of the model was tested on an extra 168 slides (111 with adenoma) collected from two other hospitals. Results The deep learning model achieved an area under the curve of 0.92 and obtained a slide-level accuracy of over 90% on slides from the two other hospitals. The performance was on par with that of experienced pathologists and exceeded the average pathologist. By investigating the feature maps and cases misdiagnosed by the model, we found concordance between the diagnostic reasoning of the deep learning model and that of pathologists. Conclusions The deep learning model for colorectal adenoma diagnosis behaves quite similarly to pathologists: it matches their performance, makes similar mistakes and learns rational reasoning logic. Meanwhile, it obtains high accuracy on slides collected from different hospitals with significant variations in staining configuration.

