Clinical longitudinal evaluation of COVID-19 patients and prediction of organ specific recovery using artificial intelligence

Author(s):  
Winston T Wang ◽  
Charlotte L Zhang ◽  
Kang Wei ◽  
Ye Sang ◽  
Jun Shen ◽  
...  

Abstract In COVID-19 there is an urgent unmet need to predict, at the time of hospital admission, which patients will recover from the disease and how fast they will recover, in order to deliver personalized treatments and to properly allocate hospital resources so that healthcare systems do not become overwhelmed. To this end we have combined clinically salient CT imaging data synergistically with laboratory testing data in an integrative machine learning model to predict organ-specific recovery of patients from COVID-19. We trained and validated our model in 285 patients on each separate major organ system impacted by COVID-19, including the renal, pulmonary, immune, cardiac, and hepatic systems. To greatly enhance the speed and utility of our model, we applied an artificial intelligence method to segment and classify regions on CT imaging, from which interpretable data could be fed directly into the predictive machine learning model for overall recovery. Across all organ systems we achieved validation-set area under the receiver operating characteristic curve (AUC) values for organ-specific recovery ranging from 0.80 to 0.89, and significant overall recovery prediction in Kaplan-Meier analyses. This demonstrates that the synergistic use of an AI framework applied to CT lung imaging and a machine learning model that integrates laboratory test data with imaging data can accurately predict the overall recovery of COVID-19 patients from baseline characteristics.
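The headline metric above is the per-organ validation-set AUC. For readers unfamiliar with the statistic, the AUC can be computed directly from labels and scores via the Mann-Whitney rank identity; the sketch below is illustrative only, with hypothetical per-organ predictions (the paper's data and model are not reproduced here).

```python
import numpy as np

def auc_score(y_true, y_score):
    """AUC via the Mann-Whitney rank identity: the probability that a
    randomly chosen positive outranks a randomly chosen negative
    (ties count as 0.5)."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    diff = pos[:, None] - neg[None, :]
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / (len(pos) * len(neg))

# Hypothetical per-organ predictions (labels: 1 = recovered, 0 = not)
organ_preds = {
    "pulmonary": ([1, 0, 1, 1, 0, 1], [0.9, 0.5, 0.4, 0.8, 0.3, 0.6]),
    "renal":     ([1, 0, 0, 1, 1, 0], [0.8, 0.3, 0.7, 0.9, 0.6, 0.1]),
}
for organ, (y, p) in organ_preds.items():
    print(organ, round(auc_score(y, p), 3))
```

The rank identity gives the same number as integrating the empirical ROC curve, without having to sweep thresholds explicitly.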

Author(s):  
S. Sasikala ◽  
S. J. Subhashini ◽  
P. Alli ◽  
J. Jane Rubel Angelina

Machine learning is a technique of parsing data, learning from that data, and then applying what has been learned to make informed decisions. Deep learning is a subset of machine learning: it is technically machine learning and functions in the same way, but it has different capabilities. The main difference is that machine learning models improve progressively but still need some guidance; if a machine learning model returns an inaccurate prediction, the programmer must fix that problem explicitly, whereas a deep learning model corrects it by itself. An automatic car driving system is a good example of deep learning. Artificial intelligence, on the other hand, is a broader concept: both machine learning and deep learning are subsets of AI.


2021 ◽  
Vol 132 (2) ◽  
pp. S52
Author(s):  
John L. Jefferies ◽  
Alison K. Spencer ◽  
David G. Warnock ◽  
Heather A. Lau ◽  
Matthew W. Nelson ◽  
...  

PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0240200
Author(s):  
Miguel Marcos ◽  
Moncef Belhassen-García ◽  
Antonio Sánchez-Puente ◽  
Jesús Sampedro-Gomez ◽  
Raúl Azibeiro ◽  
...  

Background Efficient and early triage of hospitalized Covid-19 patients to detect those at higher risk of severe disease is essential for appropriate case management. Methods We trained, validated, and externally tested a machine-learning model to identify, early in hospitalization, patients who will die or require mechanical ventilation, using clinical and laboratory features obtained at admission. A development cohort of 918 Covid-19 patients was used for training and internal validation, and 352 patients from another hospital were used for external testing. Performance of the model was evaluated by calculating the area under the receiver-operating-characteristic curve (AUC), sensitivity, and specificity. Results A total of 363 of 918 (39.5%) and 128 of 352 (36.4%) Covid-19 patients from the development and external testing cohorts, respectively, required mechanical ventilation or died during hospitalization. In the development cohort, the model obtained an AUC of 0.85 (95% confidence interval [CI], 0.82 to 0.87) for predicting severity of disease progression. Variables ranked according to their contribution to the model were the peripheral blood oxygen saturation (SpO2)/fraction of inspired oxygen (FiO2) ratio, age, estimated glomerular filtration rate, procalcitonin, C-reactive protein, updated Charlson comorbidity index, and lymphocytes. In the external testing cohort, the model achieved an AUC of 0.83 (95% CI, 0.81 to 0.85). This model is deployed in an open-source calculator, in which Covid-19 patients at admission are individually stratified as being at high or non-high risk for severe disease progression. Conclusions This machine-learning model, applied at hospital admission, predicts the risk of severe disease progression in Covid-19 patients.
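The open-source calculator described above stratifies patients as high vs non-high risk from admission variables. A minimal sketch of such a calculator follows, using the abstract's seven ranked variables but entirely hypothetical coefficients, intercept, and threshold (the published model's weights are not given in the abstract).

```python
import math

# Illustrative logistic-regression-style weights -- hypothetical values,
# NOT the published model's coefficients.
WEIGHTS = {
    "spo2_fio2_ratio": -0.011,  # higher SpO2/FiO2 ratio -> lower risk
    "age": 0.04,
    "egfr": -0.015,             # estimated glomerular filtration rate
    "procalcitonin": 0.8,
    "crp": 0.01,                # C-reactive protein
    "charlson_index": 0.25,
    "lymphocytes": -0.5,
}
INTERCEPT = -1.0
THRESHOLD = 0.5  # illustrative cutoff for the "high risk" band

def risk_probability(features):
    """Logistic transform of a weighted sum of admission features."""
    z = INTERCEPT + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def triage(features):
    """Return (band, probability) for one patient."""
    p = risk_probability(features)
    return ("high" if p >= THRESHOLD else "non-high"), p

# Example: an elderly patient with poor oxygenation and renal function
patient = {"spo2_fio2_ratio": 150, "age": 80, "egfr": 30,
           "procalcitonin": 2.0, "crp": 150, "charlson_index": 5,
           "lymphocytes": 0.4}
print(triage(patient))
```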





10.2196/22689 ◽  
2020 ◽  
Vol 8 (11) ◽  
pp. e22689
Author(s):  
Gang Luo ◽  
Claudia L Nau ◽  
William W Crawford ◽  
Michael Schatz ◽  
Robert S Zeiger ◽  
...  

Background Asthma causes numerous hospital encounters annually, including emergency department visits and hospitalizations. To improve patient outcomes and reduce the number of these encounters, predictive models are widely used to prospectively pinpoint high-risk patients with asthma for preventive care via care management. However, previous models do not have adequate accuracy to achieve this goal well. Adopting the modeling guideline for checking extensive candidate features, we recently constructed a machine learning model on Intermountain Healthcare data to predict asthma-related hospital encounters in patients with asthma. Although this model is more accurate than the previous models, whether our modeling guideline is generalizable to other health care systems remains unknown. Objective This study aims to assess the generalizability of our modeling guideline to Kaiser Permanente Southern California (KPSC). Methods The patient cohort included a random sample of 70.00% (397,858/568,369) of patients with asthma who were enrolled in a KPSC health plan for any duration between 2015 and 2018. We produced a machine learning model via a secondary analysis of 987,506 KPSC data instances from 2012 to 2017 and by checking 337 candidate features to project asthma-related hospital encounters in the following 12-month period in patients with asthma. Results Our model reached an area under the receiver operating characteristic curve of 0.820. When the cutoff point for binary classification was placed at the top 10.00% (20,474/204,744) of patients with asthma having the largest predicted risk, our model achieved an accuracy of 90.08% (184,435/204,744), a sensitivity of 51.90% (2259/4353), and a specificity of 90.91% (182,176/200,391). Conclusions Our modeling guideline exhibited acceptable generalizability to KPSC and resulted in a model that is more accurate than those formerly built by others. After further enhancement, our model could be used to guide asthma care management. International Registered Report Identifier (IRRID) RR2-10.2196/resprot.5039
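The asthma model above is evaluated at a cutoff placed at the top 10% of predicted risk. The sketch below shows how accuracy, sensitivity, and specificity fall out of such a top-fraction cutoff; the data are toy values, not the study's.

```python
import numpy as np

def metrics_at_top_fraction(y_true, risk, fraction=0.10):
    """Flag the top `fraction` of patients by predicted risk as positive
    and report accuracy, sensitivity, and specificity at that cutoff."""
    y_true = np.asarray(y_true)
    order = np.argsort(risk)[::-1]          # highest predicted risk first
    k = int(round(fraction * len(risk)))
    flagged = np.zeros(len(risk), dtype=bool)
    flagged[order[:k]] = True
    tp = np.sum(flagged & (y_true == 1))
    tn = np.sum(~flagged & (y_true == 0))
    fp = np.sum(flagged & (y_true == 0))
    fn = np.sum(~flagged & (y_true == 1))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Toy cohort of 10 patients: 2 had an asthma-related hospital encounter
y = [1, 0, 0, 0, 0, 0, 0, 0, 0, 1]
risk = [0.9, 0.1, 0.2, 0.3, 0.15, 0.25, 0.05, 0.12, 0.18, 0.4]
print(metrics_at_top_fraction(y, risk, fraction=0.10))
```

Because only the top fraction is flagged, specificity is typically high while sensitivity is limited by how many true positives fall inside the flagged group, mirroring the 90.91%/51.90% split reported above.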


2021 ◽  
Vol 11 (2) ◽  
pp. 529-534
Author(s):  
Kareen Teo ◽  
Ching Wai Yong ◽  
Joon Huang Chuah ◽  
Belinda Pingguan Murphy ◽  
Khin Wee Lai

Hospital readmission shortly after discharge is contributing to rising medical care costs. Attempts have been made to reduce readmission rates by predicting patients at high risk of this episode on the basis of unstructured clinical notes. The discharge summary, as part of the clinical notes, is effective for modeling readmission risk. However, the predictive value of notes written upon discharge offers few opportunities to reduce the chance of readmission, because the target patient might have already been discharged. This paper presents the use of early clinical notes in building a machine learning model to predict readmission at 48 h after a patient's admission. Extensive feature engineering, testing of multiple algorithms, and algorithm tuning were performed to enhance model performance. A risk scoring framework that combines data- and knowledge-driven feature scores in the risk computation was developed. The proposed predictive model showed better prognostic capability than the machine learning model alone in terms of the ability to detect readmission. Specifically, the proposed algorithm showed improvements of 11%–28% in sensitivity and 1%–3% in the area under the receiver operating characteristic curve.
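The risk scoring framework above combines data-driven and knowledge-driven feature scores, but the exact combination rule is not given in the abstract. The sketch below therefore assumes a simple weighted blend, with made-up clinical rules and weights, purely to illustrate the idea.

```python
# Hypothetical knowledge-driven rules: each triggered rule adds its weight
# to the score (rules and weights are illustrative, not the paper's).
KNOWLEDGE_RULES = [
    ("prior_admission_last_year", 0.3),
    ("three_or_more_comorbidities", 0.4),
    ("lives_alone", 0.1),
]

def knowledge_score(patient_flags):
    """Sum the weights of triggered rules, capped at 1.0."""
    return min(1.0, sum(w for name, w in KNOWLEDGE_RULES
                        if patient_flags.get(name)))

def combined_risk(model_prob, patient_flags, alpha=0.7):
    """Blend the data-driven model probability with the rule-based score.
    alpha is an assumed mixing weight, not taken from the paper."""
    return alpha * model_prob + (1 - alpha) * knowledge_score(patient_flags)

# Example: a patient the model scores at 0.5 who also had a prior admission
print(combined_risk(0.5, {"prior_admission_last_year": True}))
```

The knowledge term lets clinician-authored rules raise the score of patients the statistical model underrates, which is one plausible reading of why the combined framework improved sensitivity.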


2021 ◽  
Author(s):  
Bon San Koo ◽  
Seongho Eun ◽  
Kichul Shin ◽  
Hyemin Yoon ◽  
Chaelin Hong ◽  
...  

Abstract Background: We developed a model to predict remission in patients treated with biologic disease-modifying anti-rheumatic drugs (bDMARDs) and to identify important clinical features associated with remission using explainable artificial intelligence (XAI). Methods: We gathered the follow-up data of 1204 patients treated with bDMARDs (etanercept, adalimumab, golimumab, infliximab, abatacept, and tocilizumab) from the Korean College of Rheumatology Biologics and Targeted Therapy Registry. Remission was predicted at one-year follow-up using baseline clinical data obtained at the time of enrollment. Machine learning methods (e.g., lasso, ridge, support vector machine, random forest, and XGBoost) were used for the predictions. The Shapley additive explanation (SHAP) value was used for interpretability of the predictions. Results: The ranges of accuracy and area under the receiver operating characteristic curve of the newly developed machine learning model for predicting remission were 52.8%–72.9% and 0.511–0.694, respectively. The Shapley plot in XAI showed that the impacts of the variables on predicting remission differed for each bDMARD. The most important features were age for adalimumab, rheumatoid factor for etanercept, erythrocyte sedimentation rate for infliximab and golimumab, disease duration for abatacept, and C-reactive protein for tocilizumab, with mean SHAP values of -0.250, -0.234, -0.514, -0.227, -0.804, and 0.135, respectively. Conclusions: Our proposed machine learning model successfully identified clinical features that were predictive of remission for each of the bDMARDs. This approach may be useful for improving treatment outcomes by identifying clinical information related to remission in patients with rheumatoid arthritis.
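The study above ranks features by SHAP values. For a linear model, SHAP values have an exact closed form, phi_j = w_j * (x_j - mean(x_j)) (assuming feature independence), which makes the ranking-by-mean-|SHAP| idea easy to illustrate without the shap package; the weights and features below are hypothetical, and the paper's actual models are nonlinear.

```python
import numpy as np

def linear_shap_values(w, X):
    """Exact SHAP values for a linear model f(x) = w.x + b, assuming
    independent features: phi_ij = w_j * (x_ij - mean_j). Each row of
    the result sums to f(x_i) minus the mean prediction."""
    X = np.asarray(X, dtype=float)
    return np.asarray(w) * (X - X.mean(axis=0))

# Hypothetical remission model over three baseline features:
# age, C-reactive protein, disease duration (weights are illustrative)
w = np.array([-0.8, 0.1, -0.3])
X = np.array([[55.0, 2.0, 4.0],
              [40.0, 8.0, 1.0],
              [62.0, 1.0, 10.0]])

phi = linear_shap_values(w, X)
# Rank features by mean absolute SHAP value, most influential first
ranking = np.argsort(-np.abs(phi).mean(axis=0))
print(ranking)
```

Ranking by mean |SHAP| is exactly how "most important feature per drug" summaries like the one above are produced, whatever the underlying model.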


2020 ◽  
pp. 2001104 ◽  
Author(s):  
Guangyao Wu ◽  
Pei Yang ◽  
Yuanliang Xie ◽  
Henry C. Woodruff ◽  
Xiangang Rao ◽  
...  

Background: The outbreak of the coronavirus disease 2019 (COVID-19) has globally strained medical resources and caused significant mortality. Objective: To develop and validate a machine-learning model based on clinical features for severity risk assessment and triage of COVID-19 patients at hospital admission. Methods: 725 patients were used to train and validate the model, including a retrospective cohort of 299 hospitalised COVID-19 patients at Wuhan, China, from December 23, 2019, to February 13, 2020, and five cohorts with 426 patients from eight centers in China, Italy, and Belgium, from February 20, 2020, to March 21, 2020. The main outcome was the onset of severe or critical illness during hospitalisation. Model performance was quantified using the area under the receiver operating characteristic curve (AUC) and metrics derived from the confusion matrix. Results: The median age was 50.0 years and 137 (45.8%) were men in the retrospective cohort. The median age was 62.0 years and 236 (55.4%) were men in the five cohorts. The model was prospectively validated on the five cohorts, yielding AUCs ranging from 0.84 to 0.89, with accuracies ranging from 74.4% to 87.5%, sensitivities ranging from 75.0% to 96.9%, and specificities ranging from 57.5% to 88.0%, all of which performed better than the pneumonia severity index. The cut-off values of the low-, medium-, and high-risk probabilities were 0.21 and 0.80. The online calculators can be found at www.covid19risk.ai. Conclusion: The machine-learning model, nomogram, and online calculator might be useful to assess the onset of severe and critical illness among COVID-19 patients and for triage at hospital admission.
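With the probability cut-offs of 0.21 and 0.80 reported above, triage reduces to a three-band lookup. A minimal sketch follows; how the boundary values themselves are assigned is an assumption, as the abstract does not specify it.

```python
LOW_CUT, HIGH_CUT = 0.21, 0.80  # probability cut-offs reported in the abstract

def triage_band(prob):
    """Map a predicted severity probability to a triage band.
    Boundary handling (band membership at exactly 0.21 or 0.80)
    is assumed, not stated in the abstract."""
    if prob < LOW_CUT:
        return "low"
    if prob < HIGH_CUT:
        return "medium"
    return "high"

for p in (0.10, 0.50, 0.90):
    print(p, triage_band(p))
```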


Sarcoma ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-9 ◽  
Author(s):  
Ieva Malinauskaite ◽  
Jeremy Hofmeister ◽  
Simon Burgermeister ◽  
Angeliki Neroladaki ◽  
Marion Hamard ◽  
...  

Distinguishing lipoma from liposarcoma is challenging on conventional MRI examination. In case of uncertain diagnosis following MRI, a further invasive procedure (percutaneous biopsy or surgery) is often required to allow for diagnosis based on histopathological examination. Radiomics and machine learning allow several types of pathologies encountered on radiological images to be automatically and reliably distinguished. The aim of the study was to assess the contribution of radiomics and machine learning to the differentiation between soft-tissue lipoma and liposarcoma on preoperative MRI, and to assess the diagnostic accuracy of a machine-learning model compared to musculoskeletal radiologists. 86 radiomics features were retrospectively extracted from volumes of interest on T1-weighted spin-echo 1.5 and 3.0 Tesla MRI of 38 soft-tissue tumors (24 lipomas and 14 liposarcomas, based on histopathological diagnosis). These radiomics features were then used to train a machine-learning classifier to distinguish lipoma and liposarcoma. The generalization performance of the machine-learning model was assessed using Monte-Carlo cross-validation and receiver operating characteristic curve analysis (ROC-AUC). Finally, the performance of the machine-learning model was compared to the accuracy of three specialized musculoskeletal radiologists using the McNemar test. The machine-learning classifier accurately distinguished lipoma and liposarcoma, with a ROC-AUC of 0.926. Notably, it performed better than the three specialized musculoskeletal radiologists reviewing the same patients, who achieved ROC-AUCs of 0.685, 0.805, and 0.785. Despite being developed on few cases, the trained machine-learning classifier accurately distinguishes lipoma and liposarcoma on preoperative MRI, with better performance than specialized musculoskeletal radiologists.
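The study above assesses generalization with Monte-Carlo cross-validation, i.e., repeated random train/test splits each scored by ROC-AUC, which suits a small cohort like 38 tumors better than a single fixed split. The sketch below illustrates the procedure with a nearest-centroid stand-in classifier and toy two-feature data; the study's radiomics features and actual classifier are not reproduced.

```python
import numpy as np

def rank_auc(y, s):
    """AUC via the Mann-Whitney rank identity (ties count 0.5)."""
    pos, neg = s[y == 1], s[y == 0]
    d = pos[:, None] - neg[None, :]
    return (np.sum(d > 0) + 0.5 * np.sum(d == 0)) / (len(pos) * len(neg))

def monte_carlo_auc(X, y, n_splits=100, test_frac=0.3, seed=0):
    """Monte-Carlo cross-validation: average test AUC over repeated
    random train/test splits. A nearest-centroid rule stands in for
    the study's classifier."""
    rng = np.random.default_rng(seed)
    n_test = int(test_frac * len(y))
    aucs = []
    for _ in range(n_splits):
        idx = rng.permutation(len(y))
        test, train = idx[:n_test], idx[n_test:]
        # AUC needs both classes present on each side of the split
        if len(np.unique(y[test])) < 2 or len(np.unique(y[train])) < 2:
            continue
        c1 = X[train][y[train] == 1].mean(axis=0)
        c0 = X[train][y[train] == 0].mean(axis=0)
        # score > 0 when a test point lies closer to the class-1 centroid
        score = (np.linalg.norm(X[test] - c0, axis=1)
                 - np.linalg.norm(X[test] - c1, axis=1))
        aucs.append(rank_auc(y[test], score))
    return float(np.mean(aucs))

# Toy "radiomics" data: two well-separated clusters (1 = liposarcoma)
X = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.2], [1.2, 1.0], [0.8, 1.1], [1.0, 0.8],
              [-1.0, -1.0], [-1.1, -0.9], [-0.9, -1.2], [-1.2, -1.0], [-0.8, -1.1], [-1.0, -0.8]])
y = np.array([1] * 6 + [0] * 6)
print(monte_carlo_auc(X, y))  # perfectly separable toy data -> 1.0
```

Averaging over many random splits gives a more stable performance estimate than one split when, as here, only a few dozen cases are available.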

