Can we predict early 7-day readmissions using a standard 30-day hospital readmission risk prediction model?

2019 ◽  
Author(s):  
Sameh N. Saleh ◽  
Anil N. Makam ◽  
Ethan A. Halm ◽  
Oanh Kieu Nguyen


Abstract Background Despite focus on preventing 30-day readmissions, early readmissions (within 7 days of discharge) may be more preventable than later readmissions (8–30 days). We assessed how well a previously validated 30-day EHR-based readmission prediction model predicts 7-day readmissions and compared differences in strength of predictors. Methods We conducted an observational study of adult hospitalizations from 6 diverse hospitals in North Texas using a 50–50 split-sample derivation and validation approach. We re-derived model coefficients for the same predictors as in the original 30-day model to optimize prediction of 7-day readmissions. We then compared the discrimination and calibration of the 7-day model to those of the 30-day model to assess model performance. To examine changes in the point estimates between the two models, we evaluated the percent changes in coefficients. Results Of 32,922 index hospitalizations among unique patients, 4.4% had a 7-day readmission and 12.7% had a 30-day readmission. Our original 30-day model had modestly lower discrimination for predicting 7-day vs. any 30-day readmission (C-statistic of 0.66 vs. 0.69, p ≤ 0.001). Our re-derived 7-day model had similar discrimination (C-statistic of 0.66, p = 0.38) but improved calibration. In the re-derived 7-day model, discharge-day factors were more predictive of early readmissions, while baseline characteristics were less predictive. Conclusion A previously validated 30-day readmission model can also be used as a stopgap to predict 7-day readmissions, as model performance did not substantially change. However, the strength of predictors differed between the 7-day and 30-day models: characteristics at discharge were more predictive of 7-day readmissions, while baseline characteristics were less predictive. Improvements in predicting early 7-day readmissions will likely require new risk factors proximal to the day of discharge.
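The comparison above turns on the C-statistic: the probability that a randomly chosen readmitted patient is assigned a higher predicted risk than a randomly chosen non-readmitted patient. A minimal sketch of that computation, using toy outcomes and risks rather than the study's data or model:

```python
import itertools

def c_statistic(y_true, y_prob):
    """Concordance (C-statistic, equal to the ROC AUC for binary outcomes):
    the fraction of event/non-event pairs in which the event gets the
    higher predicted risk; ties count as half."""
    pairs = 0
    concordant = 0.0
    for (yi, pi), (yj, pj) in itertools.combinations(zip(y_true, y_prob), 2):
        if yi == yj:
            continue  # only event vs. non-event pairs are informative
        pairs += 1
        hi, lo = (pi, pj) if yi == 1 else (pj, pi)
        if hi > lo:
            concordant += 1.0
        elif hi == lo:
            concordant += 0.5
    return concordant / pairs

# Toy cohort: 1 = readmitted within 7 days, paired with predicted risk.
y = [1, 0, 1, 0, 0]
p = [0.8, 0.3, 0.35, 0.4, 0.2]
print(round(c_statistic(y, p), 2))  # prints 0.83
```

Here 5 of the 6 informative pairs are concordant, giving a C-statistic of about 0.83; the 0.66 to 0.69 values in the abstract indicate considerably weaker separation.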


2019 ◽  
Vol 2019 ◽  
pp. 1-11 ◽  
Author(s):  
Xiaona Jia ◽  
Mirza Mansoor Baig ◽  
Farhaan Mirza ◽  
Hamid GholamHosseini

Background and Objective. Current cardiovascular disease (CVD) risk models are typically based on traditional laboratory-based predictors. The objective of this research was to identify key risk factors that affect CVD risk prediction and to develop a 10-year CVD risk prediction model using the identified risk factors. Methods. A Cox proportional hazards regression method was applied to generate the proposed risk model. We used the dataset from the Framingham Original Cohort of 5079 men and women aged 30-62 years who had no overt symptoms of CVD at baseline; 3189 of this cohort had a CVD event. Results. A 10-year CVD risk model based on multiple risk factors (age, sex, body mass index (BMI), hypertension, systolic blood pressure (SBP), cigarettes per day, pulse rate, and diabetes) was developed, in which heart rate was identified as a novel risk factor. The proposed model achieved good discrimination and calibration, with a C-index (area under the receiver operating characteristic (ROC) curve) of 0.71 in the validation dataset. We validated the model via statistical and empirical validation. Conclusion. The proposed CVD risk prediction model is based on standard risk factors, which could help reduce the cost and time required for conducting clinical/laboratory tests. Healthcare providers, clinicians, and patients can use this tool to see the 10-year CVD risk for an individual. Heart rate was incorporated as a novel predictor, extending the predictive ability of existing risk equations.
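In a Cox model of this kind, an individual's 10-year absolute risk follows from the baseline survival and the linear predictor as risk = 1 - S0(10) ** exp(sum of beta * (x - mean)). A sketch with hypothetical coefficients, covariate means, and baseline survival; none of these are the values fitted in the study:

```python
import math

# Hypothetical coefficients, cohort means, and baseline survival, for
# illustration only; these are NOT the values fitted in the study.
coef  = {"age": 0.05, "sbp": 0.015, "bmi": 0.04, "heart_rate": 0.01}
means = {"age": 45.0, "sbp": 130.0, "bmi": 25.0, "heart_rate": 72.0}
S0_10 = 0.90  # assumed baseline 10-year survival at the covariate means

def ten_year_risk(x):
    """Cox-model absolute risk: 1 - S0(10) ** exp(linear predictor),
    with the linear predictor centered at the cohort means."""
    lp = sum(coef[k] * (x[k] - means[k]) for k in coef)
    return 1.0 - S0_10 ** math.exp(lp)

patient = {"age": 55, "sbp": 145, "bmi": 28, "heart_rate": 80}
print(round(ten_year_risk(patient), 3))  # prints 0.233
```

The same formula underlies published Framingham-style risk equations; only the fitted coefficients, covariate means, and S0(10) differ.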


2018 ◽  
Author(s):  
Anabela Correia Martins ◽  
Juliana Moreira ◽  
Catarina Silva ◽  
Joana Silva ◽  
Cláudia Tonelo ◽  
...  

BACKGROUND Falls are a major health problem among older adults. The risk of falling can be increased by polypharmacy, vision impairment, high blood pressure, environmental home hazards, fear of falling, and age-related changes in the musculoskeletal and sensory systems. Moreover, individuals who have experienced previous falls are at higher risk. Nevertheless, falls can be prevented by screening for known risk factors. OBJECTIVE The objective of our study was to develop a multifactorial, instrumented screening tool for fall risk, based on the key risk factors for falls, among Portuguese community-dwelling adults aged 50 years or over, and to prospectively validate a risk prediction model for the risk of falling. METHODS This prospective study, following a convenience sampling method, will recruit community-dwelling adults aged 50 years or over who stand and walk independently, with or without walking aids, from parish councils, physical therapy clinics, seniors' universities, and other facilities in different regions of continental Portugal. The FallSensing screening tool is a technological solution for fall risk screening that includes software, a pressure platform, and 2 inertial sensors. The screening includes questions about demographic and anthropometric data and health and lifestyle behaviors; a detailed explanation of the procedures for 6 functional tests (grip strength, Timed Up and Go, 30-second sit to stand, step test, 4-Stage Balance test "modified," and 10-meter walking speed); 3 questionnaires concerning environmental home hazards; and an activity and participation profile related to mobility and self-efficacy for exercise. RESULTS Enrollment began in June 2016, and we anticipate study completion by the end of 2018. CONCLUSIONS The FallSensing screening tool is a multifactorial, evidence-based assessment that identifies factors contributing to fall risk. Establishing a risk prediction model will allow preventive strategies to be implemented, potentially decreasing the fall rate. REGISTERED REPORT IDENTIFIER RR1-10.2196/10304


Author(s):  
Masaru Samura ◽  
Naoki Hirose ◽  
Takenori Kurata ◽  
Keisuke Takada ◽  
Fumio Nagumo ◽  
...  

Abstract Background In this study, we investigated the risk factors for daptomycin-associated creatine phosphokinase (CPK) elevation and established a risk score for CPK elevation. Methods Patients who received daptomycin at our hospital were classified into the normal or elevated CPK group based on their peak CPK levels during daptomycin therapy. Univariable and multivariable analyses were performed, and a risk score and prediction model for the incidence probability of CPK elevation were calculated based on logistic regression analysis. Results The normal and elevated CPK groups included 181 and 17 patients, respectively. Logistic regression analysis revealed that concomitant statin use (odds ratio [OR] 4.45, 95% confidence interval [CI] 1.40–14.47, risk score 4), concomitant antihistamine use (OR 5.66, 95% CI 1.58–20.75, risk score 4), and trough concentration (Cmin) of 20 to <30 µg/mL (OR 14.48, 95% CI 2.90–87.13, risk score 5) or ≥30.0 µg/mL (OR 24.64, 95% CI 3.21–204.53, risk score 5) were risk factors for daptomycin-associated CPK elevation. The predicted incidence probabilities of CPK elevation were <10% (low risk), 10%–<25% (moderate risk), and ≥25% (high risk) for total risk scores of ≤4, 5–6, and ≥8, respectively. The risk prediction model exhibited a good fit (area under the receiver operating characteristic curve 0.85, 95% CI 0.74–0.95). Conclusions These results suggest that concomitant use of statins or antihistamines and a Cmin ≥20 µg/mL are risk factors for daptomycin-associated CPK elevation. Our prediction model might aid in reducing the incidence of daptomycin-associated CPK elevation.
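The published point values can be turned into a bedside lookup directly. The sketch below hardcodes the abstract's risk scores and probability bands; the factor names are illustrative labels of mine, and note that with these point values no combination of factors actually totals 6 or 7:

```python
# Point values reported in the abstract for each independent risk factor
# (factor names are illustrative labels, not the paper's variable names).
RISK_POINTS = {
    "concomitant_statin": 4,
    "concomitant_antihistamine": 4,
    "cmin_20_to_30": 5,   # daptomycin trough 20 to <30 ug/mL
    "cmin_ge_30": 5,      # daptomycin trough >= 30 ug/mL
}

def risk_band(factors):
    """Sum the points for a patient's risk factors and map the total to the
    predicted-probability bands from the abstract: low <10%,
    moderate 10%-<25%, high >=25%."""
    total = sum(RISK_POINTS[f] for f in factors)
    if total <= 4:
        band = "low (<10%)"
    elif total <= 6:
        band = "moderate (10%-<25%)"
    else:
        band = "high (>=25%)"
    return total, band

print(risk_band(["concomitant_statin"]))                 # (4, 'low (<10%)')
print(risk_band(["concomitant_statin", "cmin_ge_30"]))   # (9, 'high (>=25%)')
```

For example, a patient on a statin whose trough is 32 µg/mL scores 9 points and falls in the high-risk band.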


Blood ◽  
2016 ◽  
Vol 128 (22) ◽  
pp. 2399-2399 ◽  
Author(s):  
Chun Chao ◽  
Lanfang Xu ◽  
Leila Family ◽  
Hairong Xu

Abstract Introduction: Chemotherapy-induced anemia (CIA) is associated with an array of symptoms that can negatively impact patients' quality of life. The incidence and severity of CIA vary significantly depending on the cancer type and chemotherapy regimen administered. Several patient characteristics, such as age, gender, renal function, and pre-treatment hemoglobin (Hb) and albumin levels, have also been reported to be associated with the risk of CIA. However, a comprehensive risk prediction model for CIA is lacking. Here we sought to develop a risk prediction model for severe CIA (Hb<8 g/dl) in breast cancer patients that accounts for detailed chemotherapy regimens and novel risk factors for anemia. Methods: Women diagnosed with incident breast cancer at age 18 or older between 2000 and 2012 at Kaiser Permanente Southern California (KPSC) who initiated myelosuppressive chemotherapy before June 30, 2013 were included. Women who did not have any hemoglobin measurement prior to or during the course of chemotherapy were excluded. Those who had the following conditions prior to chemotherapy were also excluded: less than 12 months of KPSC membership, anemia, transfusion, radiation therapy, or bone marrow transplant. Potential predictors considered included established CIA risk factors, such as patient demographic characteristics, cancer stage at diagnosis, chemotherapy regimens, and laboratory measurements (Table 1). In addition, several novel risk factors were also evaluated for their ability to predict severe CIA; these included recent cancer surgery and radiation therapy, chronic comorbidities (Table 1), and medication use (Table 1). All data were collected from KPSC's electronic health records. The cohort was randomly split into a training set (50%) and a validation set (50%). Logistic regression was used to develop the risk prediction model for severe CIA. Predictors that showed a crude association with severe CIA, with an odds ratio >1.5 or <0.67 (i.e., 1/1.5) or a p-value <0.10 in the training set, were included for predictive model selection. A stepwise model selection method was used with a p-value cut-off of 0.05. The performance of the selected final model was evaluated in the validation set using the Hosmer-Lemeshow goodness-of-fit test and the area under the receiver operating characteristic curve (AUC). Results: A total of 11,291 breast cancer patients were included in the study. The mean age at diagnosis was 55 years. The majority of patients were of non-Hispanic white race/ethnicity (57%). Of these, 3.0% developed severe CIA during chemotherapy. The following factors were positively associated with the risk of developing severe anemia in the crude analyses and were thus included for model selection: age >65, advanced stage, length of KPSC membership, time from cancer diagnosis to chemotherapy, prior radiation therapy, vascular disease, renal disease, hypertension, osteoarthritis, use of steroids, use of diuretics, use of calcium channel blockers, use of statins, chemotherapy regimens, prior surgery, anti-coagulant use, calendar period, and baseline ALP, HCT, HGB, lymphocyte count, MCH, MCV, ANC, platelet count, RBC, RDW, WBC, and GFR (calculated from creatinine). The final model included age, stage, chemotherapy regimen, corticosteroid use, and baseline Hb, MCV, and GFR. The odds ratio and 95% confidence interval estimates of variables in the final model, in both the training set and the validation set, are shown in Table 2. This prediction model achieved an AUC of 0.76 in the validation set and passed the goodness-of-fit test (test statistic 0.17). Conclusion: The risk prediction model incorporating traditional and novel CIA risk factors appeared to perform well and may assist clinicians in increasing surveillance of patients at high risk of severe CIA during chemotherapy.
Disclosures Chao: Amgen Inc.: Research Funding. Xu: Amgen Inc.: Research Funding. Family: Amgen Inc.: Research Funding. Xu: Amgen Inc.: Research Funding.
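The screening step described in the Methods above (retain predictors whose crude OR is >1.5 or <1/1.5) is easy to reproduce from a 2x2 table. A sketch with hypothetical counts; the p-value criterion is omitted:

```python
def crude_odds_ratio(exp_cases, exp_controls, unexp_cases, unexp_controls):
    """Crude odds ratio from a 2x2 table of predictor status vs. severe CIA:
    (a*d) / (b*c) for exposed/unexposed cases and controls."""
    return (exp_cases * unexp_controls) / (exp_controls * unexp_cases)

def passes_screen(odds_ratio):
    """Screening rule from the abstract: retain predictors with a crude
    OR > 1.5 or < 0.67 (1/1.5); the p < 0.10 criterion is not modeled here."""
    return odds_ratio > 1.5 or odds_ratio < 1 / 1.5

# Hypothetical counts for one candidate predictor (e.g. prior radiation):
# 30 of 1000 exposed and 60 of 4000 unexposed patients developed severe CIA.
or_est = crude_odds_ratio(30, 970, 60, 3940)
print(round(or_est, 2), passes_screen(or_est))  # 2.03 True
```

A symmetric cut-off on the OR scale (1.5 and its reciprocal) treats protective and harmful associations of equal strength the same way before the stepwise selection.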


10.2196/23128 ◽  
2020 ◽  
Vol 22 (11) ◽  
pp. e23128
Author(s):  
Pan Pan ◽  
Yichao Li ◽  
Yongjiu Xiao ◽  
Bingchao Han ◽  
Longxiang Su ◽  
...  

Background Patients with COVID-19 in the intensive care unit (ICU) have a high mortality rate, and methods to assess patients' prognosis early and administer precise treatment are of great significance. Objective The aim of this study was to use machine learning to construct a model for the analysis of risk factors and prediction of mortality among ICU patients with COVID-19. Methods In this study, 123 patients with COVID-19 in the ICU of Vulcan Hill Hospital were retrospectively selected from the database, and the data were randomly divided into a training data set (n=98) and a test data set (n=25) at a 4:1 ratio. Significance tests, correlation analysis, and factor analysis were used to individually screen 100 potential risk factors. Conventional logistic regression methods and four machine learning algorithms were used to construct risk prediction models for the prognosis of patients with COVID-19 in the ICU. The performance of these machine learning models was measured by the area under the receiver operating characteristic curve (AUC). Interpretation and evaluation of the risk prediction model were performed using calibration curves, SHapley Additive exPlanations (SHAP), and Local Interpretable Model-Agnostic Explanations (LIME) to ensure its stability and reliability. The outcome was based on the ICU deaths recorded in the database. Results Layer-by-layer screening of the 100 potential risk factors finally revealed 8 important risk factors that were included in the risk prediction model: lymphocyte percentage, prothrombin time, lactate dehydrogenase, total bilirubin, eosinophil percentage, creatinine, neutrophil percentage, and albumin level. An eXtreme Gradient Boosting (XGBoost) model built with these 8 risk factors showed the best discrimination in 5-fold cross-validation on the training set (AUC=0.86) and in the test set (AUC=0.92). The calibration curve showed that the risk predicted by the model was in good agreement with the actual risk. In addition, using the SHAP and LIME algorithms, feature interpretation and sample prediction interpretation of the XGBoost black-box model were implemented. Additionally, the model was translated into a web-based risk calculator that is freely available for public use. Conclusions The 8-factor XGBoost model predicts the risk of death in ICU patients with COVID-19 well and initially demonstrates stability; it can be used effectively to predict COVID-19 prognosis in ICU patients.
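The calibration check reported above compares predicted risk with observed mortality across risk strata. A minimal sketch of that binning, using toy outcomes and predictions rather than the study's cohort:

```python
def calibration_bins(y_true, y_prob, n_bins=4):
    """Group predictions into equal-width probability bins and compare the
    mean predicted risk with the observed event rate in each non-empty bin:
    the pairs of values plotted on a calibration curve."""
    bins = [[] for _ in range(n_bins)]
    for y, p in zip(y_true, y_prob):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 to last bin
        bins[idx].append((y, p))
    out = []
    for b in bins:
        if not b:
            continue
        mean_pred = sum(p for _, p in b) / len(b)
        obs_rate = sum(y for y, _ in b) / len(b)
        out.append((round(mean_pred, 2), round(obs_rate, 2)))
    return out

# Toy ICU cohort: outcome (1 = death) and model-predicted probability.
y = [0, 0, 0, 1, 0, 1, 1, 1]
p = [0.10, 0.14, 0.30, 0.36, 0.60, 0.70, 0.85, 0.89]
print(calibration_bins(y, p))  # prints [(0.12, 0.0), (0.33, 0.5), (0.65, 0.5), (0.87, 1.0)]
```

Perfect calibration puts every pair on the diagonal (mean predicted risk equal to the observed rate); the Hosmer-Lemeshow statistic formalizes the same comparison.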

