Quantifying risk stratification provided by diagnostic tests and risk predictions: Comparison to AUC and decision curve analysis

2019 ◽  
Vol 38 (16) ◽  
pp. 2943-2955 ◽  
Author(s):  
Hormuzd A. Katki


Author(s):  
Andrew J. Vickers ◽  
Ben van Calster ◽  
Ewout W. Steyerberg

Abstract Background Decision curve analysis is a method to evaluate prediction models and diagnostic tests that was introduced in a 2006 publication. Decision curves are now commonly reported in the literature, but there remains widespread misunderstanding of and confusion about what they mean. Summary of commentary In this paper, we present a didactic, step-by-step introduction to interpreting a decision curve analysis and answer some common questions about the method. We argue that many of the difficulties with interpreting decision curves can be solved by relabeling the y-axis as “benefit” and the x-axis as “preference.” A model or test can be recommended for clinical use if it has the highest level of benefit across a range of clinically reasonable preferences. Conclusion Decision curves are readily interpretable if readers and authors follow a few simple guidelines.
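The "benefit" on a decision curve's y-axis is net benefit, which weights false positives by the odds of the threshold probability (the "preference" on the x-axis). As a generic illustration of that formula (not code from the paper), a minimal sketch:

```python
def net_benefit(tp, fp, n, pt):
    """Net benefit of a model/test at threshold probability pt.

    tp: true positives, fp: false positives, n: total patients.
    False positives are down-weighted by the odds pt/(1-pt), which
    encodes how the patient trades treatment benefit against harm.
    """
    return tp / n - fp / n * (pt / (1 - pt))


def net_benefit_treat_all(prevalence, pt):
    """Reference strategy in which every patient is treated."""
    return prevalence - (1 - prevalence) * (pt / (1 - pt))
```

A model is recommended if its `net_benefit` exceeds both `net_benefit_treat_all` and zero ("treat none") across the clinically reasonable range of `pt`.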


2019 ◽  
Vol 32 (11) ◽  
Author(s):  
G Zhang ◽  
B Wu ◽  
X Wang ◽  
J Li

SUMMARY The objective of this study is to estimate the probability of cause-specific mortality using a competing-risks nomogram and recursive partitioning analysis in a large population-based cohort of patients with esophageal neuroendocrine carcinoma. The Surveillance, Epidemiology, and End Results database was used to identify 162 patients diagnosed with esophageal neuroendocrine carcinoma from 1998 to 2014. We estimated a cumulative incidence function for cause-specific mortality. A nomogram was constructed by using a proportional subdistribution hazard model, validated using bootstrap cross-validation, and evaluated with a decision curve analysis to assess its clinical utility. Finally, we performed risk stratification using a recursive partitioning analysis to divide patients with esophageal neuroendocrine carcinoma into clinically useful prognostic groups. Tumor location, distant metastasis, surgery, radiotherapy, and chemotherapy were significantly associated with cause-specific mortality. The calibration plots demonstrated good concordance between the predicted and actual outcomes. The discrimination performance of a Fine–Gray model was evaluated by using the c-index, which was 0.723 for cause-specific mortality. Decision curve analysis showed that the risk model provided net clinical benefit relative to hypothetical all-screening and no-screening scenarios across threshold probabilities ranging from 0.268 to 0.968. The risk groups stratified by a recursive partitioning analysis allowed significant distinction between cumulative incidence curves. We determined the probability of cause-specific mortality in patients with esophageal neuroendocrine carcinoma and developed a nomogram and recursive partitioning analysis stratification system based on a competing-risks model.
The nomogram and recursive partitioning analysis appear to be suitable for risk stratification of cause-specific mortality in patients with esophageal neuroendocrine carcinoma and will help clinicians to identify patients at increased risk of cause-specific mortality to guide treatment and surveillance decisions.
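The key competing-risks quantity used above, the cumulative incidence function, differs from 1 − Kaplan–Meier because competing deaths remove patients from risk of the cause of interest. A generic nonparametric (Aalen–Johansen-style) sketch, not the authors' code; event code 0 is censoring, 1 the cause of interest, other codes competing events:

```python
def cumulative_incidence(times, events, cause=1):
    """Nonparametric cumulative incidence of `cause` under competing risks.

    times: event/censoring times; events: 0 = censored, otherwise cause code.
    Returns a list of (time, CIF) step values at each event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0   # overall event-free survival just before the current time
    cif = 0.0
    steps = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d_cause = d_any = censored = 0
        # Group all subjects tied at time t.
        while i < len(data) and data[i][0] == t:
            if data[i][1] == 0:
                censored += 1
            else:
                d_any += 1
                if data[i][1] == cause:
                    d_cause += 1
            i += 1
        if d_any:
            # Only those still event-free can experience the cause now.
            cif += surv * d_cause / n_at_risk
            surv *= 1 - d_any / n_at_risk
            steps.append((t, cif))
        n_at_risk -= d_any + censored
    return steps
```

With times `[1, 2, 3, 4]` and events `[1, 2, 1, 0]`, the competing event at time 2 caps the cause-1 incidence at 0.5 rather than the 0.75 a naive 1 − KM would give.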


2020 ◽  
Author(s):  
Emily A Kendall ◽  
Nimalan Arinaminpathy ◽  
Jilian A Sacks ◽  
Yukari C Manabe ◽  
Sabine Dittrich ◽  
...  

Abstract Background SARS-CoV-2 antigen-detection rapid diagnostic tests (Ag-RDT) offer the ability to diagnose COVID-19 rapidly and at low cost; however, lower sensitivity has limited adoption of Ag-RDT in clinical settings. Methods We compared Ag-RDT, nucleic acid amplification tests (NAAT), and clinical judgment alone for diagnosing COVID-19 among symptomatic patients. We investigated two scenarios: a high-prevalence hospital setting with 24-hour NAAT turnaround, and a lower-prevalence outpatient setting with 3-day NAAT turnaround. We simulated transmission from cases and contacts and relationships between time, viral burden, transmission, and case detection. We used decision curve analysis to compare diagnostic approaches, estimating the time- and infectivity-dependent benefit of each true-positive diagnosis. Results In the primary analysis comparing Ag-RDT and NAAT, greater net benefit was achieved with Ag-RDT in the outpatient setting and with NAAT in the hospital setting. In the hospital setting, Ag-RDT becomes more beneficial if NAAT turnaround times exceed 2 days or Ag-RDT sensitivity increases to at least 95% (relative to NAAT) during acute illness. Similarly, in the outpatient setting, NAAT could be more beneficial when NAAT turnaround time remains under 2 days or patients strictly isolate while awaiting results. Clinical judgment was preferred only if clinical diagnoses generated a robust clinical and public health response and false-positive diagnoses produced minimal harm. Conclusions For diagnosing symptomatic COVID-19, Ag-RDT may provide greater net benefit than either NAAT or clinical judgment when NAAT turnaround times are more than two days. NAAT is likely to remain optimal for hospitalized patients with prolonged symptoms prior to admission.
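The sensitivity-versus-turnaround trade-off the authors model can be caricatured with a toy expected-net-benefit calculation, where a slow result's true positives are worth less because they prevent less transmission. All numbers below (sensitivities, specificities, prevalence, the delay discount) are illustrative assumptions, not values from the study:

```python
def strategy_net_benefit(sens, spec, prevalence, pt, delay_discount=1.0):
    """Toy net benefit of a diagnostic strategy at threshold probability pt.

    delay_discount scales the value of a true positive: a delayed result
    averts less onward transmission, so its benefit is discounted
    (a simplifying assumption, not the paper's full simulation).
    """
    w = pt / (1 - pt)                       # harm weight for false positives
    tp_rate = sens * prevalence * delay_discount
    fp_rate = (1 - spec) * (1 - prevalence)
    return tp_rate - w * fp_rate


# Hypothetical outpatient scenario: a fast but less sensitive Ag-RDT
# versus a more sensitive NAAT discounted for a 3-day reporting delay.
nb_ag = strategy_net_benefit(sens=0.80, spec=0.99, prevalence=0.10, pt=0.05)
nb_naat = strategy_net_benefit(sens=0.95, spec=0.999, prevalence=0.10,
                               pt=0.05, delay_discount=0.7)
```

Under these made-up inputs the discounted NAAT loses to the Ag-RDT, mirroring the paper's qualitative finding for slow-turnaround outpatient settings.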


2015 ◽  
Vol 143 (11-12) ◽  
pp. 681-687 ◽  
Author(s):  
Tomislav Pejovic ◽  
Miroslav Stojadinovic

Introduction. Accurate precholecystectomy detection of concurrent asymptomatic common bile duct stones (CBDS) is key in the clinical decision-making process. The standard preoperative methods used to diagnose these patients are often not accurate enough. Objective. The aim of the study was to develop a scoring model that would predict CBDS before open cholecystectomy. Methods. We retrospectively collected preoperative (demographic, biochemical, ultrasonographic) and intraoperative (intraoperative cholangiography) data for 313 patients at the Department of General Surgery in Gornji Milanovac from 2004 to 2007. The patients were divided into a derivation set (213) and a validation set (100). Univariate and multivariate regression analysis was used to determine independent predictors of CBDS. These predictors were used to develop the scoring model. Various measures for the assessment of risk prediction models were determined, such as predictive ability, accuracy, the area under the receiver operating characteristic curve (AUC), calibration, and clinical utility using decision curve analysis. Results. In a univariate analysis, seven risk factors displayed significant correlation with CBDS. Total bilirubin, alkaline phosphatase, and bile duct dilation were identified as independent predictors of choledocholithiasis. The resultant total possible score in the derivation set ranged from 7.6 to 27.9. The scoring model showed good discriminatory ability in the derivation and validation sets (AUC 94.3% and 89.9%, respectively), excellent accuracy (95.5%), satisfactory calibration in the derivation set, similar Brier scores, and clinical utility in decision curve analysis. Conclusion. The developed scoring model may successfully estimate the presence of choledocholithiasis in patients scheduled for elective open cholecystectomy.
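Clinical scores like this one are commonly built by rescaling regression coefficients into integer points. A generic sketch of that conversion; the coefficient values below are invented for illustration, not the published ones:

```python
def make_point_scores(coefficients, base=1.0):
    """Rescale regression coefficients into integer points.

    Points are proportional to each coefficient, with `base` points
    assigned to the smallest-magnitude predictor (a common convention
    for building bedside scores from a fitted model).
    """
    smallest = min(abs(b) for b in coefficients.values())
    return {name: round(base * b / smallest)
            for name, b in coefficients.items()}


# Hypothetical coefficients for the three reported predictors of CBDS:
coefs = {"total_bilirubin": 1.8,
         "alkaline_phosphatase": 0.9,
         "bile_duct_dilation": 2.7}
points = make_point_scores(coefs)
```

A patient's total score is then the sum of points for each predictor present, and a cutoff on that sum (chosen from the derivation set) flags likely choledocholithiasis.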


BMC Cancer ◽  
2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Suyu Wang ◽  
Yue Yu ◽  
Wenting Xu ◽  
Xin Lv ◽  
Yufeng Zhang ◽  
...  

Abstract Background The prognostic roles of three lymph node classifications, number of positive lymph nodes (NPLN), log odds of positive lymph nodes (LODDS), and lymph node ratio (LNR), in lung adenocarcinoma are unclear. We aimed to find the classification with the strongest predictive power and combine it with the American Joint Committee on Cancer (AJCC) 8th TNM stage to establish an optimal prognostic nomogram. Methods 25,005 patients with T1–4N0–2M0 lung adenocarcinoma after surgery between 2004 and 2016 from the Surveillance, Epidemiology, and End Results database were included. The study cohort was divided into a training cohort (13,551 patients) and an external validation cohort (11,454 patients) according to geographic region. Univariate and multivariate Cox regression analyses were performed on the training cohort to evaluate the predictive performance of NPLN (Model 1), LODDS (Model 2), LNR (Model 3), or LODDS+LNR (Model 4) for cancer-specific survival and overall survival. The likelihood-ratio χ2 test, Akaike Information Criterion, Harrell concordance index, integrated discrimination improvement (IDI), and net reclassification improvement (NRI) were used to evaluate the predictive performance of the models. Nomograms were established from the optimal models; they were internally validated using a bootstrapping technique and externally validated using calibration curves. Nomograms were compared with the AJCC 8th TNM stage using decision curve analysis. Results NPLN, LODDS, and LNR were independent prognostic factors for cancer-specific survival and overall survival. LODDS+LNR (Model 4) demonstrated the highest likelihood-ratio χ2, the highest Harrell concordance index, and the lowest Akaike Information Criterion, and IDI and NRI values suggested Model 4 had better prediction accuracy than the other models. Internal and external validation showed that the nomograms combining TNM stage with LODDS+LNR were convincingly precise. Decision curve analysis suggested the nomograms performed better than the AJCC 8th TNM stage in clinical practicability. Conclusions We constructed online nomograms for cancer-specific survival and overall survival of lung adenocarcinoma patients after surgery, which may help doctors provide highly individualized therapy.
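The three nodal classifications compared above reduce to simple formulas per patient. A quick sketch; the 0.5 continuity correction in LODDS is the usual convention so the log is defined when all nodes are positive or all negative:

```python
import math


def lymph_node_metrics(positive, examined):
    """NPLN, LNR, and LODDS for a patient's resected lymph nodes.

    positive: count of metastatic nodes; examined: total nodes resected.
    """
    negative = examined - positive
    npln = positive                      # raw count of positive nodes
    lnr = positive / examined            # lymph node ratio
    # 0.5 added to both counts so zero-count patients remain finite.
    lodds = math.log((positive + 0.5) / (negative + 0.5))
    return npln, lnr, lodds
```

For example, 2 positive of 10 examined nodes gives NPLN = 2, LNR = 0.2, and LODDS = ln(2.5/8.5) ≈ −1.22; unlike LNR, LODDS still discriminates among patients with zero positive nodes but different numbers examined.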


2021 ◽  
Author(s):  
Yijun Wu ◽  
Hongzhi Liu ◽  
Jianxing Zeng ◽  
Yifan Chen ◽  
Guoxu Fang ◽  
...  

Abstract Background and Objectives Combined hepatocellular cholangiocarcinoma (cHCC) has a high incidence of early recurrence. The objective of this study was to construct a model predicting very early recurrence (VER) (i.e., recurrence within 6 months after surgery) of cHCC. Methods 131 consecutive patients from Eastern Hepatobiliary Surgery Hospital served as a development cohort to construct a nomogram predicting VER by using multivariable logistic regression analysis. The model was internally validated and then externally validated in a cohort of 90 patients from Mengchao Hepatobiliary Hospital using the concordance statistic (C-index), calibration analysis, and decision curve analysis (DCA). Results The VER nomogram contains microvascular invasion (MiVI), macrovascular invasion (MaVI), and CA19-9 > 25 mAU/mL. The model shows good discrimination, with C-indexes of 0.77 (95% CI: 0.69–0.85) and 0.76 (95% CI: 0.66–0.86) in the development and validation cohorts, respectively. Decision curve analysis demonstrated that the model is clinically useful, and its calibration was favorable. Our model stratified patients into two risk groups that exhibited significantly different VER rates. Conclusions Our model demonstrated favorable performance in predicting VER in cHCC patients.
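For a binary endpoint like VER, the C-index reported here (0.77 and 0.76) is the probability that a randomly chosen recurrence case receives a higher predicted risk than a randomly chosen non-case. A brute-force sketch, independent of any survival library:

```python
def c_index_binary(risks, outcomes):
    """Concordance for a binary outcome.

    Fraction of (case, non-case) pairs in which the case has the higher
    predicted risk; tied risks count as half-concordant.
    """
    cases = [r for r, y in zip(risks, outcomes) if y == 1]
    controls = [r for r, y in zip(risks, outcomes) if y == 0]
    pairs = len(cases) * len(controls)
    concordant = sum(1.0 if rc > rn else 0.5 if rc == rn else 0.0
                     for rc in cases for rn in controls)
    return concordant / pairs
```

A value of 0.5 means the predictions are no better than chance; the 0.76–0.77 reported for the VER nomogram indicates moderately good discrimination.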

