Development and validation of prediction models for blood concentrations of dioxins and PCBs using dietary intakes (2012), Vol 50, pp. 15-21. Author(s): Helen Engelstad Kvalem, Anne Lise Brantsæter, Helle Margrete Meltzer, Hein Stigum, Cathrine Thomsen, et al.

Metabolism (2018), Vol 85, pp. 38-47. Author(s): Tsai-Chung Li, Chia-Ing Li, Chiu-Shong Liu, Wen-Yuan Lin, Chih-Hsueh Lin, et al.

2019, Vol 98 (10), pp. 1088-1095. Author(s): J. Krois, C. Graetz, B. Holtfreter, P. Brinkmann, T. Kocher, et al.

Prediction models learn patterns from available data (training) and are then validated on new data (testing). Prediction modeling is increasingly common in dental research. We aimed to evaluate how different model development and validation steps affect the predictive performance of tooth loss prediction models for patients with periodontitis. Two independent cohorts (627 patients, 11,651 teeth) were followed for a mean ± SD of 18.2 ± 5.6 y (Kiel cohort) and 6.6 ± 2.9 y (Greifswald cohort). Tooth loss and 10 patient- and tooth-level predictors were recorded. The impact of different model development and validation steps was evaluated: 1) model complexity (logistic regression, recursive partitioning, random forest, extreme gradient boosting), 2) sample size (full data set or 10%, 25%, or 75% of cases dropped at random), 3) prediction periods (maximum 10, 15, or 20 y or uncensored), and 4) validation schemes (internal or external by centers/time). Tooth loss was generally a rare event (880 teeth were lost). All models showed limited sensitivity but high specificity. Patients’ age and tooth loss at baseline, as well as probing pocket depths, showed high variable importance. More complex models (random forest, extreme gradient boosting) had no consistent advantages over simpler ones (logistic regression, recursive partitioning). Internal validation (in sample) overestimated the predictive power (area under the curve up to 0.90), while external validation (out of sample) yielded lower areas under the curve (range 0.62 to 0.82). Reducing the sample size decreased the predictive power, particularly for more complex models. Censoring the prediction period had only limited impact. When the model was trained in one period and tested in another, model outcomes were similar to the base case, indicating that temporal validation is a valid option. No model showed higher accuracy than the no-information rate. In conclusion, none of the developed models would be useful in a clinical setting, despite their nominally high accuracy. During modeling, rigorous development and external validation should be applied and reported accordingly.
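
The gap between in-sample and out-of-sample performance described above is straightforward to reproduce. Below is a minimal Python (scikit-learn) sketch of internal versus external validation for a rare binary outcome, with a no-information-rate check; the cohort generator, predictor names, coefficients, and sample sizes are illustrative assumptions, not the study's actual data or models.

```python
# Internal vs. external validation for a rare binary outcome.
# Synthetic stand-ins for two cohorts; all numbers are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n, shift=0.0):
    # Three toy predictors standing in for age, baseline tooth loss,
    # and probing pocket depth (hypothetical, not the study's features).
    X = rng.normal(loc=shift, scale=1.0, size=(n, 3))
    # Rare event: intercept -3.0 keeps positives scarce, loosely
    # mirroring 880 lost teeth out of 11,651.
    logits = 0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.4 * X[:, 2] - 3.0
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))
    return X, y

X_dev, y_dev = make_cohort(5000)             # "development" center
X_ext, y_ext = make_cohort(5000, shift=0.3)  # shifted "external" center

models = [("logistic", LogisticRegression(max_iter=1000)),
          ("random forest", RandomForestClassifier(n_estimators=200,
                                                   random_state=0))]
for name, model in models:
    model.fit(X_dev, y_dev)
    auc_in = roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1])   # internal
    auc_out = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])  # external
    print(f"{name}: in-sample AUC={auc_in:.2f}, external AUC={auc_out:.2f}")

# No-information rate: accuracy of always predicting the majority class
# ("tooth survives"); a clinically useful model must beat this.
nir = max(y_ext.mean(), 1 - y_ext.mean())
print(f"no-information rate: {nir:.2f}")
```

On data like this, the more flexible random forest typically posts a near-perfect in-sample AUC but a markedly lower external one, mirroring the optimism of internal validation that the authors report.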


2016, Vol 26 (6), pp. 906-911. Author(s): Iris van der Heide, Ellen Uiters, Kristine Sørensen, Florian Röthlin, Jürgen Pelikan, et al.

2012, Vol 33 (2), pp. 124-134. Author(s): Fernando Martín Biscione, Renato Camargos Couto, Tânia M. G. Pedrosa

Objective. To assess the benefit of using procedure-specific alternative cutoff points for National Nosocomial Infections Surveillance (NNIS) risk index variables and of extending surgical site infection (SSI) risk prediction models with a postdischarge surveillance indicator.
Design. Open, retrospective, validation cohort study.
Setting. Five private, nonuniversity Brazilian hospitals.
Patients. Consecutive inpatients operated on between January 1993 and May 2006 (other operations of the genitourinary system [n = 20,723], integumentary system [n = 12,408], or musculoskeletal system [n = 15,714] and abdominal hysterectomy [n = 11,847]).
Methods. For each procedure category, development and validation samples were defined nonrandomly. In the development samples, alternative SSI prognostic scores were constructed using logistic regression: (i) alternative NNIS scores used NNIS risk index covariates and cutoff points but locally derived SSI risk strata and rates, (ii) revised scores used procedure-specific alternative cutoff points, and (iii) extended scores expanded revised scores with a postdischarge surveillance indicator. Performances were compared in the validation samples using calibration, discrimination, and overall performance measures.
Results. The NNIS risk index showed low discrimination, inadequate calibration, and predictions with high variability. The most consistent advantage of alternative NNIS scores was in calibration (prevalence and dispersion components). Revised scores performed slightly better than the NNIS risk index for most procedures and measures, mainly in calibration. Extended scores clearly performed better than the NNIS risk index, irrespective of the measure or operative procedure.
Conclusions. Locally derived SSI risk strata and rates improved the NNIS risk index's calibration. Alternative cutoff points further improved the specification of the intrinsic SSI risk component. Controlling for incomplete postdischarge SSI surveillance provided consistently more accurate SSI risk adjustment.
Infect Control Hosp Epidemiol 2012;33(2):124-134
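
As a companion sketch, the calibration, discrimination, and overall performance measures named in the Methods can be computed as follows. This is a minimal Python (scikit-learn) illustration on synthetic data; the covariates, coefficients, and development/validation split are placeholders, not the study's NNIS-based scores.

```python
# Evaluating a logistic-regression risk score on a validation sample:
# discrimination (c-statistic), overall performance (Brier score), and
# calibration (observed/expected ratio, calibration slope).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score

rng = np.random.default_rng(1)

# Toy covariates standing in for risk-index variables (hypothetical).
n = 8000
X = rng.normal(size=(n, 4))
true_logit = -3.5 + 0.9 * X[:, 0] + 0.6 * X[:, 1] + 0.5 * X[:, 3]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

# Nonrandom split into development and validation samples,
# echoing the study design.
X_dev, y_dev, X_val, y_val = X[:4000], y[:4000], X[4000:], y[4000:]

score = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
p = score.predict_proba(X_val)[:, 1]

print(f"c-statistic (discrimination): {roc_auc_score(y_val, p):.3f}")
print(f"Brier score (overall):        {brier_score_loss(y_val, p):.4f}")
print(f"O/E ratio (calibration):      {y_val.mean() / p.mean():.3f}")

# Calibration slope: refit the observed outcomes on the linear
# predictor; a slope near 1 indicates well-calibrated predictions.
lp = np.log(p / (1 - p)).reshape(-1, 1)
slope = LogisticRegression(max_iter=1000).fit(lp, y_val).coef_[0, 0]
print(f"calibration slope:            {slope:.3f}")
```

An observed/expected ratio and calibration slope near 1 indicate good calibration, the component where the abstract reports the clearest gains from locally derived strata and the extended scores.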

