External validation of clinical prediction models using big datasets from e-health records or IPD meta-analysis: opportunities and challenges

BMJ, 2016, pp. i3140
Author(s): Richard D Riley, Joie Ensor, Kym I E Snell, Thomas P A Debray, Doug G Altman, ...

2018, Vol 22 (66), pp. 1-294
Author(s): Rachel Archer, Emma Hock, Jean Hamilton, John Stevens, Munira Essat, ...

Background: Rheumatoid arthritis (RA) is a chronic, debilitating disease associated with reduced quality of life and substantial costs. It is unclear which tests and assessment tools allow the best assessment of prognosis in people with early RA, and whether or not baseline variables predict patients' response to different drug treatments.
Objective: To systematically review evidence on the use of selected tests and assessment tools in patients with early RA (1) in the evaluation of prognosis (review 1) and (2) as predictive markers of treatment response (review 2).
Data sources: Electronic databases (e.g. MEDLINE, EMBASE, The Cochrane Library, Web of Science Conference Proceedings; searched to September 2016), registers, key websites, hand-searching of reference lists of included studies and key systematic reviews, and contact with experts.
Study selection: Review 1 – primary studies on the development, external validation and impact of clinical prediction models for selected outcomes in adult patients with early RA. Review 2 – primary studies on the interaction between selected baseline covariates and treatment (conventional and biological disease-modifying antirheumatic drugs) on salient outcomes in adult patients with early RA.
Results: Review 1 – 22 model development studies and one combined model development/external validation study, reporting 39 clinical prediction models, were included. Five external validation studies evaluating eight clinical prediction models for radiographic joint damage were also included. c-statistics from internal validation ranged from 0.63 to 0.87 for radiographic progression (different definitions, six studies) and from 0.78 to 0.82 for the Health Assessment Questionnaire (HAQ). Predictive performance in external validations varied considerably. Three models [(1) the Active controlled Study of Patients receiving Infliximab for the treatment of Rheumatoid arthritis of Early onset (ASPIRE) C-reactive protein model (ASPIRE CRP), (2) the ASPIRE erythrocyte sedimentation rate model (ASPIRE ESR) and (3) the Behandelings Strategie (BeSt) model] were externally validated using the same outcome definition in more than one population. A random-effects meta-analysis of these results suggested substantial uncertainty in the expected predictive performance of the models in a new sample of patients. Review 2 – 12 studies were identified. Covariates examined included anti-citrullinated protein/peptide antibody (ACPA) status, smoking status, erosions, rheumatoid factor status, C-reactive protein level, erythrocyte sedimentation rate, swollen joint count (SJC), body mass index and vascularity of synovium on power Doppler ultrasound (PDUS). Outcomes examined included erosions/radiographic progression, disease activity, physical function and Disease Activity Score-28 remission. There was statistical evidence to suggest that ACPA status, SJC and PDUS status at baseline may be treatment effect modifiers, but not necessarily that they are prognostic of response for all treatments. Most of the results were subject to considerable uncertainty and were not statistically significant.
Limitations: The meta-analysis in review 1 was limited by the availability of only a small number of external validation studies. Studies rarely investigated the interaction between predictors and treatment.
Suggested research priorities: Collaborative research (including the use of individual participant data) is needed to further develop and externally validate the clinical prediction models, which should also be validated with respect to individual treatments. Future assessments of treatment-by-covariate interactions should follow good statistical practice.
Conclusions: Review 1 – uncertainty remains over the optimal prediction model(s) for use in clinical practice. Review 2 – in general, there was insufficient evidence that the effect of treatment depended on baseline characteristics.
Study registration: This study is registered as PROSPERO CRD42016042402.
Funding: The National Institute for Health Research Health Technology Assessment programme.
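The random-effects meta-analysis mentioned in review 1 pools a model's external-validation results while allowing its true performance to differ between populations. A minimal sketch of the DerSimonian–Laird estimator follows; the c-statistics and standard errors are invented for illustration, not taken from the review.

```python
# Illustrative sketch (invented data): DerSimonian-Laird random-effects pooling
# of a prediction model's external-validation c-statistics across studies.

def dersimonian_laird(estimates, variances):
    """Pool study estimates under a random-effects model.

    Returns (pooled_estimate, tau2), where tau2 is the estimated
    between-study variance; tau2 > 0 widens the uncertainty about
    the model's expected performance in a new population.
    """
    k = len(estimates)
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    sw = sum(w)
    pooled_fe = sum(wi * yi for wi, yi in zip(w, estimates)) / sw
    # Cochran's Q measures heterogeneity around the fixed-effect mean
    q = sum(wi * (yi - pooled_fe) ** 2 for wi, yi in zip(w, estimates))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                    # between-study variance
    # Re-weight each study by 1 / (within-study variance + tau2)
    w_re = [1.0 / (v + tau2) for v in variances]
    return sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re), tau2

# Hypothetical c-statistics and standard errors from three validation studies
c_stats = [0.63, 0.72, 0.81]
ses = [0.04, 0.03, 0.05]
pooled, tau2 = dersimonian_laird(c_stats, [s ** 2 for s in ses])
```

With only a handful of external validations, as in review 1, tau2 is estimated imprecisely, which is one reason the expected performance in a new sample remains so uncertain.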


2020, Vol 35 (1), pp. 100-116
Author(s): M B Ratna, S Bhattacharya, B Abdulrahim, D J McLernon

Abstract
STUDY QUESTION: What are the best-quality clinical prediction models in IVF (including ICSI) treatment to inform clinicians and their patients of their chance of success?
SUMMARY ANSWER: The review recommends the McLernon post-treatment model for predicting the cumulative chance of live birth over and up to six complete cycles of IVF.
WHAT IS KNOWN ALREADY: Prediction models in IVF have not found widespread use in routine clinical practice. This could be due to their limited predictive accuracy and clinical utility. A previous systematic review of IVF prediction models, published a decade ago and never updated, did not assess the methodological quality of existing models or provide recommendations for the best-quality models for use in clinical practice.
STUDY DESIGN, SIZE, DURATION: The electronic databases OVID MEDLINE, OVID EMBASE and the Cochrane Library were searched systematically for primary articles published from 1978 to January 2019, using search terms on the development and/or validation (internal and external) of models predicting pregnancy or live birth. No language or any other restrictions were applied.
PARTICIPANTS/MATERIALS, SETTING, METHODS: The PRISMA flowchart was used for the inclusion of studies after screening. All studies reporting on the development and/or validation of IVF prediction models were included. Articles reporting on women whose treatment involved donor eggs or sperm, or surrogacy, were excluded. The CHARMS checklist was used to extract data and critically appraise the methodological quality of the included articles. We evaluated each model's performance by assessing its c-statistic and calibration plots, and assessed reporting quality by calculating the percentage of the 22 TRIPOD checklist items met in each study.
MAIN RESULTS AND THE ROLE OF CHANCE: We identified 33 publications reporting on 35 prediction models. Seventeen articles had been published since the last systematic review. The quality of models has improved over time with regard to clinical relevance, methodological rigour and utility. TRIPOD scores across the included studies ranged from 29% to 95%, and the c-statistics of all externally validated studies ranged from 0.55 to 0.77. Most of the models predicted the chance of pregnancy/live birth for a single fresh cycle. Six models aimed to predict the chance of pregnancy/live birth per individual treatment cycle, and three predicted more clinically relevant outcomes such as cumulative pregnancy/live birth. The McLernon (pre- and post-treatment) models predict the cumulative chance of live birth over multiple complete cycles of IVF per woman, where a complete cycle includes all fresh and frozen embryo transfers from the same episode of ovarian stimulation. The McLernon models were developed using national UK data and had the highest TRIPOD score, and the post-treatment model performed best on external validation.
LIMITATIONS, REASONS FOR CAUTION: To assess the reporting quality of all included studies we used the TRIPOD checklist, but many of the earlier IVF prediction models were developed and validated before the formal TRIPOD reporting guideline was published in 2015. It should also be noted that two of the authors of this systematic review are authors of the McLernon model article. However, we feel we have conducted our review and made our recommendations using a fair and transparent systematic approach.
WIDER IMPLICATIONS OF THE FINDINGS: This study provides a comprehensive picture of the evolving quality of IVF prediction models. Clinicians should use the most appropriate model to suit their patients' needs. We recommend the McLernon post-treatment model as a counselling tool to inform couples of their predicted chance of success over and up to six complete cycles. However, it requires further external validation to assess applicability in countries with different IVF practices and policies.
STUDY FUNDING/COMPETING INTEREST(S): The study was funded by the Elphinstone Scholarship Scheme and the Assisted Reproduction Unit, University of Aberdeen. Both D.J.M. and S.B. are authors of the McLernon model article, and S.B. is Editor in Chief of Human Reproduction Open. They have completed and submitted the ICMJE forms for disclosure of potential conflicts of interest. The other co-authors have no conflicts of interest to declare.
REGISTRATION NUMBER: N/A
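The c-statistic used above to compare IVF models is the probability that a model assigns a higher predicted chance to a randomly chosen woman who had a live birth than to a randomly chosen woman who did not. A minimal all-pairs sketch follows, with invented predictions rather than data from any included model.

```python
# Illustrative sketch (invented data): the c-statistic as an all-pairs
# concordance between predicted probabilities and binary outcomes.

def c_statistic(probs, outcomes):
    """Concordance between predictions and binary outcomes.

    Tied predictions count as half-concordant; 0.5 means the model
    ranks no better than chance, 1.0 means perfect ranking.
    """
    pos = [p for p, y in zip(probs, outcomes) if y == 1]
    neg = [p for p, y in zip(probs, outcomes) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one event and one non-event")
    concordant = 0.0
    for pp in pos:
        for pn in neg:
            if pp > pn:
                concordant += 1.0
            elif pp == pn:
                concordant += 0.5
    return concordant / (len(pos) * len(neg))

# Hypothetical predicted live-birth probabilities and observed outcomes
preds = [0.45, 0.30, 0.22, 0.15, 0.60, 0.10]
births = [1, 0, 1, 0, 1, 0]
print(round(c_statistic(preds, births), 3))  # → 0.889
```

The externally validated c-statistics of 0.55 to 0.77 reported above sit well below 1.0, which is typical: discrimination is usually modest for reproductive outcomes.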


2013, Vol 32 (18), pp. 3158-3180
Author(s): Thomas P.A. Debray, Karel G.M. Moons, Ikhlaaq Ahmed, Hendrik Koffijberg, Richard David Riley

Endocrine, 2021
Author(s): Olivier Zanier, Matteo Zoli, Victor E. Staartjes, Federica Guaraldi, Sofia Asioli, ...

Abstract
Purpose: Biochemical remission (BR), gross total resection (GTR) and intraoperative cerebrospinal fluid (CSF) leaks are important metrics in transsphenoidal surgery for acromegaly, and predicting their likelihood using machine learning would be clinically advantageous. We aimed to develop and externally validate clinical prediction models for outcomes after transsphenoidal surgery for acromegaly.
Methods: Using data from two registries, we developed and externally validated machine learning models for GTR, BR and CSF leaks after endoscopic transsphenoidal surgery in acromegalic patients. A registry from Bologna, Italy, was used for model development; external validation was then performed using data from Zurich, Switzerland. Gender, age, prior surgery, and Hardy and Knosp classification were used as input features. Discrimination and calibration metrics were assessed.
Results: The derivation cohort consisted of 307 patients (43.3% male; mean [SD] age, 47.2 [12.7] years). GTR was achieved in 226 (73.6%) and BR in 245 (79.8%) patients. In the external validation cohort of 46 patients, 31 (75.6%) achieved GTR and 31 (77.5%) achieved BR. The area under the curve (AUC) at external validation was 0.75 (95% confidence interval: 0.59–0.88) for GTR, 0.63 (0.40–0.82) for BR and 0.77 (0.62–0.91) for intraoperative CSF leaks. While prior surgery was the most important variable for prediction of GTR, age and Hardy grading contributed most to the predictions of BR and CSF leaks, respectively.
Conclusions: Gross total resection, biochemical remission and CSF leaks remain hard to predict, but machine learning offers potential in helping to tailor surgical therapy. We demonstrate the feasibility of developing and externally validating clinical prediction models for these outcomes after surgery for acromegaly, and lay the groundwork for development of a multicenter model with more robust generalization.
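Confidence intervals around an externally validated AUC, like those reported above, can be obtained in several ways; one common, assumption-light approach is a percentile bootstrap over the validation cohort. The sketch below uses invented predictions and outcomes, not the authors' data or code.

```python
# Illustrative sketch (invented data): percentile bootstrap for the 95% CI
# of an AUC estimated on an external validation cohort.
import random

def auc(probs, labels):
    """All-pairs concordance / AUC, with tied predictions counted as half."""
    pos = [p for p, y in zip(probs, labels) if y == 1]
    neg = [p for p, y in zip(probs, labels) if y == 0]
    total = sum(1.0 if pp > pn else 0.5 if pp == pn else 0.0
                for pp in pos for pn in neg)
    return total / (len(pos) * len(neg))

def bootstrap_auc_ci(probs, labels, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap CI: resample patients with replacement,
    recompute the AUC each time, and take the alpha/2 quantiles."""
    rng = random.Random(seed)
    n = len(labels)
    stats = []
    while len(stats) < n_boot:
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        if 0 < sum(ys) < n:          # skip resamples containing a single class
            stats.append(auc([probs[i] for i in idx], ys))
    stats.sort()
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]

# Hypothetical predicted probabilities of gross total resection and outcomes
probs = [0.9, 0.8, 0.75, 0.7, 0.6, 0.55, 0.4, 0.35, 0.3, 0.2]
labels = [1, 1, 1, 0, 1, 1, 0, 1, 0, 0]
lo, hi = bootstrap_auc_ci(probs, labels)
```

The wide intervals in the abstract (e.g. 0.40–0.82 for BR) reflect the small external cohort: with 46 patients, resampling produces very variable AUC estimates.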


Circulation, 2018, Vol 138 (Suppl_1)
Author(s): Jenica N Upshaw, Jason Nelson, Benjamin Wessler, Benjamin Koethe, Christine Lundquist, ...

Introduction: Most heart failure (HF) clinical prediction models (CPMs) have not been independently externally validated. We sought to test the performance of HF models in a diverse population using a systematic approach.
Methods: A systematic review identified CPMs predicting outcomes for patients with HF. Individual patient data from 5 large publicly available clinical trials enrolling patients with chronic HF were matched to published CPMs based on similarity in populations and on the outcome and predictor variables available in the clinical trial databases. CPM performance was evaluated for discrimination (c-statistic, % relative change in c-statistic) and calibration (Harrell's E and E90, the mean and the 90% quantile of the error distribution from the smoothed loess observed values) for the original and recalibrated models.
Results: Of 135 HF CPMs reviewed, we identified 45 CPM-trial pairs involving 13 unique CPMs. The outcome was mortality for all of the models with a trial match. During external validation, the median c-statistic was 0.595 (IQR, 0.563 to 0.630), a median relative decrease of 57% (IQR, 49% to 71%) compared with the c-statistic reported in the derivation cohort. Overall, the median Harrell's E was 0.09 (IQR, 0.04 to 0.135) and the median E90 was 0.11 (IQR, 0.07 to 0.21). Recalibration of the intercept and slope led to substantially improved calibration, with a median change in Harrell's E of -35% (IQR, 0 to -75%) for the intercept alone and -56% (IQR, -17% to -75%) for the intercept and slope. Refitting model covariates improved the median c-statistic by 38%, to 0.629 (IQR, 0.613 to 0.649).
Conclusion: For HF CPMs, independent external validations demonstrate that CPMs perform significantly worse than originally presented, albeit with significant heterogeneity. Recalibration of the intercept and slope improved model calibration. These results underscore the need to carefully consider the derivation cohort characteristics when using published CPMs.
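Recalibration of the intercept and slope, as applied above, refits logit(P(event)) = a + b·LP on the validation data, where LP is the original model's linear predictor: a corrects calibration-in-the-large and b < 1 flags predictions that are too extreme. A sketch using Newton–Raphson on invented data follows (not the study's code).

```python
# Illustrative sketch (invented data): logistic recalibration of a model's
# linear predictor LP, fitting logit(p) = a + b*LP by Newton-Raphson.
import math

def recalibrate(lp, y, iters=25):
    """Return (intercept a, calibration slope b) for logit(p) = a + b*lp."""
    a, b = 0.0, 1.0
    for _ in range(iters):
        # Gradient and (negated) Hessian of the logistic log-likelihood
        ga = gb = haa = hab = hbb = 0.0
        for xi, yi in zip(lp, y):
            p = 1.0 / (1.0 + math.exp(-(a + b * xi)))
            r = yi - p            # residual drives the gradient
            w = p * (1.0 - p)     # observation weight in the Hessian
            ga += r
            gb += r * xi
            haa += w
            hab += w * xi
            hbb += w * xi * xi
        det = haa * hbb - hab * hab
        a += (hbb * ga - hab * gb) / det   # Newton step: H^{-1} * gradient
        b += (haa * gb - hab * ga) / det
    return a, b

# Invented validation set: two linear-predictor values with known event rates,
# 25% events at LP = -1 and 75% at LP = +1, so the exact MLE is a = 0, b = ln 3
lp = [-1.0] * 12 + [1.0] * 12
y = [1] * 3 + [0] * 9 + [1] * 9 + [0] * 3
a, b = recalibrate(lp, y)
```

In the study's terms, updating only a fixes the average risk level, while updating both a and b also shrinks or stretches the spread of predictions, which is why the intercept-and-slope update improved Harrell's E the most.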


Author(s): Marcus Taylor, Bartłomiej Szafron, Glen P Martin, Udo Abah, Matthew Smith, ...

Abstract
OBJECTIVES: National guidelines advocate the use of clinical prediction models to estimate perioperative mortality for patients undergoing lung resection. Several models have been developed that may potentially be useful, but contemporary external validation studies are lacking. The aim of this study was to validate existing models in a multicentre patient cohort.
METHODS: The Thoracoscore, Modified Thoracoscore, Eurolung, Modified Eurolung, European Society Objective Score and Brunelli models were validated using a database of 6600 patients who underwent lung resection between 2012 and 2018. Models were validated for in-hospital or 30-day mortality (depending on the intended outcome of each model) and also for 90-day mortality. Model calibration (calibration intercept, calibration slope, observed-to-expected ratio and calibration plots) and discrimination (area under the receiver operating characteristic curve) were assessed as measures of model performance.
RESULTS: Mean age was 66.8 years (±10.9 years) and 49.7% (n = 3281) of patients were male. In-hospital, 30-day, perioperative (in-hospital or 30-day) and 90-day mortality were 1.5% (n = 99), 1.4% (n = 93), 1.8% (n = 121) and 3.1% (n = 204), respectively. Model areas under the receiver operating characteristic curve ranged from 0.67 to 0.73. Calibration was inadequate in five models, and mortality was significantly overestimated in five models. No model was able to adequately predict 90-day mortality.
CONCLUSIONS: Five of the validated models were poorly calibrated and had inadequate discriminatory ability. The modified Eurolung model demonstrated adequate statistical performance but lacked clinical validity. Development of accurate models that can be used to estimate the contemporary risk of lung resection is required.
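The observed-to-expected (O:E) ratio used above is a calibration-in-the-large measure: the number of observed deaths divided by the sum of the model's predicted risks. A ratio below 1 means the model overestimates mortality, as five of the validated models did here. A sketch with invented risks follows (not the study's data).

```python
# Illustrative sketch (invented data): the observed-to-expected (O:E) ratio.
# O:E < 1 means predicted risks exceed observed events (overestimation);
# O:E > 1 means the model underestimates risk.

def oe_ratio(predicted_risks, outcomes):
    observed = sum(outcomes)           # O: deaths that actually occurred
    expected = sum(predicted_risks)    # E: deaths the model predicted
    return observed / expected

# Hypothetical perioperative mortality risks for ten resections, one death
risks = [0.05, 0.02, 0.01, 0.08, 0.03, 0.02, 0.04, 0.10, 0.03, 0.02]
deaths = [0, 0, 0, 0, 0, 0, 0, 1, 0, 0]
print(round(oe_ratio(risks, deaths), 2))  # → 2.5 (1 observed vs 0.40 expected)
```

Because perioperative mortality is rare (1.5-3.1% in this cohort), E is a sum of many small risks, so even modest miscalibration in the individual predictions shifts the O:E ratio noticeably.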

