Prediction models for pulmonary tuberculosis treatment outcomes: a systematic review

2020 ◽  
Vol 4 (s1) ◽  
pp. 34-34
Author(s):  
Lauren Saag Peetluk ◽  
Felipe Ridolfi ◽  
Valeria Rolla ◽  
Timothy Sterling

OBJECTIVES/GOALS: Many clinical prediction models have been developed to guide tuberculosis (TB) treatment, but their results and methods have not been formally evaluated. We aimed to identify and synthesize existing models for predicting TB treatment outcomes, including assessment of bias and applicability. METHODS/STUDY POPULATION: Our review will adhere to methods developed specifically for systematic reviews of prediction model studies. We will search PubMed, Embase, Web of Science, and Google Scholar (first 200 citations) to identify studies that internally and/or externally validate a model for TB treatment outcomes (defined as one or more of cure, treatment completion, death, treatment failure, relapse, default, and loss to follow-up). Study screening, data extraction, and bias assessment will be conducted independently by two reviewers, with a third party to resolve discrepancies. Study quality will be assessed using the Prediction model Risk Of Bias Assessment Tool (PROBAST). RESULTS/ANTICIPATED RESULTS: Our search strategy yielded 6,242 articles in PubMed, 10,585 in Embase, 10,511 in Web of Science, and 200 from Google Scholar, totaling 27,538 articles. After de-duplication, 14,029 articles remain. After screening titles, abstracts, and full text, we will extract data from relevant studies, including publication details, study characteristics, methods, and results. Data will be summarized with narrative review and in detailed tables with descriptive statistics. We anticipate finding disparate outcome definitions, contrasting predictors across models, and high risk of bias in methods. Meta-analysis of performance measures for model validation studies will be performed if possible. DISCUSSION/SIGNIFICANCE OF IMPACT: TB outcome prediction models are important, but existing ones have not been rigorously evaluated.
This systematic review will synthesize TB outcome prediction models and serve as guidance to future studies that aim to use or develop TB outcome prediction models.

BMJ Open ◽  
2021 ◽  
Vol 11 (3) ◽  
pp. e044687
Author(s):  
Lauren S. Peetluk ◽  
Felipe M. Ridolfi ◽  
Peter F. Rebeiro ◽  
Dandan Liu ◽  
Valeria C Rolla ◽  
...  

Objective: To systematically review and critically evaluate prediction models developed to predict tuberculosis (TB) treatment outcomes among adults with pulmonary TB. Design: Systematic review. Data sources: PubMed, Embase, Web of Science and Google Scholar were searched for studies published from 1 January 1995 to 9 January 2020. Study selection and data extraction: Studies that developed a model to predict pulmonary TB treatment outcomes were included. Study screening, data extraction and quality assessment were conducted independently by two reviewers. Study quality was evaluated using the Prediction model Risk Of Bias Assessment Tool. Data were synthesised with narrative review and in tables and figures. Results: 14,739 articles were identified, 536 underwent full-text review and 33 studies presenting 37 prediction models were included. Model outcomes included death (n=16, 43%), treatment failure (n=6, 16%), default (n=6, 16%) or a composite outcome (n=9, 25%). Most models (n=30, 81%) measured discrimination (median c-statistic=0.75; IQR: 0.68–0.84), and 17 (46%) reported calibration, often with the Hosmer-Lemeshow test (n=13). Nineteen (51%) models were internally validated, and six (16%) were externally validated. Eighteen (54%) studies mentioned missing data, and of those, half (n=9) used complete case analysis. The most common predictors included age, sex, extrapulmonary TB, body mass index, chest X-ray results, previous TB and HIV. Risk of bias varied across studies, but all studies had high risk of bias in their analysis. Conclusions: TB outcome prediction models are heterogeneous, with disparate outcome definitions, predictors and methodology. We do not recommend applying any in clinical settings without external validation, and encourage future researchers to adhere to guidelines for developing and reporting prediction models. Trial registration: The study was registered on the international prospective register of systematic reviews, PROSPERO (CRD42020155782).
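The discrimination and calibration measures cited throughout these abstracts (the c-statistic and the Hosmer-Lemeshow test) can be illustrated with a minimal sketch in Python. The data below are hypothetical, and the Hosmer-Lemeshow function is a simplified version that returns only the chi-square statistic, not a p-value:

```python
def c_statistic(y_true, y_prob):
    """Concordance statistic (c-statistic / AUC): the fraction of
    event/non-event pairs in which the event received the higher
    predicted risk; tied predictions count as half."""
    events = [p for y, p in zip(y_true, y_prob) if y == 1]
    nonevents = [p for y, p in zip(y_true, y_prob) if y == 0]
    concordant = ties = 0
    for e in events:
        for n in nonevents:
            if e > n:
                concordant += 1
            elif e == n:
                ties += 1
    return (concordant + 0.5 * ties) / (len(events) * len(nonevents))


def hosmer_lemeshow_stat(y_true, y_prob, groups=10):
    """Simplified Hosmer-Lemeshow chi-square statistic: sort subjects
    by predicted risk, split them into roughly equal-size groups, and
    compare observed with expected event counts in each group."""
    pairs = sorted(zip(y_prob, y_true))
    n = len(pairs)
    stat = 0.0
    for g in range(groups):
        chunk = pairs[g * n // groups:(g + 1) * n // groups]
        if not chunk:
            continue
        observed = sum(y for _, y in chunk)
        expected = sum(p for p, _ in chunk)
        variance = expected * (1 - expected / len(chunk))
        if variance > 0:
            stat += (observed - expected) ** 2 / variance
    return stat


# Hypothetical outcomes and predicted risks
y = [1, 1, 0, 0]
p = [0.9, 0.4, 0.6, 0.2]
print(c_statistic(y, p))  # 0.75
```

A model with a c-statistic near the median of 0.75 reported above ranks three out of four event/non-event pairs correctly; discrimination alone says nothing about calibration, which is why the review tracks both.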


2021 ◽  
Author(s):  
Beatriz Garcia Santa Cruz ◽  
Matías Nicolás Bossa ◽  
Jan Sölter ◽  
Andreas Dominik Husch

Abstract: Computer-aided diagnosis for COVID-19 based on chest X-ray suffers from weak bias assessment and limited quality control. Undetected bias induced by inappropriate use of datasets and improper consideration of confounders prevents the translation of prediction models into clinical practice. This study provides a systematic evaluation of publicly available COVID-19 chest X-ray datasets, determining their potential use and evaluating potential sources of bias. Only 5 out of 256 identified datasets met at least the criteria for proper assessment of risk of bias and could be analysed in detail. Remarkably, almost all of the datasets utilised in the 78 papers published in peer-reviewed journals are not among these 5 datasets, leading to models with high risk of bias. This raises concerns about the suitability of such models for clinical use. This systematic review highlights the limited description of datasets employed for modelling and aids researchers in selecting the most suitable datasets for their task.


2021 ◽  
Author(s):  
Jamie L. Miller ◽  
Masafumi Tada ◽  
Michihiko Goto ◽  
Nicholas Mohr ◽  
Sangil Lee

Abstract. Background: Throughout 2020, coronavirus disease 2019 (COVID-19) has become a threat to public health at the national and global level. There has been an immediate need for research to understand the clinical signs and symptoms of COVID-19 that can help predict deterioration, including mechanical ventilation, organ support, and death. Studies thus far have addressed the epidemiology of the disease, common presentations, and susceptibility to acquisition and transmission of the virus; however, an accurate prognostic model for severe manifestations of COVID-19 is still needed because of the limited healthcare resources available. Objective: This systematic review aims to evaluate published reports of prediction models for severe illness caused by COVID-19. Methods: Searches were developed by the primary author and a medical librarian using an iterative process of gathering and evaluating terms. Comprehensive strategies, including both index and keyword methods, were devised for PubMed and EMBASE. The data of confirmed COVID-19 patients from randomized control studies, cohort studies, and case-control studies published between January 2020 and July 2020 were retrieved. Studies were independently assessed for risk of bias and applicability using the Prediction Model Risk Of Bias Assessment Tool (PROBAST). We collected study type, setting, sample size, type of validation, and outcomes including intubation, ventilation, any other type of organ support, or death. The combination of the prediction model, scoring system, performance of predictive models, and geographic locations were summarized. Results: A primary review found 292 articles relevant based on title and abstract. After further review, 246 were excluded based on the defined inclusion and exclusion criteria. Forty-six articles were included in the qualitative analysis. Interobserver agreement on inclusion was 0.86 (95% confidence interval: 0.79–0.93). When the PROBAST tool was applied, 44 of the 46 articles were identified to have high or unclear risk of bias, or high or unclear concern for applicability. Two studies reported prediction models, the 4C Mortality Score from hospital data and QCOVID from general public data in the UK, that were rated as low risk of bias and low concern for applicability. Conclusion: Several prognostic models are reported in the literature, but many of them had concerning risk of bias and applicability. For most of the studies, caution is needed before use, as many will require external validation before dissemination. However, the two articles found to have low risk of bias and low concern for applicability may be useful tools.


2020 ◽  
Vol 99 (4) ◽  
pp. 374-387 ◽  
Author(s):  
M. Du ◽  
D. Haag ◽  
Y. Song ◽  
J. Lynch ◽  
M. Mittinty

Recent efforts to improve the reliability and efficiency of scientific research have caught the attention of researchers conducting prediction modeling studies (PMSs). Use of prediction models in oral health has become more common over the past decades for predicting the risk of diseases and treatment outcomes. Risk of bias and insufficient reporting present challenges to the reproducibility and implementation of these models. A recent tool for bias assessment and a reporting guideline—PROBAST (Prediction Model Risk of Bias Assessment Tool) and TRIPOD (Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis)—have been proposed to guide researchers in the development and reporting of PMSs, but their application has been limited. Following the standards proposed in these tools and a systematic review approach, a literature search was carried out in PubMed to identify oral health PMSs published in dental, epidemiologic, and biostatistical journals. Risk of bias and transparency of reporting were assessed with PROBAST and TRIPOD. Among 2,881 papers identified, 34 studies containing 58 models were included. The most investigated outcomes were periodontal diseases (42%) and oral cancers (30%). Seventy-five percent of the studies were susceptible to at least 4 of 20 sources of bias, including measurement error in predictors (n = 12) and/or outcome (n = 7), omitting samples with missing data (n = 10), selecting variables based on univariate analyses (n = 9), overfitting (n = 13), and lack of model performance assessment (n = 24). Based on TRIPOD, at least 5 of 31 items were inadequately reported in 95% of the studies. These items included sampling approaches (n = 15), participant eligibility criteria (n = 6), and model-building procedures (n = 16). There was a general lack of transparent reporting and identification of bias across the studies.
Application of the recommendations proposed in PROBAST and TRIPOD can benefit future research and improve the reproducibility and applicability of prediction models in oral health.


Medicina ◽  
2021 ◽  
Vol 57 (6) ◽  
pp. 538
Author(s):  
Alexandru Burlacu ◽  
Adrian Iftene ◽  
Iolanda Valentina Popa ◽  
Radu Crisan-Dabija ◽  
Crischentian Brinza ◽  
...  

Background and objectives: cardiovascular complications (CVC) are the leading cause of death in patients with chronic kidney disease (CKD). Standard cardiovascular disease risk prediction models used in the general population are not validated in patients with CKD. We aim to systematically review the up-to-date literature on reported outcomes of computational methods such as artificial intelligence (AI) or regression-based models to predict CVC in CKD patients. Materials and methods: the electronic databases of MEDLINE/PubMed, EMBASE, and ScienceDirect were systematically searched. The risk of bias and reporting quality of each study were assessed against the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement and the Prediction model Risk Of Bias ASsessment Tool (PROBAST). Results: sixteen papers were included in the present systematic review: 15 non-randomized studies and 1 ongoing clinical trial. Twelve studies performed AI- or regression-based predictions of CVC in CKD, through either single or composite endpoints. Four studies proposed computational solutions for other CV-related predictions in the CKD population. Conclusions: the identified studies represent palpable trends in areas of clinical promise, with encouraging present-day performance. However, there is a clear need for more extensive application of rigorous methodologies. Following future prospective, randomized clinical trials and thorough external validation, computational solutions may fill the gap in cardiovascular predictive tools for chronic kidney disease.


2021 ◽  
Author(s):  
Esmee Venema ◽  
Benjamin S Wessler ◽  
Jessica K Paulus ◽  
Rehab Salah ◽  
Gowri Raman ◽  
...  

Abstract. Objective: To assess whether the Prediction model Risk Of Bias ASsessment Tool (PROBAST) and a shorter version of this tool can identify clinical prediction models (CPMs) that perform poorly at external validation. Study Design and Setting: We evaluated risk of bias (ROB) on 102 CPMs from the Tufts CPM Registry, comparing PROBAST to a short form consisting of six PROBAST items anticipated to best identify high ROB. We then applied the short form to all CPMs in the Registry with at least one validation and assessed the change in discrimination (dAUC) between the derivation and validation cohorts (n=1,147). Results: PROBAST classified 98/102 CPMs as high ROB. The short form identified 96 of these 98 as high ROB (98% sensitivity), with perfect specificity. In the full CPM Registry, 529/556 CPMs (95%) were classified as high ROB, 20 (4%) as low ROB, and 7 (1%) as unclear ROB. Median change in discrimination was significantly smaller in low-ROB models (dAUC −0.9%, IQR −6.2% to 4.2%) than in high-ROB models (dAUC −11.7%, IQR −33.3% to 2.6%; p<0.001). Conclusion: High ROB is pervasive among published CPMs. It is associated with poor performance at validation, supporting the application of PROBAST or a shorter version in CPM reviews. What is new: High risk of bias is pervasive among published clinical prediction models. High risk of bias identified with PROBAST is associated with poorer model performance at validation. A subset of questions can distinguish between models with high and low risk of bias.
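The dAUC figures quoted in this abstract can be read as the percent change in discrimination between derivation and validation, measured relative to how far the derivation AUC sits above chance (0.5). That reading of the formula is an assumption, not something stated in the abstract; a hypothetical sketch in Python:

```python
def delta_auc_pct(derivation_auc, validation_auc):
    """Percent change in discrimination between derivation and
    validation cohorts, relative to the derivation model's
    improvement over chance (AUC = 0.5). The exact definition
    used in the registry is assumed here, not confirmed."""
    if derivation_auc <= 0.5:
        raise ValueError("derivation AUC must exceed chance (0.5)")
    return 100.0 * (validation_auc - derivation_auc) / (derivation_auc - 0.5)


# A model that drops from 0.80 at derivation to 0.72 at validation
# loses roughly a quarter of its discriminative ability above chance.
print(round(delta_auc_pct(0.80, 0.72), 1))  # -26.7
```

Under this reading, the median dAUC of −11.7% among high-ROB models means a typical high-ROB model lost about a ninth of its above-chance discrimination when tested on new data.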


2020 ◽  
Author(s):  
Fernanda Gonçalves Silva ◽  
Leonardo Oliveira Pena Costa ◽  
Mark J Hancock ◽  
Gabriele Alves Palomo ◽  
Luciola da Cunha Menezes Costa ◽  
...  

Abstract Background: The prognosis of acute low back pain is generally favourable in terms of pain and disability; however, outcomes vary substantially between individual patients. Clinical prediction models help in estimating the likelihood of an outcome at a certain time point, and several existing models focus on prognosis for patients with low back pain. To date, only one previous systematic review has summarised the discrimination of validated clinical prediction models for prognosis in patients with low back pain of less than 3 months' duration. The aim of this systematic review is to identify existing developed and/or validated clinical prediction models for prognosis of patients with low back pain of less than 3 months' duration, and to summarise their performance in terms of discrimination and calibration. Methods: MEDLINE, Embase and CINAHL databases will be searched, from the inception of these databases until January 2020. Eligibility criteria will be: (1) prognostic model development studies with or without external validation, or prognostic external validation studies with or without model updating; (2) adults aged 18 or over with ‘recent onset’ low back pain (i.e. less than 3 months' duration), with or without leg pain; (3) outcomes of pain, disability, sick leave or days absent from work, return-to-work status, and self-reported recovery; and (4) a follow-up of at least 12 weeks. The risk of bias of the included studies will be assessed with the Prediction model Risk Of Bias ASsessment Tool, and the overall quality of evidence will be rated using the Hierarchy of Evidence for Clinical Prediction Rules. Discussion: This systematic review will identify, appraise, and summarise evidence on the performance of existing prediction models for prognosis of low back pain, and may help clinicians choose the most suitable prediction model to better inform patients about their likely prognosis.
Systematic review registration: PROSPERO reference number CRD42020160988


BMJ ◽  
2020 ◽  
pp. m1328 ◽  
Author(s):  
Laure Wynants ◽  
Ben Van Calster ◽  
Gary S Collins ◽  
Richard D Riley ◽  
Georg Heinze ◽  
...  

Abstract Objective To review and appraise the validity and usefulness of published and preprint reports of prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of becoming infected with covid-19 or being admitted to hospital with the disease. Design Living systematic review and critical appraisal by the COVID-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group. Data sources PubMed and Embase through Ovid, arXiv, medRxiv, and bioRxiv up to 5 May 2020. Study selection Studies that developed or validated a multivariable covid-19 related prediction model. Data extraction At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool). Results 14 217 titles were screened, and 107 studies describing 145 prediction models were included. The review identified four models for identifying people at risk in the general population; 91 diagnostic models for detecting covid-19 (60 were based on medical imaging, nine to diagnose disease severity); and 50 prognostic models for predicting mortality risk, progression to severe disease, intensive care unit admission, ventilation, intubation, or length of hospital stay. The most frequently reported predictors of diagnosis and prognosis of covid-19 are age, body temperature, lymphocyte count, and lung imaging features. Flu-like symptoms and neutrophil count are frequently predictive in diagnostic models, while comorbidities, sex, C reactive protein, and creatinine are frequent prognostic factors. 
C index estimates ranged from 0.73 to 0.81 in prediction models for the general population, from 0.65 to more than 0.99 in diagnostic models, and from 0.68 to 0.99 in prognostic models. All models were rated at high risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, high risk of model overfitting, and vague reporting. Most reports did not include any description of the study population or intended use of the models, and calibration of the model predictions was rarely assessed. Conclusion Prediction models for covid-19 are quickly entering the academic literature to support medical decision making at a time when they are urgently needed. This review indicates that proposed models are poorly reported, at high risk of bias, and their reported performance is probably optimistic. Hence, we do not recommend any of these reported prediction models for use in current practice. Immediate sharing of well documented individual participant data from covid-19 studies and collaboration are urgently needed to develop more rigorous prediction models, and validate promising ones. The predictors identified in included models should be considered as candidate predictors for new models. Methodological guidance should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, studies should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline. Systematic review registration Protocol https://osf.io/ehc47/ , registration https://osf.io/wy245 . Readers’ note This article is a living systematic review that will be updated to reflect emerging evidence. Updates may occur for up to two years from the date of original publication. 
This version is update 2 of the original article published on 7 April 2020 ( BMJ 2020;369:m1328), and previous updates can be found as data supplements ( https://www.bmj.com/content/369/bmj.m1328/related#datasupp ).


BMJ Open ◽  
2020 ◽  
Vol 10 (11) ◽  
pp. e038832
Author(s):  
Constanza L Andaur Navarro ◽  
Johanna A A G Damen ◽  
Toshihiko Takada ◽  
Steven W J Nijman ◽  
Paula Dhiman ◽  
...  

Introduction: Studies addressing the development and/or validation of diagnostic and prognostic prediction models are abundant in most clinical domains. Systematic reviews have shown that the methodological and reporting quality of prediction model studies is suboptimal. Due to the increasing availability of larger, routinely collected and complex medical data, and the rising application of artificial intelligence (AI) or machine learning (ML) techniques, the number of prediction model studies is expected to increase even further. Prediction models developed using AI or ML techniques are often labelled a ‘black box’, and little is known about their methodological and reporting quality. Therefore, this comprehensive systematic review aims to evaluate the reporting quality, methodological conduct, and risk of bias of prediction model studies that applied ML techniques for model development and/or validation. Methods and analysis: A search will be performed in PubMed to identify studies developing and/or validating prediction models using any ML methodology and across all medical fields. Studies will be included if they were published between January 2018 and December 2019, predict patient-related outcomes, use any study design or data source, and are available in English. Screening of search results and data extraction from included articles will be performed by two independent reviewers. The primary outcomes of this systematic review are: (1) the adherence of ML-based prediction model studies to the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement, and (2) the risk of bias in such studies, as assessed using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). A narrative synthesis will be conducted for all included studies.
Findings will be stratified by study type, medical field and prevalent ML methods, and will inform necessary extensions or updates of TRIPOD and PROBAST to better address prediction model studies that use AI or ML techniques. Ethics and dissemination: Ethical approval is not required for this study because only available published data will be analysed. Findings will be disseminated through peer-reviewed publications and scientific conferences. Systematic review registration: PROSPERO, CRD42019161764.

