A quantitative bias analysis of the confounding effects due to smoking on the association between fluoroquinolones and risk of aortic aneurysm

2020 ◽  
Vol 29 (8) ◽  
pp. 958-961
Author(s):  
Mingfeng Zhang ◽  
Monique Falconer ◽  
Lockwood Taylor
Author(s):  
Paul Gustafson

Abstract The article by Jiang et al. (Am. J. Epidemiol.) extends quantitative bias analysis from the realm of statistical models to the realm of machine learning algorithms. Given the rooting of statistical models in the spirit of explanation and the rooting of machine learning algorithms in the spirit of prediction, this extension is thought-provoking indeed. Some such thoughts are expounded here.


Author(s):  
Samantha Wilkinson ◽  
Alind Gupta ◽  
Eric Mackay ◽  
Paul Arora ◽  
Kristian Thorlund ◽  
...  

Introduction: The German health technology assessment (HTA) body rejected the additional benefit of alectinib for second-line (2L) ALK+ NSCLC, citing possible biases from missing ECOG performance status data and unmeasured confounding in the real-world evidence (RWE) for 2L ceritinib that was submitted as a comparator to the single-arm alectinib trial. Alectinib was approved in the US, so US post-launch RWE can be used to evaluate this HTA decision. Methods: We compared the real-world effectiveness of alectinib with ceritinib in 2L post-crizotinib ALK+ NSCLC using the nationwide Flatiron Health electronic health record (EHR)-derived de-identified database. Using quantitative bias analysis (QBA), we estimated the strength of (i) unmeasured confounding and (ii) deviation from missing-at-random (MAR) assumptions that would be needed to nullify any overall survival (OS) benefit. Results: Alectinib had significantly longer median OS than ceritinib in complete case analysis. The estimated effect size (hazard ratio: 0.55) was robust to risk ratios of the unmeasured confounder-outcome and confounder-exposure associations of less than 2.4. Based on tipping point analysis, missing baseline ECOG performance status for ceritinib-treated patients (49% missing) would need to be more than 3.4 times worse than expected under MAR to nullify the OS benefit observed for alectinib. Conclusions: Only implausible levels of bias reversed our conclusions. These methods could provide a framework for exploring uncertainty and aiding decision-making in HTAs, enabling patient access to innovative therapies.
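A closely related summary of robustness to unmeasured confounding is the E-value of VanderWeele and Ding: the minimum strength of association, on the risk-ratio scale, that an unmeasured confounder would need with both exposure and outcome to fully explain away an observed effect. A minimal sketch (the hazard ratio of 0.55 comes from the abstract; treating the HR as approximately a risk ratio is a simplifying assumption, and this is not the exact QBA used in the study):

```python
import math

def e_value(rr):
    """Minimum risk ratio an unmeasured confounder must have with both
    exposure and outcome to fully explain away an observed risk ratio
    (VanderWeele & Ding, 2017): RR + sqrt(RR * (RR - 1))."""
    if rr < 1:
        rr = 1 / rr  # for protective effects, take the reciprocal first
    return rr + math.sqrt(rr * (rr - 1))

# Observed hazard ratio for alectinib vs ceritinib, from the abstract.
hr = 0.55
print(round(e_value(hr), 2))  # prints 3.04
```

The E-value is a single conservative bound, whereas the study's QBA bounds the two confounder associations jointly, so the numbers need not coincide with the reported 2.4 threshold.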


2013 ◽  
Vol 24 (6) ◽  
pp. 1243-1255 ◽  
Author(s):  
Tony Blakely ◽  
Jan J. Barendregt ◽  
Rachel H. Foster ◽  
Sarah Hill ◽  
June Atkinson ◽  
...  

2019 ◽  
Vol 26 (12) ◽  
pp. 1664-1674 ◽  
Author(s):  
Sophia R Newcomer ◽  
Stan Xu ◽  
Martin Kulldorff ◽  
Matthew F Daley ◽  
Bruce Fireman ◽  
...  

Abstract Objective In health informatics, there have been concerns with reuse of electronic health data for research, including potential bias from incorrect or incomplete outcome ascertainment. In this tutorial, we provide a concise review of predictive value–based quantitative bias analysis (QBA), which comprises epidemiologic methods that use estimates of data quality accuracy to quantify the bias caused by outcome misclassification. Target Audience Health informaticians and investigators reusing large, electronic health data sources for research. Scope When electronic health data are reused for research, validation of outcome case definitions is recommended, and positive predictive values (PPVs) are the most commonly reported measure. Typically, case definitions with high PPVs are considered to be appropriate for use in research. However, in some studies, even small amounts of misclassification can cause bias. In this tutorial, we introduce methods for quantifying this bias that use predictive values as inputs. Using epidemiologic principles and examples, we first describe how multiple factors influence misclassification bias, including outcome misclassification levels, outcome prevalence, and whether outcome misclassification levels are the same or different by exposure. We then review 2 predictive value–based QBA methods and why outcome PPVs should be stratified by exposure for bias assessment. Using simulations, we apply and evaluate the methods in hypothetical electronic health record–based immunization schedule safety studies. By providing an overview of predictive value–based QBA, we hope to bridge the disciplines of health informatics and epidemiology to inform how the impact of data quality issues can be quantified in research using electronic health data sources.
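The core arithmetic of predictive value-based QBA can be sketched as follows: observed outcome counts in each exposure group are scaled by the group-specific PPV to estimate the expected true counts, and the risk ratio is recomputed. The counts and PPVs below are hypothetical, and perfect sensitivity (no missed cases) is assumed for simplicity:

```python
def ppv_corrected_risk_ratio(a1, n1, a0, n0, ppv1, ppv0):
    """Correct a risk ratio for false-positive outcomes using
    exposure-stratified positive predictive values (PPVs).
    a1/a0: observed outcome counts among exposed/unexposed;
    n1/n0: group sizes. Assumes perfect sensitivity."""
    true_a1 = a1 * ppv1  # expected true positives, exposed
    true_a0 = a0 * ppv0  # expected true positives, unexposed
    return (true_a1 / n1) / (true_a0 / n0)

# Hypothetical data: nondifferential misclassification (equal PPVs)
# leaves this risk ratio unchanged; differential PPVs shift it.
rr_nondiff = ppv_corrected_risk_ratio(50, 1000, 25, 1000, 0.90, 0.90)
rr_diff = ppv_corrected_risk_ratio(50, 1000, 25, 1000, 0.80, 0.95)
print(rr_nondiff, rr_diff)
```

This illustrates why the tutorial recommends stratifying PPVs by exposure: a single pooled PPV would hide the differential case.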


Author(s):  
James C Doidge ◽  
Katie L Harron

Abstract Linked data are increasingly being used for epidemiological research, to enhance primary research, and in planning, monitoring and evaluating public policy and services. Linkage error (missed links between records that relate to the same person or false links between unrelated records) can manifest in many ways: as missing data, measurement error and misclassification, unrepresentative sampling, or as a special combination of these that is specific to analysis of linked data: the merging and splitting of people that can occur when two hospital admission records are counted as one person admitted twice if linked and two people admitted once if not. Through these mechanisms, linkage error can ultimately lead to information bias and selection bias; so identifying relevant mechanisms is key in quantitative bias analysis. In this article we introduce five key concepts and a study classification system for identifying which mechanisms are relevant to any given analysis. We provide examples and discuss options for estimating parameters for bias analysis. This conceptual framework provides the ‘links’ between linkage error, information bias and selection bias, and lays the groundwork for quantitative bias analysis for linkage error.
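The splitting mechanism described above can be made concrete with a toy simulation: each person has two admission records that should link, and each missed link splits one readmitted person into two apparently distinct people, inflating the person count. This is an illustrative model only, not a method from the article:

```python
import random

def apparent_person_count(n_people, miss_rate, seed=0):
    """Each of n_people has two admission records that should link.
    A missed link (probability miss_rate) splits one person into two,
    so each missed link adds one spurious person to the linked data."""
    rng = random.Random(seed)
    missed = sum(rng.random() < miss_rate for _ in range(n_people))
    return n_people + missed

# With a 10% missed-link rate, roughly 10% more "people" appear.
print(apparent_person_count(10_000, 0.10))
```

False links work in the opposite direction, merging distinct people; which mechanism dominates determines the direction of bias in a given analysis.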


2017 ◽  
Vol 99 ◽  
pp. 245-254 ◽  
Author(s):  
Christopher D. Ruark ◽  
Gina Song ◽  
Miyoung Yoon ◽  
Marc-André Verner ◽  
Melvin E. Andersen ◽  
...  

Author(s):  
James Doidge ◽  
Joan Morris ◽  
Katie Harron ◽  
Sarah Stevens ◽  
Ruth Gilbert

Background with rationale: Patient registers and electronic health records are both valuable resources for disease surveillance but can be limited by variation in data quality over time. Variation may stem from changes in data collection methods, in the accuracy or completeness of clinical information, or in the quality of patient identifiers and the linkage that relies on these. Main aim: By linking the National Down Syndrome Cytogenetic Register (NDSCR) to Hospital Episode Statistics for England (HES), we aimed to assess the quality of each and establish a consistent approach for analysis of trends in the prevalence of Down's syndrome among live births in England. Methods/Approach: Probabilistic record linkage of NDSCR to HES for the period 1998–2013, supported by linkage of babies to mothers within HES. Comparison of prevalence estimates in England using NDSCR only, HES data only, and linked data. Capture-recapture analysis and quantitative bias analysis were used to account for potential errors, including false-positive diagnostic codes, unrecorded diagnoses, and linkage error. Results: Analyses of single-source data indicated increasing live birth prevalence of Down's syndrome, particularly steep in the analysis of HES. Linked data indicated a contrastingly stable prevalence of 12.3 cases per 10,000 live births, with a plausible range of 11.6–12.7 cases per 10,000 live births allowing for potential errors. Conclusion: Case ascertainment in NDSCR improved slightly over time, creating a picture of slowly increasing prevalence. The emerging epidemic suggested by HES primarily reflects improving linkage within HES (assignment of unique patient identifiers to hospital episodes). Administrative data are valuable, but trends should be interpreted with caution and with assessment of data quality over time. Linked data with quantitative bias analysis can provide more robust estimation and, in this case, reassurance that the prevalence of Down's syndrome is not increasing.
Routine linkage of administrative and register data can enhance the value of each.
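The two-source capture-recapture step can be sketched with the standard Chapman estimator, which estimates the total number of cases from two overlapping but incomplete sources. The counts below are hypothetical, not the study's data:

```python
def chapman_estimate(n1, n2, m):
    """Chapman's nearly unbiased two-source capture-recapture estimator.
    n1, n2: cases ascertained by source 1 and source 2;
    m: cases found in both (i.e. successfully linked)."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical: 700 cases in a register, 650 in hospital records,
# 600 appearing in both after linkage.
print(round(chapman_estimate(700, 650, 600)))
```

Note that linkage error feeds directly into this estimator: missed links shrink m and inflate the estimated total, which is one reason the study combined capture-recapture with quantitative bias analysis.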


2019 ◽  
Author(s):  
Sam Harper

Observational studies are ambiguous, difficult, and necessary for epidemiology. At present there are concerns that the evidence produced by most observational studies in epidemiology is not credible and contributes to research waste. I argue that observational epidemiology could be improved by focusing greater attention on: 1) defining questions that make clear whether the inferential goal is descriptive or causal; 2) greater use of quantitative bias analysis and of alternative research designs that aim to weaken the assumptions needed to estimate causal effects; and 3) promoting, experimenting with, and perhaps institutionalizing reproducible research standards, as well as replication studies, to evaluate the fragility of study findings in epidemiology. Greater clarity, credibility, and transparency in observational epidemiology will help to provide reliable evidence that can serve as a basis for making decisions about clinical or population health interventions.


2020 ◽  
Vol 17 (1) ◽  
pp. 80-84
Author(s):  
Brigid M. Lynch ◽  
Suzanne C. Dixon-Suen ◽  
Andrea Ramirez Varela ◽  
Yi Yang ◽  
Dallas R. English ◽  
...  

Background: It is not always clear whether physical activity is causally related to health outcomes, or whether the associations are induced through confounding or other biases. Randomized controlled trials of physical activity are not feasible when outcomes of interest are rare or develop over many years. Thus, we need methods to improve causal inference in observational physical activity studies. Methods: We outline a range of approaches that can improve causal inference in observational physical activity research, and also discuss the impact of measurement error on results and methods to minimize this. Results: Key concepts and methods described include directed acyclic graphs, quantitative bias analysis, Mendelian randomization, and potential outcomes approaches which include propensity scores, g methods, and causal mediation. Conclusions: We provide a brief overview of some contemporary epidemiological methods that are beginning to be used in physical activity research. Adoption of these methods will help build a stronger body of evidence for the health benefits of physical activity.

