A simple method to estimate prediction intervals and predictive distributions: Summarizing meta‐analyses beyond means and confidence intervals

Author(s):  
Chia‐Chun Wang ◽  
Wen‐Chung Lee


2021 ◽  
Vol 39 (15_suppl) ◽  
pp. e18600-e18600
Author(s):  
Maryam Alasfour ◽  
Salman Alawadi ◽  
Malak AlMojel ◽  
Philippos Apolinario Costa ◽  
Priscila Barreto Coelho ◽  
...  

e18600 Background: Patients with coronavirus disease 2019 (COVID-19) and cancer have worse clinical outcomes than those without cancer. Primary studies have examined this population, but most had small sample sizes and conflicting results. Prior meta-analyses exclude most US and European data or examine only mortality. The present meta-analysis evaluates the prevalence of several clinical outcomes in cancer patients with COVID-19, incorporating newly emerging data from Europe and the US. Methods: A systematic search of PubMed, medRxiv, JMIR, and Embase by two independent investigators included peer-reviewed papers and preprints up to July 8, 2020. The primary outcome was mortality. Other outcomes were ICU and non-ICU admission; mild, moderate, and severe complications; ARDS; invasive ventilation; and clinically stable and clinically improved rates. Study quality was assessed with the Newcastle–Ottawa scale. A random-effects model was used to derive prevalence rates, their 95% confidence intervals (CI), and 95% prediction intervals (PI). Results: Thirty-four studies (N = 4,371) were included in the analysis. The mortality prevalence rate was 25.2% (95% CI: 21.1–29.7; 95% PI: 9.8–51.1; I² = 85.4%), with 11.9% ICU admissions (95% CI: 9.2–15.4; 95% PI: 4.3–28.9; I² = 77.8%) and 25.2% clinically stable (95% CI: 21.1–29.7; 95% PI: 9.8–51.1; I² = 85.4%). Furthermore, 42.5% developed severe complications (95% CI: 30.4–55.7; 95% PI: 8.2–85.9; I² = 94.3%), 22.7% developed ARDS (95% CI: 15.4–32.2; 95% PI: 5.8–58.6; I² = 82.4%), and 11.3% needed invasive ventilation (95% CI: 6.7–18.4; 95% PI: 2.3–41.1; I² = 79.8%). After follow-up, 49% had clinically improved (95% CI: 35.6–62.6; 95% PI: 9.8–89.4; I² = 92.5%). All outcomes had large I² values, indicating high heterogeneity among studies, and wide PIs, indicating high variability within outcomes.
Despite this variability, the mortality rate in cancer patients with COVID-19, even at the lower end of the PI (9.8%), is higher than the roughly 2% mortality rate of the COVID-19 population without cancer, though not as high as the roughly 25% that other meta-analyses conclude. Conclusions: Patients with cancer who develop COVID-19 have a higher probability of mortality than the general population with COVID-19, but possibly not as high as previous studies have suggested. A large proportion of them developed severe complications, but a larger proportion recovered. Prior meta-analyses reporting the prevalence of mortality and other outcomes did not provide prediction intervals, which compromises the clinical utilization of such results.
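The prediction intervals above follow the usual random-effects construction: pooled estimate ± t·√(τ² + SE²), computed on the logit scale for prevalences and back-transformed. A minimal stdlib-only Python sketch; the standard error and between-study SD below are assumed, illustrative values back-calculated from the reported mortality figures (the abstract does not report them directly):

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def prediction_interval(mu, se, tau, t_crit):
    """Higgins-style 95% PI on the logit scale: mu +/- t * sqrt(tau^2 + se^2)."""
    half = t_crit * math.sqrt(tau ** 2 + se ** 2)
    return inv_logit(mu - half), inv_logit(mu + half)

# Illustrative inputs consistent with the mortality result
# (25.2%, 95% CI 21.1-29.7, k = 34 studies); se and tau are assumptions.
mu = logit(0.252)   # pooled prevalence on the logit scale
se = 0.117          # standard error of mu (assumed)
tau = 0.54          # between-study SD on the logit scale (assumed)
t_crit = 2.037      # t quantile at 0.975, df = k - 2 = 32

lo, hi = prediction_interval(mu, se, tau, t_crit)
print(f"95% PI: {lo:.1%} to {hi:.1%}")  # close to the reported 9.8-51.1%
```

The wide PI relative to the CI is driven almost entirely by tau, the between-study heterogeneity, which the CI for the pooled mean ignores.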


2007 ◽  
Vol 22 (3) ◽  
pp. 637-650 ◽  
Author(s):  
Ian T. Jolliffe

Abstract When a forecast is assessed, a single value for a verification measure is often quoted. This is of limited use, as it needs to be complemented by some idea of the uncertainty associated with the value. If this uncertainty can be quantified, it is then possible to make statistical inferences based on the value observed. There are two main types of inference: confidence intervals can be constructed for an underlying “population” value of the measure, or hypotheses can be tested regarding the underlying value. This paper will review the main ideas of confidence intervals and hypothesis tests, together with the less well known “prediction intervals,” concentrating on aspects that are often poorly understood. Comparisons will be made between different methods of constructing confidence intervals—exact, asymptotic, bootstrap, and Bayesian—and the difference between prediction intervals and confidence intervals will be explained. For hypothesis testing, multiple testing will be briefly discussed, together with connections between hypothesis testing, prediction intervals, and confidence intervals.
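The distinction the paper draws between confidence intervals and prediction intervals can be made concrete for a normal sample: the CI brackets the population mean, while the prediction interval brackets a single future observation and is therefore wider. A short Python sketch (the data and the t critical value for df = 7 are illustrative):

```python
import math
import statistics

def ci_and_pi(x, t_crit):
    """95% CI for the mean vs. 95% prediction interval for one new
    observation, from a normal sample (t_crit for df = n - 1 supplied)."""
    n = len(x)
    xbar, s = statistics.fmean(x), statistics.stdev(x)
    ci_half = t_crit * s / math.sqrt(n)          # uncertainty in the mean
    pi_half = t_crit * s * math.sqrt(1 + 1 / n)  # adds one observation's spread
    return (xbar - ci_half, xbar + ci_half), (xbar - pi_half, xbar + pi_half)

data = [9.8, 10.4, 10.1, 9.5, 10.9, 10.2, 9.9, 10.6]  # illustrative sample
ci, pi = ci_and_pi(data, t_crit=2.365)  # t at 0.975, df = 7
print("CI:", ci)
print("PI:", pi)
```

The PI's extra "+1" under the square root reflects the variability of the new observation itself, which never shrinks as n grows; the CI's width, by contrast, goes to zero.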


2021 ◽  
Vol 8 ◽  
Author(s):  
Tie-Ning Zhang ◽  
Qi-Jun Wu ◽  
Ya-Shu Liu ◽  
Jia-Le Lv ◽  
Hui Sun ◽  
...  

Background: The etiology of congenital heart disease (CHD) has been extensively studied in the past decades. It is therefore critical to establish a clear hierarchy of evidence for the associations between environmental factors and CHD. Methods: Electronic searches in PubMed, Embase, Web of Science, and the Cochrane database were conducted from inception to April 20, 2020 for meta-analyses investigating the aforementioned topic. Results: Overall, 41 studies including a total of 165 meta-analyses of different environmental factors and CHD were examined, covering a wide range of risk factors. The summary random-effects estimates were significant at P < 0.05 in 63 meta-analyses (38%), and 15 associations (9%) were significant at P < 10⁻⁶. Of these meta-analyses, only one risk factor (severe obesity; relative risk: 1.38, 95% confidence interval: 1.30–1.47) had a summary association significant at P < 10⁻⁶, included more than 1,000 cases, had a 95% prediction interval excluding the null value, and was not suggestive of large heterogeneity (I² < 50%), small-study effects (P-value for Egger's test > 0.10), or excess significance (P > 0.10). Eight associations (5%) (including maternal lithium exposure, maternal obesity, maternal alcohol consumption, and maternal fever) had results that were significant at P < 10⁻⁶, included more than 1,000 cases, and had 95% prediction intervals excluding the null value (highly suggestive). Conclusion: This umbrella review shows that many environmental factors have substantial evidence in relation to the risk of developing CHD. More and better-designed studies are needed to establish robust evidence linking environmental factors and CHD. Systematic Review Registration: [PROSPERO], identifier [CRD42020193381].
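The grading criteria in the abstract (significance threshold, case count, prediction interval, heterogeneity, small-study effects, excess significance) amount to a simple decision rule. A hedged Python sketch; the top two tiers follow the abstract, while the "suggestive" and "weak" tiers are common umbrella-review conventions assumed here, not stated in the abstract:

```python
def grade_evidence(p, n_cases, pi_excludes_null, i2, egger_p, excess_p):
    """Tiered evidence grading in the style of umbrella reviews.
    'convincing' and 'highly suggestive' follow the abstract's criteria;
    the lower tiers are assumed conventions."""
    if (p < 1e-6 and n_cases > 1000 and pi_excludes_null
            and i2 < 50 and egger_p > 0.10 and excess_p > 0.10):
        return "convincing"
    if p < 1e-6 and n_cases > 1000 and pi_excludes_null:
        return "highly suggestive"
    if p < 1e-3 and n_cases > 1000:
        return "suggestive"
    if p < 0.05:
        return "weak"
    return "not significant"

# Illustrative inputs consistent with the severe-obesity association:
print(grade_evidence(p=1e-7, n_cases=2000, pi_excludes_null=True,
                     i2=40, egger_p=0.5, excess_p=0.4))  # 'convincing'
```

Note the role of the prediction interval here: a tiny P-value alone is not enough, since the PI asks whether a future study could still plausibly find a null effect.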


2019 ◽  
Vol 19 (1) ◽  
Author(s):  
Don van Ravenzwaaij ◽  
John P. A. Ioannidis

Abstract Background Until recently, a typical rule for the endorsement of new medications by the Food and Drug Administration has been the existence of at least two statistically significant clinical trials favoring the new medication. This rule has consequences for the true positive rate (endorsement of an effective treatment) and the false positive rate (endorsement of an ineffective treatment). Methods In this paper, we compare true positive and false positive rates for different evaluation criteria through simulations that rely on (1) conventional p-values; (2) confidence intervals based on meta-analyses assuming fixed or random effects; and (3) Bayes factors. We varied the threshold levels for statistical evidence, the thresholds for what constitutes a clinically meaningful treatment effect, and the number of trials conducted. Results Our results show that Bayes factors, meta-analytic confidence intervals, and p-values often have similar performance. Bayes factors may perform better when the number of trials conducted is high, when trials have small sample sizes, and when clinically meaningful effects are not small, particularly in fields where the number of non-zero effects is relatively large. Conclusions Thinking about realistic effect sizes in conjunction with desirable levels of statistical evidence, as well as quantifying statistical evidence with Bayes factors, may help improve decision-making in some circumstances.
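The two-positive-trials rule can be simulated directly: draw trials under a null or a non-null effect, declare each significant at two-sided α = 0.05, and endorse only if every trial is significant. A simplified stdlib-only sketch, not the paper's actual simulation code: it uses one-sample z-tests with known unit variance, and the sample size and effect size are illustrative assumptions:

```python
import random
import statistics

def trial_significant(effect, n, rng):
    """One trial: n observations ~ N(effect, 1); two-sided z-test at alpha = 0.05."""
    xbar = statistics.fmean(rng.gauss(effect, 1) for _ in range(n))
    return abs(xbar) * n ** 0.5 > 1.96

def endorsement_rate(effect, n=50, n_trials=2, nsim=2000, seed=1):
    """Fraction of simulated programs endorsed under the
    'all trials statistically significant' rule."""
    rng = random.Random(seed)
    hits = sum(
        all(trial_significant(effect, n, rng) for _ in range(n_trials))
        for _ in range(nsim)
    )
    return hits / nsim

print("false positive rate:", endorsement_rate(0.0))  # near 0.05^2 = 0.0025
print("true positive rate: ", endorsement_rate(0.5))
```

With two independent trials, the false positive rate drops to roughly α², while the true positive rate is per-trial power squared, which is why the rule trades sensitivity for specificity.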


2020 ◽  
Vol 26 (4) ◽  
pp. 325-334
Author(s):  
Ahad Malekzadeh ◽  
Seyed Mahdi Mahmoudi

Abstract In this paper, to construct confidence intervals (general and shortest) for quantiles of the normal distribution in one population, we present a pivotal quantity that has a non-central t distribution. In the case of two independent normal populations, we propose a confidence interval for the ratio of quantiles based on a generalized pivotal quantity, and we introduce a simple method for extracting its percentiles, from which a shorter confidence interval can be constructed. We also provide general and shorter confidence intervals using the method of variance estimates recovery. The performance of the five proposed methods is examined through simulations and examples.
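For the one-population case, the pivot is √n(x̄ − q_p)/s, which follows a non-central t distribution with n − 1 degrees of freedom and non-centrality −√n·z_p. The Python standard library has no non-central t quantile function, so the sketch below estimates the pivot's percentiles by Monte Carlo, in the spirit of the percentile-extraction idea in the abstract (sample data and simulation settings are illustrative):

```python
import random
import statistics

def quantile_ci(x, p=0.95, conf=0.95, nsim=10000, seed=7):
    """CI for the p-th quantile of a normal population from the pivot
    sqrt(n)*(xbar - q_p)/s = (Z - sqrt(n)*z_p) / sqrt(chi2_{n-1}/(n-1)),
    with pivot percentiles estimated by Monte Carlo."""
    rng = random.Random(seed)
    n = len(x)
    xbar, s = statistics.fmean(x), statistics.stdev(x)
    z_p = statistics.NormalDist().inv_cdf(p)
    draws = sorted(
        (rng.gauss(0, 1) - n ** 0.5 * z_p)
        / (sum(rng.gauss(0, 1) ** 2 for _ in range(n - 1)) / (n - 1)) ** 0.5
        for _ in range(nsim)
    )
    a = (1 - conf) / 2
    t_lo, t_hi = draws[int(a * nsim)], draws[int((1 - a) * nsim) - 1]
    # invert t_lo <= sqrt(n)*(xbar - q_p)/s <= t_hi for q_p:
    return xbar - s / n ** 0.5 * t_hi, xbar - s / n ** 0.5 * t_lo

rng = random.Random(3)
x = [rng.gauss(10, 2) for _ in range(30)]  # illustrative sample
lo, hi = quantile_ci(x)
print(f"95% CI for the 95th percentile: ({lo:.2f}, {hi:.2f})")
```

This is the same construction as a tolerance bound: the asymmetry of the resulting interval around x̄ + z_p·s comes from the skew of the non-central t distribution.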


2009 ◽  
Vol 217 (1) ◽  
pp. 15-26 ◽  
Author(s):  
Geoff Cumming ◽  
Fiona Fidler

Most questions across science call for quantitative answers, ideally, a single best estimate plus information about the precision of that estimate. A confidence interval (CI) expresses both efficiently. Early experimental psychologists sought quantitative answers, but for the last half century psychology has been dominated by the nonquantitative, dichotomous thinking of null hypothesis significance testing (NHST). The authors argue that psychology should rejoin mainstream science by asking better questions – those that demand quantitative answers – and using CIs to answer them. They explain CIs and a range of ways to think about them and use them to interpret data, especially by considering CIs as prediction intervals, which provide information about replication. They explain how to calculate CIs on means, proportions, correlations, and standardized effect sizes, and illustrate symmetric and asymmetric CIs. They also argue that information provided by CIs is more useful than that provided by p values, or by values of Killeen's p_rep, the probability of replication.
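The "CIs as prediction intervals" reading has a concrete consequence: an original 95% CI captures the mean of an exact replication much less than 95% of the time, about 83% for a z-based CI with known σ. A small stdlib-only simulation (the sample size, σ, and simulation settings are illustrative):

```python
import random
import statistics

def capture_rate(n=20, sigma=1.0, nsim=5000, seed=5):
    """Fraction of replication means falling inside the original study's
    z-based 95% CI (known sigma), i.e. the CI read as a prediction
    interval for a replication mean."""
    rng = random.Random(seed)
    half = 1.96 * sigma / n ** 0.5
    hits = 0
    for _ in range(nsim):
        m1 = statistics.fmean(rng.gauss(0, sigma) for _ in range(n))  # original
        m2 = statistics.fmean(rng.gauss(0, sigma) for _ in range(n))  # replication
        hits += abs(m2 - m1) < half
    return hits / nsim

print(capture_rate())  # theory: 2*Phi(1.96/sqrt(2)) - 1, about 0.834
```

The shortfall arises because two independent sampling errors are in play, the original's and the replication's, so the relevant spread is √2 times the CI's standard error.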


2008 ◽  
Vol 65 (3) ◽  
pp. 437-447 ◽  
Author(s):  
Tim J Haxton ◽  
C Scott Findlay

Systematic meta-analyses were conducted on the ecological impacts of water management, including effects of (i) dewatering on macroinvertebrates, (ii) a hypolimnetic release on downstream aquatic fish and macroinvertebrate communities, and (iii) flow modification on fluvial and habitat generalists. Our meta-analysis indicates, in general, that (i) macroinvertebrate abundance is lower in zones or areas that have been dewatered as a result of water fluctuations or low flows (overall effect size, –1.64; 95% confidence intervals (CIs), –2.51, –0.77), (ii) hypolimnetic draws are associated with reduced abundance of aquatic (fish and macroinvertebrate) communities (overall effect size, –0.84; 95% CIs, –1.38, –0.33) and macroinvertebrates (overall effect size, –0.73; 95% CIs, –1.24, –0.22) downstream of a dam, and (iii) altered flows are associated with reduced abundance of fluvial specialists (overall effect size, –0.42; 95% CIs, –0.81, –0.02) but not habitat generalists (overall effect size, –0.14; 95% CIs, –0.61, 0.32). Publication bias is evident in several of the meta-analyses; however, multiple experiments from a single study may be contributing to this bias. Fail-safe Ns suggest that many (>100) studies showing positive or no effects of water management on the selected endpoints would be required to qualitatively change the results of the meta-analysis, which in turn suggests that the conclusions are reasonably robust.
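The fail-safe N mentioned at the end is Rosenthal's file-drawer number: the count X of unpublished zero-effect studies that would pull the Stouffer combined z, Σz/√(k + X), below the one-sided significance cutoff. A minimal Python sketch (the study z-scores are hypothetical, not taken from this analysis):

```python
import math
import statistics

def fail_safe_n(z_scores, alpha=0.05):
    """Rosenthal's fail-safe N: solve sum(z) / sqrt(k + X) = z_crit for X,
    i.e. how many null-result studies the file drawer must hold to
    overturn a significant combined result."""
    z_crit = statistics.NormalDist().inv_cdf(1 - alpha)  # one-sided cutoff
    total_z = sum(z_scores)
    n_fs = (total_z / z_crit) ** 2 - len(z_scores)
    return max(0, math.ceil(n_fs))

# e.g. ten hypothetical studies, each with z = 2.0:
print(fail_safe_n([2.0] * 10))
```

A fail-safe N far larger than the number of included studies, as reported above (>100), is the usual informal criterion for calling a meta-analytic result robust to publication bias.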

