Best Estimate Selection Bias in the Value of a Statistical Life

2017 ◽  
Vol 9 (2) ◽  
pp. 205-246 ◽  
Author(s):  
W. Kip Viscusi

Selection of the best estimates of economic parameters frequently relies on the “best estimates” or a meta-analysis of the “best set” of parameter estimates from the literature. Using an all-set dataset consisting of all reported estimates of the value of a statistical life (VSL) as well as a best-set sample of the best estimates from these studies, this article estimates statistically significant publication selection biases in each case. Biases are much greater for the best-set sample, as one might expect, given the subjective nature of the best-set selection process. For the all-set sample, the mean bias-corrected estimate of the VSL for the preferred specification is $8.1 million for the whole sample and $11.4 million based on the CFOI data, while for the best-set results, the whole sample value is $3.5 million, and the CFOI data estimate is $4.4 million. Previous estimates of huge publication selection biases in the VSL estimates are attributable to these studies’ reliance on best-set samples.
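The regression-based style of bias correction applied in such studies can be illustrated with a simple precision-effect test (PET): reported effects are regressed on their standard errors, and the intercept serves as the bias-corrected estimate. The sketch below uses simulated numbers invented for illustration, not the paper's VSL data.

```python
import numpy as np

# Precision-effect test (PET) sketch: selection on significance makes
# reported effects correlate with their standard errors; the intercept of
# a weighted regression of effect on SE estimates the bias-corrected effect.
# All numbers are simulated -- illustrative only.
rng = np.random.default_rng(0)

true_effect = 8.0                      # "true" underlying effect
se = rng.uniform(0.5, 4.0, size=200)   # standard errors across studies
bias = 1.5                             # selection pushes estimates up with SE
effects = true_effect + bias * se + rng.normal(0, se)

# Weighted least squares with weights 1/se^2, as in PET
X = np.column_stack([np.ones_like(se), se])
W = np.sqrt(1.0 / se**2)
coef, *_ = np.linalg.lstsq(X * W[:, None], effects * W, rcond=None)
intercept, slope = coef

print(round(intercept, 2))         # bias-corrected estimate, near 8.0
print(round(effects.mean(), 2))    # naive mean, inflated well above 8.0
```

The naive mean over-states the effect because high-variance studies only get reported when they land high; the intercept extrapolates to a hypothetical study with zero standard error.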

2021 ◽  
Vol 34 (Supplement_1) ◽  
Author(s):  
Marina Orlandini ◽  
Maria Carolina Serafim ◽  
Letícia Datrino ◽  
Clara Santos ◽  
Luca Tristão ◽  
...  

Abstract Megaesophagus progresses to sigmoid megaesophagus (SM) in 10–15% of patients, presenting with tortuosity and a sigmoid-colon-like appearance. Esophagectomy is the treatment of choice but is associated with high complication and mortality rates. To avoid the morbidity inherent to esophagectomy, several authors recommend Heller myotomy (HM) with the pull-down technique for SM, mainly for elderly patients and those with comorbidities. This systematic review and meta-analysis is the first to analyze the effectiveness of HM for treating SM. Methods A systematic review was conducted in PubMed, Embase, Cochrane Library Central, Lilacs (BVS), and a manual search of references. Inclusion criteria were: a) clinical trials, cohort studies, and case series; b) patients with SM and esophageal diameter ≥ 6 cm; and c) patients undergoing primary myotomy. Exclusion criteria were: a) reviews, case reports, cross-sectional studies, editorials, letters, congress abstracts, and full-text unavailability; b) animal studies; c) previous surgical treatment for achalasia; and d) pediatric studies. There were no restrictions on language or date of publication, and no filters were applied in the selection process. A random-effects model with a 95% confidence interval (CI) was used. Results Sixteen articles were selected, encompassing 231 patients. The mean age ranged from 36 to 61 years, and the mean follow-up ranged from 16 to 109 months. The analyzed outcomes included mortality, complications (pneumonia, pneumothorax, gastroesophageal reflux), need for reintervention (remyotomy, dilation, and esophagectomy), and results classified as 'good' and 'excellent'. The mortality rate was 0.035 (CI: 0.017–0.07; p < 0.01). The complication rate was 0.08 (CI: 0.04–0.153; p = 0.01). The retreatment rate was 0.161 (CI: 0.053–0.399; p < 0.01). The probability of good or excellent outcomes after myotomy was 0.762 (CI: 0.693–0.819; p < 0.01).
Conclusion Heller myotomy is an option for avoiding esophagectomy in achalasia, with low morbidity and mortality and good results. It is effective for most patients but fails in a minority, who require retreatment, whether remyotomy, endoscopic treatment, or esophagectomy.
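The pooled event rates reported above come from a random-effects model. A minimal sketch of that kind of pooling — inverse-variance weighting of logit-transformed proportions with a DerSimonian-Laird between-study variance — is shown below; the study counts are hypothetical, not the review's data.

```python
import math

# Random-effects pooling of event proportions, DerSimonian-Laird style.
# The (events, n) pairs below are invented for illustration.
studies = [(1, 20), (2, 35), (0, 15), (3, 40), (1, 25)]

ys, vs = [], []
for events, n in studies:
    e, f = events + 0.5, n - events + 0.5      # continuity correction
    ys.append(math.log(e / f))                 # logit proportion
    vs.append(1 / e + 1 / f)                   # variance of the logit

# Fixed-effect weights, heterogeneity statistic Q, and tau^2
w = [1 / v for v in vs]
ybar = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
Q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, ys))
df = len(ys) - 1
C = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)                  # between-study variance

# Random-effects pooled logit, back-transformed to a proportion
wr = [1 / (v + tau2) for v in vs]
pooled_logit = sum(wi * yi for wi, yi in zip(wr, ys)) / sum(wr)
pooled_p = 1 / (1 + math.exp(-pooled_logit))
print(round(pooled_p, 3))   # pooled event rate on the proportion scale
```

The tau² term widens each study's weight denominator, so heterogeneous studies are pooled less aggressively than under a fixed-effect model.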


2020 ◽  
Vol 3 (2) ◽  
pp. 200-215
Author(s):  
Max Hinne ◽  
Quentin F. Gronau ◽  
Don van den Bergh ◽  
Eric-Jan Wagenmakers

Many statistical scenarios initially involve several candidate models that describe the data-generating process. Analysis often proceeds by first selecting the best model according to some criterion and then learning about the parameters of this selected model. Crucially, however, in this approach the parameter estimates are conditioned on the selected model, and any uncertainty about the model-selection process is ignored. An alternative is to learn the parameters for all candidate models and then combine the estimates according to the posterior probabilities of the associated models. This approach is known as Bayesian model averaging (BMA). BMA has several important advantages over all-or-none selection methods, but has been used only sparingly in the social sciences. In this conceptual introduction, we explain the principles of BMA, describe its advantages over all-or-none model selection, and showcase its utility in three examples: analysis of covariance, meta-analysis, and network analysis.
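The core of BMA can be sketched in a few lines: each candidate model's estimate is weighted by its posterior model probability, and the model-averaged variance adds the between-model disagreement to the within-model variance. The probabilities and estimates below are hypothetical.

```python
# Bayesian model averaging sketch: rather than conditioning on one
# selected model, weight each model's posterior-mean estimate by its
# posterior model probability. All numbers are hypothetical.
models = {
    "M1": {"post_prob": 0.60, "mean": 0.30, "var": 0.010},
    "M2": {"post_prob": 0.30, "mean": 0.10, "var": 0.015},
    "M3": {"post_prob": 0.10, "mean": 0.00, "var": 0.020},
}

# Model-averaged posterior mean: sum_k p(M_k | data) * E[theta | data, M_k]
bma_mean = sum(m["post_prob"] * m["mean"] for m in models.values())

# Model-averaged variance: within-model variance plus between-model spread
bma_var = sum(
    m["post_prob"] * (m["var"] + (m["mean"] - bma_mean) ** 2)
    for m in models.values()
)

print(round(bma_mean, 3))  # 0.21
print(round(bma_var, 4))
```

Note that `bma_var` exceeds every individual model's variance here: the uncertainty about which model is correct, ignored by all-or-none selection, is exactly the extra spread term.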


2019 ◽  
Author(s):  
Max Hinne ◽  
Quentin Frederik Gronau ◽  
Don van den Bergh ◽  
Eric-Jan Wagenmakers

Many statistical scenarios initially involve several candidate models that describe the data-generating process. Analysis often proceeds by first selecting the best model according to some criterion, and then learning about the parameters of this selected model. Crucially, however, in this approach the parameter estimates are conditioned on the selected model, and any uncertainty about the model selection process is ignored. An alternative is to learn the parameters for all candidate models, and then combine the estimates according to the posterior probabilities of the associated models. The result is known as Bayesian model averaging (BMA). BMA has several important advantages over all-or-none selection methods, but has been used only sparingly in the social sciences. In this conceptual introduction we explain the principles of BMA, describe its advantages over all-or-none model selection, and showcase its utility for three examples: ANCOVA, meta-analysis, and network analysis.


2020 ◽  
Author(s):  
Jonathan Z Bakdash ◽  
Laura Ranee Marusich ◽  
Jared Kenworthy ◽  
Elyssa Twedt ◽  
Erin Zaroukian

Whether in meta-analysis or single experiments, selecting results based on statistical significance leads to overestimated effect sizes, impeding falsification. We critique a quantitative synthesis that used significance to score and select previously published effects for situation awareness-performance associations (Endsley, 2019). How much does selection using statistical significance quantitatively impact results in a meta-analytic context? We evaluate and compare results using significance-filtered effects versus analyses with all effects as-reported. Endsley reported high predictiveness scores and large positive mean correlations but used atypical methods: the hypothesis was used to select papers and effects. Papers were assigned the maximum predictiveness scores if they contained at-least-one significant effect, yet most papers reported multiple effects, and the number of non-significant effects did not impact the score. Thus, the predictiveness score was rarely less than the maximum. In addition, only significant effects were included in Endsley’s quantitative synthesis. Filtering excluded half of all reported effects, with guaranteed minimum effect sizes based on sample size. Results for filtered compared to as-reported effects clearly diverged. Compared to the mean of as-reported effects, the filtered mean was overestimated by 56%. Furthermore, 92% (or 222 out of 241) of the as-reported effects were below the mean of filtered effects. We conclude that outcome-dependent selection of effects is circular, predetermining results and running contrary to the purpose of meta-analysis. Instead of using significance to score and filter effects, meta-analyses should follow established research practices.
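The inflation the authors describe is easy to reproduce by simulation. The sketch below, with illustrative parameters rather than the synthesis' data, filters simulated study means on two-sided significance and compares the filtered mean to the mean of all effects as-reported.

```python
import numpy as np

# Simulating the selection problem: keep only statistically significant
# study means and the filtered average overestimates the true effect.
# Parameters are illustrative, not drawn from the critiqued synthesis.
rng = np.random.default_rng(42)
true_effect, n_per_study, n_studies = 0.2, 30, 2000

se = 1 / np.sqrt(n_per_study)                         # SE of a study mean
study_means = rng.normal(true_effect, se, n_studies)  # one mean per study
significant = np.abs(study_means / se) > 1.96         # two-sided test, alpha=.05

all_mean = study_means.mean()
filtered_mean = study_means[significant].mean()
print(round(all_mean, 3), round(filtered_mean, 3))
# The filtered mean lands far above both the all-effects mean and the
# true effect, because only means clearing ~1.96*SE survive the filter.
```

This is the "guaranteed minimum effect size based on sample size" in action: with n = 30, no effect smaller than roughly 0.36 in absolute value can pass the filter, regardless of the true effect.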



Author(s):  
Ken Williams ◽  
Patrick Lawler ◽  
Allan D Sniderman

Our aim was to compare the implications of targeting LDL-lowering treatment to LDL-C, non-HDL-C, or apoB, based on a recent meta-analysis of all published epidemiological studies reporting vascular risk associations for all three markers, which found overall per-standard-deviation relative risk ratios (RRR) of 1.25 for LDL-C, 1.31 for non-HDL-C, and 1.41 for apoB. Our approach was to project the 10-year incidence of CHD events from NHANES 2005-2006, with 1697 subjects representing over 190 million adult US residents, under different scenarios defined by the target marker and the percentage of people treated. Framingham equations were used to estimate each subject's 10-year CHD risk. We estimated each subject's risk if treated by dividing their initial risk estimate by the marker's RRR raised to the number of standard deviations (LDL-C: 35 mg/dl; non-HDL-C: 42 mg/dl; apoB: 27 mg/dl) contained in 40% of the marker's measured level. The potential number of CHD cases prevented by treatment was calculated by multiplying the difference between initial and treated risk by the number of people represented. The mean 10-year CHD risk was 7.00%, indicating that 13.3 million incident CHD cases would be expected over the subsequent 10 years with no change in treatment. The expected numbers of incident CHD cases prevented under different treatment scenarios are shown in the figure. These results support recommendations to use apoB in clinical practice to identify candidates for LDL-lowering therapy and to target their treatment.
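The risk projection described above can be reproduced arithmetically. In the sketch below, the per-SD relative risk ratios and standard deviations are those quoted in the abstract, while the subject's marker levels and initial risk are hypothetical values chosen for illustration.

```python
# Projecting treated risk for one hypothetical subject:
#   treated_risk = initial_risk / RRR ** (0.40 * level / SD)
# RRRs and SDs are from the abstract; the levels and initial risk are
# invented for this example.
markers = {  # per-SD relative risk ratio, SD in mg/dl, hypothetical level
    "LDL-C":     {"rrr": 1.25, "sd": 35, "level": 130},
    "non-HDL-C": {"rrr": 1.31, "sd": 42, "level": 160},
    "apoB":      {"rrr": 1.41, "sd": 27, "level": 100},
}
initial_risk = 0.07  # hypothetical 10-year CHD risk before treatment

results = {
    name: initial_risk / m["rrr"] ** (0.40 * m["level"] / m["sd"])
    for name, m in markers.items()
}
for name, risk in results.items():
    print(name, round(risk, 4))
```

With these inputs, targeting apoB yields the lowest projected treated risk, mirroring the abstract's conclusion that the marker with the strongest per-SD association buys the largest projected reduction.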




2020 ◽  
Vol 228 (1) ◽  
pp. 14-24 ◽  
Author(s):  
Tanja Burgard ◽  
Michael Bošnjak ◽  
Nadine Wedderhoff

Abstract. A meta-analysis was performed to determine whether response rates to online psychology surveys have decreased over time and what effect specific design characteristics (contact mode, burden of participation, and incentives) have on response rates. The meta-analysis is restricted to samples of adults with depression or generalized anxiety disorder. Time and study-design effects are tested using mixed-effects meta-regressions as implemented in the metafor package in R. The mean response rate of the 20 studies fulfilling our meta-analytic inclusion criteria is approximately 43%. Response rates are lower in more recently conducted surveys and in surveys employing longer questionnaires. Furthermore, we found that personal invitations, for example via telephone or face-to-face contact, yielded higher response rates than e-mail invitations. As predicted by reinforcement sensitivity theory, no effect of incentives on survey participation could be observed in this specific group (which scores high on neuroticism).
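The authors fit their meta-regressions with R's metafor package. As a rough Python analogue — a fixed-effect sketch with invented studies, omitting the between-study variance term that metafor's mixed-effects fit would add — one can regress logit response rates on survey year with inverse-variance weights:

```python
import numpy as np

# Fixed-effect meta-regression sketch: logit response rate vs. survey
# year, inverse-variance weighted. The studies below are invented to
# illustrate the declining-response-rate pattern the abstract reports.
studies = [  # (year, responders, invited)
    (2005, 300, 520), (2008, 250, 500), (2011, 210, 480),
    (2014, 190, 470), (2017, 160, 450), (2020, 140, 440),
]

years = np.array([s[0] for s in studies], dtype=float)
y = np.array([np.log((r + 0.5) / (n - r + 0.5)) for _, r, n in studies])
v = np.array([1 / (r + 0.5) + 1 / (n - r + 0.5) for _, r, n in studies])

# Weighted least squares on a centered year covariate
X = np.column_stack([np.ones_like(years), years - years.mean()])
W = np.diag(1 / v)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(beta[1])  # negative slope: response rates decline over time
```

A full mixed-effects fit would estimate a residual heterogeneity variance and add it to `v` before weighting, exactly as in the random-effects pooling case.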


1995 ◽  
Vol 74 (04) ◽  
pp. 1064-1070 ◽  
Author(s):  
Marco Cattaneo ◽  
Alan S Harris ◽  
Ulf Strömberg ◽  
Pier Mannuccio Mannucci

Summary The effect of desmopressin (DDAVP) on reducing postoperative blood loss after cardiac surgery has been studied in several randomized clinical trials, with conflicting outcomes. Since most trials had insufficient statistical power to detect true differences in blood loss, we performed a meta-analysis of data from the relevant studies. Seventeen randomized, double-blind, placebo-controlled trials were analyzed, which included 1171 patients undergoing cardiac surgery for various indications; 579 of them were treated with desmopressin and 592 with placebo. Efficacy parameters were blood loss volumes and transfusion requirements. Desmopressin significantly reduced postoperative blood loss by 9% but had no statistically significant effect on transfusion requirements. A subanalysis revealed that desmopressin had no protective effect in trials in which the mean blood loss of placebo-treated patients fell in the lower and middle thirds of the distribution of blood losses (687-1108 ml/24 h). In contrast, in trials in which the mean blood loss of placebo-treated patients fell in the upper third of the distribution (>1109 ml/24 h), desmopressin significantly decreased postoperative blood loss, by 34%. Insufficient data were available to perform a subanalysis of transfusion requirements. Therefore, desmopressin significantly reduces blood loss only in cardiac operations that induce excessive blood loss. Further studies are called for to validate the results of this meta-analysis and to identify predictors of excessive blood loss after cardiac surgery.
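The subanalysis logic — splitting trials by the placebo arms' mean blood loss and pooling the treatment effect within thirds of that distribution — can be sketched as follows. The trial values are hypothetical, not the meta-analysis data.

```python
# Tertile subanalysis sketch: trials where the placebo arm bled most are
# where the treatment effect concentrates. All values are hypothetical.
trials = [  # (placebo-arm mean blood loss ml/24h, % reduction with DDAVP)
    (650, 2), (700, -1), (900, 5), (1000, 8), (1100, 10),
    (1200, 25), (1400, 30), (1600, 40), (1800, 38),
]
trials.sort(key=lambda t: t[0])
k = len(trials) // 3
lower, middle, upper = trials[:k], trials[k:2 * k], trials[2 * k:]

def mean_reduction(group):
    """Unweighted mean percent reduction within a subgroup of trials."""
    return sum(pct for _, pct in group) / len(group)

print(round(mean_reduction(lower + middle), 1))  # little effect
print(round(mean_reduction(upper), 1))           # large effect
```

A real subanalysis would pool with inverse-variance weights rather than an unweighted mean, but the grouping step is the same.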

