Estimating the Value of a Statistical Life: The Importance of Omitted Variables and Publication Bias

2004 ◽  
Author(s):  
Orley C. Ashenfelter ◽  
Michael Greenstone


2019 ◽  
Vol 11 (4) ◽  
pp. 350-374 ◽  
Author(s):  
Jonathan M. Lee ◽  
Laura O. Taylor

The value of a statistical life (VSL) is a critical driver of estimated benefits for federal policies designed to improve human health, safety, and environmental exposures. The vast majority of empirical evidence on the magnitude of the VSL arises from hedonic wage models that have been plagued by measurement error and omitted variables. To address these limitations, this paper employs randomly assigned workplace safety inspections to instrument for plant-level risks in a quasi-experimental design. We provide credible causal evidence for the existence of compensating wages for fatality risks and estimate a VSL between $8 million and $10 million (in 2016 dollars). (JEL J17, J28, J31, K32)
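The instrumental-variables logic behind this design can be illustrated with a small two-stage least squares (2SLS) sketch. All data and parameter values below are hypothetical; this is a generic illustration of why a randomly assigned instrument removes omitted-variable bias in a hedonic wage regression, not the authors' actual estimation code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical plant-level data. An unobserved plant characteristic
# raises both fatality risk and wages, biasing naive OLS.
u = rng.normal(size=n)                      # unobserved confounder
inspected = rng.binomial(1, 0.5, size=n)    # randomly assigned inspection
risk = 2.0 - 1.0 * inspected + 0.5 * u + 0.5 * rng.normal(size=n)

# Log wages: a compensating differential of 0.02 log points per unit
# of risk, plus the confounder's direct effect on wages.
log_wage = 3.0 + 0.02 * risk + 0.1 * u + 0.05 * rng.normal(size=n)

def ols(y, X):
    """OLS coefficients via least squares."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

X = np.column_stack([np.ones(n), risk])
Z = np.column_stack([np.ones(n), inspected])

# Naive OLS: biased upward because risk proxies for the confounder.
beta_ols = ols(log_wage, X)[1]

# 2SLS: first stage predicts risk from the inspection instrument;
# second stage regresses wages on the predicted risk.
risk_hat = Z @ ols(risk, Z)
beta_2sls = ols(log_wage, np.column_stack([np.ones(n), risk_hat]))[1]

print(f"OLS risk coefficient:  {beta_ols:.4f}")
print(f"2SLS risk coefficient: {beta_2sls:.4f} (true value 0.02)")
```

Because the instrument is assigned independently of the unobserved plant characteristic, the 2SLS estimate recovers the true compensating differential while OLS overstates it.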


2019 ◽  
Vol 227 (4) ◽  
pp. 261-279 ◽  
Author(s):  
Frank Renkewitz ◽  
Melanie Keiner

Abstract. Publication biases and questionable research practices are assumed to be two of the main causes of low replication rates. Both of these problems lead to severely inflated effect size estimates in meta-analyses. Methodologists have proposed a number of statistical tools to detect such bias in meta-analytic results. We present an evaluation of the performance of six of these tools. To assess the Type I error rate and the statistical power of these methods, we simulated a large variety of literatures that differed with regard to true effect size, heterogeneity, number of available primary studies, and sample sizes of these primary studies; furthermore, simulated studies were subjected to different degrees of publication bias. Our results show that across all simulated conditions, no method consistently outperformed the others. Additionally, all methods performed poorly when true effect sizes were heterogeneous or primary studies had a small chance of being published, irrespective of their results. This suggests that in many actual meta-analyses in psychology, bias will remain undiscovered no matter which detection method is used.


2015 ◽  
pp. 62-85 ◽  
Author(s):  
T. Zhuravleva

This paper surveys the literature on public-private sector wage differentials in the Russian labor market. We give an overview of the main results and problems of the existing research. The authors unanimously confirm that private sector workers in Russia receive higher wages than their public sector counterparts. Estimates of this "premium" vary between 7% and 40%. A correct evaluation of the "premium" is subject to debate and is a particular case of the more general econometric problem of estimating wage differentials. The main difficulties are related to data limitations, self-selection, and omitted variables. The reasons for the existence of a stable private sector "premium" in Russia have not been fully investigated.


2019 ◽  
Author(s):  
Amanda Kvarven ◽  
Eirik Strømland ◽  
Magnus Johannesson

Andrews & Kasy (2019) propose an approach for adjusting effect sizes in meta-analysis for publication bias. We use the Andrews-Kasy estimator to adjust the results of 15 meta-analyses and compare the adjusted results to 15 large-scale multiple-labs replication studies estimating the same effects. The pre-registered replications provide precisely estimated effect sizes that do not suffer from publication bias. The Andrews-Kasy approach leads to a moderate reduction of the inflated effect sizes in the meta-analyses. However, the approach still overestimates effect sizes by a factor of about two or more and has an estimated false positive rate of between 57% and 100%.


2017 ◽  
Author(s):  
Nicholas Alvaro Coles ◽  
Jeff T. Larsen ◽  
Heather Lench

The facial feedback hypothesis suggests that an individual’s experience of emotion is influenced by feedback from their facial movements. To evaluate the cumulative evidence for this hypothesis, we conducted a meta-analysis on 286 effect sizes derived from 138 studies that manipulated facial feedback and collected emotion self-reports. Using random effects meta-regression with robust variance estimates, we found that the overall effect of facial feedback was significant, but small. Results also indicated that feedback effects are stronger in some circumstances than others. We examined 12 potential moderators, and three were associated with differences in effect sizes. 1. Type of emotional outcome: Facial feedback influenced emotional experience (e.g., reported amusement) and, to a greater degree, affective judgments of a stimulus (e.g., the judged funniness of a cartoon). Three publication bias detection methods did not reveal evidence of publication bias in studies examining the effects of facial feedback on emotional experience, but all three methods revealed evidence of publication bias in studies examining affective judgments. 2. Presence of emotional stimuli: Facial feedback effects on emotional experience were larger in the absence of emotionally evocative stimuli (e.g., cartoons). 3. Type of stimuli: When participants were presented with emotionally evocative stimuli, facial feedback effects were larger in the presence of some types of stimuli (e.g., emotional sentences) than others (e.g., pictures). The available evidence supports the facial feedback hypothesis’ central claim that facial feedback influences emotional experience, although these effects tend to be small and heterogeneous.
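The random-effects pooling that underlies such a meta-analysis can be sketched with the DerSimonian-Laird estimator, a standard choice when study effects are heterogeneous. The effect sizes below are hypothetical, and this generic estimator stands in for the authors' robust-variance meta-regression, which is more elaborate.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects meta-analysis with the DerSimonian-Laird
    estimator of between-study heterogeneity (tau^2)."""
    effects = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                  # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)       # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Hypothetical facial-feedback effect sizes (Cohen's d) and variances.
d = [0.60, 0.05, 0.45, -0.10, 0.30, 0.15]
v = [0.02, 0.01, 0.05, 0.03, 0.02, 0.04]
pooled, se, tau2 = dersimonian_laird(d, v)
print(f"pooled d = {pooled:.3f} (SE {se:.3f}), tau^2 = {tau2:.3f}")
```

A positive tau^2 signals the kind of between-study heterogeneity the abstract describes, which is why the pooled estimate down-weights precise studies less than a fixed-effect model would.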

