Meta-Analysis with Few Studies and Binary Data: A Bayesian Model Averaging Approach

Mathematics ◽  
2020 ◽  
Vol 8 (12) ◽  
pp. 2159
Author(s):  
Francisco-José Vázquez-Polo ◽  
Miguel-Ángel Negrín-Hernández ◽  
María Martel-Escobar

In meta-analysis, the existence of between-sample heterogeneity introduces model uncertainty, which must be incorporated into the inference. We argue that an alternative way to measure this heterogeneity is by clustering the samples and then determining the posterior probability of the cluster models. The meta-inference is obtained as a mixture of all the meta-inferences for the cluster models, where the mixing distribution is given by the posterior model probabilities. When there are few studies, the number of cluster configurations is manageable, and the meta-inferences can be drawn with Bayesian model averaging (BMA) techniques. Although this topic has been relatively neglected in the meta-analysis literature, the inference thus obtained accurately reflects the cluster structure of the samples used. In this paper, illustrative examples are given and analysed using real binary data.
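The mixture meta-inference described above can be sketched in a few lines: enumerate the cluster configurations (set partitions) of the studies, score each with its marginal likelihood, and mix the within-cluster posteriors with the posterior model probabilities. This is a minimal illustration assuming Beta(1, 1) priors and a uniform prior over configurations; the paper's objective priors differ, and all data values are invented.

```python
from math import lgamma, exp

data = [(8, 20), (9, 20), (15, 20)]  # hypothetical (successes, trials) per study

def log_marginal(s, n, a=1.0, b=1.0):
    # Log marginal likelihood of a Bernoulli sequence with s successes
    # in n trials under a Beta(a, b) prior.
    return (lgamma(a + b) - lgamma(a) - lgamma(b)
            + lgamma(a + s) + lgamma(b + n - s) - lgamma(a + b + n))

def partitions(items):
    # Enumerate all set partitions (cluster configurations) of the studies.
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for p in partitions(rest):
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        yield [[first]] + p

parts = list(partitions(list(range(len(data)))))
logs = []
for p in parts:
    # Marginal likelihood of a configuration: pool the data within
    # each cluster and multiply the cluster marginals.
    lm = 0.0
    for cluster in p:
        s = sum(data[i][0] for i in cluster)
        n = sum(data[i][1] for i in cluster)
        lm += log_marginal(s, n)
    logs.append(lm)

mx = max(logs)
raw = [exp(l - mx) for l in logs]
post = [r / sum(raw) for r in raw]  # posterior model probabilities

# Mixture meta-inference for study 0: posterior mean of its cluster's
# success probability, averaged with the posterior model probabilities.
bma_mean = 0.0
for p, w in zip(parts, post):
    cluster = next(c for c in p if 0 in c)
    s = sum(data[i][0] for i in cluster)
    n = sum(data[i][1] for i in cluster)
    bma_mean += w * (1 + s) / (2 + n)
```

With three studies there are only five configurations (the Bell number B(3) = 5), which is why the enumeration stays manageable when there are few studies.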

Author(s):  
Miguel-Angel Negrín-Hernández ◽  
María Martel-Escobar ◽  
Francisco-José Vázquez-Polo

In meta-analysis, the structure of the between-sample heterogeneity plays a crucial role in estimating the meta-parameter. A Bayesian meta-analysis for binary data has recently been proposed that measures this heterogeneity by clustering the samples and then determining the posterior probability of the cluster models through model selection. The meta-parameter is then estimated using Bayesian model averaging techniques. Although an objective Bayesian meta-analysis is proposed for each type of heterogeneity, this paper concentrates on priors over the models. We consider four alternative priors, motivated by reasonable but different assumptions. A frequentist validation with simulated data was carried out to analyse the properties of each prior distribution across different numbers of studies and sample sizes. The results show the importance of choosing an adequate model prior, as the posterior probabilities of the models are very sensitive to it. The hierarchical Poisson prior and the hierarchical uniform prior perform well when the true model is the homogeneity model, or when the sample sizes are large enough. However, the uniform prior can detect the true model when it is an intermediate model (neither homogeneity nor heterogeneity), even for small sample sizes and few studies. An illustrative example with real data is also given, showing the sensitivity of the estimation of the meta-parameter to the model prior.
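To see why the choice of model prior matters, it helps to compare the prior mass each one implicitly places on the number of clusters. The sketch below contrasts a uniform prior over all partitions with a hierarchical uniform prior reconstructed as uniform on the number of clusters k; this reconstruction is an assumption for illustration and may differ in detail from the authors' definitions.

```python
def stirling2(n, k):
    # Stirling number of the second kind: partitions of n studies
    # into exactly k clusters.
    if k == 0:
        return 1 if n == 0 else 0
    if k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def cluster_count_priors(n):
    # Implied prior on the number of clusters k under two model priors.
    bell = sum(stirling2(n, k) for k in range(1, n + 1))
    # Uniform over partitions: P(k) proportional to the count of
    # partitions with exactly k clusters.
    uniform = {k: stirling2(n, k) / bell for k in range(1, n + 1)}
    # Hierarchical uniform: uniform on k first, then uniform among
    # the partitions with that k.
    hierarchical_uniform = {k: 1.0 / n for k in range(1, n + 1)}
    return uniform, hierarchical_uniform

uniform, hier = cluster_count_priors(4)
```

For four studies the uniform prior gives P(k = 2) = 7/15 but P(k = 1) = 1/15, so it favours intermediate models over homogeneity a priori, consistent with the behaviour reported above.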


2018 ◽  
Vol 3 (2) ◽  
pp. 79-92
Author(s):  
Rolando Gonzales Martínez

The sensitivity of the wage curve to sample-selection and model uncertainty was evaluated with Bayesian methods. More than 8000 Heckit wage curves were estimated using data from the 2017 household survey of Bolivia. After averaging the estimates with the posterior probability of each model being true, the wage curve elasticity in Bolivia is close to -0.01. This result suggests that in this country the wage curve is inelastic and does not follow the international statistical regularity of wage curves. 


2021 ◽  
Author(s):  
Carlos R Oliveira ◽  
Eugene D Shapiro ◽  
Daniel M Weinberger

Vaccine effectiveness (VE) studies are often conducted after the introduction of new vaccines to ensure they provide protection in real-world settings. Although susceptible to confounding, the test-negative case-control study design is the most efficient method to assess VE post-licensure. Control of confounding is often needed during the analyses, which is most efficiently done through multivariable modeling. When a large number of potential confounders are being considered, it can be challenging to know which variables need to be included in the final model. This paper highlights the importance of considering model uncertainty by re-analyzing a Lyme VE study using several confounder selection methods. We propose an intuitive Bayesian Model Averaging (BMA) framework for this task and compare the performance of BMA to that of traditional single-best-model-selection methods. We demonstrate how BMA can be advantageous in situations when there is uncertainty about model selection by systematically considering alternative models and increasing transparency.
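The averaging step itself is simple once per-model estimates and posterior model probabilities are in hand: instead of reporting VE from a single selected model, mix the adjusted odds ratios across candidate confounder-adjustment models. All numbers below are hypothetical, and the posterior probabilities are assumed given (e.g. from an information-criterion approximation), not taken from the study.

```python
# Hypothetical adjusted odds ratios for the vaccine effect from four
# candidate confounder-adjustment models, with assumed posterior
# model probabilities.
models = [
    {"odds_ratio": 0.28, "posterior_prob": 0.41},
    {"odds_ratio": 0.31, "posterior_prob": 0.33},
    {"odds_ratio": 0.25, "posterior_prob": 0.18},
    {"odds_ratio": 0.35, "posterior_prob": 0.08},
]

# BMA averages over models rather than conditioning on one "best"
# model, so the VE estimate reflects model-selection uncertainty.
bma_or = sum(m["odds_ratio"] * m["posterior_prob"] for m in models)
vaccine_effectiveness = 1 - bma_or  # VE = 1 - odds ratio
```

A model that receives little posterior support contributes little to the estimate, which is the systematic, transparent alternative to discarding it outright.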


2019 ◽  
Vol 220 (2) ◽  
pp. 1368-1378
Author(s):  
M Bertin ◽  
S Marin ◽  
C Millet ◽  
C Berge-Thierry

SUMMARY In low-seismicity areas such as Europe, seismic records do not cover the whole range of variable configurations required for seismic hazard analysis. Usually, a set of empirical models established in comparable contexts (the Mediterranean Basin, northeast U.S.A., Japan, etc.) is considered through a logic-tree-based selection process. This approach relies mainly on the scientist's expertise and ignores the uncertainty in model selection. One important potential consequence of neglecting model uncertainty is that we assign more precision to our inference than the data warrant, which leads to overly confident decisions. In this paper, we investigate the Bayesian model averaging (BMA) approach, using nine ground-motion prediction equations (GMPEs) drawn from several databases. The BMA method has become an important tool for dealing with model uncertainty, especially in empirical settings with a large number of potential models and a relatively limited number of observations. Two numerical techniques for implementing BMA, based on the Markov chain Monte Carlo method and the maximum likelihood estimation approach, are presented and applied to around 1000 records from the RESORCE-2013 database. In the example considered, it is shown that BMA provides both a hierarchy of GMPEs and an improved out-of-sample predictive performance.
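A common shortcut for the maximum-likelihood route to BMA weights is the BIC approximation: exp(-ΔBIC/2) approximates each model's marginal likelihood up to a shared constant, so normalizing gives approximate posterior weights under a uniform model prior. The sketch below applies this to three candidate GMPEs; the log-likelihoods, parameter counts, and predictions are all hypothetical, not values from the paper.

```python
from math import exp, log

n_records = 1000  # sample size entering the BIC penalty
# Hypothetical fits: model name -> (maximized log-likelihood, #parameters)
fits = {"gmpe_a": (-1520.3, 5), "gmpe_b": (-1508.9, 7), "gmpe_c": (-1515.2, 6)}

# BIC = -2 * loglik + k * log(n); lower is better.
bic = {m: -2 * ll + k * log(n_records) for m, (ll, k) in fits.items()}
best = min(bic.values())

# exp(-deltaBIC / 2) approximates the marginal likelihood up to a
# common constant; normalize to get approximate posterior weights.
raw = {m: exp(-(b - best) / 2) for m, b in bic.items()}
total = sum(raw.values())
weights = {m: r / total for m, r in raw.items()}

# BMA prediction: mix each GMPE's (hypothetical) median ln-PGA
# prediction for one scenario with its posterior weight.
preds = {"gmpe_a": -4.1, "gmpe_b": -3.9, "gmpe_c": -4.0}
bma_pred = sum(weights[m] * preds[m] for m in fits)
```

The weights double as the "hierarchy of GMPEs" mentioned above: models with poor fit relative to their complexity receive near-zero weight rather than a hand-assigned logic-tree branch probability.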


2020 ◽  
Vol 58 (3) ◽  
pp. 644-719 ◽  
Author(s):  
Mark F. J. Steel

The method of model averaging has become an important tool for dealing with model uncertainty, for example in situations where a large number of competing theories exist, as is common in economics. Model averaging is a natural and formal response to model uncertainty in a Bayesian framework, and most of the paper deals with Bayesian model averaging. The important role of the prior assumptions in these Bayesian procedures is highlighted. In addition, frequentist model averaging methods are also discussed. Numerical techniques to implement these methods are explained, and I point the reader to some freely available computational resources. The main focus is on uncertainty regarding the choice of covariates in normal linear regression models, but the paper also covers other, more challenging settings, with particular emphasis on sampling models commonly used in economics. Applications of model averaging in economics are reviewed and discussed in a wide range of areas, including growth economics, production modeling, finance, and forecasting macroeconomic quantities. (JEL C11, C15, C20, C52, O47)


2019 ◽  
Vol 51 (02) ◽  
pp. 249-266
Author(s):  
Nicholas D. Payne ◽  
Berna Karali ◽  
Jeffrey H. Dorfman

Basis forecasting is important for producers and consumers of agricultural commodities in their risk management decisions. However, the best-performing forecasting model found in previous studies varies substantially. Given this inconsistency, we take a Bayesian approach, which addresses model uncertainty by combining forecasts from different models. Results show that model performance differs by location and forecast horizon, but the forecast from the Bayesian approach often performs favorably. In some cases, however, simple moving averages have lower forecast errors. Besides the nearby basis, we also examine the basis in a specific month and find that regression-based models outperform others at longer horizons.

