- Bayesian Hypothesis Testing and the Bayes Factor

2014 ◽  
pp. 252-291


2019 ◽  
Author(s):  
Donald Ray Williams ◽  
Joris Mulder

Gaussian graphical models (GGMs) allow for learning conditional independence structures that are encoded by partial correlations. Whereas there are several \proglang{R} packages for classical (i.e., frequentist) methods, there are only two that implement a Bayesian approach, and these are exclusively focused on identifying the graphical structure, that is, detecting non-zero effects. The \proglang{R} package \pkg{BGGM} not only fills this gap, but it also includes novel Bayesian methodology for extending inference beyond identifying non-zero relations. \pkg{BGGM} is built around two Bayesian approaches for inference: estimation and hypothesis testing. The former focuses on the posterior distribution and includes extensions to assess predictability, as well as methodology to compare partial correlations. The latter includes methods for Bayesian hypothesis testing, in both exploratory and confirmatory contexts, with the novel matrix-$F$ prior distribution. This allows for testing order- and equality-constrained hypotheses, as well as a combination of both, with the Bayes factor. Further, there are two approaches for comparing any number of GGMs, with either the posterior predictive distribution or Bayesian hypothesis testing. This work describes the software implementation of these methods. We end by discussing future directions for \pkg{BGGM}.


2019 ◽  
Author(s):  
Don van Ravenzwaaij ◽  
Eric-Jan Wagenmakers

Tendeiro and Kiers (2019) provide a detailed and scholarly critique of Null Hypothesis Bayesian Testing (NHBT) and its central component, the Bayes factor, which allows researchers to update knowledge and quantify statistical evidence. Tendeiro and Kiers conclude that NHBT constitutes an improvement over frequentist p-values, but they primarily elaborate on a list of eleven ‘issues’ with NHBT. In this commentary, we provide context for each issue and conclude that many of these issues may in fact be conceived of as pronounced advantages of NHBT.


Author(s):  
M. D. Edge

Bayesian methods allow researchers to combine precise descriptions of prior beliefs with new data in a principled way. The main object of interest in Bayesian statistics is the posterior distribution, which describes the uncertainty associated with parameters given prior beliefs about them and the observed data. The posterior can be difficult to compute mathematically, but computational methods can give arbitrarily good approximations in most cases. Bayesian point and interval estimates are features of the posterior, such as measures of its central tendency or intervals into which the parameter falls with specified probability. Bayesian hypothesis testing is complicated and controversial, but one relevant tool is the Bayes factor, which compares the probability of observing the data under a pair of distinct hypotheses.
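The closing sentence can be made concrete with a toy computation (not from the chapter itself; the data and the two point hypotheses below are illustrative). For two simple hypotheses about a binomial success rate, the Bayes factor is just the ratio of the probabilities of the observed data under each hypothesis:

```python
from math import comb

def likelihood(k, n, theta):
    """Binomial probability of observing k successes in n trials at success rate theta."""
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

# Illustrative data and hypotheses (not taken from the text above)
n, k = 20, 14               # 14 successes in 20 trials
theta0, theta1 = 0.5, 0.7   # H0: theta = 0.5 versus H1: theta = 0.7

# Bayes factor BF10: how much more probable the data are under H1 than under H0.
# The binomial coefficient cancels in the ratio; values above 1 favor H1.
bf10 = likelihood(k, n, theta1) / likelihood(k, n, theta0)
print(f"BF10 = {bf10:.2f}")
```

For composite hypotheses, each likelihood is replaced by a marginal likelihood, i.e., the likelihood averaged over a prior on theta; this is where the prior beliefs described above enter the Bayes factor.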


Author(s):  
Alexander Ly ◽  
Eric-Jan Wagenmakers

The “Full Bayesian Significance Test e-value”, henceforth FBST ev, has received increasing attention across a range of disciplines including psychology. We show that the FBST ev leads to four problems: (1) the FBST ev cannot quantify evidence in favor of a null hypothesis and therefore also cannot discriminate “evidence of absence” from “absence of evidence”; (2) the FBST ev is susceptible to sampling to a foregone conclusion; (3) the FBST ev violates the principle of predictive irrelevance, such that it is affected by data that are equally likely to occur under the null hypothesis and the alternative hypothesis; (4) the FBST ev suffers from the Jeffreys-Lindley paradox in that it does not include a correction for selection. These problems also plague the frequentist p-value. We conclude that although the FBST ev may be an improvement over the p-value, it does not provide a reasonable measure of evidence against the null hypothesis.


2021 ◽  
Author(s):  
John K. Kruschke

In most applications of Bayesian model comparison or Bayesian hypothesis testing, the results are reported in terms of the Bayes factor only, not in terms of the posterior probabilities of the models. Posterior model probabilities are not reported because researchers are reluctant to declare prior model probabilities, which in turn stems from uncertainty in the prior. Fortunately, Bayesian formalisms are designed to embrace prior uncertainty, not ignore it. This article provides a novel derivation of the posterior distribution of model probability and shows many examples. The posterior distribution is useful for making decisions that take into account the uncertainty of the posterior model probability. Benchmark Bayes factors are provided for a spectrum of priors on model probability. R code is posted at https://osf.io/36527/. This framework and these tools will improve the interpretation and usefulness of Bayes factors in all their applications.
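The link between a Bayes factor and a posterior model probability that this discussion turns on is a one-line application of Bayes' rule on the odds scale: posterior odds = Bayes factor × prior odds. A minimal sketch (the function name and numbers are ours, purely illustrative, not Kruschke's code):

```python
def posterior_prob_m1(bf10, prior_m1=0.5):
    """P(M1 | data) from a Bayes factor BF10 and a prior model probability P(M1)."""
    prior_odds = prior_m1 / (1 - prior_m1)
    posterior_odds = bf10 * prior_odds          # Bayes' rule on the odds scale
    return posterior_odds / (1 + posterior_odds)

# The same Bayes factor implies quite different posterior model probabilities
# depending on the prior model probability; this is the uncertainty the article targets.
for prior in (0.1, 0.25, 0.5):
    print(f"prior P(M1) = {prior:.2f} -> posterior P(M1) = {posterior_prob_m1(10, prior):.3f}")
```

With equal prior odds, BF10 = 10 gives P(M1 | data) = 10/11 ≈ 0.909, but with a skeptical prior of P(M1) = 0.1 the same Bayes factor yields only about 0.53, which is why reporting the Bayes factor alone leaves the posterior model probability undetermined.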

