Reference analysis of non-regular models and nonparametric Bayes modeling of large data

2019 ◽ Author(s): Chetkar Jha

[ACCESS RESTRICTED TO THE UNIVERSITY OF MISSOURI AT REQUEST OF AUTHOR.] Bayesian analysis is a principled approach that makes inference about a parameter by combining the information in the data with prior belief about the parameter. There is no consensus on the choice of prior, and different motivations for the prior often lead to different areas of study in Bayesian statistics. This work is motivated by two such choices: reference priors and nonparametric priors. Reference priors arise from the need to specify priors in a non-subjective, i.e. objective, manner. A reference prior maximizes, in an information-theoretic sense, the amount of information gained from the data about the parameter. The appeal of reference priors lies in their good frequentist properties even for small sample sizes, and in the fact that they often avoid marginalization paradoxes in Bayesian analysis. However, reference prior algorithms are typically available only when the posterior is asymptotically normal and the Fisher information matrix is well defined; in statistical parlance, such models are called regular models. Recently, Berger et al. (2009) [1] proposed a general expression of the reference prior for single continuous parameter models, applicable to both the regular and the non-regular case. Motivated by Berger et al. (2009) [1], we explore reference prior methodology for a general model. Specifically, we derive an expression of the reference prior for the single continuous parameter truncated exponential family and a general expression of the conditional reference prior for multi-group continuous parameter models. Furthermore, we demonstrate the usefulness of our work by deriving reference priors for models that have no known existing reference priors. We also extend the invariance result of Datta and Ghosh (1996) [2] for reference priors of regular models to general models. Nonparametric priors arise from the need to specify priors over a large support.
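In the regular single-continuous-parameter case, the reference prior coincides with Jeffreys' prior, π(θ) ∝ √I(θ), where I(θ) is the Fisher information. A minimal numerical sketch of this standard fact for the Bernoulli model (an illustration only, not the dissertation's algorithms):

```python
import numpy as np

# For a regular one-parameter model, the reference prior reduces to
# Jeffreys' prior: pi(theta) proportional to sqrt(I(theta)).
# Bernoulli model: I(theta) = 1 / (theta * (1 - theta)), so
# pi(theta) ∝ theta^(-1/2) * (1 - theta)^(-1/2), i.e. Beta(1/2, 1/2).

def fisher_info_bernoulli(theta):
    return 1.0 / (theta * (1.0 - theta))

thetas = np.linspace(0.01, 0.99, 99)       # grid avoiding the endpoints
unnorm = np.sqrt(fisher_info_bernoulli(thetas))
dx = thetas[1] - thetas[0]
prior = unnorm / (unnorm.sum() * dx)       # normalize on the grid

# The density is symmetric about 0.5 and minimized there, matching
# the U-shaped Beta(1/2, 1/2) density.
print(prior[49])   # density near theta = 0.5
```

The non-regular models treated in the dissertation (e.g. truncated exponential families) are precisely those where this square-root-of-Fisher-information recipe breaks down.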

2018 ◽ Author(s): John Christian Snyder

In Bayesian analysis, the "objective" Bayesian approach seeks to select a prior distribution not by using (often subjective) scientific belief or mathematical convenience, but by deriving it under a pre-specified criterion. This approach takes the decision of prior selection out of the hands of the researcher. Ideally, for a given data model, we would like a prior that represents a "neutral" prior belief in the phenomenon we are studying. In categorical data analysis, the odds ratio is one of several measures of how strongly the presence or absence of one property is associated with the presence or absence of another. In this project, we present a reference prior for the odds ratio of an unrestricted 2 x 2 table. Posterior simulation can be conducted without MCMC and is implemented on a GPU via the CUDA extensions for C. Simulation results indicate that the proposed approach is far superior to the widely used frequentist approaches that dominate this area. Real data examples also typically yield much more sensible results, especially for small sample sizes or for tables that contain zeros. An R package is also presented to allow for easy implementation of this methodology. Next, we develop an approximate reference prior for the negative binomial distribution, applying this methodology to a continuous parameterization often used for modeling over-dispersed count data, as well as to the typical discrete case. Results indicate that the developed prior matches the performance of the MLE in estimating the mean of the distribution but is far superior when estimating the dispersion parameter.
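The non-MCMC posterior simulation idea for a 2 x 2 table can be sketched in a few lines. The sketch below uses independent Jeffreys Beta(1/2, 1/2) priors on the two success probabilities as a simple stand-in for the project's reference prior (which is a different prior), and the table counts are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2x2 table (counts are illustrative only):
#            success  failure
# group 1:      12       8
# group 2:       5      15
a, b = 12, 8
c, d = 5, 15

# Direct (non-MCMC) posterior simulation: with independent Beta priors
# on the two success probabilities, the posteriors are again Beta, so
# we can sample them directly and transform to the odds ratio.
n = 100_000
p1 = rng.beta(a + 0.5, b + 0.5, size=n)
p2 = rng.beta(c + 0.5, d + 0.5, size=n)
odds_ratio = (p1 / (1 - p1)) / (p2 / (1 - p2))

# Posterior summaries for the odds ratio.
lo, med, hi = np.percentile(odds_ratio, [2.5, 50, 97.5])
print(f"median OR = {med:.2f}, 95% interval = ({lo:.2f}, {hi:.2f})")
```

Because each draw is independent, this scheme parallelizes trivially, which is what makes the GPU/CUDA implementation mentioned above natural.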


2017 ◽ Vol 46 (21) ◽ pp. 10507-10517 ◽ Author(s): Katiane S. Conceição, Vera Tomazella, Marinho G. Andrade, Francisco Louzada

2003 ◽ Vol 15 (5) ◽ pp. 1013-1033 ◽ Author(s): Sumio Watanabe, Shun-ichi Amari

Hierarchical learning machines such as layered neural networks have singularities in their parameter spaces. At singularities, the Fisher information matrix becomes degenerate, with the result that the conventional learning theory of regular statistical models does not hold. Recently, it was proved that if the parameter of the true distribution is contained in the singularities of the learning machine, the generalization error in Bayes estimation is asymptotically equal to λ/n, where 2λ is smaller than the dimension of the parameter and n is the number of training samples. However, the constant λ strongly depends on the local geometrical structure of singularities; hence, the generalization error is not yet clarified when the true distribution is almost but not completely contained in the singularities. In this article, in order to analyze such cases, we study the Bayes generalization error under the condition that the Kullback distance of the true distribution from the distribution represented by singularities is in proportion to 1/n and show two results. First, if the dimension of the parameter from inputs to hidden units is not larger than three, then there exists a region of true parameters such that the generalization error is larger than that of the corresponding regular model. Second, if the dimension from inputs to hidden units is larger than three, then for arbitrary true distribution, the generalization error is smaller than that of the corresponding regular model.


2020 ◽ pp. bmjspcare-2019-002160 ◽ Author(s): Richard A Parker, Tonje A Sande, Barry Laird, Peter Hoskin, Marie Fallon, ...

Objective: To show how a simple Bayesian analysis method can be used to improve the evidence base in patient populations where recruitment and retention are challenging.

Methods: A Bayesian conjugate analysis method was applied to binary data from the Thermal testing in Bone Pain (TiBoP) study: a prospective diagnostic accuracy/predictive study in patients with cancer-induced bone pain (CIBP). This study aimed to evaluate the clinical utility of a simple bedside tool to identify who was most likely to benefit from palliative radiotherapy (XRT) for CIBP.

Results: Recruitment and retention of patients were challenging due to the frail population, with only 27 patients available for the primary analysis. The Bayesian method allowed us to make use of prior work done in this area and combine it with the TiBoP data to maximise the informativeness of the results. Positive and negative predictive values were estimated with greater precision, and interpretation of results was facilitated by the use of direct probability statements. In particular, there was only a 7% probability that the true positive predictive value was above 80%.

Conclusions: Several advantages of using Bayesian analysis are illustrated in this article. The Bayesian method allowed us to gain greater confidence in our interpretation of the results despite the small sample size, by allowing us to incorporate data from a previous similar study. We suggest that this method is likely to be useful for the analysis of small diagnostic or predictive studies when prior information is available.
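The conjugate analysis described above can be sketched in a few lines: with a Beta prior on the positive predictive value (PPV) and binary test outcomes, the posterior is again Beta, so direct probability statements need no MCMC. A minimal illustration (the prior parameters and counts below are hypothetical, not the TiBoP data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Beta-binomial conjugate update for the PPV.
# Hypothetical inputs: a prior study encoded as Beta(8, 4), and a new
# study observing 10 true positives among 14 test-positive patients.
prior_a, prior_b = 8, 4
successes, failures = 10, 4

# Conjugacy: posterior is Beta(prior_a + successes, prior_b + failures).
post_a = prior_a + successes   # 18
post_b = prior_b + failures    # 8

# Direct probability statement via posterior draws:
draws = rng.beta(post_a, post_b, size=200_000)
p_above_80 = (draws > 0.80).mean()
print(f"P(PPV > 0.80 | data) = {p_above_80:.3f}")
```

The single closed-form update is what makes this approach attractive for small studies: the prior study's evidence enters simply as pseudo-counts added to the observed data.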

