Bayesian inference for high-dimensional decomposable graphs

2021, Vol 15 (1)
Author(s): Kyoungjae Lee, Xuan Cao

Biostatistics, 2010, Vol 11 (2), pp. 317-336
Author(s): Sylvia Frühwirth-Schnatter, Saumyadipta Pyne

Abstract: Skew-normal and skew-t distributions have proved to be useful for capturing skewness and kurtosis in data directly, without transformation. Recently, finite mixtures of such distributions have been considered as a more general tool for handling heterogeneous data involving asymmetric behaviors across subpopulations. We consider such mixture models for both univariate and multivariate data. This allows robust modeling of high-dimensional multimodal and asymmetric data generated by popular biotechnological platforms such as flow cytometry. We develop Bayesian inference based on data augmentation and Markov chain Monte Carlo (MCMC) sampling. In addition to the latent allocations, data augmentation is based on a stochastic representation of the skew-normal distribution in terms of a random-effects model with truncated normal random effects. For finite mixtures of skew normals, this leads to a Gibbs sampling scheme that draws from standard densities only. This MCMC scheme is extended to mixtures of skew-t distributions by representing the skew-t distribution as a scale mixture of skew normals. As an important application of our new method, we demonstrate how it provides a new computational framework for automated analysis of high-dimensional flow cytometric data. Using multivariate skew-normal and skew-t mixture models, we could model non-Gaussian cell populations rigorously and directly, without transformation or projection to lower dimensions.
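The stochastic representation underlying the data augmentation above, in which a skew-normal variate arises from a random-effects model with a half-normal (truncated normal) effect, can be sketched as follows. This is a generic illustration of the representation, not the authors' implementation; the parameter names `xi` (location), `psi` (skewness), and `sigma` (scale) are chosen here for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_skew_normal(xi, psi, sigma, size):
    """Draw skew-normal variates via the random-effects representation
    X = xi + psi * |Z| + sigma * W, with Z, W independent N(0, 1).
    The half-normal term |Z| plays the role of the truncated-normal
    random effect that the Gibbs sampler treats as latent data."""
    z = np.abs(rng.standard_normal(size))   # half-normal random effect
    w = rng.standard_normal(size)           # Gaussian noise
    return xi + psi * z + sigma * w
```

In a Gibbs sweep, conditioning on these latent half-normal effects (and on the mixture allocations) reduces every update to a draw from a standard density, which is what makes the scheme tractable.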


2018, Vol 17 (1), pp. 118-151
Author(s): Hoang Nguyen, M Concepción Ausín, Pedro Galeano

Author(s): Wen-Hao Zhang, Tai Sing Lee, Brent Doiron, Si Wu

Abstract: The brain performs probabilistic inference to interpret the external world, but the underlying neuronal mechanisms remain poorly understood. The stimulus structure of natural scenes exists in a high-dimensional feature space, and how the brain represents and infers the joint posterior distribution in this rich, combinatorial space is a challenging problem. There is added difficulty when considering the neuronal mechanics of this representation, since many of these features are computed in parallel by distributed neural circuits. Here, we present a novel solution to this problem. We study continuous attractor neural networks (CANNs), each representing and inferring a stimulus attribute, where attractor coupling supports sampling-based inference on the multivariate posterior of the high-dimensional stimulus features. Using perturbative analysis, we show that the dynamics of coupled CANNs realizes Langevin sampling on the stimulus feature manifold embedded in neural population responses. In our framework, feedforward inputs convey the likelihood, reciprocal connections encode the stimulus correlational priors, and the internal Poisson variability of the neurons generates the correct random walks for sampling. Our model achieves high-dimensional joint probability representation and Bayesian inference in a distributed manner, where each attractor network infers the marginal posterior of the corresponding stimulus feature. The stimulus feature can be read out simply with a linear decoder based only on local activities of each network. Simulation experiments confirm our theoretical analysis. The study provides insight into the fundamental neural mechanisms for realizing efficient high-dimensional probabilistic inference.
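The Langevin-sampling picture described above can be illustrated with a minimal, generic sketch of Langevin dynamics targeting a correlated two-dimensional Gaussian posterior. This is not the authors' network model; the step size `eta`, the target covariance, and the burn-in length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative target: a correlated 2D Gaussian posterior over two stimulus
# features; the off-diagonal term plays the role of the correlational prior
# that reciprocal connections would encode.
cov = np.array([[1.0, 0.8],
                [0.8, 1.0]])
prec = np.linalg.inv(cov)

def langevin_samples(n_steps, eta=0.1, burn_in=1000):
    """Unadjusted Langevin dynamics: drift up the log-posterior gradient
    plus isotropic Gaussian noise (analogous to spiking variability)."""
    x = np.zeros(2)
    out = np.empty((n_steps, 2))
    for t in range(n_steps):
        grad_log_p = -prec @ x                   # gradient of log N(0, cov)
        x = x + 0.5 * eta * grad_log_p + np.sqrt(eta) * rng.standard_normal(2)
        out[t] = x
    return out[burn_in:]
```

The empirical covariance of the chain approximates the target covariance up to an O(eta) discretization bias, mirroring how the coupled networks' joint activity samples the multivariate posterior while each network's marginal statistics carry one feature's posterior.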


Bernoulli, 2019, Vol 25 (4A), pp. 2854-2882
Author(s): Alain Durmus, Éric Moulines

Author(s): Dimitris Korobilis, Davide Pettenuzzo

Bayesian inference in economics is primarily perceived as a methodology for cases where the data are short, that is, not informative enough to obtain reliable econometric estimates of quantities of interest. In these cases, prior beliefs, such as the experience of the decision-maker or results from economic theory, can be explicitly incorporated into the econometric estimation problem to improve the resulting estimates. In contrast, in fields such as computer science and signal processing, Bayesian inference and computation have long been used to tackle challenges associated with ultra-high-dimensional data. These fields have developed several novel Bayesian algorithms that have gradually become established in mainstream statistics and now hold a prominent position in machine learning applications across numerous disciplines. While traditional Bayesian algorithms are powerful enough to allow estimation of very complex problems (for instance, nonlinear dynamic stochastic general equilibrium models), they cannot cope computationally with the demands of rapidly growing economic data sets. Bayesian machine learning algorithms can provide rigorous and computationally feasible solutions to various high-dimensional econometric problems, thus supporting modern decision-making in a timely manner.
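As one minimal, generic illustration of a computationally feasible high-dimensional Bayesian estimator (not a method proposed in this abstract), a Gaussian shrinkage prior on regression coefficients yields a closed-form posterior mean, so no MCMC is needed even with many predictors. The dimensions and the shrinkage parameter `lam` below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def ridge_posterior_mean(X, y, lam):
    """Posterior mean of beta under y = X beta + noise with prior
    beta ~ N(0, (sigma^2 / lam) I): solves (X'X + lam I) beta = X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Simulated design: 500 observations, 50 predictors.
n, p = 500, 50
X = rng.standard_normal((n, p))
beta_true = rng.standard_normal(p)
y = X @ beta_true + 0.5 * rng.standard_normal(n)
beta_hat = ridge_posterior_mean(X, y, lam=1.0)
```

The single linear solve scales to large coefficient vectors, which is the kind of computational tractability the paragraph above attributes to Bayesian machine learning methods relative to simulation-heavy traditional algorithms.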

