Fast Algorithms for Bayesian Uncertainty Quantification in Large-Scale Linear Inverse Problems Based on Low-Rank Partial Hessian Approximations

2011, Vol. 33(1), pp. 407-432
Author(s): H. P. Flath, L. C. Wilcox, V. Akçelik, J. Hill, B. van Bloemen Waanders, ...

2021, Vol. 47(2), pp. 1-34
Author(s): Umberto Villa, Noemi Petra, Omar Ghattas

We present an extensible software framework, hIPPYlib, for the solution of large-scale deterministic and Bayesian inverse problems governed by partial differential equations (PDEs) with (possibly) infinite-dimensional parameter fields (which are high-dimensional after discretization). hIPPYlib overcomes the prohibitively expensive nature of Bayesian inversion for this class of problems by implementing state-of-the-art scalable algorithms for PDE-based inverse problems that exploit the structure of the underlying operators, notably the Hessian of the log-posterior. The key property of the algorithms implemented in hIPPYlib is that the solution of the inverse problem is computed at a cost, measured in linearized forward PDE solves, that is independent of the parameter dimension. The mean of the posterior is approximated by the MAP point, which is found by minimizing the negative log-posterior with an inexact matrix-free Newton-CG method. The posterior covariance is approximated by the inverse of the Hessian of the negative log-posterior evaluated at the MAP point. The construction of the posterior covariance is made tractable by invoking a low-rank approximation of the Hessian of the log-likelihood. Scalable tools for sample generation are also discussed. hIPPYlib makes all of these advanced algorithms easily accessible to domain scientists and provides an environment that expedites the development of new algorithms.
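The low-rank structure that makes this tractable is the same one exploited in the Flath et al. (2011) and Spantini et al. (2015) papers listed in this section, and can be sketched as follows; the notation below (prior covariance \Gamma_{\mathrm{pr}}, data-misfit Hessian H_{\mathrm{misfit}}, MAP point m_{\mathrm{MAP}}) is standard but not taken verbatim from the paper. Linearizing at the MAP point,

\[
\Gamma_{\mathrm{post}} = \bigl(H_{\mathrm{misfit}}(m_{\mathrm{MAP}}) + \Gamma_{\mathrm{pr}}^{-1}\bigr)^{-1},
\qquad
\Gamma_{\mathrm{pr}}^{1/2}\, H_{\mathrm{misfit}}\, \Gamma_{\mathrm{pr}}^{1/2} \approx V_r \Lambda_r V_r^{\mathsf{T}},
\]

and applying the Sherman-Morrison-Woodbury formula to the rank-r eigendecomposition gives

\[
\Gamma_{\mathrm{post}} \approx \Gamma_{\mathrm{pr}} - \Gamma_{\mathrm{pr}}^{1/2} V_r D_r V_r^{\mathsf{T}} \Gamma_{\mathrm{pr}}^{1/2},
\qquad
D_r = \operatorname{diag}\!\left(\frac{\lambda_i}{\lambda_i + 1}\right).
\]

Because the eigenvalues \lambda_i of the prior-preconditioned data-misfit Hessian typically decay rapidly for ill-posed problems, only r Hessian-vector products (each a pair of linearized forward/adjoint PDE solves) are needed, which is why the cost is independent of the parameter dimension.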


2019
Author(s): Leandro de Figueiredo, Dario Grana, Leonardo Azevedo, Mauro Roisenberg, Bruno Rodrigues

2022, Vol. 4
Author(s): Kaiqi Zhang, Cole Hawkins, Zheng Zhang

A major challenge in many machine learning tasks is that model expressive power depends on model size. Low-rank tensor methods are an efficient tool for handling the curse of dimensionality in many large-scale machine learning models. The major challenges in training a tensor learning model are how to process high-volume data, how to determine the tensor rank automatically, and how to estimate the uncertainty of the results. While existing tensor learning methods focus on a specific task, this paper proposes a generic Bayesian framework that can be employed to solve a broad class of tensor learning problems such as tensor completion, tensor regression, and tensorized neural networks. We develop a low-rank tensor prior for automatic rank determination in nonlinear problems. Our method is implemented with both stochastic gradient Hamiltonian Monte Carlo (SGHMC) and Stein variational gradient descent (SVGD), and we compare the automatic rank determination and uncertainty quantification of these two solvers. We demonstrate that the proposed method can determine the tensor rank automatically and quantify the uncertainty of the obtained results. We validate the framework on tensor completion tasks and tensorized neural network training tasks.
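To make the SVGD solver concrete, the following is a minimal NumPy sketch of a Stein variational gradient descent update on a toy two-dimensional Gaussian target. The RBF kernel with median-heuristic bandwidth, the step size, the particle count, and the toy target are all illustrative assumptions; this is not the paper's tensor-learning posterior or its implementation.

import numpy as np

def rbf_kernel(x):
    # Pairwise squared distances between particles; x has shape (n, d).
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    # Median-heuristic bandwidth, guarded against division by zero.
    h = np.median(sq_dists) / np.log(x.shape[0] + 1) + 1e-8
    k = np.exp(-sq_dists / h)
    # sum_j grad_{x_j} k(x_j, x_i) = sum_j -(2/h) (x_j - x_i) k(x_j, x_i), shape (n, d).
    grad_k = -(2.0 / h) * (k[:, :, None] * (x[:, None, :] - x[None, :, :])).sum(axis=0)
    return k, grad_k

def svgd_step(x, grad_log_p, step_size=0.1):
    # One SVGD update: particles move along the kernelized Stein direction
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ].
    k, grad_k = rbf_kernel(x)
    phi = (k @ grad_log_p(x) + grad_k) / x.shape[0]
    return x + step_size * phi

# Toy target: a standard Gaussian in 2D, so grad log p(x) = -x.
grad_log_p = lambda x: -x
particles = np.random.randn(50, 2) * 3.0 + 5.0  # initialize far from the target
for _ in range(500):
    particles = svgd_step(particles, grad_log_p)
print(particles.mean(axis=0), particles.std(axis=0))  # should approach (0, 0) and (1, 1)

In the Bayesian tensor-learning setting described above, grad_log_p would instead evaluate the gradient of the log-posterior of the low-rank tensor factors (data likelihood plus the low-rank tensor prior), typically on minibatches of the training data.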


2015, Vol. 37(6), pp. A2451-A2487
Author(s): Alessio Spantini, Antti Solonen, Tiangang Cui, James Martin, Luis Tenorio, ...
