Random Walk Metropolis Algorithm
Recently Published Documents


TOTAL DOCUMENTS: 18 (FIVE YEARS: 1)
H-INDEX: 6 (FIVE YEARS: 0)

2021 · Vol 31 (6)
Author(s): John Moriarty, Jure Vogrinc, Alessandro Zocca

Abstract We aim to improve upon the exploration of the general-purpose random walk Metropolis algorithm when the target has non-convex support $A \subset \mathbb{R}^d$, by reusing proposals in $A^c$ which would otherwise be rejected. The algorithm is Metropolis-class, and under standard conditions the chain satisfies a strong law of large numbers and a central limit theorem. Theoretical and numerical evidence of improved performance relative to random walk Metropolis is provided. Implementation issues are discussed, and numerical examples, including applications to global optimisation and rare-event sampling, are presented.
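The baseline method this abstract seeks to improve — random walk Metropolis on a log-density known only up to a normalising constant — can be sketched in a few lines. This is a minimal illustration; the standard-normal target and all names are ours, not the paper's:

```python
import math
import random

def rwm(log_density, x0, step, n_steps, seed=0):
    """Random walk Metropolis: propose y = x + step * N(0, 1) and accept
    with probability min(1, pi(y) / pi(x)); otherwise stay at x."""
    rng = random.Random(seed)
    x, lp = x0, log_density(x0)
    chain = [x]
    for _ in range(n_steps):
        y = x + step * rng.gauss(0.0, 1.0)
        lq = log_density(y)
        if math.log(rng.random()) < lq - lp:  # Metropolis accept/reject
            x, lp = y, lq
        chain.append(x)  # a rejected proposal repeats the current state
    return chain

# Standard normal target; the log-density is known only up to a constant.
chain = rwm(lambda x: -0.5 * x * x, x0=0.0, step=2.4, n_steps=20000)
```

In this baseline, a proposal falling outside the support is simply rejected; the paper's contribution is to recycle exactly those out-of-support proposals to improve exploration.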


2019 · Vol 2019 · pp. 1-24
Author(s): Mylène Bédard

We obtain weak convergence and optimal scaling results for the random walk Metropolis algorithm with a Gaussian proposal distribution. The sampler is applied to hierarchical target distributions, which form the building block of many Bayesian analyses. The globally optimal asymptotic proposal variance derived may be computed as a function of the specific target distribution considered. We also introduce the concept of locally optimal tunings, i.e. tunings that depend on the current position of the Markov chain. The theorems are proved by studying the generators of the first and second components of the algorithm and verifying their convergence to the generators of a modified RWM algorithm and of a diffusion process, respectively. The rate at which the algorithm explores its state space is optimized by studying the speed measure of the limiting diffusion process. We illustrate the theory with two examples; applications of these results to simulated and real data are also presented.
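The optimal tunings derived in this paper are target-specific, but the generic recipe they refine — adjusting the proposal variance until the average acceptance rate hits a target value such as the classical 0.234 — can be sketched as a stochastic-approximation loop. This is our illustration of that generic recipe, not the paper's method:

```python
import math
import random

def adapt_rwm_step(log_density, x0, n_steps, target_acc=0.234, seed=1):
    """Tune the RWM proposal scale by stochastic approximation so that the
    average acceptance probability approaches target_acc (0.234 is the
    classical high-dimensional optimum for i.i.d. product targets)."""
    rng = random.Random(seed)
    x = list(x0)
    lp = log_density(x)
    log_step = 0.0
    for t in range(1, n_steps + 1):
        step = math.exp(log_step)
        y = [xi + step * rng.gauss(0.0, 1.0) for xi in x]
        lq = log_density(y)
        acc = math.exp(min(0.0, lq - lp))  # acceptance probability
        if rng.random() < acc:
            x, lp = y, lq
        log_step += (acc - target_acc) / math.sqrt(t)  # diminishing adaptation
    return math.exp(log_step)

# d = 10 standard normal target; theory predicts step near 2.38 / sqrt(10).
step = adapt_rwm_step(lambda x: -0.5 * sum(v * v for v in x), [0.0] * 10, 20000)
```

The diminishing adaptation schedule freezes the tuning over time, so the chain's asymptotics are not disturbed; the "locally optimal tunings" of the paper go further by letting the step depend on the current state.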


2018 · Vol 55 (4) · pp. 1186-1202
Author(s): Daniel Rudolf, Mario Ullrich

Abstract Different Markov chains can be used for approximate sampling from a distribution given by an unnormalized density function with respect to the Lebesgue measure. The hit-and-run sampler, the (hybrid) slice sampler, and the random walk Metropolis algorithm are popular tools for simulating such Markov chains. We develop a general approach to comparing the efficiency of these sampling procedures through a partial ordering of their Markov operators, the covariance ordering. In particular, we show that the hit-and-run sampler and the simple slice sampler are more efficient than a hybrid slice sampler based on hit-and-run, which in turn is more efficient than a (lazy) random walk Metropolis algorithm.
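For reference, the "simple slice sampler" compared here is, in one dimension, only a few lines: draw an auxiliary height under the density, then sample uniformly from the resulting horizontal slice. Below is a stepping-out/shrinkage version in the style of Neal (2003) — our sketch for a univariate target, not code from the paper:

```python
import math
import random

def slice_sample(log_density, x0, w, n_steps, seed=2):
    """Univariate slice sampler with stepping-out and shrinkage.
    w is the initial bracket width used to locate the slice."""
    rng = random.Random(seed)
    x = x0
    out = []
    for _ in range(n_steps):
        ly = log_density(x) + math.log(rng.random())  # auxiliary slice height
        # Step out until the bracket ends fall below the slice.
        left = x - w * rng.random()
        right = left + w
        while log_density(left) > ly:
            left -= w
        while log_density(right) > ly:
            right += w
        # Shrink the bracket until a point inside the slice is found.
        while True:
            z = left + (right - left) * rng.random()
            if log_density(z) > ly:
                x = z
                break
            if z < x:
                left = z
            else:
                right = z
        out.append(x)
    return out

# Standard normal target as a smoke test.
xs = slice_sample(lambda x: -0.5 * x * x, 0.0, 2.0, 5000)
```

Unlike random walk Metropolis, every iteration moves the chain, which is one intuition for the efficiency ordering the paper proves.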


2018 · Vol 28 (5) · pp. 2966-3001
Author(s): Alexandros Beskos, Gareth Roberts, Alexandre Thiery, Natesh Pillai

2017 · Vol 54 (4) · pp. 1233-1260
Author(s): Alain Durmus, Sylvain Le Corff, Eric Moulines, Gareth O. Roberts

Abstract In this paper we consider the optimal scaling of high-dimensional random walk Metropolis algorithms for densities differentiable in the L^p mean but possibly irregular at some points (such as the Laplace density) and/or supported on an interval. Our main result is the weak convergence of the Markov chain (appropriately rescaled in time and space) to a Langevin diffusion process as the dimension d goes to ∞. As the log-density might be nondifferentiable, the limiting diffusion could be singular. The scaling limit is established under assumptions much weaker than those used in the original derivation of Roberts et al. (1997). This result has important practical implications for the use of random walk Metropolis algorithms in Bayesian frameworks based on sparsity-inducing priors.
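The dimension dependence behind such scaling results is easy to observe empirically: shrinking the per-coordinate proposal standard deviation like ℓ/√d keeps the acceptance rate roughly stable as d grows, even for the nondifferentiable Laplace target mentioned in the abstract. A rough numerical illustration under our own choices of ℓ and d, not the paper's experiment:

```python
import math
import random

def rwm_acceptance(d, ell, n_steps=4000, seed=3):
    """Average acceptance rate of random walk Metropolis on the product
    Laplace target pi(x) ∝ exp(-sum |x_i|), with per-coordinate proposal
    standard deviation ell / sqrt(d)."""
    rng = random.Random(seed)
    sigma = ell / math.sqrt(d)
    x = [rng.gauss(0.0, 1.0) for _ in range(d)]
    lp = -sum(abs(v) for v in x)
    accepted = 0
    for _ in range(n_steps):
        y = [v + sigma * rng.gauss(0.0, 1.0) for v in x]
        lq = -sum(abs(v) for v in y)
        if math.log(rng.random()) < lq - lp:
            x, lp = y, lq
            accepted += 1
    return accepted / n_steps

# With the sigma = ell / sqrt(d) scaling, acceptance stays in the same
# moderate band as d quadruples.
a_low = rwm_acceptance(d=16, ell=2.0)
a_high = rwm_acceptance(d=64, ell=2.0)
```

Without the √d shrinkage the acceptance rate would collapse toward zero as d grows, which is the practical content of rescaling "in space" in the diffusion limit.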


2017 · Vol 54 (2) · pp. 638-654
Author(s): K. Kamatani

Abstract We describe the ergodic properties of some Metropolis–Hastings algorithms for heavy-tailed target distributions. Such algorithms are usually analyzed within a subgeometric ergodicity framework, but we prove that the mixed preconditioned Crank–Nicolson (MpCN) algorithm is geometrically ergodic even for heavy-tailed target distributions. This useful property comes from the fact that, under a suitable transformation, the MpCN algorithm becomes a random-walk Metropolis algorithm.
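The transformation idea in the last sentence — mapping a heavy-tailed target to one with lighter tails and running random-walk Metropolis in the transformed space — can be illustrated with a simple change of variables. This is not the MpCN algorithm itself; the map x = sign(z)(e^|z| − 1) and the Cauchy target are our illustrative choices:

```python
import math
import random

def rwm_transformed(log_density, n_steps, step=1.0, seed=5):
    """Random walk Metropolis on z, where x = sign(z) * (exp(|z|) - 1).
    Heavy tails in x become light (exponential) tails in z. The target in
    z-space is corrected by the log-Jacobian log|dx/dz| = |z|."""
    rng = random.Random(seed)

    def to_x(z):
        return math.copysign(math.expm1(abs(z)), z)

    def log_post_z(z):
        return log_density(to_x(z)) + abs(z)  # target + log-Jacobian

    z, lp = 0.0, log_post_z(0.0)
    xs = []
    for _ in range(n_steps):
        z_new = z + step * rng.gauss(0.0, 1.0)
        lq = log_post_z(z_new)
        if math.log(rng.random()) < lq - lp:
            z, lp = z_new, lq
        xs.append(to_x(z))
    return xs

# Standard Cauchy target, log-density up to an additive constant.
xs = rwm_transformed(lambda x: -math.log(1.0 + x * x), 20000)
```

In the original x-space a plain random walk would mix subgeometrically in the Cauchy tails; after the transformation the z-space target has exponential tails, so the walk regains geometric-style mixing, mirroring the mechanism behind the MpCN result.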


2016 · Vol 11 (1) · pp. 14
Author(s): Madaki Umar Yusuf, Mohd Rizam Abu Bakar, Qasim Nasir Husain, Noor Akma Ibrahim, Jayanthi Arasan

The log-gamma distribution is an extension of the gamma distribution that is more flexible and versatile, providing a good fit to some skewed and censored data. Problem/Objective: In this paper we present a closed form of the model's survival function, which demonstrates its suitability and flexibility for modelling real-life data. Methods/Analysis: Bayesian estimation was carried out by Markov chain Monte Carlo (MCMC) simulation using the random-walk Metropolis algorithm, and the fitted survival models were compared by AIC and BIC. Findings/Conclusion: The log-gamma model attains the smallest values of the comparison criteria, indicating that this procedure is a good option for integrating Bayesian regression with survival/reliability analysis in applied statistics. The censored-data results are corroborated by the simulation results.


2016 · Vol 53 (2) · pp. 410-420
Author(s): Gareth O. Roberts, Jeffrey S. Rosenthal

Abstract We connect known results about diffusion limits of Markov chain Monte Carlo (MCMC) algorithms to the computer science notion of algorithm complexity. Our main result states that any weak limit of a Markov process implies a corresponding complexity bound (in an appropriate metric). We then combine this result with previously known MCMC diffusion limit results to prove that, under appropriate assumptions, the random-walk Metropolis algorithm in d dimensions takes O(d) iterations to converge to stationarity, while the Metropolis-adjusted Langevin algorithm takes O(d^(1/3)) iterations to converge to stationarity.
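The two algorithms compared here are easy to state side by side: RWM uses a symmetric proposal, while MALA shifts the proposal mean along the gradient of the log-density and therefore needs a Metropolis–Hastings correction for the asymmetry — which is what buys the better O(d^(1/3)) scaling. A minimal one-dimensional MALA (our sketch; the names and the standard-normal test target are illustrative):

```python
import math
import random

def mala(log_density, grad_log_density, x0, step, n_steps, seed=4):
    """Metropolis-adjusted Langevin algorithm: propose
    y = x + (step^2 / 2) * grad_log_density(x) + step * N(0, 1)
    and accept with the Hastings ratio for the asymmetric proposal."""
    rng = random.Random(seed)
    x, lp = x0, log_density(x0)
    out = [x]
    for _ in range(n_steps):
        mean_fwd = x + 0.5 * step * step * grad_log_density(x)
        y = mean_fwd + step * rng.gauss(0.0, 1.0)
        lq = log_density(y)
        mean_bwd = y + 0.5 * step * step * grad_log_density(y)
        # log q(x | y) - log q(y | x) for the Gaussian proposals above.
        log_hastings = ((y - mean_fwd) ** 2 - (x - mean_bwd) ** 2) / (2.0 * step * step)
        if math.log(rng.random()) < lq - lp + log_hastings:
            x, lp = y, lq
        out.append(x)
    return out

# Standard normal target: log pi(x) = -x^2 / 2, gradient -x.
mala_chain = mala(lambda x: -0.5 * x * x, lambda x: -x, 0.0, 1.0, 10000)
```

Dropping the `log_hastings` term would reduce this to an unadjusted Langevin scheme, which is biased; the correction is what makes the chain exactly invariant for the target at any step size.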

