Reinforced Random Walks
Recently Published Documents

TOTAL DOCUMENTS: 47 (FIVE YEARS: 4)
H-INDEX: 9 (FIVE YEARS: 0)

Author(s): Amarjit Budhiraja, Nicolas Fraiman, Adam Waterbury

We propose two numerical schemes for approximating quasi-stationary distributions (QSD) of finite state Markov chains with absorbing states. Both schemes are described in terms of interacting chains, where the interaction is given by the total occupation measure of all particles in the system and has the effect of reinforcing transitions, in an appropriate fashion, to states where the collection of particles has spent more time. The schemes can be viewed as combining the key features of the two basic simulation-based methods for approximating QSD, originating from the works of Fleming and Viot (1979) and of Aldous, Flannery and Palacios (1998), respectively. The key difference between the two schemes studied here is that in the first method one starts with $a(n)$ particles at time $0$ and the number of particles stays constant over time, whereas in the second method we start with one particle and at most one particle is added at each time instant, in such a manner that there are $a(n)$ particles at time $n$. We prove almost sure convergence to the unique QSD and establish central limit theorems for the two schemes under the key assumption that $a(n)=o(n)$. Exploratory numerical results are presented to illustrate the performance.
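The occupation-measure reinforcement at the heart of both schemes can be illustrated with the classical single-chain building block from Aldous, Flannery and Palacios: run the chain and, whenever it is absorbed, restart from a state drawn from the occupation measure accumulated so far. A minimal sketch, assuming a small explicit transition matrix (the paper's actual schemes use $a(n)$ interacting particles, which this sketch does not reproduce):

```python
import numpy as np

def afp_qsd(P, absorbing, n_steps, seed=0):
    """Approximate the QSD of a finite Markov chain with absorbing states,
    in the spirit of the Aldous-Flannery-Palacios single-chain scheme:
    on absorption, restart from the empirical occupation measure of the
    transient states. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    transient = [s for s in range(n) if s not in absorbing]
    occupation = np.zeros(n)
    state = transient[0]
    for _ in range(n_steps):
        occupation[state] += 1
        state = rng.choice(n, p=P[state])
        if state in absorbing:
            # restart proportionally to time already spent in each state:
            # this reinforces states where the chain has spent more time
            w = occupation[transient]
            state = rng.choice(transient, p=w / w.sum())
    w = occupation[transient]
    return w / w.sum()   # normalized occupation measure approximates the QSD

# toy chain: states 0 and 1 are transient, state 2 is absorbing
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.5, 0.2],
              [0.0, 0.0, 1.0]])
qsd = afp_qsd(P, absorbing={2}, n_steps=50000)
```

For this toy matrix the QSD is the normalized left Perron eigenvector of the substochastic block, roughly $(0.46, 0.54)$, and the empirical occupation measure settles near it.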


2021, Vol 185 (1)
Author(s): Manuel González-Navarrete, Ranghely Hernández

Author(s): Roland Bauerschmidt, Nicholas Crawford, Tyler Helmuth, Andrew Swan

Abstract: We study (unrooted) random forests on a graph where the probability of a forest is multiplicatively weighted by a parameter $\beta >0$ per edge. This is called the arboreal gas model, and the special case $\beta =1$ is the uniform forest model. The arboreal gas can equivalently be defined as Bernoulli bond percolation with parameter $p=\beta /(1+\beta )$ conditioned to be acyclic, or as the limit $q\rightarrow 0$ with $p=\beta q$ of the random cluster model. It is known that on the complete graph $K_{N}$ with $\beta =\alpha /N$ there is a phase transition similar to that of the Erdős–Rényi random graph: a giant tree percolates for $\alpha > 1$ and all trees have bounded size for $\alpha <1$. In contrast to this, by exploiting an exact relationship between the arboreal gas and a supersymmetric sigma model with hyperbolic target space, we show that the forest constraint is significant in two dimensions: trees do not percolate on $\mathbb{Z}^2$ for any finite $\beta >0$. This result is a consequence of a Mermin–Wagner theorem associated with the hyperbolic symmetry of the sigma model. Our proof makes use of two main ingredients: techniques previously developed for hyperbolic sigma models related to linearly reinforced random walks, and a version of the principle of dimensional reduction.
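The percolation characterization in the abstract gives an exact (if inefficient) sampler on tiny graphs: draw each edge independently with $p=\beta/(1+\beta)$ and reject any configuration containing a cycle. A minimal sketch, assuming a graph given as an edge list, with acyclicity checked by union-find:

```python
import random

def sample_arboreal_gas(n_vertices, edges, beta, rng=None):
    """Exact rejection sampler for the arboreal gas on a small graph:
    Bernoulli bond percolation with p = beta/(1+beta), conditioned to
    be acyclic. Only practical for tiny graphs (illustrative sketch)."""
    rng = rng or random.Random(0)
    p = beta / (1.0 + beta)
    while True:
        kept = [e for e in edges if rng.random() < p]
        # acyclicity check via union-find with path halving
        parent = list(range(n_vertices))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        acyclic = True
        for u, v in kept:
            ru, rv = find(u), find(v)
            if ru == rv:          # this edge would close a cycle: reject
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:
            return kept           # an acyclic edge set, i.e. a forest

# 4-cycle graph: the only forbidden configuration is all four edges
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
forest = sample_arboreal_gas(4, edges, beta=1.0)  # beta = 1: uniform forest
```

The rejection rate blows up with graph size, which is exactly why the abstract's question of whether trees percolate on $\mathbb{Z}^2$ requires the sigma-model machinery rather than direct simulation.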


Author(s): Jean Bertoin

Abstract: Let $X_1, X_2, \ldots$ be i.i.d. copies of some real random variable $X$. For any deterministic $\varepsilon_2, \varepsilon_3, \ldots$ in $\{0,1\}$, a basic algorithm introduced by H.A. Simon yields a reinforced sequence $\hat{X}_1, \hat{X}_2, \ldots$ as follows. If $\varepsilon_n=0$, then $\hat{X}_n$ is a uniform random sample from $\hat{X}_1, \ldots, \hat{X}_{n-1}$; otherwise $\hat{X}_n$ is a new independent copy of $X$. The purpose of this work is to compare the scaling exponent of the usual random walk $S(n)=X_1+\cdots+X_n$ with that of its step-reinforced version $\hat{S}(n)=\hat{X}_1+\cdots+\hat{X}_n$. Depending on the tail of $X$ and on the asymptotic behavior of the sequence $(\varepsilon_n)$, we show that step reinforcement may speed up the walk, or on the contrary slow it down, or leave the scaling exponent unchanged. Our motivation partly stems from the study of random walks with memory, notably the so-called elephant random walk and its variations.
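Simon's algorithm as described above is a few lines of code. A minimal sketch, assuming Gaussian steps and a Bernoulli(1/2) choice sequence purely for illustration (the abstract's $(\varepsilon_n)$ is an arbitrary deterministic 0/1 sequence):

```python
import random

def step_reinforced_walk(steps_X, eps):
    """Simon's reinforcement algorithm from the abstract: given fresh
    i.i.d. steps X_1, X_2, ... and a 0/1 sequence (eps_n) for n >= 2,
    build the reinforced steps X-hat_n and return the partial sums
    S-hat(1), ..., S-hat(n). Illustrative sketch."""
    hat = [steps_X[0]]            # X-hat_1 = X_1
    fresh = 1                     # index of the next unused fresh copy
    for e in eps:
        if e == 0:
            hat.append(random.choice(hat))   # repeat a uniformly chosen past step
        else:
            hat.append(steps_X[fresh])       # take a new independent copy of X
            fresh += 1
    sums, s = [], 0.0
    for x in hat:                 # partial sums S-hat(n)
        s += x
        sums.append(s)
    return sums

random.seed(0)
X = [random.gauss(0, 1) for _ in range(1000)]
eps = [1 if random.random() < 0.5 else 0 for _ in range(999)]
walk = step_reinforced_walk(X, eps)
```

Varying the density of ones in `eps` is what tunes the amount of memory, and hence (per the abstract) whether the walk's scaling exponent is increased, decreased, or unchanged.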


2020, Vol 178 (3-4), pp. 1173-1192
Author(s): Jean Bertoin

Abstract: A reinforcement algorithm introduced by Simon (Biometrika 42(3/4):425–440, 1955) produces a sequence of uniform random variables with long-range memory as follows. At each step, with a fixed probability $p\in (0,1)$, $\hat{U}_{n+1}$ is sampled uniformly from $\hat{U}_1, \ldots, \hat{U}_n$, and with complementary probability $1-p$, $\hat{U}_{n+1}$ is a new independent uniform variable. The Glivenko–Cantelli theorem remains valid for the reinforced empirical measure, but not the Donsker theorem. Specifically, we show that the sequence of empirical processes converges in law to a Brownian bridge only up to a constant factor when $p<1/2$, and that a further rescaling is needed when $p>1/2$, in which case the limit is a bridge with exchangeable increments and discontinuous paths. This is related to earlier limit theorems for correlated Bernoulli processes, the so-called elephant random walk, and more generally step-reinforced random walks.
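This variant of Simon's algorithm, where the 0/1 choices are themselves i.i.d. coin flips with parameter $p$, is equally easy to simulate, and the surviving Glivenko–Cantelli behavior can be seen empirically: the empirical CDF of the reinforced sequence still tracks the Uniform(0,1) CDF. A minimal sketch (the choice of $p=0.3$ and the evaluation point $1/2$ are arbitrary illustrations):

```python
import random

def simon_uniform_sequence(n, p, rng=None):
    """Simon's algorithm with fixed reinforcement probability p, as in
    the abstract: with probability p the next value repeats a uniformly
    chosen earlier value; otherwise it is a fresh Uniform(0,1) draw.
    Illustrative sketch."""
    rng = rng or random.Random(0)
    seq = [rng.random()]
    for _ in range(n - 1):
        if rng.random() < p:
            seq.append(rng.choice(seq))   # reinforce: copy a past value
        else:
            seq.append(rng.random())      # innovate: fresh uniform variable
    return seq

# Glivenko-Cantelli survives the memory: the empirical CDF at 1/2
# stays close to the Uniform(0,1) value 1/2
seq = simon_uniform_sequence(20000, p=0.3)
ecdf_half = sum(u <= 0.5 for u in seq) / len(seq)
```

What breaks, per the abstract, is the Donsker-level fluctuation picture: for $p<1/2$ the limit bridge is rescaled by a constant, and for $p>1/2$ the repeated values are frequent enough that the limit bridge acquires jumps.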


2019, Vol 24 (0)
Author(s): Jiro Akahori, Andrea Collevecchio, Masato Takei
