Rates of convergence of stochastically monotone and continuous time Markov models

2000 ◽  
Vol 37 (2) ◽  
pp. 359-373 ◽  
Author(s):  
G. O. Roberts ◽  
R. L. Tweedie

In this paper we give bounds on the total variation distance from convergence of a continuous time positive recurrent Markov process on an arbitrary state space, based on Foster-Lyapunov drift and minorisation conditions. Considerably improved bounds are given in the stochastically monotone case, for both discrete and continuous time models, even in the absence of a reachable minimal element. These results are applied to storage models and to diffusion processes.
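The quantity bounded here, the total variation distance to stationarity, is easy to track numerically for a small example. The sketch below (not the authors' construction; a lazy birth-death chain on {0, …, N} with illustrative parameters, which is stochastically monotone) computes ||P^n(x, ·) − π|| directly from powers of the transition matrix.

```python
import numpy as np

# Illustrative stochastically monotone chain: a lazy birth-death walk on
# {0, ..., N} with downward drift (q > p), so it is positive recurrent.
N = 20
p, q = 0.3, 0.5
P = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    up = p if i < N else 0.0
    down = q if i > 0 else 0.0
    P[i, min(i + 1, N)] += up
    P[i, max(i - 1, 0)] += down
    P[i, i] += 1.0 - up - down      # holding probability keeps the chain aperiodic

# Stationary distribution pi: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

def tv_from(x, n):
    """Total variation distance ||P^n(x, .) - pi||."""
    row = np.zeros(N + 1)
    row[x] = 1.0
    for _ in range(n):
        row = row @ P
    return 0.5 * np.abs(row - pi).sum()

# Distance from the worst starting point decreases monotonically in n.
dists = [tv_from(N, n) for n in (0, 10, 50, 200)]
```

Since applying the kernel is a contraction in total variation, the computed distances are non-increasing in n; drift/minorisation bounds of the kind proved in the paper give explicit rates for this decay.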

1997 ◽  
Vol 29 (01) ◽  
pp. 92-113 ◽  
Author(s):  
Frank Ball ◽  
Sue Davies

The gating mechanism of a single ion channel is usually modelled by a continuous-time Markov chain with a finite state space. The state space is partitioned into two classes, termed ‘open’ and ‘closed’, and it is possible to observe only which class the process is in. In many experiments channel openings occur in bursts. This can be modelled by partitioning the closed states further into ‘short-lived’ and ‘long-lived’ closed states, and defining a burst of openings to be a succession of open sojourns separated by closed sojourns that are entirely within the short-lived closed states. There is also evidence that bursts of openings are themselves grouped together into clusters. This clustering of bursts can be described by the ratio of the variance Var(N(t)) to the mean E[N(t)] of the number of bursts of openings commencing in (0, t]. In this paper two methods of determining Var(N(t))/E[N(t)] and lim_{t→∞} Var(N(t))/E[N(t)] are developed, the first via an embedded Markov renewal process and the second via an augmented continuous-time Markov chain. The theory is illustrated by a numerical study of a molecular stochastic model of the nicotinic acetylcholine receptor. Extensions to semi-Markov models of ion channel gating and the incorporation of time interval omission are briefly discussed.
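The ratio Var(N(t))/E[N(t)] can also be estimated by direct simulation, which is a useful check on the analytic methods described above. The sketch below uses a hypothetical three-state gating scheme (one open state, one short-lived and one long-lived closed state; the rates are illustrative, not the paper's acetylcholine-receptor model) and counts a burst as commencing whenever the channel opens from the long-lived closed state, so that openings reached from the short-lived closed state continue the current burst.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical generator: state 0 = open, 1 = short-lived closed,
# 2 = long-lived closed.  Rates are illustrative only.
Q = np.array([[-2.5,  2.0,  0.5],
              [ 5.0, -6.0,  1.0],
              [ 0.2,  0.0, -0.2]])

def bursts_by(t_end, state=2):
    """Count bursts of openings commencing in (0, t_end] (Gillespie simulation)."""
    t, n = 0.0, 0
    while True:
        rate = -Q[state, state]
        t += rng.exponential(1.0 / rate)
        if t > t_end:
            return n
        probs = np.clip(Q[state], 0.0, None) / rate
        new = rng.choice(3, p=probs)
        if new == 0 and state == 2:     # opening from long-lived closed: new burst
            n += 1
        state = new

# Monte Carlo estimate of Var(N(t)) / E[N(t)] at t = 200.
counts = np.array([bursts_by(200.0) for _ in range(2000)])
ratio = counts.var() / counts.mean()
```

Because burst commencements regenerate each time the channel opens from the long-lived closed state, N(t) is a (delayed) renewal counting process here, and the estimated ratio approaches the squared coefficient of variation of the inter-burst intervals as t grows.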


1976 ◽  
Vol 13 (3) ◽  
pp. 455-465
Author(s):  
D. I. Saunders

For the age-dependent branching process with arbitrary state space let M(x, t, A) be the expected number of individuals alive at time t with states in A given an initial individual at x. Subject to various conditions it is shown that M(x, t, A)e^{−αt} converges to a non-trivial limit, where α is the Malthusian parameter (α = 0 in the critical case, and α is negative in the subcritical case). The method of proof also yields rates of convergence.


1994 ◽  
Vol 26 (03) ◽  
pp. 775-798 ◽  
Author(s):  
Pekka Tuominen ◽  
Richard L. Tweedie

Let Φ = {Φ_n} be an aperiodic, positive recurrent Markov chain on a general state space, π its invariant probability measure and f ≥ 1. We consider the rate of (uniform) convergence of E_x[g(Φ_n)] to the stationary limit π(g) for |g| ≤ f: specifically, we find conditions under which r(n)||P^n(x, ·) − π||_f → 0 as n → ∞, for suitable subgeometric rate functions r. We give sufficient conditions for this convergence to hold in terms of (i) the existence of suitably regular sets, i.e. sets on which (f, r)-modulated hitting time moments are bounded, and (ii) the existence of (f, r)-modulated drift conditions (Foster–Lyapunov conditions). The results are illustrated for random walks and for more general state space models.


2005 ◽  
Vol 42 (03) ◽  
pp. 698-712
Author(s):  
Zhenting Hou ◽  
Yuanyuan Liu ◽  
Hanjun Zhang

Let (Φ_t)_{t∈ℝ₊} be a Harris ergodic continuous-time Markov process on a general state space, with invariant probability measure π. We investigate the rates of convergence of the transition function P^t(x, ·) to π; specifically, we find conditions under which r(t)||P^t(x, ·) − π|| → 0 as t → ∞, for suitable subgeometric rate functions r(t), where ||·|| denotes the usual total variation norm for a signed measure. We derive sufficient conditions for the convergence to hold, in terms of the existence of suitable points on which the first hitting time moments are bounded. In particular, for stochastically ordered Markov processes, explicit bounds on subgeometric rates of convergence are obtained. These results are illustrated in several examples.


1979 ◽  
Vol 11 (2) ◽  
pp. 397-421 ◽  
Author(s):  
M. Yadin ◽  
R. Syski

The matrix of intensities of a Markov process with discrete state space and continuous time parameter undergoes random changes in time in such a way that it stays constant between random instants. The resulting non-Markovian process is analyzed with the help of a supplementary process defined in terms of the variations of the intensity matrix. Several examples are presented.


1995 ◽  
Vol 27 (01) ◽  
pp. 120-145 ◽  
Author(s):  
Anthony G. Pakes

Under consideration is a continuous-time Markov process with non-negative integer state space and a single absorbing state 0. Let T be the hitting time of zero and suppose P_i(T < ∞) ≡ 1 and (*) lim_{i→∞} P_i(T > t) = 1 for all t > 0. Most known cases satisfy (*). The Markov process has a quasi-stationary distribution iff E_i(e^{εT}) < ∞ for some ε > 0. The published proof of this fact makes crucial use of (*). By means of examples it is shown that (*) can be violated in quite drastic ways without destroying the existence of a quasi-stationary distribution.
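The existence criterion E_i(e^{εT}) < ∞ is nonconstructive, but for a concrete absorbing chain a quasi-stationary distribution can be approximated numerically by truncation. The sketch below (not one of the paper's examples; a subcritical linear birth-death process with illustrative rates) computes the QSD of the truncated process as the normalised Perron left eigenvector of the generator restricted to the transient states.

```python
import numpy as np

# Subcritical linear birth-death process on {0, 1, ..., N}, 0 absorbing:
# birth rate birth*i, death rate death*i from state i (rates illustrative).
N = 60
birth, death = 0.8, 1.0
Q = np.zeros((N + 1, N + 1))
for i in range(1, N + 1):
    Q[i, i - 1] = death * i
    if i < N:
        Q[i, i + 1] = birth * i
    Q[i, i] = -Q[i].sum()

# Restrict the generator to the transient states {1, ..., N}; probability
# leaks to 0, so row sums of R are <= 0 with strict inequality at state 1.
R = Q[1:, 1:]

# QSD = left eigenvector of R for the eigenvalue of largest real part;
# its (negative) eigenvalue gives the exponential rate of absorption,
# which for this linear model approaches death - birth as N grows.
w, v = np.linalg.eig(R.T)
k = np.argmax(np.real(w))
decay = -np.real(w[k])
qsd = np.real(v[:, k])
qsd = qsd / qsd.sum()
```

Starting the process from qsd, the conditional law given non-absorption is time-invariant and absorption occurs at rate `decay`; the examples in the paper show this object can exist even when condition (*) fails badly.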

