Subgeometric rates of convergence for a class of continuous-time Markov process

2005 ◽  
Vol 42 (3) ◽ 
pp. 698-712
Author(s):  
Zhenting Hou ◽  
Yuanyuan Liu ◽  
Hanjun Zhang

Let (Φt)t∈ℝ+ be a Harris ergodic continuous-time Markov process on a general state space, with invariant probability measure π. We investigate the rates of convergence of the transition function Pt(x, ·) to π; specifically, we find conditions under which r(t)||Pt(x, ·) − π|| → 0 as t → ∞, for suitable subgeometric rate functions r(t), where ||·|| denotes the usual total variation norm for a signed measure. We derive sufficient conditions for the convergence to hold, in terms of the existence of suitable points on which the first hitting time moments are bounded. In particular, for stochastically ordered Markov processes, explicit bounds on subgeometric rates of convergence are obtained. These results are illustrated in several examples.
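For a small finite-state chain the quantity ||P^t(x, ·) − π|| can be computed exactly, which makes the object of study concrete. A minimal sketch, using an arbitrary 3-state generator chosen for illustration (not a model from the paper):

```python
import numpy as np
from scipy.linalg import expm

# Generator of a small continuous-time Markov chain (rows sum to 0).
Q = np.array([[-1.0, 0.7, 0.3],
              [0.4, -0.9, 0.5],
              [0.2, 0.8, -1.0]])

# Invariant distribution pi solves pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Total variation norm of the signed measure P^t(x,.) - pi, i.e. the full
# sum of absolute differences, as in the abstract's ||.|| convention.
for t in [0.0, 1.0, 5.0, 20.0]:
    Pt = expm(Q * t)              # transition function at time t
    tv = np.abs(Pt[0] - pi).sum() # starting state x = 0
    print(f"t = {t:5.1f}   ||P^t(0,.) - pi|| = {tv:.6f}")
```

For a finite chain the decay is of course geometric; the subgeometric rates of the paper arise on general (e.g. unbounded) state spaces.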


1994 ◽  
Vol 26 (3) ◽ 
pp. 775-798 ◽  
Author(s):  
Pekka Tuominen ◽  
Richard L. Tweedie

Let Φ = {Φn} be an aperiodic, positive recurrent Markov chain on a general state space, π its invariant probability measure and f ≧ 1. We consider the rate of (uniform) convergence of Ex[g(Φn)] to the stationary limit π(g) for |g| ≦ f: specifically, we find conditions under which r(n)|Ex[g(Φn)] − π(g)| → 0 as n → ∞, for suitable subgeometric rate functions r. We give sufficient conditions for this convergence to hold in terms of (i) the existence of suitably regular sets, i.e. sets on which (f, r)-modulated hitting time moments are bounded, and (ii) the existence of (f, r)-modulated drift conditions (Foster–Lyapunov conditions). The results are illustrated for random walks and for more general state space models.


2013 ◽  
Vol 50 (4) ◽  
pp. 960-968 ◽  
Author(s):  
Konstantin Avrachenkov ◽  
Alexey Piunovskiy ◽  
Yi Zhang

We consider a general homogeneous continuous-time Markov process with restarts. The process is forced to restart from a given distribution at time moments generated by an independent Poisson process. The motivation to study such processes comes from modeling human and animal mobility patterns, restart processes in communication protocols, and from application of restarting random walks in information retrieval. We provide a connection between the transition probability functions of the original Markov process and the modified process with restarts. We give closed-form expressions for the invariant probability measure of the modified process. When the process evolves on the Euclidean space, there is also a closed-form expression for the moments of the modified process. We show that the modified process is always positive Harris recurrent and exponentially ergodic with the index equal to (or greater than) the rate of restarts. Finally, we illustrate the general results by the standard and geometric Brownian motions.
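The closed-form invariant measure can be checked by simulation in the simplest instance of this setting: standard Brownian motion restarted from the origin at Poisson rate r. At stationarity the age since the last restart is Exp(r)-distributed, so the process value is a normal with the age as its variance, and the resulting invariant law is Laplace with variance 1/r. This concrete computation is an illustration of the framework, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
r = 2.0          # rate of the independent Poisson restart process
n = 200_000

# At stationarity the time since the last restart is Exp(r); given that
# age a, the restarted Brownian motion is distributed N(0, a).
age = rng.exponential(scale=1.0 / r, size=n)
x = rng.normal(scale=np.sqrt(age))

# Mixing N(0, a) over a ~ Exp(r) gives the Laplace density
# sqrt(r/2) * exp(-sqrt(2 r) |x|), whose variance is 1/r.
print("empirical variance: ", x.var())
print("theoretical 1/r:    ", 1.0 / r)
```

The exponential ergodicity claimed in the abstract is visible here: however the process starts, after one restart it is already in the stationary regime of the restart clock.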


1996 ◽  
Vol 33 (2) ◽  
pp. 340-356 ◽  
Author(s):  
Michael Voit

The distributions of nearest neighbour random walks on hypercubes in continuous time t ≥ 0 can be expressed in terms of binomial distributions; their limit behaviour for t, N → ∞ is well known. We study here these random walks in discrete time and derive explicit bounds for the deviation of their distribution from their counterparts in continuous time with respect to the total variation norm. Our results lead to a recent asymptotic result of Diaconis, Graham and Morrison for the deviation from uniformity for N → ∞. Our proofs use Krawtchouk polynomials and a version of the Diaconis–Shahshahani upper bound lemma. We also apply our methods to certain birth-and-death random walks associated with Krawtchouk polynomials.
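The deviation in question can be computed directly on a small hypercube by comparing the n-step discrete-time distribution with the continuous-time distribution at time n. A sketch for N = 6, with the continuous-time walk taken as the rate-1 uniformized version of the discrete walk (parameters chosen for illustration):

```python
import numpy as np
from scipy.linalg import expm

N = 6                          # dimension of the hypercube {0,1}^N
S = 1 << N                     # 2^N = 64 states

# One-step transition matrix of the nearest-neighbour walk:
# flip one uniformly chosen coordinate.
P = np.zeros((S, S))
for x in range(S):
    for i in range(N):
        P[x, x ^ (1 << i)] = 1.0 / N

Q = P - np.eye(S)              # generator of the continuous-time walk
uniform = np.full(S, 1.0 / S)

for n in [2, 10, 50]:
    disc = np.linalg.matrix_power(P, n)[0]   # started from vertex 0
    cont = expm(Q * n)[0]
    tv_dev = 0.5 * np.abs(disc - cont).sum()
    print(f"n = {n:3d}   TV(discrete, continuous) = {tv_dev:.4f}")
```

Note the contrast the abstract exploits: the continuous-time walk converges to uniform, while the discrete-time walk is periodic (the hypercube is bipartite), so the deviation between the two tends to 1/2 rather than 0.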


2016 ◽  
Vol 53 (1) ◽  
pp. 187-202 ◽  
Author(s):  
Nguyen Huu Du ◽  
Dang Hai Nguyen ◽  
G. George Yin

In this paper we derive sufficient conditions for the permanence and ergodicity of a stochastic predator–prey model with a Beddington–DeAngelis functional response. The conditions obtained are in fact very close to the necessary conditions. Both nondegenerate and degenerate diffusions are considered. One of the distinctive features of our results is that they enable the characterization of the support of a unique invariant probability measure. We also prove the convergence in total variation norm of the transition probability to this invariant measure. Comparisons to the existing literature and matters related to other stochastic predator–prey models are also given.


2000 ◽  
Vol 37 (2) ◽ 
pp. 359-373 ◽  
Author(s):  
G. O. Roberts ◽  
R. L. Tweedie

In this paper we give bounds on the total variation distance from convergence of a continuous time positive recurrent Markov process on an arbitrary state space, based on Foster–Lyapunov drift and minorisation conditions. Considerably improved bounds are given in the stochastically monotone case, for both discrete and continuous time models, even in the absence of a reachable minimal element. These results are applied to storage models and to diffusion processes.
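The simplest instance of a minorisation-based bound is the uniform (Doeblin) case: if P(x, ·) ≥ ε ν(·) for every x, then sup_x ||P^n(x, ·) − π||_TV ≤ (1 − ε)^n. A quick numerical check of this classical bound on an arbitrary small chain (not a model from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 5
P = rng.random((k, k))
P /= P.sum(axis=1, keepdims=True)   # random stochastic matrix

# Doeblin coefficient: eps = sum_y min_x P(x, y), so that
# P(x, .) >= eps * nu(.) with nu(y) proportional to min_x P(x, y).
eps = P.min(axis=0).sum()

# Invariant distribution from the leading left eigenvector.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

Pn = np.eye(k)
for n in range(1, 11):
    Pn = Pn @ P
    tv = 0.5 * np.abs(Pn - pi).sum(axis=1).max()   # worst starting state
    assert tv <= (1 - eps) ** n + 1e-12            # Doeblin coupling bound
    print(f"n = {n:2d}   max_x TV = {tv:.2e}   bound = {(1 - eps)**n:.2e}")
```

The bound is typically loose; the drift-plus-minorisation machinery of the paper sharpens it and extends it to state spaces where no uniform ε exists.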


1998 ◽  
Vol 35 (2) ◽ 
pp. 313-324 ◽  
Author(s):  
Bret Larget

A deterministic function of a Markov process is called an aggregated Markov process. We give necessary and sufficient conditions for the equivalence of continuous-time aggregated Markov processes. For both discrete- and continuous-time, we show that any aggregated Markov process which satisfies mild regularity conditions can be directly converted to a canonical representation which is unique for each class of equivalent models, and furthermore, is a minimal parameterization of all that can be identified about the underlying Markov process. Hidden Markov models on finite state spaces may be framed as aggregated Markov processes by expanding the state space and thus also have canonical representations.

