On Quasi-Stationary Distributions in Absorbing Discrete-Time Finite Markov Chains

1965 ◽  
Vol 2 (1) ◽  
pp. 88-100 ◽  
Author(s):  
J. N. Darroch ◽  
E. Seneta

The time to absorption from the set T of transient states of a Markov chain may be sufficiently long for the probability distribution over T to settle down in some sense to a “quasi-stationary” distribution. Various analogues of the stationary distribution of an irreducible chain are suggested and compared. The reverse process of an absorbing chain is found to be relevant.
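One natural analogue discussed in this line of work is the normalized left Perron–Frobenius eigenvector of the substochastic matrix governing the transient states. A minimal NumPy sketch, using an invented 3-state transient block (the matrix entries are made up for illustration):

```python
import numpy as np

# Hypothetical transient block Q of an absorbing chain: rows need not
# sum to 1; the deficit is the one-step absorption probability.
Q = np.array([[0.5, 0.3, 0.1],
              [0.2, 0.4, 0.2],
              [0.1, 0.2, 0.5]])

# The quasi-stationary distribution is the normalized left
# Perron-Frobenius eigenvector of Q (left eigenvectors of Q are
# ordinary eigenvectors of Q.T).
eigvals, left_vecs = np.linalg.eig(Q.T)
k = np.argmax(eigvals.real)
rho = eigvals[k].real            # Perron root, strictly less than 1
v = np.abs(left_vecs[:, k].real)
qsd = v / v.sum()

print(rho, qsd)  # qsd @ Q == rho * qsd, up to floating-point error
```

Conditioned on non-absorption, the distribution over transient states converges to `qsd`, and the absorption time loses memory at rate `rho` per step.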



Author(s):  
Richard J. Boucherie

This note introduces quasi-local-balance for discrete-time Markov chains with absorbing states. From quasi-local-balance, product-form quasi-stationary distributions are derived by analogy with product-form stationary distributions for Markov chains that satisfy local balance.


1967 ◽  
Vol 4 (1) ◽  
pp. 192-196 ◽  
Author(s):  
J. N. Darroch ◽  
E. Seneta

In a recent paper, the authors have discussed the concept of quasi-stationary distributions for absorbing Markov chains having a finite state space, with the further restriction of discrete time. The purpose of the present note is to summarize the analogous results when the time parameter is continuous.


1968 ◽  
Vol 5 (2) ◽  
pp. 401-413 ◽  
Author(s):  
Paul J. Schweitzer

A perturbation formalism is presented which shows how the stationary distribution and fundamental matrix of a Markov chain containing a single irreducible set of states change as the transition probabilities vary. Expressions are given for the partial derivatives of the stationary distribution and fundamental matrix with respect to the transition probabilities. Semi-group properties of the generators of transformations from one Markov chain to another are investigated. It is shown that a perturbation formalism exists in the multiple subchain case if and only if the change in the transition probabilities does not alter the number of, or intermix the various subchains. The formalism is presented when this condition is satisfied.
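The first-order effect described above can be checked numerically: with Z = (I − P + 1π)⁻¹ the fundamental matrix, a small change E in the transition probabilities shifts the stationary distribution by approximately πEZ. A hedged sketch (the two-state chain and the perturbation E are invented for illustration, and the solver is a generic least-squares routine, not the paper's formalism):

```python
import numpy as np

def stationary(P):
    # Solve pi P = pi together with sum(pi) = 1 as a consistent
    # overdetermined linear system.
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
pi = stationary(P)
# Fundamental matrix Z = (I - P + 1 pi)^{-1}.
Z = np.linalg.inv(np.eye(2) - P + np.outer(np.ones(2), pi))

# Row-sum-preserving perturbation E of the transition probabilities.
E = np.array([[0.001, -0.001],
              [0.0,    0.0  ]])
pi_pred = pi + pi @ E @ Z        # first-order prediction
pi_true = stationary(P + E)      # exact stationary distribution
print(np.abs(pi_pred - pi_true).max())  # second-order small
```

The residual is of order ‖E‖², which is the sense in which πEZ gives the partial derivatives of π with respect to the transition probabilities.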


1984 ◽  
Vol 21 (3) ◽  
pp. 567-574 ◽  
Author(s):  
Atef M. Abdel-Moneim ◽  
Frederick W. Leysieffer

Conditions under which a function of a finite, discrete-time Markov chain, X(t), is again Markov are given, when X(t) is not irreducible. These conditions are given in terms of an interrelationship between two partitions of the state space of X(t), the partition induced by the minimal essential classes of X(t) and the partition with respect to which lumping is to be considered.
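For intuition, the classical (Kemeny–Snell) strong-lumpability criterion — which this paper extends to reducible chains via the essential-class partition — requires that every state in a block have the same total probability of jumping into each other block. A sketch with an invented 3-state example:

```python
import numpy as np

def is_lumpable(P, blocks, tol=1e-12):
    # Classical strong-lumpability test: for every pair of blocks
    # (A, B), the probability of jumping from i into B must be the
    # same for all states i in A.
    for A in blocks:
        for B in blocks:
            s = [P[i, B].sum() for i in A]
            if max(s) - min(s) > tol:
                return False
    return True

P = np.array([[0.0, 0.5, 0.5],
              [0.3, 0.3, 0.4],
              [0.3, 0.4, 0.3]])

print(is_lumpable(P, [[0], [1, 2]]))  # True: states 1 and 2 behave alike
print(is_lumpable(P, [[0, 1], [2]]))  # False: 0 and 1 disagree on block {2}
```

When the test passes, the induced process on the blocks is again a Markov chain; the paper's contribution is characterizing when this works without irreducibility.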


2015 ◽  
Vol 47 (1) ◽  
pp. 83-105 ◽  
Author(s):  
Hiroyuki Masuyama

In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where an original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally, we discuss the application of our results to GI/G/1-type Markov chains.
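A numerical illustration of the setup (not the paper's bound itself): truncate a geometrically ergodic birth–death chain to its first 20 levels, augment the lost mass onto the last column, and measure the total variation distance between the truncated and original stationary distributions. The chain parameters are invented, and a 200-state chain stands in for the infinite original:

```python
import numpy as np

def stationary(P):
    # Solve pi P = pi with sum(pi) = 1.
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def bd_chain(n, p=0.3, q=0.6):
    # Birth prob p < death prob q: drift toward 0, hence a geometric
    # drift condition and geometrically decaying stationary tails.
    P = np.zeros((n, n))
    for i in range(n):
        if i + 1 < n: P[i, i + 1] = p
        if i > 0:     P[i, i - 1] = q
        P[i, i] = 1.0 - P[i].sum()
    return P

P_big = bd_chain(200)
big = stationary(P_big)              # proxy for the original chain

Pk = P_big[:20, :20].copy()
Pk[:, -1] += 1.0 - Pk.sum(axis=1)    # augment lost mass onto last column
small = stationary(Pk)

# Total variation distance between the two stationary distributions.
tv = 0.5 * (np.abs(big[:20] - small).sum() + big[20:].sum())
print(tv)
```

Because the stationary tail decays geometrically, the truncation error `tv` is tiny even at level 20, which is the qualitative behavior the paper's bounds quantify for block-monotone chains.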


2019 ◽  
Vol 29 (8) ◽  
pp. 1431-1449 ◽  
Author(s):  
John Rhodes ◽  
Anne Schilling

We show that the stationary distribution of a finite Markov chain can be expressed as the sum of certain normal distributions. These normal distributions are associated to planar graphs consisting of a straight line with attached loops. The loops touch only at one vertex either of the straight line or of another attached loop. Our analysis is based on our previous work, which derives the stationary distribution of a finite Markov chain using semaphore codes on the Karnofsky–Rhodes and McCammond expansion of the right Cayley graph of the finite semigroup underlying the Markov chain.


1992 ◽  
Vol 29 (1) ◽  
pp. 21-36 ◽  
Author(s):  
Masaaki Kijima

Let {X_n, n = 0, 1, 2, ···} be a transient Markov chain which, when restricted to the state space 𝒩+ = {1, 2, ···}, is governed by an irreducible, aperiodic and strictly substochastic matrix 𝐏 = (p_ij), and let p_ij(n) = P[X_n = j, X_k ∈ 𝒩+ for k = 0, 1, ···, n | X_0 = i], i, j ∈ 𝒩+. The prime concern of this paper is conditions for the existence of the limits, q_ij say, of q_ij(n) = p_ij(n) / Σ_{k ∈ 𝒩+} p_ik(n) as n → ∞. If the limits exist, the distribution (q_ij) is called the quasi-stationary distribution of {X_n} and has considerable practical importance. It will be shown that, under some conditions, if a non-negative non-trivial vector x = (x_i) satisfying r x^T = x^T 𝐏 and Σ_i x_i < ∞ exists, where r is the convergence norm of 𝐏, i.e. r = R^{-1} with R the common convergence radius of the power series Σ_n p_ij(n) z^n, and T denotes transpose, then it is unique, positive elementwise, and q_ij(n) necessarily converge to x_j as n → ∞. Unlike existing results in the literature, our results can be applied even to the R-null and R-transient cases. Finally, an application to a left-continuous random walk whose governing substochastic matrix is R-transient is discussed to demonstrate the usefulness of our results.




Author(s):  
Marcel F. Neuts

We consider a stationary discrete-time Markov chain with a finite number m of possible states which we designate by 1,…,m. We assume that at time t = 0 the process is in an initial state i with probability p_i (i = 1,…, m) and such that p_i ≥ 0 and p_1 + ··· + p_m = 1.

