Augmented truncations of infinite stochastic matrices

1987 ◽  
Vol 24 (3) ◽  
pp. 600-608 ◽  
Author(s):  
Diana Gibson ◽  
E. Seneta

We consider the problem of approximating the stationary distribution of a positive-recurrent Markov chain with infinite transition matrix P, by stationary distributions computed from (n × n) stochastic matrices formed by augmenting the entries of the (n × n) northwest corner truncations of P, as n → ∞.
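The augmentation idea can be illustrated numerically. The sketch below is an assumption for illustration (not a chain from the paper): a birth-death chain on the nonnegative integers whose stationary distribution is geometric; its n × n northwest corner is truncated, the lost row mass is returned to column 0 (first-column augmentation), and the resulting stationary vector is compared to the exact one.

```python
import numpy as np

def truncated_augmented(p, n):
    """n x n northwest corner of the birth-death chain with P(i, i+1) = p,
    P(i, i-1) = 1-p (i >= 1), P(0, 0) = 1-p, with the truncated row mass
    returned to column 0 (first-column augmentation)."""
    q = 1.0 - p
    P = np.zeros((n, n))
    P[0, 0] = q
    for i in range(n - 1):
        P[i, i + 1] = p
    for i in range(1, n):
        P[i, i - 1] = q
    P[:, 0] += 1.0 - P.sum(axis=1)   # augment so every row sums to 1
    return P

def stationary(P):
    """Stationary distribution: solve pi P = pi together with sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

p = 0.3                          # up-probability; positive recurrent since p < 1/2
r = p / (1 - p)
exact = (1 - r) * r ** np.arange(5)          # exact geometric stationary law
approx = stationary(truncated_augmented(p, 50))[:5]
print(np.max(np.abs(approx - exact)))        # small for n = 50
```

For this chain the truncation error decays geometrically in n, so even modest truncation sizes reproduce the leading stationary probabilities to high accuracy.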

1998 ◽  
Vol 35 (3) ◽  
pp. 517-536 ◽  
Author(s):  
R. L. Tweedie

Let P be the transition matrix of a positive recurrent Markov chain on the integers, with invariant distribution π. If (n)P denotes the n × n ‘northwest truncation’ of P, it is known that approximations to π(j)/π(0) can be constructed from (n)P, but these are known to converge to the probability distribution itself in special cases only. We show that such convergence always occurs for three further general classes of chains: geometrically ergodic chains, stochastically monotone chains, and those dominated by stochastically monotone chains. We show that all ‘finite’ perturbations of stochastically monotone chains can be considered to be dominated by such chains, and thus the results hold for a much wider class than is first apparent. In the cases of uniformly ergodic chains, and chains dominated by irreducible stochastically monotone chains, we find practical bounds on the accuracy of the approximations.


1998 ◽  
Vol 30 (3) ◽  
pp. 711-722 ◽  
Author(s):  
Krishna B. Athreya ◽  
Hye-Jeong Kang

In this paper we consider a Galton-Watson process in which particles move according to a positive recurrent Markov chain on a general state space. We prove a law of large numbers for the empirical position distribution and also discuss the rate of this convergence.
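The law of large numbers can be seen in a small simulation (the setup below is an assumption for illustration, not the paper's model): particles branch, each child inherits its parent's position and then takes one step of a positive recurrent (here finite-state) chain, and the empirical position distribution approaches the chain's stationary distribution, here (1/4, 1/2, 1/4).

```python
import numpy as np

rng = np.random.default_rng(0)

# A 3-state positive recurrent chain; its stationary law is (1/4, 1/2, 1/4).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

positions = np.array([0])            # one ancestor, at state 0
for _ in range(12):                  # 12 generations
    # 1 + Poisson(1) offspring (mean 2, never extinct) per particle
    offspring = 1 + rng.poisson(1.0, size=positions.size)
    positions = np.repeat(positions, offspring)      # children inherit position
    # each child then takes one step of the chain
    positions = np.array([rng.choice(3, p=P[s]) for s in positions])

empirical = np.bincount(positions, minlength=3) / positions.size
print(empirical)                     # close to (0.25, 0.5, 0.25)
```

Because the chain mixes quickly, the ancestral correlations between particles wash out and the empirical distribution concentrates near the stationary one as the population grows.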


1965 ◽  
Vol 2 (1) ◽  
pp. 88-100 ◽  
Author(s):  
J. N. Darroch ◽  
E. Seneta

The time to absorption from the set T of transient states of a Markov chain may be sufficiently long for the probability distribution over T to settle down in some sense to a “quasi-stationary” distribution. Various analogues of the stationary distribution of an irreducible chain are suggested and compared. The reverse process of an absorbing chain is found to be relevant.
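In the finite-state setting, the quasi-stationary distribution of Darroch and Seneta is the normalised left Perron eigenvector of the substochastic matrix Q of transitions among the transient states. A minimal sketch with an assumed absorbing random walk (not an example from the paper):

```python
import numpy as np

# Transitions among the transient states {1, 2, 3, 4} of a random walk
# absorbed at 0: from 1-3, step +/-1 with probability 1/2; state 4
# reflects back to 3. Row 1 sums to 1/2: the missing mass is absorption.
Q = np.array([[0.0, 0.5, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])

vals, vecs = np.linalg.eig(Q.T)          # left eigenvectors of Q
k = np.argmax(vals.real)                 # Perron root rho < 1
rho = vals[k].real
qsd = np.abs(vecs[:, k].real)            # Perron eigenvector has constant sign
qsd /= qsd.sum()                         # quasi-stationary distribution
print(rho, qsd)
```

Conditional on non-absorption, the distribution over T started from qsd stays qsd, and the probability of surviving one more step is exactly the Perron root ρ.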


1992 ◽  
Vol 29 (2) ◽  
pp. 374-383 ◽  
Author(s):  
Sophia Kalpazidou

The asymptotic behaviour of sequences of Markov processes whose finite-dimensional distributions depend upon the sample paths ω of a positive recurrent Markov chain ξ is studied. The existence of such sequences depends upon the existence of a unique class of directed weighted circuits having a probabilistic interpretation in terms of the directed circuits occurring along the sample paths of ξ. An application to multiple Markov chains is given.


2012 ◽  
Vol 2012 ◽  
pp. 1-10 ◽  
Author(s):  
Di Zhao ◽  
Hongyi Li ◽  
Donglin Su

The transition matrix that characterizes a discrete-time homogeneous Markov chain is a stochastic matrix: a special nonnegative matrix with each row summing to 1. In this paper we approach the computation of the stationary distribution of a transition matrix from the viewpoint of the Perron vector of a nonnegative matrix, and propose an algorithm for the stationary distribution on this basis. The algorithm can also be used to compute the Perron root and the corresponding Perron vector of any nonnegative irreducible matrix. Furthermore, a numerical example is given to demonstrate the validity of the algorithm.
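The paper's specific iteration is not reproduced here, but the Perron-vector viewpoint can be sketched with plain power iteration: for a nonnegative irreducible (and aperiodic) matrix, repeated left-multiplication with L1 normalisation converges to the Perron root and left Perron vector; for a stochastic matrix the root is 1 and the vector is the stationary distribution. Function and variable names below are assumptions for illustration.

```python
import numpy as np

def perron_power_iteration(A, tol=1e-12, max_iter=10_000):
    """Approximate the Perron root and left Perron vector of a nonnegative
    irreducible aperiodic matrix A by power iteration on the left.
    (A standard method, not necessarily the paper's algorithm.)"""
    x = np.ones(A.shape[0])
    x /= x.sum()
    lam = 0.0
    for _ in range(max_iter):
        y = x @ A                 # one left-multiplication step
        lam_new = y.sum()         # since x sums to 1, this tends to the Perron root
        y /= lam_new
        if abs(lam_new - lam) < tol and np.abs(y - x).max() < tol:
            return lam_new, y
        x, lam = y, lam_new
    return lam, x

# For a stochastic matrix the Perron root is 1 and the left Perron
# vector is the stationary distribution.
P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
root, pi = perron_power_iteration(P)
print(root, pi)                   # root near 1; pi near (0.4, 0.4, 0.2)
```

The same routine applied to a general nonnegative irreducible matrix (not row-stochastic) returns its Perron root and normalised left Perron vector, which is the broader use the paper describes.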

