Spectral representations of the transition probability matrices for continuous time finite Markov chains

1996 ◽  
Vol 33 (1) ◽  
pp. 28-33 ◽  
Author(s):  
Nan Fu Peng

Using an elementary linear-algebraic method, we obtain spectral representations, without the need to determine eigenvectors, of the transition probability matrices of completely general continuous-time Markov chains with finite state space. Compared with the proof of Brown (1991), who provided a similar result for a special class of finite Markov chains, the proof presented here is more concise.


1967 ◽  
Vol 4 (1) ◽  
pp. 192-196 ◽  
Author(s):  
J. N. Darroch ◽  
E. Seneta

In a recent paper, the authors have discussed the concept of quasi-stationary distributions for absorbing Markov chains having a finite state space, with the further restriction of discrete time. The purpose of the present note is to summarize the analogous results when the time parameter is continuous.
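In the continuous-time setting the quasi-stationary distribution is the normalized left eigenvector, for the eigenvalue with largest real part, of the generator restricted to the transient states. A minimal sketch with a hypothetical two-transient-state sub-generator (not an example from the paper):

```python
import numpy as np

# Sub-generator restricted to transient states {1, 2};
# state 0 is absorbing, reached from state 1 at rate 1.
Q_T = np.array([[-2.0,  1.0],
                [ 1.0, -1.0]])

# Left eigenvector for the eigenvalue with largest real part:
# nu @ Q_T = -theta * nu, where theta > 0 is the decay rate.
vals, vecs = np.linalg.eig(Q_T.T)
k = np.argmax(vals.real)
theta = -vals[k].real
nu = np.abs(vecs[:, k].real)
nu /= nu.sum()                  # quasi-stationary distribution

print(theta)   # decay rate of the conditioned process
print(nu)      # QSD over the transient states
```

Conditioned on non-absorption, the state distribution converges to `nu` as time grows.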


Author(s):  
Peter L. Chesson

Random transition probability matrices with stationary independent factors define “white noise” environment processes for Markov chains. Two examples are considered in detail. Such environment processes can be used to construct several Markov chains that are dependent, have the same transition probabilities, and are jointly a Markov chain. Transition rates for such processes are evaluated. These results have application to the study of animal movements.


1975 ◽  
Vol 12 (3) ◽  
pp. 498-506 ◽  
Author(s):  
G. G. S. Pegram

A model for the transition probability matrices (t.p.m.'s) of finite discrete Markov chains is suggested that may help those who wish to use more states than the available data would otherwise support. The model is especially useful in that a finite t.p.m. of arbitrary size can be specified by as few as two parameters. An example of the model's estimation and use is presented, showing it in a fair light compared with the conventional method of t.p.m. specification.
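To illustrate the general idea that a full n × n t.p.m. can be generated from very few parameters (this family is hypothetical and not the model of the paper), consider a "persistence plus common row" construction with two parameters:

```python
import numpy as np

def two_param_tpm(n, rho, theta):
    """An n x n transition matrix from two parameters:
    rho   = probability of staying in the current state,
    theta = geometric decay of the common row pi_j ~ theta**j.
    Illustrative only -- not the model proposed in the paper."""
    pi = theta ** np.arange(n)
    pi /= pi.sum()
    return rho * np.eye(n) + (1 - rho) * np.ones((n, 1)) @ pi[None, :]

P = two_param_tpm(5, 0.6, 0.8)
print(P.sum(axis=1))   # every row sums to 1
```

Here the stationary distribution of the resulting chain is the common row `pi` itself, whatever the value of `rho`.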


2018 ◽  
Vol 55 (4) ◽  
pp. 1025-1036 ◽  
Author(s):  
Dario Bini ◽  
Jeffrey J. Hunter ◽  
Guy Latouche ◽  
Beatrice Meini ◽  
Peter Taylor

In their 1960 book on finite Markov chains, Kemeny and Snell established that a certain sum is invariant. The value of this sum has become known as Kemeny’s constant. Various proofs have been given over time, some more technical than others. We give here a very simple physical justification, which extends without a hitch to continuous-time Markov chains on a finite state space. For Markov chains with denumerably infinite state space, the constant may be infinite and even if it is finite, there is no guarantee that the physical argument will hold. We show that the physical interpretation does go through for the special case of a birth-and-death process with a finite value of Kemeny’s constant.
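The invariance is easy to verify numerically: with mean first-passage times m(i, j) and stationary distribution π, the sum Σ_j π_j m(i, j) does not depend on the starting state i. A sketch for an arbitrary irreducible chain (the example matrix is made up for illustration):

```python
import numpy as np

# Any irreducible transition matrix will do.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
n = P.shape[0]

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.abs(vecs[:, np.argmax(vals.real)].real)
pi /= pi.sum()

# Fundamental matrix and mean first-passage times M[i, j] (M[i, i] = 0).
Z = np.linalg.inv(np.eye(n) - P + np.ones((n, 1)) @ pi[None, :])
M = (np.diag(Z)[None, :] - Z) / pi[None, :]

# Kemeny's constant: sum_j pi_j * M[i, j] is the same for every i.
K = M @ pi
print(K)                  # all entries equal
print(np.trace(Z) - 1)    # the same value, via a trace identity
```

The equality of `K` with `trace(Z) - 1` follows because the rows of the fundamental matrix `Z` sum to one.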


2002 ◽  
Vol 16 (4) ◽  
pp. 403-426 ◽  
Author(s):  
Mouad Ben Mamoun ◽  
Nihal Pekergin

We propose a particular class of transition probability matrices for discrete-time Markov chains whose stationary distribution can be computed in closed form. The stochastic monotonicity properties of this class are established. We give algorithms to construct monotone bounding matrices, belonging to the proposed class, for the variability orders. The accuracy of the bounds with respect to the underlying matrix structure is discussed through an example.
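Stochastic (st-)monotonicity of a transition matrix means its rows, viewed as distributions, are increasing in the usual stochastic order. A small checker sketch (the example matrices are made up, not taken from the paper):

```python
import numpy as np

def is_st_monotone(P):
    """Check st-monotonicity: the tail sums sum_{j >= k} P[i, j]
    must be nondecreasing in the row index i, for every k."""
    tails = np.cumsum(P[:, ::-1], axis=1)[:, ::-1]
    return bool(np.all(np.diff(tails, axis=0) >= -1e-12))

P_mono = np.array([[0.5, 0.3, 0.2],
                   [0.3, 0.4, 0.3],
                   [0.2, 0.3, 0.5]])
P_not = np.array([[0.2, 0.3, 0.5],
                  [0.6, 0.2, 0.2],
                  [0.2, 0.3, 0.5]])
print(is_st_monotone(P_mono))  # True
print(is_st_monotone(P_not))   # False
```

Monotone bounding matrices of this kind are what make stochastic comparison of the bounded and bounding chains possible.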



1994 ◽  
Vol 31 (1) ◽  
pp. 59-75 ◽  
Author(s):  
Peter Buchholz

Exact and ordinary lumpability in finite Markov chains are considered. Both concepts naturally define an aggregation of the Markov chain, yielding an aggregated chain that allows the exact determination of several stationary and transient results for the original chain. We show which quantities can be determined without error from the aggregated process and describe methods to calculate bounds on the remaining results. Furthermore, the concept of lumpability is extended to near-lumpability, yielding approximate aggregations.
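A minimal sketch of ordinary lumpability (the chain and partition below are constructed for illustration): within each block of the partition, every row must place the same total mass on each block, and the aggregated chain then reproduces, among other things, the aggregated stationary distribution exactly.

```python
import numpy as np

# A chain that is ordinarily lumpable for the partition {0, 1}, {2}:
# rows 0 and 1 each put mass 0.5 on block {0, 1} and 0.5 on block {2}.
P = np.array([[0.2, 0.3, 0.5],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])
blocks = [[0, 1], [2]]

# Aggregated chain: block-to-block mass from any representative row.
P_hat = np.array([[P[b[0], c].sum() for c in blocks] for b in blocks])

def stationary(M):
    vals, vecs = np.linalg.eig(M.T)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

pi = stationary(P)
pi_hat = stationary(P_hat)

# Exactness: aggregating pi over the blocks reproduces pi_hat.
print(np.array([pi[b].sum() for b in blocks]))
print(pi_hat)
```

For a non-lumpable partition the aggregated process is no longer Markov, which is where the bounding and near-lumpability techniques of the paper come in.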

