Exact and ordinary lumpability in finite Markov chains

1994 ◽  
Vol 31 (1) ◽  
pp. 59-75 ◽  
Author(s):  
Peter Buchholz

Exact and ordinary lumpability in finite Markov chains is considered. Both concepts naturally define an aggregation of the Markov chain yielding an aggregated chain that allows the exact determination of several stationary and transient results for the original chain. We show which quantities can be determined without error from the aggregated process and describe methods to calculate bounds on the remaining results. Furthermore, the concept of lumpability is extended to near lumpability, yielding approximate aggregation.
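As a concrete illustration of the ordinary-lumpability condition described above (a minimal Python/NumPy sketch; the partition is passed as lists of state indices, and the function names are ours): a chain is ordinarily lumpable with respect to a partition when the probability of jumping from a state into a block does not depend on which state of the source block is occupied, and these common block-jump probabilities form the aggregated chain.

```python
import numpy as np

def is_ordinarily_lumpable(P, partition, tol=1e-12):
    """Check ordinary lumpability: for every pair of blocks (A, B), the
    probability of moving from a state in A into block B must be the
    same for all states in A."""
    for A in partition:
        for B in partition:
            sums = [P[i, list(B)].sum() for i in A]
            if max(sums) - min(sums) > tol:
                return False
    return True

def aggregate(P, partition):
    """Build the lumped transition matrix (valid when lumpable):
    each entry is the common probability of jumping from block A
    into block B, read off from any representative state of A."""
    k = len(partition)
    Q = np.zeros((k, k))
    for a, A in enumerate(partition):
        i = A[0]  # any representative of block A
        for b, B in enumerate(partition):
            Q[a, b] = P[i, list(B)].sum()
    return Q
```

For example, a three-state chain in which states 1 and 2 have identical aggregate behaviour can be lumped into a two-state chain with this pair of functions.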


1968 ◽  
Vol 5 (2) ◽  
pp. 401-413 ◽  
Author(s):  
Paul J. Schweitzer

A perturbation formalism is presented which shows how the stationary distribution and fundamental matrix of a Markov chain containing a single irreducible set of states change as the transition probabilities vary. Expressions are given for the partial derivatives of the stationary distribution and fundamental matrix with respect to the transition probabilities. Semi-group properties of the generators of transformations from one Markov chain to another are investigated. It is shown that a perturbation formalism exists in the multiple subchain case if and only if the change in the transition probabilities neither alters the number of subchains nor intermixes them. The formalism is presented when this condition is satisfied.
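The first-order perturbation formula of this kind can be sketched numerically (in Python/NumPy; the specific helper names and the test chain are ours): with stationary distribution pi and fundamental matrix Z = (I - P + 1·pi)^(-1), a perturbation P -> P + E shifts the stationary distribution to pi + pi·E·Z to first order.

```python
import numpy as np

def stationary(P):
    """Stationary distribution of an irreducible chain: solve
    pi (I - P) = 0 together with the normalisation sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([(np.eye(n) - P).T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def fundamental_matrix(P, pi):
    """Fundamental matrix Z = (I - P + 1 pi)^(-1)."""
    n = P.shape[0]
    return np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))

def perturbed_stationary_first_order(P, E):
    """First-order estimate of the stationary distribution of P + E:
    pi' ~= pi + pi E Z."""
    pi = stationary(P)
    Z = fundamental_matrix(P, pi)
    return pi + pi @ E @ Z
```

For a small perturbation the first-order estimate agrees closely with the exact stationary distribution of the perturbed chain.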


1984 ◽  
Vol 21 (3) ◽ 
pp. 567-574 ◽  
Author(s):  
Atef M. Abdel-Moneim ◽  
Frederick W. Leysieffer

Conditions under which a function of a finite, discrete-time Markov chain, X(t), is again Markov are given, when X(t) is not irreducible. These conditions are given in terms of an interrelationship between two partitions of the state space of X(t), the partition induced by the minimal essential classes of X(t) and the partition with respect to which lumping is to be considered.
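The partition induced by the minimal essential classes can be computed mechanically; the following sketch (Python/NumPy; the function name and the approach via reachability are ours, not the paper's) finds the closed communicating classes of a possibly reducible chain from the transitive closure of its transition graph.

```python
import numpy as np

def essential_classes(P, tol=1e-12):
    """Minimal essential classes of a (possibly reducible) finite chain:
    communicating classes that no path leaves, found via the transitive
    closure of the one-step transition graph."""
    n = P.shape[0]
    closure = (P > tol) | np.eye(n, dtype=bool)
    for _ in range(n):  # Boolean transitive closure by repeated squaring
        closure = closure | (closure.astype(int) @ closure.astype(int) > 0)
    classes, seen = [], set()
    for i in range(n):
        if i in seen:
            continue
        comm = {j for j in range(n) if closure[i, j] and closure[j, i]}
        seen |= comm
        # the class is essential iff no state outside it is reachable
        closed = all(not closure[a, j]
                     for a in comm for j in range(n) if j not in comm)
        if closed:
            classes.append(sorted(comm))
    return classes
```

In a chain where states 0 and 1 form a closed class and state 2 is transient, only {0, 1} is returned as essential.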


2003 ◽  
Vol 4 (6) ◽  
pp. 601-608 ◽  
Author(s):  
Ilya Shmulevich ◽  
Ilya Gluhovsky ◽  
Ronaldo F. Hashimoto ◽  
Edward R. Dougherty ◽  
Wei Zhang

Probabilistic Boolean networks (PBNs) have recently been introduced as a promising class of models of genetic regulatory networks. The dynamic behaviour of PBNs can be analysed in the context of Markov chains. A key goal is the determination of the steady-state (long-run) behaviour of a PBN by analysing the corresponding Markov chain. This allows one to compute the long-term influence of one gene on another or determine the long-term joint probabilistic behaviour of a few selected genes. Because matrix-based methods quickly become prohibitive for large networks, we propose the use of Monte Carlo methods. However, the rate of convergence to the stationary distribution becomes a central issue. We discuss several approaches for determining the number of iterations necessary to achieve convergence of the Markov chain corresponding to a PBN. Using a recently introduced method based on the theory of two-state Markov chains, we illustrate the approach on a sub-network designed from human glioma gene expression data and determine the joint steady-state probabilities for several groups of genes.
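The basic Monte Carlo approach can be sketched as follows (Python/NumPy; this is a generic simulation estimator with a fixed run length and burn-in chosen by us, whereas the two-state-chain method the abstract cites derives the required number of iterations from a prescribed accuracy):

```python
import numpy as np

def mc_steady_state(P, n_steps=100_000, burn_in=10_000, seed=1):
    """Estimate the steady-state distribution of a finite chain by
    simulation: run one long trajectory, discard a burn-in prefix,
    and return the empirical state frequencies."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    cum = P.cumsum(axis=1)  # per-row cumulative transition probabilities
    counts = np.zeros(n)
    state = 0
    for t in range(n_steps):
        # sample the next state by inverting the cumulative row
        state = min(int(np.searchsorted(cum[state], rng.random())), n - 1)
        if t >= burn_in:
            counts[state] += 1
    return counts / counts.sum()
```

For a PBN the transition matrix over gene-activity profiles plays the role of P; for large networks one would simulate the network rules directly instead of materialising P.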


1996 ◽  
Vol 33 (2) ◽ 
pp. 357-367 ◽  
Author(s):  
M. V. Koutras

In this paper we consider a class of reliability structures which can be efficiently described through (imbedded in) finite Markov chains. Some general results are provided for the reliability evaluation and generating functions of such systems. Finally, it is shown that a great variety of well-known reliability structures can be accommodated in this general framework, and certain properties of those structures are obtained by using their Markov chain imbedding description.
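A classical instance of such an imbedding is the consecutive-k-out-of-n:F system, which fails iff k consecutive components fail. The sketch below (Python/NumPy; the state encoding is a standard one, details ours) tracks the current run of failed components as the chain state, with run length k absorbing:

```python
import numpy as np

def consecutive_k_out_of_n_F(n, k, p):
    """Reliability of a consecutive-k-out-of-n:F system via Markov
    chain imbedding. State j < k is the current run of failed
    components; state k (system failure) is absorbing. Each component
    works independently with probability p."""
    q = 1.0 - p
    M = np.zeros((k + 1, k + 1))
    for j in range(k):
        M[j, 0] = p      # a working component resets the run
        M[j, j + 1] = q  # a failed component extends the run
    M[k, k] = 1.0        # absorbing failure state
    dist = np.zeros(k + 1)
    dist[0] = 1.0
    for _ in range(n):   # process the n components one at a time
        dist = dist @ M
    return dist[:k].sum()  # probability the system has not failed
```

With k = 1 this reduces to a series system (reliability p^n), which gives a quick sanity check on the construction.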


2019 ◽  
Vol 44 (3) ◽  
pp. 282-308 ◽  
Author(s):  
Brian G. Vegetabile ◽  
Stephanie A. Stout-Oswald ◽  
Elysia Poggi Davis ◽  
Tallie Z. Baram ◽  
Hal S. Stern

Predictability of behavior is an important characteristic in many fields including biology, medicine, marketing, and education. When a sequence of actions performed by an individual can be modeled as a stationary time-homogeneous Markov chain the predictability of the individual’s behavior can be quantified by the entropy rate of the process. This article compares three estimators of the entropy rate of finite Markov processes. The first two methods directly estimate the entropy rate through estimates of the transition matrix and stationary distribution of the process. The third method is related to the sliding-window Lempel–Ziv compression algorithm. The methods are compared via a simulation study and in the context of a study of interactions between mothers and their children.


2019 ◽  
Vol 29 (8) ◽ 
pp. 1431-1449 ◽ 
Author(s):  
John Rhodes ◽  
Anne Schilling

We show that the stationary distribution of a finite Markov chain can be expressed as the sum of certain normal distributions. These normal distributions are associated to planar graphs consisting of a straight line with attached loops. The loops touch only at one vertex either of the straight line or of another attached loop. Our analysis is based on our previous work, which derives the stationary distribution of a finite Markov chain using semaphore codes on the Karnofsky–Rhodes and McCammond expansion of the right Cayley graph of the finite semigroup underlying the Markov chain.


1996 ◽  
Vol 33 (1) ◽  
pp. 28-33 ◽  
Author(s):  
Nan Fu Peng

Using an easy linear-algebraic method, we obtain spectral representations, without the need for eigenvector determination, of the transition probability matrices for completely general continuous time Markov chains with finite state space. Comparing the proof presented here with that of Brown (1991), who provided a similar result for a special class of finite Markov chains, we observe that ours is more concise.
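For context, the object being represented is the transition matrix P(t) = exp(Qt) of a continuous-time chain with generator Q. The sketch below (Python/NumPy; a generic scaling-and-squaring computation of ours, not the paper's closed spectral form, whose whole point is to avoid both eigenvector computation and numerical matrix exponentiation) shows what is being computed:

```python
import numpy as np

def ctmc_transition_matrix(Q, t, squarings=20, terms=20):
    """Transition matrix P(t) = exp(Qt) of a finite CTMC with
    generator Q, via scaling-and-squaring of a truncated Taylor
    series: exp(A) = (exp(A / 2^s))^(2^s)."""
    A = Q * (t / 2.0 ** squarings)
    n = Q.shape[0]
    S = np.eye(n)
    term = np.eye(n)
    for k in range(1, terms):  # Taylor series of exp on the scaled matrix
        term = term @ A / k
        S = S + term
    for _ in range(squarings):  # undo the scaling by repeated squaring
        S = S @ S
    return S
```

For the two-state generator Q = [[-1, 1], [2, -2]] the result matches the closed form P(t) = (1/3)·[[2 + e^(-3t), 1 - e^(-3t)], [2 - 2e^(-3t), 1 + 2e^(-3t)]].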


Author(s):  
Marcel F. Neuts

We consider a stationary discrete-time Markov chain with a finite number m of possible states, which we designate by 1, …, m. We assume that at time t = 0 the process is in an initial state i with probability p_i (i = 1, …, m), where the p_i are non-negative and sum to one.

