Open Markov Chains: Cumulant Dynamics, Fluctuations and Correlations

Entropy ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. 256
Author(s):  
Raúl Salgado-García

In this work we propose a model for open Markov chains that can be interpreted as a system of non-interacting particles evolving according to the rules of a Markov chain. The number of particles in the system is not constant, because we allow particles to arrive at or leave the state space according to prescribed protocols. We describe this system by tracking the population of particles in every state and establishing the rules of time evolution for the distribution of particles. We show that it is possible to describe the distribution of particles over the state space through the corresponding moment generating function. This description is given through the dynamics governing the behavior of that moment generating function, and we prove that the system attains stationarity under some conditions. We also show that it is possible to describe the dynamics of the first two cumulants of the distribution of particles, which is in some ways a simpler technique for obtaining useful information about the open Markov chain for practical purposes. Finally, we study the behavior of the time-dependent correlation functions of the number of particles present in the system. We give some simple examples of open chains that can either be fully described through the moment generating function or partially described through the exact solution of the cumulant dynamics.
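As a minimal sketch of the first-cumulant (mean) dynamics such a model implies — not the paper's actual construction, just the analogous mean recursion for a hypothetical 3-state open chain — let P be a substochastic transition matrix (row deficits model departures) and a a mean arrival vector per step:

```python
import numpy as np

# Hypothetical 3-state open chain: substochastic matrix (each row sums to < 1,
# the deficit being the per-step probability that a particle leaves the system).
P = np.array([[0.5, 0.3, 0.1],
              [0.2, 0.5, 0.2],
              [0.1, 0.2, 0.6]])
a = np.array([2.0, 0.0, 1.0])   # mean number of arrivals per step into each state

# First-cumulant (mean population) dynamics: m_{t+1} = m_t P + a
m = np.zeros(3)
for _ in range(500):
    m = m @ P + a

# The stationary mean solves m* = m* P + a, i.e. m* = a (I - P)^{-1};
# since P is strictly substochastic the iteration converges to it.
m_star = a @ np.linalg.inv(np.eye(3) - P)
print(np.allclose(m, m_star))
```

The numbers here are illustrative; the point is only that the mean population obeys a linear recursion whose fixed point exists whenever the spectral radius of the substochastic matrix is below one.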

1976 ◽  
Vol 8 (4) ◽  
pp. 737-771 ◽  
Author(s):  
R. L. Tweedie

The aim of this paper is to present a comprehensive set of criteria for classifying as recurrent, transient, null or positive the sets visited by a general state space Markov chain. When the chain is irreducible in some sense, these then provide criteria for classifying the chain itself, provided the sets considered actually reflect the status of the chain as a whole. The first part of the paper is concerned with the connections between various definitions of recurrence, transience, nullity and positivity for sets and for irreducible chains; here we also elaborate the idea of status sets for irreducible chains. In the second part we give our criteria for classifying sets. When the state space is countable, our results for recurrence, transience and positivity reduce to the classical work of Foster (1953); for continuous-valued chains they extend results of Lamperti (1960), (1963); for general spaces the positivity and recurrence criteria strengthen those of Tweedie (1975b).
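In the countable case the abstract mentions, Foster's (1953) classical criterion classifies a chain as positive recurrent when some nonnegative function V has uniformly negative one-step drift outside a finite set. A toy illustration, assuming a hypothetical reflected random walk on {0, 1, 2, …} with V(x) = x:

```python
# Reflected random walk: from x > 0 step +1 w.p. p and -1 w.p. q = 1 - p;
# from 0, step to 1 w.p. p, stay w.p. q. Lyapunov function V(x) = x.
p, q = 0.3, 0.7

def drift(x):
    """One-step drift E[V(X_{t+1}) - V(x) | X_t = x] for V(x) = x."""
    if x == 0:
        return p           # from 0: up w.p. p, stay w.p. q
    return p - q           # elsewhere: +1 w.p. p, -1 w.p. q

# Drift is <= -(q - p) < 0 outside the finite set {0}, so Foster's
# criterion classifies the chain as positive recurrent.
print(all(drift(x) <= -(q - p) for x in range(1, 1000)))
```

This is only the countable-space special case; the paper's contribution is extending such criteria to sets visited by general state space chains.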


1990 ◽  
Vol 4 (1) ◽  
pp. 89-116 ◽  
Author(s):  
Ushio Sumita ◽  
Maria Rieders

A novel algorithm is developed which computes the ergodic probability vector for large Markov chains. Decomposing the state space into lumps, the algorithm generates a replacement process on each lump, where any exit from a lump is instantaneously replaced at some state in that lump. The replacement distributions are constructed recursively in such a way that, in the limit, the ergodic probability vector for a replacement process on one lump will be proportional to the ergodic probability vector of the original Markov chain restricted to that lump. Inverse matrices computed in the algorithm are of size (M – 1), where M is the number of lumps, thereby providing a substantial rank reduction. When a special structure is present, the procedure for generating the replacement distributions can be simplified. The relevance of the new algorithm to the aggregation-disaggregation algorithm of Takahashi [29] is also discussed.
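The object the algorithm targets can be illustrated directly: within each lump, the limiting replacement process has an ergodic vector proportional to the original chain's ergodic vector restricted to that lump. A sketch of that target quantity for a hypothetical 4-state chain partitioned into two lumps (this computes the restriction by brute force, not via the replacement-process recursion itself):

```python
import numpy as np

# Hypothetical 4-state irreducible chain, partitioned into lumps {0,1} and {2,3}.
P = np.array([[0.6, 0.2, 0.1, 0.1],
              [0.3, 0.4, 0.2, 0.1],
              [0.1, 0.1, 0.5, 0.3],
              [0.2, 0.1, 0.3, 0.4]])

# Ergodic probability vector of the full chain, by power iteration.
pi = np.full(4, 0.25)
for _ in range(1000):
    pi = pi @ P

# The replacement-process algorithm converges, lump by lump, to vectors
# proportional to these restrictions of pi (renormalized within each lump).
lump_1 = pi[[0, 1]] / pi[[0, 1]].sum()
lump_2 = pi[[2, 3]] / pi[[2, 3]].sum()
print(lump_1, lump_2)
```

The point of the algorithm, of course, is to obtain these lump-level vectors without ever computing pi on the full state space, inverting only matrices of size (M – 1) for M lumps.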


1984 ◽  
Vol 21 (3) ◽  
pp. 567-574 ◽  
Author(s):  
Atef M. Abdel-Moneim ◽  
Frederick W. Leysieffer

Conditions under which a function of a finite, discrete-time Markov chain, X(t), is again Markov are given, when X(t) is not irreducible. These conditions are given in terms of an interrelationship between two partitions of the state space of X(t), the partition induced by the minimal essential classes of X(t) and the partition with respect to which lumping is to be considered.
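For background, the classical (Kemeny–Snell) strong-lumpability condition — which the paper refines for non-irreducible chains — requires that within each cell of the partition, every state has the same total transition probability into each cell. A minimal checker, with a hypothetical 3-state example:

```python
import numpy as np

def is_lumpable(P, partition):
    """Strong-lumpability check: for every pair of cells (C, D), the total
    transition probability from i into D must be the same for all i in C."""
    for C in partition:
        for D in partition:
            sums = [P[i, list(D)].sum() for i in C]
            if not np.allclose(sums, sums[0]):
                return False
    return True

# Hypothetical chain, lumpable over {{0,1},{2}}: rows 0 and 1 both send
# probability 0.5 into cell {2} (and hence 0.5 into cell {0,1}).
P = np.array([[0.2, 0.3, 0.5],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])
print(is_lumpable(P, [[0, 1], [2]]))   # True
print(is_lumpable(P, [[0], [1, 2]]))   # False: rows 1 and 2 disagree on {0}
```

The paper's conditions weaken this picture by interrelating the candidate partition with the partition into minimal essential classes when X(t) is not irreducible.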


1977 ◽  
Vol 99 (2) ◽  
pp. 103-108 ◽  
Author(s):  
Arthur Mayer

The state vector of a linear system responding to Gaussian noise satisfies a Langevin equation, and the moment-generating function of the probability distribution of the state vector satisfies a partial differential equation. The logarithm of the moment-generating function is expanded in a power series, whose coefficients are organized into a sequence of symmetric tensors. These are the generalized cumulants of the time-dependent distribution of the state vector. They separately satisfy an infinite sequence of uncoupled ordinary differential tensor equations. The normal modes of each of the generalized cumulants are given by an easy formula. This specifies transient response and proves that all cumulants of a stable system are stable. Also, all cumulants of an unstable system are unstable. As an example, a particular non-Gaussian initial distribution is assumed for the state vector of a second-order tracking system, and the transient fourth cumulant is calculated.
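The uncoupled cumulant recursions are easy to see in a discrete-time analogue (a sketch, not the paper's continuous-time Langevin setting): for x_{t+1} = A x_t + w_t with w_t ~ N(0, Q), the first cumulant (mean) and second cumulant (covariance) evolve independently of each other, and both converge when A is stable:

```python
import numpy as np

# Discrete-time linear system x_{t+1} = A x_t + w_t, w_t ~ N(0, Q).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])     # stable: eigenvalues 0.9, 0.8 inside unit circle
Q = np.array([[0.1, 0.0],
              [0.0, 0.2]])

# The first two cumulants satisfy uncoupled linear recursions:
m = np.array([5.0, -3.0])      # mean (first cumulant)
P = np.zeros((2, 2))           # covariance (second cumulant)
for _ in range(300):
    m = A @ m                  # mean recursion, independent of P
    P = A @ P @ A.T + Q        # covariance recursion, independent of m

# Stability of A makes every cumulant stable: the mean decays to 0 and P
# approaches the solution of the discrete Lyapunov equation P = A P A^T + Q.
print(np.allclose(m, 0), np.allclose(P, A @ P @ A.T + Q))
```

Higher cumulants obey analogous uncoupled tensor recursions driven by powers of A, which is the discrete shadow of the paper's result that stability of the system implies stability of all cumulants.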


2009 ◽  
Vol 46 (3) ◽  
pp. 812-826
Author(s):  
Saul Jacka

Motivated by Feller's coin-tossing problem, we consider the problem of conditioning an irreducible Markov chain never to wait too long at 0. Denoting by τ the first time that the chain, X, waits for at least one unit of time at the origin, we consider conditioning the chain on the event (τ > T). We show that there is a weak limit as T → ∞ in the cases where either the state space is finite or X is transient. We give sufficient conditions for the existence of a weak limit in other cases and show that we have vague convergence to a defective limit if the time to hit zero has a lighter tail than τ and τ is subexponential.


1989 ◽  
Vol 26 (3) ◽  
pp. 446-457 ◽  
Author(s):  
Gerardo Rubino

We analyse the conditions under which the aggregated process constructed from a homogeneous Markov chain over a given partition of the state space is also Markov homogeneous. Past work on the subject is reviewed and new properties are obtained.


1983 ◽  
Vol 20 (3) ◽  
pp. 505-512
Author(s):  
Russell Gerrard

The classical condition for regularity of a Markov chain is extended to include semi-Markov chains. In addition, for any given semi-Markov chain, we find Markov chains which exhibit identical regularity properties. This is done either (i) by transforming the state space or, alternatively, (ii) by imposing conditions on the holding-time distributions. Brief consideration is given to the problem of extending the results to processes other than semi-Markov chains.


2009 ◽  
Vol 9 (2) ◽  
pp. 187-204
Author(s):  
Thomas R. Boucher ◽  
Daren B. H. Cline

The state-space representations of certain nonlinear autoregressive time series are general state Markov chains. The transitions of a general state Markov chain among regions in its state-space can be modeled with the transitions among states of a finite state Markov chain. Stability of the time series is then informed by the stationary distributions of the finite state Markov chain. This approach generalizes some previous results.
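A sketch of the approach on a hypothetical threshold AR(1) model (the specific model, grid, and regime coefficients here are illustrative, not from the paper): discretize the state space into bins, build the finite-state transition matrix from the noise distribution, and read stability off its stationary distribution:

```python
import numpy as np
from math import erf

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / np.sqrt(2)))

# Threshold AR(1): x_{t+1} = a(x_t) * x_t + e_t with e_t ~ N(0, 1) and
# regime coefficients a(x) = 0.5 for x < 0, a(x) = 0.8 for x >= 0.
grid = np.linspace(-6, 6, 121)                                   # bin centers
edges = np.concatenate(([-np.inf], (grid[:-1] + grid[1:]) / 2, [np.inf]))

a = np.where(grid < 0, 0.5, 0.8)
mean_next = a * grid
# Finite-state transition matrix: P[i, j] = P(x_{t+1} in bin j | x_t = grid[i])
P = np.array([[Phi(edges[j + 1] - m) - Phi(edges[j] - m)
               for j in range(len(grid))] for m in mean_next])

# Stationary distribution of the finite-state chain by power iteration;
# its existence and concentration inform stability of the time series.
pi = np.full(len(grid), 1 / len(grid))
for _ in range(500):
    pi = pi @ P
print(pi.sum())
```

Both regimes here are contracting, so the approximating finite chain has a proper stationary distribution; the interest of the general approach lies in models where stability is not obvious from the coefficients alone.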

