Superposition of Interacting Aggregated Continuous-Time Markov Chains

1997 ◽  
Vol 29 (1) ◽  
pp. 56-91 ◽  
Author(s):  
Frank Ball ◽  
Robin K. Milne ◽  
Ian D. Tame ◽  
Geoffrey F. Yeo

Consider a system of interacting finite Markov chains in continuous time, where each subsystem is aggregated by a common partitioning of the state space. The interaction is assumed to arise from dependence of some of the transition rates for a given subsystem at a specified time on the states of the other subsystems at that time. With two subsystem classes, labelled 0 and 1, the superposition process arising from a system counts the number of subsystems in class 1. Key structure and results from the theory of aggregated Markov processes are summarized. These are then also applied to superposition processes. In particular, we consider invariant distributions for the level m entry process, marginal and joint distributions for sojourn times of the superposition process at its various levels, and moments and correlation functions associated with these distributions. The distributions are obtained mainly by using matrix methods, though an approach based on point process methods and conditional probability arguments is outlined. Conditions under which an interacting aggregated Markov chain is reversible are established. The ideas are illustrated with simple examples for which numerical results are obtained using Matlab. Motivation for this study has come from stochastic modelling of the behaviour of ion channels; another application is in reliability modelling.
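As a rough illustration of the kind of system described (our own toy sketch, not the authors' model), the following simulates the superposition level of a few interacting two-state subsystems with a Gillespie-style algorithm; the function name `simulate_superposition` and the rates `alpha`, `beta`, and `coupling` are invented for the example.

```python
import random

def simulate_superposition(n_sub=5, t_end=10.0, alpha=0.5, beta=1.0,
                           coupling=0.2, seed=1):
    """Gillespie-style simulation of the superposition level of n_sub
    interacting two-state (class 0 / class 1) subsystems.  Each subsystem
    leaves class 1 at rate alpha; it enters class 1 at a rate beta that is
    increased by `coupling` for every subsystem already in class 1 (the
    interaction).  Returns the jump times and the level after each jump."""
    rng = random.Random(seed)
    t, level = 0.0, 0           # level = number of subsystems in class 1
    times, levels = [0.0], [0]
    while True:
        up_rate = (n_sub - level) * (beta + coupling * level)   # 0 -> 1
        down_rate = level * alpha                               # 1 -> 0
        total = up_rate + down_rate
        t += rng.expovariate(total)
        if t >= t_end:
            return times, levels
        level += 1 if rng.random() < up_rate / total else -1
        times.append(t)
        levels.append(level)
```

Because only one subsystem changes class at a time, the resulting level process moves in unit steps, as a superposition process of this kind must.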


1967 ◽  
Vol 4 (1) ◽  
pp. 192-196 ◽  
Author(s):  
J. N. Darroch ◽  
E. Seneta

In a recent paper, the authors have discussed the concept of quasi-stationary distributions for absorbing Markov chains having a finite state space, with the further restriction of discrete time. The purpose of the present note is to summarize the analogous results when the time parameter is continuous.
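In the continuous-time finite case, a quasi-stationary distribution is the normalized left Perron eigenvector of the generator restricted to the transient states. A minimal numerical sketch (the scheme and the helper name `quasi_stationary` are ours, not taken from the paper):

```python
def quasi_stationary(QT, iters=5000):
    """Power iteration for the quasi-stationary distribution of a finite
    absorbing continuous-time Markov chain.  QT is the generator restricted
    to the transient states; the QSD is the normalized left Perron
    eigenvector of QT, obtained here by iterating the uniformized
    substochastic matrix R = I + QT / lam."""
    n = len(QT)
    lam = max(-QT[i][i] for i in range(n)) + 1.0   # uniformization rate
    R = [[(1.0 if i == j else 0.0) + QT[i][j] / lam for j in range(n)]
         for i in range(n)]
    v = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(v[i] * R[i][j] for i in range(n)) for j in range(n)]
        s = sum(v)
        v = [x / s for x in v]                     # renormalize each step
    return v
```

For a QSD v, the row vector v·QT is a negative multiple of v; the multiplier is the exponential decay rate of the conditioned process.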


1992 ◽  
Vol 6 (1) ◽  
pp. 127-131 ◽  
Author(s):  
Masaaki Kijima

An external uniformization technique was developed by Ross [4] to obtain approximations of transition probabilities of finite Markov chains in continuous time. Yoon and Shanthikumar [7] then reported, through extensive numerical experiments, that this technique performs quite well compared to other existing methods. In this paper, we show that external uniformization follows from the strong law of large numbers applied to underlying exponential distributions. Based on this observation, some remarks regarding properties of the approximation are given.


2011 ◽  
Vol 48 (2) ◽  
pp. 322-332 ◽  
Author(s):  
Amine Asselah ◽  
Pablo A. Ferrari ◽  
Pablo Groisman

Consider a continuous-time Markov process with transition rates matrix Q in the state space Λ ∪ {0}. In the associated Fleming–Viot process, N particles evolve independently in Λ with transition rates matrix Q until one of them attempts to jump to state 0. At this moment the particle jumps to one of the positions of the other particles, chosen uniformly at random. When Λ is finite, we show that the empirical distribution of the particles at a fixed time converges as N → ∞ to the distribution of a single particle at the same time conditioned on not touching {0}. Furthermore, the empirical profile of the unique invariant measure for the Fleming–Viot process with N particles converges as N → ∞ to the unique quasistationary distribution of the one-particle motion. A key element of the approach is to show that the two-particle correlations are of order 1/N.
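The particle system is straightforward to simulate. The sketch below (our own illustration; the function name `fleming_viot` and the rate encoding are invented) implements the relocation rule for a finite state space with absorbing state 0:

```python
import random

def fleming_viot(rates, n_particles=50, t_end=20.0, seed=7):
    """Fleming-Viot particle system for a finite-state chain absorbed at 0.
    `rates[s]` maps each non-absorbed state s to a dict {target: rate}.
    When a particle attempts to jump to 0 it is instead relocated to the
    position of another particle chosen uniformly at random.  Returns the
    final particle positions, whose empirical distribution approximates
    the quasistationary distribution for large n_particles and t_end."""
    rng = random.Random(seed)
    states = sorted(rates)
    particles = [rng.choice(states) for _ in range(n_particles)]
    t = 0.0
    while True:
        exit_rates = [sum(rates[p].values()) for p in particles]
        t += rng.expovariate(sum(exit_rates))
        if t >= t_end:
            return particles
        # choose the jumping particle proportionally to its exit rate
        i = rng.choices(range(n_particles), weights=exit_rates)[0]
        targets, weights = zip(*rates[particles[i]].items())
        target = rng.choices(targets, weights=weights)[0]
        if target == 0:
            # Fleming-Viot rule: copy a uniformly chosen other particle
            j = rng.randrange(n_particles - 1)
            particles[i] = particles[j + (j >= i)]
        else:
            particles[i] = target
```

By construction no particle ever occupies 0, so the empirical measure stays supported on Λ at all times.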


2005 ◽  
Vol 42 (2) ◽  
pp. 303-320 ◽  
Author(s):  
Xianping Guo ◽  
Onésimo Hernández-Lerma

In this paper, we study two-person nonzero-sum games for continuous-time Markov chains with discounted payoff criteria and Borel action spaces. The transition rates are possibly unbounded, and the payoff functions might have neither upper nor lower bounds. We give conditions that ensure the existence of Nash equilibria in stationary strategies. For the zero-sum case, we prove the existence of the value of the game, and also provide a recursive way to compute it, or at least to approximate it. Our results are applied to a controlled queueing system. We also show that if the transition rates are uniformly bounded, then a continuous-time game is equivalent, in a suitable sense, to a discrete-time Markov game.


1996 ◽  
Vol 33 (1) ◽  
pp. 28-33 ◽  
Author(s):  
Nan Fu Peng

Using an easy linear-algebraic method, we obtain spectral representations, without the need for eigenvector determination, of the transition probability matrices for completely general continuous-time Markov chains with finite state space. Comparing the proof presented here with that of Brown (1991), who provided a similar result for a special class of finite Markov chains, we observe that ours is more concise.
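One classical way to write such an eigenvector-free spectral representation (not necessarily the construction of this paper) is Sylvester's interpolation formula, which builds the spectral projectors from products of resolvent-type factors using the eigenvalues alone. A minimal sketch, assuming distinct eigenvalues:

```python
import math

def sylvester_exp(Q, eigvals, t):
    """Spectral representation of P(t) = exp(tQ) via Sylvester's
    interpolation formula, using only the (distinct) eigenvalues of Q:
        P(t) = sum_i e^{l_i t} prod_{j != i} (Q - l_j I) / (l_i - l_j).
    No eigenvectors are computed."""
    n = len(Q)
    P = [[0.0] * n for _ in range(n)]
    for i, li in enumerate(eigvals):
        # spectral projector for eigenvalue li, built from Q and the
        # remaining eigenvalues only
        A = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
        for j, lj in enumerate(eigvals):
            if j == i:
                continue
            M = [[(Q[r][c] - (lj if r == c else 0.0)) / (li - lj)
                  for c in range(n)] for r in range(n)]
            A = [[sum(A[r][k] * M[k][c] for k in range(n)) for c in range(n)]
                 for r in range(n)]
        w = math.exp(li * t)
        P = [[P[r][c] + w * A[r][c] for c in range(n)] for r in range(n)]
    return P
```

For a two-state chain the eigenvalues of Q = [[-a, a], [b, -b]] are 0 and -(a+b), known without any eigenvector computation, and the formula reproduces the closed-form transition probabilities.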


2002 ◽  
Vol 39 (1) ◽  
pp. 197-212 ◽  
Author(s):  
F. Javier López ◽  
Gerardo Sanz

Let (X_t) and (Y_t) be continuous-time Markov chains with countable state spaces E and F and let K be an arbitrary subset of E × F. We give necessary and sufficient conditions on the transition rates of (X_t) and (Y_t) for the existence of a coupling which stays in K. We also show that when such a coupling exists, it can be chosen to be Markovian and give a way to construct it. In the case E = F and K ⊆ E × E, we see how the problem of construction of the coupling can be simplified. We give some examples of use and application of our results, including a new concept of lumpability in Markov chains.


1993 ◽  
Vol 7 (4) ◽  
pp. 529-543 ◽  
Author(s):  
P. K. Pollett ◽  
P. G. Taylor

We consider the problem of establishing the existence of stationary distributions for continuous-time Markov chains directly from the transition rates Q. Given an invariant probability distribution m for Q, we show that a necessary and sufficient condition for m to be a stationary distribution for the minimal process is that Q be regular. We provide sufficient conditions for the regularity of Q that are simple to verify in practice, thus allowing one to easily identify stationary distributions for a variety of models. To illustrate our results, we shall consider three classes of multidimensional Markov chains, namely, networks of queues with batch movements, semireversible queues, and partially balanced Markov processes.
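As a small concrete instance of identifying a stationary distribution directly from the rates (our own toy example, not one of the paper's three model classes): for a finite birth-death chain the invariant distribution follows from detailed balance, and since a finite chain has bounded rates, Q is regular, so the invariant distribution is automatically stationary. The helper names below are invented for the sketch.

```python
def birth_death_stationary(birth, death):
    """Invariant distribution of a finite birth-death chain from detailed
    balance: pi_{k+1} = pi_k * birth[k] / death[k], where birth[k] is the
    rate k -> k+1 and death[k] is the rate k+1 -> k."""
    pi = [1.0]
    for bk, dk in zip(birth, death):
        pi.append(pi[-1] * bk / dk)
    s = sum(pi)
    return [p / s for p in pi]

def is_invariant(pi, Q, tol=1e-12):
    """Check the invariance condition pi Q = 0 componentwise."""
    n = len(Q)
    return all(abs(sum(pi[i] * Q[i][j] for i in range(n))) < tol
               for j in range(n))
```

For birth rates (2, 1) and death rates (1, 2) on states {0, 1, 2} this gives the distribution (1/4, 1/2, 1/4), which indeed satisfies pi Q = 0.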




2003 ◽  
Vol 40 (2) ◽  
pp. 327-345 ◽  
Author(s):  
Xianping Guo ◽  
Onésimo Hernández-Lerma

This paper is a first study of two-person zero-sum games for denumerable continuous-time Markov chains determined by given transition rates, with an average payoff criterion. The transition rates are allowed to be unbounded, and the payoff rates may have neither upper nor lower bounds. In the spirit of the ‘drift and monotonicity’ conditions for continuous-time Markov processes, we give conditions on the controlled system's primitive data under which the existence of the value of the game and a pair of strong optimal stationary strategies is ensured by using the Shapley equations. Also, we present a ‘martingale characterization’ of a pair of strong optimal stationary strategies. Our results are illustrated with a birth-and-death game.

