Average optimal switching of a Markov chain with a Borel state space

2002
Vol 55 (1)
pp. 143-159
Author(s):
Alexander Yushkevich
Evgueni Gordienko

2005
Vol 37 (4)
pp. 1015-1034
Author(s):
Saul D. Jacka
Zorana Lazic
Jon Warren

Let (Xt)t≥0 be a continuous-time irreducible Markov chain on a finite state space E, let v: E → ℝ\{0} be a map, and let (φt)t≥0 be the additive functional φt = ∫0t v(Xs) ds. We consider the case in which the process (φt)t≥0 is oscillating and that in which (φt)t≥0 has a negative drift. In each of these cases, we condition the process (Xt, φt)t≥0 on the event that (φt)t≥0 is nonnegative until time T and prove weak convergence of the conditioned process as T → ∞.
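The conditioning in this abstract can be explored numerically by brute-force rejection sampling: simulate the pair (Xt, φt) and keep only the paths on which φ stays nonnegative up to T. The sketch below uses a hypothetical two-state chain with v(0) = −1, v(1) = +1 and unit jump rates (all choices are illustrative, not taken from the paper, and rejection sampling is not the authors' method of proof).

```python
import random

def simulate_conditioned(q01=1.0, q10=1.0, v=(-1.0, 1.0), T=5.0,
                         n_paths=20000, seed=0):
    """Monte Carlo sketch: simulate the CTMC (X_t) on {0, 1} with jump rates
    q01 (out of 0) and q10 (out of 1), accumulate phi_t = integral of v(X_s) ds,
    and keep only the paths with phi_t >= 0 on [0, T].  Returns the empirical
    law of X_T among the surviving paths."""
    rng = random.Random(seed)
    counts = [0, 0]
    for _ in range(n_paths):
        x, phi, t, ok = 1, 0.0, 0.0, True  # start in the state where v > 0
        while t < T:
            rate = q01 if x == 0 else q10
            hold = min(rng.expovariate(rate), T - t)
            # phi moves linearly at slope v[x]; if the slope is negative,
            # check whether phi would drop below 0 during this sojourn
            if v[x] < 0 and phi + v[x] * hold < 0:
                ok = False
                break
            phi += v[x] * hold
            t += hold
            if t < T:
                x = 1 - x
        if ok:
            counts[x] += 1
    total = sum(counts)
    return [c / total for c in counts] if total else None
```

Increasing T shows the empirical law of X_T stabilising, consistent with the weak convergence the paper establishes.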


1987
Vol 24 (2)
pp. 347-354
Author(s):
Guy Fayolle
Rudolph Iasnogorodski

In this paper, we present some simple new criteria for the non-ergodicity of a stochastic process (Yn), n ≥ 0, in discrete time, when either the upward or the downward jumps are majorized by i.i.d. random variables. This situation is encountered in many practical settings where the (Yn) are functionals of some Markov chain with a countable state space. An application to the exponential back-off protocol is described.
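Criteria of this type rest on the one-step drift of the process outside a finite set. A minimal sketch of how such a drift can be estimated empirically is below; the chain (up by 1 with probability 0.6, down by 1 otherwise) and the threshold k0 are hypothetical, and the precise hypotheses of the paper's criteria are not reproduced here.

```python
import random

def drift_outside_finite_set(step, k0=10, n_samples=50000, seed=1):
    """Monte Carlo estimate of the one-step drift E[Y_{n+1} - Y_n | Y_n = k]
    at a few states k >= k0.  step(k, rng) returns one sampled jump from
    state k.  If the estimated drift stays >= some eps > 0 at all large k
    while the downward jumps are bounded, a Foster-type non-ergodicity
    criterion in the spirit of the paper applies."""
    rng = random.Random(seed)
    drifts = {}
    for k in (k0, 2 * k0, 4 * k0):
        drifts[k] = sum(step(k, rng) for _ in range(n_samples)) / n_samples
    return drifts

# hypothetical chain: up by 1 w.p. 0.6, down by 1 w.p. 0.4 (true drift +0.2)
est = drift_outside_finite_set(lambda k, rng: 1 if rng.random() < 0.6 else -1)
```

Here each estimated drift should be close to +0.2, signalling a chain that escapes to infinity rather than stabilising.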


1976
Vol 8 (4)
pp. 737-771
Author(s):
R. L. Tweedie

The aim of this paper is to present a comprehensive set of criteria for classifying as recurrent, transient, null or positive the sets visited by a general state space Markov chain. When the chain is irreducible in some sense, these then provide criteria for classifying the chain itself, provided the sets considered actually reflect the status of the chain as a whole. The first part of the paper is concerned with the connections between various definitions of recurrence, transience, nullity and positivity for sets and for irreducible chains; here we also elaborate the idea of status sets for irreducible chains. In the second part we give our criteria for classifying sets. When the state space is countable, our results for recurrence, transience and positivity reduce to the classical work of Foster (1953); for continuous-valued chains they extend results of Lamperti (1960), (1963); for general spaces the positivity and recurrence criteria strengthen those of Tweedie (1975b).
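In the countable case the results reduce to Foster's criterion, which is easy to check mechanically for a concrete chain: find a nonnegative function V whose expected one-step change is at most −ε outside a finite set. The sketch below checks this for a hypothetical reflected random walk with downward bias (the chain, V, and the tested range of states are all illustrative).

```python
def foster_drift(P_row, V, states):
    """Foster drift sum_j P(i, j) V(j) - V(i) for each i in states.
    Foster's (1953) criterion: if the drift is <= -eps < 0 for all i outside
    a finite set (and finite on that set), the chain is positive recurrent.
    P_row(i) returns the transition law out of i as a dict {j: prob}."""
    return {i: sum(p * V(j) for j, p in P_row(i).items()) - V(i)
            for i in states}

# hypothetical reflected walk: from i >= 1 go up w.p. 0.3, down w.p. 0.7
def P_row(i):
    return {1: 1.0} if i == 0 else {i + 1: 0.3, i - 1: 0.7}

# with V(k) = k the drift is 0.3 - 0.7 = -0.4 at every i >= 1
drifts = foster_drift(P_row, V=lambda k: k, states=range(1, 50))
```

Since the drift is uniformly −0.4 outside {0}, Foster's criterion classifies this walk as positive recurrent.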


1997
Vol 29 (1)
pp. 92-113
Author(s):
Frank Ball
Sue Davies

The gating mechanism of a single ion channel is usually modelled by a continuous-time Markov chain with a finite state space. The state space is partitioned into two classes, termed ‘open’ and ‘closed’, and it is possible to observe only which class the process is in. In many experiments channel openings occur in bursts. This can be modelled by partitioning the closed states further into ‘short-lived’ and ‘long-lived’ closed states, and defining a burst of openings to be a succession of open sojourns separated by closed sojourns that are entirely within the short-lived closed states. There is also evidence that bursts of openings are themselves grouped together into clusters. This clustering of bursts can be described by the ratio of the variance Var(N(t)) to the mean E[N(t)] of the number of bursts of openings commencing in (0, t]. In this paper two methods of determining Var(N(t))/E[N(t)] and limt→∞ Var(N(t))/E[N(t)] are developed, the first via an embedded Markov renewal process and the second via an augmented continuous-time Markov chain. The theory is illustrated by a numerical study of a molecular stochastic model of the nicotinic acetylcholine receptor. Extensions to semi-Markov models of ion channel gating and the incorporation of time interval omission are briefly discussed.
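The ratio Var(N(t))/E[N(t)] can also be estimated by plain simulation, which gives a useful cross-check on the analytical methods the paper develops. The sketch below uses a toy three-state gating model with entirely hypothetical rates (it is not the paper's embedded-renewal or augmented-chain method, and not a model of the acetylcholine receptor).

```python
import random

def burst_ratio(t_end=200.0, n_paths=2000, seed=2):
    """Monte Carlo sketch of Var(N(t))/E[N(t)] for a toy gating model:
    state 0 = open, 1 = short-lived closed, 2 = long-lived closed.
    A burst of openings commences each time the channel opens out of
    the long-lived closed state.  All rates below are hypothetical."""
    rates = {0: [(1, 2.0), (2, 0.5)],   # open -> short / long closed
             1: [(0, 5.0)],             # short-lived closed -> open (fast)
             2: [(0, 0.2)]}             # long-lived closed -> open (slow)
    rng = random.Random(seed)
    counts = []
    for _ in range(n_paths):
        x, t, n = 2, 0.0, 0
        while True:
            total = sum(r for _, r in rates[x])
            t += rng.expovariate(total)
            if t > t_end:
                break
            u, acc = rng.random() * total, 0.0
            for nxt, r in rates[x]:
                acc += r
                if u <= acc:
                    if x == 2 and nxt == 0:  # opening from long-lived closed
                        n += 1
                    x = nxt
                    break
        counts.append(n)
    mean = sum(counts) / n_paths
    var = sum((c - mean) ** 2 for c in counts) / (n_paths - 1)
    return var / mean
```

A ratio below 1 indicates more regular burst arrivals than a Poisson stream; a ratio above 1 indicates clustering of bursts.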


1985
Vol 22 (1)
pp. 138-147
Author(s):
Wojciech Szpankowski

Some sufficient conditions for non-ergodicity are given for a Markov chain with denumerable state space. These conditions generalize Foster's results, in that unbounded Lyapunov functions are considered. Our criteria directly extend the conditions obtained in Kaplan (1979), in the sense that a class of Lyapunov functions is studied. Applications are presented through some examples; in particular, sufficient conditions for non-ergodicity of a multidimensional Markov chain are given.
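The point of allowing unbounded Lyapunov functions is that a suitable convex V can reveal non-ergodicity even when the chain's own drift vanishes. A minimal illustration: the reflected symmetric walk is null recurrent (hence non-ergodic), and V(k) = k² has strictly positive drift at every state away from 0. The chain and V below are illustrative; the paper's full side conditions on the jumps are not reproduced.

```python
def lyapunov_drift(P_row, V, states):
    """Drift of a Lyapunov function V along a countable-state chain:
    E[V(X_{n+1}) | X_n = i] - V(i).  Non-ergodicity criteria of the
    Kaplan/Szpankowski type ask for nonnegative drift of an unbounded V
    outside a finite set, plus side conditions on the jumps."""
    return {i: sum(p * (V(j) - V(i)) for j, p in P_row(i).items())
            for i in states}

# reflected symmetric walk: null recurrent, so not ergodic
def P_row(i):
    return {1: 1.0} if i == 0 else {i + 1: 0.5, i - 1: 0.5}

# unbounded convex V(k) = k^2: drift is exactly +1 at every i >= 1
drifts = lyapunov_drift(P_row, lambda k: k * k, range(1, 200))
```

Note that the linear choice V(k) = k gives zero drift here and so is uninformative; the wider class of Lyapunov functions is what makes the criterion bite.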


1990
Vol 4 (1)
pp. 89-116
Author(s):
Ushio Sumita
Maria Rieders

A novel algorithm is developed which computes the ergodic probability vector for large Markov chains. Decomposing the state space into lumps, the algorithm generates a replacement process on each lump, where any exit from a lump is instantaneously replaced at some state in that lump. The replacement distributions are constructed recursively in such a way that, in the limit, the ergodic probability vector for a replacement process on one lump will be proportional to the ergodic probability vector of the original Markov chain restricted to that lump. Inverse matrices computed in the algorithm are of size (M – 1), where M is the number of lumps, thereby providing a substantial rank reduction. When a special structure is present, the procedure for generating the replacement distributions can be simplified. The relevance of the new algorithm to the aggregation-disaggregation algorithm of Takahashi [29] is also discussed.
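The fixed point the algorithm converges to can be exhibited directly on a small example via the censored chain (stochastic complement) of a lump: its stationary vector is exactly the original stationary vector restricted to the lump and renormalized. The sketch below computes this complement for a hypothetical 4-state chain; it is a related construction for illustration, not the paper's recursive replacement-distribution procedure.

```python
import numpy as np

def stochastic_complement(P, lump):
    """Censor the discrete-time chain P onto the states in `lump`: every
    excursion through the remaining states is collapsed into a single
    transition.  The result S = P11 + P12 (I - P22)^{-1} P21 is again a
    stochastic matrix, and its stationary vector is proportional to the
    stationary vector of P restricted to the lump -- the limit that the
    replacement-process construction approaches."""
    idx = list(lump)
    rest = [i for i in range(P.shape[0]) if i not in lump]
    P11 = P[np.ix_(idx, idx)]
    P12 = P[np.ix_(idx, rest)]
    P21 = P[np.ix_(rest, idx)]
    P22 = P[np.ix_(rest, rest)]
    # solve (I - P22) X = P21 instead of forming the inverse explicitly
    return P11 + P12 @ np.linalg.solve(np.eye(len(rest)) - P22, P21)
```

On a chain split into M lumps, one such solve per lump involves only the complement of that lump, which is the source of the rank reduction the abstract describes.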

