Sojourn times in finite Markov processes

1989 ◽  
Vol 26 (4) ◽  
pp. 744-756 ◽  
Author(s):  
Gerardo Rubino ◽  
Bruno Sericola

Sojourn times of Markov processes in subsets of the finite state space are considered. We give a closed-form expression for the distribution of the nth sojourn time in a given subset of states. The asymptotic behaviour of this distribution as time goes to infinity is analyzed, in both the discrete-time and continuous-time cases. We then consider the pseudo-aggregated Markov process canonically constructed from the original one by collapsing the states of each subset of a given partition. The relation between the limits of the moments of the sojourn time distributions in the original Markov process and the moments of the corresponding holding times of the pseudo-aggregated one is also studied.
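
For a discrete-time chain, the sojourn duration in a subset B is discrete phase-type: its tail is alpha · P_BB^k · 1, where P_BB restricts the transition matrix to B and alpha is the distribution of the entry state. A minimal sketch, with an assumed 4-state chain and an assumed uniform entry distribution (both hypothetical, for illustration):

```python
import numpy as np

# Hypothetical 4-state discrete-time chain; B = {0, 1} is the subset of interest.
P = np.array([
    [0.5, 0.3, 0.1, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.1, 0.6, 0.2],
    [0.2, 0.1, 0.2, 0.5],
])
B = [0, 1]
P_BB = P[np.ix_(B, B)]          # transitions that stay inside B
alpha = np.array([0.5, 0.5])    # assumed entry distribution into B

def sojourn_tail(k):
    """P(sojourn length in B > k): discrete phase-type tail alpha * P_BB^k * 1."""
    return float(alpha @ np.linalg.matrix_power(P_BB, k) @ np.ones(len(B)))

# pmf[k] = P(sojourn length = k + 1)
pmf = [sojourn_tail(k) - sojourn_tail(k + 1) for k in range(20)]
```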


1967 ◽  
Vol 4 (1) ◽  
pp. 192-196 ◽  
Author(s):  
J. N. Darroch ◽  
E. Seneta

In a recent paper, the authors have discussed the concept of quasi-stationary distributions for absorbing Markov chains having a finite state space, with the further restriction of discrete time. The purpose of the present note is to summarize the analogous results when the time parameter is continuous.
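
In the finite discrete-time case, the quasi-stationary distribution is the normalized left Perron eigenvector of the substochastic matrix of transitions among the transient states. A minimal numerical sketch with an assumed two-state substochastic matrix:

```python
import numpy as np

# Substochastic matrix over the transient states of a hypothetical absorbing
# chain (row deficits are one-step absorption probabilities).
Q = np.array([
    [0.6, 0.3],
    [0.2, 0.7],
])

# Quasi-stationary distribution: normalized left Perron eigenvector of Q.
eigvals, left_vecs = np.linalg.eig(Q.T)
i = int(np.argmax(eigvals.real))
rho = eigvals.real[i]            # decay rate of the survival probability
qsd = left_vecs[:, i].real
qsd = qsd / qsd.sum()            # Perron theory gives a one-signed eigenvector
```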


1983 ◽  
Vol 20 (1) ◽  
pp. 185-190 ◽  
Author(s):  
Mark Scott ◽  
Dean L. Isaacson

By assuming proportionality of the intensity functions at each time point for a continuous-time non-homogeneous Markov process, strong ergodicity of the process is determined through strong ergodicity of a related discrete-time Markov process. For processes having proportional intensities, strong ergodicity implies that the limiting matrix L satisfies L · P(s, t) = L, where P(s, t) is the matrix of transition functions.
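
For a homogeneous related chain, the property L · P = L amounts to every row of L being the stationary distribution π, since π P = π. A minimal sketch with an assumed 3-state transition matrix:

```python
import numpy as np

# Hypothetical transition matrix of the related discrete-time chain.
P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
])

# Stationary distribution pi: left eigenvector of P for eigenvalue 1.
eigvals, vecs = np.linalg.eig(P.T)
pi = vecs[:, np.argmax(eigvals.real)].real
pi = pi / pi.sum()

# The limiting matrix L stacks pi in every row, so L P = L.
L = np.tile(pi, (len(P), 1))
```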


1993 ◽  
Vol 7 (4) ◽  
pp. 441-464 ◽  
Author(s):  
V. Anantharam ◽  
M. Benchekroun

Consider a large number of interacting queues with symmetric interactions. In the limit as the number of queues grows, the interactions between any fixed finite subcollection become negligible, and the overall effect of the interactions can be replaced by an empirical rate. The evolution of each queue is then given by a time-inhomogeneous Markov process. This may be regarded as a technique for writing dynamic Erlang fixed-point equations. We explore this as a tool for approximating sojourn time distributions.
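
The mean-field idea can be illustrated with a toy fixed point: suppose (purely as an assumption for illustration) each queue's arrival rate is damped by the empirical fraction x of busy queues, lambda(x) = base_rate · (1 − 0.5 x), with service rate mu. The limiting utilization then solves x = lambda(x) / mu:

```python
# Toy mean-field fixed point for symmetric interacting queues (assumed
# interaction, for illustration only).
base_rate, mu = 0.8, 1.0

x = 0.5                                  # initial guess for P(queue busy)
for _ in range(200):                     # contraction: slope 0.4 < 1
    x = base_rate * (1.0 - 0.5 * x) / mu
```

The iteration converges geometrically because the map has slope 0.4 in absolute value, below 1.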


2001 ◽  
Vol 38 (1) ◽  
pp. 195-208 ◽  
Author(s):  
Sophie Bloch-Mercier

We consider a repairable system with a finite state space which evolves in time according to a Markov process as long as it is working. We assume that this system is getting worse and worse while running: if the up-states are ranked according to their degree of increasing degradation, this is expressed by the fact that the Markov process is assumed to be monotone with respect to the reversed hazard rate and to have an upper triangular generator. We study this kind of process and apply the results to derive some properties of the stationary availability of the system. Namely, we show that, if the duration of the repair is independent of its completeness degree, then the more complete the repair, the higher the stationary availability, where the completeness degree of the repair is measured with the reversed hazard rate ordering.
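
A minimal numerical sketch of such a system, with assumed rates: three up-states ordered by degradation (upper triangular generator among them), one failure state, and repair back to the best state. The stationary availability is the stationary mass on the up-states:

```python
import numpy as np

# Hypothetical repairable system: up-states 0 (new) .. 2 (most degraded) with
# an upper triangular generator (degradation only worsens), failure state 3,
# and repair back to state 0 at rate r.  All rates are assumed.
lam, r = 1.0, 2.0
Q = np.array([
    [-lam,  lam,  0.0,  0.0],
    [ 0.0, -lam,  lam,  0.0],
    [ 0.0,  0.0, -lam,  lam],
    [   r,  0.0,  0.0,   -r],
])

# Stationary distribution: pi Q = 0 with sum(pi) = 1, solved by least squares.
A = np.vstack([Q.T, np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

availability = pi[:3].sum()      # stationary probability of an up-state
```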


1998 ◽  
Vol 35 (2) ◽  
pp. 313-324 ◽  
Author(s):  
Bret Larget

A deterministic function of a Markov process is called an aggregated Markov process. We give necessary and sufficient conditions for the equivalence of continuous-time aggregated Markov processes. For both discrete- and continuous-time, we show that any aggregated Markov process which satisfies mild regularity conditions can be directly converted to a canonical representation which is unique for each class of equivalent models, and furthermore, is a minimal parameterization of all that can be identified about the underlying Markov process. Hidden Markov models on finite state spaces may be framed as aggregated Markov processes by expanding the state space and thus also have canonical representations.
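
Concretely, an aggregated Markov process is the label sequence obtained by pushing a Markov chain through a deterministic map. A minimal simulation sketch, with an assumed 4-state chain and an assumed two-label lumping map:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-state chain and a deterministic lumping map f onto two labels;
# the observed label sequence is the aggregated Markov process.
P = np.array([
    [0.5, 0.3, 0.1, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.1, 0.6, 0.2],
    [0.2, 0.1, 0.2, 0.5],
])
f = {0: "A", 1: "A", 2: "B", 3: "B"}

def simulate_aggregated(n, state=0):
    """Simulate n steps of the chain and return only the aggregated labels."""
    labels = []
    for _ in range(n):
        labels.append(f[state])
        state = rng.choice(4, p=P[state])
    return labels

seq = simulate_aggregated(50)
```

Different underlying chains can produce the same law for `seq`; the canonical representation of the paper is one distinguished member of each such equivalence class.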


Author(s):  
M. Saburov

A linear Markov chain is a discrete-time stochastic process whose transitions depend only on the current state of the process. A nonlinear Markov chain is a discrete-time stochastic process whose transitions may depend on both the current state and the current distribution of the process. Such processes arise naturally in the study of the limit behavior of a large number of weakly interacting Markov processes. Nonlinear Markov processes were introduced by McKean and have been extensively studied in the context of nonlinear Chapman-Kolmogorov equations as well as nonlinear Fokker-Planck equations. A nonlinear Markov chain over a finite state space can be identified with a continuous mapping (a nonlinear Markov operator) defined on the set of all probability distributions of the finite state space (a simplex) and with a family of transition matrices depending on the occupation probability distributions of the states. In particular, a linear Markov operator is a linear operator associated with a square stochastic matrix. It is well known that a linear Markov operator is a surjection of the simplex if and only if it is a bijection. The analogous problem was open for a nonlinear Markov operator associated with a stochastic hyper-matrix; we solve it in this paper. Namely, we show that a nonlinear Markov operator associated with a stochastic hyper-matrix is a surjection of the simplex if and only if it is a permutation of the Lotka-Volterra operator.
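
The simplest nonlinear Markov operator of this kind is quadratic: (Vx)_k = Σ_{i,j} p[i,j,k] x_i x_j, where each p[i,j,·] is a probability distribution, symmetric in i and j. A minimal sketch with a randomly generated (hence purely illustrative) stochastic hyper-matrix, checking that V maps the simplex into itself:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# Hypothetical stochastic hyper-matrix p[i, j, :]: a probability distribution
# over k for every pair (i, j), symmetric in i and j.
p = rng.random((n, n, n))
p = (p + p.transpose(1, 0, 2)) / 2.0     # symmetrize in i, j
p = p / p.sum(axis=2, keepdims=True)     # each p[i, j, :] sums to 1

def V(x):
    """Quadratic stochastic (nonlinear Markov) operator: (Vx)_k = sum_ij p[i,j,k] x_i x_j."""
    return np.einsum("i,j,ijk->k", x, x, p)

x = np.array([0.2, 0.5, 0.3])
y = V(x)                                 # stays in the simplex
```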

