Estimation of the stationary distribution of semi-Markov processes with Borel state space

2006 ◽  
Vol 76 (14) ◽  
pp. 1536-1542 ◽  
Author(s):  
N. Limnios

2004 ◽  
Vol 41 (3) ◽  
pp. 746-757 ◽  
Author(s):  
Guy Latouche ◽  
Tetsuya Takine

We consider a fluid queue controlled by a semi-Markov process and we apply the Markov-renewal approach developed earlier in the context of quasi-birth-and-death processes and of Markovian fluid queues. We analyze two subfamilies of semi-Markov processes. In the first family, we assume that the intervals during which the input rate is negative have an exponential distribution. In the second family, we take the complementary case and assume that the intervals during which the input rate is positive have an exponential distribution. We thoroughly characterize the structure of the stationary distribution in both cases.
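
As an illustrative aside (not taken from the paper), the first family — negative-rate intervals with an exponential distribution — can be sketched with a small Monte-Carlo simulation. All parameters here (a deterministic "up" phase of length 1.0 with net rate +1, an Exp(1) "down" phase with net rate −2) are hypothetical:

```python
import random

def simulate_fluid_queue(horizon=100000.0, seed=42):
    """Monte-Carlo sketch of a fluid buffer modulated by a two-phase
    semi-Markov process. Hypothetical parameters: an 'up' phase with net
    input rate +1 and deterministic duration 1.0, and a 'down' phase with
    net rate -2 and Exp(1) duration, so the negative-rate intervals are
    exponential as in the paper's first family. Returns the time-average
    fluid level and the fraction of time the buffer is empty."""
    rng = random.Random(seed)
    level, t = 0.0, 0.0
    up = True
    area = 0.0        # integral of the fluid level over time
    empty_time = 0.0  # total time spent at level 0
    while t < horizon:
        if up:
            dur, rate = 1.0, 1.0
        else:
            dur, rate = rng.expovariate(1.0), -2.0
        if rate >= 0:
            area += level * dur + 0.5 * rate * dur * dur
            level += rate * dur
        else:
            drain = level / -rate  # time needed to empty the buffer
            if drain >= dur:
                area += level * dur + 0.5 * rate * dur * dur
                level += rate * dur
            else:
                area += 0.5 * level * drain  # triangle down to zero
                empty_time += dur - drain
                level = 0.0
        t += dur
        up = not up
    return area / t, empty_time / t

mean_level, frac_empty = simulate_fluid_queue()
```

The simulation only estimates the stationary distribution empirically; the paper's contribution is the exact Markov-renewal characterization of its structure.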


1991 ◽  
Vol 5 (4) ◽  
pp. 477-498 ◽  
Author(s):  
Peter J. Haas ◽  
Gerald S. Shedler

Generalized semi-Markov processes and stochastic Petri nets provide building blocks for specification of discrete event system simulations on a finite or countable state space. The two formal systems differ, however, in the event scheduling (clock-setting) mechanism, the state transition mechanism, and the form of the state space. We have shown previously that stochastic Petri nets have at least the modeling power of generalized semi-Markov processes. In this paper we show that stochastic Petri nets and generalized semi-Markov processes, in fact, have the same modeling power. Combining this result with known results for generalized semi-Markov processes, we also obtain conditions for time-average convergence and convergence in distribution along with a central limit theorem for the marking process of a stochastic Petri net.
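
As a hypothetical illustration of the marking process of a stochastic Petri net (not the construction used in the paper), consider a net with a single place and two transitions with exponential firing times; its marking process coincides with the M/M/1 queue-length process:

```python
import random

def simulate_spn(lam=1.0, mu=2.0, horizon=200000.0, seed=7):
    """Sketch of the marking process of a minimal (hypothetical)
    stochastic Petri net: a single place holds the current queue length,
    an always-enabled 'arrival' transition fires at rate lam, and a
    'service' transition, enabled only when the place is marked, fires
    at rate mu. With exponential firing times the marking process is the
    M/M/1 queue-length process. Returns the time-average marking."""
    rng = random.Random(seed)
    marking, t, area = 0, 0.0, 0.0
    while t < horizon:
        total_rate = lam + (mu if marking > 0 else 0.0)
        dwell = rng.expovariate(total_rate)
        area += marking * dwell
        t += dwell
        # race of the currently enabled exponential clocks
        if marking > 0 and rng.random() < mu / total_rate:
            marking -= 1   # 'service' fires, removing a token
        else:
            marking += 1   # 'arrival' fires, adding a token
    return area / t

avg_marking = simulate_spn()  # theory: rho / (1 - rho) = 1.0 for lam=1, mu=2
```

The time-average convergence guaranteed by the paper's results is what justifies reading such a long-run average as an estimate of the stationary mean marking.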


1970 ◽  
Vol 7 (2) ◽  
pp. 388-399 ◽  
Author(s):  
C. K. Cheong

Our main concern in this paper is the convergence, as t → ∞, of certain quantities indexed by i, j ∈ E, where Pij(t) is the transition probability of a semi-Markov process whose state space E is irreducible but not closed (i.e., escape from E is possible), and rj is the probability of eventual escape from E conditional on the initial state being j. The theorems proved here generalize some results of Seneta and Vere-Jones ([8] and [11]) for Markov processes.
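
The quasi-stationary behaviour behind such ratio limits can be illustrated numerically (a sketch with a hypothetical 3-state kernel, not the paper's semi-Markov setting): conditioning a substochastic chain on survival and iterating drives the state distribution to the left Perron vector, in the spirit of the Seneta–Vere-Jones results the paper generalizes.

```python
def quasi_stationary(Q, iters=500):
    """Conditioned power iteration: for a substochastic matrix Q (a
    chain that can escape its state space E), the distribution of X_n
    given no escape by time n converges to the quasi-stationary (left
    Perron) distribution. Pure-Python sketch for a small matrix."""
    n = len(Q)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(v[i] * Q[i][j] for i in range(n)) for j in range(n)]
        s = sum(w)  # probability of surviving one more step
        v = [x / s for x in w]
    return v

# Hypothetical substochastic kernel: every row sums to less than 1,
# the missing mass being the per-step escape probability.
Q = [[0.5, 0.3, 0.1],
     [0.2, 0.4, 0.3],
     [0.1, 0.2, 0.5]]
qs = quasi_stationary(Q)
```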


2012 ◽  
Vol 26 (4) ◽  
pp. 483-508 ◽  
Author(s):  
Michael N. Katehakis ◽  
Laurens C. Smit

A class of Markov chains we call successively lumpable is specified for which it is shown that the stationary probabilities can be obtained by successively computing the stationary probabilities of a propitiously constructed sequence of Markov chains. Each of the latter chains has a (typically much) smaller state space and this yields significant computational improvements. We discuss how the results for discrete time Markov chains extend to semi-Markov processes and continuous time Markov processes. Finally, we study applications of successively lumpable Markov chains to classical reliability and queueing models.
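
The underlying idea can be glimpsed with ordinary strong lumpability (a simpler relative of the successive lumping developed in the paper): when every state in a block has the same total transition probability into each block, the block-level chain is Markov and its stationary probabilities are the aggregated ones. The 4-state chain below is a hypothetical example:

```python
def stationary(P, iters=3000):
    """Stationary distribution of a finite stochastic matrix by power
    iteration (sketch; assumes irreducibility and aperiodicity)."""
    n = len(P)
    v = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]
    return v

# Hypothetical 4-state chain, strongly lumpable with respect to the
# partition {0, 1} / {2, 3}: within each block, every state has the
# same total transition probability into each block.
P = [[0.30, 0.30, 0.20, 0.20],
     [0.40, 0.20, 0.10, 0.30],
     [0.25, 0.25, 0.20, 0.30],
     [0.10, 0.40, 0.50, 0.00]]
lumped = [[0.6, 0.4],   # block-level chain on the two blocks
          [0.5, 0.5]]

pi = stationary(P)
pi_lumped = stationary(lumped)
# the stationary mass of a block equals the lumped chain's probability
```

Solving the 2-state lumped chain costs far less than solving the full chain, which is the computational point the paper pursues at scale.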


1986 ◽  
Vol 23 (1) ◽  
pp. 215-220 ◽  
Author(s):  
Moshe Pollak ◽  
David Siegmund

It is shown that if a stochastically monotone Markov process on [0,∞) with stationary distribution H has its state space truncated by making all states in [B,∞) absorbing, then the quasi-stationary distribution of the new process converges to H as B → ∞.
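
A discrete-time analogue (hypothetical parameters, sketched only to illustrate the convergence) is a reflected random walk, which is stochastically monotone: truncating it at B and computing the quasi-stationary distribution of the absorbed chain recovers the geometric stationary law as B grows.

```python
def truncated_qsd(B, p=0.3, iters=6000):
    """Quasi-stationary distribution of a reflected random walk on
    {0, 1, 2, ...} (up with probability p, down with probability 1 - p,
    holding at 0) after all states >= B are made absorbing, computed by
    conditioned power iteration on the substochastic kernel."""
    q = 1.0 - p
    v = [1.0 / B] * B
    for _ in range(iters):
        w = [0.0] * B
        for i in range(B):
            if i == 0:
                w[0] += v[0] * q          # holding at the boundary
            else:
                w[i - 1] += v[i] * q
            if i + 1 < B:
                w[i + 1] += v[i] * p      # mass stepping to B is absorbed
        s = sum(w)
        v = [x / s for x in w]
    return v

def tv_to_stationary(v, p=0.3):
    """Total-variation distance to the untruncated stationary law
    H(i) = (1 - r) * r**i with r = p / (1 - p)."""
    r = p / (1.0 - p)
    d = sum(abs(v[i] - (1.0 - r) * r ** i) for i in range(len(v)))
    return 0.5 * (d + r ** len(v))  # r**B is H's mass above the cutoff

d8 = tv_to_stationary(truncated_qsd(8))
d20 = tv_to_stationary(truncated_qsd(20))
# the distance to H shrinks as the truncation point B increases
```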


1989 ◽  
Vol 3 (3) ◽  
pp. 405-415 ◽  
Author(s):  
Panagiotis Konstantopoulos ◽  
Jean Walrand

This paper is concerned with a certain property of the stationary distribution of a generalized semi-Markov process (GSMP) known as insensitivity. It is well-known that the so-called Matthes' conditions form a necessary and sufficient algebraic criterion for insensitivity. Most proofs of these conditions are basically algebraic. By interpreting a GSMP as a simple queueing network, we are able to show that Matthes' conditions are equivalent to the quasi-reversibility of the network, thus obtaining another simple proof of the sufficiency of these conditions. Furthermore, we apply our method to find a simple criterion for the insensitivity of GSMP's with generalized routing (in a sense that is introduced in the paper).
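
Matthes' conditions are an algebraic criterion; a concrete classical instance of the insensitivity phenomenon itself — offered purely as an illustration, not as the paper's GSMP construction — is the M/G/∞ queue, whose stationary distribution depends on the service-time law only through its mean. The sketch below estimates the empty-system probability under two service laws with the same mean:

```python
import random

def mg_inf_empty_fraction(sample_service, lam=1.0, horizon=100000.0, seed=3):
    """Fraction of time an M/G/infinity system is empty: Poisson(lam)
    arrivals on [0, horizon], each occupying a server for an independent
    service time drawn by sample_service; sweep the sorted event list."""
    rng = random.Random(seed)
    events = []
    t = 0.0
    while True:
        t += rng.expovariate(lam)
        if t >= horizon:
            break
        events.append((t, +1))                        # arrival
        events.append((t + sample_service(rng), -1))  # departure
    events.sort()
    n, last, empty = 0, 0.0, 0.0
    for when, delta in events:
        when = min(when, horizon)
        if n == 0:
            empty += when - last
        last = when
        n += delta
    if n == 0:
        empty += horizon - last
    return empty / horizon

# two service-time laws with the same mean E[S] = 2
p_exp = mg_inf_empty_fraction(lambda rng: rng.expovariate(0.5))
p_det = mg_inf_empty_fraction(lambda rng: 2.0)
# insensitivity: both estimates sit near exp(-lam * E[S]) = exp(-2)
```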

