Entropic Equilibria: Selection of Stationary Extrema in Finite Populations

Entropy, 2018, Vol. 20(9), p. 631.
Author(s): Marc Harper, Dashiell Fryer

We propose the entropy of random Markov trajectories originating and terminating at the same state as a measure of the stability of a state of a Markov process. These entropies can be computed in terms of the entropy rates and stationary distributions of Markov processes. We apply this definition of stability to local maxima and minima of the stationary distribution of the Moran process with mutation and show that variations in population size, mutation rate, and strength of selection all affect the stability of the stationary extrema.
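As a concrete illustration of the computation the abstract describes, the sketch below builds a small chain, finds its stationary distribution and entropy rate, and forms the return-trajectory entropies H/π_i via the Ekroot–Cover identity. The 3-state transition matrix is an assumption for illustration, not taken from the paper.

```python
import numpy as np

# Illustrative 3-state transition matrix (an assumption, not from the
# paper); all entries positive so log(P) is well defined.
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.1, 0.8]])

# Stationary distribution: the left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

# Entropy rate of the chain: H = -sum_i pi_i sum_j P_ij log P_ij.
H = -np.sum(pi[:, None] * P * np.log(P))

# Ekroot-Cover: the entropy of the random trajectory from state i back
# to itself equals H / pi_i, so high-stationary-probability (more
# "stable") states have lower return-trajectory entropy.
trajectory_entropy = H / pi
```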

2004, Vol. 41(3), pp. 746-757.
Author(s): Guy Latouche, Tetsuya Takine

We consider a fluid queue controlled by a semi-Markov process and we apply the Markov-renewal approach developed earlier in the context of quasi-birth-and-death processes and of Markovian fluid queues. We analyze two subfamilies of semi-Markov processes. In the first family, we assume that the intervals during which the input rate is negative have an exponential distribution. In the second family, we take the complementary case and assume that the intervals during which the input rate is positive have an exponential distribution. We thoroughly characterize the structure of the stationary distribution in both cases.
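A minimal simulation sketch of the first subfamily (exponentially distributed intervals of negative net input); all rates and distributions are assumptions for illustration, and the sketch only estimates the buffer-content behaviour empirically at phase-change epochs rather than reproducing the authors' Markov-renewal analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two alternating phases: "up" with general (Gamma) duration and net
# input +1, "down" with Exponential duration and net input -2, so the
# mean drift is negative and the buffer is stable. The buffer content
# reflects at 0.
RATE = {0: +1.0, 1: -2.0}
level, phase, samples = 0.0, 0, []
for _ in range(100_000):
    dur = rng.gamma(2.0, 0.5) if phase == 0 else rng.exponential(1.0)
    level = max(0.0, level + RATE[phase] * dur)  # reflect at empty
    samples.append(level)                        # sample at phase ends
    phase = 1 - phase
```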


1986, Vol. 23(1), pp. 215-220.
Author(s): Moshe Pollak, David Siegmund

It is shown that if a stochastically monotone Markov process on [0, ∞) with stationary distribution H has its state space truncated by making all states in [B, ∞) absorbing, then the quasi-stationary distribution of the new process converges to H as B → ∞.
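The theorem can be checked numerically on a simple stochastically monotone example; the reflected random walk below is an assumption chosen for illustration. Truncating at level B and computing the quasi-stationary distribution as the normalized left Perron eigenvector of the substochastic matrix already reproduces the geometric stationary law H closely at moderate B.

```python
import numpy as np

def qsd_truncated(p, B):
    """QSD of a reflected random walk on {0, 1, ...} (up-step prob p,
    down-step prob 1-p) after making states >= B absorbing: the
    normalized left Perron eigenvector on {0, ..., B-1}."""
    q = 1.0 - p
    Q = np.zeros((B, B))
    Q[0, 0] = q                    # reflection at 0
    Q[0, 1] = p
    for k in range(1, B):
        Q[k, k - 1] = q
        if k + 1 < B:
            Q[k, k + 1] = p        # the up-step from B-1 is absorbed
    vals, vecs = np.linalg.eig(Q.T)
    nu = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return nu / nu.sum()

# For p < 1/2 the untruncated chain's stationary distribution H is
# geometric with ratio r = p / (1 - p).
p = 0.3
r = p / (1 - p)
H = (1 - r) * r ** np.arange(30)
qsd = qsd_truncated(p, 30)         # already very close to H at B = 30
```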


1989, Vol. 3(3), pp. 405-415.
Author(s): Panagiotis Konstantopoulos, Jean Walrand

This paper is concerned with a certain property of the stationary distribution of a generalized semi-Markov process (GSMP) known as insensitivity. It is well-known that the so-called Matthes' conditions form a necessary and sufficient algebraic criterion for insensitivity. Most proofs of these conditions are basically algebraic. By interpreting a GSMP as a simple queueing network, we are able to show that Matthes' conditions are equivalent to the quasi-reversibility of the network, thus obtaining another simple proof of the sufficiency of these conditions. Furthermore, we apply our method to find a simple criterion for the insensitivity of GSMP's with generalized routing (in a sense that is introduced in the paper).


1995, Vol. 27(1), pp. 120-145.
Author(s): Anthony G. Pakes

Under consideration is a continuous-time Markov process with non-negative integer state space and a single absorbing state 0. Let T be the hitting time of zero and suppose P_i(T < ∞) ≡ 1 and (*) lim_{i→∞} P_i(T > t) = 1 for all t > 0. Most known cases satisfy (*). The Markov process has a quasi-stationary distribution iff E_i(e^{εT}) < ∞ for some ε > 0. The published proof of this fact makes crucial use of (*). By means of examples it is shown that (*) can be violated in quite drastic ways without destroying the existence of a quasi-stationary distribution.


1989, Vol. 21(4), pp. 842-860.
Author(s): John A. Gubner, B. Gopinath, S. R. S. Varadhan

We prove a theorem which can be used to show that the expectation of a non-negative function of the state of a time-homogeneous Markov process is uniformly bounded in time. This is reminiscent of the classical theory of non-negative supermartingales, except that our analog of the supermartingale inequality need not hold almost surely. Consequently, the theorem is suitable for establishing the stability of systems that evolve in a stabilizing mode in most states, though from certain states they may jump to a less stable state. We use this theorem to show that ‘joining the shortest queue’ can bound the expected sum of the squares of the differences between all pairs among N queues, even under arbitrarily heavy traffic.
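A rough simulation sketch of the queueing claim; the slotted dynamics and all parameters are assumptions for illustration, not the authors' model. Total load is set above capacity, so the queue lengths themselves grow without bound, yet joining the shortest queue keeps the pairwise differences, and hence the sum of their squares, small on average.

```python
import random

random.seed(0)

# Each slot: one arrival w.p. LAMBDA joins the shortest queue; each
# busy server completes w.p. MU. Load 0.9 exceeds capacity 4*0.2 = 0.8.
N, LAMBDA, MU, SLOTS = 4, 0.9, 0.2, 20_000

q = [0] * N
sq_diff_sum = 0.0
for _ in range(SLOTS):
    if random.random() < LAMBDA:
        q[q.index(min(q))] += 1          # join the shortest queue
    for i in range(N):
        if q[i] > 0 and random.random() < MU:
            q[i] -= 1
    sq_diff_sum += sum((q[i] - q[j]) ** 2
                       for i in range(N) for j in range(i + 1, N))

# Time-average of the sum of squared pairwise differences: stays small
# even though the total queue length drifts upward.
avg_sq_diff = sq_diff_sum / SLOTS
```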


2001, Vol. 38(3), pp. 621-634.
Author(s): David Aldous, Masakiyo Miyazawa, Tomasz Rolski

We study a service system in which, in each service period, the server performs the current set B of tasks as a batch, taking time s(B), where the function s(·) is subadditive. A natural definition of ‘traffic intensity under congestion’ in this setting is ρ := lim_{t→∞} t^{-1} E s(all tasks arriving during time [0, t]). We show that ρ < 1 and a finite mean of individual service times are necessary and sufficient to imply stability of the system. A key observation is that the numbers of arrivals during successive service periods form a Markov chain {A_n}, enabling us to apply classical regenerative techniques and to express the stationary distribution of the process in terms of the stationary distribution of {A_n}.
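A numerical sketch of the definition of ρ; the particular subadditive s used here, a per-batch setup cost plus per-task service times, is an assumption for illustration. For this s the setup cost washes out of the time average, so ρ reduces to the arrival rate times the mean individual service time.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed parameters: setup cost per batch, Poisson arrival rate,
# mean individual (exponential) service time, and horizon T.
SETUP, LAM, MEAN_V, T = 2.0, 0.6, 1.2, 50_000.0

def s(tasks):
    # Subadditive: the setup cost is paid once per batch, so
    # s(A ∪ B) <= s(A) + s(B).
    return (SETUP if len(tasks) else 0.0) + tasks.sum()

n = rng.poisson(LAM * T)              # tasks arriving during [0, T]
tasks = rng.exponential(MEAN_V, n)    # individual service times
rho = s(tasks) / T                    # ≈ lim t^{-1} E s(...)
# For this s, rho converges to LAM * MEAN_V = 0.72 < 1 as T grows.
```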


