Bounding functions of Markov processes and the shortest queue problem

1989 ◽  
Vol 21 (4) ◽  
pp. 842-860
Author(s):  
John A. Gubner ◽  
B. Gopinath ◽  
S. R. S. Varadhan

We prove a theorem which can be used to show that the expectation of a non-negative function of the state of a time-homogeneous Markov process is uniformly bounded in time. This is reminiscent of the classical theory of non-negative supermartingales, except that our analog of the supermartingale inequality need not hold almost surely. Consequently, the theorem is suitable for establishing the stability of systems that evolve in a stabilizing mode in most states, though from certain states they may jump to a less stable state. We use this theorem to show that ‘joining the shortest queue’ can bound the expected sum of the squares of the differences between all pairs among N queues, even under arbitrarily heavy traffic.
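A minimal discrete-time sketch of this behavior (the model and all parameters are illustrative assumptions, not taken from the paper): even when arrivals outpace total service capacity, so that the queue lengths themselves grow without bound, joining the shortest queue keeps the pairwise differences, and hence the sum of their squares, small.

```python
import random

def simulate_jsq(n_queues=4, arrival_prob=1.0, service_prob=0.2,
                 steps=20000, seed=0):
    """Each slot, one arrival (w.p. arrival_prob) joins the shortest
    queue (ties broken by lowest index); each nonempty queue completes
    a service w.p. service_prob. Here total load exceeds capacity
    (1.0 > 4 * 0.2), so the system is overloaded on purpose."""
    rng = random.Random(seed)
    q = [0] * n_queues
    max_spread = 0
    for _ in range(steps):
        if rng.random() < arrival_prob:
            q[q.index(min(q))] += 1
        for i in range(n_queues):
            if q[i] > 0 and rng.random() < service_prob:
                q[i] -= 1
        # sum of squared differences over all pairs of queues
        spread = sum((q[i] - q[j]) ** 2
                     for i in range(n_queues) for j in range(i + 1, n_queues))
        max_spread = max(max_spread, spread)
    return q, max_spread

q, max_spread = simulate_jsq()
```

The queue lengths drift upward (net drift about +0.2 per slot), yet `max_spread` stays small throughout, which is the bounded-in-time expectation the theorem delivers.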



1999 ◽  
Vol 36 (1) ◽  
pp. 48-59 ◽  
Author(s):  
George V. Moustakides

Let ξ0, ξ1, ξ2, … be a homogeneous Markov process and let Sn denote the partial sum Sn = θ(ξ1) + … + θ(ξn), where θ(ξ) is a scalar nonlinearity. If N is a stopping time with 𝔼N < ∞ and the Markov process satisfies certain ergodicity properties, we then show that 𝔼SN = [limn→∞ 𝔼θ(ξn)]𝔼N + 𝔼ω(ξ0) − 𝔼ω(ξN). The function ω(ξ) is a well-defined scalar nonlinearity directly related to θ(ξ) through a Poisson integral equation, with the characteristic that ω(ξ) becomes zero in the i.i.d. case. Consequently, our result constitutes an extension of Wald's first lemma to the case of Markov processes. We also show that, when 𝔼N → ∞, the correction term is negligible compared to 𝔼N, in the sense that 𝔼ω(ξ0) − 𝔼ω(ξN) = o(𝔼N).
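The identity can be checked numerically on a small finite chain. In the sketch below, the chain, θ, and the initial distribution are invented for illustration; ω is obtained from the Poisson equation (I − P)ω = Pθ − μ·1, one concrete form of the Poisson integral equation consistent with the statement above, and the identity is verified for the simplest stopping time, a deterministic N = n.

```python
import numpy as np

# Invented 3-state chain, nonlinearity, and initial distribution.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.3, 0.3, 0.4]])
theta = np.array([1.0, -2.0, 4.0])
p0 = np.array([1.0, 0.0, 0.0])

# Stationary distribution pi: solve pi P = pi with sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]
mu = pi @ theta                        # lim E theta(xi_n)

# Poisson equation (I - P) omega = P theta - mu; omega is unique up to
# an additive constant, which cancels in E omega(xi_0) - E omega(xi_N).
omega = np.linalg.lstsq(np.eye(3) - P, P @ theta - mu, rcond=None)[0]

# Exact check of E S_n = mu*n + E omega(xi_0) - E omega(xi_n) via matrix powers.
n = 25
dist, ES_n = p0.copy(), 0.0
for _ in range(n):
    dist = dist @ P                    # distribution of xi_k
    ES_n += dist @ theta
rhs = mu * n + p0 @ omega - dist @ omega
print(ES_n, rhs)                       # the two sides agree to rounding error
```

For a deterministic stopping time the two sides match to machine precision; the paper's contribution is that the same identity holds for general stopping times with 𝔼N < ∞ under ergodicity conditions.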


Entropy ◽  
2018 ◽  
Vol 20 (9) ◽  
pp. 631
Author(s):  
Marc Harper ◽  
Dashiell Fryer

We propose the entropy of random Markov trajectories originating and terminating at the same state as a measure of the stability of a state of a Markov process. These entropies can be computed in terms of the entropy rates and stationary distributions of Markov processes. We apply this definition of stability to local maxima and minima of the stationary distribution of the Moran process with mutation and show that variations in population size, mutation rate, and strength of selection all affect the stability of the stationary extrema.
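For an irreducible chain, the entropy of random trajectories from a state back to itself can be computed from the entropy rate and the stationary distribution via Ekroot and Cover's formula H_ii = h̄/π_i (an outside result consistent with the computation described above; the paper's normalization may differ). The toy chain below is an invented example, not the Moran process, and the formula is checked by Monte Carlo over simulated excursions.

```python
import math
import random

# Invented 3-state chain (not the Moran process from the paper).
P = [[0.6, 0.4, 0.0],
     [0.2, 0.5, 0.3],
     [0.1, 0.4, 0.5]]

# Stationary distribution by power iteration.
pi = [1.0 / 3.0] * 3
for _ in range(2000):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

# Entropy rate (nats) and return-trajectory entropies H[i] = h_rate / pi[i].
h_rate = sum(pi[i] * sum(-p * math.log(p) for p in P[i] if p > 0)
             for i in range(3))
H = [h_rate / pi[i] for i in range(3)]

# Monte Carlo check: average -log(probability) over excursions from state 0.
rng = random.Random(1)

def step(i):
    r, c, last = rng.random(), 0.0, 0
    for j, p in enumerate(P[i]):
        if p > 0:
            c += p
            last = j
            if r < c:
                return j
    return last

total, n_exc = 0.0, 4000
for _ in range(n_exc):
    i, logp = 0, 0.0
    while True:
        j = step(i)
        logp -= math.log(P[i][j])
        i = j
        if i == 0:
            break
    total += logp
est = total / n_exc                    # should be close to H[0]
```

The Monte Carlo average works because, by renewal-reward, the expected excursion length from state 0 is 1/π₀ and each transition contributes h̄ on average.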


1983 ◽  
Vol 20 (1) ◽  
pp. 185-190 ◽  
Author(s):  
Mark Scott ◽  
Dean L. Isaacson

By assuming the proportionality of the intensity functions at each time point for a continuous-time non-homogeneous Markov process, strong ergodicity for the process is determined through strong ergodicity of a related discrete-time Markov process. For processes having proportional intensities, strong ergodicity implies that the limiting matrix L satisfies L · P(s, t) = L, where P(s, t) is the matrix of transition functions.
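A small numerical sketch of the proportional-intensity setting (the generator and the proportionality factor c(t) are invented for illustration): with Q(t) = c(t)·Q, the transition matrices reduce to P(s, t) = exp(Q ∫ₛᵗ c(u) du), and the limiting matrix L with identical rows then satisfies L · P(s, t) = L exactly.

```python
import numpy as np

# Invented generator for the "base" homogeneous chain.
Q = np.array([[-2.0,  1.5,  0.5],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])

def expm(M, terms=40):
    """Matrix exponential by scaling-and-squaring with a Taylor series
    (adequate for small matrices like this one)."""
    s = 20
    B = M / 2.0 ** s
    S, T = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        T = T @ B / k
        S = S + T
    for _ in range(s):
        S = S @ S
    return S

def P(s, t):
    """Proportional intensities Q(t) = c(t) Q with c(u) = 1 + u, so
    P(s, t) = exp(Q * integral of c over [s, t])."""
    integral = (t + t * t / 2.0) - (s + s * s / 2.0)
    return expm(Q * integral)

# Limiting matrix L: identical rows, each the stationary vector of Q.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]
L = np.tile(pi, (3, 1))

print(np.max(np.abs(L @ P(1.0, 4.0) - L)))   # essentially zero: L P(s,t) = L
```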


2017 ◽  
Vol 13 (3) ◽  
pp. 7244-7256
Author(s):  
Miłosława Sokół

The matrices of non-homogeneous Markov processes consist of time-dependent functions whose values at any fixed time form typical intensity matrices. For solving some problems they must be changed into stochastic matrices. A stochastic matrix for a non-homogeneous Markov process consists of time-dependent functions whose values are probabilities and depend on the assumed time period. In this paper formulas for these functions are derived. Although the formulas are not simple, they allow proving some theorems for Markov stochastic processes that are well known for homogeneous processes; for non-homogeneous ones, the proofs turn out to be shorter.
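One standard way to obtain such a stochastic matrix numerically (a sketch with an invented intensity matrix, not the paper's closed-form formulas) is to integrate the Kolmogorov forward equation dP(s, t)/dt = P(s, t)·Q(t) with P(s, s) = I; the result should have rows summing to one and satisfy the Chapman–Kolmogorov property.

```python
import numpy as np

def Q(t):
    """Invented time-dependent intensity matrix: off-diagonal entries
    are non-negative and each row sums to zero."""
    a, b = 1.0 + 0.5 * np.sin(t), 2.0 / (1.0 + t)
    return np.array([[-a, a], [b, -b]])

def transition_matrix(s, t, steps=2000):
    """P(s, t) from dP/dt = P Q(t), P(s, s) = I, via classical RK4."""
    P, h, x = np.eye(2), (t - s) / steps, s
    for _ in range(steps):
        k1 = P @ Q(x)
        k2 = (P + h / 2 * k1) @ Q(x + h / 2)
        k3 = (P + h / 2 * k2) @ Q(x + h / 2)
        k4 = (P + h * k3) @ Q(x + h)
        P = P + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return P

P = transition_matrix(0.0, 3.0)
# P is a proper stochastic matrix: rows sum to 1, entries lie in [0, 1],
# and P(0,3) = P(0,1.5) P(1.5,3) (Chapman-Kolmogorov) up to integration error.
```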


1992 ◽  
Vol 6 (4) ◽  
pp. 425-438 ◽  
Author(s):  
Steven Jaffe

A 2-by-2 buffered switch is the basic element of certain parallel data-processing networks. For a switch fed by two independent Bernoulli input streams, we find the joint distribution of the number of messages waiting in the two buffers at equilibrium, in the form of a bivariate generating function. The derivation uses complex-variable techniques developed by Kingman and by Flatto and McKean for the “shortest queue problem.” A number of asymptotic results are given, the principal one being the variance of the total number of waiting messages in the heavy-traffic limit.
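A simulation sketch of one plausible reading of the model (an output-buffered switch; the paper's exact setup and analysis are different, and all parameters here are invented): two independent Bernoulli input streams each deliver a message to a uniformly chosen output buffer, and each buffer transmits at most one message per slot.

```python
import random

def simulate_switch(p=0.6, slots=200000, seed=3):
    """Each slot: each of two independent Bernoulli(p) inputs delivers
    a message to a uniformly chosen output buffer; each buffer then
    transmits at most one message. Returns the time-average total
    number of waiting messages."""
    rng = random.Random(seed)
    buf = [0, 0]
    area = 0
    for _ in range(slots):
        for _ in range(2):                 # two Bernoulli input streams
            if rng.random() < p:
                buf[rng.randrange(2)] += 1
        for k in range(2):                 # one departure per buffer per slot
            if buf[k] > 0:
                buf[k] -= 1
        area += buf[0] + buf[1]
    return area / slots

mean_waiting = simulate_switch()
```

Each buffer sees an average load of p per slot against a service rate of one per slot, so the system is stable for p < 1; the paper's generating-function analysis gives the exact equilibrium distribution that such a simulation only estimates.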


1999 ◽  
Vol 10 (5) ◽  
pp. 497-509 ◽  
Author(s):  
CHARLES KNESSL

We consider the classic shortest queue problem in the heavy traffic limit. We assume that the second server works slowly and that the service rate of the first server is nearly equal to the arrival rate. Solving for the (asymptotic) joint steady state queue length distribution involves analyzing a backward parabolic partial differential equation, together with appropriate side conditions. We explicitly solve this problem. We thus obtain a two-dimensional approximation for the steady state queue length probabilities.


1985 ◽  
Vol 22 (4) ◽  
pp. 865-878 ◽  
Author(s):  
Shlomo Halfin

A Poisson stream of customers arrives at a service center which consists of two single-server queues in parallel. The service times of the customers are exponentially distributed, and both servers serve at the same rate. Arriving customers join the shorter of the two queues, with ties broken in any plausible manner. No jockeying between the queues is allowed. Employing linear programming techniques, we calculate bounds for the probability distribution of the number of customers in the system, and its expected value in equilibrium. The bounds are asymptotically tight in heavy traffic.
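The flavor of such bounds can be illustrated by simulation (the paper derives analytic LP bounds, not simulations; parameters below are invented): the pooled M/M/2 system, whose mean number in system is 2ρ/(1 − ρ²), is a classical lower bound for join-the-shorter-queue with the same total capacity.

```python
import random

def simulate_jsq_two(lam=1.5, mu=1.0, t_end=50000.0, seed=7):
    """CTMC simulation: Poisson(lam) arrivals join the shorter of two
    queues, each served at exponential rate mu; ties broken by a fair
    coin. Returns the time-average total number in system."""
    rng = random.Random(seed)
    q1 = q2 = 0
    t = area = 0.0
    while t < t_end:
        rates = (lam, mu if q1 > 0 else 0.0, mu if q2 > 0 else 0.0)
        total = sum(rates)
        dt = rng.expovariate(total)        # time to next event
        area += (q1 + q2) * dt
        t += dt
        r = rng.random() * total           # pick the event by its rate
        if r < rates[0]:                   # arrival joins the shorter queue
            if q1 < q2 or (q1 == q2 and rng.random() < 0.5):
                q1 += 1
            else:
                q2 += 1
        elif r < rates[0] + rates[1]:      # service completion at queue 1
            q1 -= 1
        else:                              # service completion at queue 2
            q2 -= 1
    return area / t

mean_total = simulate_jsq_two()
rho = 1.5 / (2 * 1.0)
mm2_mean = 2 * rho / (1 - rho ** 2)        # pooled M/M/2 lower bound
```

At ρ = 0.75 the simulated mean sits just above the M/M/2 value, consistent with the bounds being asymptotically tight in heavy traffic.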



1994 ◽  
Vol 31 (3) ◽  
pp. 626-634 ◽  
Author(s):  
James Ledoux ◽  
Gerardo Rubino ◽  
Bruno Sericola

We characterize the conditions under which an absorbing Markovian finite process (in discrete or continuous time) can be transformed into a new aggregated process conserving the Markovian property, whose states are elements of a given partition of the original state space. To obtain this characterization, a key tool is the quasi-stationary distribution associated with absorbing processes. It allows the absorbing case to be related to the irreducible one. We are able to calculate the set of all initial distributions of the starting process leading to an aggregated homogeneous Markov process by means of a finite algorithm. Finally, it is shown that the continuous-time case can always be reduced to the discrete one using the uniformization technique.
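The simplest sufficient condition for such aggregation is strong lumpability, under which every initial distribution yields a Markov aggregated process; the paper's characterization via quasi-stationary distributions is more general, but a sketch of the strong case (with an invented transition matrix and partition) shows the mechanics:

```python
import numpy as np

# Invented 4-state chain built to be strongly lumpable w.r.t. {0,1}, {2,3}.
P = np.array([[0.10, 0.30, 0.40, 0.20],
              [0.20, 0.20, 0.10, 0.50],
              [0.30, 0.40, 0.20, 0.10],
              [0.50, 0.20, 0.25, 0.05]])
partition = [[0, 1], [2, 3]]

def lumpable(P, partition):
    """Strong lumpability: within each block, every state must have the
    same total transition probability into each block."""
    for block in partition:
        for target in partition:
            sums = [P[i, target].sum() for i in block]
            if not np.allclose(sums, sums[0]):
                return False
    return True

def aggregate(P, partition):
    """Aggregated transition matrix over the blocks (any representative
    state of a block works once lumpability holds)."""
    k = len(partition)
    Q = np.zeros((k, k))
    for a, block in enumerate(partition):
        for b, target in enumerate(partition):
            Q[a, b] = P[block[0], target].sum()
    return Q

is_lump = lumpable(P, partition)
Pagg = aggregate(P, partition)
```

For this chain the condition holds and the aggregated two-state matrix is [[0.4, 0.6], [0.7, 0.3]]; the absorbing case studied in the paper relaxes this to a set of admissible initial distributions computed by a finite algorithm.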

