Asymptotic expansion of the absorption-time distribution for a Markov chain

Cybernetics ◽  
1975 ◽  
Vol 9 (4) ◽  
pp. 694-697
Author(s):  
V. S. Korolyuk ◽  
I. P. Penev ◽  
A. F. Turbin
1997 ◽  
Vol 34 (2) ◽ 
pp. 340-345
Author(s):  
Tommy Norberg

The sojourn time that a Markov chain spends in a subset E of its state space has a distribution that depends on the hitting distribution on E and the probabilities (resp. rates in the continuous-time case) that govern the transitions within E. In this note we characterise the set of all hitting distributions for which the sojourn time distribution is geometric (resp. exponential).
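The objects in this abstract can be sketched numerically: the sojourn-time law in E is determined by the hitting distribution on E and the sub-stochastic block of within-E transitions. The chain and the hitting distribution below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative 3-state transition matrix (not from the paper); E = {0, 1}.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
E = [0, 1]
P_EE = P[np.ix_(E, E)]               # sub-stochastic block of transitions within E

alpha = np.array([0.5, 0.5])         # hypothetical hitting distribution on E

def sojourn_pmf(n):
    """P(sojourn time in E equals n), n >= 1: stay n-1 steps inside E, then leave."""
    leave = 1.0 - P_EE.sum(axis=1)   # per-state probability of exiting E in one step
    return alpha @ np.linalg.matrix_power(P_EE, n - 1) @ leave

pmf = [sojourn_pmf(n) for n in range(1, 200)]
print(sum(pmf))                      # close to 1: the sojourn time is a.s. finite here
```

The sojourn time is geometric precisely when these pmf values decay by a constant factor; the paper characterises which hitting distributions alpha make that happen.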


2019 ◽  
Vol 23 ◽  
pp. 739-769
Author(s):  
Paweł Lorek

For a given absorbing Markov chain X* on a finite state space, a chain X is a sharp antidual of X* if the fastest strong stationary time (FSST) of X is equal, in distribution, to the absorption time of X*. In this paper, we show a systematic way of finding such an antidual based on some partial ordering of the state space. We use a theory of strong stationary duality developed recently for Möbius monotone Markov chains. We give several sharp antidual chains for a Markov chain corresponding to a generalized coupon collector problem. As a consequence – utilizing known results on the limiting distribution of the absorption time – we indicate separation cutoffs (with their window sizes) in several chains. We also present a chain which (under some conditions) has a prescribed stationary distribution and whose FSST is distributed as a prescribed mixture of sums of geometric random variables.
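The absorption-time distribution that an antidual's FSST must match can be computed directly from the transient block of the absorbing chain. This sketch uses a small hypothetical chain and the standard fundamental-matrix formula; it does not reproduce the paper's duality construction.

```python
import numpy as np

# Transient block Q of a hypothetical absorbing chain (states 0..2 transient;
# the remaining mass in each row goes to the absorbing state).
Q = np.array([[0.2, 0.5, 0.0],
              [0.0, 0.3, 0.5],
              [0.0, 0.0, 0.4]])
pi0 = np.array([1.0, 0.0, 0.0])      # start in transient state 0

# Fundamental matrix N = (I - Q)^(-1); (pi0 N) 1 is the expected absorption time.
N = np.linalg.inv(np.eye(3) - Q)
expected_T = (pi0 @ N).sum()

def tail(n):
    """P(absorption time T > n) = pi0 Q^n 1."""
    return pi0 @ np.linalg.matrix_power(Q, n) @ np.ones(3)

print(expected_T, tail(5))
```

For a sharp antidual X of this X*, the separation distance of X after n steps would equal exactly this tail probability.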


2007 ◽  
Vol 24 (3) ◽ 
pp. 293-312 ◽  
Author(s):  
VALENTINA I. KLIMENOK ◽  
DMITRY S. ORLOVSKY ◽  
ALEXANDER N. DUDIN

A multi-server queueing model with a Batch Markovian Arrival Process, phase-type service-time distribution, and impatient repeated customers is analyzed. After any unsuccessful attempt, a repeated customer leaves the system with a fixed probability. The behavior of the system is described by a continuous-time multi-dimensional Markov chain. A stability condition and an algorithm for calculating the stationary state distribution of this Markov chain are obtained. The main performance measures of the system are calculated, and numerical results are presented.
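The model's Markov chain is multi-dimensional and requires the paper's specialized algorithm, but the generic final step — solving πQ = 0 with π1 = 1 for a stationary distribution — can be sketched on a small illustrative generator (the numbers below are assumptions for illustration only).

```python
import numpy as np

# Generator of a hypothetical 3-state continuous-time Markov chain (rows sum to 0).
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])

# Solve pi Q = 0 subject to sum(pi) = 1 by replacing one redundant balance
# equation with the normalization constraint.
A = Q.T.copy()
A[-1, :] = 1.0                       # normalization row
b = np.zeros(3)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi, pi.sum())
```

In the paper's setting the state space is much larger and structured, so the stationary distribution is computed by a tailored algorithm rather than a dense solve, but the balance/normalization system is the same in spirit.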


2012 ◽  
Vol 49 (2) ◽ 
pp. 451-471
Author(s):  
Barlas Oğuz ◽  
Venkat Anantharam

A positive recurrent, aperiodic Markov chain is said to be long-range dependent (LRD) when the indicator function of a particular state is LRD. This happens if and only if the return time distribution for that state has infinite variance. We investigate the question of whether other instantaneous functions of the Markov chain also inherit this property. We provide conditions under which the function has the same degree of long-range dependence as the chain itself. We illustrate our results through three examples in diverse fields: queueing networks, source compression, and finance.
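The criterion in the abstract — LRD of a state's indicator iff the return time has infinite variance — can be illustrated with a toy chain (an assumption for illustration; not an example from the paper): from state 0 jump to level k with probability proportional to k^(-2.5), then count down one step at a time, so the return time to 0 has tail ~ k^(-1.5), hence finite mean but infinite variance.

```python
import numpy as np

# Return-time pmf of the toy chain, truncated to a large finite support.
k = np.arange(1, 10**6 + 1, dtype=float)
pmf = k ** -2.5
pmf /= pmf.sum()

def moments_up_to(cutoff):
    """Partial first and second moments of the return time up to `cutoff`."""
    return (pmf[:cutoff] * k[:cutoff]).sum(), (pmf[:cutoff] * k[:cutoff] ** 2).sum()

m1_small, m2_small = moments_up_to(10**4)
m1_large, m2_large = moments_up_to(10**6)

# First moment converges (positive recurrence); second moment grows like
# sqrt(cutoff), signalling infinite variance in the untruncated chain.
print(m1_large - m1_small, m2_large / m2_small)
```

By the stated criterion, the indicator of state 0 in this toy chain is long-range dependent; the paper's contribution concerns which *other* functions of such a chain inherit the property.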



2018 ◽  
Vol 2018 ◽  
pp. 1-14 ◽  
Author(s):  
Wenwen Qin ◽  
Meiping Yun

Despite the wide application of Floating Car Data (FCD) in urban link travel time and congestion estimation, the sparsity of observations from a low penetration rate of GPS-equipped floating cars makes it difficult to estimate travel time distribution (TTD), especially when the travel times may have multimodal distributions that are associated with the underlying traffic states. In this case, the study develops a Bayesian approach based on a particle filter framework for link TTD estimation using real-time and historical travel time observations from FCD. First, link travel times are classified by different traffic states according to the levels of vehicle delays. Then, a state-transition function is represented as a Transition Probability Matrix of the Markov chain between upstream and current links with historical observations. Using the state-transition function, an importance distribution is constructed as the summation of historical link TTDs conditional on states weighted by the current link state probabilities. Further, a sampling strategy is developed to address the sparsity problem of observations by selecting the particles with larger weights in terms of the importance distribution and a Gaussian likelihood function. Finally, the current link TTD can be reconstructed by a generic Markov Chain Monte Carlo algorithm incorporating high-weighted particles. The proposed approach is evaluated using real-world FCD. The results indicate that the proposed approach provides accurate estimates, which are very close to the empirical distributions. In addition, the approach is tested with different percentages of floating cars. The results are encouraging, even when multimodal distributions and very few or no observations exist.
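The pipeline described above — predict current-link state probabilities through the transition matrix, draw particles from the state-conditional historical TTDs, reweight by a Gaussian likelihood of the sparse observations, and resample — can be sketched as follows. All numbers (state means, matrices, observations) are hypothetical placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: 3 traffic states with historical link TTDs (Gaussian here
# for simplicity) and a Markov transition matrix between upstream/current links.
hist_mean = np.array([30.0, 60.0, 110.0])   # seconds: free / moderate / congested
hist_std  = np.array([5.0, 10.0, 20.0])
T = np.array([[0.7, 0.2, 0.1],              # state-transition probability matrix
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
upstream_state_probs = np.array([0.1, 0.7, 0.2])

# Importance distribution: mixture of historical state-conditional TTDs weighted
# by the predicted current-link state probabilities.
state_probs = upstream_state_probs @ T
n = 5000
states = rng.choice(3, size=n, p=state_probs)
particles = rng.normal(hist_mean[states], hist_std[states])

# Sparse real-time FCD observations; reweight particles by a Gaussian likelihood.
obs = np.array([55.0, 62.0])
sigma = 15.0
w = np.ones(n)
for y in obs:
    w *= np.exp(-0.5 * ((particles - y) / sigma) ** 2)
w /= w.sum()

# Resample high-weight particles; their empirical distribution estimates the TTD.
resampled = rng.choice(particles, size=n, p=w)
print(resampled.mean())
```

With no observations, the estimate falls back to the historical mixture itself, which is how the approach copes with very sparse FCD; the study additionally reconstructs the final TTD with an MCMC step rather than plain resampling.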


Cybernetics ◽  
1974 ◽  
Vol 8 (2) ◽  
pp. 193-196
Author(s):  
V. S. Korolyuk ◽  
I. P. Penev ◽  
A. F. Turbin
