On the move-to-front scheme with Markov dependent requests

1997 ◽  
Vol 34 (3) ◽  
pp. 790-794 ◽  
Author(s):  
R. M. Phatarfod ◽  
A. J. Pryde ◽  
David Dyte

In this paper we consider the operation of the move-to-front scheme where the requests form a Markov chain of N states with transition probability matrix P. It is shown that the configurations of items at successive requests form a Markov chain, and its transition probability matrix has eigenvalues that are the eigenvalues of all the principal submatrices of P except those of order N − 1. We also show that the multiplicity of the eigenvalues of submatrices of order m is the number of derangements of N − m objects. The last result is shown to be true even if P is not a stochastic matrix.
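For a small case the spectral claim can be checked numerically: with N = 3, the configuration chain has 3! = 6 states, and its eigenvalues should be those of the order-1 principal submatrices (the diagonal entries of P, each with multiplicity equal to the number of derangements of 2 objects, i.e. 1) together with those of P itself; order N − 1 = 2 contributes nothing since there are no derangements of 1 object. A minimal sketch with an arbitrary illustrative P (not from the paper):

```python
import itertools
import numpy as np

# Illustrative 3-state request chain (any stochastic matrix would do).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])
N = 3

# States of the configuration chain: orderings of the items; the item at
# the front is necessarily the one requested last.
configs = list(itertools.permutations(range(N)))
M = np.zeros((len(configs), len(configs)))
for a, sigma in enumerate(configs):
    front = sigma[0]
    for j in range(N):
        tau = (j,) + tuple(x for x in sigma if x != j)  # move j to front
        M[a, configs.index(tau)] += P[front, j]

# Expected spectrum: eigenvalues of the order-1 principal submatrices
# (the diagonal of P) plus those of P itself; order N-1 = 2 is excluded.
expected = np.concatenate([np.diag(P).astype(complex), np.linalg.eigvals(P)])

def multiset_close(a, b, tol=1e-8):
    """Compare two eigenvalue multisets up to numerical error."""
    b = list(b)
    for lam in a:
        k = int(np.argmin([abs(lam - mu) for mu in b]))
        if abs(lam - b.pop(k)) > tol:
            return False
    return not b

ok = multiset_close(np.linalg.eigvals(M), expected)
print(ok)
```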


1996 ◽  
Vol 33 (04) ◽  
pp. 974-985 ◽  
Author(s):  
F. Simonot ◽  
Y. Q. Song

Let P be an infinite irreducible stochastic matrix, positive recurrent and stochastically monotone, and let Pn be any n × n stochastic matrix with Pn ≧ Tn, where Tn denotes the n × n northwest corner truncation of P. These assumptions imply the existence of limit distributions π and πn for P and Pn respectively. We show that if the Markov chain with transition probability matrix P meets the further condition of geometric recurrence, then the exact convergence rate of πn to π can be expressed in terms of the radius of convergence of the generating function of π. As an application of the preceding result, we deal with the random walk on a half line and prove that the assumption of geometric recurrence can be relaxed. We also show that if the i.i.d. input sequence (A(m)) is such that we can find a real number r0 > 1 with , then the exact convergence rate of πn to π is characterized by r0. Moreover, when the generating function of A is not defined for |z| > 1, we derive an upper bound for the distance between πn and π based on the moments of A.
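As an illustration of the setup (not of the paper's proof), consider a birth-death chain on the half line with up-probability p < q; it is geometrically recurrent, its stationary law is geometric, and the l1 error of a northwest-corner truncation augmented on its last column shrinks geometrically in the truncation size n. All numbers below are illustrative:

```python
import numpy as np

# Illustrative birth-death chain on {0,1,...}: up-prob p, down-prob q,
# p < q, so it is geometrically recurrent with stationary law
# pi_i = (1 - rho) * rho**i, rho = p/q (a standard fact, taken as the exact pi).
p, q = 0.3, 0.7
rho = p / q

def truncation(n):
    """n x n northwest corner; the cut mass p of the last row is added back
    on the diagonal, so the result dominates the bare truncation (Pn >= Tn)."""
    T = np.zeros((n, n))
    T[0, 0], T[0, 1] = q, p
    for i in range(1, n - 1):
        T[i, i - 1], T[i, i + 1] = q, p
    T[n - 1, n - 2], T[n - 1, n - 1] = q, p
    return T

def stationary(T):
    """Solve pi T = pi, sum(pi) = 1 as an overdetermined linear system."""
    n = len(T)
    A = np.vstack([T.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def err(n):
    """l1 distance between pi_n (padded with zeros) and the exact pi."""
    pi_n = stationary(truncation(n))
    pi = (1 - rho) * rho ** np.arange(n)
    return np.abs(pi_n - pi).sum() + rho ** n  # + neglected tail mass

print(err(10), err(20))  # the error shrinks geometrically in n
```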


1996 ◽  
Vol 33 (03) ◽  
pp. 623-629 ◽  
Author(s):  
Y. Quennel Zhao ◽  
Danielle Liu

Computationally, when we solve for the stationary probabilities of a countable-state Markov chain, the transition probability matrix of the Markov chain has to be truncated, in some way, into a finite matrix. Different augmentation methods might be valid, in the sense that the stationary probability distribution of the truncated Markov chain approaches that of the countable Markov chain as the truncation size gets large. In this paper, we prove that the censored (watched) Markov chain provides the best approximation in the sense that, for a given truncation size, its sum of errors is minimal, and we show by examples that the method of augmenting the last column only is not always the best.
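The comparison can be sketched numerically. Below, a large finite birth-death matrix stands in as a proxy for the countable chain; the censored (watched) chain on the first n states is computed via the stochastic complement and compared with a cruder augmentation that dumps each row's missing mass into the first column. All numbers are illustrative, not from the paper:

```python
import numpy as np

# Finite proxy for a countable chain: a large birth-death matrix (the
# neglected tail mass beyond Nbig states is ~(p/q)**Nbig, i.e. negligible).
Nbig, n = 200, 8
p, q = 0.3, 0.7
P = np.zeros((Nbig, Nbig))
P[0, 0], P[0, 1] = q, p
for i in range(1, Nbig - 1):
    P[i, i - 1], P[i, i + 1] = q, p
P[Nbig - 1, Nbig - 2], P[Nbig - 1, Nbig - 1] = q, p

def stationary(T):
    """Solve pi T = pi, sum(pi) = 1 as an overdetermined linear system."""
    A = np.vstack([T.T - np.eye(len(T)), np.ones(len(T))])
    b = np.zeros(len(T) + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

pi = stationary(P)

# Censored (watched) chain on {0,...,n-1}: the stochastic complement adds
# back, for each excursion above the cut, the first re-entry probabilities.
T11, T12 = P[:n, :n], P[:n, n:]
T21, T22 = P[n:, :n], P[n:, n:]
censored = T11 + T12 @ np.linalg.solve(np.eye(Nbig - n) - T22, T21)

# A cruder augmentation: put each row's missing mass into the first column.
first_col = T11.copy()
first_col[:, 0] += 1.0 - T11.sum(axis=1)

err_cens = np.abs(stationary(censored) - pi[:n]).sum()
err_first = np.abs(stationary(first_col) - pi[:n]).sum()
print(err_cens, err_first)  # censoring should not give the larger error
```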


2018 ◽  
Vol 28 (5) ◽  
pp. 1552-1563 ◽  
Author(s):  
Tunny Sebastian ◽  
Visalakshi Jeyaseelan ◽  
Lakshmanan Jeyaseelan ◽  
Shalini Anandan ◽  
Sebastian George ◽  
...  

Hidden Markov models are stochastic models in which the observations are assumed to follow a mixture distribution, but the parameters of the components are governed by a Markov chain which is unobservable. We explain the issues related to the estimation of Poisson-hidden Markov models, in which the observations come from a mixture of Poisson distributions whose component parameters are governed by an m-state Markov chain with an unknown transition probability matrix. These methods were applied to data on Vibrio cholerae counts reported every month over an 11-year span at Christian Medical College, Vellore, India. Using the Viterbi algorithm, the best estimate of the state sequence was obtained, and hence the transition probability matrix. The mean passage time between the states was estimated, and its 95% confidence interval was obtained via Monte Carlo simulation. The three hidden states of the estimated Markov chain are labelled 'Low', 'Moderate' and 'High', with mean counts of 1.4, 6.6 and 20.2 and estimated average durations of stay of 3, 3 and 4 months, respectively. Environmental risk factors were studied using Markov ordinal logistic regression analysis. No significant association was found between disease severity levels and climate components.
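A minimal sketch of the decoding step: a log-domain Viterbi pass for a 3-state Poisson-hidden Markov model. The state means below are the abstract's estimates (Low/Moderate/High), but the transition matrix and initial distribution are made-up illustrative values, not the fitted ones:

```python
import math
import numpy as np

# 3-state Poisson-HMM. Means are the abstract's estimates; A and start
# are illustrative assumptions, not the fitted parameters.
means = np.array([1.4, 6.6, 20.2])
A = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
start = np.array([1/3, 1/3, 1/3])

def log_pois(k, lam):
    """Poisson log-pmf, keeping the recursion in the log domain."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def viterbi(obs):
    """Most likely hidden-state sequence given observed counts."""
    delta = np.log(start) + np.array([log_pois(obs[0], m) for m in means])
    back = []
    for k in obs[1:]:
        scores = delta[:, None] + np.log(A)   # scores[i, j]: from i to j
        back.append(scores.argmax(axis=0))    # best predecessor of each j
        delta = scores.max(axis=0) + np.array([log_pois(k, m) for m in means])
    path = [int(delta.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]

print(viterbi([0, 2, 1, 7, 5, 22, 19, 18, 3]))
```

Low counts decode to state 0, large counts to state 2, with the transition matrix smoothing over borderline observations.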


2019 ◽  
Vol 1 (2) ◽  
pp. 5-10
Author(s):  
Muhammad Azka

This research considers the number of rainy days per month in the city of Balikpapan, modelled as a discrete-time Markov chain. The purpose is to find the probability of each rainfall frequency level in a month given the frequency level of the prior month. The method classifies the monthly number of rainy days into three frequency levels: high, medium, and low. If the number of rainy days in a month is less than 11, the month is classified as low; if it is between 11 and 20, medium; and if it is more than 20, high. The result is a discrete-time Markov chain represented by its transition probability matrix and transition diagram.
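A short sketch of the construction, with made-up monthly counts: classify each month into a low/medium/high level, then estimate the transition probability matrix by counting month-to-month moves between levels and normalising rows:

```python
import numpy as np

# Made-up monthly rainy-day counts (not the Balikpapan data).
counts = [4, 9, 14, 23, 21, 17, 8, 12, 25, 19, 6, 15]

def level(c):
    """<11 rainy days: low (0); 11-20: medium (1); >20: high (2)."""
    return 0 if c < 11 else (1 if c <= 20 else 2)

states = [level(c) for c in counts]

# Maximum-likelihood transition matrix: count level-to-level transitions,
# then normalise each row to sum to 1.
T = np.zeros((3, 3))
for i, j in zip(states, states[1:]):
    T[i, j] += 1
T = T / T.sum(axis=1, keepdims=True)
print(T)
```

Each row of T is then the conditional distribution of next month's level given this month's level, which is exactly what the transition diagram depicts.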


2019 ◽  
Vol 3 (1) ◽  
pp. 13-22
Author(s):  
Bijan Bidabad ◽  
Behrouz Bidabad

This note discusses the existence of "complex probability" in sensible real-world problems. By defining a measure more general than the conventional definition of probability, the transition probability matrix of a discrete Markov chain is broken into periods shorter than a complete transition step. In this regard, complex probability is implied.
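The idea can be illustrated with a fractional matrix power: a stochastic matrix with a negative eigenvalue has a "half-step" P^(1/2) whose entries are complex, yet two half-steps compose to the full step and each row still sums to one. A sketch with an arbitrary 2-state matrix (not an example from the note):

```python
import numpy as np

# Illustrative 2-state transition matrix; its eigenvalues are 1 and -0.7,
# and the negative eigenvalue forces the half-step to have complex entries.
P = np.array([[0.1, 0.9],
              [0.8, 0.2]])
w, V = np.linalg.eig(P)
half = V @ np.diag(np.sqrt(w.astype(complex))) @ np.linalg.inv(V)

print(np.allclose(half @ half, P))       # two half-steps give one full step
print(np.allclose(half.sum(axis=1), 1))  # rows of the half-step sum to 1
print(np.abs(half.imag).max() > 1e-6)    # entries are genuinely complex
```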


1982 ◽  
Vol 19 (A) ◽  
pp. 321-326 ◽  
Author(s):  
J. Gani

A direct proof of the expression for the limit probability generating function (p.g.f.) of the sum of Markov Bernoulli random variables is outlined. This depends on the larger eigenvalue of the transition probability matrix of their Markov chain.
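A numerical sketch of the eigenvalue's role (illustrative parameters, not from the paper): encode one step of a two-state Markov Bernoulli chain, together with a factor s per success, as M(s) = P · diag(1, s). Then the p.g.f. of the partial sum S_n is start · M(s)^n · 1, and its growth rate is the larger eigenvalue of M(s):

```python
import numpy as np

# Two-state (failure/success) Markov chain with switch probabilities a, b;
# a, b, s are illustrative values.
a, b, s = 0.3, 0.5, 0.6
P = np.array([[1 - a, a],
              [b, 1 - b]])
M = P @ np.diag([1.0, s])        # one step, weighted by s on each success
lam = max(np.linalg.eigvals(M).real)

start = np.array([0.5, 0.5])
v = start.copy()
pgf = [v.sum()]                  # pgf[n] = E[s**S_n]
for _ in range(60):
    v = v @ M
    pgf.append(v.sum())
ratio = pgf[-1] / pgf[-2]        # growth rate of the p.g.f. sequence
print(ratio, lam)                # the two agree to machine precision
```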


1960 ◽  
Vol 12 ◽  
pp. 278-288 ◽  
Author(s):  
John Lamperti

Throughout this paper, the symbol P = [Pij] will represent the transition probability matrix of an irreducible, null-recurrent Markov process in discrete time. Explanation of this terminology and basic facts about such chains may be found in (6, ch. 15). It is known (3) that for each such matrix P there is a unique (except for a positive scalar multiple) positive vector Q = {qi} such that QP = Q, or

(1) qj = Σi qi Pij;

this vector is often called the "invariant measure" of the Markov chain. The first problem to be considered in this paper is that of determining for which vectors U(0) = {μi(0)} the vectors U(n) converge, or are summable, to the invariant measure Q, where U(n) = U(0)Pn has components

(2) μj(n) = Σi μi(0) Pij(n).

In § 2, this problem is attacked for general P. The main result is a negative one, and shows how to form U(0) for which U(n) will not be (termwise) Abel summable.
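As a concrete instance of such a P (not one from the paper), the reflecting symmetric random walk on the non-negative integers is irreducible and null recurrent, and its invariant measure is Q = (1, 2, 2, 2, ...), which has infinite total mass, as null recurrence requires. The identity QP = Q can be checked on the interior of a finite truncation:

```python
import numpy as np

# Reflecting symmetric random walk on {0,1,2,...}: from 0 go to 1; from
# i >= 1 step to i-1 or i+1 with probability 1/2 each. Null recurrent.
n = 50
P = np.zeros((n, n))
P[0, 1] = 1.0
for i in range(1, n - 1):
    P[i, i - 1] = P[i, i + 1] = 0.5
P[n - 1, n - 2] = 0.5  # boundary row of the truncation (mass escapes)

# Candidate invariant measure, unique up to a positive scalar multiple.
Q = np.array([1.0] + [2.0] * (n - 1))
QP = Q @ P
print(np.allclose(QP[:n - 1], Q[:n - 1]))  # QP = Q away from the cut
```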

