On the probability generating function of the sum of Markov Bernoulli random variables

1982 ◽  
Vol 19 (A) ◽  
pp. 321-326 ◽  
Author(s):  
J. Gani

A direct proof of the expression for the limit probability generating function (p.g.f.) of the sum of Markov Bernoulli random variables is outlined. This depends on the larger eigenvalue of the transition probability matrix of their Markov chain.
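As a hedged illustration of the eigenvalue connection described above: for a two-state chain on {0, 1}, the p.g.f. of the partial sums S_n grows geometrically at the rate given by the larger eigenvalue of P·diag(1, s). The parameters below are made up for the sketch and are not taken from the paper.

```python
import numpy as np

# Two-state Markov chain on {0, 1}; X_k is the state at step k
# (a Markov Bernoulli sequence).  Illustrative parameters only.
a, b = 0.3, 0.3                        # P(0 -> 1) and P(1 -> 0)
P = np.array([[1 - a, a], [b, 1 - b]])
s = 0.5                                # p.g.f. argument
n = 50                                 # number of summands

# E[s^(X_1 + ... + X_n)] grows like lam(s)^n, where lam(s) is the
# larger eigenvalue of P @ diag(1, s).
lam = max(np.linalg.eigvals(P @ np.diag([1.0, s])).real)

# Monte Carlo check: simulate the chain from its stationary law (1/2, 1/2).
rng = np.random.default_rng(0)
N = 200_000
state = (rng.random(N) < 0.5).astype(int)
S = state.copy()
for _ in range(n - 1):
    flip = rng.random(N) < np.where(state == 0, a, b)
    state = np.where(flip, 1 - state, state)
    S += state
mc_rate = np.mean(s ** S) ** (1.0 / n)
print(round(lam, 3), round(mc_rate, 3))   # the two rates should agree
```

The n-th root of the simulated p.g.f. converges to the larger eigenvalue, which is the quantity the limit p.g.f. depends on.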


1992 ◽  
Vol 22 (2) ◽  
pp. 217-223 ◽  
Author(s):  
Heikki Bonsdorff

Abstract Under certain conditions, a Bonus-Malus system can be interpreted as a Markov chain whose n-step transition probabilities converge to a limit probability distribution. In this paper, the rate of this convergence is studied by means of the eigenvalues of the transition probability matrix of the Markov chain.
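A minimal numerical sketch of this idea, with an illustrative 3-state matrix (not one from the paper): the total variation distance between the n-step law and the limit law decays roughly like the second-largest eigenvalue modulus raised to the power n.

```python
import numpy as np

# Toy 3-state bonus-malus-style chain (illustrative matrix).
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])

eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
slem = eigvals[1]                      # second-largest eigenvalue modulus

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

p0 = 0                                 # start in state 0
for n in (5, 10, 20):
    dist = 0.5 * np.abs(np.linalg.matrix_power(P, n)[p0] - pi).sum()
    print(n, dist, slem ** n)          # distance decays roughly like slem^n
```

For this matrix the eigenvalues are 1, 0.5 and -0.1, so the convergence rate is governed by 0.5, exactly the mechanism the abstract describes.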


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Nikolaos Halidias

Abstract In this note we study the probability of and the mean time to absorption for discrete-time Markov chains. In particular, we are interested in estimating the mean time to absorption when absorption is not certain, and we connect it with some other known results. By computing a suitable probability generating function, we are able to estimate the mean time to absorption when absorption is not certain, giving some applications concerning the random walk. Furthermore, we investigate the probability that a Markov chain reaches a set A before reaching a set B, generalizing this result to a sequence of sets A_1, A_2, …, A_k.
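For the finite absorbing case, the standard fundamental-matrix computation gives both quantities at once; here is a minimal sketch on a small gambler's-ruin example (a textbook device, not the note's generating-function method):

```python
import numpy as np

# Random walk on {0, 1, 2, 3} with absorbing barriers 0 and 3, p = 1/2.
# Transient states are 1 and 2.
Q = np.array([[0.0, 0.5],     # transitions among transient states 1, 2
              [0.5, 0.0]])
R = np.array([[0.5, 0.0],     # transitions to absorbing states 0, 3
              [0.0, 0.5]])

N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix (I - Q)^(-1)
B = N @ R                          # B[i, j] = P(absorbed in j | start i)
t = N @ np.ones(2)                 # mean steps to absorption from each state
print(B)
print(t)
```

Starting from state 1, absorption at 0 has probability 2/3 and the mean absorption time is 2 steps, matching the classical gambler's-ruin formulas.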


1996 ◽  
Vol 33 (04) ◽  
pp. 974-985 ◽  
Author(s):  
F. Simonot ◽  
Y. Q. Song

Let P be an infinite irreducible stochastic matrix, positive recurrent and stochastically monotone, and let P_n be any n × n stochastic matrix with P_n ≧ T_n, where T_n denotes the n × n northwest corner truncation of P. These assumptions imply the existence of limit distributions π and π_n for P and P_n respectively. We show that if the Markov chain with transition probability matrix P meets the further condition of geometric recurrence, then the exact convergence rate of π_n to π can be expressed in terms of the radius of convergence of the generating function of π. As an application of the preceding result, we deal with the random walk on a half line and prove that the assumption of geometric recurrence can be relaxed. We also show that if the i.i.d. input sequence (A(m)) is such that we can find a real number r_0 > 1 with E[r_0^{A(1)}] < ∞, then the exact convergence rate of π_n to π is characterized by r_0. Moreover, when the generating function of A is not defined for |z| > 1, we derive an upper bound for the distance between π_n and π based on the moments of A.
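A hedged sketch of the truncation setup in the simplest reversible case: a reflecting random walk on the half line with illustrative up-probability p = 0.3 (not the paper's general argument). The augmented northwest corner truncations have stationary laws converging geometrically to the geometric stationary law π of the infinite chain.

```python
import numpy as np

# Reflecting random walk on {0, 1, 2, ...}: up with prob p, down with
# prob q = 1 - p (p < q).  Its stationary law is geometric, ratio r = p/q.
p, q = 0.3, 0.7
r = p / q

def stationary_of_truncation(n):
    # Northwest corner truncation, augmented in the last diagonal entry
    # so that every row sums to 1 (hence Pn >= Tn entrywise).
    Pn = np.zeros((n, n))
    Pn[0, 0], Pn[0, 1] = q, p
    for i in range(1, n - 1):
        Pn[i, i - 1], Pn[i, i + 1] = q, p
    Pn[n - 1, n - 2] = q
    Pn[n - 1, n - 1] = p
    w, v = np.linalg.eig(Pn.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

def pi_exact(n):
    return (1 - r) * r ** np.arange(n)

for n in (5, 10, 20):
    err = np.abs(stationary_of_truncation(n) - pi_exact(n)).sum()
    print(n, err)            # errors shrink geometrically in n
```

For this birth-death example the l1 error works out to exactly r^n, a concrete instance of the geometric convergence rate the abstract characterizes.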


1991 ◽  
Vol 28 (01) ◽  
pp. 1-8 ◽  
Author(s):  
J. Gani ◽  
Gy. Michaletzky

This paper considers a carrier-borne epidemic in continuous time with m + 1 > 2 stages of infection. The carriers U(t) follow a pure death process, mixing homogeneously with susceptibles X_0(t) and infectives X_i(t) in stages 1 ≦ i ≦ m of infection. The infectives progress through consecutive stages of infection after each contact with the carriers. It is shown that under certain conditions {X_0(t), X_1(t), · ··, X_m(t), U(t); t ≧ 0} is an (m + 2)-variate Markov chain, and the partial differential equation for its probability generating function is derived. This can be solved after a transformation of variables, and the probability of survivors at the end of the epidemic found.


1996 ◽  
Vol 33 (03) ◽  
pp. 623-629 ◽  
Author(s):  
Y. Quennel Zhao ◽  
Danielle Liu

Computationally, when we solve for the stationary probabilities of a countable-state Markov chain, the transition probability matrix of the Markov chain has to be truncated, in some way, into a finite matrix. Different augmentation methods may be valid, in the sense that the stationary probability distribution of the truncated Markov chain approaches that of the countable chain as the truncation size gets large. In this paper, we prove that the censored (watched) Markov chain provides the best approximation in the sense that, for a given truncation size, the sum of errors is minimal, and we show, by examples, that the method of augmenting the last column only is not always the best.
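A small finite demonstration of the comparison, with an illustrative 5-state chain truncated to 3 states (not an example from the paper): the censored chain is computed via the stochastic complement, and it reproduces the conditional stationary distribution exactly, while last-column augmentation generally does not.

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.random((5, 5))
P /= P.sum(axis=1, keepdims=True)     # a random irreducible 5x5 chain

def stationary(M):
    w, v = np.linalg.eig(M.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

pi = stationary(P)
n = 3
target = pi[:n] / pi[:n].sum()        # what truncation should recover

# Censored (watched) chain: stochastic complement of the kept states.
A, B = P[:n, :n], P[:n, n:]
C, D = P[n:, :n], P[n:, n:]
P_cens = A + B @ np.linalg.inv(np.eye(5 - n) - D) @ C

# Last-column augmentation: dump each row's missing mass on state n-1.
P_aug = A.copy()
P_aug[:, -1] += 1 - A.sum(axis=1)

err_cens = np.abs(stationary(P_cens) - target).sum()
err_aug = np.abs(stationary(P_aug) - target).sum()
print(err_cens, err_aug)
```

The censoring error is zero up to machine precision, consistent with the optimality result stated in the abstract.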


2018 ◽  
Vol 28 (5) ◽  
pp. 1552-1563 ◽  
Author(s):  
Tunny Sebastian ◽  
Visalakshi Jeyaseelan ◽  
Lakshmanan Jeyaseelan ◽  
Shalini Anandan ◽  
Sebastian George ◽  
...  

Hidden Markov models are stochastic models in which the observations are assumed to follow a mixture distribution, but the parameters of the components are governed by a Markov chain which is unobservable. The issues related to the estimation of Poisson hidden Markov models, in which the observations come from a mixture of Poisson distributions and the parameters of the component Poisson distributions are governed by an m-state Markov chain with an unknown transition probability matrix, are explained here. These methods were applied to data on Vibrio cholerae counts reported every month over an 11-year span at Christian Medical College, Vellore, India. Using the Viterbi algorithm, the best estimate of the state sequence was obtained, and hence the transition probability matrix. The mean passage times between the states were estimated. The 95% confidence interval for the mean passage time was estimated via Monte Carlo simulation. The three hidden states of the estimated Markov chain are labelled 'Low', 'Moderate' and 'High', with mean counts of 1.4, 6.6 and 20.2 and estimated average durations of stay of 3, 3 and 4 months, respectively. Environmental risk factors were studied using Markov ordinal logistic regression analysis. No significant association was found between disease severity levels and climate components.
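A minimal sketch of the Viterbi decoding step for a Poisson hidden Markov model. The transition matrix, initial law and observation series below are hypothetical; only the three state means echo the fitted values quoted above, and they are used purely for illustration.

```python
import numpy as np
from math import lgamma

lams = np.array([1.4, 6.6, 20.2])           # state-wise Poisson means
P = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])             # assumed transition matrix
init = np.array([1 / 3, 1 / 3, 1 / 3])      # assumed initial law
obs = np.array([1, 0, 2, 7, 5, 22, 18, 25, 6, 1])   # made-up counts

def log_pois(k, lam):
    # log of the Poisson pmf: k*log(lam) - lam - log(k!)
    return k * np.log(lam) - lam - lgamma(k + 1)

T, m = len(obs), len(lams)
delta = np.log(init) + log_pois(obs[0], lams)       # log joint so far
back = np.zeros((T, m), dtype=int)
for t in range(1, T):
    cand = delta[:, None] + np.log(P)               # cand[i, j]: i -> j
    back[t] = np.argmax(cand, axis=0)
    delta = cand[back[t], np.arange(m)] + log_pois(obs[t], lams)

# Backtrack the most likely state sequence.
path = np.empty(T, dtype=int)
path[-1] = int(np.argmax(delta))
for t in range(T - 1, 0, -1):
    path[t - 1] = back[t, path[t]]
print(path)
```

The decoded path tracks the magnitude of the counts, moving from the low-mean state through the moderate state to the high state and back, which is exactly how the state sequence and hence the transition matrix are recovered in the paper.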


2019 ◽  
Vol 1 (2) ◽  
pp. 5-10
Author(s):  
Muhammad Azka

The problem considered in this research is the number of rainy days per month in Balikpapan city, modelled as a discrete-time Markov chain. The purpose is to find the probability of the rainy-day frequency level in the next month, given the frequency level in the prior month. The method applied in this research classifies the monthly number of rainy days into three frequency levels: high, medium, and low. If the number of rainy days in a month is less than 11, the month is classified as low; if it is between 10 and 20, it is classified as medium; and if it is more than 20, it is classified as high. The result is a discrete-time Markov chain represented by its transition probability matrix and transition diagram.
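The classify-then-count procedure can be sketched as follows, with a made-up rainy-day series standing in for the Balikpapan records:

```python
import numpy as np

# Hypothetical monthly rainy-day counts (illustrative, not the paper's data).
rainy_days = [8, 12, 15, 22, 25, 18, 9, 14, 13, 21, 24, 23,
              16, 10, 7, 12, 22, 17, 9, 11]

def level(d):
    # 0 = low (< 11), 1 = medium (11-20), 2 = high (> 20)
    return 0 if d < 11 else (1 if d <= 20 else 2)

seq = [level(d) for d in rainy_days]

# Estimate the transition matrix by counting month-to-month moves.
counts = np.zeros((3, 3))
for a, b in zip(seq[:-1], seq[1:]):
    counts[a, b] += 1.0
P = counts / counts.sum(axis=1, keepdims=True)
print(np.round(P, 3))
```

Each row of P is the empirical conditional distribution of next month's level given this month's level, which is the transition probability matrix reported in the study.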


2019 ◽  
Vol 3 (1) ◽  
pp. 13-22
Author(s):  
Bijan Bidabad ◽  
Behrouz Bidabad

This note discusses the existence of "complex probability" in real-world problems. By defining a measure more general than the conventional definition of probability, the transition probability matrix of a discrete Markov chain is broken into periods shorter than a complete transition step. In this regard, complex probability is implied.
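A hedged numerical illustration of the half-step idea: taking a matrix square root of a transition matrix that has a negative eigenvalue yields a "half-step" matrix Q with Q·Q = P, whose rows still sum to one but whose entries are complex. The matrix below is illustrative, not from the note.

```python
import numpy as np

P = np.array([[0.1, 0.9],
              [0.8, 0.2]])      # eigenvalues 1 and -0.7

# Square root via eigendecomposition: Q = V sqrt(D) V^(-1).
w, V = np.linalg.eig(P)
Q = V @ np.diag(np.sqrt(w.astype(complex))) @ np.linalg.inv(V)
print(Q)                        # complex entries, rows still sum to 1
print(Q @ Q)                    # recovers P (up to rounding)
```

Because one eigenvalue of P is negative, its square root is imaginary, so the half-step "probabilities" are genuinely complex even though two half-steps compose back into the ordinary stochastic matrix P.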

