A note on the inverse problem for reducible Markov chains

1977 ◽  
Vol 14 (3) ◽  
pp. 621-625 ◽  
Author(s):  
A. O. Pittenger

Suppose a physical process is modelled by a Markov chain with transition probability on S1 ∪ S2, S1 denoting the transient states and S2 a set of absorbing states. If v denotes the output distribution on S2, the question arises as to what input distributions (of raw materials) on S1 produce v. In this note we give an alternative to the formulation of Ray and Margo [2] and reduce the problem to one system of linear inequalities. An application to random walk is given and the equiprobability case examined in detail.
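The setting can be sketched numerically (with made-up numbers, not from the paper): if Q holds the transient-to-transient probabilities and R the transient-to-absorbing ones, the absorption probabilities are B = (I − Q)⁻¹R, and an input distribution μ on S1 produces the output v = μB.

```python
import numpy as np

# Illustrative absorbing chain: 2 transient states (S1), 2 absorbing (S2).
# All numbers are made up for this sketch.
Q = np.array([[0.2, 0.3],
              [0.1, 0.4]])          # transient -> transient
R = np.array([[0.1, 0.4],
              [0.3, 0.2]])          # transient -> absorbing
N = np.linalg.inv(np.eye(2) - Q)    # fundamental matrix (I - Q)^(-1)
B = N @ R                           # B[i, j] = P(absorbed in j | start in i)

mu = np.array([0.5, 0.5])           # input distribution on S1
v = mu @ B                          # output distribution on S2
```

The inverse problem then asks which μ ≥ 0 with Σμ = 1 solve μB = v for a given v, which is exactly one system of linear equalities and inequalities.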


2021 ◽  
Vol 58 (1) ◽  
pp. 177-196
Author(s):  
Servet Martínez

Abstract We consider a strictly substochastic matrix or a stochastic matrix with absorbing states. By using quasi-stationary distributions we show that there is an associated canonical Markov chain that is built from the resurrected chain, the absorbing states, and the hitting times, together with a random walk on the absorbing states, which is necessary for achieving time stationarity. Based upon the 2-stringing representation of the resurrected chain, we supply a stationary representation of the killed and the absorbed chains. The entropies of these representations have a clear meaning when one identifies the probability measure of natural factors. The balance between the entropies of these representations and the entropy of the canonical chain serves to check the correctness of the whole construction.
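As a minimal numerical companion (with an illustrative matrix of my own, not the paper's construction): a quasi-stationary distribution of a strictly substochastic matrix can be approximated by renormalized power iteration on the left, i.e. by repeatedly conditioning on survival.

```python
import numpy as np

# Strictly substochastic matrix (row sums < 1); numbers are illustrative.
P = np.array([[0.4, 0.3],
              [0.2, 0.5]])

mu = np.array([0.5, 0.5])           # any starting distribution
for _ in range(500):
    mu = mu @ P                     # one step of the killed chain
    mu /= mu.sum()                  # renormalize = condition on survival

rho = (mu @ P).sum()                # Perron root: per-step survival rate
```

The limit mu satisfies mu P = rho mu, the defining property of a quasi-stationary distribution.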


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Nikolaos Halidias

Abstract In this note we study the probability and the mean time for absorption for discrete time Markov chains. In particular, we are interested in estimating the mean time for absorption when absorption is not certain and connect it with some other known results. Computing a suitable probability generating function, we are able to estimate the mean time for absorption when absorption is not certain, giving some applications concerning the random walk. Furthermore, we investigate the probability for a Markov chain to reach a set A before reaching a set B, generalizing this result to a sequence of sets A1, A2, …, Ak.
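A small worked instance of the "reach A before B" question (my own four-state example, not from the note): on the interior states the hitting probability h solves a linear system with boundary values 1 on A and 0 on B.

```python
import numpy as np

# Four states: 0 is in A, 3 is in B, 1 and 2 are interior (numbers made up).
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.4, 0.2, 0.4, 0.0],
              [0.0, 0.3, 0.3, 0.4],
              [0.0, 0.0, 0.0, 1.0]])
interior = [1, 2]
Q = P[np.ix_(interior, interior)]       # interior -> interior block
b = P[np.ix_(interior, [0])]            # one-step probability into A
# h(i) = P_i(reach A before B) solves (I - Q) h = b
h = np.linalg.solve(np.eye(2) - Q, b).ravel()
```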


Author(s):  
Peter L. Chesson

Abstract Random transition probability matrices with stationary independent factors define "white noise" environment processes for Markov chains. Two examples are considered in detail. Such environment processes can be used to construct several Markov chains which are dependent, have the same transition probabilities and are jointly a Markov chain. Transition rates for such processes are evaluated. These results have application to the study of animal movements.
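A toy version of such an environment process (my own construction, purely illustrative): draw an i.i.d. transition matrix at each step and move several chains with the same draw. The chains are dependent, yet each moves marginally with the averaged transition law, and jointly the pair is itself a Markov chain.

```python
import numpy as np

rng = np.random.default_rng(0)
P_a = np.array([[0.9, 0.1], [0.1, 0.9]])    # two possible environments
P_b = np.array([[0.1, 0.9], [0.9, 0.1]])    # (illustrative matrices)

states = [0, 1]                             # two chains driven jointly
for _ in range(200):
    P = P_a if rng.random() < 0.5 else P_b  # fresh environment each step
    states = [rng.choice(2, p=P[s]) for s in states]
```

Marginally each chain has transition matrix (P_a + P_b)/2, which is where the "same transition probabilities" in the abstract comes from.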


1981 ◽  
Vol 18 (3) ◽  
pp. 747-751
Author(s):  
Stig I. Rosenlund

For a time-homogeneous continuous-parameter Markov chain we show that as t → 0 the transition probability pn,j(t) is at least of order t^{r(n,j)}, where r(n, j) is the minimum number of jumps needed for the chain to pass from n to j. If the intensities of passage are bounded over the set of states which can be reached from n via fewer than r(n, j) jumps, this is the exact order.
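The claim can be checked numerically on a toy generator (my own example): for a three-state chain that can only pass 0 → 1 → 2, r(0, 2) = 2 and p_{02}(t)/t² tends to a constant as t → 0. The matrix exponential is computed with a truncated power series, which is adequate at small t.

```python
import numpy as np

def expm(A, terms=30):
    """Truncated power series for exp(A); fine for small ||A||."""
    out, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# Generator: jumps only 0 -> 1 -> 2, each at rate 1, so r(0, 2) = 2.
A = np.array([[-1.0, 1.0, 0.0],
              [0.0, -1.0, 1.0],
              [0.0, 0.0, 0.0]])

ratio = expm(1e-3 * A)[0, 2] / 1e-3 ** 2   # p_02(t) / t^2 for small t
```

Here the exact transition probability is p_02(t) = 1 − e^{−t} − t e^{−t} = t²/2 + O(t³), so the ratio approaches 1/2.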


1991 ◽  
Vol 4 (4) ◽  
pp. 293-303
Author(s):  
P. Todorovic

Let {ξn} be a non-decreasing stochastically monotone Markov chain whose transition probability Q(·,·) has Q(x,{x}) = β(x) > 0 for some function β(·) that is non-decreasing with β(x) ↑ 1 as x → +∞, and each Q(x,·) is non-atomic otherwise. A typical realization of {ξn} is a Markov renewal process {(Xn, Tn)}, where ξj = Xn for Tn consecutive values of j, with Tn geometric on {1, 2, …} with parameter β(Xn). Conditions are given for Xn to be relatively stable and for Tn to be weakly convergent.
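A simulation sketch of such a chain (the kernel is my own illustrative choice, e.g. β(x) = x/(x+1) and exponential upward jumps): the chain stays at its current level with probability β(x), which makes each sojourn Tn geometric on {1, 2, …}, and otherwise jumps strictly upward from a non-atomic distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

def beta(x):
    return x / (x + 1.0)        # non-decreasing, beta(x) -> 1 as x -> +inf

x, levels, sojourns = 0.0, [], []
for _ in range(200):            # record 200 renewal pairs (X_n, T_n)
    t = 1
    while rng.random() < beta(x):
        t += 1                  # chain stays at x with probability beta(x)
    levels.append(x)
    sojourns.append(t)          # geometric sojourn time on {1, 2, ...}
    x += rng.exponential()      # non-atomic jump strictly upward
```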


1976 ◽  
Vol 13 (01) ◽  
pp. 49-56 ◽  
Author(s):  
W. D. Ray ◽  
F. Margo

The equilibrium probability distribution over the set of absorbing states of a reducible Markov chain is specified a priori and it is required to obtain the constrained sub-space or feasible region for all possible initial probability distributions over the set of transient states. This is called the inverse problem. It is shown that a feasible region exists for the choice of equilibrium distribution. Two different cases are studied: Case I, where the number of transient states exceeds that of the absorbing states and Case II, the converse. The approach is via the use of generalised inverses and numerical examples are given.
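The generalized-inverse step can be sketched as follows (with made-up numbers): a particular candidate μ solving μB = v comes from the Moore–Penrose pseudoinverse, and feasibility then amounts to μ ≥ 0 and Σμ = 1.

```python
import numpy as np

# Illustrative absorbing chain (same shape as the problem: transient block Q,
# transient-to-absorbing block R); all numbers are made up.
Q = np.array([[0.2, 0.3],
              [0.1, 0.4]])
R = np.array([[0.1, 0.4],
              [0.3, 0.2]])
B = np.linalg.inv(np.eye(2) - Q) @ R   # absorption probabilities

v = np.array([0.45, 0.55])             # prescribed equilibrium distribution
mu = v @ np.linalg.pinv(B)             # particular solution of mu @ B = v
feasible = bool(np.all(mu >= -1e-12) and abs(mu.sum() - 1.0) < 1e-9)
```

In Case I (more transient than absorbing states) μB = v is underdetermined and the feasible region is a polytope of solutions; in Case II the system can be overdetermined, so v must also lie in the row space of B.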


1983 ◽  
Vol 20 (01) ◽  
pp. 191-196 ◽  
Author(s):  
R. L. Tweedie

We give conditions under which the stationary distribution π of a Markov chain admits moments of the general form ∫ f(x)π(dx), where f is a general function; specific examples include f(x) = x^r and f(x) = e^{sx}. In general the time-dependent moments of the chain then converge to the stationary moments. We show that in special cases this convergence of moments occurs at a geometric rate. The results are applied to random walk on [0, ∞).
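For a concrete check (my own truncated example, not the paper's): a reflecting random walk on {0, 1, …} with downward drift has a geometric stationary law, so both polynomial moments E[X^r] and exponential moments E[e^{sX}] (for small enough s) are finite.

```python
import numpy as np

p, N = 0.3, 200                        # up-probability, truncation level
P = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    P[i, min(i + 1, N)] += p           # step up
    P[i, max(i - 1, 0)] += 1 - p       # step down, reflected at 0

pi = np.ones(N + 1) / (N + 1)
for _ in range(5000):                  # power iteration to stationarity
    pi = pi @ P
pi /= pi.sum()

x = np.arange(N + 1)
mean = (pi * x).sum()                  # E[X] = rho/(1-rho), rho = p/(1-p)
mgf = (pi * np.exp(0.1 * x)).sum()     # E[e^{0.1 X}]; 0.1 < log(1/rho)
```

Detailed balance gives the geometric stationary law π_i ∝ ρ^i with ρ = p/(1−p) = 3/7, so E[X] = ρ/(1−ρ) = 0.75 here.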


1983 ◽  
Vol 20 (3) ◽  
pp. 482-504 ◽  
Author(s):  
C. Cocozza-Thivent ◽  
C. Kipnis ◽  
M. Roussignol

We investigate how the property of null-recurrence is preserved for Markov chains under a perturbation of the transition probability. After recalling some useful criteria in terms of the one-step transition kernel we present two methods to determine barrier functions, one in terms of taboo potentials for the unperturbed Markov chain, and the other based on Taylor's formula.
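The simplest illustration of a barrier-function computation (a toy of mine, not the paper's criteria): for a walk on {0, 1, …} with up-probability p, the function V(x) = x has one-step drift 2p − 1 on the interior, so the null-recurrent case p = 1/2 sits exactly at zero drift and any perturbation of the transition probability tilts it.

```python
# Toy drift computation for the barrier function V(x) = x.
def drift(p: float, x: int) -> float:
    """E[V(X_1) - V(x) | X_0 = x] for x >= 1; equals 2p - 1."""
    return p * (x + 1) + (1 - p) * (x - 1) - x

base = drift(0.5, 10)        # unperturbed, null-recurrent case: zero drift
tilt = drift(0.55, 10)       # perturbed chain: strictly positive drift
```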


1992 ◽  
Vol 29 (01) ◽  
pp. 21-36 ◽  
Author(s):  
Masaaki Kijima

Let {Xn, n = 0, 1, 2, …} be a transient Markov chain which, when restricted to the state space 𝒩+ = {1, 2, …}, is governed by an irreducible, aperiodic and strictly substochastic matrix P = (pij), and let pij(n) = P[Xn = j, Xk ∈ 𝒩+ for k = 0, 1, …, n | X0 = i], i, j ∈ 𝒩+. The prime concern of this paper is conditions for the existence of the limits, qij say, of pij(n)/Σk∈𝒩+ pik(n) as n → ∞. If Σj qij = 1, the distribution (qij) is called the quasi-stationary distribution of {Xn} and has considerable practical importance. It will be shown that, under some conditions, if a non-negative non-trivial vector x = (xi) satisfying rx^T = x^T P and Σi xi < ∞ exists, where r is the convergence norm of P, i.e. r = R^{-1} with R the common convergence radius of the power series Σn pij(n) z^n, and T denotes transpose, then it is unique, positive elementwise, and qij(n) necessarily converge to xj as n → ∞. Unlike existing results in the literature, our results can be applied even to the R-null and R-transient cases. Finally, an application to a left-continuous random walk whose governing substochastic matrix is R-transient is discussed to demonstrate the usefulness of our results.
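The defining relation can be illustrated directly (with a small matrix of my own rather than the paper's left-continuous random walk): the left Perron eigenpair of a strictly substochastic P gives r and a positive vector x with r x^T = x^T P.

```python
import numpy as np

# Irreducible, aperiodic, strictly substochastic matrix (illustrative).
P = np.array([[0.5, 0.2, 0.0],
              [0.3, 0.3, 0.3],
              [0.0, 0.4, 0.4]])

vals, vecs = np.linalg.eig(P.T)    # eigenpairs of P^T = left eigenpairs of P
k = np.argmax(vals.real)
r = vals[k].real                   # convergence norm r = 1/R (Perron root)
x = np.abs(vecs[:, k].real)        # Perron vector has a uniform sign
x /= x.sum()                       # normalize so the entries sum to one
```

By Perron–Frobenius theory this x is unique and elementwise positive, matching the paper's candidate limit for the qij(n).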

