Multiplicative ergodicity of Laplace transforms for additive functional of Markov chains

2019 ◽  
Vol 23 ◽  
pp. 607-637 ◽  
Author(s):  
Loïc Hervé ◽  
Sana Louhichi ◽  
Françoise Pène

This article is motivated by the quantitative study of the exponential growth of Markov-driven bifurcating processes [see Hervé et al., ESAIM: PS 23 (2019) 584–606]. In this respect, a key property is multiplicative ergodicity, which deals with the asymptotic behaviour of a Laplace-type transform of a nonnegative additive functional of a Markov chain. We establish a spectral version of this multiplicative ergodicity property in a general framework. Our approach is based on the operator perturbation method. We apply our general results to two examples of Markov chains, including linear autoregressive models. In these two examples the operator-type assumptions reduce to expected finite moment conditions on the functional (no exponential moment conditions are assumed in this work).
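For a finite state space, the multiplicative ergodicity in question can be illustrated numerically: the Laplace transform E_x[exp(s·S_n)] of the additive functional S_n = v(X_1) + … + v(X_n) grows like λ(s)^n, where λ(s) is the dominant eigenvalue of the tilted kernel P_s(x, y) = P(x, y)·exp(s·v(y)). A minimal sketch (the chain P and functional v are made-up toy values, not from the paper):

```python
import numpy as np

# Toy 3-state chain and nonnegative functional v (illustrative values only).
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
v = np.array([0.0, 1.0, 2.0])

def laplace_rate(s):
    """Dominant eigenvalue lambda(s) of the tilted kernel
    P_s(x, y) = P(x, y) * exp(s * v(y)); multiplicative ergodicity says
    E_x[exp(s * S_n)] grows like lambda(s)**n."""
    Ps = P * np.exp(s * v)[None, :]   # scale column y by exp(s * v(y))
    return np.max(np.abs(np.linalg.eigvals(Ps)))

print(laplace_rate(0.0))   # stochastic P gives lambda(0) = 1
```

For s = 0 the tilted kernel is P itself, so λ(0) = 1 recovers ordinary ergodicity; the exponential growth rate of the Laplace transform is log λ(s).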

1981 ◽  
Vol 13 (2) ◽  
pp. 369-387 ◽  
Author(s):  
Richard D. Bourgin ◽  
Robert Cogburn

The general framework of a Markov chain in a random environment is presented and the problem of determining extinction probabilities is discussed. An efficient method for determining absorption probabilities and criteria for certain absorption are presented in the case that the environmental process is a two-state Markov chain. These results are then applied to birth and death, queueing and branching chains in random environments.
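In the simplest sub-case (a frozen environment, so the chain is an ordinary finite absorbing chain), the absorption probabilities in question reduce to the classical fundamental-matrix computation; a toy sketch with made-up numbers, not the paper's two-state-environment setting:

```python
import numpy as np

# Gambler's-ruin-style chain on {0, 1, 2, 3} with absorbing states 0 and 3.
p = 0.4                                 # up-step probability (assumed)
Q = np.array([[0.0, p],                 # transitions among transient {1, 2}
              [1 - p, 0.0]])
R = np.array([[1 - p, 0.0],             # transient -> absorbing {0, 3}
              [0.0, p]])

N = np.linalg.inv(np.eye(2) - Q)        # fundamental matrix: expected visits
B = N @ R                               # B[i, j] = P(absorbed in j | start i)

print(B)
```

Each row of B sums to 1 exactly when absorption is certain, which is one of the criteria the environmental-process analysis generalizes.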


1988 ◽  
Vol 25 (1) ◽  
pp. 106-119 ◽  
Author(s):  
Richard Arratia ◽  
Pricilla Morris ◽  
Michael S. Waterman

A derivation of a law of large numbers for the highest-scoring matching subsequence is given. Let Xk, Yk be i.i.d. letters drawn from a finite alphabet S according to q = (q(i))i∊S, and let v = (v(i))i∊S be non-negative real numbers assigned to the letters of S. Using a scoring system similar to that of the game Scrabble, the score of a word w = i1 ··· im is defined to be V(w) = v(i1) + ··· + v(im). Let Vn denote the value of the highest-scoring matching contiguous subsequence between X1X2 ··· Xn and Y1Y2 ··· Yn. In this paper, we show that Vn/(K log n) → 1 a.s., where K ≡ K(q, v). The method employed here involves ‘stuttering’ the letters to construct a Markov chain and applying previous results for the length of the longest matching subsequence. An explicit form for β ∊ Pr(S), where β(i) denotes the proportion of letter i found in the highest-scoring word, is given. A similar treatment for Markov chains is also included. Implicit in these results is a large-deviation result for the additive functional H ≡ Σ_{n<τ} v(Xn), for a Markov chain stopped at the hitting time τ of some state. We give this large-deviation result explicitly, for Markov chains in discrete time and in continuous time.
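For two concrete strings, Vn can be computed directly by dynamic programming over matching runs; the letter values below are invented, merely Scrabble-like:

```python
# O(n*m) computation of V_n, the score of the best matching contiguous
# subsequence (a weighted longest-common-substring).
def best_match_score(x, y, v):
    n, m = len(x), len(y)
    best = 0.0
    prev = [0.0] * (m + 1)
    for i in range(1, n + 1):
        cur = [0.0] * (m + 1)
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                # extend the diagonal matching run and add the letter value
                cur[j] = prev[j - 1] + v[x[i - 1]]
        best = max(best, max(cur))
        prev = cur
    return best

v = {'a': 1, 'b': 3, 'c': 3}            # made-up letter values
print(best_match_score("abca", "bcab", v))   # shared run "bca": 3 + 3 + 1
```

The law of large numbers says this maximum grows like K log n for long i.i.d. sequences.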


1996 ◽  
Vol 33 (2) ◽  
pp. 357-367 ◽  
Author(s):  
M. V. Koutras

In this paper we consider a class of reliability structures which can be efficiently described through (imbedded in) finite Markov chains. Some general results are provided for the reliability evaluation and generating functions of such systems. Finally, it is shown that a great variety of well known reliability structures can be accommodated in this general framework, and certain properties of those structures are obtained on using their Markov chain imbedding description.
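A classical instance of this imbedding is the consecutive-k-out-of-n:F system, whose reliability is a marginal of a (k+1)-state chain tracking the current run of failed components; a sketch in our own notation (Λ is the imbedded transition matrix, p the component reliability):

```python
import numpy as np

# Markov chain imbedding for a consecutive-k-out-of-n:F system: states
# 0..k-1 record the current run of failures, state k is absorbing failure.
def reliability(n, k, p):
    q = 1.0 - p
    m = k + 1
    Lam = np.zeros((m, m))
    for r in range(k):
        Lam[r, 0] = p                  # a working component resets the run
        Lam[r, r + 1] = q              # a failed component extends the run
    Lam[k, k] = 1.0                    # system failed: absorbing
    pi = np.zeros(m)
    pi[0] = 1.0
    pin = pi @ np.linalg.matrix_power(Lam, n)
    return pin[:k].sum()               # P(no k consecutive failures in n trials)

print(reliability(3, 2, 0.5))          # no 2 consecutive failures in 3 trials
```

With k = 1 the system degenerates to a plain series system, so reliability(n, 1, p) equals p**n, a quick sanity check on the imbedding.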


1990 ◽  
Vol 27 (3) ◽  
pp. 545-556 ◽  
Author(s):  
S. Kalpazidou

The asymptotic behaviour of the sequence (𝒞n(ω), wc,n(ω)/n) is studied, where 𝒞n(ω) is the class of all cycles c occurring along the trajectory ω of a recurrent strictly stationary Markov chain (ξn) until time n, and wc,n(ω) is the number of occurrences of the cycle c until time n. This sequence of sample weighted classes converges almost surely to a class of directed weighted cycles (𝒞∞, ωc) which represents the chain (ξn) uniquely as a circuit chain, and ωc is given a probabilistic interpretation.
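The sample cycle class and its weights can be read off a finite trajectory by the usual loop-erasing bookkeeping: walk the path, and whenever a state recurs, pop the completed cycle and count it. A small illustrative sketch (toy trajectory, our own notation, not the paper's formalism):

```python
from collections import Counter

def cycle_counts(path):
    """Count the directed cycles completed along a trajectory."""
    counts = Counter()
    stack = []
    for state in path:
        if state in stack:             # a cycle closes at this state
            i = stack.index(state)
            counts[tuple(stack[i:])] += 1
            del stack[i + 1:]          # keep the base point, erase the loop
        else:
            stack.append(state)
    return counts

counts = cycle_counts([1, 2, 3, 1, 2, 1])
print(counts)                          # cycles (1,2,3) and (1,2), once each
```

Normalizing these counts by n gives the sample weights wc,n(ω)/n whose almost-sure limits define the circuit-chain representation.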


2021 ◽  
Author(s):  
Nikolaos Halidias

In this note we study the probability of and the mean time to absorption for discrete-time Markov chains. In particular, we are interested in estimating the mean time to absorption when absorption is not certain, and we connect it with other known results. By computing a suitable probability generating function, we are able to estimate the mean time to absorption when absorption is not certain, giving some applications concerning the random walk. Furthermore, we investigate the probability that a Markov chain reaches a set A before a set B, generalizing this result to a sequence of sets A1, A2, …, Ak.
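For the basic random-walk application with certain absorption, the mean absorption times solve the standard linear system t = 1 + Qt over the transient states; a sketch for a walk on {0, …, N} absorbed at both ends (illustrative only — the note's generating-function method is what handles the case where absorption is not certain):

```python
import numpy as np

def mean_absorption_time(N, p):
    """Expected steps to hit 0 or N, from each interior state of a
    nearest-neighbour walk with up-probability p."""
    q = 1.0 - p
    m = N - 1                          # transient states 1..N-1
    Q = np.zeros((m, m))
    for i in range(m):
        if i > 0:
            Q[i, i - 1] = q
        if i < m - 1:
            Q[i, i + 1] = p
    return np.linalg.solve(np.eye(m) - Q, np.ones(m))

print(mean_absorption_time(10, 0.5))   # symmetric case: i * (N - i)
```

The closed form i(N − i) for the symmetric walk is a convenient check on the linear-system computation.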


2021 ◽  
Author(s):  
Andrea Marin ◽  
Carla Piazza ◽  
Sabina Rossi

In this paper, we deal with the lumpability approach to cope with the state space explosion problem inherent to the computation of the stationary performance indices of large stochastic models. The lumpability method is based on a state aggregation technique and applies to Markov chains exhibiting some structural regularity. Moreover, it allows one to efficiently compute the exact values of the stationary performance indices when the model is actually lumpable. The notion of quasi-lumpability is based on the idea that a Markov chain can be altered by relatively small perturbations of the transition rates in such a way that the new resulting Markov chain is lumpable. In this case, only upper and lower bounds on the performance indices can be derived. Here, we introduce a novel notion of quasi-lumpability, named proportional lumpability, which extends the original definition of lumpability but, unlike the general definition of quasi-lumpability, allows one to derive exact stationary performance indices for the original process. We then introduce the notion of proportional bisimilarity for the terms of the performance process algebra PEPA. Proportional bisimilarity induces a proportional lumpability on the underlying continuous-time Markov chains. Finally, we prove some compositionality results and show the applicability of our theory through examples.
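Ordinary lumpability, the baseline that proportional lumpability relaxes, can be checked mechanically: every two states in a block must have equal aggregate rates into each other block. A toy check (the generator and partition are made up):

```python
import numpy as np

def is_lumpable(Qgen, partition):
    """Ordinary lumpability of a CTMC generator w.r.t. a state partition."""
    for block in partition:
        for other in partition:
            if other is block:
                continue
            # aggregate rate from each state of `block` into `other`
            rates = [sum(Qgen[x][y] for y in other) for x in block]
            if not np.allclose(rates, rates[0]):
                return False
    return True

Qgen = np.array([[-2.0,  1.0,  1.0],
                 [ 3.0, -4.0,  1.0],
                 [ 3.0,  1.0, -4.0]])
print(is_lumpable(Qgen, [[0], [1, 2]]))   # states 1 and 2 aggregate exactly
```

Proportional lumpability weakens the equality of these aggregate rates to equality up to state-dependent proportionality factors, which is what preserves exact stationary indices for the original process.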


2004 ◽  
Vol 2004 (8) ◽  
pp. 421-429 ◽  
Author(s):  
Souad Assoudou ◽  
Belkheir Essebbar

This note is concerned with Bayesian estimation of the transition probabilities of a binary Markov chain observed from heterogeneous individuals. The model is founded on the Jeffreys prior, which allows the transition probabilities to be correlated. The Bayesian estimator is approximated by means of Markov chain Monte Carlo (MCMC) techniques. The performance of the Bayesian estimates is illustrated by analyzing a small simulated data set.
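A much-simplified conjugate sketch of the same estimation problem: if, unlike the paper, one places independent Jeffreys Beta(1/2, 1/2) priors on the two rows of the transition matrix (so no correlation between rows, and no MCMC needed), the posterior means come out in closed form. The data below are made up:

```python
import numpy as np

def posterior_means(chain):
    """Posterior-mean transition matrix of a binary chain under independent
    Jeffreys Beta(1/2, 1/2) priors on each row (our simplifying assumption,
    not the correlated Jeffreys prior of the paper)."""
    counts = np.zeros((2, 2))
    for a, b in zip(chain[:-1], chain[1:]):
        counts[a, b] += 1
    post = counts + 0.5                # Beta(1/2 + n_ij, ...) posterior
    return post / post.sum(axis=1, keepdims=True)

chain = [0, 1, 1, 0, 1, 0, 0, 1]       # toy observed trajectory
P_hat = posterior_means(chain)
print(P_hat)
```

With the correlated Jeffreys prior the posterior loses this conjugate form, which is exactly why the paper resorts to MCMC approximation.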

