Extreme events of Markov chains

2017 · Vol 49 (1) · pp. 134-161
Author(s): I. Papastathopoulos, K. Strokorb, J. A. Tawn, A. Butler

Abstract The extremal behaviour of a Markov chain is typically characterised by its tail chain. For asymptotically dependent Markov chains, existing formulations fail to capture the full evolution of the extreme event when the chain moves out of the extreme tail region, and, for asymptotically independent chains, recent results fail to cover well-known asymptotically independent processes, such as Markov processes with a Gaussian copula between consecutive values. We use more sophisticated limiting mechanisms that cover a broader class of asymptotically independent processes than current methods, including an extension of the canonical Heffernan‒Tawn normalisation scheme, and reveal features which existing methods reduce to a degenerate form associated with nonextreme states.

2019 · Vol 44 (3) · pp. 282-308
Author(s): Brian G. Vegetabile, Stephanie A. Stout-Oswald, Elysia Poggi Davis, Tallie Z. Baram, Hal S. Stern

Predictability of behavior is an important characteristic in many fields, including biology, medicine, marketing, and education. When a sequence of actions performed by an individual can be modeled as a stationary time-homogeneous Markov chain, the predictability of the individual's behavior can be quantified by the entropy rate of the process. This article compares three estimators of the entropy rate of finite Markov processes. The first two methods directly estimate the entropy rate through estimates of the transition matrix and stationary distribution of the process. The third method is related to the sliding-window Lempel–Ziv compression algorithm. The methods are compared via a simulation study and in the context of a study of interactions between mothers and their children.
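The plug-in approach the abstract describes — estimate the transition matrix and stationary distribution, then combine them — can be sketched as follows. This is a minimal illustration, not the authors' code; the function name and interface are assumptions:

```python
import numpy as np

def entropy_rate(seq, n_states):
    """Plug-in estimate of the entropy rate of a finite Markov chain,
    H = -sum_i pi_i sum_j P_ij log2 P_ij (bits per step)."""
    # Estimate the transition matrix from observed one-step transitions.
    counts = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    P = counts / np.where(rows > 0, rows, 1.0)
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi = pi / pi.sum()
    # Zero out log terms where P_ij = 0 (0 log 0 = 0 convention).
    logP = np.where(P > 0, np.log2(np.where(P > 0, P, 1.0)), 0.0)
    return float(-np.sum(pi[:, None] * P * logP))
```

A perfectly predictable process, such as a deterministic alternation between two states, has entropy rate zero; an i.i.d. fair coin would have rate 1 bit per step.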


2009 · Vol 30 (6) · pp. 1629-1663
Author(s): Lewis Bowen

Abstract This paper introduces Markov chains and processes over non-abelian free groups and semigroups. We prove a formula for the f-invariant of a Markov chain over a free group in terms of transition matrices that parallels the classical formula for the entropy of a Markov chain. Applications include free-group analogues of the Abramov–Rohlin formula for skew-product actions and Yuzvinskii's addition formula for algebraic actions.


2020 · Vol 23 (1) · pp. 71-83
Author(s): Yu. M. Chinyuchin, A. S. Solov'ev

The process of aircraft operation involves the constant action of various factors on its components, leading to accidental or systematic changes in their technical condition. Markov processes are a particular case of stochastic processes and arise naturally in the operation of aeronautical equipment. The relationship between reliability characteristics and the cost of restoring objects allows the analytic apparatus of Markov processes to be applied to the analysis and optimization of maintainability factors. The article describes two methods for the analysis and control of object maintainability, based on stationary and non-stationary Markov chains. The stationary Markov chain model is used for equipment whose event intensities are constant in time; for objects with time-varying event intensities, a non-stationary Markov chain is used. To reduce the number of mathematical operations required when analysing aeronautical equipment maintainability with non-stationary Markov processes, an optimization algorithm is presented. The suggested methods of analysis by means of Markov chains make it possible to carry out comparative assessments of the expected maintenance and repair costs for one or several objects of the same type, taking into account their initial condition and operating time. Maintainability control using Markov chains involves searching for the optimal maintenance and repair strategy, over each state of the object, under which maintenance costs are minimal. Applying these methods of analysis and maintainability control to a controlled object produced a predictive control model in which the expected maintenance and repair costs, as well as the required number of spare parts for each specified operating-time interval, are calculated.
The possibility of using the mathematical apparatus of Markov processes for a large number of objects with different distributions of reliability factors is shown. Software implementation of the described methods, together with the use of adapted spreadsheet software, will help reduce the complexity of the calculations and improve data visualization.
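The kind of expected-cost assessment described for the stationary-chain case can be sketched as below: propagate the state distribution with the transition matrix and accumulate per-state costs. This is a hedged illustration; the interface, state set, and cost values are assumptions, not taken from the article:

```python
import numpy as np

def expected_cost(P, cost_per_state, p0, n_steps):
    """Expected cumulative maintenance cost over n_steps for a
    time-homogeneous Markov chain with transition matrix P,
    per-state cost vector cost_per_state, and initial distribution p0."""
    P = np.asarray(P, dtype=float)
    c = np.asarray(cost_per_state, dtype=float)
    p = np.asarray(p0, dtype=float)
    total = 0.0
    for _ in range(n_steps):
        total += p @ c   # expected cost incurred in the current step
        p = p @ P        # advance the state distribution one step
    return total
```

Comparing `expected_cost` under alternative transition matrices (one per maintenance strategy) gives the comparative assessment of strategies that the article describes.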


1990 · Vol 27 (03) · pp. 545-556
Author(s): S. Kalpazidou

The asymptotic behaviour of the sequence (𝒞_n(ω), w_{c,n}(ω)/n) is studied, where 𝒞_n(ω) is the class of all cycles c occurring along the trajectory ω of a recurrent strictly stationary Markov chain (ξ_n) until time n, and w_{c,n}(ω) is the number of occurrences of the cycle c until time n. This sequence of sample weighted classes converges almost surely to a class of directed weighted cycles (𝒞_∞, ω_c) which represents the chain (ξ_n) uniquely as a circuit chain, and ω_c is given a probabilistic interpretation.
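Counting cycle occurrences along a trajectory can be illustrated with a small sketch using the standard stack-popping device of the cycle decomposition: when the chain revisits a state still on the stack, the loop just closed is recorded as a cycle and removed. The function name and normalisation are assumptions for illustration:

```python
from collections import Counter

def cycle_counts(path):
    """Count the directed cycles completed along a finite trajectory.
    When the current state is already on the stack, the segment from
    its earlier occurrence onwards forms a cycle, which is counted
    (up to cyclic rotation) and popped off."""
    stack, counts = [], Counter()
    for s in path:
        if s in stack:
            i = stack.index(s)
            cycle = tuple(stack[i:])
            # Normalise up to rotation so equal cycles coincide.
            rotations = [cycle[k:] + cycle[:k] for k in range(len(cycle))]
            counts[min(rotations)] += 1
            stack = stack[:i + 1]
        else:
            stack.append(s)
    return counts
```

Dividing each count by the trajectory length n gives the sample cycle weights w_{c,n}(ω)/n whose almost-sure limits ω_c the abstract describes.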


2021 · Vol 0 (0)
Author(s): Nikolaos Halidias

Abstract In this note we study the probability of and the mean time to absorption for discrete-time Markov chains. In particular, we are interested in estimating the mean time to absorption when absorption is not certain, and we connect it with some other known results. By computing a suitable probability generating function, we are able to estimate the mean time to absorption when absorption is not certain, and we give some applications concerning the random walk. Furthermore, we investigate the probability that a Markov chain reaches a set A before a set B, generalizing this result to a sequence of sets A_1, A_2, …, A_k.
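For the baseline case in which absorption is certain, the standard fundamental-matrix computation gives both quantities the note studies (the generating-function treatment of the uncertain case is the note's contribution and is not covered here). A minimal sketch, with the canonical block form P = [[Q, R], [0, I]] assumed:

```python
import numpy as np

def absorption(Q, R):
    """Absorbing-chain quantities from the canonical block form:
    N = (I - Q)^{-1} is the fundamental matrix, t = N·1 the expected
    number of steps to absorption from each transient state, and
    B = N·R the absorption probabilities (transient -> absorbing)."""
    Q = np.asarray(Q, dtype=float)
    R = np.asarray(R, dtype=float)
    N = np.linalg.inv(np.eye(Q.shape[0]) - Q)
    t = N @ np.ones(Q.shape[0])
    B = N @ R
    return t, B
```

For the symmetric gambler's-ruin walk on {0, 1, 2, 3} with 0 and 3 absorbing, the expected time to absorption from either interior state is 2 steps, and ruin from state 1 has probability 2/3.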


2021
Author(s): Andrea Marin, Carla Piazza, Sabina Rossi

Abstract In this paper, we deal with the lumpability approach to cope with the state space explosion problem inherent to the computation of the stationary performance indices of large stochastic models. The lumpability method is based on a state aggregation technique and applies to Markov chains exhibiting some structural regularity. Moreover, it allows one to efficiently compute the exact values of the stationary performance indices when the model is actually lumpable. The notion of quasi-lumpability is based on the idea that a Markov chain can be altered by relatively small perturbations of the transition rates in such a way that the resulting Markov chain is lumpable. In this case, only upper and lower bounds on the performance indices can be derived. Here, we introduce a novel notion of quasi-lumpability, named proportional lumpability, which extends the original definition of lumpability but, unlike the general definition of quasi-lumpability, allows one to derive exact stationary performance indices for the original process. We then introduce the notion of proportional bisimilarity for the terms of the performance process algebra PEPA. Proportional bisimilarity induces a proportional lumpability on the underlying continuous-time Markov chains. Finally, we prove some compositionality results and show the applicability of our theory through examples.
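The classical (strong) lumpability condition that proportional lumpability relaxes is easy to state operationally: within each block of the partition, every state must have the same total rate into every other block. A minimal check of that condition, as an illustration only (the paper's proportional variant instead allows these aggregated rates to differ by per-state proportionality factors):

```python
def is_lumpable(Q, partition, tol=1e-12):
    """Check strong (ordinary) lumpability of a CTMC generator Q
    (list of lists) with respect to a partition of the state space:
    for each pair of distinct blocks, all states of the source block
    must have equal aggregated rates into the target block."""
    for block in partition:
        for other in partition:
            if other is block:
                continue
            rates = [sum(Q[i][j] for j in other) for i in block]
            if max(rates) - min(rates) > tol:
                return False
    return True
```

When the check succeeds, the aggregated chain over the blocks is itself a Markov chain, and stationary performance indices can be computed exactly on the (much smaller) lumped state space.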


2004 · Vol 2004 (8) · pp. 421-429
Author(s): Souad Assoudou, Belkheir Essebbar

This note is concerned with Bayesian estimation of the transition probabilities of a binary Markov chain observed from heterogeneous individuals. The model is based on Jeffreys' prior, which allows the transition probabilities to be correlated. The Bayesian estimator is approximated by means of Markov chain Monte Carlo (MCMC) techniques. The performance of the Bayesian estimates is illustrated by analyzing a small simulated data set.
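A simplified version of this posterior can be sampled in closed form: placing an independent Jeffreys Beta(1/2, 1/2) prior on each row of the binary transition matrix makes the rows conjugate to the transition counts. Note this is a deliberate simplification for illustration — the paper's joint Jeffreys prior lets the rows be correlated, which is why MCMC is needed there; all names below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_transition_draws(seq, n_draws=1000):
    """Posterior draws of p01 = P(1 | 0) and p10 = P(0 | 1) for a binary
    Markov chain, under independent Jeffreys Beta(1/2, 1/2) priors on
    each row of the transition matrix (row-independent simplification)."""
    seq = np.asarray(seq)
    n = np.zeros((2, 2))
    for a, b in zip(seq[:-1], seq[1:]):
        n[a, b] += 1
    # Beta-Binomial conjugacy: posterior is Beta(1/2 + successes, 1/2 + failures).
    p01 = rng.beta(0.5 + n[0, 1], 0.5 + n[0, 0], size=n_draws)
    p10 = rng.beta(0.5 + n[1, 0], 0.5 + n[1, 1], size=n_draws)
    return p01, p10
```

On a strictly alternating observed sequence, both posteriors concentrate near 1, as expected.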


Author(s): Peter L. Chesson

Abstract Random transition probability matrices with stationary independent factors define "white noise" environment processes for Markov chains. Two examples are considered in detail. Such environment processes can be used to construct several Markov chains which are dependent, have the same transition probabilities and are jointly a Markov chain. Transition rates for such processes are evaluated. These results have application to the study of animal movements.


Author(s): Walter Leal Filho, Abul Al-Amin, Gustavo Nagy, Ulisses Azeiteiro, Laura Wiesböck, ...

There are various climate risks caused or influenced by climate change, and they are known to have a wide range of physical, economic, environmental and social impacts. Apart from damage to the physical environment, many climate risks (climate variability, extreme events and climate-related hazards) are associated with a variety of impacts on human well-being, health, and life-supporting systems. These range from boosting the proliferation of disease vectors (e.g., mosquitoes) to mental health problems triggered by damage to property and infrastructure. There is a large body of literature on the strong links between climate change and health, but relatively little that specifically examines the health impacts of climate risks and extreme events. This paper attempts to address this knowledge gap by compiling eight examples from a set of industrialised and developing countries in which such interactions are described. The policy implications of these phenomena and the lessons learned from the examples are summarised. Some suggestions are made as to how to avert the potential and actual health impacts of climate risks, thereby assisting efforts to adapt to a problem whose impacts affect millions of people around the world. All the examples studied show some degree of vulnerability to climate risks, regardless of socioeconomic status, and a need to increase resilience against extreme events.

