Linear Programming Performance Bounds for Markov Chains With Polyhedrally Translation Invariant Probabilities and Applications to Unreliable Manufacturing Systems and Enhanced Wafer Fab Models

Author(s):  
James R. Morrison
P. R. Kumar

Our focus is on a class of Markov chains that have a polyhedral translation invariance property for their transition probabilities. This class can model several applications of interest featuring complexities not found in the usual models of queueing networks, for example, failure-prone manufacturing systems operating under hedging point policies, or enhanced wafer fab models featuring batch tools and setups or affine index policies. We present a new family of performance bounds that is more powerful than some earlier approaches in both expressive capability and the quality of the bounds obtained.

2004
Vol 2004 (8)
pp. 421-429
Author(s):
Souad Assoudou
Belkheir Essebbar

This note is concerned with Bayesian estimation of the transition probabilities of a binary Markov chain observed from heterogeneous individuals. The model is founded on Jeffreys' prior, which allows the transition probabilities to be correlated. The Bayesian estimator is approximated by means of Markov chain Monte Carlo (MCMC) techniques. The performance of the Bayesian estimates is illustrated by analyzing a small simulated data set.
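As a rough illustration of the estimation task (a minimal sketch, not the note's actual model: it uses independent Jeffreys Beta(1/2, 1/2) priors on each row of the transition matrix rather than the correlated joint Jeffreys prior, so the conjugate posterior can be sampled directly and no MCMC is needed; the transition matrix and sample sizes are hypothetical):

```python
import random

random.seed(0)

# Simulate a binary Markov chain with a hypothetical true transition
# matrix P[i][j] = P(next = j | current = i).
P = [[0.7, 0.3], [0.4, 0.6]]
state, n = 0, 5000
counts = [[0, 0], [0, 0]]          # counts[i][j] = observed i -> j moves
for _ in range(n):
    nxt = 0 if random.random() < P[state][0] else 1
    counts[state][nxt] += 1
    state = nxt

# Simplified stand-in for the correlated Jeffreys prior: independent
# Jeffreys Beta(1/2, 1/2) priors per row.  Row i's posterior for p_{i0}
# is then Beta(counts[i][0] + 1/2, counts[i][1] + 1/2), sampled directly.
draws = 2000
post_mean = []
for i in range(2):
    a, b = counts[i][0] + 0.5, counts[i][1] + 0.5
    samples = [random.betavariate(a, b) for _ in range(draws)]
    post_mean.append(sum(samples) / draws)

print(post_mean)   # posterior means of p_{00} and p_{10}
```

With 5000 observed transitions the posterior means land close to the true values 0.7 and 0.4; the full correlated-prior model in the note is what genuinely requires MCMC.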


Author(s):  
Peter L. Chesson

Random transition probability matrices with stationary independent factors define "white noise" environment processes for Markov chains. Two examples are considered in detail. Such environment processes can be used to construct several Markov chains that are dependent, have the same transition probabilities, and are jointly a Markov chain. Transition rates for such processes are evaluated. These results have application to the study of animal movements.
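The construction can be sketched as follows (a minimal two-state example with hypothetical environment matrices, not drawn from the paper): each step, an i.i.d. environment draw selects a transition matrix shared by both chains, so the chains move dependently while each is marginally Markov with the environment-averaged transition matrix.

```python
import random

random.seed(1)

# "White noise" environment: each step an i.i.d. fair coin picks one of
# two hypothetical 2-state transition matrices.
ENVS = ([[0.9, 0.1], [0.2, 0.8]],
        [[0.5, 0.5], [0.6, 0.4]])

def step(state, P):
    return 0 if random.random() < P[state][0] else 1

# Two chains driven by the SAME environment sequence are dependent, yet
# each is marginally Markov with the averaged matrix
# P_bar[i][j] = 0.5 * (ENVS[0][i][j] + ENVS[1][i][j]).
x, y = 0, 0
n = 20000
x_counts = [[0, 0], [0, 0]]
for _ in range(n):
    P = ENVS[random.random() < 0.5]     # shared environment draw
    nx, ny = step(x, P), step(y, P)
    x_counts[x][nx] += 1
    x, y = nx, ny

# Empirical one-step frequency x: 0 -> 0 should be near
# 0.5 * (0.9 + 0.5) = 0.7.
p00 = x_counts[0][0] / sum(x_counts[0])
print(round(p00, 3))
```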


1968
Vol 5 (2)
pp. 401-413
Author(s):  
Paul J. Schweitzer

A perturbation formalism is presented which shows how the stationary distribution and fundamental matrix of a Markov chain containing a single irreducible set of states change as the transition probabilities vary. Expressions are given for the partial derivatives of the stationary distribution and fundamental matrix with respect to the transition probabilities. Semi-group properties of the generators of transformations from one Markov chain to another are investigated. It is shown that a perturbation formalism exists in the multiple-subchain case if and only if the change in the transition probabilities neither alters the number of subchains nor intermixes them. The formalism is presented for the case when this condition is satisfied.
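One standard form of such a derivative expression can be checked numerically: for a perturbation P → P + εE (rows of E summing to zero), the stationary distribution satisfies dπ/dε = πEZ, where Z = (I − P + 1π)⁻¹ is the fundamental matrix. The sketch below uses a hypothetical 3-state chain; it is an illustration of this identity, not a reproduction of the paper's general expressions.

```python
import numpy as np

# Hypothetical 3-state transition matrix (single irreducible class).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

def stationary(P):
    # Solve pi P = pi together with pi 1 = 1 as a least-squares system.
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

pi = stationary(P)
# Fundamental matrix Z = (I - P + 1 pi)^{-1}.
Z = np.linalg.inv(np.eye(3) - P + np.outer(np.ones(3), pi))

# Perturbation direction E with zero row sums (keeps P + eps*E stochastic
# for small eps).
E = np.array([[-0.1, 0.1, 0.0],
              [ 0.0, 0.0, 0.0],
              [ 0.0, 0.0, 0.0]])
predicted = pi @ E @ Z                     # first-order formula

eps = 1e-6
numerical = (stationary(P + eps * E) - pi) / eps
print(np.abs(predicted - numerical).max())  # small: formula matches
```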


Author(s):  
Özgür Evren
Farhad Hüsseinov

Consider a dominance relation (a preorder) ≿ on a topological space X, such as the greater than or equal to relation on a function space or a stochastic dominance relation on a space of probability measures. Given a compact set K ⊆ X, we study when a continuous real function on K that is strictly monotonic with respect to ≿ can be extended to X without violating the continuity and monotonicity conditions. We show that such extensions exist for translation invariant dominance relations on a large class of topological vector spaces. Neither translation invariance nor a vector structure is needed when X is locally compact and second countable. In decision theoretic exercises, our extension theorems help construct monotonic utility functions on the universal space X starting from compact subsets. To illustrate, we prove several representation theorems for revealed or exogenously given preferences that are monotonic with respect to a dominance relation.


1977
Vol 14 (2)
pp. 298-308
Author(s):  
Peter R. Nelson

In a single-shelf library having infinitely many books B1, B2, …, the probability of selecting each book is assumed known. Books are removed one at a time and replaced in position k prior to the next removal. Books are moved either to the right or the left as necessary to vacate position k. Those arrangements of books in which, after some finite position, all the books are in natural order (book i occupies position i) are considered as states of an infinite Markov chain. When k > 1, we show that the chain can never be positive recurrent. When k = 1, we find the limits of ratios of one-step transition probabilities; and when k = 1 and the chain is transient, we find the Martin exit boundary.
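A minimal finite-library sketch of the shelf dynamics (the article treats infinitely many books; the five-book shelf and selection probabilities below are hypothetical). With k = 1 this is the familiar move-to-front rule, under which frequently selected books drift toward the front of the shelf.

```python
import random

random.seed(2)

def move_to_position_k(shelf, book, k):
    """Remove `book` and reinsert it at position k (1-indexed), shifting
    the intervening books over to vacate that position."""
    shelf = shelf.copy()
    shelf.remove(book)
    shelf.insert(k - 1, book)
    return shelf

shelf = [1, 2, 3, 4, 5]
print(move_to_position_k(shelf, 4, 1))   # -> [4, 1, 2, 3, 5]

# Drive the chain with hypothetical selection probabilities and k = 1
# (move-to-front): each step selects a book and moves it to the front.
probs = [0.4, 0.3, 0.15, 0.1, 0.05]      # book i+1 chosen w.p. probs[i]
for _ in range(10000):
    book = random.choices([1, 2, 3, 4, 5], weights=probs)[0]
    shelf = move_to_position_k(shelf, book, 1)
print(shelf)                             # a permutation of 1..5
```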


1987
Vol 1 (3)
pp. 251-264
Author(s):  
Sheldon M. Ross

In this paper we propose a new approach for estimating the transition probabilities and mean occupation times of continuous-time Markov chains. Our approach is to approximate the probability of being in a state (or the mean time already spent in a state) at time t by the probability of being in that state (or the mean time already spent in that state) at a random time that is gamma distributed with mean t.
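A sketch of the idea for a two-state continuous-time chain (the generator rates are hypothetical): if T is Gamma(n, n/t) distributed with mean t, then E[e^{QT}] = (I − (t/n)Q)^{−n}, so the transient probability is approximated by a matrix inverse and power rather than a matrix exponential, with the approximation tightening as n grows.

```python
import math
import numpy as np

# Hypothetical two-state CTMC generator: leaves state 0 at rate a = 1,
# leaves state 1 at rate b = 2.
a, b = 1.0, 2.0
Q = np.array([[-a, a], [b, -b]])
t = 0.8

# Exact transient probability P_00(t) for this generator (closed form
# for a two-state chain):
exact = (b + a * math.exp(-(a + b) * t)) / (a + b)

def gamma_time_approx(Q, t, n):
    # Evaluate the chain at a Gamma(n, n/t) random time with mean t:
    # E[exp(QT)] = (I - (t/n) Q)^{-n}.
    M = np.linalg.inv(np.eye(Q.shape[0]) - (t / n) * Q)
    return np.linalg.matrix_power(M, n)

for n in (1, 10, 100):
    approx = gamma_time_approx(Q, t, n)[0, 0]
    print(n, round(abs(approx - exact), 5))   # error shrinks as n grows
```

The n = 1 case is just the resolvent (an exponentially distributed evaluation time); larger n concentrates the gamma time around t.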

