Large deviations in the local theorem for sums of stochastic variables forming a Markov chain

1976 · Vol. 16 (3) · pp. 421-428
Author(s): L. Saulis, V. Statulevičius
2002 · Vol. 34 (2) · pp. 375-393
Author(s): Nadine Guillotin-Plantard

Let (S_k)_{k≥0} be a Markov chain with state space E and (ξ_x)_{x∈E} be a family of ℝ^p-valued random vectors assumed independent of the Markov chain. The ξ_x could be assumed independent and identically distributed, or could be Gaussian with reasonable correlations. We study the large deviations of the sum
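The summed quantity itself is not reproduced above. In the random-scenery setting one typically studies Z_n = Σ_{k=0}^{n} ξ_{S_k}; that form, the choice of a simple random walk on ℤ, and an i.i.d. Gaussian scenery are all assumptions here, not taken from the abstract. A minimal simulation sketch under those assumptions:

```python
import random

def random_scenery_sum(n, seed=0):
    """Simulate Z_n = sum_{k=0}^{n} xi_{S_k} for a simple random walk
    (S_k) on Z with an i.i.d. standard-normal scenery (xi_x).
    The exact form of the summed functional is an assumption here."""
    rng = random.Random(seed)
    scenery = {}            # xi_x sampled lazily, independently of the walk
    s, total = 0, 0.0
    for _ in range(n + 1):
        if s not in scenery:
            scenery[s] = rng.gauss(0.0, 1.0)
        total += scenery[s]
        s += rng.choice((-1, 1))   # simple random walk step
    return total

print(random_scenery_sum(1000))
```

Sampling the scenery lazily (only at visited sites) keeps memory proportional to the range of the walk rather than to the whole state space.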


1990 · Vol. 27 (1) · pp. 44-59
Author(s): James A. Bucklew, Peter Ney, John S. Sadowsky

Importance sampling is a Monte Carlo simulation technique in which the simulation distribution is different from the true underlying distribution. In order to obtain an unbiased Monte Carlo estimate of the desired parameter, simulated events are weighted to reflect their true relative frequency. In this paper, we consider the estimation via simulation of certain large deviations probabilities for time-homogeneous Markov chains. We first demonstrate that when the simulation distribution is also a homogeneous Markov chain, the estimator variance will vanish exponentially as the sample size n tends to ∞. We then prove that the estimator variance is asymptotically minimized by the same exponentially twisted Markov chain which arises in large deviation theory, and furthermore, this optimization is unique among uniformly recurrent homogeneous Markov chain simulation distributions.
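The weighting step described above can be illustrated in the simplest setting. A minimal sketch, assuming i.i.d. N(0, 1) summands rather than a Markov chain, estimating the rare-event probability P(S_n/n ≥ a) by sampling from the exponentially twisted law N(θ, 1) with θ = a (the twist solving ψ′(θ) = a for the normal cumulant ψ(θ) = θ²/2) and reweighting by the likelihood ratio; all names and parameter values here are illustrative:

```python
import math
import random

def is_estimate(n, a, trials, seed=0):
    """Importance-sampling estimate of P(S_n / n >= a) for i.i.d. N(0,1)
    summands, simulating under the twisted law N(theta, 1), theta = a."""
    rng = random.Random(seed)
    theta = a                      # optimal twist: psi'(theta) = a
    acc = 0.0
    for _ in range(trials):
        s = sum(rng.gauss(theta, 1.0) for _ in range(n))
        if s / n >= a:
            # likelihood ratio dP/dP_theta = exp(-theta*s + n*theta^2/2)
            acc += math.exp(-theta * s + n * theta * theta / 2.0)
    return acc / trials

print(is_estimate(n=20, a=1.0, trials=20000))
```

Here the indicator of the rare event is multiplied by the likelihood ratio of the true law to the twisted law, which makes the estimator unbiased; under the twist the event {S_n/n ≥ a} is no longer rare, which is what drives the variance reduction the paper analyzes in the Markov-chain setting.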

