Limit theorems for processes such as the Markov branching process

1975 · Vol 12 (2) · pp. 289–297
Author(s): Andrew D. Barbour

Let X(t) be a continuous-time Markov process on the integers such that, if σ is a time at which X makes a jump, X(σ) − X(σ−) is distributed independently of X(σ−), and has finite mean μ and finite variance. Let q(j) denote the residence time parameter for the state j. If tn denotes the time of the nth jump and Xn ≡ X(tn), it is easy to deduce limit theorems for Xn from those for sums of independent identically distributed random variables. In this paper, it is shown how, for μ > 0 and for suitable q(·), these theorems can be translated into limit theorems for X(t), by using the continuous mapping theorem.
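
A minimal simulation sketch of such a process may help fix ideas. The function names, the linear rate choice q(j) = j, and the particular ±1 jump law below are illustrative assumptions, not taken from the paper; the essential features are only that the jump distribution is independent of the current state and has positive mean μ.

```python
import random

def simulate_jump_process(q, jump_sampler, x0=1, t_max=10.0, rng=None):
    """Simulate a continuous-time Markov jump process X(t) on the integers.

    The process stays in state j for an exponential time with rate q(j),
    then jumps by an amount drawn, independently of the current state,
    from jump_sampler.  Returns the jump times and visited states."""
    rng = rng or random.Random()
    t, x = 0.0, x0
    times, states = [0.0], [x0]
    while t < t_max and q(x) > 0:
        t += rng.expovariate(q(x))        # residence time in state x
        if t >= t_max:
            break
        x += jump_sampler(rng)            # jump law independent of X(t-)
        times.append(t)
        states.append(x)
    return times, states

# Illustrative choices: jumps +1 w.p. 3/4, -1 w.p. 1/4 (mean mu = 1/2 > 0),
# and branching-like rates q(j) = j, so the process is absorbed at 0.
rng = random.Random(42)
times, states = simulate_jump_process(
    q=lambda j: max(j, 0),
    jump_sampler=lambda r: 1 if r.random() < 0.75 else -1,
    rng=rng,
)
```

The embedded chain Xn recorded in `states` is exactly the random walk to which the classical i.i.d. limit theorems apply.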


Author(s): C. W. Anderson

Let Mn = max{X1, …, Xn}, where the Xi, i = 1, 2, … are independent identically distributed random variables. Classical extreme value theory, described for example in the books of de Haan (6) and Galambos (3), gives conditions under which there exist constants an > 0 and bn such that P{(Mn − bn)/an ≤ x} → G(x), where G(x) is taken to be one of the extreme value distributions G1(x) = exp(−e^−x), G2(x) = exp(−x^−α) (x > 0, α > 0) and G3(x) = exp(−(−x)^α) (x < 0, α > 0).
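
The normalization can be checked numerically. The sketch below uses the standard exponential example (my own illustration, not from the paper), for which an = 1 and bn = log n put Mn in the domain of the Gumbel law G1:

```python
import math, random

def normalized_maximum(n, rng):
    """Maximum of n iid Exp(1) variables, centred by b_n = log n (a_n = 1).
    For exponential samples, M_n - log n converges to the Gumbel law
    G1(x) = exp(-e^-x)."""
    m = max(rng.expovariate(1.0) for _ in range(n))
    return m - math.log(n)

rng = random.Random(0)
samples = [normalized_maximum(1000, rng) for _ in range(2000)]
# Empirical P(M_n - log n <= 1) should be close to G1(1) = exp(-e^-1)
empirical = sum(s <= 1.0 for s in samples) / len(samples)
```

With n = 1000 the empirical distribution function at x = 1 is already within sampling error of G1(1) ≈ 0.692.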


1958 · Vol 10 · pp. 222–229
Author(s): J. R. Blum, H. Chernoff, M. Rosenblatt, H. Teicher

Let {Xn} (n = 1, 2, …) be a stochastic process. The random variables comprising it, or the process itself, will be said to be interchangeable if, for any choice of distinct positive integers i1, i2, …, ik, the joint distribution of (Xi1, Xi2, …, Xik) depends merely on k and is independent of the integers i1, i2, …, ik. It was shown by De Finetti (3) that the probability measure for any interchangeable process is a mixture of probability measures of processes each consisting of independent and identically distributed random variables.
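
De Finetti's mixture construction is easy to exhibit in the simplest 0/1 case. The sketch below (names and the uniform mixing measure are my own illustrative choices) first draws a random success probability P, then generates conditionally i.i.d. Bernoulli(P) variables; the resulting sequence is interchangeable but not independent:

```python
import random

def exchangeable_sequence(n, rng):
    """Draw an exchangeable 0/1 sequence via De Finetti's construction:
    first pick a mixing variable P uniformly on (0, 1), then generate
    n conditionally iid Bernoulli(P) variables."""
    p = rng.random()                      # the mixing (de Finetti) variable
    return [1 if rng.random() < p else 0 for _ in range(n)]

rng = random.Random(7)
# Exchangeable but dependent: P(X1 = X2 = 1) = E[P^2] = 1/3, whereas
# independence with the same marginals would give 1/2 * 1/2 = 1/4.
draws = [exchangeable_sequence(2, rng) for _ in range(20000)]
both_ones = sum(a and b for a, b in draws) / len(draws)
```

The gap between 1/3 and 1/4 is precisely the positive correlation induced by mixing over P.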


1967 · Vol 4 (2) · pp. 402–405
Author(s): H. D. Miller

Let X(t) be the position at time t of a particle undergoing a simple symmetrical random walk in continuous time, i.e. the particle starts at the origin at time t = 0 and at times T1, T1 + T2, … it undergoes jumps ξ1, ξ2, …, where the time intervals T1, T2, … between successive jumps are mutually independent random variables each following the exponential density e^−t, while the jumps, which are independent of the Ti, are mutually independent random variables with the distribution P(ξi = 1) = P(ξi = −1) = 1/2. The process X(t) is clearly a Markov process whose state space is the set of all integers.
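
This process is straightforward to simulate directly from the description above (the function name is my own). Since X(t) is compound Poisson with unit jump rate and E ξ² = 1, it has E X(t) = 0 and Var X(t) = t, which the simulation can verify:

```python
import random

def simple_symmetric_walk(t_max, rng):
    """X(t_max) for the walk described above: Exp(1) holding times and
    independent jumps of +1 or -1, each with probability 1/2."""
    t, x = 0.0, 0
    while True:
        t += rng.expovariate(1.0)             # inter-jump time T_i
        if t > t_max:
            return x
        x += 1 if rng.random() < 0.5 else -1  # jump xi_i

rng = random.Random(1)
vals = [simple_symmetric_walk(5.0, rng) for _ in range(4000)]
mean = sum(vals) / len(vals)                  # should be near E X(5) = 0
var = sum(v * v for v in vals) / len(vals)    # should be near Var X(5) = 5
```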


1976 · Vol 28 (2) · pp. 403–407
Author(s): A. G. Mucci

Let (Xn) be an adapted sequence of integrable random variables on the probability space (Ω, ℱ, P). The following result can be immediately derived from Brown [2]:


1973 · Vol 16 (2) · pp. 173–177
Author(s): D. R. Beuerman

Let X1, X2, X3, … be a sequence of independent and identically distributed (i.i.d.) random variables which belong to the domain of attraction of a stable law of index α ≠ 1. That is, (Sn − An)/Bn converges in distribution to a stable law of index α, where Sn = X1 + ⋯ + Xn and Bn = n^(1/α)L(n), where L(n) is a function of slow variation; also take S0 = 0, B0 = 1. In §2, we are concerned with the weak convergence of the partial sum process to a stable process and the question of centering for stable laws and drift for stable processes.
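
A concrete member of such a domain of attraction is the symmetric Pareto-tailed law sketched below (all names and the choice α = 3/2 are my own illustration). Here L(n) is constant, so Bn = n^(1/α), and by symmetry the centering An can be taken as 0:

```python
import random

def pareto_symmetric(alpha, rng):
    """Symmetric heavy-tailed variable: |X| = U^(-1/alpha) for U uniform
    on (0, 1], with a random sign.  It lies in the domain of attraction
    of a symmetric stable law of index alpha, with B_n = n^(1/alpha)."""
    u = 1.0 - rng.random()                 # uniform on (0, 1], avoids u = 0
    sign = 1 if rng.random() < 0.5 else -1
    return sign * u ** (-1.0 / alpha)

def scaled_partial_sum(n, alpha, rng):
    """S_n / B_n; by symmetry the centering A_n is 0 here."""
    s = sum(pareto_symmetric(alpha, rng) for _ in range(n))
    return s / n ** (1.0 / alpha)

rng = random.Random(3)
vals = [scaled_partial_sum(500, 1.5, rng) for _ in range(400)]
# The limit is a symmetric stable law, so about half the scaled sums
# should be positive.
frac_positive = sum(v > 0 for v in vals) / len(vals)
```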


1973 · Vol 5 (1) · pp. 66–102
Author(s): J. F. C. Kingman

If x0 is a particular state for a continuous-time Markov process X, the random time set {t : X(t) = x0} is often of both practical and theoretical interest. Ignoring trivial or pathological cases, there are four different types of structure which this random set can display. To some extent, it is possible to treat all four cases in a unified way, but they raise different questions and require different modes of description. The distributions of various random quantities associated with this set can be related to one another by simple and useful formulae.
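
In the simplest (stable, positive-recurrent) case the set {t : X(t) = x0} is a union of intervals, as in the two-state sketch below (the chain, its rates, and the function name are my own illustration). The long-run fraction of time spent at x0 is the ratio of mean sojourn times, here (1/1)/(1/1 + 1/2) = 2/3:

```python
import random

def visit_intervals(t_max, rate0, rate1, rng):
    """Intervals composing the random set {t : X(t) = x0} for a two-state
    chain (state 0 = x0, state 1 otherwise), started at x0."""
    t, x, intervals = 0.0, 0, []
    while t < t_max:
        end = min(t + rng.expovariate(rate0 if x == 0 else rate1), t_max)
        if x == 0:
            intervals.append((t, end))
        t, x = end, 1 - x
    return intervals

rng = random.Random(5)
iv = visit_intervals(200.0, 1.0, 2.0, rng)
frac = sum(b - a for a, b in iv) / 200.0   # long-run value: 2/3
```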


1978 · Vol 15 (3) · pp. 639–644
Author(s): Peter Hall

Let Xn1 ≦ Xn2 ≦ ··· ≦ Xnn denote the order statistics from a sample of n independent, identically distributed random variables, and suppose that the variables Xnn, Xn,n−1, ···, when suitably normalized, have a non-trivial limiting joint distribution ξ1, ξ2, ···, as n → ∞. It is well known that the limiting distribution must be one of just three types. We provide a canonical representation of the stochastic process {ξn, n ≧ 1} in terms of exponential variables, and use this representation to obtain limit theorems for ξn as n → ∞.
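
In the Type 1 (Gumbel) case one standard representation of this kind takes ξn = −log(E1 + ⋯ + En) for i.i.d. Exp(1) variables Ei; the sketch below illustrates that case only (the function name is mine, and the paper's canonical representation covers all three types):

```python
import math, random

def limiting_extremes(k, rng):
    """One realization of (xi_1, ..., xi_k) in the Gumbel case, via
    xi_n = -log(E_1 + ... + E_n) with E_i iid Exp(1); the partial sums
    increase, so the sequence is automatically non-increasing."""
    s, xs = 0.0, []
    for _ in range(k):
        s += rng.expovariate(1.0)        # partial sum of exponentials
        xs.append(-math.log(s))
    return xs

rng = random.Random(11)
xi = limiting_extremes(5, rng)
# xi_1 = -log E_1 is standard Gumbel: P(xi_1 <= 0) = P(E_1 >= 1) = 1/e
hits = sum(limiting_extremes(1, rng)[0] <= 0 for _ in range(5000)) / 5000
```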


1992 · Vol 24 (2) · pp. 241–266
Author(s): Douglas P. Kennedy, Robert P. Kertz

For linear-cost-adjusted and geometric-discounted infinite sequences of i.i.d. random variables, point process convergence results are proved as the cost or discounting effect diminishes. These process convergence results are combined with continuous-mapping principles to obtain results on joint convergence of suprema and threshold-stopped random variables, and last-exit times and locations. Applications are made to several classical optimal stopping problems in these settings.
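
The threshold-stopped quantities studied here can be illustrated with a toy rule (entirely my own sketch, not the paper's model): stop an i.i.d. uniform sequence at its first exceedance of a high threshold. As the threshold rises, the stopping time grows like a geometric variable with mean 1/(1 − threshold), the kind of scaling under which the point-process limits are taken:

```python
import random

def threshold_stop(threshold, rng, max_n=10**6):
    """Stop an iid Uniform(0,1) sequence at its first exceedance of a
    threshold; return the stopping time and the stopped value."""
    for n in range(1, max_n + 1):
        x = rng.random()
        if x > threshold:
            return n, x
    return max_n, x          # cutoff reached (overwhelmingly unlikely)

rng = random.Random(9)
trials = [threshold_stop(0.99, rng) for _ in range(2000)]
# Geometric stopping time with success probability 0.01: mean 100.
mean_time = sum(n for n, _ in trials) / len(trials)
```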

