The principle of the diffusion of arbitrary constants

1972 ◽  
Vol 9 (3) ◽  
pp. 519-541 ◽  
Author(s):  
Andrew D. Barbour

Equations are derived describing a central limit type large population approximation for continuous time Markov lattice processes in one or more dimensions, such as are commonly encountered in biological models. A method of solving the equations using only the deterministic solution of the process is explained, and it is extended by the use of a martingale argument to provide more detailed information about the process.
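The abstract's "deterministic solution" refers to the law-of-large-numbers limit of a density-dependent Markov chain. As an illustration (not taken from the paper), here is a minimal sketch of the idea for one hypothetical example, a logistic birth-death process on the lattice {0, 1, ..., N}: for large N the scaled process X(t)/N concentrates around the solution of the limiting ODE, with central-limit-type fluctuations of order 1/√N. The rates, parameters, and Gillespie-style simulation are illustrative assumptions.

```python
import random

def simulate_logistic_bd(N, lam=2.0, mu=1.0, t_end=5.0, seed=0):
    """Gillespie simulation of a density-dependent birth-death chain X_t on
    {0, 1, ..., N}: births at rate lam*x*(1 - x/N), deaths at rate mu*x.
    Returns X(t_end)/N, the scaled population density."""
    rng = random.Random(seed)
    x, t = N // 2, 0.0
    while t < t_end and x > 0:
        b = lam * x * (1.0 - x / N)   # birth rate at current state
        d = mu * x                    # death rate at current state
        total = b + d
        if total <= 0.0:
            break
        t += rng.expovariate(total)   # exponential waiting time to next event
        if t >= t_end:
            break
        x += 1 if rng.random() * total < b else -1
    return x / N

def deterministic_limit(lam=2.0, mu=1.0):
    # Stable fixed point of the limiting ODE x' = lam*x*(1 - x) - mu*x.
    return 1.0 - mu / lam

density = simulate_logistic_bd(N=10_000)
print(density, deterministic_limit())
```

For N = 10,000 the simulated density stays within a few multiples of 1/√N of the deterministic fixed point, which is the regime in which the central-limit approximation described above applies.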


2017 ◽  
Vol 49 (2) ◽  
pp. 549-580 ◽  
Author(s):  
Bertrand Cloez

We consider a particle system in continuous time, a discrete population, with spatial motion, and nonlocal branching. The offspring's positions and their number may depend on the mother's position. Our setting captures, for instance, processes indexed by a Galton–Watson tree. Using a size-biased auxiliary process for the empirical measure, we determine the asymptotic behaviour of the particle system. We also obtain a large population approximation as a weak solution of a growth-fragmentation equation. Several examples illustrate our results. The main one describes the behaviour of a mitosis model; the population is size structured. In this example, the sizes of the cells grow linearly and, when a cell dies, it divides into two descendants.
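As a toy illustration of the mitosis model described above (the constant division rate, unit growth speed, and initial size are assumptions, not taken from the paper), one can simulate the branching population directly: each cell's size grows linearly until an exponential division time, at which point it is replaced by two daughters of half its size.

```python
import random

def simulate_mitosis(t_end=3.0, rate=1.0, seed=1):
    """Toy size-structured mitosis model: each cell's size grows at unit
    speed; a cell divides at constant rate `rate` into two daughters of
    half its size. Returns the list of cell sizes alive at time t_end."""
    rng = random.Random(seed)
    stack = [(1.0, 0.0)]        # (size, time) of cells still to be evolved
    sizes = []
    while stack:
        size, t = stack.pop()
        dt = rng.expovariate(rate)        # waiting time to next division
        if t + dt >= t_end:               # cell survives to the horizon
            sizes.append(size + (t_end - t))
        else:                             # divide: two half-size daughters
            child_size = (size + dt) / 2.0
            stack.append((child_size, t + dt))
            stack.append((child_size, t + dt))
    return sizes

sizes = simulate_mitosis()
print(len(sizes), sum(sizes) / len(sizes))
```

The empirical measure of these sizes is the object whose large-population limit solves the growth-fragmentation equation mentioned in the abstract.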


1967 ◽  
Vol 30 ◽  
pp. 47-56 ◽  
Author(s):  
Masatoshi Fukushima ◽  
Masuyuki Hitsuda

We shall consider a class of Markov processes (n(t), x(t)) with continuous time parameter t ∈ [0, ∞), whose state space is {1, 2, ..., N} × R¹. We shall assume that the processes are spatially homogeneous with respect to x ∈ R¹. In detail, our assumption is that the transition function

F_ij(x, t) = P(n(t) = j, x(t) ≦ x | n(0) = i, x(0) = 0),  t > 0, 1 ≦ i, j ≦ N, x ∈ R¹,

satisfies the following conditions (1.1)–(1.4).


2020 ◽  
Vol 10 (2) ◽  
pp. 124-151
Author(s):  
Justin Sirignano ◽  
Konstantinos Spiliopoulos

Stochastic gradient descent in continuous time (SGDCT) provides a computationally efficient method for the statistical learning of continuous-time models, which are widely used in science, engineering, and finance. The SGDCT algorithm follows a (noisy) descent direction along a continuous stream of data. The parameter updates occur in continuous time and satisfy a stochastic differential equation. This paper analyzes the asymptotic convergence rate of the SGDCT algorithm by proving a central limit theorem for strongly convex objective functions and, under slightly stronger conditions, for nonconvex objective functions as well. An [Formula: see text] convergence rate is also proven for the algorithm in the strongly convex case. The mathematical analysis lies at the intersection of stochastic analysis and statistical learning.
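In practice the continuous-time SDE for the parameters is discretized. The following sketch (my own toy example, not the paper's experiments) applies an Euler discretization of SGDCT to a streaming linear model y = θ*x + noise, with the decaying learning rate α_t = α₀/(1 + t); the model, noise level, and schedule are illustrative assumptions.

```python
import random

def sgdct_linear(theta_star=2.0, alpha0=1.0, dt=0.01, t_end=200.0, seed=0):
    """Euler discretization of SGD in continuous time for a streaming linear
    model y_t = theta_star * x_t + noise. The continuous-time update
    d(theta) = alpha_t * (y - theta * x) * x dt is a noisy descent direction
    for the squared-error objective, with learning rate alpha_t = alpha0/(1+t)."""
    rng = random.Random(seed)
    theta, t = 0.0, 0.0
    while t < t_end:
        x = rng.gauss(0.0, 1.0)                  # fresh data point from the stream
        y = theta_star * x + rng.gauss(0.0, 0.1)  # noisy observation
        alpha = alpha0 / (1.0 + t)               # decaying learning rate
        theta += alpha * (y - theta * x) * x * dt
        t += dt
    return theta

theta_hat = sgdct_linear()
print(theta_hat)
```

Because the squared-error objective here is strongly convex in θ, this is the regime in which the paper's central limit theorem describes the asymptotic fluctuations of θ around θ*.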

