Estimating Equations for Density Dependent Markov Jump Processes

Mathematics ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 391
Author(s):  
Oluseyi Odubote ◽  
Daniel F. Linder

Reaction networks are important tools for modeling a variety of biological phenomena across a wide range of scales, for example as models of gene regulation within a cell or infectious disease outbreaks in a population. Hence, calibrating these models to observed data is useful for predicting future system behavior. However, the statistical estimation of the parameters of reaction networks is often challenging due to intractable likelihoods. Here we explore estimating equations to estimate the reaction rate parameters of density dependent Markov jump processes (DDMJP). The variance–covariance weights we propose to use in the estimating equations are obtained from an approximating process, derived from the Fokker–Planck approximation of the chemical master equation for stochastic reaction networks. We investigate the performance of the proposed methodology in a simulation study of the Lotka–Volterra predator–prey model and by fitting a susceptible, infectious, removed (SIR) model to real data from the historical plague outbreak in Eyam, England.
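As an illustrative sketch (not taken from the paper), a density dependent Markov jump process such as the Lotka–Volterra predator–prey model can be simulated exactly with Gillespie's stochastic simulation algorithm. The three reactions and the rate constants `c1`, `c2`, `c3` below are the standard textbook parameterization; the specific values in the usage example are hypothetical.

```python
import random

def gillespie_lv(x, y, c1, c2, c3, t_max, seed=0):
    """Simulate the Lotka-Volterra predator-prey jump process via Gillespie's SSA.

    x, y are the initial prey and predator counts; returns the sampled path
    as a list of (time, prey, predators) tuples.
    """
    rng = random.Random(seed)
    t, path = 0.0, [(0.0, x, y)]
    while t < t_max:
        # Propensities: prey birth, predation, predator death
        a = [c1 * x, c2 * x * y, c3 * y]
        a0 = sum(a)
        if a0 == 0:           # absorbing state: nothing can fire
            break
        t += rng.expovariate(a0)       # exponential waiting time to next jump
        r = rng.uniform(0, a0)         # choose a reaction proportional to its propensity
        if r < a[0]:
            x += 1                     # prey birth: X -> 2X
        elif r < a[0] + a[1]:
            x -= 1; y += 1             # predation: X + Y -> 2Y
        else:
            y -= 1                     # predator death: Y -> 0
        path.append((t, x, y))
    return path
```

Fitting the rate parameters to a path sampled this way is exactly the estimation problem the abstract addresses; the density dependent scaling arises when counts and rates are expressed per unit volume.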

2014 ◽  
Vol 51 (3) ◽  
pp. 741-755
Author(s):  
Adam W. Grace ◽  
Dirk P. Kroese ◽  
Werner Sandmann

Many complex systems can be modeled via Markov jump processes. Applications include chemical reactions, population dynamics, and telecommunication networks. Rare-event estimation for such models can be difficult and is often computationally expensive, because typically many (or very long) paths of the Markov jump process need to be simulated in order to observe the rare event. We present a state-dependent importance sampling approach to this problem that is adaptive and uses Markov chain Monte Carlo to sample from the zero-variance importance sampling distribution. The method is applicable to a wide range of Markov jump processes and achieves high accuracy, while requiring only a small sample to obtain the importance parameters. We demonstrate its efficiency through benchmark examples in queueing theory and stochastic chemical kinetics.
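For intuition on why importance sampling helps here, the sketch below estimates a classic queueing rare event: the probability that an M/M/1 queue, started with one customer, reaches a high level before emptying. It uses the textbook change of measure that swaps the arrival and service rates; this is a standard benchmark technique, not the adaptive MCMC-based method of the paper, and all parameter values are hypothetical.

```python
import random

def mm1_overflow_is(lam, mu, level, n_samples, seed=0):
    """Importance-sampling estimate of P(M/M/1 queue hits `level` before 0,
    starting from 1 customer), via the rate-swapping change of measure."""
    rng = random.Random(seed)
    p = lam / (lam + mu)      # up-step probability under the original measure
    p_is = mu / (lam + mu)    # swapped rates: up-steps become likely
    total = 0.0
    for _ in range(n_samples):
        state, lr = 1, 1.0
        while 0 < state < level:
            if rng.random() < p_is:
                state += 1
                lr *= p / p_is                  # likelihood ratio, up step
            else:
                state -= 1
                lr *= (1 - p) / (1 - p_is)      # likelihood ratio, down step
        if state == level:
            total += lr       # weighted indicator of the rare event
    return total / n_samples

def mm1_overflow_exact(lam, mu, level):
    """Exact gambler's-ruin answer, for comparison."""
    r = mu / lam
    return (1 - r) / (1 - r ** level)
```

Under the swapped measure the rare event is no longer rare, so a small sample suffices; the zero-variance distribution mentioned in the abstract is the (generally intractable) ideal that such changes of measure approximate.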


2013 ◽  
Vol 150 (1) ◽  
pp. 181-203
Author(s):  
Paolo Muratore-Ginanneschi ◽  
Carlos Mejía-Monasterio ◽  
Luca Peliti
