Clustering of Bursts of Openings in Markov and Semi-Markov Models of Single Channel Gating

1997 ◽  
Vol 29 (1) ◽  
pp. 92-113 ◽  
Author(s):  
Frank Ball ◽  
Sue Davies

The gating mechanism of a single ion channel is usually modelled by a continuous-time Markov chain with a finite state space. The state space is partitioned into two classes, termed ‘open’ and ‘closed’, and it is possible to observe only which class the process is in. In many experiments channel openings occur in bursts. This can be modelled by partitioning the closed states further into ‘short-lived’ and ‘long-lived’ closed states, and defining a burst of openings to be a succession of open sojourns separated by closed sojourns that are entirely within the short-lived closed states. There is also evidence that bursts of openings are themselves grouped together into clusters. This clustering of bursts can be described by the ratio of the variance Var(N(t)) to the mean E[N(t)] of the number of bursts of openings commencing in (0, t]. In this paper two methods of determining Var(N(t))/E[N(t)] and its limit as t → ∞ are developed, the first via an embedded Markov renewal process and the second via an augmented continuous-time Markov chain. The theory is illustrated by a numerical study of a molecular stochastic model of the nicotinic acetylcholine receptor. Extensions to semi-Markov models of ion channel gating and the incorporation of time interval omission are briefly discussed.
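The variance-to-mean ratio of the burst count can also be estimated by direct simulation. The sketch below is a hedged illustration, not either of the paper's two exact methods: it uses an invented three-state scheme (one open, one short-lived closed, one long-lived closed state; the generator is hypothetical, not the nAChR model) and counts a burst as commencing whenever the chain enters the open state from the long-lived closed state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state gating scheme (invented, not the paper's nAChR model):
# state 0 = open, state 1 = short-lived closed, state 2 = long-lived closed.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 5.0, -6.0,  1.0],
              [ 0.5,  0.5, -1.0]])

def count_bursts(Q, t_end, rng):
    """Count bursts of openings commencing in (0, t_end]: a burst starts
    whenever the chain enters the open state (0) directly from the
    long-lived closed state (2)."""
    state, t, bursts = 2, 0.0, 0
    while True:
        rate = -Q[state, state]
        t += rng.exponential(1.0 / rate)       # exponential holding time
        if t > t_end:
            return bursts
        probs = np.maximum(Q[state], 0.0)       # jump probabilities off-diagonal
        nxt = rng.choice(3, p=probs / probs.sum())
        if state == 2 and nxt == 0:
            bursts += 1
        state = nxt

counts = np.array([count_bursts(Q, 30.0, rng) for _ in range(1000)])
ratio = counts.var() / counts.mean()
print(f"E[N(t)] ~ {counts.mean():.2f}, Var/mean ~ {ratio:.2f}")
```

A ratio above one indicates clustering of bursts relative to a Poisson stream; the paper's methods deliver this quantity exactly rather than by Monte Carlo.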


2015 ◽  
Vol 32 (3-4) ◽  
pp. 159-176 ◽  
Author(s):  
Nicole Bäuerle ◽  
Igor Gilitschenski ◽  
Uwe Hanebeck

Abstract We consider a Hidden Markov Model (HMM) where the integrated continuous-time Markov chain can be observed at discrete time points perturbed by a Brownian motion. The aim is to derive a filter for the underlying continuous-time Markov chain. The recursion formula for the discrete-time filter is easy to derive, however involves densities which are very hard to obtain. In this paper we derive exact formulas for the necessary densities in the case the state space of the HMM consists of two elements only. This is done by relating the underlying integrated continuous-time Markov chain to the so-called asymmetric telegraph process and by using recent results on this process. In case the state space consists of more than two elements we present three different ways to approximate the densities for the filter. The first approach is based on the continuous filter problem. The second approach is to derive a PDE for the densities and solve it numerically. The third approach is a crude discrete time approximation of the Markov chain. All three approaches are compared in a numerical study.
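The crude discrete-time approximation (the third approach) can be sketched as follows: over a short observation interval the chain is assumed frozen in its current state, so the observed increment is Gaussian, and the filter alternates a transition-matrix prediction with a Bayes update. All parameters below (generator, levels, noise scale) are invented for illustration; this is a sketch of the approximation scheme, not the paper's exact two-state filter.

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical two-state chain observed through noisy integrated increments.
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
levels = np.array([0.0, 1.0])      # value integrated while in each state
dt, sigma = 0.1, 0.05
P = expm(Q * dt)                   # one-step transition matrix

def filter_step(pi, dy):
    """One step of the crude discrete-time filter: predict with P, then
    update assuming X is frozen over the step, so the increment satisfies
    dy | X=i ~ N(levels[i]*dt, sigma^2*dt)."""
    pred = pi @ P
    lik = norm.pdf(dy, loc=levels * dt, scale=sigma * np.sqrt(dt))
    post = pred * lik
    return post / post.sum()

# Feed the filter noisy increments from a path held in state 1.
pi = np.array([0.5, 0.5])
for _ in range(50):
    dy = levels[1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    pi = filter_step(pi, dy)
print("filtered P(X=1) ~", round(pi[1], 3))
```

Because the true increment density mixes over paths of the chain within the interval, this Gaussian-per-state likelihood is exactly the "crude" element the paper's exact telegraph-process formulas avoid in the two-state case.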


1993 ◽  
Vol 30 (3) ◽  
pp. 518-528 ◽  
Author(s):  
Frank Ball ◽  
Geoffrey F. Yeo

We consider lumpability for continuous-time Markov chains and provide a simple probabilistic proof of necessary and sufficient conditions for strong lumpability, valid in circumstances not covered by known theory. We also consider the following marginalisability problem. Let {X(t)} = {(X1(t), X2(t), ···, Xm(t))} be a continuous-time Markov chain. Under what conditions are the marginal processes {X1(t)}, {X2(t)}, ···, {Xm(t)} also continuous-time Markov chains? We show that this is related to lumpability and, if no two of the marginal processes can jump simultaneously, then they are continuous-time Markov chains if and only if they are mutually independent. Applications to ion channel modelling and birth–death processes are discussed briefly.
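The classical strong-lumpability condition — every state in a block must have the same total rate into each other block — is straightforward to check numerically. A minimal sketch, with an invented generator and partition:

```python
import numpy as np

def is_strongly_lumpable(Q, partition, tol=1e-10):
    """Check the classical strong-lumpability condition for generator Q:
    within each block, every state must have the same aggregated rate
    into each other block of the partition."""
    for block in partition:
        for other in partition:
            if other is block:
                continue
            rates = [Q[i, list(other)].sum() for i in block]
            if np.ptp(rates) > tol:     # max - min of the aggregated rates
                return False
    return True

# Hypothetical 3-state generator in which states {1, 2} lump together:
# both have total rate 3 into block {0}.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 3.0, -4.0,  1.0],
              [ 3.0,  2.0, -5.0]])
print(is_strongly_lumpable(Q, [[0], [1, 2]]))
```

Perturbing a single rate (say Q[2, 0]) breaks the equality of aggregated rates, and the check returns False.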


1989 ◽  
Vol 26 (3) ◽  
pp. 643-648 ◽  
Author(s):  
A. I. Zeifman

We consider a non-homogeneous continuous-time Markov chain X(t) with countable state space. Definitions of uniform and strong quasi-ergodicity are introduced. The forward Kolmogorov system for X(t) is considered as a differential equation in the space of sequences l1. Sufficient conditions for uniform quasi-ergodicity are deduced from this equation. We consider conditions of uniform and strong ergodicity in the case of proportional intensities.
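Bounds of this kind typically rest on the logarithmic norm of the intensity matrix in l1: if a (reduced) forward operator has negative logarithmic norm, its semigroup contracts at that rate. A small sketch of the norm computation; the two-state example and its reduced 1×1 matrix are illustrative, not taken from the paper.

```python
import numpy as np

def log_norm_l1(A):
    """Logarithmic norm of A induced by the l1 vector norm:
    gamma(A) = max_j ( a_jj + sum_{i != j} |a_ij| ).
    A negative value yields ||exp(At)|| <= exp(gamma(A) t)."""
    d = np.diag(A)
    off = np.abs(A).sum(axis=0) - np.abs(d)   # column sums of off-diagonals
    return float(np.max(d + off))

# Hypothetical two-state chain; forward equation dp/dt = A p with A = Q^T.
Q = np.array([[-1.0, 1.0], [3.0, -3.0]])
print(log_norm_l1(Q.T))   # 0: probability is conserved, no decay from A itself
# Eliminating p0 = 1 - p1 leaves dp1/dt = -(1+3) p1 + 3, so the reduced
# 1x1 matrix has logarithmic norm -4, an exponential ergodicity rate.
print(log_norm_l1(np.array([[-4.0]])))
```

For the full forward operator the norm is 0 (columns sum to zero), which is why the reduction step is essential before a contraction rate appears.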


1988 ◽  
Vol 25 (4) ◽  
pp. 808-814 ◽  
Author(s):  
Keith N. Crank

This paper presents a method of approximating the state probabilities for a continuous-time Markov chain. This is done by constructing a right-shift process and then solving the Kolmogorov system of differential equations recursively. By solving a finite number of the differential equations, it is possible to obtain the state probabilities to any degree of accuracy over any finite time interval.
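For a finite chain the target of such an approximation can be illustrated by integrating the Kolmogorov forward system numerically and comparing with the matrix-exponential solution. The three-state generator below is an invented finite stand-in, and the integration is a generic ODE solve, not Crank's right-shift recursion.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

# Hypothetical 3-state generator (finite stand-in for the countable case).
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  4.0, -4.0]])
p0 = np.array([1.0, 0.0, 0.0])

# Kolmogorov forward system dp/dt = p Q, integrated numerically to t = 1.
sol = solve_ivp(lambda t, p: p @ Q, (0.0, 1.0), p0, rtol=1e-10, atol=1e-12)
p_num = sol.y[:, -1]

# Matrix-exponential solution for comparison.
p_exact = p0 @ expm(Q * 1.0)
print(np.max(np.abs(p_num - p_exact)))
```

The two solutions agree to within the solver tolerances, and the numerical state probabilities remain a proper distribution.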


1988 ◽  
Vol 20 (3) ◽  
pp. 546-572 ◽  
Author(s):  
Frank Ball ◽  
Mark Sansom

We consider a finite-state-space, continuous-time Markov chain which is time reversible. The state space is partitioned into two sets, termed ‘open’ and ‘closed’, and it is only possible to observe which set the process is in. Further, short sojourns in either the open or closed sets of states will fail to be detected. We show that the dynamic stochastic properties of the observed process are completely described by an embedded Markov renewal process. The parameters of this Markov renewal process are obtained, allowing us to derive expressions for the moments and autocorrelation functions of successive sojourns in both the open and closed states. We illustrate the theory with a numerical study.
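As a baseline for the sojourn-time quantities involved, the standard aggregated-Markov-chain moment formulas (ignoring time interval omission, which is the paper's complication) can be computed directly from the partitioned generator. The generator and partition below are hypothetical.

```python
import numpy as np

# Hypothetical partitioned generator: open states {0, 1}, closed state {2}.
# Time interval omission is ignored here.
Q = np.array([[-3.0,  1.0,  2.0],
              [ 2.0, -4.0,  2.0],
              [ 1.0,  1.0, -2.0]])
open_, closed = [0, 1], [2]
Q_OO = Q[np.ix_(open_, open_)]

# Entry distribution into the open class from the single closed state.
entry = Q[np.ix_(closed, open_)].ravel()
phi = entry / entry.sum()

# Standard aggregated-chain moment formulas: E[T^k] = k! phi (-Q_OO)^{-k} 1.
M = np.linalg.inv(-Q_OO)
ones = np.ones(len(open_))
mean = phi @ M @ ones
second = 2.0 * phi @ (M @ M) @ ones
print(f"mean = {mean:.3f}, variance = {second - mean**2:.3f}")
```

The embedded Markov renewal process of the paper packages exactly these partitioned-generator objects, together with the omission correction, into the law of successive observed sojourns.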


1990 ◽  
Vol 22 (04) ◽  
pp. 802-830 ◽  
Author(s):  
Frank Ball

We consider a time reversible, continuous-time Markov chain on a finite state space. The state space is partitioned into two sets, termed open and closed, and it is only possible to observe whether the process is in an open or a closed state. Further, short sojourns in either the open or closed states fail to be detected. We consider the situation when the length of minimal detectable sojourns follows a negative exponential distribution with mean 1/μ. We show that the probability density function f_O(t) of observed open sojourns takes an explicit form in which n, the size of the state space, appears. We present a thorough asymptotic analysis of f_O(t) as μ tends to infinity. We discuss the relevance of our results to the modelling of single channel records. We illustrate the theory with a numerical example.


2017 ◽  
Vol 59 (2) ◽  
pp. 240-246 ◽  
Author(s):  
JEREMY G. SUMNER

We prove that the probability substitution matrices obtained from a continuous-time Markov chain form a multiplicatively closed set if and only if the rate matrices associated with the chain form a linear space spanning a Lie algebra. The key original contribution we make is to overcome an obstruction, due to the presence of inequalities that are unavoidable in the probabilistic application, which prevents free manipulation of terms in the Baker–Campbell–Hausdorff formula.
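The Lie-algebra condition can be probed numerically: take a spanning set of rate matrices, form commutators, and test membership in the span by least squares. The 2×2 "all rate matrices" model below is an invented example in which the span is indeed closed under the Lie bracket.

```python
import numpy as np

def in_span(M, basis, tol=1e-10):
    """Least-squares test of whether M lies in the linear span of `basis`."""
    A = np.stack([B.ravel() for B in basis], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, M.ravel(), rcond=None)
    return np.allclose(A @ coeffs, M.ravel(), atol=tol)

# Hypothetical model: all 2x2 rate matrices [[-a, a], [b, -b]],
# spanned by the two basis elements below.
B1 = np.array([[-1.0, 1.0], [0.0, 0.0]])
B2 = np.array([[0.0, 0.0], [1.0, -1.0]])

comm = B1 @ B2 - B2 @ B1      # Lie bracket of the basis elements
print(in_span(comm, [B1, B2]))
```

Here the bracket [B1, B2] lands back in the span, so the rate matrices form a Lie algebra and, by the paper's theorem, the corresponding substitution matrices are multiplicatively closed; a model whose brackets escape the span fails the test.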

