Maximum Likelihood Estimation of a Stochastic Integrate-and-Fire Neural Encoding Model

2004 ◽  
Vol 16 (12) ◽  
pp. 2533-2561 ◽  
Author(s):  
Liam Paninski ◽  
Jonathan W. Pillow ◽  
Eero P. Simoncelli

We examine a cascade encoding model for neural response in which a linear filtering stage is followed by a noisy, leaky, integrate-and-fire spike generation mechanism. This model provides a biophysically more realistic alternative to models based on Poisson (memoryless) spike generation, and can effectively reproduce a variety of spiking behaviors seen in vivo. We describe the maximum likelihood estimator for the model parameters, given only extracellular spike train responses (not intracellular voltage data). Specifically, we prove that the log-likelihood function is concave and thus has an essentially unique global maximum that can be found using gradient ascent techniques. We develop an efficient algorithm for computing the maximum likelihood solution, demonstrate the effectiveness of the resulting estimator with numerical simulations, and discuss a method of testing the model's validity using time-rescaling and density evolution techniques.
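The computational point — a concave log-likelihood whose essentially unique maximum is reachable by plain gradient ascent — can be sketched on a toy model. This is not the paper's integrate-and-fire likelihood; it uses a Poisson rate parameter purely to illustrate the ascent loop:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.poisson(lam=4.0, size=1000)

# Poisson log-likelihood in lam is sum(x)*log(lam) - n*lam (up to a
# constant): concave for lam > 0, so fixed-step gradient ascent finds
# its unique maximum, the sample mean.
lam = 1.0
for _ in range(5000):
    grad = x.sum() / lam - x.size   # d/d(lam) of the log-likelihood
    lam += 1e-4 * grad              # gradient ascent step
print(lam, x.mean())
```

For the actual model the gradient involves the filtered stimulus and the leaky integration dynamics, but concavity means the same simple ascent strategy suffices in principle.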

2011 ◽  
Vol 23 (11) ◽  
pp. 2833-2867 ◽  
Author(s):  
Yi Dong ◽  
Stefan Mihalas ◽  
Alexander Russell ◽  
Ralph Etienne-Cummings ◽  
Ernst Niebur

When a neuronal spike train is observed, what can we deduce from it about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate-and-fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that, at least in principle, its unique global minimum can thus be found by gradient descent techniques. Many biological neurons are, however, known to generate a richer repertoire of spiking behaviors than can be explained in a simple integrate-and-fire model. For instance, such a model retains only an implicit memory of its input (through spike-induced currents), not an explicit one; an example of a physiological situation that cannot be explained is the absence of firing if the input current is increased very slowly. Therefore, we use an expanded model (Mihalas & Niebur, 2009), which is capable of generating a large number of complex firing patterns while still being linear. Linearity is important because it maintains the distribution of the random variables and still allows maximum likelihood methods to be used. In this study, we show that although convexity of the negative log-likelihood function is not guaranteed for this model, the minimum of this function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (the r-algorithm with space dilation) usually reaches the global minimum.
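Shor's r-algorithm with space dilation has no standard SciPy implementation; a common stand-in when convexity of the objective is not guaranteed is multi-start local minimization. A minimal sketch on a made-up one-dimensional non-convex objective (not the paper's likelihood):

```python
import numpy as np
from scipy.optimize import minimize

# Made-up non-convex objective standing in for a negative
# log-likelihood whose convexity is not guaranteed.
def neg_loglik(theta):
    return np.sin(3.0 * theta[0]) + (theta[0] - 0.5) ** 2

# Multi-start local minimization: run a local optimizer from a grid of
# starting points and keep the best result.
starts = np.linspace(-3.0, 3.0, 13)
best = min((minimize(neg_loglik, x0=[s]) for s in starts), key=lambda r: r.fun)
print(best.x[0], best.fun)
```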


Author(s):  
Jay D. Martin

A kriging model can be used as a surrogate to a more computationally expensive model or simulation. It is capable of providing a continuous mathematical relationship that can interpolate a set of observations. One of the major issues with using kriging models is the potentially computationally expensive process of estimating the best model parameters. One of the most common methods used to estimate model parameters is Maximum Likelihood Estimation (MLE). MLE of kriging model parameters requires the use of numerical optimization of a continuous but possibly multi-modal log-likelihood function. This paper presents some enhancements to gradient-based methods to make them more computationally efficient and compares the potential reduction in computational burden. These enhancements include the development of the analytic gradient and Hessian for the log-likelihood equation of a kriging model that uses a Gaussian spatial correlation function. The suggested algorithm is very similar to the Scoring algorithm traditionally used in statistics, a Newton-Raphson gradient-based optimization method.


2016 ◽  
Vol 11 (10) ◽  
pp. 5697-5704
Author(s):  
Mohammed Sari Alsukaini ◽  
Alkreemawi khazaal Walaa ◽  
Wang Xiang Jun

We study n independent stochastic processes (x_i(t), t ∈ [0, T₁], i = 1, …, n) defined by a stochastic differential equation with diffusion coefficients depending nonlinearly on random variables φ_i and μ_i (the random effects). The distributions of the random effects φ_i and μ_i depend on unknown parameters, which are to be estimated from the continuous observations of the processes x_i(t). When the distributions of the random effects φ and μ are Gaussian and exponential, respectively, we obtain an explicit formula for the likelihood function, and the asymptotic properties (consistency and asymptotic normality) of the maximum likelihood estimator (MLE) are derived as n tends to infinity.
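The consistency claim can be sketched in miniature under a simplifying (and here entirely hypothetical) assumption: each observed process reduces to a noisy sufficient statistic for its Gaussian random effect, so the marginal likelihood is Gaussian and the MLE of the population mean is the sample mean across processes:

```python
import numpy as np

rng = np.random.default_rng(2)

def mle_mu(n, mu=1.5, omega=0.5, v=0.2):
    # Hypothetical reduction: process i yields a statistic phihat_i that is
    # N(phi_i, v) given its random effect phi_i ~ N(mu, omega^2); marginally
    # phihat_i ~ N(mu, omega^2 + v), so the MLE of mu is the sample mean.
    phi = rng.normal(mu, omega, size=n)
    phihat = rng.normal(phi, np.sqrt(v))
    return phihat.mean()

# estimation error shrinks as the number of processes n grows
print(abs(mle_mu(100) - 1.5), abs(mle_mu(100_000) - 1.5))
```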


2013 ◽  
Vol 2013 ◽  
pp. 1-13 ◽  
Author(s):  
Helena Mouriño ◽  
Maria Isabel Barão

Missing-data problems are extremely common in practice. To achieve reliable inferential results, we need to take this feature of the data into account. Suppose that the univariate data set under analysis has missing observations. This paper examines the impact of selecting an auxiliary complete data set—whose underlying stochastic process is to some extent interdependent with the former—to improve the efficiency of the estimators of the relevant model parameters. The Vector AutoRegressive (VAR) model has proved to be an extremely useful tool for capturing the dynamics of bivariate time series. We propose maximum likelihood estimators for the parameters of the VAR(1) model based on a monotone missing-data pattern. The estimators' precision is also derived. Afterwards, we compare the bivariate modelling scheme with its univariate counterpart. More precisely, the univariate data set with missing observations is modelled by an AutoRegressive Moving Average (ARMA(2,1)) model. We also analyse the behaviour of the AutoRegressive model of order one, AR(1), due to its practical importance. We focus on the mean value of the main stochastic process. Through simulation studies, we conclude that the estimator based on the VAR(1) model is preferable to those derived in the univariate context.
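On complete data, the conditional Gaussian maximum likelihood estimator of the VAR(1) coefficient matrix reduces to multivariate least squares of y_t on y_{t−1}; the paper's contribution is extending estimation to a monotone missing-data pattern. A complete-data sketch with simulated series (the coefficient matrix and noise scale are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])              # hypothetical stable VAR(1) matrix
n = 20_000
y = np.zeros((n, 2))
for t in range(1, n):
    y[t] = A @ y[t - 1] + rng.normal(scale=0.1, size=2)

# Conditional Gaussian MLE of A: multivariate least squares of y_t on y_{t-1}.
X, Y = y[:-1], y[1:]
A_hat = np.linalg.solve(X.T @ X, X.T @ Y).T
print(A_hat)
```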


2012 ◽  
Vol 2 (1) ◽  
pp. 7 ◽  
Author(s):  
Andrzej Kijko

This work is focused on the Bayesian procedure for the estimation of the regional maximum possible earthquake magnitude <em>m</em><sub>max</sub>. The paper briefly discusses the currently used Bayesian procedure for <em>m</em><sub>max</sub>, as developed by Cornell, and a statistically justifiable alternative approach is suggested. The fundamental problem in the application of the current Bayesian formalism for <em>m</em><sub>max</sub> estimation is that one of the components of the posterior distribution is the sample likelihood function, for which the range of observations (earthquake magnitudes) depends on the unknown parameter <em>m</em><sub>max</sub>. This dependence violates the regularity conditions of maximum likelihood estimation. The resulting likelihood function, therefore, reaches its maximum at the maximum observed earthquake magnitude <em>m</em><sup>obs</sup><sub>max</sub> and not at the required maximum <em>possible</em> magnitude <em>m</em><sub>max</sub>. Since the sample likelihood function is a key component of the posterior distribution, the posterior estimate of <em>m</em><sub>max</sub> is biased. The degree of the bias and its sign depend on the applied Bayesian estimator, the quantity of information provided by the prior distribution, and the sample likelihood function. It has been shown that if the maximum posterior estimate is used, the bias is negative and the resulting underestimation of <em>m</em><sub>max</sub> can be as large as 0.5 units of magnitude. This study explores only the maximum posterior estimate of <em>m</em><sub>max</sub>, which is conceptually close to classic maximum likelihood estimation. However, the conclusions regarding the shortfall of the current Bayesian procedure are applicable to all Bayesian estimators, <em>e.g.</em> the posterior mean and posterior median. A simple, <em>ad hoc</em> solution of this non-regular maximum likelihood problem is also presented.
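The non-regularity can be reproduced in miniature with a Uniform(0, b) sample, where the upper bound b plays the role of m_max (parameter values hypothetical): the likelihood b⁻ⁿ on b ≥ max(x) peaks at the sample maximum, which sits below the true bound with probability one.

```python
import numpy as np

rng = np.random.default_rng(4)

# Likelihood of Uniform(0, b) data is b**-n for b >= max(x), so the MLE
# is the sample maximum -- biased below the true bound.
b_true, n = 7.0, 50
est = np.array([rng.uniform(0.0, b_true, size=n).max() for _ in range(2000)])
print(est.mean())   # ≈ b_true * n / (n + 1): systematically low
```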


2020 ◽  
Vol 9 (1) ◽  
pp. 61-81
Author(s):  
Lazhar BENKHELIFA

A new lifetime model with four positive parameters, called the Weibull Birnbaum-Saunders distribution, is proposed. The proposed model extends the Birnbaum-Saunders distribution and provides great flexibility in modeling data in practice. Some mathematical properties of the new distribution are obtained, including expansions for the cumulative and density functions, moments, the generating function, mean deviations, order statistics, and reliability. Estimation of the model parameters is carried out by the maximum likelihood method. A simulation study is presented to show the performance of the maximum likelihood estimates of the model parameters. The flexibility of the new model is examined by applying it to two real data sets.
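A sketch of the shape of such a simulation study: the Weibull Birnbaum-Saunders density is not reproduced here, so the plain Weibull distribution and SciPy's built-in MLE (`fit`) serve as a stand-in, with hypothetical true parameter values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Repeatedly simulate, fit by maximum likelihood, and record the
# estimation error of the shape parameter.
shape_true, scale_true = 1.8, 2.5       # hypothetical true parameters
errors = []
for _ in range(200):
    x = stats.weibull_min.rvs(shape_true, scale=scale_true,
                              size=500, random_state=rng)
    c_hat, _, _ = stats.weibull_min.fit(x, floc=0)   # fix location at 0
    errors.append(c_hat - shape_true)
print(np.mean(errors))   # near zero: the MLE recovers the shape parameter
```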


Author(s):  
Tu Xu ◽  
Jorge Laval

This paper analyzes the impact of uphill grades on the acceleration drivers choose to impose on their vehicles. Statistical inference is based on maximum likelihood estimation of a two-regime stochastic car-following model using Next Generation SIMulation (NGSIM) data. Previous models assume that the loss in acceleration on uphill grades is given by the effects of gravity. We find evidence that this is not the case for car drivers, who tend to overcome half of the gravitational effects by using more engine power. Truck drivers compensate for only 5% of the loss, possibly because of limited engine power. This indicates that current models are not only severely overestimating the operational impacts that uphill grades have on regular vehicles but also underestimating their environmental impacts. We also find that car-following model parameters differ significantly among shoulder, median, and middle lanes, but more data are needed to understand clearly why this happens.
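The size of the effect is easy to put in numbers: on a grade G the gravitational acceleration loss is roughly g·G (small-angle approximation), and the compensation fractions above reduce the effective loss by β ≈ 0.5 for cars and β ≈ 0.05 for trucks. The 4% grade below is a hypothetical example value:

```python
# Effective acceleration loss (1 - beta) * g * G on an uphill grade,
# using the compensation fractions reported for cars and trucks.
g, G = 9.81, 0.04                      # gravity (m/s^2), hypothetical 4% grade
losses = {beta: (1.0 - beta) * g * G for beta in (0.5, 0.05)}
print(losses)   # effective loss in m/s^2: cars (beta=0.5) vs trucks (beta=0.05)
```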


2019 ◽  
Vol 36 (10) ◽  
pp. 2352-2357
Author(s):  
David A Shaw ◽  
Vu C Dinh ◽  
Frederick A Matsen

Maximum likelihood estimation in phylogenetics requires a means of handling unknown ancestral states. Classical maximum likelihood averages over these unknown intermediate states, leading to provably consistent estimation of the topology and continuous model parameters. Recently, a computationally efficient approach has been proposed to jointly maximize over these unknown states and phylogenetic parameters. Although this method of joint maximum likelihood estimation can obtain estimates more quickly, its properties as an estimator are not yet clear. In this article, we show that this method of jointly estimating phylogenetic parameters along with ancestral states is not consistent in general. We find a sizeable region of parameter space that generates data on a four-taxon tree for which this joint method estimates the internal branch length to be exactly zero, even in the limit of infinite-length sequences. More generally, we show that this joint method only estimates branch lengths correctly on a set of measure zero. We show empirically that branch length estimates are systematically biased downward, even for short branches.
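Inconsistency from maximizing over latent quantities jointly with parameters has a classical miniature, the Neyman-Scott problem (not the phylogenetic model itself): with one latent mean per group, the joint variance MLE converges to half the true variance no matter how much data accumulates.

```python
import numpy as np

rng = np.random.default_rng(6)

# m groups, two observations each, one latent mean per group. Jointly
# maximizing over the latent means drives the variance estimate to
# sigma2 / 2 rather than sigma2.
sigma2, m = 4.0, 200_000
latent = rng.normal(size=(m, 1))                      # one latent mean per group
x = latent + rng.normal(scale=np.sqrt(sigma2), size=(m, 2))
mu_hat = x.mean(axis=1, keepdims=True)                # joint MLE of each mean
sigma2_joint = ((x - mu_hat) ** 2).mean()
print(sigma2_joint)   # ≈ 2.0 = sigma2 / 2, regardless of m
```

Averaging (marginalizing) over the latent means, as classical phylogenetic maximum likelihood does over ancestral states, removes this bias.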

