The asymptotic convexity of the negative likelihood function of GARCH models

2006 ◽  
Vol 50 (2) ◽  
pp. 311-331 ◽  
Author(s):  
W.C. Ip ◽  
Heung Wong ◽  
J.Z. Pan ◽  
D.F. Li
Author(s):  
M. Angeles Carnero ◽  
M. Hakan Eratalay

Abstract: This paper analyzes the performance of multiple-step estimators of vector autoregressive multivariate conditional correlation GARCH models by means of Monte Carlo experiments. We show that if innovations are Gaussian, estimating the parameters in multiple steps is a reasonable alternative to maximizing the full likelihood function. Our results also suggest that, for the sample sizes usually encountered in financial econometrics, the differences between the volatility and correlation estimates obtained with the more efficient estimator and the multiple-step estimators are negligible. However, when innovations follow a Student-t distribution, using multiple-step estimators might not be a good idea.
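
The multiple-step idea can be sketched on a toy two-series constant-conditional-correlation (CCC) GARCH setup: univariate GARCH(1,1) volatility parameters are fitted series by series first, and the conditional correlation is then estimated from the standardized residuals. This is a minimal illustration, not the paper's code; the simulated parameter values, the plain CCC specification, and the omission of the VAR mean equation are all simplifying assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simulate_garch(n, omega, alpha, beta):
    """Simulate a GARCH(1,1) series with Gaussian innovations."""
    h, r = np.empty(n), np.empty(n)
    h[0] = omega / (1.0 - alpha - beta)          # stationary variance
    r[0] = np.sqrt(h[0]) * rng.standard_normal()
    for t in range(1, n):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
        r[t] = np.sqrt(h[t]) * rng.standard_normal()
    return r

def cond_var(params, r):
    """Conditional variance path implied by GARCH(1,1) parameters."""
    omega, alpha, beta = params
    h = np.empty_like(r)
    h[0] = r.var()
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return h

def garch_nll(params, r):
    """Gaussian negative log-likelihood of one univariate series."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf                            # enforce stationarity
    h = cond_var(params, r)
    return 0.5 * np.sum(np.log(h) + r ** 2 / h)

# Step 1: fit each series' volatility parameters separately.
r1 = simulate_garch(2000, 0.10, 0.05, 0.90)
r2 = simulate_garch(2000, 0.20, 0.10, 0.80)
fits = [minimize(garch_nll, x0=[0.1, 0.05, 0.8], args=(r,), method="Nelder-Mead")
        for r in (r1, r2)]

# Step 2: under CCC, the correlation MLE is the sample correlation of
# the standardized residuals from step 1.
z = np.vstack([r / np.sqrt(cond_var(f.x, r)) for f, r in zip(fits, (r1, r2))])
rho_hat = np.corrcoef(z)[0, 1]
print(rho_hat)   # near zero here: the two simulated series are independent
```

Swapping `standard_normal` for Student-t draws while keeping the Gaussian likelihood above is the natural way to probe the caveat in the abstract's last sentence.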


Author(s):  
Antara Dasgupta ◽  
Renaud Hostache ◽  
RAAJ Ramasankaran ◽  
Guy J.-P. Schumann ◽  
Stefania Grimaldi ◽  
...  

Author(s):  
Edward P. Herbst ◽  
Frank Schorfheide

Dynamic stochastic general equilibrium (DSGE) models have become one of the workhorses of modern macroeconomics and are extensively used for academic research as well as forecasting and policy analysis at central banks. This book introduces readers to state-of-the-art computational techniques used in the Bayesian analysis of DSGE models. The book covers Markov chain Monte Carlo techniques for linearized DSGE models, novel sequential Monte Carlo methods that can be used for parameter inference, and the estimation of nonlinear DSGE models based on particle filter approximations of the likelihood function. The theoretical foundations of the algorithms are discussed in depth, and detailed empirical applications and numerical illustrations are provided. The book also gives invaluable advice on how to tailor these algorithms to specific applications and assess the accuracy and reliability of the computations. The book is essential reading for graduate students, academic researchers, and practitioners at policy institutions.
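
The particle-filter likelihood approximation mentioned in the blurb can be illustrated on a toy scalar state-space model, where the Kalman filter supplies the exact likelihood for comparison. The model, parameter values, and particle count are illustrative assumptions, not material from the book, and a linearized DSGE model would involve a vector state rather than a scalar.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear Gaussian state-space model standing in for a linearized
# state equation:  x_t = phi*x_{t-1} + w_t,   y_t = x_t + v_t.
phi, q, r, T = 0.9, 0.5, 0.25, 200
x, ys = 0.0, []
for _ in range(T):
    x = phi * x + rng.normal(0.0, np.sqrt(q))
    ys.append(x + rng.normal(0.0, np.sqrt(r)))
y = np.array(ys)

def kalman_loglik(y, phi, q, r):
    """Exact log-likelihood via the (scalar) Kalman filter."""
    m, p, ll = 0.0, q, 0.0                       # x_1 ~ N(0, q) given x_0 = 0
    for obs in y:
        s = p + r                                # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * s) + (obs - m) ** 2 / s)
        k = p / s                                # Kalman gain
        m, p = m + k * (obs - m), (1 - k) * p    # update
        m, p = phi * m, phi ** 2 * p + q         # predict next state
    return ll

def pf_loglik(y, phi, q, r, n_particles=10_000):
    """Bootstrap particle filter approximation of the same likelihood."""
    parts = rng.normal(0.0, np.sqrt(q), n_particles)
    ll = 0.0
    for obs in y:
        w = np.exp(-0.5 * (obs - parts) ** 2 / r) / np.sqrt(2 * np.pi * r)
        ll += np.log(w.mean())                   # incremental likelihood
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())  # resample
        parts = phi * parts[idx] + rng.normal(0.0, np.sqrt(q), n_particles)
    return ll

ll_exact = kalman_loglik(y, phi, q, r)
ll_pf = pf_loglik(y, phi, q, r)
```

For a nonlinear model no Kalman benchmark exists, and the particle approximation is what gets plugged into an MCMC or sequential Monte Carlo sampler over the parameters.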


Author(s):  
T. V. Oblakova

The paper studies the justification of Pearson's criterion for testing the hypothesis that the general population is uniformly distributed. If the distribution parameters are unknown, estimates of the theoretical frequencies are used [1, 2, 3]. In this case the quantile of the chi-square distribution, with the number of degrees of freedom reduced by the number of estimated parameters, is used to determine the upper threshold for accepting the null hypothesis [7]. However, in the case of a uniform law, Pearson's criterion does not extend to composite hypotheses, since the likelihood function cannot be differentiated with respect to the parameters, which is required in the proof of the theorem mentioned [7, 10, 11].

A statistical experiment is proposed in order to study the distribution of the Pearson statistic for samples from a uniform law. The essence of the experiment is that first a statistically significant number of samples of the same type is drawn from a given uniform distribution, then the Pearson statistic is computed for each sample, and finally the distribution law of the resulting collection of statistics is studied. Modeling and processing of the samples were performed in the Mathcad 15 package using the built-in random number generator and array-processing facilities.

In all the experiments carried out, the hypothesis that the Pearson statistics follow the chi-square law was unambiguously accepted (confidence level 0.95). It is also shown statistically that the number of degrees of freedom need not be corrected in the case of a composite hypothesis. That is, the maximum likelihood estimates of the uniform law's parameters, implicitly used in computing the Pearson statistic, do not affect the number of degrees of freedom, which is thus determined by the number of grouping intervals only.
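
The core of the experiment can be re-created in a few lines (in Python rather than Mathcad; the sample sizes, bin count, and uniform bounds below are illustrative choices, not the paper's): draw many uniform samples, replace the bounds by their maximum likelihood estimates (the sample minimum and maximum), compute the Pearson statistic for each sample, and inspect the resulting empirical distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

a, b = 2.0, 5.0                       # true (treated as unknown) bounds
n_samples, n_obs, k_bins = 2000, 500, 10

pearson = np.empty(n_samples)
for i in range(n_samples):
    x = rng.uniform(a, b, n_obs)
    a_hat, b_hat = x.min(), x.max()   # MLEs of the uniform bounds
    edges = np.linspace(a_hat, b_hat, k_bins + 1)
    observed, _ = np.histogram(x, edges)
    expected = n_obs / k_bins         # equal-width bins => equal probability
    pearson[i] = np.sum((observed - expected) ** 2 / expected)

# The abstract's conclusion: the statistics behave like chi-square with
# k-1 degrees of freedom, with no reduction for the two estimated
# parameters (the mean of chi2(k-1) is k-1).
print(pearson.mean())                 # close to k_bins - 1 = 9
ks = stats.kstest(pearson, stats.chi2(k_bins - 1).cdf)
```

The `ks.pvalue` can then be compared against the 0.95 confidence level used in the paper; repeating the loop with the df-corrected reference law `chi2(k_bins - 3)` shows why that correction does not fit here.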


2017 ◽  
Author(s):  
Darren Rhodes

Time is a fundamental dimension of human perception, cognition and action, as perceiving and processing temporal information is essential for everyday activities and survival. Innumerable studies have investigated the perception of time over the last 100 years, but the neural and computational bases of temporal processing remain unknown. We first present a brief history of time-perception research and its methods, then discuss the psychophysical approach to time, extant models of time perception, and the inconsistencies between these accounts that this review aims to bridge. Recent work has advocated a Bayesian approach to time perception. This framework has been applied to both duration and perceived timing, where prior expectations about when a stimulus might occur in the future (prior distribution) are combined with current sensory evidence (likelihood function) in order to generate the perception of temporal properties (posterior distribution). In general, these models predict that the brain uses temporal expectations to bias perception so that stimuli are 'regularized', i.e. appear more like what has been seen before. Evidence for this framework has come from human psychophysical testing (experimental methods to quantify behaviour in the perceptual system). Finally, we discuss how these models can guide future research in temporal perception.
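
The prior-likelihood combination described above has a closed form in the Gaussian case, which makes the 'regularization' prediction easy to see: the posterior mean is a precision-weighted average pulled toward the prior. The numbers below (a 500 ms expected onset, a noisy 600 ms measurement) are purely illustrative.

```python
# Gaussian prior (temporal expectation) x Gaussian likelihood (sensory
# measurement) -> Gaussian posterior, all in milliseconds.
prior_mean, prior_sd = 500.0, 50.0     # expectation built from past trials
sensed, sensory_sd = 600.0, 100.0      # one noisy sensory measurement

# Precision weighting: the sharper source gets the larger weight.
w = prior_sd ** -2 / (prior_sd ** -2 + sensory_sd ** -2)
post_mean = w * prior_mean + (1 - w) * sensed
post_sd = (prior_sd ** -2 + sensory_sd ** -2) ** -0.5

print(post_mean)   # 520.0 -- perceived timing is biased toward the prior
print(post_sd)     # ~44.7 -- the posterior is sharper than either source
```

Duration models work the same way: repeated exposure to a distribution of intervals builds the prior, and perceived durations migrate toward its mean, the central-tendency effect these models are often fitted to.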

