Parametric estimators for stationary time series with missing observations

1981 ◽  
Vol 13 (1) ◽  
pp. 129-146 ◽  
Author(s):  
W. Dunsmuir ◽  
P. M. Robinson

Three related estimators are considered for the parametrized spectral density of a discrete-time process X(n), n = 1, 2, · · ·, when observations are not available for all the values n = 1(1)N. Each of the estimators is obtained by maximizing a frequency domain approximation to a Gaussian likelihood, although they do not appear to be the most efficient estimators available because they do not fully utilize the information in the process a(n) which determines whether X(n) is observed or missed. One estimator, called M3, assumes that the second-order properties of a(n) are known; another, M2, lets these be known only up to an unknown parameter vector; the third, M1, requires no model for a(n). Under representative sets of conditions, which allow for both deterministic and stochastic a(n), the strong consistency and asymptotic normality of M1, M2, and M3 are established. The conditions needed for consistency when X(n) is an autoregressive moving-average process are discussed in more detail. It is also shown that in general M1 and M3 are equally efficient asymptotically and M2 is never more efficient, and may be less efficient, than M1 and M3.
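The frequency-domain approximation to the Gaussian likelihood referred to in this abstract is the Whittle likelihood. As a hedged illustration of that general idea only (not the paper's M1, M2, or M3 estimators, which additionally correct for the missingness process a(n)), the following sketch fits the AR(1) coefficient of a fully observed series by minimizing the Whittle criterion over a grid; the simulation setup, variable names, and grid search are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) process X(n) = phi * X(n-1) + e(n), with Var(e) = 1.
phi_true, N = 0.6, 2000
X = np.zeros(N)
e = rng.standard_normal(N)
for n in range(1, N):
    X[n] = phi_true * X[n - 1] + e[n]

# Periodogram at the Fourier frequencies (excluding frequency zero).
freqs = 2 * np.pi * np.arange(1, N // 2) / N
fft = np.fft.rfft(X - X.mean())
I = np.abs(fft[1:N // 2]) ** 2 / (2 * np.pi * N)

def whittle(phi):
    # AR(1) spectral density f(w) = 1 / (2*pi*|1 - phi*exp(-iw)|^2),
    # innovation variance held at its true value 1 for simplicity.
    f = 1.0 / (2 * np.pi * np.abs(1 - phi * np.exp(-1j * freqs)) ** 2)
    # Whittle criterion: sum of log f(w) + I(w)/f(w) over Fourier frequencies.
    return np.sum(np.log(f) + I / f)

grid = np.linspace(-0.95, 0.95, 381)
phi_hat = grid[np.argmin([whittle(p) for p in grid])]
print(phi_hat)  # close to phi_true = 0.6
```

Maximizing the frequency-domain likelihood is equivalent to minimizing this criterion; the missing-data estimators in the paper replace the ordinary periodogram with quantities built from the amplitude-modulated series a(n)X(n).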


1974 ◽  
Vol 11 (1) ◽  
pp. 63-71 ◽  
Author(s):  
R. F. Galbraith ◽  
J. I. Galbraith

Expressions are obtained for the determinant and inverse of the covariance matrix of a set of n consecutive observations on a mixed autoregressive moving average process. Explicit formulae for the inverse of this matrix are given for the general autoregressive process of order p (n ≧ p), and for the first order mixed autoregressive moving average process.
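For the first-order autoregressive case, the explicit inverse is the familiar tridiagonal matrix. The following sketch is our own numerical check of that well-known special case (not the paper's general order-p formulae): it builds the AR(1) covariance matrix from its autocovariances and verifies the tridiagonal inverse and the determinant.

```python
import numpy as np

phi, sigma2, n = 0.5, 1.0, 6

# Autocovariance of AR(1): gamma(h) = sigma2 * phi**|h| / (1 - phi**2).
h = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Gamma = sigma2 * phi ** h / (1 - phi ** 2)

# Known explicit inverse: sigma2 * Gamma^{-1} is tridiagonal, with 1 at the
# two corner diagonal entries, 1 + phi**2 on the rest of the diagonal,
# and -phi on the sub- and super-diagonals.
inv_explicit = np.zeros((n, n))
np.fill_diagonal(inv_explicit, 1 + phi ** 2)
inv_explicit[0, 0] = inv_explicit[-1, -1] = 1.0
for i in range(n - 1):
    inv_explicit[i, i + 1] = inv_explicit[i + 1, i] = -phi
inv_explicit /= sigma2

print(np.allclose(np.linalg.inv(Gamma), inv_explicit))  # -> True

# Determinant: det(Gamma) = sigma2**n / (1 - phi**2).
print(np.isclose(np.linalg.det(Gamma), sigma2 ** n / (1 - phi ** 2)))  # -> True
```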


1985 ◽  
Vol 17 (4) ◽  
pp. 810-840 ◽  
Author(s):  
Jürgen Franke

The maximum-entropy approach to the estimation of the spectral density of a time series has become quite popular during the last decade. It is closely related to the fact that an autoregressive process of order p has maximal entropy among all time series sharing the same autocovariances up to lag p. We give a natural generalization of this result by proving that a mixed autoregressive-moving-average process (ARMA process) of order (p, q) has maximal entropy among all time series sharing the same autocovariances up to lag p and the same impulse response coefficients up to lag q. The latter may be estimated from a finite record of the time series, for example by using a method proposed by Bhansali (1976). Along the way, we also give a result on the existence of ARMA processes with prescribed autocovariances up to lag p and impulse response coefficients up to lag q.
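The classical route from prescribed autocovariances to the maximum-entropy AR(p) model is the Yule–Walker equations. The sketch below is our own illustration of that pure-AR special case (with made-up target autocovariances, not data from the paper): it solves the Yule–Walker system for an AR(2) and then verifies that the fitted model reproduces the prescribed autocovariances up to lag p.

```python
import numpy as np

# Target autocovariances up to lag p = 2 (assumed to be positive definite).
gamma = np.array([3.0, 1.8, 0.6])
p = len(gamma) - 1

# Yule-Walker equations: Gamma_p @ phi = (gamma(1), ..., gamma(p)).
Gamma_p = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
phi = np.linalg.solve(Gamma_p, gamma[1:])
sigma2 = gamma[0] - phi @ gamma[1:]

# Check: the fitted AR(2) reproduces gamma(0..2). Its autocovariances obey
#   gamma(0) = phi1*gamma(1) + phi2*gamma(2) + sigma2
#   gamma(1) = phi1*gamma(0) + phi2*gamma(1)
#   gamma(2) = phi1*gamma(1) + phi2*gamma(0)
# which we solve as a linear system for gamma(0..2).
A = np.array([
    [1.0,     -phi[0],       -phi[1]],
    [-phi[0],  1.0 - phi[1],  0.0],
    [-phi[1], -phi[0],        1.0],
])
gamma_model = np.linalg.solve(A, np.array([sigma2, 0.0, 0.0]))
print(np.allclose(gamma_model, gamma))  # -> True
```

The paper's generalization adds matching of the impulse response coefficients up to lag q, which the Yule–Walker equations alone do not handle.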


1994 ◽  
Vol 44 (1-2) ◽  
pp. 11-28 ◽  
Author(s):  
A. K. Basu ◽  
J. K. Das

This paper develops a Bayesian formulation of the Kalman filter when the errors in both the observation equation and the system (or state) equation have elliptically contoured distributions, using some recent results in multivariate analysis. Under this setup for autoregressive moving-average processes in time series, estimation of parameters in the case of missing observations, as well as prediction of the missing observations themselves, is dealt with. Two illustrative examples are presented using an AR(1) model and an ARMA(1, 1) model.
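As a simplified, hedged illustration of filtering a state-space model with missing observations (Gaussian errors only, not the elliptically contoured setting of the paper, and an AR(1) state rather than a general ARMA), a scalar Kalman filter that simply skips the update step at missing time points might look like this; all parameter values and names are our own:

```python
import numpy as np

# State:  x(t) = phi * x(t-1) + w(t),  w ~ N(0, q)   (AR(1) system equation)
# Obs:    y(t) = x(t) + v(t),          v ~ N(0, r)   (observation equation)
phi, q, r = 0.8, 1.0, 0.5
rng = np.random.default_rng(1)

T = 200
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal(scale=np.sqrt(q))
y = x + rng.normal(scale=np.sqrt(r), size=T)
observed = rng.random(T) > 0.2          # roughly 20% of observations missing

m, P = 0.0, q / (1 - phi ** 2)          # prior: stationary mean and variance
means = np.empty(T)
for t in range(T):
    # Predict step.
    if t > 0:
        m, P = phi * m, phi ** 2 * P + q
    # Update step, performed only when y(t) is observed; at a missing time
    # point the one-step prediction itself serves as the estimate.
    if observed[t]:
        K = P / (P + r)
        m, P = m + K * (y[t] - m), (1 - K) * P
    means[t] = m

rmse_filter = np.sqrt(np.mean((means - x) ** 2))
rmse_naive = np.sqrt(np.mean((np.where(observed, y, 0.0) - x) ** 2))
print(rmse_filter < rmse_naive)  # filter beats raw/zero-filled observations
```

The Bayesian treatment in the paper replaces the Gaussian conditional distributions used here with elliptically contoured ones and derives the corresponding posterior quantities.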


2004 ◽  
Vol 41 (A) ◽  
pp. 375-382 ◽  
Author(s):  
Peter J. Brockwell

Using the kernel representation of a continuous-time Lévy-driven ARMA (autoregressive moving average) process, we extend the class of nonnegative Lévy-driven Ornstein–Uhlenbeck processes employed by Barndorff-Nielsen and Shephard (2001) to allow for nonmonotone autocovariance functions. We also consider a class of fractionally integrated Lévy-driven continuous-time ARMA processes obtained by a simple modification of the kernel of the continuous-time ARMA process. Asymptotic properties of the kernel and of the autocovariance function are derived.
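A second-order kernel with complex autoregressive roots already produces the nonmonotone autocovariances mentioned in this abstract. The sketch below is our own numerical illustration (arbitrary parameter values, CAR(2) rather than the paper's general Lévy-driven CARMA setting): it evaluates the kernel and a Riemann-sum approximation to the autocovariance, and checks that the autocovariance oscillates rather than decaying monotonically as in the Ornstein–Uhlenbeck case.

```python
import numpy as np

# CAR(2) kernel with complex AR roots lambda = -alpha +/- i*omega:
#   g(t) = exp(-alpha*t) * sin(omega*t) / omega  for t >= 0, and 0 for t < 0.
alpha, omega = 0.3, 2.0

dt = 0.001
t = np.arange(0, 30, dt)                 # truncate the kernel's infinite support
g = np.exp(-alpha * t) * np.sin(omega * t) / omega

# Autocovariance gamma(h) = sigma^2 * integral_0^inf g(u) g(u+h) du
# (with sigma^2 = 1), approximated by a Riemann sum on the truncated grid.
lags = np.arange(0, 5, 0.1)
steps = (lags / dt).astype(int)
gamma = np.array([np.sum(g[: len(g) - s] * g[s:]) * dt for s in steps])

# Complex AR roots make the autocovariance oscillate: it is nonmonotone and
# even takes negative values, unlike the OU kernel g(t) = exp(-lambda*t),
# whose autocovariance decays monotonically.
print(gamma[0] > 0, gamma.min() < 0, np.any(np.diff(gamma) > 0))
```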

