Mutual Information and k-Nearest Neighbors Approximator for Time Series Prediction

Author(s):  
Antti Sorjamaa ◽  
Jin Hao ◽  
Amaury Lendasse

1995 ◽
Vol 06 (02) ◽  
pp. 145-170 ◽  
Author(s):  
ALEX AUSSEM ◽  
FIONN MURTAGH ◽  
MARC SARAZIN

Dynamical Recurrent Neural Networks (DRNN) (Aussem 1995a) are a class of fully recurrent networks obtained by modeling synapses as autoregressive filters. By virtue of their internal dynamics, these networks approximate the underlying law governing the time series by a system of nonlinear difference equations of internal variables. They therefore provide history-sensitive forecasts without having to be explicitly fed with external memory. The model is trained by a local and recursive error propagation algorithm called temporal-recurrent-backpropagation. The efficiency of the procedure benefits from the exponential decay of the gradient terms backpropagated through the adjoint network. We assess the predictive ability of the DRNN model with meteorological and astronomical time series recorded around the candidate observation sites for the future VLT telescope. The hope is that reliable environmental forecasts provided by the model will allow modern telescopes to be preset, a few hours in advance, in the most suitable instrumental mode. In this perspective, the model is first appraised on precipitation measurements against traditional nonlinear AR and ARMA techniques based on feedforward networks. We then tackle a complex problem, namely the prediction of astronomical seeing, known to be a very erratic time series. A fuzzy coding approach is used to reduce the complexity of the underlying laws governing the seeing. Then, a fuzzy correspondence analysis is carried out to explore the internal relationships in the data. Based on a carefully selected set of meteorological variables at the same time-point, a nonlinear multiple regression, termed nowcasting (Murtagh et al. 1993, 1995), is carried out on the fuzzily coded seeing records. The DRNN is shown to outperform the fuzzy k-nearest neighbors method.
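The core mechanism described above can be illustrated with a minimal sketch: a synapse modeled as a first-order autoregressive filter feeding a recurrent unit, so the unit's output depends on input history without any external memory buffer. The function names, weights, and filter order here are illustrative assumptions, not values from the paper.

```python
import math

def ar_synapse(inputs, w=0.8, a=0.5):
    """Synapse as a first-order AR filter: y[t] = a*y[t-1] + w*x[t]."""
    y, out = 0.0, []
    for x in inputs:
        y = a * y + w * x
        out.append(y)
    return out

def drnn_unit(inputs):
    """One recurrent unit: tanh of the filtered input plus its own
    delayed activation, giving a history-sensitive response."""
    s, states = 0.0, []
    for u in ar_synapse(inputs):
        s = math.tanh(u + 0.3 * s)   # 0.3: illustrative self-feedback weight
        states.append(s)
    return states

# A single impulse keeps influencing later outputs through the filter state.
print(drnn_unit([1.0, 0.0, 0.0, 0.0]))
```

Note how the impulse response decays geometrically through the synaptic filter; this exponential decay of internal state is also what keeps the backpropagated gradient terms bounded in the training procedure.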


2010 ◽  
Vol 40-41 ◽  
pp. 930-936 ◽  
Author(s):  
Cong Gui Yuan ◽  
Xin Zheng Zhang ◽  
Shu Qiong Xu

A nonlinear correlative time series prediction method is presented in this paper. It is based on the mutual information of the time series and an orthogonal polynomial basis neural network. The inputs of the network are selected by mutual information, and orthogonal polynomial bases are used as activation functions. The network is trained by an error iterative learning algorithm. The proposed method is tested on two well-known nonlinear time series prediction problems: the gas furnace data time series and the Mackey-Glass time series.
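The input-selection step can be sketched as follows: estimate the mutual information between each candidate lagged input x[t-k] and the target x[t] with a simple fixed-bin histogram, then keep the highest-scoring lags as network inputs. The bin count, lag range, and toy AR(1) series are illustrative assumptions, not the paper's settings.

```python
import math
import random

def hist_mi(xs, ys, bins=8):
    """Histogram estimate of the mutual information I(X;Y) in nats."""
    lo_x, hi_x = min(xs), max(xs)
    lo_y, hi_y = min(ys), max(ys)
    def bx(v): return min(bins - 1, int((v - lo_x) / (hi_x - lo_x + 1e-12) * bins))
    def by(v): return min(bins - 1, int((v - lo_y) / (hi_y - lo_y + 1e-12) * bins))
    n = len(xs)
    pxy, px, py = {}, {}, {}
    for x, y in zip(xs, ys):
        i, j = bx(x), by(y)
        pxy[(i, j)] = pxy.get((i, j), 0) + 1
        px[i] = px.get(i, 0) + 1
        py[j] = py.get(j, 0) + 1
    # I(X;Y) = sum p(i,j) * log( p(i,j) / (p(i) p(j)) )
    return sum((c / n) * math.log(c * n / (px[i] * py[j]))
               for (i, j), c in pxy.items())

def select_lags(series, max_lag=5, keep=2):
    """Rank candidate lags by MI with the one-step-ahead target."""
    scores = [(hist_mi(series[:-k], series[k:]), k)
              for k in range(1, max_lag + 1)]
    return [k for _, k in sorted(scores, reverse=True)[:keep]]

# Toy AR(1)-like series: lag 1 should carry the most information.
random.seed(0)
s, v = [], 0.0
for _ in range(2000):
    v = 0.9 * v + random.gauss(0, 1)
    s.append(v)
print(select_lags(s))
```

The selected lags would then become the inputs of the network; the orthogonal-polynomial activation and the error iterative training are separate steps not sketched here.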


2009 ◽  
Vol 19 (12) ◽  
pp. 4197-4215 ◽  
Author(s):  
ANGELIKI PAPANA ◽  
DIMITRIS KUGIUMTZIS

We study some of the most commonly used mutual information estimators, based on histograms of fixed or adaptive bin size, k-nearest neighbors, and kernels, and focus on the optimal selection of their free parameters. We examine the consistency of the estimators (convergence to a stable value as the time series length increases) and the degree of deviation among the estimators. The optimization of parameters is assessed by quantifying the deviation of the estimated mutual information from its true or asymptotic value as a function of the free parameter. Moreover, some commonly used criteria for parameter selection are evaluated for each estimator. The comparative study is based on Monte Carlo simulations on time series from several linear and nonlinear systems of different lengths and noise levels. The results show that the k-nearest neighbors estimator is the most stable and the least affected by its method-specific parameter. A data-adaptive criterion for optimal binning is suggested for linear systems, but it is found to be rather conservative for nonlinear systems. It turns out that the binning and kernel estimators give the least deviation in identifying the lag of the first minimum of mutual information from nonlinear systems, and are stable in the presence of noise.
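The k-nearest-neighbors estimator singled out above is typically the Kraskov-Stögbauer-Grassberger (KSG) construction; a brute-force sketch is given below. The free parameter is k, the number of neighbors. The series-length pseudo-polynomial digamma approximation, the sample size, and the test data are illustrative choices, not the study's setup.

```python
import math
import random

def digamma(x):
    """Digamma via recurrence plus an asymptotic expansion (approximation)."""
    r = 0.0
    while x < 6:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1/12 - f * (1/120 - f / 252))

def ksg_mi(xs, ys, k=4):
    """Brute-force KSG (algorithm 1) MI estimate in nats, O(n^2)."""
    n = len(xs)
    acc = 0.0
    for i in range(n):
        # Distance to the k-th neighbor in the max-norm on the joint space.
        d = sorted(max(abs(xs[j] - xs[i]), abs(ys[j] - ys[i]))
                   for j in range(n) if j != i)
        eps = d[k - 1]
        # Neighbor counts within eps in each marginal.
        nx = sum(1 for j in range(n) if j != i and abs(xs[j] - xs[i]) < eps)
        ny = sum(1 for j in range(n) if j != i and abs(ys[j] - ys[i]) < eps)
        acc += digamma(nx + 1) + digamma(ny + 1)
    return digamma(k) + digamma(n) - acc / n

# Sanity check: dependent pair should score well above an independent pair.
random.seed(1)
xs = [random.gauss(0, 1) for _ in range(300)]
ys_dep = [0.9 * x + 0.3 * random.gauss(0, 1) for x in xs]
ys_ind = [random.gauss(0, 1) for _ in range(300)]
mi_dep, mi_ind = ksg_mi(xs, ys_dep), ksg_mi(xs, ys_ind)
print(mi_dep, mi_ind)
```

The estimate is fairly insensitive to k in practice, which is consistent with the stability the study reports for this estimator.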


2012 ◽  
Vol 11 (02) ◽  
pp. 1250018 ◽  
Author(s):  
AIJING LIN ◽  
PENGJIAN SHANG ◽  
GUOCHEN FENG ◽  
BO ZHONG

The purpose of this paper is to forecast the daily closing prices of stock markets from their past sequences. Keeping in mind recent trends and the limitations of previous research, we propose a new technique, empirical mode decomposition combined with k-nearest neighbors (EMD–KNN), for forecasting stock indices. EMD–KNN combines the advantages of KNN and EMD. To demonstrate that the EMD–KNN method is robust, we use the new technique to forecast four stock index time series at a specific time. Detailed experiments are implemented for the proposed forecasting models, in which EMD–KNN is compared with the plain KNN method and ARIMA. The results demonstrate that the proposed EMD–KNN model is more successful than the KNN method and ARIMA in predicting stock closing prices.
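The KNN half of the hybrid can be sketched as a delay-vector forecaster: find the k historical windows closest to the most recent one and average their successors. The EMD half (decompose the series into intrinsic mode functions, forecast each, recombine) is omitted here; the window size m, k, and the toy periodic series are illustrative assumptions.

```python
def knn_forecast(series, m=3, k=2):
    """One-step-ahead forecast from the k nearest delay vectors."""
    target = series[-m:]
    cands = []
    for i in range(len(series) - m):   # final window excluded: no successor
        window = series[i:i + m]
        dist = sum((a - b) ** 2 for a, b in zip(window, target))
        cands.append((dist, series[i + m]))
    cands.sort()                       # nearest windows first
    return sum(nxt for _, nxt in cands[:k]) / k

# A perfectly periodic series: the forecast recovers the next value exactly.
s = [0, 1, 2, 3] * 10
print(knn_forecast(s, m=3, k=2))
```

In the hybrid scheme, a forecaster of this kind would be applied to each (smoother) EMD component separately, which is what makes the combination more effective than plain KNN on the raw, noisy price series.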

