Two Tests for Dependence (of Unknown Form) between Time Series

Entropy ◽  
2019 ◽  
Vol 21 (9) ◽  
pp. 878
Author(s):  
M. Victoria Caballero-Pintado ◽  
Mariano Matilla-García ◽  
Jose M. Rodríguez ◽  
Manuel Ruiz Marín

This paper proposes two new nonparametric tests for independence between time series. Both tests are based on symbolic analysis, specifically on the symbolic correlation integral, in order to be robust to potential unknown nonlinearities. The first test is developed for a scenario in which each time series under consideration is internally independent, so the interest lies in ascertaining whether two internally independent time series share a relationship of unknown form. This is especially relevant because the test is nuisance-parameter free, as proved in the paper. The second proposed statistic tests for independence among variables while allowing each time series to exhibit within-dependence. Monte Carlo experiments are conducted to show the empirical properties of the tests.
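The symbolic correlation integral at the heart of these tests can be sketched as follows. This is a minimal illustration, not the authors' exact construction: it assumes ordinal (permutation) patterns as the symbolization and counts the fraction of pairs of m-histories sharing the same symbol; the function names are hypothetical.

```python
import numpy as np

def ordinal_symbols(x, m=3):
    """Map each overlapping m-history of x to its ordinal (permutation) pattern."""
    n = len(x) - m + 1
    return [tuple(np.argsort(x[i:i + m])) for i in range(n)]

def symbolic_correlation_integral(x, m=3):
    """Fraction of pairs of m-histories that share the same ordinal symbol."""
    syms = ordinal_symbols(x, m)
    n = len(syms)
    matches = sum(syms[i] == syms[j] for i in range(n) for j in range(i + 1, n))
    return 2.0 * matches / (n * (n - 1))

rng = np.random.default_rng(0)
sci_iid = symbolic_correlation_integral(rng.standard_normal(200), m=3)
```

For an i.i.d. series with m = 3 all six patterns are equally likely, so the integral should hover near 1/6; a monotone series produces a single pattern and an integral of exactly 1.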

2001 ◽  
Vol 17 (1) ◽  
pp. 156-187 ◽  
Author(s):  
Atsushi Inoue

This paper proposes nonparametric tests of change in the distribution function of a time series. The limiting null distributions of the test statistics depend on a nuisance parameter, and critical values cannot be tabulated a priori. To circumvent this problem, a new simulation-based statistical method is developed. The validity of our simulation procedure is established in terms of size, local power, and test consistency. The finite-sample properties of the proposed tests are evaluated in a set of Monte Carlo experiments, and the distributional stability in financial markets is examined.
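When the limiting null distribution depends on a nuisance parameter, critical values can be obtained by simulation rather than from tables. The sketch below is a generic illustration of that idea, not Inoue's specific procedure: it uses a simple rank-CUSUM change-in-distribution statistic (hypothetical helper names) and simulates its null distribution under an i.i.d. assumption.

```python
import numpy as np

def cusum_of_ranks(x):
    """Sup-type statistic: maximal CUSUM of centered ranks, a simple
    change-in-distribution statistic."""
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1          # ranks 1..n
    s = np.cumsum(ranks - (n + 1) / 2.0)           # centered partial sums
    return np.max(np.abs(s)) / n ** 1.5

def simulated_critical_value(stat, n, alpha=0.05, reps=500, seed=0):
    """Monte Carlo critical value: simulate the statistic under an i.i.d. null."""
    rng = np.random.default_rng(seed)
    draws = np.array([stat(rng.standard_normal(n)) for _ in range(reps)])
    return np.quantile(draws, 1 - alpha)

cv = simulated_critical_value(cusum_of_ranks, n=200)
rng = np.random.default_rng(1)
shifted = np.concatenate([rng.standard_normal(100), rng.standard_normal(100) + 3])
stat_break = cusum_of_ranks(shifted)
```

A series with a large mid-sample location shift should produce a statistic well above the simulated 5% critical value.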


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Jorge Martínez Compains ◽  
Ignacio Rodríguez Carreño ◽  
Ramazan Gençay ◽  
Tommaso Trani ◽  
Daniel Ramos Vilardell

Abstract Johansen’s Cointegration Test (JCT) performs remarkably well in finding stable bivariate cointegration relationships. Nonetheless, the JCT is not necessarily designed to detect such relationships in the presence of non-linear patterns such as structural breaks or cycles that fall in the low-frequency portion of the spectrum. Seasonal adjustment procedures might not detect such non-linear patterns, and we thus expose the difficulty of identifying cointegrating relations under the traditional use of the JCT. Through several Monte Carlo experiments, we show that wavelets can strengthen the JCT framework more than traditional seasonal adjustment methodologies can, allowing hidden cointegrating relationships to be identified. Moreover, we confirm these results using seasonally adjusted time series such as US consumption and income, gross national product (GNP) and money supply M1, and GNP and M2.
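The wavelet-filtering idea, extracting the low-frequency component of each series before testing for cointegration, can be illustrated with a crude Haar-style approximation based on repeated pairwise averaging. This is a stand-in sketch under stated assumptions, not the authors' procedure (which would use a proper wavelet decomposition and the Johansen test itself); the function name is hypothetical.

```python
import numpy as np

def haar_approximation(x, levels=3):
    """Crude low-frequency component: repeatedly take pairwise (Haar)
    averages, then upsample back to the original length by repetition.
    High-frequency detail is discarded at each level."""
    a = np.asarray(x, float)
    for _ in range(levels):
        if len(a) % 2:                       # pad to even length
            a = np.append(a, a[-1])
        a = 0.5 * (a[0::2] + a[1::2])        # Haar approximation coefficients
    return np.repeat(a, 2 ** levels)[:len(x)]

rng = np.random.default_rng(0)
trend = np.linspace(0.0, 10.0, 240)
noisy = trend + 0.5 * rng.standard_normal(240)
smooth = haar_approximation(noisy, levels=3)
```

The filtered series tracks the underlying low-frequency trend closely, which is the component the JCT is then asked to relate across series.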


Methodology ◽  
2010 ◽  
Vol 6 (2) ◽  
pp. 83-92 ◽  
Author(s):  
Tetiana Stadnytska

Time series with deterministic and stochastic trends possess different memory characteristics and exhibit dissimilar long-range development. Trending series are nonstationary and must be transformed to achieve stationarity. The choice of the correct transformation depends on the patterns of nonstationarity in the data. Inappropriate transformations have consequences for subsequent analysis and should be avoided. The objectives of this article are (1) to introduce unit root testing procedures, (2) to evaluate strategies for distinguishing between stochastic and deterministic alternatives by means of Monte Carlo experiments, and (3) to demonstrate their implementation on empirical examples using SAS for Windows.
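The unit root tests discussed here revolve around regressions of the Dickey-Fuller type. As a minimal sketch (no lag augmentation, no trend term, and not the article's SAS implementation), the t-statistic on the lagged level can be computed by ordinary least squares:

```python
import numpy as np

def dickey_fuller_t(x):
    """t-statistic on rho in: dx_t = c + rho * x_{t-1} + e_t.
    Strongly negative values speak against a unit root."""
    x = np.asarray(x, float)
    dx, lag = np.diff(x), x[:-1]
    X = np.column_stack([np.ones_like(lag), lag])
    beta = np.linalg.lstsq(X, dx, rcond=None)[0]
    resid = dx - X @ beta
    s2 = resid @ resid / (len(dx) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

# Stationary AR(1) example: the statistic should be far below typical
# Dickey-Fuller critical values (about -2.9 at the 5% level).
rng = np.random.default_rng(1)
e = rng.standard_normal(500)
ar = np.empty(500)
ar[0] = e[0]
for t in range(1, 500):
    ar[t] = 0.5 * ar[t - 1] + e[t]
t_stationary = dickey_fuller_t(ar)
```

Note that the statistic does not follow a standard t distribution under the unit root null; that is precisely why tabulated Dickey-Fuller critical values (or simulation) are needed.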


2021 ◽  
Vol 2 (2) ◽  
pp. 01-07
Author(s):  
Halim Zeghdoudi ◽  
Madjda Amrani

In this work, we study a well-known class of volatility models: the mixed-memory generalized autoregressive conditional heteroscedasticity (MMGARCH) model for nonlinear time series. The MMGARCH model mixes two components, a short-memory GARCH and a long-memory GARCH. The main objective of this work is to find the best model among the mixtures considered (long memory with long memory, short memory with short memory, and short memory with long memory); the existence of a stationary solution is also discussed. Monte Carlo experiments confirm the theoretical results. In addition, an empirical application of the MMGARCH(1, 1) model to the daily DOW and NASDAQ indices illustrates its capabilities; we find that the mixture of APARCH and EGARCH is superior to any other model tested because it produces the smallest errors.
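A single short-memory GARCH(1,1) component, one of the building blocks mixed in the MMGARCH model, can be simulated in a few lines. This is a generic textbook recursion, not the paper's mixed-memory specification:

```python
import numpy as np

def simulate_garch(n, omega=0.1, alpha=0.1, beta=0.8, seed=0):
    """Simulate a GARCH(1,1): sigma2_t = omega + alpha*e_{t-1}^2 + beta*sigma2_{t-1},
    e_t = sqrt(sigma2_t) * z_t with z_t standard normal. Requires alpha + beta < 1."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    e = np.empty(n)
    sigma2 = np.empty(n)
    sigma2[0] = omega / (1 - alpha - beta)      # unconditional variance
    e[0] = np.sqrt(sigma2[0]) * z[0]
    for t in range(1, n):
        sigma2[t] = omega + alpha * e[t - 1] ** 2 + beta * sigma2[t - 1]
        e[t] = np.sqrt(sigma2[t]) * z[t]
    return e, sigma2

returns, variances = simulate_garch(4000)
```

With omega = 0.1 and alpha + beta = 0.9 the unconditional variance is omega / (1 - alpha - beta) = 1, so the sample variance of a long simulated path should be close to 1.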


2021 ◽  
Author(s):  
Stefano Farris ◽  
Roberto Deidda ◽  
Francesco Viola ◽  
Giuseppe Mascaro

A number of studies have shown that the ability of statistical tests to detect trends in hydrologic extremes is negatively affected by (i) the presence of autocorrelation in the time series and (ii) field significance. Here, we investigate these two issues and evaluate the power of several trend tests using time series of frequencies (or counts) of precipitation extremes from long-term (100-year) precipitation records of 1087 gauges of the Global Historical Climate Network database. For this aim, we design several Monte Carlo experiments based on simulations of random count time series with different levels of autocorrelation and trend. We find the following. (1) The observed records are consistent with the hypothesis of autocorrelation induced by the presence of trends, indicating that the existence of serial correlation does not significantly affect trend detection. (2) Tests based on linear and Poisson regressions are more powerful than nonparametric tests such as Mann-Kendall. (3) Accounting for field significance improves the interpretation of the results by limiting rejection of the false null hypothesis. We then use these results to investigate the presence of trends in the observed records. We find that, depending on the quantiles used to define the frequency of precipitation extremes, 34-47% of the selected gauges exhibit a statistically significant trend, of which 70-80% are positive and located mainly in the United States and Northern Europe. The significant negative trends are mostly located in Southern Australia.
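The Mann-Kendall test mentioned above is straightforward to implement. This sketch uses the standard S statistic with the usual normal approximation, omitting the tie and autocorrelation corrections that a production implementation would include:

```python
import numpy as np

def mann_kendall(x):
    """Mann-Kendall trend test: S statistic and normal-approximation z-score
    (no correction for ties or serial correlation in this sketch)."""
    x = np.asarray(x, float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0       # null variance of S
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)               # continuity correction
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    return s, z

s_up, z_up = mann_kendall(np.arange(30.0))
```

A strictly increasing series of length n gives S = n(n-1)/2 and a z-score far beyond the 1.96 two-sided 5% threshold.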


2013 ◽  
Vol 5 (1) ◽  
pp. 61-86 ◽  
Author(s):  
Tae-Hwy Lee ◽  
Zhou Xi ◽  
Ru Zhang

Abstract This paper makes a simple but previously neglected point regarding an empirical application of the tests of White (1989) and Lee, White, and Granger (LWG, 1993) for neglected nonlinearity in the conditional mean, using the feedforward single-layer artificial neural network (ANN). Because the activation parameters in the hidden layer are not identified under the null hypothesis of linearity, LWG suggested activating the ANN hidden units with randomly generated activation parameters. Their Monte Carlo experiments demonstrated excellent performance (good size and power), even though LWG considered a fairly small number (10 or 20) of random hidden-unit activations. However, in this paper we note that the good size and power in Monte Carlo experiments are average frequencies of rejecting the null hypothesis over multiple replications of the data-generating process. Averaging over many simulations smooths out the randomness of the activations. In an empirical study, unlike in a Monte Carlo study, multiple realizations of the data are not available. In this case, the ANN test is sensitive to the randomly generated activation parameters. One solution is the use of Bonferroni bounds, as suggested by LWG (1993), which however still exhibits some excessive dependence on the random activations (as shown in this paper). Another solution is to integrate the test statistic over the nuisance-parameter space, for which, however, a bootstrap or simulation must be used to obtain the null distribution of the integrated statistic. In this paper, we consider a much simpler solution that is shown to work very well: we simply increase the number of randomized hidden-unit activations to a (very) large number (e.g., 1,000). We show that using many randomly generated activation parameters can robustify the performance of the ANN test when it is applied to real empirical data. This robustification is reliable and useful in practice, and it comes at virtually no cost, as increasing the number of random activations is almost costless given today’s computer technology.
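The mechanics of an ANN-type test with many random activations can be sketched as follows. This is an illustrative simplification, not the LWG statistic itself: it regresses linear-model residuals on a few principal components of a large bank of randomly activated logistic hidden units, and reports an n*R^2 value; the function name and the choice of two components are assumptions.

```python
import numpy as np

def ann_nonlinearity_stat(y, X, q=1000, seed=0):
    """n*R^2 from regressing linear-model residuals on principal components
    of q randomly activated logistic hidden units (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    Z = np.column_stack([np.ones(n), X])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]   # linear-model residuals
    G = rng.uniform(-2, 2, size=(Z.shape[1], q))           # random activation params
    H = 1.0 / (1.0 + np.exp(-Z @ G))                       # q hidden-unit outputs
    U, _, _ = np.linalg.svd(H - H.mean(axis=0), full_matrices=False)
    W = np.column_stack([np.ones(n), U[:, :2]])            # top principal components
    fit = W @ np.linalg.lstsq(W, resid, rcond=None)[0]
    r2 = 1 - ((resid - fit) ** 2).sum() / ((resid - resid.mean()) ** 2).sum()
    return n * r2

rng = np.random.default_rng(2)
X = np.linspace(-2, 2, 200)[:, None]
y = X[:, 0] ** 2 + 0.1 * rng.standard_normal(200)          # nonlinear in the mean
stat = ann_nonlinearity_stat(y, X)
```

With a large q, the principal components summarize the span of the random activations, which is exactly the point of the paper: averaging over many activations makes the statistic far less sensitive to any individual random draw.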


Mathematics ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. 817
Author(s):  
Fernando López ◽  
Mariano Matilla-García ◽  
Jesús Mur ◽  
Manuel Ruiz Marín

A novel general method for constructing nonparametric hypothesis tests based on the field of symbolic analysis is introduced in this paper. Several existing tests based on symbolic entropy that have been used for testing central hypotheses in several branches of science (particularly in economics and statistics) are particular cases of this general approach. This family of symbolic tests rests on few assumptions, which increases the general applicability of any symbolic-based test. Additionally, as a theoretical application of this method, we construct and put forward four new statistics to test the null hypothesis of spatiotemporal independence. There are very few tests in the specialized literature in this regard. The new tests were evaluated by means of several Monte Carlo experiments. The results highlight the outstanding performance of the proposed tests.
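The symbolic-entropy quantity underlying this family of tests can be sketched with ordinal patterns. This is a generic permutation-entropy computation, illustrative of the symbolization idea rather than any of the paper's four spatiotemporal statistics:

```python
import numpy as np
from collections import Counter
from math import log, factorial

def permutation_entropy(x, m=3):
    """Shannon entropy of the empirical distribution of ordinal patterns of
    embedding dimension m, normalized to [0, 1] by log(m!)."""
    pats = [tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1)]
    counts = Counter(pats)
    n = len(pats)
    h = -sum((c / n) * log(c / n) for c in counts.values())
    return h / log(factorial(m))

rng = np.random.default_rng(0)
pe_iid = permutation_entropy(rng.standard_normal(500), m=3)
```

A monotone series uses a single symbol (entropy 0), while an i.i.d. series spreads mass roughly evenly over all m! patterns (normalized entropy near 1); symbolic tests exploit exactly this contrast.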


2006 ◽  
Vol 06 (01) ◽  
pp. L7-L15
Author(s):  
ALEXANDROS LEONTITSIS

The paper introduces a method for estimating and reducing calendar effects in time series whose fluctuations are governed by a nonlinear dynamical system and additive normal noise. Calendar effects can be considered deviations of the distribution(s) of particular group(s) of observations that share a common characteristic related to the calendar. The idea behind the method is the following: since the calendar effects are not related to the dynamics of the time series, their accurate estimation and reduction will result in a time series with a lower noise level (i.e., more accurate dynamics). The main tool of this method is the correlation integral, due to its inherent capability of modeling both the dynamics and the additive normal noise. Experimental results are presented on the Nasdaq Cmp. index.
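The correlation integral used as the main tool here can be computed directly from delay vectors. This is the standard Grassberger-Procaccia quantity in its simplest form (max norm, no Theiler window), a sketch rather than the paper's noise-aware estimator:

```python
import numpy as np

def correlation_integral(x, m=2, r=0.5):
    """Fraction of pairs of m-dimensional delay vectors of x that lie
    within distance r of each other (max norm)."""
    n = len(x) - m + 1
    V = np.array([x[i:i + m] for i in range(n)])
    count = 0
    for i in range(n):
        d = np.max(np.abs(V[i + 1:] - V[i]), axis=1)   # distances to later vectors
        count += int(np.sum(d < r))
    return 2.0 * count / (n * (n - 1))

rng = np.random.default_rng(0)
noise = rng.standard_normal(300)
c_small = correlation_integral(noise, m=2, r=0.2)
c_large = correlation_integral(noise, m=2, r=1.0)
```

The integral is nondecreasing in r, and its scaling behavior as r shrinks is what separates low-dimensional dynamics from additive noise.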


2021 ◽  
Author(s):  
Klaus B. Beckmann ◽  
Lennart Reimer

This monograph generalises, and extends, the classic dynamic models in conflict analysis (Lanchester 1916, Richardson 1919, Boulding 1962). Restrictions on parameters are relaxed to account for alliances and for peacekeeping. Incrementalist as well as stochastic versions of the model are reviewed. These extensions allow for a rich variety of patterns of dynamic conflict. Using Monte Carlo techniques as well as time series analyses based on GDELT data (for the Ethiopian-Eritrean war, 1998–2000), we also assess the empirical usefulness of the model. It turns out that linear dynamic models capture selected phases of the conflict quite well, offering a potential taxonomy for conflict dynamics. We also discuss a method for introducing a modicum of (bounded) rationality into models from this tradition.
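The classic Richardson (1919) arms-race model underlying this tradition is a pair of coupled linear differential equations. A minimal Euler-discretized sketch (parameter names assumed, not the monograph's generalized specification):

```python
import numpy as np

def richardson(x0, y0, k, l, a, b, g, h, steps=200, dt=0.01):
    """Euler discretization of the Richardson arms-race model:
       dx/dt = k*y - a*x + g,   dy/dt = l*x - b*y + h,
    where k, l are reaction coefficients, a, b fatigue terms, g, h grievances."""
    x, y = float(x0), float(y0)
    path = [(x, y)]
    for _ in range(steps):
        x, y = (x + dt * (k * y - a * x + g),
                y + dt * (l * x - b * y + h))    # simultaneous update
        path.append((x, y))
    return np.array(path)

# With k = l = 0.5, a = b = 1, g = h = 1 the system is stable (a*b > k*l)
# and converges to the equilibrium x* = y* = 2.
path = richardson(0, 0, k=0.5, l=0.5, a=1, b=1, g=1, h=1, steps=3000, dt=0.01)
```

Stability requires a*b > k*l; otherwise mutual reaction dominates fatigue and armament levels diverge, one of the qualitative patterns the monograph's extensions classify.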

