One- versus multi-component regular variation and extremes of Markov trees

2020 ◽  
Vol 52 (3) ◽  
pp. 855-878
Author(s):  
Johan Segers

Abstract A Markov tree is a random vector indexed by the nodes of a tree whose distribution is determined by the distributions of pairs of neighbouring variables and a list of conditional independence relations. Under an assumption on the tails of the Markov kernels associated with these pairs, the conditional distribution of the self-normalized random vector, as the variable at the root of the tree tends to infinity, converges weakly to a random vector of coupled random walks called a tail tree. If, in addition, the conditioning variable has a regularly varying tail, the Markov tree satisfies a form of one-component regular variation. Changing the location of the root, that is, changing the conditioning variable, yields a different tail tree. When the tails of the marginal distributions of the conditioning variables are balanced, these tail trees are connected by a formula that generalizes the time change formula for regularly varying stationary time series. The formula is most easily understood when the various one-component regular variation statements are tied up into a single multi-component statement. The theory of multi-component regular variation is worked out for general random vectors, not necessarily Markov trees, with an eye towards other models, graphical or otherwise.
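The conditioning-on-a-large-root idea above can be illustrated with a minimal simulation on a two-node tree. The Pareto root and the multiplicative lognormal kernel below are illustrative choices, not the paper's model; for such a kernel the self-normalized ratio given a large root is exactly the tail-tree increment:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n = 2.0, 100_000

# Root variable with a regularly varying (Pareto) tail: P(X1 > x) = x**(-alpha).
x1 = rng.pareto(alpha, n) + 1.0

# Toy multiplicative Markov kernel on a two-node tree: X2 = X1 * M, M independent.
m = rng.lognormal(mean=0.0, sigma=0.5, size=n)
x2 = x1 * m

# Condition on the root being large; for this kernel the self-normalized
# ratio X2 / X1 is exactly a draw of M -- the "tail tree" increment.
t = np.quantile(x1, 0.99)
ratio = x2[x1 > t] / x1[x1 > t]

# Compare the conditional mean with E[M] = exp(sigma**2 / 2) = exp(0.125).
print(round(float(ratio.mean()), 2), round(float(np.exp(0.125)), 2))
```

Because the kernel is multiplicative, no limit is needed here; for general kernels the convergence to the tail tree holds only as the conditioning threshold tends to infinity.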

2016 ◽  
Vol 53 (3) ◽  
pp. 733-746 ◽  
Author(s):  
Adrien Hitz ◽  
Robin Evans

Abstract The problem of inferring the distribution of a random vector given that its norm is large requires modeling a homogeneous limiting density. We suggest an approach based on graphical models which is suitable for high-dimensional vectors. We introduce the notion of one-component regular variation to describe a function that is regularly varying in its first component. We extend the representation and Karamata's theorem to one-component regularly varying functions, probability distributions and densities, and explain why these results are fundamental in multivariate extreme-value theory. We then generalize the Hammersley–Clifford theorem to relate asymptotic conditional independence to a factorization of the limiting density, and use it to model multivariate tails.
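The classical one-dimensional Karamata theorem referenced above can be checked numerically: for f(t) = t**rho * L(t) with rho > -1 and L slowly varying, the integral of f over [1, x] is asymptotic to x * f(x) / (rho + 1). The specific f below (with L = log) is an illustrative choice, not one from the paper:

```python
import numpy as np

# Karamata's theorem: for f(t) = t**rho * L(t), rho > -1, L slowly varying,
# the integral of f over [1, x] is asymptotic to x * f(x) / (rho + 1).
rho = 0.5
f = lambda t: t**rho * np.log(t + 1.0)   # illustrative regularly varying function

x, n = 1.0e6, 2_000_000
dx = (x - 1.0) / n
ts = 1.0 + (np.arange(n) + 0.5) * dx     # midpoint quadrature rule
integral = float(f(ts).sum() * dx)
karamata = x * f(x) / (rho + 1.0)

ratio_k = integral / karamata            # tends to 1 as x grows
print(round(ratio_k, 2))
```

The residual gap from 1 shrinks logarithmically here, reflecting the slowly varying factor L(t) = log(t + 1).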


Extremes ◽  
2020 ◽  
Vol 23 (4) ◽  
pp. 521-545
Author(s):  
Marco Oesting ◽  
Alexander Schnurr

Abstract In this paper, we investigate temporal clusters of extremes defined as subsequent exceedances of high thresholds in a stationary time series. Two meaningful features of these clusters are the probability distribution of the cluster size and the ordinal patterns giving the relative positions of the data points within a cluster. Since these patterns take only the ordinal structure of consecutive data points into account, the method is robust under monotone transformations and measurement errors. We verify the existence of the corresponding limit distributions in the framework of regularly varying time series, develop non-parametric estimators and show their asymptotic normality under appropriate mixing conditions. The performance of the estimators is demonstrated in a simulated example and a real data application to discharge data of the river Rhine.
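The two cluster features described above, cluster size and ordinal pattern, are easy to extract from a sample path. The sketch below is a minimal illustration, not the authors' estimator; note that the rank-based pattern is unchanged by any monotone transformation of the data:

```python
import numpy as np

def clusters(series, threshold):
    """Indices of consecutive exceedances of `threshold`, split into clusters."""
    idx = np.flatnonzero(series > threshold)
    if idx.size == 0:
        return []
    breaks = np.flatnonzero(np.diff(idx) > 1) + 1
    return np.split(idx, breaks)

def ordinal_pattern(values):
    """Ordinal pattern: relative rank of each data point within one cluster."""
    return tuple(int(r) for r in np.argsort(np.argsort(values)))

# Toy series with three exceedance clusters above the threshold 1.0:
x = np.array([0.1, 2.5, 3.0, 1.9, 0.2, 0.3, 4.0, 0.1, 2.1, 2.8, 2.2, 0.0])
for c in clusters(x, 1.0):
    print(len(c), ordinal_pattern(x[c]))
# cluster sizes 3, 1, 3 with patterns (1, 2, 0), (0,), (0, 2, 1)
```

Empirical frequencies of these sizes and patterns over many clusters are the non-parametric estimators whose limits the paper studies.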


2008 ◽  
Vol 18 (06) ◽  
pp. 469-480 ◽  
Author(s):  
HE NI ◽  
HUJUN YIN

Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In such a way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial and benchmark time series (e.g. Mackey–Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented, and the results show that the proposed SOMAR network is effective and superior to other similar approaches.
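The core idea of representing a non-stationary series by a set of local linear models can be sketched in a few lines. The segmentation and AR(1) fits below are a simplified illustration under assumed regimes, not the SOMAR network itself (which learns the segmentation via self-organisation rather than fixed-length splits):

```python
import numpy as np

def fit_ar1(seg):
    """Least-squares AR(1) fit (slope, intercept) for one segment."""
    A = np.column_stack([seg[:-1], np.ones(len(seg) - 1)])
    coef, *_ = np.linalg.lstsq(A, seg[1:], rcond=None)
    return coef

rng = np.random.default_rng(1)

def simulate(a, n, x0=0.0):
    """AR(1) sample path: x_{t+1} = a * x_t + noise."""
    out = [x0]
    for _ in range(n - 1):
        out.append(a * out[-1] + 0.1 * rng.standard_normal())
    return np.array(out)

# Piecewise-stationary toy series: two regimes with different AR(1) dynamics.
series = np.concatenate([simulate(0.9, 500), simulate(-0.5, 500)])

# A dynamic set of local linear models: one AR(1) fit per fixed-length segment.
local_models = [fit_ar1(s) for s in np.split(series, 10)]
print([round(float(a), 2) for a, _ in local_models])
```

The fitted slopes cluster near 0.9 in the first regime and near -0.5 in the second, which is the sense in which a global non-stationary series is captured by locally linear pieces.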

