Tail Dependence for Regularly Varying Time Series

2012 ◽  
Vol 2012 ◽  
pp. 1-14
Author(s):  
Ai-Ju Shi ◽  
Jin-Guan Lin

We use tail dependence functions to study tail dependence for regularly varying (RV) time series. First, tail dependence functions for RV time series are derived through the intensity measure. Then, the relation between the tail dependence function and the intensity measure is established: each uniquely determines the other. Finally, we obtain expressions for the tail dependence parameters based on the expectation of the RV components of the time series. These expressions coincide with those obtained via the conditional probability. Simulation examples are presented to verify the results established in this paper.
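
The conditional-probability expression for a tail dependence parameter mentioned in the abstract can be estimated empirically from data. A minimal NumPy sketch (the function name and the fixed level u = 0.95 are illustrative assumptions, not from the paper):

```python
import numpy as np

def empirical_upper_tail_dependence(x, y, u=0.95):
    """Finite-sample proxy for lambda_U = lim_{q->1} P(Y > F_Y^{-1}(q) | X > F_X^{-1}(q)),
    evaluated at a high but sub-asymptotic level u."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    # Rank-transform to pseudo-observations so both margins are ~uniform.
    rx = np.argsort(np.argsort(x)) / (n + 1)
    ry = np.argsort(np.argsort(y)) / (n + 1)
    exceed_x = rx > u
    if exceed_x.sum() == 0:
        return 0.0
    # Fraction of joint exceedances among the conditioning exceedances.
    return float(np.mean(ry[exceed_x] > u))

# Comonotone data: perfect tail dependence, estimate is 1.
rng = np.random.default_rng(0)
z = rng.standard_normal(20_000)
lam = empirical_upper_tail_dependence(z, z, u=0.95)
```

For independent samples the same estimator returns roughly 1 - u, since exceedances then co-occur only by chance.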

2017 ◽  
Vol 5 (1) ◽  
pp. 1-19 ◽  
Author(s):  
Piotr Jaworski

Abstract
The paper deals with Conditional Value at Risk (CoVaR) for copulas with nontrivial tail dependence. We show that both in the standard and the modified settings, the tail dependence function determines the limiting properties of CoVaR as the conditioning event becomes more extreme. The results are illustrated with examples using the extreme value, conic and truncation invariant families of bivariate tail-dependent copulas.
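
To make concrete how a copula with tail dependence pins down CoVaR, here is a sketch on the copula scale for the Clayton family, which is lower-tail dependent. In the setting where one conditions on U ≤ α, the copula-scale CoVaR at level β is the v solving C(α, v)/α = β; for Clayton this has a closed form. The function names are illustrative, and this is a standard consequence of the Clayton CDF rather than a result quoted from the paper:

```python
import numpy as np

def clayton_cdf(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0."""
    return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)

def covar_clayton(alpha, beta, theta):
    """Copula-scale CoVaR: the level v with C(alpha, v)/alpha = beta.
    Solving (alpha^-t + v^-t - 1)^(-1/t) = beta*alpha for v gives the closed form."""
    return ((beta * alpha) ** (-theta) - alpha ** (-theta) + 1.0) ** (-1.0 / theta)

v = covar_clayton(alpha=0.05, beta=0.05, theta=2.0)
# With lower tail dependence, v is far below the independence value beta = 0.05:
# conditioning on a crash in U drags the beta-quantile of V deeper into its tail.
```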


Biometrika ◽  
2020 ◽  
Author(s):  
Ting Zhang

Summary
Quantile regression is a popular and powerful method for studying the effect of regressors on quantiles of a response distribution. However, existing results on quantile regression were mainly developed for cases in which the quantile level is fixed, and the data are often assumed to be independent. Motivated by recent applications, we consider the situation where (i) the quantile level is not fixed and can grow with the sample size to capture the tail phenomena, and (ii) the data are no longer independent, but collected as a time series that can exhibit serial dependence in both tail and non-tail regions. To study the asymptotic theory for high-quantile regression estimators in the time series setting, we introduce a tail adversarial stability condition, which had not previously been described, and show that it leads to an interpretable and convenient framework for obtaining limit theorems for time series that exhibit serial dependence in the tail region, but are not necessarily strongly mixing. Numerical experiments are conducted to illustrate the effect of tail dependence on high-quantile regression estimators, for which simply ignoring the tail dependence may yield misleading $p$-values.
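
The estimator whose high-quantile behaviour the paper studies can be computed at any fixed level via the standard linear-programming formulation of quantile regression. A self-contained sketch on an i.i.d.-style toy (assumed here for illustration; it does not reproduce the paper's tail-adversarial time-series framework):

```python
import numpy as np
from scipy.optimize import linprog

def quantile_regression(x, y, tau):
    """Linear quantile regression at level tau via the standard LP:
    minimize tau*sum(u) + (1-tau)*sum(v)  s.t.  y = X @ beta + u - v, u, v >= 0."""
    n = len(y)
    X = np.column_stack([np.ones_like(x), x])
    p = X.shape[1]
    c = np.concatenate([np.zeros(p), tau * np.ones(n), (1.0 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)  # beta free, slacks nonnegative
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(0.0, 1.0, n)
y = 1.0 + 2.0 * x + (1.0 + x) * rng.standard_normal(n)  # heteroscedastic noise
beta = quantile_regression(x, y, tau=0.95)  # high (but fixed) quantile level
```

By construction the fitted line leaves roughly a fraction tau of the observations below it; the paper's concern is what happens to inference when tau is allowed to grow toward 1 with n and the data are serially tail-dependent.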


2020 ◽  
Vol 52 (3) ◽  
pp. 855-878
Author(s):  
Johan Segers

Abstract
A Markov tree is a random vector indexed by the nodes of a tree whose distribution is determined by the distributions of pairs of neighbouring variables and a list of conditional independence relations. Upon an assumption on the tails of the Markov kernels associated to these pairs, the conditional distribution of the self-normalized random vector when the variable at the root of the tree tends to infinity converges weakly to a random vector of coupled random walks called a tail tree. If, in addition, the conditioning variable has a regularly varying tail, the Markov tree satisfies a form of one-component regular variation. Changing the location of the root, that is, changing the conditioning variable, yields a different tail tree. When the tails of the marginal distributions of the conditioning variables are balanced, these tail trees are connected by a formula that generalizes the time change formula for regularly varying stationary time series. The formula is most easily understood when the various one-component regular variation statements are tied up into a single multi-component statement. The theory of multi-component regular variation is worked out for general random vectors, not necessarily Markov trees, with an eye towards other models, graphical or otherwise.
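
A toy multiplicative Markov chain on a path root → child → grandchild (a special case of a Markov tree, assumed here purely for illustration) makes the tail-tree limit concrete: conditional on the regularly varying root being large, the self-normalized vector is a coupled multiplicative random walk. For this particular kernel the limit holds exactly at every finite level, which makes it easy to check by simulation:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Root with a regularly varying (Pareto, index 1) tail.
root = rng.pareto(1.0, n) + 1.0
# Each node is its parent times an independent positive factor, so in log
# scale the path is a random walk; the factors are lognormal with sigma = 0.5.
w1 = np.exp(0.5 * rng.standard_normal(n))
w2 = np.exp(0.5 * rng.standard_normal(n))
child = root * w1
grandchild = child * w2

# Condition on the root being extreme and self-normalize by the root.
extreme = root > np.quantile(root, 0.99)
ratio_child = (child / root)[extreme]        # distributed as w1
ratio_grand = (grandchild / root)[extreme]   # distributed as w1 * w2
```

The conditional means of the ratios match E[w1] = exp(0.125) and E[w1 w2] = exp(0.25), the one- and two-step marginals of the (here exact) tail tree.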

