Dependence Structure Analysis with the Copula GARCH Method and Suitable Copula Selection for a Data Set

2017 ◽  
pp. 13-13
Author(s):  
Ayşe Metin Karakaş
2019 ◽  
Vol 36 (4) ◽  
pp. 569-586
Author(s):  
Ricardo Puziol Oliveira ◽  
Jorge Alberto Achcar

Purpose The purpose of this paper is to provide a new method to estimate the reliability of a series system by using a discrete bivariate distribution. This problem is of great interest in industrial and engineering applications. Design/methodology/approach The authors considered the Basu–Dhar bivariate geometric distribution and a Bayesian approach with application to a simulated data set and an engineering data set. Findings From the results of this study, the authors observe that the discrete Basu–Dhar bivariate probability distribution could be a good alternative in the analysis of series system structures, with accurate inference results for the reliability of the system under a Bayesian approach. Originality/value System reliability studies usually assume independent lifetimes for the components (series, parallel or complex system structures) in the estimation of the reliability of the system. This assumption is in general not reasonable in many engineering applications, since the presence of some dependence structure between the lifetimes of the components could affect the evaluation of the reliability of the system.
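The series-system idea in this abstract can be illustrated with a short sketch. It assumes one common parameterization of the Basu–Dhar bivariate geometric, with joint survival P(X > x, Y > y) = θ1^x θ2^y θ3^max(x,y); under it, the series lifetime min(X, Y) has survival (θ1 θ2 θ3)^t, and the dependence arises from a shared geometric shock. The parameter values below are illustrative, not from the paper, and no Bayesian inference is attempted here.

```python
import math
import random

def series_reliability(t, th1, th2, th3):
    # P(min(X, Y) > t) under the survival form
    # P(X > x, Y > y) = th1**x * th2**y * th3**max(x, y)
    return (th1 * th2 * th3) ** t

def sample_geometric(theta, rng):
    # Inverse-transform draw with P(U > k) = theta**k, support {1, 2, ...}
    return math.ceil(math.log(rng.random()) / math.log(theta))

def sample_basu_dhar(th1, th2, th3, rng):
    # Shock construction: a shared geometric shock U3 induces the dependence
    u1 = sample_geometric(th1, rng)
    u2 = sample_geometric(th2, rng)
    u3 = sample_geometric(th3, rng)
    return min(u1, u3), min(u2, u3)

rng = random.Random(42)
t, th1, th2, th3 = 3, 0.9, 0.8, 0.95
trials = 100_000
hits = sum(min(*sample_basu_dhar(th1, th2, th3, rng)) > t for _ in range(trials))
mc_estimate = hits / trials                    # Monte Carlo check
exact = series_reliability(t, th1, th2, th3)   # closed form (0.684**3)
```

The Monte Carlo estimate agrees with the closed form, confirming that the shock construction reproduces the assumed survival function.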


2021 ◽  
Author(s):  
Georgia Lazoglou ◽  
George Zittis ◽  
Panos Hadjinicolaou ◽  
Jos Lelieveld

Over the last decades, the use of climate models in the projection and assessment of future climate conditions, on both global and regional scales, has become common practice. However, inevitable biases between the simulated model output and observed conditions remain, mainly due to the variable nature of the atmospheric system and limitations in representing sub-grid-scale processes that need to be parameterized. The present study aims to test a new approach for increasing the accuracy of daily climate model output. We apply the recently introduced TIN-Copula statistical method to the results of a state-of-the-art global Earth System Model (Hadley Centre Global Environmental Model version 3 - HadGEM3). The TIN-Copula approach is a combination of Triangular Irregular Networks and Copulas that focuses on modeling the whole dependence structure of the studied variables. The study area of the current application is the Middle East and North Africa (MENA) region, a prominent global climate change hot-spot. Considering the lack of accurate and consistent observational records in the MENA, we used the ERA5 reanalysis dataset as a reference. The results of the study reveal that the TIN-Copula method significantly improves the simulation of maximum temperature on both annual and seasonal time scales. Specifically, the HadGEM3 model tends to overestimate the ERA5 temperature data in the major part of the MENA region. This overestimation is mainly evident for the lower values of the studied data sets during all seasons, while in summer the overestimation is found across the whole data set. However, after the use of the TIN-Copula method, the differences between the simulated maximum temperature and the ERA5 data were minimized in more than 85% of the studied grid cells.
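The TIN-Copula method itself is not reproduced here, but the kind of bias correction the abstract describes can be sketched with the simpler empirical quantile-mapping baseline: the model value's quantile in a calibration period is mapped to the corresponding reference quantile. The series below are synthetic and the +2-degree bias is a made-up assumption for illustration only.

```python
import bisect

def quantile_map(model_cal, ref_cal, value):
    # Locate `value`'s empirical quantile in the calibration-period model
    # series, then return the same quantile of the reference series.
    m = sorted(model_cal)
    r = sorted(ref_cal)
    q = bisect.bisect_right(m, value) / len(m)
    idx = min(int(q * len(r)), len(r) - 1)
    return r[idx]

# Toy example: a "model" that runs 2 degrees warm relative to the reference
ref = [15.0 + 0.1 * i for i in range(200)]   # hypothetical reference Tmax series
model = [x + 2.0 for x in ref]               # hypothetical biased model Tmax
corrected = quantile_map(model, ref, 25.0)   # roughly 25.0 - 2.0
```

Unlike quantile mapping, which corrects each variable's marginal separately, the TIN-Copula approach also models the joint dependence structure; this sketch only shows the univariate baseline.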


2021 ◽  
Vol 7 (3) ◽  
pp. 4038-4060
Author(s):  
Mohamed Kayid ◽  
◽  
Adel Alrasheedi

In this paper, a mean inactivity time frailty model is considered. Examples are given to calculate the mean inactivity time for several reputable survival models. The dependence structure between the population variable and the frailty variable is characterized. The classical weighted proportional mean inactivity time model is considered as a special case. We prove that several well-known stochastic orderings between two frailties are preserved for the response variables under the weighted proportional mean inactivity time model. We apply this model to a real data set and also perform a simulation study to examine the accuracy of the model.
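The central quantity here, the mean inactivity time, is m(t) = E[t - X | X <= t], which can be written as the integral of the CDF over [0, t] divided by F(t). A minimal numerical sketch, checked against the exponential closed form (rate and evaluation point are arbitrary illustrative choices, not values from the paper):

```python
import math

def mit_numeric(cdf, t, n=20_000):
    # m(t) = E[t - X | X <= t] = (integral_0^t F(x) dx) / F(t),
    # with the integral evaluated by the trapezoidal rule.
    h = t / n
    s = 0.5 * (cdf(0.0) + cdf(t))
    for i in range(1, n):
        s += cdf(i * h)
    return s * h / cdf(t)

lam, t = 1.5, 2.0
F = lambda x: 1.0 - math.exp(-lam * x)       # exponential CDF
m_num = mit_numeric(F, t)
# Exponential closed form: m(t) = (t - (1 - e^{-lam*t}) / lam) / (1 - e^{-lam*t})
m_exact = (t - (1.0 - math.exp(-lam * t)) / lam) / (1.0 - math.exp(-lam * t))
```

The same numerical routine works for any survival model with a computable CDF, which is how the paper's "examples for several reputable survival models" could be reproduced.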


Mathematics ◽  
2021 ◽  
Vol 9 (5) ◽  
pp. 505
Author(s):  
Lluís Bermúdez ◽  
Dimitris Karlis

A multivariate INAR(1) regression model based on the Sarmanov distribution is proposed for modelling claim counts from an automobile insurance contract with different types of coverage. The correlation between claims from different coverage types is considered jointly with the serial correlation between the observations of the same policyholder over time. Several models based on the multivariate Sarmanov distribution are analyzed. The new models retain all the advantages of the MINAR(1) regression model while allowing for a more flexible dependence structure through the Sarmanov distribution. Driven by a real panel data set, these models are fitted to the data to discuss their goodness of fit and computational efficiency.
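The serial-correlation building block of these models is the INAR(1) recursion X_t = α ∘ X_{t-1} + ε_t, where ∘ is binomial thinning. A minimal univariate sketch with Poisson innovations (the full paper couples several such series through the Sarmanov distribution, which is not attempted here; parameter values are illustrative):

```python
import math
import random

def poisson_draw(lam, rng):
    # Knuth's multiplication method for Poisson sampling
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def inar1_path(alpha, lam, n, rng):
    # X_t = alpha o X_{t-1} + eps_t: binomial thinning of the previous
    # count (each unit survives with probability alpha) plus a Poisson(lam)
    # innovation, e.g. new claims in period t
    x = poisson_draw(lam, rng)
    path = [x]
    for _ in range(n - 1):
        survivors = sum(rng.random() < alpha for _ in range(x))
        x = survivors + poisson_draw(lam, rng)
        path.append(x)
    return path

rng = random.Random(1)
alpha, lam = 0.5, 2.0
path = inar1_path(alpha, lam, 20_000, rng)
mean_hat = sum(path) / len(path)   # stationary mean is lam / (1 - alpha) = 4
```

Binomial thinning keeps the process integer-valued, which is why INAR models suit claim counts where Gaussian AR(1) models do not.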


2011 ◽  
Vol 42 (2-3) ◽  
pp. 193-216 ◽  
Author(s):  
Hemant Chowdhary ◽  
Luis A. Escobar ◽  
Vijay P. Singh

Multivariate flood frequency analysis, involving flood peak flow, volume and duration, has traditionally been accomplished by employing available bivariate and multivariate frequency distributions that restrict the marginals to be from the same family of distributions. The copula concept overcomes this restriction by allowing a combination of arbitrarily chosen marginal types. It also provides a wider choice of admissible dependence structures than the conventional approach. The availability of a vast variety of copula types makes the selection of an appropriate copula family for different hydrological applications a non-trivial task. Graphical and analytic goodness-of-fit tests for assessing the suitability of copulas are still evolving, and there is limited experience of their use at present, especially in the hydrological field. This paper provides a step-wise procedure for copula selection and illustrates its application to bivariate flood frequency analysis, involving flood peak flow and volume data. Several graphical procedures, tail dependence characteristics, and formal goodness-of-fit tests involving a parametric bootstrap-based technique are considered while investigating the relative applicability of six copula families. The Clayton copula has been identified as a valid model for the particular flood peak flow and volume data set considered in the study.
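One standard step in such a copula-selection procedure is fitting a candidate family by inverting Kendall's tau; for the Clayton copula, tau = theta / (theta + 2), so theta = 2*tau / (1 - tau), and the lower tail dependence is 2^(-1/theta). A sketch on synthetic data (the data and true parameter are simulated, not the study's flood series):

```python
import random

def kendall_tau(x, y):
    # Naive O(n^2) sample Kendall's tau over all pairs
    n, s = len(x), 0
    for i in range(n):
        for j in range(i + 1, n):
            d = (x[i] - x[j]) * (y[i] - y[j])
            s += (d > 0) - (d < 0)
    return 2.0 * s / (n * (n - 1))

def sample_clayton(theta, n, rng):
    # Conditional-distribution sampler for the Clayton copula:
    # draw u, then invert the conditional CDF C(v | u) at a uniform v
    out = []
    for _ in range(n):
        u, v = rng.random(), rng.random()
        w = (1.0 + u ** (-theta) * (v ** (-theta / (1.0 + theta)) - 1.0)) ** (-1.0 / theta)
        out.append((u, w))
    return out

rng = random.Random(0)
theta_true = 2.0                      # implies tau = theta / (theta + 2) = 0.5
uv = sample_clayton(theta_true, 1500, rng)
tau = kendall_tau([p[0] for p in uv], [p[1] for p in uv])
theta_hat = 2.0 * tau / (1.0 - tau)   # moment inversion of tau
```

In a full selection procedure this moment estimate would be complemented by the graphical diagnostics and parametric-bootstrap goodness-of-fit tests the paper describes.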


Metrika ◽  
2021 ◽  
Author(s):  
Jorge Navarro

The purpose of the paper is to provide a general method based on conditional quantile curves to predict record values from preceding records. The predictions are based on conditional median (or median regression) curves. Moreover, conditional quantile curves are used to provide confidence bands for these predictions. The method is based on the recently introduced concept of multivariate distorted distributions, which are used instead of copulas to represent the dependence structure. This concept allows us to compute the conditional quantile curves in a simple way. The theoretical findings are illustrated with a non-parametric model (standard uniform), two parametric models (exponential and Pareto), and a non-parametric procedure for the general case. A real data set and a simulated case study in reliability are analysed.
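The exponential parametric example mentioned in the abstract admits a particularly simple conditional-median prediction: by memorylessness, record increments of an iid Exp(1) sequence are themselves Exp(1), so the conditional median of the next record given the current one is the current record plus ln 2. The simulation below checks this fact only; it does not reproduce the paper's distorted-distribution machinery.

```python
import math
import random

rng = random.Random(7)
increments = []
for _ in range(2000):                      # independent replications
    best = None
    for _ in range(200):                   # one observation sequence
        x = -math.log(rng.random())        # Exp(1) draw via inverse transform
        if best is None or x > best:       # a new record occurs
            if best is not None:
                increments.append(x - best)
            best = x
increments.sort()
median_inc = increments[len(increments) // 2]
# Conditional-median prediction of the next record: current record + ln 2
```

The sample median of the record increments settles near ln 2 ≈ 0.693, matching the conditional-median predictor for the exponential case.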


2020 ◽  
Author(s):  
Barbara Haese ◽  
Sebastian Hörning ◽  
Maximilian Graf ◽  
Adam Eshel ◽  
Christian Chwala ◽  
...  

Precipitation is one of the crucial variables within the hydrological system, and accordingly one of the main drivers of terrestrial hydrological processes. The quality of many hydrological applications, such as climate prediction, water resource management, and flood forecasting, depends on the correct reproduction of its spatiotemporal distribution. However, the global network of precipitation observations is relatively sparse in large areas of the world. Compared to this observation network, inhabited areas typically have a relatively dense network of Commercial Microwave Links (CMLs). These CMLs can be used to calculate path-averaged rain rates, derived from their attenuation. One challenge when using path-averaged rain rates is the construction of spatial precipitation fields. To address this challenge, we apply Random Mixing Whittaker-Shannon (RMWSPy) to stochastically simulate precipitation fields. To this end, we generate precipitation fields as a linear combination of unconditional spatial random fields, where the spatial dependence structure is described by copulas. The weights of the linear combination are optimized in such a way that the observations and the spatial structure of the precipitation observations are reproduced. Within this method, the path-averaged rain rates are used as non-linear constraints. One big advantage of RMWSPy is the ability to simulate precipitation field ensembles of any size, where each ensemble member is in concordance with the underlying observations. The spread of such an ensemble enables an uncertainty estimation of the simulated fields. In particular, it reflects the precipitation variability along the CML path and the uncertainty between the observation locations. We demonstrate RMWSPy using CML observations within various areas of Germany with different observation densities. We show that the reconstructed precipitation fields reproduce the observed spatial precipitation pattern with quality comparable to the RADOLAN weather radar data set provided by the German Weather Service (DWD).
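The core idea of combining unconditional random fields so that observations are honored can be sketched in one dimension. This is a heavily simplified illustration: the gauge values and the CML path below are made up, the "fields" are just smoothed noise rather than copula-based simulations, and the weights are obtained by solving a small linear system, whereas RMWSPy uses an optimization that also preserves the fields' statistical properties.

```python
import random

def smooth_field(n, rng, w=5):
    # Unconditional 1D "random field": moving-average-smoothed white noise
    noise = [rng.gauss(0.0, 1.0) for _ in range(n + w)]
    return [sum(noise[i:i + w]) / w for i in range(n)]

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small square system
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][-1] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

rng = random.Random(3)
n_cells = 100
point_obs = {10: 1.2, 70: 2.0, 90: 0.8}      # hypothetical gauge values
cml_path, cml_avg = range(25, 40), 0.6       # hypothetical CML path average
fields = [smooth_field(n_cells, rng) for _ in range(len(point_obs) + 1)]

# One linear constraint per gauge, plus one for the CML path average
rows = [[f[i] for f in fields] for i in sorted(point_obs)]
rows.append([sum(f[i] for i in cml_path) / len(cml_path) for f in fields])
rhs = [point_obs[i] for i in sorted(point_obs)] + [cml_avg]
weights = solve(rows, rhs)
mixed = [sum(w * f[i] for w, f in zip(weights, fields)) for i in range(n_cells)]
```

The mixed field reproduces the gauge values and the path average exactly while varying freely elsewhere; drawing many such mixtures is what yields the observation-consistent ensembles described in the abstract.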


2017 ◽  
Vol 20 (08) ◽  
pp. 1750052 ◽  
Author(s):  
Juliusz Jabłecki

This paper uses a unique data set of more than 1000 synthetic Collateralized Debt Obligation (CDO) deals to describe typical structures, their pricing and performance, with the aim of identifying the factors behind the spectacular collapse of this important segment of the structured credit market in late 2008. The data suggest that mark-to-market losses on many synthetic CDO tranches were much more significant than in the case of simpler, lower-rated products, despite the former experiencing little or no impairment of the notional. The losses were driven instead by the concentration of a relatively limited number of defaults in a short period of time, suggesting that pre-crisis pricing must have seriously underestimated the risk of default clustering. In view of the post-crisis pick-up in synthetic CDO issuance, the paper attempts to heed this lesson and offers a simple factor model of default correlation in the spirit of Marshall–Olkin that is naturally suited to capturing the temporal dimension of default dependencies, which has been crucial for synthetic CDO investors. The model allows building a rich dependence structure capable of consistently fitting standardized iTraxx and CDX index tranches, which makes it well suited for pricing bespoke CDOs.
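The default-clustering mechanism of a Marshall–Olkin-type model can be sketched with the classic one-factor common-shock construction: each name defaults at the minimum of an idiosyncratic exponential time and a shared market-wide shock, so several names can default at exactly the same instant. The paper's model is richer than this; the intensities and portfolio size below are illustrative assumptions.

```python
import math
import random

def mo_default_times(n_names, lam_i, lam_c, rng):
    # Marshall-Olkin common-shock construction: tau_k = min(E_k, E_c),
    # with E_k ~ Exp(lam_i) idiosyncratic and E_c ~ Exp(lam_c) a shock
    # shared by all names, producing simultaneous (clustered) defaults
    e_c = -math.log(rng.random()) / lam_c
    return [min(-math.log(rng.random()) / lam_i, e_c) for _ in range(n_names)]

rng = random.Random(11)
n_names, lam_i, lam_c = 5, 0.10, 0.05
trials = 20_000
surv = clustered = 0
for _ in range(trials):
    taus = mo_default_times(n_names, lam_i, lam_c, rng)
    surv += taus[0] > 1.0               # first name survives one year
    clustered += len(set(taus)) == 1    # common shock hit every name first
p_surv = surv / trials       # theory: exp(-(lam_i + lam_c)) ~ 0.861
p_clust = clustered / trials  # theory: lam_c / (lam_c + n * lam_i) ~ 0.091
```

The non-negligible probability of all names defaulting simultaneously is exactly the kind of default clustering that Gaussian-copula pricing understates and that drove the senior-tranche losses discussed above.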

