Pricing CDO tranches with stochastic correlation and random factor loadings in a mixture copula model

2012 ◽  
Vol 219 (6) ◽  
pp. 2909-2916
Author(s):  
Zhe Chen ◽  
Qunfang Bao ◽  
Shenghong Li ◽  
Jianli Chen

2014 ◽  
Vol 40 ◽  
pp. 167-174 ◽  
Author(s):  
Jianli Chen ◽  
Zhen Liu ◽  
Shenghong Li

2020 ◽  
Author(s):  
Alexander P. Christensen ◽  
Hudson Golino

Recent research has demonstrated that the network measure node strength (the sum of a node’s connections) is roughly equivalent to confirmatory factor analysis (CFA) loadings. A key finding of this research is that node strength represents a combination of different latent causes. In the present research, we sought to circumvent this issue by formulating a network equivalent of factor loadings, which we call network loadings. In two simulations, we evaluated whether these network loadings could effectively (1) separate the effects of multiple latent causes and (2) estimate the simulated factor loading matrix of factor models. Our findings suggest that network loadings can do both effectively. In addition, we leveraged the second simulation to derive effect size guidelines for network loadings. In a third simulation, we evaluated the similarities and differences between factor and network loadings when the data were generated from random, factor, and network models. We found sufficient differences between the loadings, which allowed us to develop an algorithm, the Loadings Comparison Test (LCT), to predict the data-generating model. The LCT had high sensitivity and specificity when predicting the data-generating model. In sum, our results suggest that network loadings can provide similar information to factor loadings when the data are generated from a factor model and can therefore be used in similar ways (e.g., item selection, measurement invariance, factor scores).
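The idea of splitting a node’s strength by latent cause can be sketched as follows. This is a simplified, unstandardized illustration, not the authors’ implementation: the loading of node i on community c is taken as the sum of absolute edge weights from i to the nodes assigned to c, and `network_loadings` is a hypothetical helper name.

```python
import numpy as np

def network_loadings(W, communities):
    """Split each node's strength by community: the loading of node i
    on community c is the sum of |edge weights| from i to nodes in c.
    W: symmetric (p x p) edge-weight matrix with zero diagonal.
    communities: length-p array of community labels."""
    W = np.abs(np.asarray(W, dtype=float))
    communities = np.asarray(communities)
    labels = np.unique(communities)
    L = np.zeros((W.shape[0], len(labels)))
    for j, c in enumerate(labels):
        L[:, j] = W[:, communities == c].sum(axis=1)
    return L  # row sums of L recover ordinary node strength

# Toy network: two 3-node communities, one weak cross-community edge
W = np.zeros((6, 6))
for a, b in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[a, b] = W[b, a] = 0.4
W[2, 3] = W[3, 2] = 0.1
L = network_loadings(W, [0, 0, 0, 1, 1, 1])
```

Node 2’s total strength (0.9) decomposes into 0.8 on its own community and 0.1 on the other, which is the separation of latent causes that plain node strength conflates.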


Methodology ◽  
2016 ◽  
Vol 12 (1) ◽  
pp. 11-20 ◽  
Author(s):  
Gregor Sočan

Abstract. When principal component solutions are compared across two groups, the question arises whether the extracted components have the same interpretation in both populations. The problem can be approached by testing null hypotheses stating that the congruence coefficients between pairs of vectors of component loadings equal 1. Chan, Leung, Chan, Ho, and Yung (1999) proposed a bootstrap procedure for testing the hypothesis of perfect congruence between vectors of common factor loadings. We demonstrate that the procedure by Chan et al. is both theoretically and empirically inadequate for application to principal components. We propose a modification of their procedure that constructs the resampling space according to the characteristics of the principal component model. The results of a simulation study show satisfactory empirical properties of the modified procedure.


Methodology ◽  
2013 ◽  
Vol 9 (1) ◽  
pp. 1-12 ◽  
Author(s):  
Holger Steinmetz

Although the use of structural equation modeling has increased over recent decades, the typical procedure for investigating mean differences across groups is still to create an observed composite score from several indicators and compare the composite’s mean across groups. Whereas the structural equation modeling literature has emphasized that a comparison of latent means presupposes equal factor loadings and indicator intercepts for most of the indicators (i.e., partial invariance), it is still unknown whether partial invariance is sufficient when relying on observed composites. This Monte Carlo study investigated whether one or two unequal factor loadings and indicator intercepts in a composite can lead to wrong conclusions regarding latent mean differences. Results show that unequal indicator intercepts substantially affect the composite mean difference and the probability of a significant composite difference. In contrast, unequal factor loadings have only small effects. It is concluded that analyses of composite differences are warranted only under full measurement invariance, and the author recommends analyzing latent mean differences with structural equation modeling instead.
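The mechanism behind the intercept result can be sketched with a tiny simulation. This is a hedged illustration under assumed values (one-factor model, four indicators, equal latent means, one indicator intercept shifted by 0.5 in group 2), not the study’s design: a noninvariant intercept passes straight through to the composite mean, shifted by roughly delta/p.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100_000, 4
loadings = np.full(p, 0.7)
intercepts_g1 = np.zeros(p)
intercepts_g2 = np.array([0.0, 0.0, 0.0, 0.5])  # one noninvariant intercept

def composite_means(intercepts):
    """Unit-weighted composite under a one-factor model with latent mean 0."""
    eta = rng.standard_normal(n)              # latent variable, mean 0
    eps = rng.standard_normal((n, p)) * 0.5   # indicator residuals
    X = intercepts + np.outer(eta, loadings) + eps
    return X.mean(axis=1).mean()

diff = composite_means(intercepts_g2) - composite_means(intercepts_g1)
# Spurious composite difference despite equal latent means: about 0.5 / 4
```

An unequal loading, by contrast, only rescales the latent variable’s contribution, so with a zero latent mean it leaves the composite mean essentially unchanged.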


2020 ◽  
Vol 15 (4) ◽  
pp. 351-361
Author(s):  
Liwei Huang ◽  
Arkady Shemyakin

Skewed t-copulas have recently become popular as modeling tools for non-linear dependence in statistics. In this paper we consider three different versions of the skewed t-copula, introduced by Demarta and McNeil; Smith, Gan, and Kohn; and Azzalini and Capitanio. Each version generalizes the symmetric t-copula model, allowing for a different treatment of the lower and upper tails, and each has certain advantages in mathematical construction, inferential tools, and interpretability. Our objective is to apply models based on the different types of skewed t-copulas to the same financial and insurance applications. We consider comovements of stock index returns and times-to-failure of related vehicle parts under the warranty period. In both cases the treatment of both the lower and upper tails of the joint distributions is of special importance. Skewed t-copula model performance is compared to the benchmark cases of the Gaussian and symmetric Student t-copulas. Instruments of comparison include information criteria, goodness of fit, and tail dependence. Special attention is paid to methods of estimating the copula parameters. Technical problems with the implementation of the maximum likelihood method and the method of moments suggest the use of Bayesian estimation; we discuss the accuracy and computational efficiency of Bayesian estimation versus MLE. A Metropolis-Hastings algorithm with block updates is suggested to deal with the intractability of the conditionals.
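The Demarta–McNeil version is based on the generalized hyperbolic skew-t distribution, which has a convenient normal mean–variance mixture representation: X = gamma·W + sqrt(W)·Z with W inverse-gamma distributed. A minimal sampling sketch under assumed parameter values (the skewed t-copula itself is then the copula of X, i.e., obtained after a probability transform of the margins, which is omitted here):

```python
import numpy as np

def sample_gh_skew_t(n, nu, Sigma, gamma, rng=None):
    """Draw from the generalized hyperbolic skew-t of Demarta & McNeil:
    X = gamma * W + sqrt(W) * Z,   W ~ InverseGamma(nu/2, nu/2),
    Z ~ N(0, Sigma) independent of W. gamma controls tail asymmetry."""
    rng = rng if rng is not None else np.random.default_rng()
    d = len(gamma)
    # If G ~ Gamma(nu/2, scale=2/nu), then 1/G ~ InverseGamma(nu/2, nu/2)
    W = 1.0 / rng.gamma(shape=nu / 2, scale=2.0 / nu, size=n)
    Z = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    return gamma * W[:, None] + np.sqrt(W)[:, None] * Z

rng = np.random.default_rng(1)
Sigma = np.array([[1.0, 0.6], [0.6, 1.0]])
# Negative gamma makes the lower tail heavier than the upper tail
X = sample_gh_skew_t(50_000, nu=8.0, Sigma=Sigma,
                     gamma=np.array([-0.3, -0.3]), rng=rng)
```

Setting gamma = 0 recovers the symmetric Student t case, which is why the model nests the benchmark copulas used for comparison.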

