Merton structural model and IRB compliance

2010 ◽  
Vol 7 (1) ◽  
Author(s):  
Matej Jovan

This paper discusses Merton's (1974) model in light of the minimum regulatory requirements of the Internal Ratings-Based (IRB) Approach provided in Directive 2006/48/EC of the European Parliament and of the Council for the calculation of capital requirements for credit risk. The basic purpose is to illustrate potential deficiencies of the model in assigning obligor ratings and/or estimating probability of default, to which supervisors should be attentive when validating this model in a bank's IRB approach. The procedures of three estimation methods for Merton's model are described (calibration, Moody's KMV, maximum likelihood estimation), on the basis of which deficiencies of the model can be identified. Merton's model per se does not ensure compliance with the minimum requirements of the IRB approach for the estimation of probability of default, as its theoretical assumptions often do not reflect reality. It is therefore necessary to calibrate the fundamental parameters estimated by the model using empirical data on defaults, which must be defined in accordance with the regulatory minimum requirements and must be representative of the population for which the model is valid. Results on simulated data also show that the calibration method provides different estimates of probability of default for the same obligors compared with the other two methods. The differences are mainly driven by the volatility of equity and leverage in the time series, which the calibration method does not sufficiently account for. Some regulatory minimum requirements can be relaxed when obligors are assigned ratings on the basis of the Merton model estimation methods. However, the results of the analysis on simulated and empirical data show that different estimation methods generate different obligor credit rating assignments.
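The model's central quantity is easy to sketch: equity is treated as a call option on the firm's assets, and default occurs if the asset value falls below the face value of debt at horizon T. The following minimal Python example computes the Merton risk-neutral probability of default, PD = N(-d2); all balance-sheet figures are hypothetical, chosen only for illustration, and the sketch skips the calibration step (solving for unobserved asset value and volatility from equity data) that the paper examines.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def merton_pd(V, D, sigma_V, r, T):
    """Risk-neutral probability of default under Merton (1974):
    default if asset value V_T < debt face value D at horizon T.
    PD = N(-d2), with d2 as in the Black-Scholes call on firm assets."""
    d2 = (log(V / D) + (r - 0.5 * sigma_V ** 2) * T) / (sigma_V * sqrt(T))
    return NormalDist().cdf(-d2)

# Hypothetical firm: assets 120, debt 100, 25% asset volatility, 1-year horizon.
pd = merton_pd(V=120.0, D=100.0, sigma_V=0.25, r=0.02, T=1.0)
```

In practice V and sigma_V are not observable and must themselves be backed out from equity prices, which is exactly where the three estimation methods compared in the paper diverge.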

Author(s):  
Themistoklis Koutsellis ◽  
Zissimos P. Mourelatos

Abstract For many data-driven reliability problems, the population is not homogeneous; i.e., its statistics are not described by a unimodal distribution. Also, the interval of observation may not be long enough to capture the failure statistics. A limited failure population (LFP) consists of two subpopulations, a defective and a nondefective one, with well-separated modes of the two underlying distributions. In reliability and warranty forecasting applications, the estimation of the number of defective units and of the parameters of the underlying distribution is very important. Among various estimation methods, the maximum likelihood estimation (MLE) approach is the most widely used. Its likelihood function, however, is often incomplete, resulting in erroneous statistical inference. In this paper, we estimate the parameters of an LFP analytically using a rational function fitting (RFF) method based on the Weibull probability plot (WPP) of observed data. We also introduce a censoring factor (CF) to assess whether the number of collected data points is sufficient for statistical inference. The proposed RFF method is compared with existing MLE approaches using simulated data and data related to automotive warranty forecasting.
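The Weibull probability plot that underlies the RFF method linearizes the Weibull CDF: plotting ln(-ln(1-F)) against ln(t) turns Weibull data into a straight line with slope equal to the shape parameter. The sketch below fits that line with plain least squares and Bernard's median-rank approximation for F; it is a simplified stand-in, not the paper's rational-function fit, and it handles a complete (uncensored, single-mode) sample only.

```python
import math
import random

def weibull_plot_fit(samples):
    """Estimate Weibull shape/scale from the linearized probability plot:
    ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta).  Median ranks approximate F."""
    n = len(samples)
    xs, ys = [], []
    for i, t in enumerate(sorted(samples), start=1):
        F = (i - 0.3) / (n + 0.4)          # Bernard's median-rank approximation
        xs.append(math.log(t))
        ys.append(math.log(-math.log(1.0 - F)))
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))   # least-squares slope
    eta = math.exp(mx - my / beta)              # from the intercept -beta*ln(eta)
    return beta, eta

random.seed(0)
# Simulated Weibull(shape=2, scale=10) failure times via inverse transform.
data = [10.0 * (-math.log(1.0 - random.random())) ** 0.5 for _ in range(500)]
shape_hat, scale_hat = weibull_plot_fit(data)
```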


Author(s):  
A. S. Ogunsanya ◽  
E. E. E. Akarawak ◽  
W. B. Yahya

In this paper, we compare different parameter estimation methods for the two-parameter Weibull-Rayleigh Distribution (W-RD), namely: Maximum Likelihood Estimation (MLE), the Least Squares Estimation method (LSE), and three quartile estimator methods. Two of the quartile methods have been applied in the literature, while the third (Q1-M) is introduced in this work. The methods were applied to simulated data and compared using Error, Mean Square Error (MSE), and Total Deviation (TD), also known as the Sum Absolute Error Estimate (SAEE). The analytical results show that all the parameter estimation methods perform satisfactorily on Weibull-Rayleigh data, with the degree of accuracy determined by the sample size. The proposed quartile method (Q1-M) has the smallest Total Deviation and MSE. In addition, the quartile methods outperform MLE on the simulated data. In particular, the proposed Q1-M method has the added advantage of being simpler to use than MLE.
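The appeal of quartile estimators is that they are closed-form: inverting the CDF at two sample quantiles gives both parameters directly, with no numerical optimization. The sketch below shows the idea for a plain two-parameter Weibull (not the paper's W-RD, and not its specific Q1-M construction); the quantile positions are taken naively from the sorted sample.

```python
import math
import random

def weibull_quartile_fit(samples, q1=0.25, q2=0.75):
    """Closed-form quantile estimator for a two-parameter Weibull.
    From t_q = eta * (-ln(1-q))**(1/beta), two sample quantiles
    determine beta and eta without any iterative optimization."""
    s = sorted(samples)
    t1 = s[int(q1 * len(s))]
    t2 = s[int(q2 * len(s))]
    beta = math.log(math.log(1 - q2) / math.log(1 - q1)) / math.log(t2 / t1)
    eta = t1 / (-math.log(1 - q1)) ** (1 / beta)
    return beta, eta

random.seed(7)
# Simulated Weibull(shape=1.5, scale=5) data via inverse transform.
data = [5.0 * (-math.log(1.0 - random.random())) ** (1 / 1.5) for _ in range(2000)]
beta_hat, eta_hat = weibull_quartile_fit(data)
```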


2005 ◽  
Vol 13 (2) ◽  
pp. 61-85
Author(s):  
Myung Jig Kim ◽  
Sung Hwan Shin ◽  
Hong Sun Song

This paper proposes a method that estimates credit ratings by mapping empirical probability of default (PD) to standardized historical financial ratios. Unlike standard approaches such as the parametric logit model, discriminant analysis, neural networks, and survival function models, the proposed approach has the advantage of offering multiple credit rating categories for obligors, as opposed to only default or non-default. It provides useful information to practitioners because the probability of default for each credit rating category is a critical input under the New Basel Capital Accord. Empirical results based upon the historical PD and financial ratios of the Korean savings bank industry from 2000 to 2003 suggest that the industry's average credit rating belongs to a speculative grade, that is, BB and below. In addition, the computed transition matrix indicates that its volatility fluctuates substantially each year, and the probability of staying in the same rating category at the end of the year tended to be much smaller than the average reported by the rating agencies for Korean companies overall. The proposed method can easily be applied to industries other than the savings bank industry.
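The core of a PD-to-rating mapping is an ordinal partition of the PD scale into grade buckets. The toy example below illustrates the mechanics only; the cut-off values and grade labels are entirely hypothetical, not the paper's mapping or any agency's published scale.

```python
# Hypothetical PD upper bounds per grade -- illustrative values only.
CUTOFFS = [("AAA", 0.0002), ("AA", 0.0010), ("A", 0.0030), ("BBB", 0.0120),
           ("BB", 0.0500), ("B", 0.2000), ("CCC", 1.0001)]

def pd_to_rating(pd):
    """Map an empirical probability of default to an ordinal rating grade
    by locating the first bucket whose upper bound exceeds the PD."""
    for grade, upper in CUTOFFS:
        if pd < upper:
            return grade
    return "D"

rating = pd_to_rating(0.08)   # an 8% PD lands in a speculative grade
```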


Author(s):  
Anne Krogh Nøhr ◽  
Kristian Hanghøj ◽  
Genis Garcia Erill ◽  
Zilong Li ◽  
Ida Moltke ◽  
...  

Abstract Estimation of relatedness between pairs of individuals is important in many genetic research areas. When estimating relatedness, it is important to account for admixture if this is present. However, the methods that can account for admixture are all based on genotype data as input, which is a problem for low-depth next-generation sequencing (NGS) data, from which genotypes are called with high uncertainty. Here we present a software tool, NGSremix, for maximum likelihood estimation of relatedness between pairs of admixed individuals from low-depth NGS data, which takes the uncertainty of the genotypes into account via genotype likelihoods. Using both simulated and real NGS data for admixed individuals with an average depth of 4x or below, we show that our method works well and clearly outperforms the commonly used state-of-the-art relatedness estimation methods PLINK, KING, relateAdmix, and ngsRelate, which all perform quite poorly. Hence, NGSremix is a useful new tool for estimating relatedness in admixed populations from low-depth NGS data. NGSremix is implemented in C/C++ as multi-threaded software and is freely available on GitHub at https://github.com/KHanghoj/NGSremix.
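Genotype likelihoods are the key input that lets such methods propagate sequencing uncertainty instead of hard-calling genotypes. The toy per-site model below shows why low depth matters: with only a couple of reads, several genotypes remain plausible. This is a simplified binomial error model for illustration, not NGSremix's actual likelihood computation, and `eps` is a hypothetical per-base error rate.

```python
from math import comb

def genotype_likelihoods(n_ref, n_alt, eps=0.01):
    """P(read counts | genotype) for genotypes 0 (hom ref), 1 (het), 2 (hom alt)
    under a simple per-base error model: each read shows the alt allele with
    probability eps, 0.5, or 1-eps depending on the true genotype."""
    n = n_ref + n_alt
    p_alt = {0: eps, 1: 0.5, 2: 1 - eps}
    return {g: comb(n, n_alt) * p ** n_alt * (1 - p) ** (n - n_alt)
            for g, p in p_alt.items()}

# At depth 2 with one ref and one alt read, the heterozygote dominates,
# but the hom-ref and hom-alt genotypes are not fully excluded.
gl = genotype_likelihoods(n_ref=1, n_alt=1)
```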


2021 ◽  
Vol 11 (2) ◽  
pp. 582
Author(s):  
Zean Bu ◽  
Changku Sun ◽  
Peng Wang ◽  
Hang Dong

Calibration between multiple sensors is a fundamental procedure for data fusion. To address the problems of large errors and tedious operation, we present a novel method for calibration between light detection and ranging (LiDAR) and a camera. We design a calibration target, an arbitrary triangular pyramid with chessboard patterns on its three planes. The target contains both 3D and 2D information, which can be utilized to obtain the intrinsic parameters of the camera and the extrinsic parameters of the system. In the proposed method, the world coordinate system is established through the triangular pyramid. We extract the equations of the triangular pyramid planes to find the relative transformation between the two sensors. A single capture from the camera and LiDAR is sufficient for calibration, and errors are reduced by minimizing the distance between points and planes. Furthermore, accuracy can be increased with more captures. We carried out experiments on simulated data with varying degrees of noise and numbers of frames. Finally, the calibration results were verified on real data through incremental validation and analysis of the root mean square error (RMSE), demonstrating that our calibration method is robust and provides state-of-the-art performance.
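The residual being minimized is the distance from transformed LiDAR points to the fitted target planes. The sketch below evaluates that point-to-plane RMSE for a candidate pose; the optimization over the LiDAR-to-camera transform itself is omitted, and the sample points and plane are hypothetical, not from the paper's experiments.

```python
import math

def point_plane_distance(p, plane):
    """Distance from a 3D point p = (x, y, z) to the plane a*x + b*y + c*z + d = 0."""
    a, b, c, d = plane
    return abs(a * p[0] + b * p[1] + c * p[2] + d) / math.sqrt(a * a + b * b + c * c)

def rmse_to_plane(points, plane):
    """Root mean square of point-to-plane distances -- the kind of residual the
    extrinsic calibration minimizes over the LiDAR-to-camera transform."""
    return math.sqrt(sum(point_plane_distance(p, plane) ** 2 for p in points)
                     / len(points))

# Hypothetical LiDAR points near the chessboard plane z = 1 (camera frame).
pts = [(0.0, 0.0, 1.02), (1.0, 0.0, 0.98), (0.0, 1.0, 1.01)]
err = rmse_to_plane(pts, (0.0, 0.0, 1.0, -1.0))
```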


Axioms ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 25 ◽  
Author(s):  
Ehab Almetwally ◽  
Randa Alharbi ◽  
Dalia Alnagar ◽  
Eslam Hafez

This paper aims to find a statistical model for the COVID-19 spread in the United Kingdom and Canada. We seek an efficient and superior model for fitting the COVID-19 mortality rates in these countries by specifying an optimal statistical model. A new two-parameter lifetime distribution is introduced by combining the inverted Topp-Leone distribution with the modified Kies family to produce the modified Kies inverted Topp-Leone (MKITL) distribution, which covers many applications for which both the traditional inverted Topp-Leone and the modified Kies distributions provide a poor fit. The new distribution has many valuable properties, such as a simple linear representation, hazard rate function, and moment function. Several estimation methods, namely maximum likelihood estimation, least squares, weighted least squares, maximum product spacing, Cramér-von Mises, and Anderson-Darling, are applied to estimate the unknown parameters of the MKITL distribution. Numerical results from a Monte Carlo simulation are obtained to assess these estimation methods. We also fitted the new distribution to different data sets to assess its performance in modeling data.
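Among the listed methods, maximum product spacing (MPS) is perhaps the least familiar: instead of maximizing the likelihood, it maximizes the product of CDF spacings at the order statistics. The sketch below applies MPS to the much simpler one-parameter exponential rather than the MKITL distribution, and uses a crude golden-section search in place of a proper optimizer; it is meant only to show the structure of the objective.

```python
import math
import random

def mps_rate_exponential(samples):
    """Maximum product spacing estimate of an exponential rate: maximize the
    sum of log spacings F(x_(i)) - F(x_(i-1)) of the fitted CDF at the
    order statistics (with F(x_(0)) = 0 and F(x_(n+1)) = 1)."""
    xs = sorted(samples)

    def neg_log_spacing(lam):
        F = [0.0] + [1.0 - math.exp(-lam * x) for x in xs] + [1.0]
        return -sum(math.log(max(F[i + 1] - F[i], 1e-300))
                    for i in range(len(F) - 1))

    # Crude golden-section search for the minimizer over lam in (0.01, 10).
    lo, hi = 0.01, 10.0
    phi = (5 ** 0.5 - 1) / 2
    for _ in range(80):
        a = hi - phi * (hi - lo)
        b = lo + phi * (hi - lo)
        if neg_log_spacing(a) < neg_log_spacing(b):
            hi = b
        else:
            lo = a
    return (lo + hi) / 2

random.seed(1)
data = [random.expovariate(2.0) for _ in range(1000)]
lam_hat = mps_rate_exponential(data)   # true rate is 2.0
```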


Stats ◽  
2019 ◽  
Vol 2 (2) ◽  
pp. 247-258 ◽  
Author(s):  
Pedro L. Ramos ◽  
Francisco Louzada

A new one-parameter distribution is proposed in this paper. The new distribution allows for the occurrence of instantaneous failures (inliers), which are natural in many areas. Closed-form expressions are obtained for the moments, mean, variance, coefficient of variation, skewness, kurtosis, and mean residual life. The relationship of the new distribution to the exponential and Lindley distributions is presented. The new distribution can be viewed as a combination of a reparametrized version of the Zakerzadeh and Dolati distribution with a particular case of the gamma model and the occurrence of zero values. Parameter estimation is discussed under the method of moments and maximum likelihood estimation. A simulation study is performed to verify the efficiency of both estimation methods by computing the bias, mean squared errors, and coverage probabilities. The proposed distribution is tested against some competing distributions by analyzing four real lifetime datasets.
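A simulation study of the kind described, computing bias and MSE over repeated samples, has a simple generic shape. The sketch below runs it for the exponential-rate MLE (n divided by the sample sum) rather than the paper's new distribution; sample size, replication count, and seed are arbitrary choices for illustration.

```python
import random
import statistics

def simulate_bias_mse(true_rate=2.0, n=50, reps=2000, seed=42):
    """Monte Carlo assessment of an estimator: repeatedly draw a sample of
    size n, estimate the parameter, then report empirical bias and MSE.
    Shown here for the exponential-rate MLE, n / sum(x)."""
    rng = random.Random(seed)
    estimates = [n / sum(rng.expovariate(true_rate) for _ in range(n))
                 for _ in range(reps)]
    bias = statistics.mean(estimates) - true_rate
    mse = statistics.mean((e - true_rate) ** 2 for e in estimates)
    return bias, mse

bias, mse = simulate_bias_mse()
```

The small positive bias this reports reflects the known finite-sample bias of the exponential-rate MLE, E[n/sum(x)] = rate * n/(n-1).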


Genetics ◽  
2001 ◽  
Vol 157 (3) ◽  
pp. 1369-1385 ◽  
Author(s):  
Z W Luo ◽  
C A Hackett ◽  
J E Bradshaw ◽  
J W McNicol ◽  
D Milbourne

Abstract This article presents methodology for the construction of a linkage map in an autotetraploid species, using either codominant or dominant molecular markers scored on two parents and their full-sib progeny. The steps of the analysis are as follows: identification of parental genotypes from the parental and offspring phenotypes; testing for independent segregation of markers; partition of markers into linkage groups using cluster analysis; maximum-likelihood estimation of the phase, recombination frequency, and LOD score for all pairs of markers in the same linkage group using the EM algorithm; ordering the markers and estimating distances between them; and reconstructing their linkage phases. The information from different marker configurations about the recombination frequency is examined and found to vary considerably, depending on the number of different alleles, the number of alleles shared by the parents, and the phase of the markers. The methods are applied to a simulated data set and to a small set of SSR and AFLP markers scored in a full-sib population of tetraploid potato.
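The autotetraploid two-point analysis in the article requires the EM algorithm over many marker configurations, but the LOD score it produces has a simple closed form in the easiest diploid backcross setting, which the sketch below shows: the log10 ratio of the likelihood at the estimated recombination frequency to the likelihood under no linkage (r = 1/2). This is a deliberately reduced illustration, not the paper's tetraploid method.

```python
import math

def two_point_lod(n_recomb, n_total):
    """Two-point LOD score for linkage in a diploid backcross:
    L(r) = r**k * (1-r)**(n-k) with k recombinants out of n offspring,
    LOD = log10 L(r_hat) - log10 L(1/2), where r_hat = k/n."""
    r = n_recomb / n_total
    r = min(max(r, 1e-9), 1.0 - 1e-9)   # guard the log at the boundaries
    k, n = n_recomb, n_total
    return k * math.log10(r / 0.5) + (n - k) * math.log10((1.0 - r) / 0.5)

# 10 recombinants among 100 offspring: tight linkage, a decisive LOD score.
lod = two_point_lod(10, 100)
```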


Author(s):  
Valentin Raileanu

The article briefly describes the history and fields of application of the theory of extreme values, including climatology. The data format, the Generalized Extreme Value (GEV) probability distributions with Block Maxima, the Generalized Pareto (GP) distributions with Peaks Over Threshold (POT), and the analysis methods are presented. The distribution parameters are estimated using the Maximum Likelihood Estimation (MLE) method. Installation of the free R software, the minimum set of required commands, and the in2extRemes graphical (GUI) package are described. As an example, the results of a GEV analysis of a simulated data set in in2extRemes are presented.
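Once the GEV parameters have been fitted to block maxima (by MLE, as in the article), the quantity usually reported is the m-year return level, which has a closed form. The Python sketch below evaluates it for hypothetical parameter values, not ones fitted to any real series; the R/in2extRemes workflow the article describes produces this same quantity from its fitted model.

```python
import math

def gev_return_level(mu, sigma, xi, return_period):
    """m-year return level of a fitted GEV(mu, sigma, xi) for block maxima:
    z_m = mu + (sigma/xi) * ((-ln(1 - 1/m))**(-xi) - 1),
    with the Gumbel limit z_m = mu - sigma*ln(-ln(1 - 1/m)) as xi -> 0."""
    y = -math.log(1.0 - 1.0 / return_period)
    if abs(xi) < 1e-9:
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# Illustrative annual-maximum parameters: location 30, scale 5, shape 0.1.
z100 = gev_return_level(mu=30.0, sigma=5.0, xi=0.1, return_period=100)
```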

