OPTIMAL INSURANCE CONTRACTS UNDER DISTORTION RISK MEASURES WITH AMBIGUITY AVERSION

2020 ◽  
Vol 50 (2) ◽  
pp. 619-646
Author(s):  
Wenjun Jiang ◽  
Marcos Escobar-Anel ◽  
Jiandong Ren

Abstract This paper presents analytical representations of an optimal insurance contract under a distortion risk measure and in the presence of model uncertainty. We incorporate ambiguity aversion and distortion risk measures through the model of Robert and Therond [(2014) ASTIN Bulletin: The Journal of the IAA, 44(2), 277–302] as per the framework of Klibanoff et al. [(2005) A smooth model of decision making under ambiguity. Econometrica, 73(6), 1849–1892]. Explicit optimal insurance indemnity functions are derived when the decision maker (DM) applies Value-at-Risk as the risk measure and is ambiguous about the loss distribution. Our results show that: (1) under model uncertainty, ambiguity aversion results in a distorted probability distribution over the set of possible models, with a bias in favor of the model that yields the larger risk; (2) a more ambiguity-averse DM demands more insurance coverage; (3) for a given budget, uncertainty about the loss distribution results in a higher risk level for the DM.

2021 ◽  
Vol 14 (5) ◽  
pp. 201
Author(s):  
Yuan Hu ◽  
W. Brent Lindquist ◽  
Svetlozar T. Rachev

This paper investigates performance attribution measures as a basis for constraining portfolio optimization. We employ optimizations that minimize conditional value-at-risk and investigate two performance attributes, asset allocation (AA) and the selection effect (SE), as constraints on asset weights. The test portfolio consists of stocks from the Dow Jones Industrial Average index. Values for the performance attributes are established relative to two benchmarks, equi-weighted and price-weighted portfolios of the same stocks. Performance of the optimized portfolios is judged using comparisons of cumulative price and four risk measures: maximum drawdown, Sharpe ratio, Sortino–Satchell ratio and Rachev ratio. The results suggest that achieving SE performance thresholds requires larger turnover values than those required for achieving comparable AA thresholds. The results also suggest a positive role in price and risk-measure performance for the imposition of constraints on AA and SE.
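The minimization of conditional value-at-risk that the abstract describes is commonly formulated as the Rockafellar–Uryasev linear program over historical return scenarios. The sketch below is an illustrative reconstruction, not the authors' code; the function name, the long-only constraint and the confidence level are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def min_cvar_weights(returns, beta=0.95):
    """Minimize portfolio CVaR_beta via the Rockafellar-Uryasev LP.

    Decision variables: asset weights w (long-only, summing to 1),
    a VaR proxy alpha, and per-scenario shortfalls u.
    """
    n_obs, n_assets = returns.shape
    k = 1.0 / ((1.0 - beta) * n_obs)
    # Variable order: [w (n_assets), alpha (1), u (n_obs)].
    c = np.concatenate([np.zeros(n_assets), [1.0], k * np.ones(n_obs)])
    # Shortfall constraints: u_i >= -r_i.w - alpha, rewritten as
    # -r_i.w - alpha - u_i <= 0.
    A_ub = np.hstack([-returns, -np.ones((n_obs, 1)), -np.eye(n_obs)])
    b_ub = np.zeros(n_obs)
    # Budget constraint: weights sum to one.
    A_eq = np.concatenate([np.ones(n_assets), [0.0], np.zeros(n_obs)]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n_assets + [(None, None)] + [(0, None)] * n_obs
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n_assets], res.fun  # optimal weights, minimized CVaR
```

In the paper's setting, the AA and SE performance attributes would enter as additional linear constraints on the weight vector; they are omitted here for brevity.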


2009 ◽  
Vol 39 (2) ◽  
pp. 591-613 ◽  
Author(s):  
Andreas Kull

Abstract We revisit the relative retention problem originally introduced by de Finetti using concepts recently developed in risk theory and quantitative risk management. Instead of the variance, we use the Expected Shortfall (Tail-Value-at-Risk) as risk measure, include capital costs, and take constraints on risk capital into account. Starting from a risk-based capital allocation, the paper presents an optimization scheme for sharing risk in a multi-risk-class environment. Risk sharing takes place between two portfolios, and the pricing of risk transfer reflects both portfolio structures. This allows us to shed more light on how optimal risk sharing is characterized when risk transfer takes place between parties employing similar risk and performance measures. Recent developments in the regulatory domain ('risk-based supervision') pushing for common, insurance-industry-wide risk measures underline the importance of this question. The paper includes a simple non-life insurance example illustrating optimal risk transfer in terms of retentions of common reinsurance structures.
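A retention optimization of the kind the abstract describes can be sketched as a grid search: for each candidate retention, compute the Expected Shortfall of the retained loss plus a loaded reinsurance premium for the ceded part. This is a minimal illustration, not the paper's scheme; the loading factor, confidence level and objective are assumptions.

```python
import numpy as np

def expected_shortfall(losses, beta=0.99):
    """Empirical Expected Shortfall (Tail-Value-at-Risk):
    mean loss at or beyond the beta-quantile (VaR_beta)."""
    var = np.quantile(losses, beta)
    return losses[losses >= var].mean()

def optimal_retention(losses, loading=1.3, beta=0.99, grid=None):
    """Pick the retention d minimizing retained ES plus the reinsurance
    premium, priced here as expected ceded loss times a loading factor."""
    if grid is None:
        grid = np.quantile(losses, np.linspace(0.5, 0.999, 100))
    best_d, best_cost = None, np.inf
    for d in grid:
        retained = np.minimum(losses, d)          # stop-loss: insurer keeps min(X, d)
        premium = loading * np.maximum(losses - d, 0.0).mean()
        cost = expected_shortfall(retained, beta) + premium
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d, best_cost
```

For a heavy-tailed loss, ceding the tail at a moderate loading reduces the combined objective well below the Expected Shortfall of the unreinsured portfolio, which is the qualitative point of the de Finetti-style example.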


2021 ◽  
Vol 14 (11) ◽  
pp. 540
Author(s):  
Eyden Samunderu ◽  
Yvonne T. Murahwa

Developments in the world of finance have led the authors to assess the adequacy of relying on normal distribution assumptions alone in measuring risk. Cushioning against risk has always created a plethora of complexities and challenges; hence, this paper analyses the statistical properties of various risk measures under non-normal distributions and provides a financial blueprint for managing risk. The premise is that relying on the normality assumption alone produces models that do not give accurate risk measures. Our empirical design first examined the use of returns in measuring risk and assessed the current financial environment. As an alternative to conventional measures, the paper employs a mosaic of risk techniques, on the grounds that there is no single universal risk measure. The next step examined currently adopted risk proxies, such as the Gaussian-based value at risk (VaR) measure. The authors then analysed multiple alternative approaches that do not assume normality, including other variations of VaR as well as econometric models for risk measurement and forecasting. VaR is a widely used measure of financial risk that provides a way of quantifying and managing the risk of a portfolio; arguably, it is the most important tool for evaluating market risk, one of several threats to the global financial system. Following an extensive literature review, a data set composed of three main asset classes was analysed: bonds, equities and hedge funds. The first step was to determine to what extent returns are not normally distributed. After testing the hypothesis, it was found that the majority of returns are not normally distributed, instead exhibiting skewness and kurtosis greater than or less than three.
The study then applied various VaR methods in order to determine the most efficient ones. Different timelines were used to compute stressed VaR, and it was observed that during periods of crisis the volatility of asset returns was higher. Subsequent steps examined the relationships among the variables through correlation tests and time series analysis, leading to forecasts of the returns. It was noted that these methods cannot be used in isolation. We therefore adopted a mosaic of all the VaR methods, which included studying the behaviour of assets and their relations to one another. We also examined the environment as a whole and then applied forecasting models to value returns accurately; this gave a much more accurate and relevant risk measure than the initial normality assumption.
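The two diagnostics at the core of this study, testing returns for non-normality and comparing a Gaussian VaR against a distribution-free alternative, can be sketched as follows. This is an illustrative reconstruction, not the authors' methodology; the Jarque–Bera test and the historical-simulation VaR are standard stand-ins.

```python
import numpy as np
from scipy import stats

def normality_and_var(returns, alpha=0.99):
    """Report skewness, kurtosis and a Jarque-Bera p-value as evidence
    against normality, and compare Gaussian VaR with historical VaR."""
    skew = stats.skew(returns)
    kurt = stats.kurtosis(returns, fisher=False)  # the normal distribution has kurtosis 3
    _, jb_pvalue = stats.jarque_bera(returns)
    # Parametric VaR under the normality assumption.
    gaussian_var = -(returns.mean() + returns.std(ddof=1) * stats.norm.ppf(1 - alpha))
    # Historical-simulation VaR: empirical quantile, no distributional assumption.
    historical_var = -np.quantile(returns, 1 - alpha)
    return {"skew": skew, "kurtosis": kurt, "jb_pvalue": jb_pvalue,
            "gaussian_var": gaussian_var, "historical_var": historical_var}
```

For fat-tailed return series (kurtosis above 3), the historical VaR typically exceeds the Gaussian VaR at high confidence levels, which is the understatement-of-risk effect the paper documents.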


2015 ◽  
Author(s):  
Γεώργιος Παπαγιάννης

The main aim of the present thesis is to investigate the effect of diverging priors concerning model uncertainty on decision making. One of the main issues in the thesis is to assess the effect of different notions of distance on the space of probability measures and their use as loss functionals when identifying the best-suited model among a set of plausible priors. Another issue is that of addressing "inhomogeneous" sets of priors, i.e. sets of priors in which highly divergent opinions may occur, and the need to treat that case robustly. As high degrees of inhomogeneity may lead the decision maker to distrust the priors, it may be desirable to adopt a particular prior corresponding to the set which in some sense minimizes the "variability" among the models in the set. This leads to the notion of a Fréchet risk measure. Finally, an important problem is the actual calculation of robust risk measures. On account of their variational definition, this calculation leads to numerical problems in the calculus of variations, for which reliable and effective algorithms are proposed. The contributions of the thesis are presented in the following three chapters. In Chapter 2, a statistical learning scheme is introduced for constructing the best model compatible with a set of priors provided by different information sources of varying reliability. As various priors may model well different aspects of the phenomenon, the proposed scheme is a variational one, based on the minimization of a weighted loss function on the space of probability measures, which in certain cases is shown to be equivalent to weighted quantile averaging schemes.
Therefore, in contrast to approaches such as minimax decision theory, in which a particular element of the prior set is chosen, we construct for each prior set a probability measure which is not necessarily an element of it, a fact that, as shown, may lead to a better description of the phenomenon in question. In treating this problem we also address the effect of the choice of distance functional on the space of measures on the model selection problem. One key finding in this respect is that the class of Wasserstein distances appears to perform best compared with other distances such as the KL-divergence. In Chapter 3, motivated by the results of Chapter 2, we treat the problem of specifying the risk measure for a particular loss when a set of highly divergent priors concerning the distribution of the loss is available. Starting from the principle that "variability" of opinions is unwelcome, for which a strong axiomatic framework exists (see e.g. Klibanoff (2005) and references therein), we introduce the concept of Fréchet risk measures, which correspond to minimal-variance risk measures. Here we view a set of priors as a discrete measure on the space of probability measures, and by variance we mean the variance of this discrete probability measure; this requires the concept of the Fréchet mean. Through different metrizations of the space of probability measures we define a variety of Fréchet risk measures, the Wasserstein, the Hellinger and the weighted entropic risk measure, and illustrate their use and performance via an example related to the static hedging of derivatives under model uncertainty. In Chapter 4, we consider the numerical calculation of convex risk measures, applying techniques from the calculus of variations. Regularization schemes are proposed and the theoretical convergence of the algorithms is considered.
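The weighted quantile averaging that Chapter 2 identifies has a concrete form in one dimension: the 2-Wasserstein barycenter of univariate distributions is the distribution whose quantile function is the weighted average of the component quantile functions. The sketch below is an illustrative version of that scheme on empirical samples, not the thesis's implementation; the grid size and function name are assumptions.

```python
import numpy as np

def wasserstein_barycenter_quantiles(samples, weights, n_grid=1000):
    """Weighted 2-Wasserstein barycenter of one-dimensional empirical
    distributions: average the component quantile functions pointwise
    over a probability grid (the weighted quantile averaging scheme)."""
    u = (np.arange(n_grid) + 0.5) / n_grid  # interior probability grid
    q = sum(w * np.quantile(s, u) for w, s in zip(weights, samples))
    return u, q  # grid and barycenter quantile-function values
```

The output is itself a quantile function, so it defines a genuine probability measure that need not coincide with any single prior in the set, which is the point made above about constructing a measure outside the prior set.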


2012 ◽  
Vol 3 (1) ◽  
pp. 150-157 ◽  
Author(s):  
Suresh Andrew Sethi ◽  
Mike Dalton

Abstract Traditional measures that quantify variation in natural resource systems include both upside and downside deviations as contributing to variability, such as standard deviation or the coefficient of variation. Here we introduce three risk measures from investment theory, which quantify variability in natural resource systems by analyzing either upside or downside outcomes and typical or extreme outcomes separately: semideviation, conditional value-at-risk, and probability of ruin. Risk measures can be custom tailored to frame variability as a performance measure in terms directly meaningful to specific management objectives, such as presenting risk as harvest expected in an extreme bad year, or by characterizing risk as the probability of fishery escapement falling below a prescribed threshold. In this paper, we present formulae, empirical examples from commercial fisheries, and R code to calculate three risk measures. In addition, we evaluated risk measure performance with simulated data, and we found that risk measures can provide unbiased estimates at small sample sizes. By decomposing complex variability into quantitative metrics, we envision risk measures to be useful across a range of wildlife management scenarios, including policy decision analyses, comparative analyses across systems, and tracking the state of natural resource systems through time.
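The paper supplies R code for its three risk measures; a Python analogue of the same formulae can be sketched as follows. The downside convention (smaller outcomes, e.g. harvests, are worse) and the parameter defaults are assumptions for illustration.

```python
import numpy as np

def semideviation(x):
    """Downside semideviation: root-mean-square of shortfalls below the mean,
    so upside deviations do not count as variability."""
    below = np.minimum(x - x.mean(), 0.0)
    return np.sqrt((below ** 2).mean())

def cvar(x, alpha=0.10):
    """Conditional value-at-risk: mean outcome over the worst alpha fraction
    of observations, e.g. harvest expected in an extreme bad year."""
    cutoff = np.quantile(x, alpha)
    return x[x <= cutoff].mean()

def prob_of_ruin(x, threshold):
    """Empirical probability that an outcome falls below a prescribed
    threshold, e.g. fishery escapement below a management floor."""
    return (x < threshold).mean()
```

Each measure frames variability differently: semideviation penalizes only downside spread, CVaR summarizes the typical magnitude of extreme bad years, and probability of ruin reports how often a management threshold is breached.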


2019 ◽  
Vol 12 (4) ◽  
pp. 159 ◽  
Author(s):  
Yuyang Cheng ◽  
Marcos Escobar-Anel ◽  
Zhenxian Gong

This paper proposes and investigates a multivariate 4/2 Factor Model. The name 4/2 comes from the superposition of a CIR term and a 3/2-model component. Our model goes multidimensional along the lines of a principal component and factor covariance decomposition. We find conditions for well-defined changes of measure, and we also find two key characteristic functions in closed form, which help with pricing and risk measure calculations. In a numerical example, we demonstrate the significant impact of the newly added 3/2 component (parameter b) and the common factor (a), both with respect to changes in the implied volatility surface (up to 100%) and on two risk measures, value at risk and expected shortfall, where an increase of up to 29% was detected.


2006 ◽  
Vol 36 (2) ◽  
pp. 375-413
Author(s):  
Gary G. Venter ◽  
John A. Major ◽  
Rodney E. Kreps

The marginal approach to risk and return analysis compares the marginal return from a business decision to the marginal risk imposed. Allocation distributes the total company risk to business units and compares the profit/risk ratio of the units. These approaches coincide when the allocation actually assigns the marginal risk to each business unit, i.e., when the marginal impacts add up to the total risk measure. This is possible for one class of risk measures (scalable measures) under the assumption of homogeneous growth, and by a subclass (transformed probability measures) otherwise. For homogeneous growth, the allocation of scalable measures can be accomplished by the directional derivative. The first well-known additive marginal allocations were the Myers-Read method from Myers and Read (2001) and co-Tail Value at Risk, discussed in Tasche (2000). Now we see that there are many others, which allows the choice of risk measure to be based on economic meaning rather than the availability of an allocation method. We prefer the term "decomposition" to "allocation" here because of the use of the method of co-measures, which quantifies the component composition of a risk measure rather than allocating it proportionally to something. Risk-adjusted profitability calculations that do not rely on capital allocation may still involve decomposition of risk measures. Such a case is discussed. Calculation issues for directional derivatives are also explored.
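The co-measure idea for co-Tail Value at Risk admits a very short simulation sketch: each unit's component is its average loss in the scenarios where the company total exceeds its VaR, and the components add up to the total TVaR by construction. This is an illustration of the co-TVaR co-measure named in the abstract, not the authors' code; the confidence level is an assumption.

```python
import numpy as np

def co_tvar_decomposition(unit_losses, beta=0.99):
    """Decompose total Tail-Value-at-Risk across business units via the
    co-TVaR co-measure: each unit gets its mean loss in the scenarios
    where the company total is at or beyond VaR_beta of the total.
    Because the mean of a sum is the sum of means, the per-unit
    components add exactly to the total TVaR (additive decomposition)."""
    total = unit_losses.sum(axis=1)
    tail = total >= np.quantile(total, beta)
    return unit_losses[tail].mean(axis=0)  # one additive component per unit
```

The additivity is what distinguishes a decomposition from a proportional allocation: no scaling back to the total is needed, and units with heavier tails naturally draw larger components.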


2019 ◽  
Vol 8 (1) ◽  
pp. 15
Author(s):  
Ni Wayan Uchi Yushi Ari Sudina ◽  
Komang Dharmawan ◽  
I Wayan Sumarjaya

Conditional value at risk (CVaR) is a widely used risk measure that takes into account losses exceeding the value-at-risk level. The aim of this research is to compare the performance of the EVT-GJR-vine copula method and the EVT-GARCH-vine copula method in estimating the CVaR of a portfolio using backtesting. Based on the backtesting results, it was found that the EVT-GJR-vine copula method performs better than the EVT-GARCH-vine copula method in estimating the portfolio's CVaR. This can be seen from the backtest statistics of the EVT-GJR-vine copula method, which are generally smaller than those of the EVT-GARCH-vine copula method.
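The abstract does not name its backtest statistics, so as an illustration of the backtesting mechanics only, here is the standard Kupiec proportion-of-failures likelihood-ratio test applied to the VaR level underlying a CVaR forecast. The function name and parameters are assumptions, not the paper's procedure.

```python
import numpy as np
from scipy import stats

def kupiec_pof_test(losses, var_forecasts, p=0.01):
    """Kupiec proportion-of-failures test: do VaR breaches occur at the
    promised rate p? Returns the breach count, the likelihood-ratio
    statistic (chi-squared with 1 df under the null) and its p-value;
    small p-values reject the risk model."""
    n = len(losses)
    x = int((losses > var_forecasts).sum())  # number of VaR breaches
    phat = min(max(x / n, 1e-12), 1 - 1e-12)  # guard the log at 0 or n breaches
    lr = -2.0 * ((n - x) * np.log((1 - p) / (1 - phat)) + x * np.log(p / phat))
    return x, lr, stats.chi2.sf(lr, df=1)
```

A well-calibrated model produces breach counts close to p times the sample size and a large p-value; a model that understates risk produces far too many breaches and is rejected.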


2020 ◽  
Vol 23 (03) ◽  
pp. 2050017
Author(s):  
YANHONG CHEN ◽  
YIJUN HU

In this paper, we study how to evaluate the risk of a financial portfolio, whose components may be dependent and come from different markets or involve more than one kind of currencies, while we also take into consideration the uncertainty about the time value of money. Namely, we introduce a new class of risk measures, named set-valued dynamic risk measures for bounded discrete-time processes that are adapted to a given filtration. The time horizon can be finite or infinite. We investigate the representation results for them by making full use of Legendre–Fenchel conjugation theory for set-valued functions. Finally, some examples such as the set-valued dynamic average value at risk and the entropic risk measure for bounded discrete-time processes are also given.


2016 ◽  
Vol 4 (1) ◽  
Author(s):  
Silvana M. Pesenti ◽  
Pietro Millossovich ◽  
Andreas Tsanakas

Abstract One of a risk measure's key purposes is to consistently rank and distinguish between different risk profiles. From a practical perspective, a risk measure should also be robust, that is, insensitive to small perturbations in input assumptions. It is known in the literature [14, 39] that strong assumptions on the risk measure's ability to distinguish between risks may lead to a lack of robustness. We address the trade-off between robustness and consistent risk ranking by specifying the regions in the space of distribution functions where law-invariant convex risk measures are indeed robust. Examples include the set of random variables with bounded second moment and those that are less volatile (in convex order) than random variables in a given uniformly integrable set. Typically, a risk measure is evaluated on the output of an aggregation function defined on a set of random input vectors. Extending the definition of robustness to this setting, we find that law-invariant convex risk measures are robust for any aggregation function that satisfies a linear growth condition in the tail, provided that the set of possible marginals is uniformly integrable. Thus, we obtain that all law-invariant convex risk measures possess the aggregation-robustness property introduced by [26] and further studied by [40]. This is in contrast to the widely used, non-convex risk measure Value-at-Risk, whose robustness in a risk aggregation context requires restricting the possible dependence structures of the input vectors.

