Mortality Forecasting: How Far Back Should We Look in Time?

Risks ◽  
2019 ◽  
Vol 7 (1) ◽  
pp. 22 ◽  
Author(s):  
Han Li ◽  
Colin O’Hare

Extrapolative methods are among the most commonly adopted forecasting approaches in the literature on projecting future mortality rates. It can be argued that there are two types of mortality models using this approach. The first extracts patterns in the age, time and cohort dimensions, in either a deterministic or a stochastic fashion. The second uses non-parametric smoothing techniques to model mortality and thus places no explicit constraints on the model. We argue that, from a forecasting point of view, the main difference between the two types of models is whether they treat recent and historical information equally in the projection process. In this paper, we compare the forecasting performance of the two types of models using Great Britain male mortality data from 1950–2016. We also conduct a robustness test to see how sensitive the forecasts are to changes in the length of the historical data used to calibrate the models. The main conclusion of the study is that more recent information should be given more weight in the forecasting process, as it has greater predictive power than historical information.
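
The robustness test described in this abstract turns on how the calibration window changes a projection. As a minimal sketch (synthetic index, not the paper's Great Britain data; the break year and slopes are invented for illustration), a random walk with drift fitted to a log-mortality index produces visibly different forecasts depending on how far back the window reaches:

```python
import numpy as np

# Hypothetical log-mortality index: improvement accelerates after 1980,
# mimicking the kind of regime change that makes window length matter.
years = np.arange(1950, 2017)
kappa = np.where(years < 1980, -0.01 * (years - 1950), -0.3 - 0.02 * (years - 1980))

def rw_drift_forecast(series, horizon):
    """Random walk with drift: drift estimated as the mean one-step difference."""
    drift = np.mean(np.diff(series))
    return series[-1] + drift * np.arange(1, horizon + 1)

# Calibrate on the full history vs. only the most recent 20 observations.
fc_full   = rw_drift_forecast(kappa, horizon=10)
fc_recent = rw_drift_forecast(kappa[-20:], horizon=10)

# The recent-window forecast projects faster improvement (a steeper decline),
# which is exactly the sensitivity the paper's robustness test probes.
```
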

2021 ◽  
pp. 1-30
Author(s):  
Chou-Wen Wang ◽  
Jinggong Zhang ◽  
Wenjun Zhu

Abstract We propose a new neighbouring prediction model for mortality forecasting. For each mortality rate at age x in year t, m(x,t), we construct an image of neighbourhood mortality data around m(x,t), denoted E_{m(x,t)}(x1, x2, s), which includes mortality information for ages in [x − x1, x + x2], lagging k years (1 ≤ k ≤ s). Combined with a deep learning model, the convolutional neural network, this framework is able to capture the intricate nonlinear structure in the mortality data: the neighbourhood effect, which can go beyond the period, age, and cohort directions of classic mortality models. By performing an extensive empirical analysis on all 41 countries and regions in the Human Mortality Database, we find that the proposed models achieve superior forecasting performance. This framework can be further enhanced to capture the patterns and interactions between multiple populations.
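
The core data structure here is the neighbourhood "image" fed to the CNN. A rough sketch of its construction (synthetic surface, illustrative function name and argument order; not the authors' code):

```python
import numpy as np

# Illustrative mortality surface M[age, year]; Gompertz-like age profile with noise.
rng = np.random.default_rng(0)
ages, years = 101, 30
M = np.exp(-8 + 0.08 * np.arange(ages))[:, None] * np.exp(rng.normal(0, 0.02, (ages, years)))

def neighbourhood_image(M, x, t, x1, x2, s):
    """Rectangle of rates for ages [x - x1, x + x2] and lags 1..s years before t.
    This mirrors the paper's idea of a local 'image' around m(x,t); the exact
    orientation and normalisation used by the authors may differ."""
    return M[x - x1 : x + x2 + 1, t - s : t]  # shape (x1 + x2 + 1, s)

img = neighbourhood_image(M, x=65, t=29, x1=3, x2=3, s=5)
```

Each such rectangle becomes one training input, with m(x,t) as the target, so a convolutional layer can learn local age-period-cohort interactions directly.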


Risks ◽  
2018 ◽  
Vol 6 (4) ◽  
pp. 123 ◽  
Author(s):  
Marie Angèle Cathleen Alijean ◽  
Jason Narsoo

Mortality forecasting has long been a subject of study for academics and practitioners. Since the introduction and rising significance of the securitization of mortality and longevity risk, more in-depth studies of mortality have been carried out to enable the fair pricing of such derivatives. In this article, a comparative analysis is performed of the mortality forecasting accuracy of four mortality models. The methodology employs the Age-Period-Cohort model, the Cairns-Blake-Dowd model, the classical Lee-Carter model and the Kou-modified Lee-Carter model. The Kou-modified Lee-Carter model combines the classical Lee-Carter model with the double-exponential jump diffusion model. This paper is the first study to employ the Kou model to forecast French mortality data. The dataset comprises death data for French males from age 0 to age 90, available for the years 1900–2015. The paper differentiates between two periods: 1900–1960, during which extreme mortality events occurred for French males, and 1961–2015, during which no significant jump is observed. The Kou-modified Lee-Carter model turns out to give the best mortality forecasts based on the RMSE, MAE, MPE and MAPE metrics for the period 1900–1960, during which the two World Wars occurred. This confirms that accounting for jumps and leptokurtic features conveys important information for mortality forecasting.
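
The double-exponential jump diffusion can be sketched as follows for the period index. Parameter names follow the standard Kou (2002) setup (lam = jump intensity, p = probability of an upward jump, eta1/eta2 = exponential rates); the values are illustrative, not the paper's calibration to French data:

```python
import numpy as np

rng = np.random.default_rng(1)

def kou_jump_diffusion(k0, drift, sigma, lam, p, eta1, eta2, n_steps):
    """Simulate a period index k_t as Brownian motion with drift plus
    compound-Poisson jumps whose sizes are double-exponential."""
    k = np.empty(n_steps + 1)
    k[0] = k0
    for t in range(n_steps):
        jumps = 0.0
        for _ in range(rng.poisson(lam)):
            if rng.random() < p:
                jumps += rng.exponential(1.0 / eta1)   # upward jump (mortality shock)
            else:
                jumps -= rng.exponential(1.0 / eta2)   # downward jump
        k[t + 1] = k[t] + drift + sigma * rng.normal() + jumps
    return k

path = kou_jump_diffusion(k0=0.0, drift=-0.5, sigma=0.3,
                          lam=0.1, p=0.8, eta1=0.5, eta2=2.0, n_steps=60)
```

The asymmetric exponential tails are what let the model reproduce the sharp wartime spikes and the leptokurtosis the abstract refers to.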


Author(s):  
Ana Debón ◽  
Steven Haberman ◽  
Francisco Montes ◽  
Edoardo Otranto

The parametric model introduced by Lee and Carter in 1992 for modeling mortality rates in the USA was a seminal development in forecasting life expectancies and has been widely used since then. Different extensions of this model, using different hypotheses about the data, constraints on the parameters, and appropriate estimation methods, have improved the model's fit to historical data and its forecasts of the future. This paper's main objective is to evaluate whether differences between models are reflected in different forecasts of mortality indicators. To this end, nine sets of indicator predictions were generated by crossing three models with three block-bootstrap samples, each of size fifty. The predicted mortality indicators were then compared using functional ANOVA. The models and block-bootstrap procedures are applied to Spanish mortality data. Results show model, block-bootstrap, and interaction effects for all mortality indicators. Although it was not our main objective, it is essential to point out that the sample effect should not be present, since the samples are realizations of the same population and the procedure should therefore yield samples that do not influence the results. Regarding the significant model effect, it follows that, although the addition of terms improves the fit of the probabilities and translates into an effect on the mortality indicators, a model's predictions must be checked in terms of both their probabilities and the mortality indicators of interest.
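
The block-bootstrap idea is to resample contiguous blocks so that serial dependence in the residuals survives resampling. A minimal moving-block version (the paper's exact scheme and block length may differ):

```python
import numpy as np

rng = np.random.default_rng(2)

def block_bootstrap(series, block_len):
    """Moving-block bootstrap: draw random starting points, copy whole
    blocks, and concatenate until the original length is reached."""
    n = len(series)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    blocks = [series[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]

residuals = rng.normal(size=100)
sample = block_bootstrap(residuals, block_len=10)
```

Repeating this draw, refitting the model on each resample, and recomputing the indicators is what generates the bootstrap distribution the functional ANOVA then compares across models.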


Author(s):  
Colin O’Hare ◽  
Youwei Li

In recent years, the issue of life expectancy has become of utmost importance to pension providers, insurance companies, and government bodies in the developed world. Significant and consistent improvements in mortality rates, and hence life expectancy, have led to unprecedented increases in the cost of providing for older ages. This has resulted in an explosion of stochastic mortality models that forecast trends in mortality data in order to anticipate future life expectancy and hence quantify the costs of providing for future aging populations. Many stochastic models of mortality rates identify linear trends in mortality rates by time, age, and cohort, and forecast these trends into the future using standard statistical methods. These approaches rely on the assumption that structural breaks in the trend do not exist or do not have a significant impact on the mortality forecasts. Recent literature has started to question this assumption. In this paper, we carry out a comprehensive investigation of the presence or otherwise of structural breaks in a selection of leading mortality models. We find that structural breaks are present in the majority of cases. In particular, we find that allowing for structural breaks, where present, significantly improves the forecast results.
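
A Chow-style search for a single break in a linear trend illustrates the mechanics: compare the sum of squared residuals (SSR) of one trend over the whole sample against separate trends before and after each candidate break date. The synthetic index below (not the paper's data, and simpler than its formal break tests) has a known break at t = 30:

```python
import numpy as np

# Illustrative period index with a trend break at t = 30.
t = np.arange(60)
kappa = np.where(t < 30, -0.5 * t, -15 - 1.0 * (t - 30))

def ssr_linear(y, x):
    """Sum of squared residuals from an OLS line fit."""
    beta = np.polyfit(x, y, 1)
    return np.sum((y - np.polyval(beta, x)) ** 2)

# Gain in fit from splitting the sample at each candidate break date.
full = ssr_linear(kappa, t)
gains = [full - (ssr_linear(kappa[:b], t[:b]) + ssr_linear(kappa[b:], t[b:]))
         for b in range(10, 50)]
best_break = 10 + int(np.argmax(gains))  # largest SSR reduction at the true break
```

Once a break is located, the point of the paper is that the trend should be extrapolated from the post-break regime rather than the pooled sample.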


2021 ◽  
Vol 23 (4) ◽  
pp. 1-12
Author(s):  
Dhamodharavadhani S. ◽  
R. Rathipriya

The main objective of this study is to estimate the future COVID-19 mortality rate for India using COVID-19 mortality rate models from different countries. Here, a regression method with optimal hyperparameters is used to build these models. In the literature, numerous mortality models for infectious diseases have been proposed, most of which predict future mortality by extending one or more disease-related attributes or parameters, and most of which predict mortality rates from historical data. In this paper, a Gaussian process regression model with optimal hyperparameters is used to develop the COVID-19 mortality rate prediction (MRP) model. Five different MRP models have been built, for the U.S., Italy, Germany, Japan, and India. The results show that Germany has the lowest death rate at 2,000-plus confirmed COVID-19 cases. Therefore, if India follows the strategy pursued by Germany, India can control its COVID-19 mortality rate even as confirmed cases increase.
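
Gaussian process regression itself reduces to kernel algebra. A textbook posterior-mean sketch in plain numpy (hypothetical data and hyperparameters; the paper's models use optimised hyperparameters on real case counts):

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential (RBF) kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, length=1.0, var=1.0, noise=1e-4):
    """GP posterior mean: K_*  (K + noise I)^{-1} y."""
    K = rbf(x_train, x_train, length, var) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train, length, var)
    return Ks @ np.linalg.solve(K, y_train)

# Hypothetical mortality rate (%) as a smooth function of scaled case counts.
x = np.linspace(0, 1, 20)
y = 2.0 + 1.5 * x + 0.05 * np.sin(10 * x)
pred = gp_predict(x, y, np.array([0.5]), length=0.2)
```

The `length` and `var` hyperparameters control smoothness and amplitude; optimising them (e.g. by maximising the marginal likelihood) is the "optimal hyperparameter" step the abstract refers to.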


2017 ◽  
Vol 11 (2) ◽  
pp. 343-389 ◽  
Author(s):  
Man Chung Fung ◽  
Gareth W. Peters ◽  
Pavel V. Shevchenko

Abstract This paper explores and develops alternative statistical representations and estimation approaches for dynamic mortality models. The framework we adopt is to reinterpret popular mortality models, such as the Lee-Carter class of models, in a general state-space modelling methodology, which allows modelling, estimation and forecasting of mortality under a unified framework. We propose alternative model identification constraints which are better suited to statistical inference in filtering and parameter estimation. We then develop a class of Bayesian state-space models which incorporate a priori beliefs about the mortality model characteristics, as well as more flexible and appropriate assumptions for the heteroscedasticity that is present in observed mortality data. To study long-term mortality dynamics, we introduce stochastic volatility into the period effect. The estimation of the resulting stochastic volatility model of mortality is performed using a recent class of Monte Carlo procedures known as particle Markov chain Monte Carlo methods. We illustrate the framework using Danish male mortality data, and show that incorporating heteroscedasticity and stochastic volatility markedly improves model fit despite the increase in model complexity. The forecasting properties of the enhanced models are examined with long-term and short-term calibration periods on the reconstruction of life tables.
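
The workhorse inside particle MCMC is a sequential Monte Carlo (particle) filter over the latent state. A minimal bootstrap filter for a latent random-walk period effect observed with noise (a deliberately simplified stand-in for the paper's stochastic-volatility state-space model):

```python
import numpy as np

rng = np.random.default_rng(3)

def particle_filter(obs, n_particles=500, state_sd=0.1, obs_sd=0.2):
    """Bootstrap particle filter: propagate particles through the state
    equation, weight by the observation likelihood, resample, and record
    the filtered mean at each step."""
    particles = rng.normal(0.0, 1.0, n_particles)
    means = []
    for y in obs:
        particles = particles + rng.normal(0, state_sd, n_particles)  # propagate
        logw = -0.5 * ((y - particles) / obs_sd) ** 2                 # log-weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)               # resample
        particles = particles[idx]
        means.append(particles.mean())
    return np.array(means)

true_state = np.cumsum(rng.normal(0, 0.1, 50))
obs = true_state + rng.normal(0, 0.2, 50)
est = particle_filter(obs)
```

In particle MCMC the same filter additionally returns a marginal-likelihood estimate, which drives a Metropolis-Hastings update of the static parameters.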


2016 ◽  
Vol 64 (2) ◽  
pp. 99-104
Author(s):  
Md Hasinur Rahaman Khan ◽  
Sadia Afrin ◽  
Mohammad Shahed Masud

Owing to its simplicity, the Lee-Carter (LC) method has been widely adopted for long-run forecasts of age-specific mortality rates. In this paper the LC model is applied to French mortality data, using age-specific death rates for the period 1816 to 2006. The index of the level of mortality, and the shape and sensitivity coefficients for each age, are obtained through the LC method. The autoregressive moving average and singular value decomposition models are used to forecast the general index over a long period, from 2007 to 2056. The projection is useful since the projected mortality rates can be used to project life expectancy at birth, a widely used social indicator in demography. Dhaka Univ. J. Sci. 64(2): 99-104, 2016 (July)
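
The classical LC estimation step is a rank-1 SVD of the centred log-mortality surface, followed by a random-walk-with-drift forecast of the period index. A sketch on a synthetic surface (not the French data), using the usual identification constraints sum(b_x) = 1 and sum(k_t) = 0:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic log-mortality surface: rows = ages, columns = years.
A, T = 10, 40
ax = np.linspace(-7, -2, A)            # age pattern a_x
bx = np.full(A, 1.0 / A)               # age sensitivities b_x (sum to 1)
kt = -0.8 * np.arange(T)               # declining period index k_t
logm = ax[:, None] + np.outer(bx, kt) + rng.normal(0, 0.01, (A, T))

# Lee-Carter fit: a_x = row means; b_x, k_t from the leading SVD component,
# rescaled so that sum(b_x) = 1 (which forces sum(k_t) = 0 here).
a_hat = logm.mean(axis=1)
U, s, Vt = np.linalg.svd(logm - a_hat[:, None], full_matrices=False)
b_hat = U[:, 0] / U[:, 0].sum()
k_hat = s[0] * Vt[0] * U[:, 0].sum()

# Forecast k_t as a random walk with drift, the standard ARIMA(0,1,0) choice.
drift = np.mean(np.diff(k_hat))
k_fc = k_hat[-1] + drift * np.arange(1, 11)
```

The forecast log rates are then a_hat + b_hat * k_fc for each age, from which life expectancy at birth can be computed.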


2021 ◽  
Vol 37 (04) ◽  
pp. 485-497
Author(s):  
Mushtaq Ahmad Khan Barakzai ◽  
Aqil Burney

This study examines twenty-nine parametric mortality models and assesses their suitability for graduating the mortality rates of urban and rural areas in Pakistan. Grouped age-specific mortality rates of the rural and urban populations for the year 2019 are used. The data are collected from the website of the National Institute of Population Studies, which conducts the Maternal Mortality Survey in Pakistan on a regular basis. The parametric mortality models were applied to the rural and urban mortality data, and R software was used to estimate the models' parameters and assess their suitability for the urban and rural populations. The suitability of these models was assessed using three different loss functions. Our analyses found that the fourth type of Heligman-Pollard model with loss function 3 provides reliable results for graduating the mortality of the rural population, while the second type of Carriere model with loss function 3 produces the best results for graduating the urban mortality of Pakistan. Based on these two models, the mortality rates of the urban and rural populations have been graduated over the age range 0-85. We suggest using the graduated mortality rates of urban and rural areas for pricing life insurance products in rural and urban areas respectively. In addition, the graduated mortality rates are also suggested for use in the calculation of life insurance liabilities.
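
Parametric graduation means fitting a mortality law to crude rates and reading off smooth fitted values. The Heligman-Pollard and Carriere laws need nonlinear optimisation, but the idea can be sketched with the simpler Gompertz law mu_x = B c^x, which is linear in logs (synthetic rates below, not the Pakistani survey data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Crude age-specific rates for ages 40-85 following a Gompertz law with noise.
ages = np.arange(40, 86)
true_B, true_c = 5e-5, 1.09
crude = true_B * true_c ** ages * np.exp(rng.normal(0, 0.05, ages.size))

# Graduate: log mu_x = log B + x log c, so ordinary least squares applies.
slope, intercept = np.polyfit(ages, np.log(crude), 1)
B_hat, c_hat = np.exp(intercept), np.exp(slope)
graduated = B_hat * c_hat ** ages

# One example loss function (a chi-square-type weighted squared error);
# the paper compares three such criteria across its twenty-nine laws.
loss = np.sum((crude - graduated) ** 2 / graduated)
```

Swapping in a richer law changes only the parametric form and the optimiser, not the graduate-then-score workflow.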


2021 ◽  
Author(s):  
Jean Bosco NDIKUBWIMANA ◽  
LAWAL F.K ◽  
James KARAMUZI ◽  
Angelique DUKUNDE ◽  
Evariste GATABAZI ◽  
...  

Abstract Incidence and mortality rates are considered a guideline for planning public health strategies and allocating resources. Several methods have been proposed and used for modeling the mortality of various countries. Among the leading mortality models is the Lee-Carter model, which has been used in various countries and adjudged to fit their mortality well, but it comes with its own limitations, as it was developed in the context of more developed nations. In this research work, we propose functional data analysis techniques to model Nigerian male mortality, using data obtained from the Nigeria Bureau of Statistics for 1998-2010. We compared the results using metrics such as MAPE and MSE. From the results, the improvement in these metrics shows that our model is better than the Lee-Carter model in analyzing Nigerian male mortality.
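
The comparison metrics are worth pinning down, since MAPE in particular is sensitive to the small magnitudes of mortality rates. Standard definitions (our own tiny example values, not the Nigerian data):

```python
import numpy as np

def mse(actual, forecast):
    """Mean squared error."""
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return np.mean((actual - forecast) ** 2)

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return 100 * np.mean(np.abs((actual - forecast) / actual))

actual   = np.array([0.010, 0.012, 0.015])
forecast = np.array([0.011, 0.012, 0.014])
err_mape = mape(actual, forecast)   # about 5.56 %
err_mse  = mse(actual, forecast)
```

Because rates near zero inflate percentage errors, MAPE and MSE can rank competing models differently, which is why papers typically report both.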


2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Sumaira Mubarik ◽  
Ying Hu ◽  
Chuanhua Yu

Abstract Background Precise predictions of incidence and mortality rates due to breast cancer (BC) are required for the planning of public health programs as well as for clinical services. A number of approaches have been established for the prediction of mortality using stochastic models. The performance of these models depends strongly on the different patterns shown by mortality data in different countries. Methods The BC mortality data were retrieved from the Global Burden of Disease (GBD) study 2017 database. This study includes BC mortality rates from 1990 to 2017, for women aged 20 to 80+ years, in different Asian countries. Our study extends the current literature on Asian BC mortality data, both in the number of stochastic mortality models considered and in their rigorous evaluation using the multivariate Diebold-Mariano test and a range of graphical analyses for multiple countries. Results The study findings reveal that stochastic smoothed mortality models based on functional data analysis generally outperform the other Lee-Carter models on the quadratic structure of BC mortality rates, both in terms of goodness of fit and forecast accuracy. Besides, the smoothed Lee-Carter (SLC) model outperforms the functional demographic model (FDM) in the case of a symmetric structure of BC mortality rates, and provides almost comparable results to the FDM in within-sample and out-of-sample forecast accuracy for heterogeneous sets of BC mortality rates. Conclusion Considering the SLC model alongside the others can be helpful in forecasting BC mortality and life expectancy at birth, since it provides even better results in some cases. In the current situation, we can assume that there is no single model which can truly outperform all the others on every population. Therefore, we also suggest generating BC mortality forecasts using multiple models rather than relying upon any single model.
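
The Diebold-Mariano test formalises "model A forecasts better than model B". A basic univariate version on squared-error loss, assuming no serial correlation in the loss differential (the paper uses a multivariate extension across countries; the error series below are simulated):

```python
import numpy as np
from math import erf, sqrt

def diebold_mariano(e1, e2):
    """DM statistic on the squared-error loss differential d_t = e1_t^2 - e2_t^2,
    with a two-sided p-value from the normal approximation."""
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2
    n = len(d)
    dm = d.mean() / np.sqrt(d.var(ddof=1) / n)
    phi = 0.5 * (1 + erf(abs(dm) / sqrt(2)))   # standard normal CDF at |dm|
    return dm, 2 * (1 - phi)

rng = np.random.default_rng(6)
e_model_a = rng.normal(0, 1.0, 200)   # forecast errors of model A
e_model_b = rng.normal(0, 1.5, 200)   # model B is noisier
dm_stat, p_value = diebold_mariano(e_model_a, e_model_b)
```

A significantly negative statistic favours the first model; running the test pairwise over all candidate models is what supports the abstract's ranking of SLC against FDM and the other Lee-Carter variants.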

