MULTIVARIATE LONG-MEMORY COHORT MORTALITY MODELS

2019 ◽  
Vol 50 (1) ◽  
pp. 223-263 ◽  
Author(s):  
Hongxuan Yan ◽  
Gareth W. Peters ◽  
Jennifer S.K. Chan

Abstract: The existence of long memory in mortality data improves the understanding of the features of mortality data and provides a new approach to establishing mortality models. The finding of long-memory phenomena in mortality data motivates us to develop new mortality models by extending the Lee–Carter (LC) model to death counts and incorporating a long-memory model structure. Furthermore, no identification issues arise in the proposed model class; hence, the constraints which cause many computational issues in LC models are removed. The models are applied to analyse mortality death count data sets from three different countries, divided according to gender. Bayesian inference with various selection criteria is applied to perform model parameter estimation and mortality rate forecasting. Results show that the multivariate long-memory mortality model with a long-memory cohort effect outperforms the multivariate extended LC cohort model in both in-sample fitting and out-of-sample forecasting. Increasing the accuracy of mortality rate forecasts and improving the projection of life expectancy are important considerations for insurance companies and governments, since misleading predictions may result in insufficient funds for retirement and pension plans.
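
To make the long-memory idea concrete, here is a minimal sketch (illustrative only, not the authors' implementation) of fractional differencing, the standard device for inducing long memory in a time series such as the LC period index; the function names and the choice d = 0.3 are our assumptions.

```python
import numpy as np

def frac_diff_weights(d: float, n: int) -> np.ndarray:
    """Weights of the fractional difference operator (1 - B)^d,
    truncated at lag n. For 0 < d < 0.5 the process has long memory."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k   # recursion for the binomial coefficients
    return w

def frac_diff(x: np.ndarray, d: float) -> np.ndarray:
    """Apply (1 - B)^d to a series x (e.g. the LC period index kappa_t)."""
    w = frac_diff_weights(d, len(x))
    return np.array([w[:t + 1] @ x[t::-1] for t in range(len(x))])

# Example: simulate fractionally integrated noise, ARFIMA(0, d, 0) with d = 0.3,
# by applying the inverse filter (1 - B)^{-d} to white noise.
rng = np.random.default_rng(1)
eps = rng.standard_normal(200)
x = frac_diff(eps, -0.3)
```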

Author(s):  
Colin O’Hare ◽  
Youwei Li

In recent years, the issue of life expectancy has become of utmost importance to pension providers, insurance companies, and government bodies in the developed world. Significant and consistent improvements in mortality rates, and hence life expectancy, have led to unprecedented increases in the cost of providing for older ages. This has resulted in an explosion of stochastic mortality models forecasting trends in mortality data to anticipate future life expectancy and hence quantify the costs of providing for future aging populations. Many stochastic models of mortality rates identify linear trends in mortality rates by time, age, and cohort, and forecast these trends into the future using standard statistical methods. These approaches rely on the assumption that structural breaks in the trend do not exist or do not have a significant impact on the mortality forecasts. Recent literature has started to question this assumption. In this paper, we carry out a comprehensive investigation of the presence of structural breaks in a selection of leading mortality models. We find that structural breaks are present in the majority of cases. In particular, we find that allowing for structural breaks, where present, significantly improves the forecast results.
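
As an illustration of the kind of break detection at issue, the following least-squares sketch locates a single trend break by minimizing the total residual sum of squares over candidate breakpoints. It is a simplified stand-in under our own assumptions, not the testing procedure used in the paper.

```python
import numpy as np

def best_single_break(y: np.ndarray, min_seg: int = 8):
    """Breakpoint minimizing the total SSE of two separate linear
    trends fitted before and after the break."""
    t = np.arange(len(y), dtype=float)

    def sse(ts, ys):
        beta = np.polyfit(ts, ys, 1)            # slope and intercept
        resid = ys - np.polyval(beta, ts)
        return resid @ resid

    candidates = range(min_seg, len(y) - min_seg)
    scores = {k: sse(t[:k], y[:k]) + sse(t[k:], y[k:]) for k in candidates}
    k_star = min(scores, key=scores.get)
    return k_star, scores[k_star]

# Example: a log-mortality index whose downward trend steepens at t = 40.
rng = np.random.default_rng(0)
t = np.arange(80)
y = -0.01 * t - 0.02 * np.clip(t - 40, 0, None) + rng.normal(0, 0.02, 80)
k, _ = best_single_break(y)   # k should be close to 40
```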


Author(s):  
Muhammad Farooq ◽  
Qamar-uz-zaman ◽  
Muhammad Ijaz

Covid-19 infections are increasing day by day, and the mortality rate is increasing exponentially in both underdeveloped and developed countries. It has become inevitable for mathematicians to develop models that can describe the rate of infections and deaths in a population. Although many probability models exist, they fail to model the different (non-monotonic) structures of the hazard rate function and do not provide an adequate fit to lifetime data. In this paper, a new probability model (FEW) is suggested, designed to evaluate death rates in a population. Various statistical properties of FEW are derived, in addition to parameter estimation by maximum likelihood estimation (MLE). Furthermore, to delineate the significance of the parameters, a simulation study is conducted. Using death data from Pakistan due to the Covid-19 outbreak, the application of the proposed model is studied and compared to that of other existing probability models such as Ex-W, W, Ex, AIFW, and GAPW. The results show that the proposed FEW model provides a much better fit to these data sets than Ex-W, W, Ex, AIFW, and GAPW.
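
The FEW density is not reproduced in the abstract, so the sketch below illustrates only the MLE step, using the exponentiated Weibull (the Ex-W competitor), which SciPy ships as scipy.stats.exponweib; the data are simulated stand-ins, not the Pakistani death records.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

# Illustrative lifetimes drawn from an exponentiated Weibull.
rng = np.random.default_rng(42)
data = stats.exponweib.rvs(a=1.5, c=0.8, scale=10.0, size=300, random_state=rng)

def neg_log_lik(params):
    a, c, scale = params
    if min(a, c, scale) <= 0:
        return np.inf                       # keep the optimizer in the valid region
    return -np.sum(stats.exponweib.logpdf(data, a, c, loc=0, scale=scale))

res = minimize(neg_log_lik, x0=[1.0, 1.0, 5.0], method="Nelder-Mead")
a_hat, c_hat, scale_hat = res.x             # maximum likelihood estimates
```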


Mathematics ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 384
Author(s):  
Francisca Corpas-Burgos ◽  
Miguel A. Martinez-Beneito

One of the more evident uses of spatio-temporal disease mapping is forecasting the spatial distribution of diseases for the next few years beyond the end of the period of study. Spatio-temporal models rely on very different modeling tools (polynomial fit, splines, time series, etc.), which can show very different forecasting properties. In this paper, we introduce an enhancement of a previous autoregressive spatio-temporal model with particularly interesting forecasting properties, given its reliance on time series modeling. We include a common spatial component in that model and show how that component improves the previous model in several ways, its predictive capability being one of them. We explore the theoretical properties of this model and compare them with those of the original autoregressive model. Moreover, we illustrate the benefits of the new model with the aid of a comprehensive study of 46 different mortality data sets in the Valencian Region (Spain), where its benefits become evident.
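
A minimal sketch of the mechanism, assuming a first-order autoregressive structure around a shared spatial level (the paper's full Bayesian formulation is considerably richer; all sizes and parameter values below are our placeholders).

```python
import numpy as np

rng = np.random.default_rng(7)
n_areas, n_years, horizon = 50, 20, 3
rho = 0.7                                    # temporal autoregression
common = rng.normal(0, 0.5, n_areas)         # common spatial component

# Simulate area-specific log-risks:
# theta_t = common + rho * (theta_{t-1} - common) + noise
theta = np.zeros((n_years, n_areas))
theta[0] = common + rng.normal(0, 0.2, n_areas)
for t in range(1, n_years):
    theta[t] = common + rho * (theta[t - 1] - common) + rng.normal(0, 0.2, n_areas)

# Forecasting: the AR structure pulls each area back towards its
# common spatial level, which stabilizes multi-year projections.
forecasts = [common + rho ** h * (theta[-1] - common) for h in range(1, horizon + 1)]
```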


2021 ◽  
pp. 1-38
Author(s):  
Hongxuan Yan ◽  
Gareth W. Peters ◽  
Jennifer Chan

Abstract: Mortality projection and forecasting of life expectancy are two important aspects of the study of demography and life insurance modelling. We demonstrate in this work the existence of long memory in mortality data. Furthermore, models incorporating a long memory structure provide a new approach to enhancing mortality forecasts in terms of accuracy and reliability, which can improve the understanding of mortality. Novel mortality models are developed by extending the Lee–Carter (LC) model for death counts to incorporate a long memory time series structure. To link our extensions to existing actuarial work, we detail the relationship between the classical models of death counts developed under a Generalised Linear Model (GLM) formulation and our proposed extensions, developed under an extension to the GLM framework known in the time series literature as Generalised Linear Autoregressive Moving Average (GLARMA) regression models. Bayesian inference is applied to estimate the model parameters. The Deviance Information Criterion (DIC) is evaluated to select between different LC extensions of our proposed models in terms of both in-sample fit and out-of-sample forecast performance. Furthermore, we compare our new models against existing model structures proposed in the literature when applied to the analysis of death count data sets from 16 countries, divided according to gender and age group. Estimates of mortality rates are used to calculate life expectancies when constructing life tables. By comparing different life expectancy estimates, the results show that the LC model without the long memory component may underestimate life expectancy, while the long memory extensions reduce this effect. In summary, it is valuable to investigate how the long memory feature in mortality influences life expectancies in the construction of life tables.
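
The DIC used for model selection has a standard form, DIC = D(theta_bar) + 2 p_D, where p_D is the posterior mean deviance minus the deviance at the posterior mean. A minimal sketch of its computation from posterior draws, assuming a user-supplied log-likelihood function:

```python
import numpy as np

def dic(log_lik_fn, posterior_samples):
    """Deviance Information Criterion from posterior draws.
    log_lik_fn(theta) -> total log-likelihood of the data at theta;
    posterior_samples -> array of shape (n_draws, n_params)."""
    deviances = np.array([-2.0 * log_lik_fn(th) for th in posterior_samples])
    theta_bar = np.mean(posterior_samples, axis=0)    # posterior mean
    d_bar = deviances.mean()                          # posterior mean deviance
    d_at_mean = -2.0 * log_lik_fn(theta_bar)
    p_d = d_bar - d_at_mean                           # effective number of parameters
    return d_at_mean + 2.0 * p_d                      # equivalently d_bar + p_d
```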


Risks ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 35
Author(s):  
Hong Li ◽  
Yanlin Shi

This paper proposes an age-coherent sparse Vector Autoregression (VAR) mortality model, which combines the appealing features of existing VAR-based mortality models to forecast future mortality rates. In particular, the proposed model uses a data-driven method to determine the autoregressive coefficient matrix and then employs a rotation algorithm in the projection phase to generate age-coherent mortality forecasts. In the estimation phase, the age-specific mortality improvement rates are fitted to a VAR model with dimension-reduction algorithms such as the elastic net. In the projection phase, the projected mortality improvement rates are assumed to follow a short-term fluctuation component and a long-term force of decay, and in expectation they eventually converge to an age-invariant mean. The age-invariance of the long-term mean guarantees age-coherent mortality projections. The proposed model is generalized to the multi-population context in a computationally efficient manner. Using single-age, uni-sex mortality data from the UK and France, we show that the proposed model generates more reasonable long-term projections, as well as more accurate short-term out-of-sample forecasts, than popular existing mortality models under various settings. The proposed model is therefore expected to be an appealing alternative to existing mortality models in insurance and demographic analyses.
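
A rough sketch of the estimation phase under the stated design, fitting one elastic-net regression per age with scikit-learn; the rotation step is only indicated in a comment, and all sizes and penalties are placeholder assumptions.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# Z: T x A matrix of age-specific mortality improvement rates (synthetic here).
rng = np.random.default_rng(3)
T, A = 50, 30
Z = rng.normal(0, 0.01, size=(T, A))

X, Y = Z[:-1], Z[1:]                  # VAR(1): Z_t = B Z_{t-1} + eps_t
B = np.zeros((A, A))
for j in range(A):                    # one sparse regression per age
    model = ElasticNet(alpha=1e-4, l1_ratio=0.5, fit_intercept=False,
                       max_iter=10_000)
    B[j] = model.fit(X, Y[:, j]).coef_

# Projection: iterate Z_{t+1} = B Z_t; the paper's rotation algorithm would
# additionally shrink long-horizon forecasts towards an age-invariant mean.
z_next = B @ Z[-1]
```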


2021 ◽  
Vol 25 (3) ◽  
pp. 687-710
Author(s):  
Mostafa Boskabadi ◽  
Mahdi Doostparast

Regression trees are powerful data mining tools for analyzing data sets. Observations are usually divided into homogeneous groups, and statistical models for the responses are then derived in the terminal nodes. This paper proposes a new approach to regression trees that considers the dependency structure among covariates when splitting the observations. The mathematical properties of the proposed method are discussed in detail. Various criteria are defined to assess the accuracy of the proposed model, and the performance of the new approach is assessed through a Monte Carlo simulation study. Two real data sets, covering classification and regression problems, are analyzed using the obtained results.
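
For orientation, here is the classical CART variance-reduction split that dependency-aware criteria such as the one proposed here modify; a minimal single-covariate sketch, not the authors' criterion.

```python
import numpy as np

def best_split(x: np.ndarray, y: np.ndarray):
    """Best threshold on one covariate by variance reduction:
    maximize Var(y) - weighted average of child variances."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best = (None, -np.inf)
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue                       # no valid threshold between ties
        left, right = ys[:i], ys[i:]
        gain = ys.var() - (len(left) * left.var()
                           + len(right) * right.var()) / len(ys)
        if gain > best[1]:
            best = ((xs[i - 1] + xs[i]) / 2, gain)
    return best                            # (threshold, variance reduction)
```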


Author(s):  
Kyungkoo Jun

Background & Objective: This paper proposes a Fourier-transform-inspired method to classify human activities from time series sensor data. Methods: Our method begins by decomposing the 1D input signal into 2D patterns, motivated by the Fourier transform. The decomposition is aided by a Long Short-Term Memory (LSTM) network, which captures the temporal dependency of the signal and produces encoded sequences. The sequences, once arranged into a 2D array, can represent fingerprints of the signals. The benefit of such a transformation is that we can exploit recent advances in deep learning models for image classification, such as the Convolutional Neural Network (CNN). Results: The proposed model is therefore a combination of LSTM and CNN. We evaluate the model on two data sets. For the first data set, which is more standardized than the other, our model outperforms, or at least equals, previous works. For the second data set, we devise schemes to generate training and testing data by varying the window size, the sliding size, and the labeling scheme. Conclusion: The evaluation results show that the accuracy exceeds 95% in some cases. We also analyze the effect of the parameters on performance.
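
A hedged Keras sketch of the described LSTM-to-2D-fingerprint-to-CNN pipeline; the layer sizes, window length, and class count are placeholders, not the paper's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

window, channels, n_classes = 128, 3, 6     # placeholder dimensions

model = tf.keras.Sequential([
    layers.Input(shape=(window, channels)),
    layers.LSTM(64, return_sequences=True),  # encode temporal dependency per step
    layers.Reshape((window, 64, 1)),         # arrange encodings as a 2D "image"
    layers.Conv2D(16, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```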


2020 ◽  
Vol 16 (3) ◽  
pp. 263-290
Author(s):  
Hui Guan ◽  
Chengzhen Jia ◽  
Hongji Yang

Since computing semantic similarity tends to simulate the thinking process of humans, semantic dissimilarity must play a part in this process. In this paper, we present a new approach to measuring semantic similarity that takes dissimilarity into consideration during the computation. Specifically, the proposed measures explore the potential antonymy in the hierarchical structure of WordNet to represent the dissimilarity between concepts, and then combine this dissimilarity with the results of existing methods to obtain semantic similarity results. The relation between the parameters and the correlation value is discussed in detail. The proposed model is then applied at different levels of text granularity to validate the correctness of the similarity measurement. Experimental results show that the proposed approach not only achieves a high correlation with human ratings but also effectively improves existing path-distance-based methods at the word similarity level, while also correcting an existing sentence similarity method in some cases on the Microsoft Research Paraphrase Corpus and the SemEval-2014 data set.
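
A minimal sketch of the idea using NLTK's WordNet interface: take the best path similarity between synsets and penalize it when an antonym relation links the words. The fixed penalty is our stand-in for the paper's parameterized combination scheme.

```python
from nltk.corpus import wordnet as wn   # requires: nltk.download("wordnet")

def adjusted_similarity(w1: str, w2: str, penalty: float = 0.5) -> float:
    """Path similarity penalized when an antonym relation links the words."""
    s1, s2 = wn.synsets(w1), wn.synsets(w2)
    if not s1 or not s2:
        return 0.0
    # Best path similarity across all synset pairs (None for cross-POS pairs).
    sim = max((a.path_similarity(b) or 0.0) for a in s1 for b in s2)
    antonyms = {ant.name()
                for syn in s1 for lemma in syn.lemmas()
                for ant in lemma.antonyms()}
    if any(lemma.name() in antonyms for syn in s2 for lemma in syn.lemmas()):
        sim -= penalty                  # dissimilarity pulls the score down
    return max(sim, 0.0)

print(adjusted_similarity("hot", "cold"))
```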


2021 ◽  
pp. 000276422110216
Author(s):  
Kazimierz M. Slomczynski ◽  
Irina Tomescu-Dubrow ◽  
Ilona Wysmulek

This article proposes a new approach to analyze protest participation measured in surveys of uneven quality. Because single international survey projects cover only a fraction of the world’s nations in specific periods, researchers increasingly turn to ex-post harmonization of different survey data sets not a priori designed as comparable. However, very few scholars systematically examine the impact of the survey data quality on substantive results. We argue that the variation in source data, especially deviations from standards of survey documentation, data processing, and computer files—proposed by methodologists of Total Survey Error, Survey Quality Monitoring, and Fitness for Intended Use—is important for analyzing protest behavior. In particular, we apply the Survey Data Recycling framework to investigate the extent to which indicators of attending demonstrations and signing petitions in 1,184 national survey projects are associated with measures of data quality, controlling for variability in the questionnaire items. We demonstrate that the null hypothesis of no impact of measures of survey quality on indicators of protest participation must be rejected. Measures of survey documentation, data processing, and computer records, taken together, explain over 5% of the intersurvey variance in the proportions of the populations attending demonstrations or signing petitions.
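
As a sketch of the variance-explained calculation described above, one can regress survey-level protest proportions on the quality measures and read off R squared; the data below are synthetic stand-ins, not the harmonized survey database used in the article.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in: one row per national survey project.
rng = np.random.default_rng(11)
n_surveys = 1184
quality = rng.normal(size=(n_surveys, 3))   # documentation, processing, records
prop_demo = (0.15 + 0.01 * (quality @ np.array([0.5, 0.3, 0.2]))
             + rng.normal(0, 0.05, n_surveys))

# Share of intersurvey variance in protest proportions explained by quality.
r2 = LinearRegression().fit(quality, prop_demo).score(quality, prop_demo)
```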

