Weighted proportional mean inactivity time model

2021 ◽  
Vol 7 (3) ◽  
pp. 4038-4060
Author(s):  
Mohamed Kayid ◽  
Adel Alrasheedi

<abstract><p>In this paper, a mean inactivity time frailty model is considered. Examples are given to calculate the mean inactivity time for several well-known survival models. The dependence structure between the population variable and the frailty variable is characterized. The classical weighted proportional mean inactivity time model is considered as a special case. We prove that several well-known stochastic orderings between two frailties are preserved for the response variables under the weighted proportional mean inactivity time model. We apply this model to a real data set and also perform a simulation study to examine the accuracy of the model.</p></abstract>
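For context, the mean inactivity time of a lifetime T with cdf F is m(t) = E[t - T | T <= t] = (1/F(t)) * integral of F(u) over [0, t]. A minimal numerical sketch (not from the paper; the exponential distribution and its parameter are chosen purely for illustration):

```python
import math

def mit_exponential(t, lam):
    """Closed-form mean inactivity time for Exp(lam):
    m(t) = (1/F(t)) * integral_0^t F(u) du, with F(u) = 1 - exp(-lam*u)."""
    F_t = 1.0 - math.exp(-lam * t)
    integral = t - F_t / lam          # integral of F over [0, t]
    return integral / F_t

def mit_numeric(t, lam, n=100_000):
    """Trapezoidal check of the same ratio."""
    F = lambda u: 1.0 - math.exp(-lam * u)
    h = t / n
    area = h * (0.5 * (F(0.0) + F(t)) + sum(F(i * h) for i in range(1, n)))
    return area / F(t)

print(round(mit_exponential(2.0, 1.0), 4))  # 1.313
print(round(mit_numeric(2.0, 1.0), 4))      # agrees with the closed form
```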

2017 ◽  
Vol 27 (11) ◽  
pp. 3207-3223 ◽  
Author(s):  
Thiago G Ramires ◽  
Gauss M Cordeiro ◽  
Michael W Kattan ◽  
Niel Hens ◽  
Edwin MM Ortega

Cure fraction models are useful to model lifetime data with long-term survivors. We propose a flexible four-parameter cure rate survival model called the log-sinh Cauchy promotion time model for predicting breast carcinoma survival in women who underwent mastectomy. The model can estimate simultaneously the effects of the explanatory variables on the timing acceleration/deceleration of a given event, the surviving fraction, the heterogeneity, and the possible existence of bimodality in the data. In order to examine the performance of the proposed model, simulations are presented to verify the robust aspects of this flexible class against outlying and influential observations. Furthermore, we determine some diagnostic measures and the one-step approximations of the estimates in the case-deletion model. The new model was implemented in the generalized additive model for location, scale and shape package of the R software, which is presented throughout the paper by way of a brief tutorial on its use. The potential of the new regression model to accurately predict breast carcinoma mortality is illustrated using a real data set.
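The promotion time structure that underlies such cure rate models sets the population survival to S(t) = exp(-theta * F(t)), so the surviving (cured) fraction is exp(-theta). A minimal sketch, with a hypothetical exponential latent cdf standing in for the paper's log-sinh Cauchy distribution:

```python
import math

def promotion_time_survival(t, theta, F):
    """Promotion time cure model: S(t) = exp(-theta * F(t)), where F is a
    proper cdf of the latent event times. As t grows, F(t) -> 1 and
    S(t) -> exp(-theta), the long-term survivor (cure) fraction."""
    return math.exp(-theta * F(t))

# hypothetical latent cdf (exponential), not the paper's log-sinh Cauchy
F = lambda t: 1.0 - math.exp(-0.5 * t)

print(round(promotion_time_survival(0.0, 1.2, F), 4))   # 1.0
print(round(promotion_time_survival(60.0, 1.2, F), 4))  # close to exp(-1.2)
```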


2006 ◽  
Author(s):  
Θεοδώρα Δημητρακοπούλου

The study of events involving an element of time has a long and important history in statistical research and practice. Survival analysis is a collection of statistical procedures for the analysis of data where the response of interest is the time until an event occurs. Though such events may refer to any designated experience of interest, they are generally referred to as 'failures', whereas the time to their occurrence is referred to as 'lifetime' or 'failure time'. Examples of failure times include the lifetimes of machine components in industrial reliability, the durations of strikes or periods of unemployment in economics, the times taken by subjects to complete specified tasks in psychological experimentation, and the survival or remission times of patients in clinical trials. Generally speaking, the estimation, prediction or optimization of survival probabilities or life expectancies has become an issue of considerable interest in many different fields of human life and activity. Therefore, survival analysis has developed into an important tool for researchers in many areas, particularly those involving biomedical studies and industrial life testing. This dissertation is concerned with continuous lifetime models. In this context, the first chapter provides a short overview of the basic concepts of survival analysis. Distribution representations of the time to failure are given when the life lengths are measured by a continuous nonnegative random variable, and special emphasis is placed on the hazard function due to its intuitive appeal. In the sequel, several popular univariate lifetime distributions are presented, and two specialized models designed to describe more complicated failure patterns (competing risks and frailty models) are briefly examined. The basic concepts of survival analysis for bivariate populations are considered next and the most popular bivariate lifetime distributions are reported.
In the second chapter, various statistical properties and reliability aspects of a two-parameter distribution with decreasing and increasing failure rates are explored. The model includes the Exponential-Geometric distribution (Adamidis and Loukas, 1998) as a special case. Characterizations are given and the estimation of parameters is studied by the method of maximum likelihood. An EM algorithm (Dempster et al., 1977) is proposed for computing the estimates, and expressions for their asymptotic variances and covariances are derived. Numerical examples based on real data are shown to illustrate the applicability of the new model. The results of this chapter are included in Adamidis et al. (2005). Though the most popular lifetime models are those with monotone hazard rates, when the entire life span of a biological entity or a manufactured item is under consideration, high initial and eventual failure rates are frequently observed, indicating a bathtub-shaped failure rate (Gaver and Acar, 1979). Also, situations involving a high occurrence of early 'failures' are best modeled by distributions with upturned bathtub-shaped hazard rates (Chhikara and Folks, 1977). In the third chapter, a three-parameter lifetime distribution with increasing, decreasing, bathtub and upside-down bathtub shaped failure rates is introduced. The new model includes the Weibull distribution as a special case. A motivation for its derivation is given using a competing risks interpretation when restricting its parametric space. Several of its statistical properties and reliability aspects are explored and the estimation of the parameters is studied using standard maximum likelihood procedures. Applications of the model to real data are also included. The results of this chapter are included in Dimitrakopoulou et al. (2006b).
In the fourth chapter, bivariate extensions of the model introduced in the second chapter are presented, along with the physical considerations leading to their derivation. Marginal and conditional distributions are obtained and their corresponding survival and hazard functions are calculated. The dependence in the proposed bivariate distributions is evaluated by means of the Pearson correlation coefficient. The models presented so far implicitly assume that the population under study is homogeneous, an assumption which is often unrealistic in practice. However, heterogeneity is not only of interest in its own right but actually distorts what is observed. One of the ways of assessing the impact of heterogeneity in mortality studies is via the concept of frailty introduced by Vaupel et al. (1979). When the multiplicative frailty model is under consideration (e.g. Hougaard, 1984), the assumption of a gamma distributed frailty leads to the so-called gamma frailty model. Chapter five is devoted to exploiting some aspects of its relevant distribution theory. Failure rate characterizations are obtained and bounds on the survival function are constructed. Moreover, it is shown that the model can serve as a method of constructing lifetime models or extending existing ones (by adding a parameter in the sense of Marshall and Olkin (1997)). Therefore, the investigation of its reliability aspects provides a unified approach to studying lifetime distributions in a reliability context and a way of assessing the impact of the 'average' individual survival capacity - in the presence of heterogeneity - on what is actually observed. The results of this chapter are included in Dimitrakopoulou et al. (2006a).
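The gamma frailty model discussed for chapter five has a well-known closed form: if the conditional survival given the frailty Z is exp(-Z * H0(t)) and Z is gamma distributed with mean 1 and variance theta, the marginal survival is (1 + theta * H0(t))^(-1/theta). A small sketch checking this identity by Monte Carlo (all parameter values are illustrative, not from the dissertation):

```python
import math
import random

def gamma_frailty_survival(H0, theta):
    """Marginal survival under multiplicative gamma frailty:
    S(t) = E[exp(-Z * H0(t))] with Z ~ Gamma(mean 1, variance theta),
    which has the closed form (1 + theta * H0(t)) ** (-1 / theta)."""
    return (1.0 + theta * H0) ** (-1.0 / theta)

def monte_carlo_check(H0, theta, n=200_000, seed=1):
    """Direct Monte Carlo average of exp(-Z * H0) over gamma draws."""
    rng = random.Random(seed)
    # gammavariate(shape, scale): shape 1/theta, scale theta => mean 1, var theta
    draws = (rng.gammavariate(1.0 / theta, theta) for _ in range(n))
    return sum(math.exp(-z * H0) for z in draws) / n

exact = gamma_frailty_survival(H0=0.8, theta=0.5)
approx = monte_carlo_check(H0=0.8, theta=0.5)
print(round(exact, 4), round(approx, 4))  # both near (1.4)**-2 ~ 0.5102
```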


2019 ◽  
Vol 42 (1) ◽  
pp. 35-59
Author(s):  
Elizabeth González Patiño ◽  
Gisela Tunes ◽  
Maria Isabel Munera

In this paper, the structure of semicompeting risks data, defined by Fine, Jiang & Chappell (2001), is studied. Two events are of interest: a nonterminal and a terminal event; the terminal event can censor the nonterminal event, but not vice versa. Due to the possible dependence between the times until the occurrence of these events, two approaches are evaluated: modelling the bivariate survival function through Archimedean copulas, and a shared frailty model. A simulation study is conducted to examine their performance, and both approaches are applied to a real data set of patients with chronic kidney disease (CKD).
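As a hedged illustration of the copula approach (not the authors' specific implementation), the Clayton copula, which is the Archimedean copula induced by a shared gamma frailty, can be sampled by the standard Marshall-Olkin method, and its Kendall's tau has the closed form theta / (theta + 2):

```python
import math
import random

def sample_clayton(n, theta, seed=7):
    """Marshall-Olkin sampling from the Clayton copula (theta > 0):
    draw Z ~ Gamma(1/theta, 1), then U = (1 - log(V)/Z)^(-1/theta) for
    V ~ Uniform(0, 1). The same Z is shared by both margins, which is
    exactly the shared-frailty construction mentioned in the abstract."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z = rng.gammavariate(1.0 / theta, 1.0)
        u = (1.0 - math.log(rng.random()) / z) ** (-1.0 / theta)
        v = (1.0 - math.log(rng.random()) / z) ** (-1.0 / theta)
        pairs.append((u, v))
    return pairs

def kendall_tau(pairs):
    """Empirical Kendall's tau; for Clayton, tau = theta / (theta + 2)."""
    n = len(pairs)
    s = 0
    for i in range(n):
        ui, vi = pairs[i]
        for j in range(i + 1, n):
            uj, vj = pairs[j]
            s += 1 if (ui - uj) * (vi - vj) > 0 else -1
    return 2.0 * s / (n * (n - 1))

tau_hat = kendall_tau(sample_clayton(n=1500, theta=2.0))
print(round(tau_hat, 2))  # theoretical tau = 2 / (2 + 2) = 0.5
```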


2018 ◽  
Vol 34 (3) ◽  
pp. 364-380
Author(s):  
Daoyuan Shi ◽  
Lynn Kuo

Variable selection has been an important topic in regression and Bayesian survival analysis. In the era of rapid development of genomics and precision medicine, the topic is becoming more important and challenging. In addition to the challenges of handling censored data in survival analysis, we face an increasing demand for handling big data with many predictors, most of which may not be relevant to the prediction of the survival outcome. With the aim of improving the accuracy of prediction, we explore the Bregman divergence criterion for selecting predictive models. We develop sparse Bayesian formulations for parametric and semiparametric regression models and demonstrate how variable selection is done using the predictive approach. Model selections for a simulated data set and two real data sets (one for a kidney transplant study, and the other for a breast cancer microarray study at the Memorial Sloan-Kettering Cancer Center) are carried out to illustrate our methods.
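For reference, the Bregman divergence generated by a convex function phi is D_phi(x, y) = phi(x) - phi(y) - phi'(y)(x - y); squared error and (generalized) Kullback-Leibler divergence are the two classic special cases. A minimal scalar sketch, not the paper's full model-selection machinery:

```python
import math

def bregman(phi, grad_phi, x, y):
    """Scalar Bregman divergence: D(x, y) = phi(x) - phi(y) - phi'(y)(x - y)."""
    return phi(x) - phi(y) - grad_phi(y) * (x - y)

# phi(x) = x^2 recovers squared error: D(x, y) = (x - y)^2
sq = bregman(lambda x: x * x, lambda x: 2.0 * x, 3.0, 1.0)
print(sq)  # 4.0

# phi(x) = x log x recovers generalized KL: D(x, y) = x log(x/y) - x + y
kl = bregman(lambda x: x * math.log(x), lambda x: math.log(x) + 1.0, 2.0, 1.0)
print(round(kl, 4))  # 2*log(2) - 1 ~ 0.3863
```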


Risks ◽  
2020 ◽  
Vol 8 (1) ◽  
pp. 3 ◽  
Author(s):  
Stephan M. Bischofberger

We introduce a generalization of the one-dimensional accelerated failure time model allowing the covariate effect to be any positive function of the covariate. This function and the baseline hazard rate are estimated nonparametrically via an iterative algorithm. In an application in non-life reserving, the survival time models the settlement delay of a claim and the covariate effect is often called operational time. The accident date of a claim serves as covariate. The estimated hazard rate is a nonparametric continuous-time alternative to chain-ladder development factors in reserving and is used to forecast outstanding liabilities. Hence, we provide an extension of the chain-ladder framework for claim numbers without the assumption of independence between settlement delay and accident date. Our proposed algorithm is an unsupervised learning approach to reserving that detects operational time in the data and adjusts for it in the estimation process. Advantages of the new estimation method are illustrated in a data set consisting of paid claims from a motor insurance business line on which we forecast the number of outstanding claims.
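In the classical one-dimensional accelerated failure time mechanism that this paper generalizes, a positive covariate effect psi(x) rescales time: T = T0 / psi(x) for a baseline lifetime T0. A toy simulation under an assumed baseline and effect (both hypothetical, not estimated from claims data):

```python
import math
import random

def sample_aft(x, psi, n, seed=3):
    """Classical AFT mechanism: T = T0 / psi(x) with baseline T0 ~ Exp(1),
    so conditionally T | x ~ Exp(psi(x)) and E[T | x] = 1 / psi(x)."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0) / psi(x) for _ in range(n)]

# hypothetical positive covariate effect (the paper estimates this
# function nonparametrically; here it is fixed for illustration)
psi = lambda x: math.exp(0.5 * x)

times = sample_aft(x=2.0, psi=psi, n=100_000)
mean_t = sum(times) / len(times)
print(round(mean_t, 3))  # should be near 1 / psi(2) = exp(-1) ~ 0.368
```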


2021 ◽  
pp. 096228022110111
Author(s):  
Katy C Molina ◽  
Vinicius F Calsavara ◽  
Vera D Tomazella ◽  
Eder A Milani

Survival models with a frailty term are presented as an extension of Cox's proportional hazards model, in which a random effect is introduced into the hazard function in a multiplicative form with the aim of modeling the unobserved heterogeneity in the population. Candidates for the frailty distribution are usually assumed to be continuous and non-negative. However, this assumption may not hold in some situations. In this paper, we consider a discretely distributed frailty model that allows units with zero frailty, which can be interpreted as having long-term survivors. We propose a new discrete frailty-induced survival model with a zero-modified power series family, which can be zero-inflated or zero-deflated depending on the parameter value. Parameter estimation was carried out using the maximum likelihood method, and the performance of the proposed models was assessed by Monte Carlo simulation studies. Finally, the applicability of the proposed models was illustrated with a real melanoma cancer data set.
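The long-term survivor mechanism can be seen directly from the marginal survival S(t) = sum over k of P(Z = k) * exp(-k * H0(t)): any mass at Z = 0 never fails, so S(t) tends to P(Z = 0). A sketch with a zero-modified Poisson frailty standing in for the paper's zero-modified power series family (all values illustrative):

```python
import math

def discrete_frailty_survival(H0, probs):
    """Marginal survival under discrete frailty:
    S(t) = sum_k P(Z = k) * exp(-k * H0(t)). Units with Z = 0 never fail,
    so S(t) -> P(Z = 0) (the cure fraction) as H0(t) grows."""
    return sum(p * math.exp(-k * H0) for k, p in probs.items())

# illustrative zero-modified Poisson frailty: extra mass p0 at zero,
# remaining mass from a Poisson(2) truncated above zero
p0, lam, K = 0.3, 2.0, 30
pois = [math.exp(-lam) * lam ** k / math.factorial(k) for k in range(K)]
tail = sum(pois[1:])
probs = {0: p0, **{k: (1 - p0) * pois[k] / tail for k in range(1, K)}}

print(round(discrete_frailty_survival(0.0, probs), 4))   # 1.0
print(round(discrete_frailty_survival(50.0, probs), 4))  # -> 0.3 cure fraction
```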


2019 ◽  
Vol 18 (01) ◽  
pp. 365-387 ◽  
Author(s):  
Zheng Wei ◽  
Seongyong Kim ◽  
Boseung Choi ◽  
Daeyoung Kim

The exchangeability and radial symmetry assumptions on the dependence structure of multivariate data are restrictive in practical situations where the variables of interest are not likely to be associated with each other in an identical manner. In this paper, we propose a flexible class of multivariate skew normal copulas to model high-dimensional asymmetric dependence patterns. The proposed copulas have two sets of parameters capturing asymmetric dependence: one for the association between the variables and the other for the skewness of the variables. In order to estimate the two sets of parameters efficiently, we introduce the block coordinate ascent algorithm and discuss its convergence property. The proposed class of multivariate skew normal copulas is illustrated using a real data set.
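Copula fitting of this kind typically starts from pseudo-observations: each margin is rank-transformed to (0, 1), so that only the dependence structure remains. A generic sketch of that preprocessing step (the skew normal copula itself is not implemented here; the sample is a toy placeholder):

```python
import random

def pseudo_observations(pairs):
    """Rank-transform each margin to (0, 1): the pseudo-observations on
    which copula parameters are typically estimated, since they carry the
    dependence structure but no marginal information."""
    n = len(pairs)
    def ranks(vals):
        order = sorted(range(n), key=lambda i: vals[i])
        r = [0.0] * n
        for pos, i in enumerate(order):
            r[i] = (pos + 1) / (n + 1)
        return r
    us = ranks([p[0] for p in pairs])
    vs = ranks([p[1] for p in pairs])
    return list(zip(us, vs))

# toy positively dependent bivariate sample (placeholder for real data)
rng = random.Random(5)
sample = [(z1, 0.8 * z1 + 0.6 * rng.gauss(0, 1))
          for z1 in (rng.gauss(0, 1) for _ in range(500))]

pseudo = pseudo_observations(sample)
rho = 12.0 * sum((u - 0.5) * (v - 0.5) for u, v in pseudo) / len(pseudo)
print(round(rho, 2))  # positive rank correlation survives the transform
```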


2019 ◽  
Vol XVI (2) ◽  
pp. 1-11
Author(s):  
Farrukh Jamal ◽  
Hesham Mohammed Reyad ◽  
Soha Othman Ahmed ◽  
Muhammad Akbar Ali Shah ◽  
Emrah Altun

A new three-parameter continuous model called the exponentiated half-logistic Lomax distribution is introduced in this paper. Basic mathematical properties of the proposed model are investigated, including raw and incomplete moments, skewness, kurtosis, generating functions, Rényi entropy, Lorenz, Bonferroni and Zenga curves, probability weighted moments, the stress-strength model, order statistics, and record statistics. The model parameters are estimated by the maximum likelihood criterion, and the behaviour of these estimates is examined through a simulation study. The applicability of the new model is illustrated by applying it to a real data set.


Author(s):  
Parisa Torkaman

The generalized inverted exponential distribution is introduced as a lifetime model with good statistical properties. In this paper, the estimation of the probability density function and the cumulative distribution function of this distribution is considered using five different estimation methods: the uniformly minimum variance unbiased (UMVU), maximum likelihood (ML), least squares (LS), weighted least squares (WLS) and percentile (PC) estimators. The performance of these estimation procedures is compared through numerical simulations based on the mean squared error (MSE). The simulation studies show that the UMVU estimator performs better than the others, and that when the sample size is large enough the ML and UMVU estimators are almost equivalent and more efficient than the LS, WLS and PC estimators. Finally, a real data set is analyzed for illustration.
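As a hedged template for this kind of MSE comparison (using a simpler exponential model in place of the generalized inverted exponential distribution), one can contrast the ML estimator of a rate with its unbiased correction, which is also the UMVU estimator in that model:

```python
import random

def simulate_mse(n=10, lam=2.0, reps=20_000, seed=11):
    """Monte Carlo MSE comparison of two estimators of an exponential rate:
    the ML estimator n/S and its unbiased correction (n-1)/S, where S is
    the sample total. Illustrates an MSE-based comparison on a toy model."""
    rng = random.Random(seed)
    se_ml = se_u = 0.0
    for _ in range(reps):
        s = sum(rng.expovariate(lam) for _ in range(n))
        ml = n / s                  # maximum likelihood estimator
        umvu = (n - 1) / s          # unbiased (UMVU) correction
        se_ml += (ml - lam) ** 2
        se_u += (umvu - lam) ** 2
    return se_ml / reps, se_u / reps

mse_ml, mse_umvu = simulate_mse()
print(round(mse_ml, 3), round(mse_umvu, 3))  # UMVU has the smaller MSE here
```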


2019 ◽  
Vol 14 (2) ◽  
pp. 148-156
Author(s):  
Nighat Noureen ◽  
Sahar Fazal ◽  
Muhammad Abdul Qadir ◽  
Muhammad Tanvir Afzal

Background: Specific combinations of histone modifications (HMs), contributing towards the histone code hypothesis, lead to various biological functions. HM combinations have been utilized by various studies to divide the genome into different regions, which have been classified as chromatin states. Mostly Hidden Markov Model (HMM) based techniques have been utilized for this purpose, using data from Next Generation Sequencing (NGS) platforms. Chromatin states based on histone modification combinatorics are annotated by mapping them to functional regions of the genome. Till now, the number of states predicted by the HMM tools has been justified biologically. Objective: The present study aimed at providing a computational scheme to identify the underlying hidden states in the data under consideration. Methods: We proposed a computational scheme, HCVS, based on a hierarchical clustering and visualization strategy in order to achieve the objective of the study. Results: We tested the proposed scheme on a real data set of nine cell types comprising nine chromatin marks. The approach successfully identified the state numbers for various possibilities, and the results showed quite good correlation with one of the existing models. Conclusion: The HCVS model not only helps in deciding the optimal number of states for particular data but also justifies the results biologically, thereby correlating the computational and biological aspects.
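A minimal sketch of the hierarchical-clustering idea behind HCVS (toy 0/1 mark matrix and average linkage chosen purely for illustration; not the authors' actual pipeline):

```python
def hamming(a, b):
    """Number of differing histone-mark calls between two genome bins."""
    return sum(x != y for x, y in zip(a, b))

def agglomerative(rows, k):
    """Naive average-linkage agglomerative clustering of binary
    histone-mark vectors, merging the closest pair of clusters
    until only k clusters (candidate chromatin states) remain."""
    clusters = [[i] for i in range(len(rows))]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = sum(hamming(rows[a], rows[b])
                        for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# toy genome bins x 4 marks (hypothetical presence/absence calls)
rows = [
    (1, 1, 0, 0), (1, 1, 0, 0), (1, 0, 0, 0),   # "promoter-like" bins
    (0, 0, 1, 1), (0, 0, 1, 1), (0, 1, 1, 1),   # "enhancer-like" bins
]
states = agglomerative(rows, k=2)
print(sorted(sorted(c) for c in states))  # [[0, 1, 2], [3, 4, 5]]
```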

