General regression model for the subdistribution of a competing risk under left-truncation and right-censoring

Biometrika ◽  
2020 ◽  
Vol 107 (4) ◽  
pp. 949-964
Author(s):  
A Bellach ◽  
M R Kosorok ◽  
P B Gilbert ◽  
J P Fine

Summary Left-truncation poses extra challenges for the analysis of complex time-to-event data. We propose a general semiparametric regression model for left-truncated and right-censored competing risks data that is based on a novel weighted conditional likelihood function. Targeting the subdistribution hazard, our parameter estimates are directly interpretable with regard to the cumulative incidence function. We compare different weights from recent literature and develop a heuristic interpretation from a cure model perspective that is based on pseudo risk sets. Our approach accommodates external time-dependent covariate effects on the subdistribution hazard. We establish consistency and asymptotic normality of the estimators and propose a sandwich estimator of the variance. In comprehensive simulation studies we demonstrate solid performance of the proposed method. Comparing the sandwich estimator with the inverse Fisher information matrix, we observe a bias for the inverse Fisher information matrix and diminished coverage probabilities in settings with a higher percentage of left-truncation. To illustrate the practical utility of the proposed method, we study its application to a large HIV vaccine efficacy trial dataset.
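The abstract contrasts a sandwich variance estimator with the inverse Fisher information. A generic numerical sketch of that comparison (not the paper's actual estimator; the score contributions and Hessian below are simulated stand-ins) is:

```python
import numpy as np

# Illustrative sketch only: a generic sandwich variance V = A^{-1} B A^{-1},
# where A is the negative mean Hessian of the log-likelihood and B the
# empirical covariance of per-subject score contributions. All inputs are
# simulated; the paper derives these quantities from its weighted
# conditional likelihood.
rng = np.random.default_rng(0)
n, p = 500, 3

# Simulated per-subject score contributions (mean ~ 0 at the estimate)
scores = rng.normal(size=(n, p))

# Simulated negative mean Hessian (symmetric positive definite)
L = rng.normal(size=(p, p))
A = L @ L.T / p + np.eye(p)

B = scores.T @ scores / n            # "meat": empirical score covariance
A_inv = np.linalg.inv(A)             # "bread"
V_sandwich = A_inv @ B @ A_inv / n   # robust (sandwich) variance

# Model-based alternative: inverse Fisher information, which the paper
# reports can be biased under a higher percentage of left-truncation
V_model = A_inv / n
```

When the model is correctly specified, A ≈ B and the two estimates agree; the sandwich form remains valid when they differ.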

2012 ◽  
Vol 51 (1) ◽  
pp. 115-130
Author(s):  
Sergei Leonov ◽  
Alexander Aliev

ABSTRACT We provide some details of the implementation of the optimal design algorithm in the PkStaMp library, which is intended for constructing optimal sampling schemes for pharmacokinetic (PK) and pharmacodynamic (PD) studies. We discuss different types of approximation of the individual Fisher information matrix and describe a user-defined option of the library.
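The individual Fisher information matrix at the heart of such design algorithms can be sketched for a simple fixed-effects PK model with additive error as FIM(θ) = Σ_j g_j g_jᵀ / σ², where g_j is the sensitivity of the model prediction at sampling time t_j. The one-compartment model, parameter values, and sampling times below are illustrative assumptions, not the PkStaMp implementation:

```python
import numpy as np

# Hedged sketch: individual FIM for a one-compartment oral-absorption model
# with additive error, using central finite-difference sensitivities.
# Dose, parameters, and times are assumed for illustration.

def conc(theta, t):
    """Predicted concentration; dose fixed at 100 (assumption)."""
    ka, ke, V = theta
    return 100.0 * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def fim(theta, times, sigma=0.5, h=1e-6):
    theta = np.asarray(theta, float)
    p = theta.size
    M = np.zeros((p, p))
    for t in times:
        g = np.empty(p)
        for i in range(p):                 # sensitivity d conc / d theta_i
            d = np.zeros(p)
            d[i] = h
            g[i] = (conc(theta + d, t) - conc(theta - d, t)) / (2 * h)
        M += np.outer(g, g) / sigma**2
    return M

theta0 = (1.2, 0.15, 10.0)                 # ka, ke, V (assumed values)
sparse = fim(theta0, [1.0, 4.0, 12.0])
rich = fim(theta0, [0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])
```

A D-optimal sampling scheme maximizes log det FIM over candidate time sets; here the richer scheme dominates the sparse one because its times are a superset.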


2006 ◽  
Vol 18 (5) ◽  
pp. 1007-1065 ◽  
Author(s):  
Shun-ichi Amari ◽  
Hyeyoung Park ◽  
Tomoko Ozeki

The parameter spaces of hierarchical systems such as multilayer perceptrons include singularities due to the symmetry and degeneration of hidden units. A parameter space forms a geometrical manifold, called the neuromanifold in the case of neural networks. Such a model is identified with a statistical model, and a Riemannian metric is given by the Fisher information matrix. However, the matrix degenerates at singularities. Such a singular structure is ubiquitous not only in multilayer perceptrons but also in the gaussian mixture probability densities, ARMA time-series model, and many other cases. The standard statistical paradigm of the Cramér-Rao theorem does not hold, and the singularity gives rise to strange behaviors in parameter estimation, hypothesis testing, Bayesian inference, model selection, and in particular, the dynamics of learning from examples. Prevailing theories so far have not paid much attention to the problem caused by singularity, relying only on ordinary statistical theories developed for regular (nonsingular) models. Only recently have researchers remarked on the effects of singularity, and theories are now being developed. This article gives an overview of the phenomena caused by the singularities of statistical manifolds related to multilayer perceptrons and gaussian mixtures. We demonstrate our recent results on these problems. Simple toy models are also used to show explicit solutions. We explain that the maximum likelihood estimator is no longer subject to the gaussian distribution even asymptotically, because the Fisher information matrix degenerates, that the model selection criteria such as AIC, BIC, and MDL fail to hold in these models, that a smooth Bayesian prior becomes singular in such models, and that the trajectories of dynamics of learning are strongly affected by the singularity, causing plateaus or slow manifolds in the parameter space. 
The natural gradient method is shown to perform well because it takes the singular geometrical structure into account. The generalization error and the training error are studied in some examples.
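The natural gradient update the article credits with handling singular structure preconditions the ordinary gradient by the inverse Fisher information, θ ← θ − η F(θ)⁻¹ ∇L(θ). A minimal sketch on a regular toy model, fitting a univariate Gaussian N(μ, σ²) where F = diag(1/σ², 2/σ²) in the (μ, σ) parameterization is known in closed form (the data and step size are assumptions for illustration):

```python
import numpy as np

# Toy natural gradient descent on the average negative log-likelihood of
# N(mu, sigma^2). For this model the per-sample Fisher information is
# diag(1/sigma^2, 2/sigma^2), so its inverse is available in closed form.
rng = np.random.default_rng(1)
data = rng.normal(loc=3.0, scale=2.0, size=2000)   # assumed ground truth

def avg_nll_grad(mu, sigma):
    r = data - mu
    g_mu = -np.sum(r) / sigma**2
    g_sigma = len(data) / sigma - np.sum(r**2) / sigma**3
    return np.array([g_mu, g_sigma]) / len(data)

mu, sigma = 0.0, 1.0
for _ in range(200):
    g = avg_nll_grad(mu, sigma)
    F_inv = np.diag([sigma**2, sigma**2 / 2.0])    # inverse Fisher information
    mu, sigma = np.array([mu, sigma]) - 0.5 * F_inv @ g
```

At a singularity of the kind the article studies, F degenerates and this inverse does not exist, which is exactly why the regular theory sketched here breaks down there.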


2018 ◽  
Vol 35 (2) ◽  
pp. 519-535 ◽  
Author(s):  
Guilherme Ferreira Gomes ◽  
Fabricio Alves de Almeida ◽  
Patricia da Silva Lopes Alexandrino ◽  
Sebastiao Simões da Cunha ◽  
Bruno Silva de Sousa ◽  
...  
