Statistical Inference of Truncated Normal Distribution Based on the Generalized Progressive Hybrid Censoring

Entropy ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. 186
Author(s):  
Xinyi Zeng ◽  
Wenhao Gui

In this paper, the parameter estimation problem for a truncated normal distribution is discussed based on generalized progressive hybrid censored data. The maximum likelihood estimates of the unknown quantities are first derived via the Newton–Raphson algorithm and the expectation–maximization algorithm. Based on the asymptotic normality of the maximum likelihood estimators, we develop asymptotic confidence intervals. The percentile bootstrap method is also employed for the case of small sample sizes. Further, the Bayes estimates are evaluated under various loss functions, including the squared error, general entropy, and LINEX loss functions. The Tierney–Kadane approximation, as well as the importance sampling approach, is applied to obtain the Bayesian estimates under proper prior distributions, and the associated Bayesian credible intervals are constructed. Extensive numerical simulations are implemented to compare the performance of the different estimation methods. Finally, a real example is analyzed to illustrate the inference approaches.
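As a minimal illustration of the maximum likelihood step described above, the following sketch fits a left-truncated normal distribution to complete data by direct numerical maximization (an assumption-laden simplification: it ignores the paper's generalized progressive hybrid censoring scheme, and the names `truncnorm_negloglik`, `fit_truncated_normal`, and the truncation point `a` are illustrative, not from the paper):

```python
import numpy as np
from scipy import stats, optimize

def truncnorm_negloglik(params, x, a=0.0):
    # Negative log-likelihood of a normal distribution left-truncated at `a`:
    # log f(x) = log phi((x-mu)/sigma) - log sigma - log P(X > a).
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    z = (x - mu) / sigma
    log_density = stats.norm.logpdf(z) - np.log(sigma)
    log_norm_const = stats.norm.logsf((a - mu) / sigma)
    return -np.sum(log_density - log_norm_const)

def fit_truncated_normal(x, a=0.0):
    # Maximize the likelihood numerically; Nelder-Mead is a robust default
    # when derivatives (Newton-Raphson) are not worth hand-coding.
    start = np.array([x.mean(), x.std()])
    res = optimize.minimize(truncnorm_negloglik, start, args=(x, a),
                            method="Nelder-Mead")
    return res.x  # (mu_hat, sigma_hat)
```

With a large simulated sample the recovered `(mu_hat, sigma_hat)` should land close to the generating parameters.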

2020 ◽  
Author(s):  
Justin R. Williams ◽  
Hyung-Woo Kim ◽  
Catherine M. Crespi

Background: When data are collected subject to a detection limit, observations below the detection limit may be considered censored. In addition, the domain of such observations may be restricted; for example, values may be required to be non-negative. Methods: We propose a method for estimating the population mean and variance from censored observations that accounts for a known domain restriction. The method finds maximum likelihood estimates assuming an underlying truncated normal distribution. Results: We show that our method, tcensReg, has lower bias, Type I error rates, and mean squared error than other methods commonly used for data with detection limits, such as Tobit regression and single imputation, under a range of simulation settings from mild to heavy censoring and truncation. We further demonstrate the consistency of the maximum likelihood estimators. We apply our method to analyze vision quality data collected from ophthalmology clinical trials comparing different types of intraocular lenses implanted during cataract surgery. All of the methods yield similar conclusions regarding non-inferiority, but estimates from the tcensReg method suggest that there may be greater mean differences and overall variability. Conclusions: In the presence of detection limits, our new method tcensReg provides a way to incorporate known domain restrictions in dependent variables that substantially improves inferences.
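The core likelihood idea, a truncated support combined with left-censoring at a detection limit, can be sketched as follows (this is an illustrative reconstruction, not the tcensReg package's actual implementation; `tcens_negloglik`, `fit_tcens`, and the intercept-only setup are assumptions):

```python
import numpy as np
from scipy import stats, optimize

def tcens_negloglik(params, y, det_lim, a=0.0):
    # Negative log-likelihood when the latent variable is truncated at `a`
    # and observations below `det_lim` are left-censored: fully observed
    # points contribute the truncated density, censored points contribute
    # log P(a < X < det_lim) - log P(X > a).
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    log_pa = stats.norm.logsf((a - mu) / sigma)          # log P(X > a)
    obs = y >= det_lim
    ll = np.sum(stats.norm.logpdf((y[obs] - mu) / sigma)
                - np.log(sigma) - log_pa)
    p_cens = (stats.norm.cdf((det_lim - mu) / sigma)
              - stats.norm.cdf((a - mu) / sigma))        # P(a < X < d)
    n_cens = np.count_nonzero(~obs)
    if n_cens:
        ll += n_cens * (np.log(max(p_cens, 1e-300)) - log_pa)
    return -ll

def fit_tcens(y, det_lim, a=0.0):
    res = optimize.minimize(tcens_negloglik, [y.mean(), y.std()],
                            args=(y, det_lim, a), method="Nelder-Mead")
    return res.x  # (mu_hat, sigma_hat)
```

Unlike Tobit regression, the normalizing term `log_pa` explicitly accounts for the restricted domain, which is what removes the bias the abstract describes.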


2016 ◽  
Vol 27 (8) ◽  
pp. 2459-2477
Author(s):  
Guo-Liang Tian ◽  
Da Ju ◽  
Kam Chuen Yuen ◽  
Chi Zhang

To analyze univariate truncated normal data, in this paper we stochastically represent the normal random variable as a mixture of a truncated normal random variable and its complementary random variable. This stochastic representation is a new idea and appears here for the first time in the literature. Using this representation, we derive important distributional properties of the truncated normal distribution and develop two new expectation–maximization algorithms to calculate the maximum likelihood estimates of the parameters of interest for Type I data (without and with covariates) and Type II/III data. Bootstrap confidence intervals of the parameters are provided for small sample sizes. To evaluate the performance of the proposed methods for the truncated normal distribution, in simulation studies we first compare estimation results obtained with and without the unobserved data counts, and then investigate the impact of the number of unobserved data points on the estimation results. The plasma ferritin concentration data collected by the Australian Institute of Sport and the blood fat content data are used to illustrate the proposed methods and to compare the truncated normal distribution with the half normal, folded normal, and folded normal slash distributions based on the Akaike information criterion and the Bayesian information criterion.
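The mixture representation can be checked empirically: a sketch, assuming a standard normal split over an interval (a, b) and its complement (the interval endpoints and the rejection step for the complementary part are illustrative choices, not the paper's algorithm):

```python
import numpy as np
from scipy import stats

# With probability p = P(a < X < b), draw from the normal truncated to
# (a, b); otherwise draw from the normal truncated to the complement.
# The resulting mixture should be an ordinary standard normal.
rng = np.random.default_rng(0)
a, b = -0.5, 1.5
p = stats.norm.cdf(b) - stats.norm.cdf(a)

n = 200_000
inside = rng.random(n) < p
x = np.empty(n)
x[inside] = stats.truncnorm.rvs(a, b, size=inside.sum(), random_state=rng)

# Complementary part by rejection: keep normal draws falling outside (a, b).
need = np.count_nonzero(~inside)
tail = np.empty(0)
while tail.size < need:
    d = rng.standard_normal(n)
    tail = np.concatenate([tail, d[(d <= a) | (d >= b)]])
x[~inside] = tail[:need]
```

The sample moments of `x` match the standard normal, and the fraction of draws inside (a, b) matches p, consistent with the representation.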


2020 ◽  
Author(s):  
Justin R. Williams ◽  
Hyung-Woo Kim ◽  
Catherine M. Crespi

Background: When data are collected subject to a detection limit, observations below the detection limit may be considered censored. In addition, the domain of such observations may be restricted; for example, values may be required to be non-negative. Methods: We propose a regression method for censored observations that also accounts for domain restriction. The method finds maximum likelihood estimates assuming an underlying truncated normal distribution. Results: We show that our method, tcensReg, outperforms other methods commonly used for data with detection limits, such as Tobit regression and single imputation of the detection limit or half detection limit, with respect to bias and mean squared error under a range of simulation settings. We apply our method to analyze vision quality data collected from ophthalmology clinical trials comparing different types of intraocular lenses implanted during cataract surgery. All methods tested returned similar conclusions for non-inferiority testing, but estimates from the tcensReg method suggest that there may be greater mean differences and overall variability. Conclusions: In the presence of detection limits, our new method tcensReg provides a way to incorporate known domain restrictions when modeling limited dependent variables that substantially improves inferences.


Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 490
Author(s):  
Yuxin Cai ◽  
Wenhao Gui

Point and interval estimation are considered for a progressive first-failure censored left-truncated normal distribution in this paper. First, we derive the estimators of the parameters from the maximum likelihood principle. Subsequently, we construct asymptotic confidence intervals based on these estimates and on the log-transformed estimates, using the asymptotic normality of maximum likelihood estimators. Bootstrap methods are also proposed for the construction of confidence intervals. For Bayesian estimation, we implement the Lindley approximation method to determine the Bayesian estimates under both symmetric and asymmetric loss functions. The importance sampling procedure is applied as well, and the highest posterior density (HPD) credible intervals are established within this procedure. The efficiencies of the classical and Bayesian inference methods are evaluated through numerous simulations. We conclude that the Bayes estimates given by the Lindley approximation under the LINEX loss function are highly recommended, and that the HPD interval has the narrowest length among the proposed intervals. Finally, we analyze a real dataset on the tensile strength of 50 mm carbon fibers as an illustrative example.
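The bootstrap confidence intervals mentioned above can be sketched generically with the percentile method (illustrative only; the paper applies the bootstrap to progressive first-failure censored data, which is not reproduced here, and `percentile_bootstrap_ci` is an assumed name):

```python
import numpy as np

def percentile_bootstrap_ci(x, stat=np.mean, n_boot=2000, alpha=0.05, seed=0):
    # Percentile bootstrap: resample with replacement, recompute the
    # statistic, and take the empirical alpha/2 and 1 - alpha/2 quantiles
    # of the bootstrap replicates as the interval endpoints.
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, x.size, size=(n_boot, x.size))
    reps = np.array([stat(x[row]) for row in idx])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])
```

Passing the maximum likelihood estimator as `stat` turns this into a bootstrap interval for a model parameter rather than for the sample mean.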


2017 ◽  
Vol 928 (10) ◽  
pp. 58-63 ◽  
Author(s):  
V.I. Salnikov

The subject of study is sums of measurement errors. It is assumed that the errors follow the normal law, but with a limitation on the value of the marginal error, Δpred = 2m. It is known that for each sum there is a number nᵢ, corresponding to a confidence interval, at which the value of the sum equals zero. The paradox is that the probability of such an event is zero; therefore, it is impossible to determine the value of nᵢ at which the sum becomes zero. The article proposes to consider instead the event that the sum of errors remains within the 2m limits with a confidence level of 0.954. Within the group, all the sums then have a limiting error. These tolerances are proposed for use as discrepancy limits in geodesy instead of 2m√nᵢ. The concept of "the law of the truncated normal distribution with Δpred = 2m" is suggested to be introduced.
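A quick simulation illustrates the effect of the truncation: when each error is bounded by Δpred = 2m, the classical discrepancy tolerance 2m√nᵢ becomes conservative for the sum, because truncation at ±2m reduces the variance below m² (the values of `m` and `n` below are assumptions for illustration, not from the article):

```python
import numpy as np
from scipy import stats

m = 1.0      # per-measurement standard error (assumed)
n = 10       # number of summed errors (assumed)
rng = np.random.default_rng(0)

# Each error follows the normal law truncated at +/-2m.
errors = stats.truncnorm.rvs(-2.0, 2.0, scale=m,
                             size=(100_000, n), random_state=rng)
sums = errors.sum(axis=1)

# Fraction of sums within the classical tolerance 2m*sqrt(n).
coverage = np.mean(np.abs(sums) <= 2.0 * m * np.sqrt(n))
```

Under this setup the empirical coverage comfortably exceeds the nominal 0.954, which is consistent with the article's motivation for tightening the tolerance.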


2019 ◽  
Vol 11 (3) ◽  
pp. 168781401983684 ◽  
Author(s):  
Leilei Cao ◽  
Lulu Cao ◽  
Lei Guo ◽  
Kui Liu ◽  
Xin Ding

It is difficult to obtain enough samples to implement a full-scale life test on the loader drive axle due to the high cost, but such an extremely small sample size can hardly meet the statistical requirements of traditional reliability analysis methods. In this work, a method combining virtual sample expansion with the Bootstrap is proposed to evaluate the fatigue reliability of the loader drive axle from an extremely small sample. First, the sample size is expanded by a virtual augmentation method to meet the requirements of the Bootstrap method. Then, a modified Bootstrap method is used to evaluate the fatigue reliability of the expanded sample. Finally, the feasibility and reliability of the method are verified by comparing the results with those of the semi-empirical estimation method. Moreover, from a practical perspective, the promising results indicate that the proposed method is more efficient than the semi-empirical method. The proposed method provides a new way to evaluate the reliability of costly and complex structures.
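The virtual-sample expansion step can be sketched as follows. One common augmentation scheme is assumed here, bootstrap resampling with small normal jitter; the paper's exact expansion method may differ, and `expand_virtual_sample` with its `noise_frac` parameter is an illustrative construction:

```python
import numpy as np

def expand_virtual_sample(sample, size=100, noise_frac=0.1, seed=0):
    # Expand a handful of measured fatigue lives into a larger virtual
    # sample: resample with replacement, then perturb each draw with small
    # normal noise scaled to a fraction of the sample standard deviation,
    # so the expanded sample is large enough for Bootstrap-based analysis.
    rng = np.random.default_rng(seed)
    base = rng.choice(sample, size=size, replace=True)
    noise = rng.normal(0.0, noise_frac * np.std(sample, ddof=1), size=size)
    return base + noise
```

The expanded sample can then be fed to a standard Bootstrap procedure for the reliability quantity of interest.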

