Parameter Estimation of Limited Failure Population Model With a Weibull Underlying Distribution

Author(s):  
Themistoklis Koutsellis
Zissimos P. Mourelatos

For many data-driven reliability problems, the population is not homogeneous; i.e., its statistics are not described by a unimodal distribution. Also, the interval of observation may not be long enough to capture the failure statistics. A limited failure population (LFP) consists of two subpopulations, a defective and a nondefective one, with well-separated modes of the two underlying distributions. In reliability and warranty forecasting applications, estimating the number of defective units and the parameters of the underlying distribution is very important. Among the various estimation methods, maximum likelihood estimation (MLE) is the most widely used. Its likelihood function, however, is often incomplete, resulting in erroneous statistical inference. In this paper, we estimate the parameters of an LFP analytically using a rational function fitting (RFF) method based on the Weibull probability plot (WPP) of the observed data. We also introduce a censoring factor (CF) to assess whether the number of collected data points is sufficient for statistical inference. The proposed RFF method is compared with existing MLE approaches using simulated data and data related to automotive warranty forecasting.
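To make the Weibull probability plot concrete: ordering the observed failure times and plotting ln t against ln(-ln(1-F)) linearizes the Weibull CDF, so the shape and scale parameters fall out of a straight-line fit. The sketch below illustrates only this standard WPP step on complete simulated data (the parameter values are assumed for the demo); the article's RFF method and censoring factor are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
beta_true, eta_true = 2.0, 100.0          # shape, scale (assumed for the demo)
t = np.sort(eta_true * rng.weibull(beta_true, size=500))

# Median-rank estimate of F(t_(i)) and the WPP linearisation:
#   ln(-ln(1 - F)) = beta * ln t - beta * ln eta
i = np.arange(1, t.size + 1)
F = (i - 0.3) / (t.size + 0.4)            # Bernard's approximation
x, y = np.log(t), np.log(-np.log(1.0 - F))

beta_hat, c = np.polyfit(x, y, 1)         # slope = beta, intercept = -beta*ln(eta)
eta_hat = np.exp(-c / beta_hat)
print(beta_hat, eta_hat)
```

With complete (uncensored) data the fitted slope and intercept recover the generating parameters closely; the article's contribution concerns the harder LFP case, where only part of the population can ever fail within the observation window.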

2012
Vol 4 (1)
pp. 185
Author(s):  
Irfan Wahyudi
Purhadi Purhadi
Sutikno Sutikno
Irhamah Irhamah

Multivariate Cox proportional hazard models have the proportionality property: the ratio of the hazard functions for two individuals with covariate vectors z1 and z2 is constant (time independent). In this study we discuss the estimation of the parameters of the multivariate Cox model by the maximum partial likelihood estimation (MPLE) method. To find the estimators that maximize the log-partial-likelihood function, the score vector and the Hessian matrix are derived and a numerical iteration method is applied; here we use the Newton-Raphson method. A numerical method is needed because setting the score vector equal to the zero vector yields a system of equations with no closed-form solution. Studies of the multivariate Cox model, including its parameter estimation methods, are limited, yet such methods are urgently needed in related fields such as economics, engineering, and the medical sciences. For these reasons, the goal of this study is to extend the parameter estimation methods from the univariate to the multivariate case.
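The Newton-Raphson step described above can be sketched in the simplest setting: a univariate Cox model fitted to simulated, uncensored data with no tied event times. The score and Hessian of the partial log-likelihood have closed forms over the risk sets; only the root of the score equation needs iteration. All data and starting values below are assumed for the demo, and a multivariate application would normally use a library such as lifelines.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta_true = 300, 0.8                        # assumed values for the demo
z = rng.normal(size=n)
t = rng.exponential(1.0 / np.exp(beta_true * z))   # all events, no censoring

order = np.argsort(t)
z = z[order]                                   # sorted by event time, so the
                                               # risk set of subject i is {i..n-1}

def score_hessian(beta):
    """Score U(beta) and Hessian H(beta) of the Cox partial log-likelihood."""
    w = np.exp(beta * z)
    s0 = np.cumsum(w[::-1])[::-1]              # sum of w over each risk set
    s1 = np.cumsum((w * z)[::-1])[::-1]
    s2 = np.cumsum((w * z * z)[::-1])[::-1]
    U = np.sum(z - s1 / s0)
    H = -np.sum(s2 / s0 - (s1 / s0) ** 2)
    return U, H

beta = 0.0
for _ in range(20):                            # Newton-Raphson: beta <- beta - U/H
    U, H = score_hessian(beta)
    beta -= U / H
print(beta)
```

Because the partial log-likelihood is concave in beta, the iteration converges rapidly from zero; the fitted coefficient lands near the generating value.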


Author(s):  
A. S. Ogunsanya
E. E. E. Akarawak
W. B. Yahya

In this paper, we compare different parameter estimation methods for the two-parameter Weibull-Rayleigh distribution (W-RD), namely maximum likelihood estimation (MLE), the least squares estimation (LSE) method, and three quartile estimators. Two of the quartile methods have been applied in the literature, while the third (Q1-M) is introduced in this work. The methods are applied to simulated data and compared using the error, the mean square error (MSE), and the total deviation (TD), also known as the sum of absolute error estimates (SAEE). The analytical results show that all the parameter estimation methods perform satisfactorily on data from the Weibull-Rayleigh distribution, with the degree of accuracy determined by the sample size. The proposed quartile method (Q1-M) has the smallest total deviation and MSE. In addition, the quartile methods outperform MLE on the simulated data, and the proposed Q1-M method has the added advantage of being simpler to use than MLE.
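The appeal of quartile estimators is that matching sample quantiles to the model's quantile function gives closed-form parameter estimates, with no iteration. The abstract does not spell out the Q1-M construction, so the sketch below illustrates the general idea on a plain two-parameter Weibull (a simpler stand-in for the W-RD, assumed for the demo), using the first and third sample quartiles.

```python
import numpy as np

rng = np.random.default_rng(2)
beta_true, eta_true = 1.5, 50.0            # shape, scale (assumed for the demo)
x = eta_true * rng.weibull(beta_true, size=2000)

# Weibull quantile function: t_p = eta * (-ln(1 - p))^(1/beta).
# Equating the sample quartiles Q1, Q3 to the model quantiles gives
# closed-form estimates -- no iteration needed, unlike MLE.
q1, q3 = np.quantile(x, [0.25, 0.75])
a1, a3 = -np.log(0.75), -np.log(0.25)
beta_hat = np.log(a3 / a1) / np.log(q3 / q1)
eta_hat = q1 / a1 ** (1.0 / beta_hat)
print(beta_hat, eta_hat)
```

The same two-quantile matching generalizes to any distribution with an invertible CDF, which is why quartile methods are attractive for quick estimation and for seeding iterative MLE.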


2010
Vol 7 (1)
Author(s):  
Matej Jovan

This paper discusses Merton's 1974 model in light of the minimum regulatory requirements of the Internal Ratings-Based (IRB) Approach provided in Directive 2006/48/EC of the European Parliament and of the Council for the calculation of the capital requirement for credit risk. The basic purpose is to illustrate potential deficiencies of the model in assigning obligor ratings and/or estimating the probability of default, to which supervisors should be attentive when validating this model in a bank's IRB approach. The procedures of three estimation methods for Merton's model are described (calibration, Moody's KMV, and maximum likelihood estimation), on the basis of which deficiencies of the model can be identified. Merton's model per se does not ensure compliance with the minimum requirements of the IRB approach for the estimation of the probability of default, as its theoretical assumptions often do not reflect reality. It is therefore necessary to calibrate the fundamental parameters estimated by the model using empirical data on defaults, which must be defined in accordance with the regulatory minimum requirements and must be representative of the population for which the model is valid. Results on simulated data also show that the calibration method provides different estimates of the probability of default for the same obligors than the other two methods do. The differences are mainly driven by the volatility of equity and leverage in the time series, which the calibration method does not sufficiently account for. Some regulatory minimum requirements can be relaxed when obligors are assigned ratings on the basis of the Merton model estimation methods. However, the results of the analysis on simulated and empirical data show that different estimation methods generate different obligor credit rating assignments.
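The calibration step common to these methods treats equity as a call option on the firm's assets: given the observed equity value and equity volatility, the two Merton equations are solved for the unobserved asset value and asset volatility, and the risk-neutral default probability follows from the distance to default. A minimal sketch (all input numbers are illustrative, not from the article):

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

# Illustrative inputs (assumed): market value of equity E, equity
# volatility sig_E, face value of debt D, risk-free rate r, horizon T.
E, sig_E, D, r, T = 3.0, 0.80, 10.0, 0.05, 1.0

def merton_eqs(x):
    V, sig_V = x
    d1 = (np.log(V / D) + (r + 0.5 * sig_V**2) * T) / (sig_V * np.sqrt(T))
    d2 = d1 - sig_V * np.sqrt(T)
    # Equity is a call on assets; these are the two calibration equations.
    eq1 = V * norm.cdf(d1) - D * np.exp(-r * T) * norm.cdf(d2) - E
    eq2 = norm.cdf(d1) * sig_V * V - sig_E * E
    return [eq1, eq2]

V, sig_V = fsolve(merton_eqs, x0=[E + D, sig_E * E / (E + D)])
d2 = (np.log(V / D) + (r - 0.5 * sig_V**2) * T) / (sig_V * np.sqrt(T))
pd = norm.cdf(-d2)        # risk-neutral probability of default at horizon T
print(V, sig_V, pd)
```

As the abstract notes, this risk-neutral PD is not directly usable for IRB purposes; it must still be calibrated to empirical default data defined under the regulatory requirements.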


Author(s):  
Johanna Bertl
Gregory Ewing
Carolin Kosiol
Andreas Futschik

In many population genetic problems, parameter estimation is obstructed by an intractable likelihood function. Therefore, approximate estimation methods have been developed, and with growing computational power, sampling-based methods have become popular. However, methods such as Approximate Bayesian Computation (ABC) can be inefficient in high-dimensional problems. This has led to the development of more sophisticated iterative estimation methods such as particle filters. Here, we propose an alternative approach based on stochastic approximation. By moving along a simulated gradient or ascent direction, the algorithm produces a sequence of estimates that eventually converges to the maximum likelihood estimate, given a set of observed summary statistics. This strategy rarely samples from low-likelihood regions of the parameter space and is fast, even when many summary statistics are involved. We put considerable effort into providing tuning guidelines that improve robustness and lead to good performance on problems with high-dimensional summary statistics and a low signal-to-noise ratio. We then investigate the performance of the resulting approach and study its properties in simulations. Finally, we re-estimate parameters describing the demographic history of Bornean and Sumatran orang-utans.
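The core idea of moving along a simulated gradient can be shown with a classic Kiefer-Wolfowitz stochastic approximation loop: the objective is only available through noisy simulator calls, the gradient is estimated by finite differences of simulated values, and decaying gain sequences drive convergence. This toy one-parameter problem (matching a single noisy summary statistic, all values assumed) is far simpler than the paper's algorithm and tuning, but illustrates the mechanism.

```python
import numpy as np

rng = np.random.default_rng(3)
theta_star = 2.5
s_obs = theta_star                       # observed summary statistic (assumed)

def sim_summary(theta):
    """Stochastic simulator: returns a noisy summary statistic."""
    return theta + 0.5 * rng.normal()

def noisy_loss(theta):
    """Discrepancy between a fresh simulation and the observed summary."""
    return (sim_summary(theta) - s_obs) ** 2

# Kiefer-Wolfowitz stochastic approximation with decaying gains:
theta = 0.0
for k in range(1, 3001):
    a_k = 1.0 / k                        # step size
    c_k = 1.0 / k ** 0.25                # finite-difference half-width
    grad = (noisy_loss(theta + c_k) - noisy_loss(theta - c_k)) / (2 * c_k)
    theta -= a_k * grad
print(theta)
```

Every iteration costs only two simulator calls near the current estimate, which is why such schemes avoid wasting simulations in low-likelihood regions, unlike rejection-based ABC.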


2006
Vol 2006
pp. 1-13
Author(s):  
Nikos Tzavidis
Yan-Xia Lin

We discuss alternative approaches for estimation from cross-sectional categorical data in the presence of misclassification. Two parameterisations of the misclassification model are reviewed. The first employs misclassification probabilities and leads to moment-based inference. The second employs calibration probabilities and leads to maximum likelihood inference. We show that maximum likelihood estimation can alternatively be performed by employing misclassification probabilities and a missing-data specification. As an alternative to maximum likelihood estimation, we propose a quasi-likelihood parameterisation of the misclassification model. In this context an explicit definition of the likelihood function is avoided, and a different way of resolving a missing-data problem is provided. Variance estimation for the alternative point estimators is considered. The different approaches are illustrated using real data from the UK Labour Force Survey and simulated data.
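The moment-based route with misclassification probabilities can be sketched in the simplest binary case: if M[i, j] is the probability of observing category j given true category i, then the expected observed proportions are Mᵀ times the true proportions, so inverting Mᵀ corrects the naive estimates. The matrix and proportions below are assumed for illustration.

```python
import numpy as np

# Misclassification matrix (assumed for illustration):
# M[i, j] = P(observed category j | true category i).
M = np.array([[0.9, 0.1],
              [0.2, 0.8]])
p_true = np.array([0.3, 0.7])

rng = np.random.default_rng(4)
n = 100_000
true_cat = rng.choice(2, size=n, p=p_true)
obs_cat = (rng.random(n) < M[true_cat, 1]).astype(int)  # P(obs=1 | true)

# Moment-based correction: E[p_obs] = M^T p_true, so invert M^T.
p_obs = np.bincount(obs_cat, minlength=2) / n
p_hat = np.linalg.solve(M.T, p_obs)
print(p_obs, p_hat)
```

Note how the naive observed proportions are biased toward the category with the lower misclassification rate, while the corrected estimates recover the true proportions; this inversion is also where the extra variance of moment-based estimators comes from.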


Mathematics
2020
Vol 8 (4)
pp. 588
Author(s):  
Eva María Ramos-Ábalos
Ramón Gutiérrez-Sánchez
Ahmed Nafidi

In this paper, we study a new family of Gompertz processes, defined by the powers of the homogeneous Gompertz diffusion process, which we term the powers of the stochastic Gompertz diffusion process. First, we show that the homogeneous Gompertz diffusion process is stable under power transformation and determine the probabilistic characteristics of the process, i.e., its analytic expression, the transition probability density function, and the trend functions. We then study statistical inference for this process. The parameters of the model are estimated by the maximum likelihood method, based on discrete sampling, thus obtaining the expressions of the likelihood estimators and their ergodic properties. We then obtain the power process of the stochastic lognormal diffusion as the limit of the Gompertz process under study, and go on to derive all its probabilistic characteristics and the statistical inference. Finally, the proposed model is applied to simulated data.
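A quick way to get a feel for the process is an Euler-Maruyama simulation of one common form of the homogeneous Gompertz diffusion, dX = (aX − bX ln X) dt + σX dW (the parameter values and this exact drift form are assumptions for the demo, not taken from the article). The power transform Y = Xᵞ, whose stability the paper establishes analytically, is applied numerically at the end.

```python
import numpy as np

rng = np.random.default_rng(5)
a, b, sigma = 1.0, 0.5, 0.10          # drift/diffusion parameters (assumed)
dt, n_steps = 1e-3, 5000

# Euler-Maruyama for the homogeneous Gompertz diffusion
#   dX = (a X - b X ln X) dt + sigma X dW.
x = np.empty(n_steps + 1)
x[0] = 0.1
for k in range(n_steps):
    dw = np.sqrt(dt) * rng.normal()
    drift = a * x[k] - b * x[k] * np.log(x[k])
    x[k + 1] = x[k] + drift * dt + sigma * x[k] * dw

# Power transformation: Y = X^gamma is again a Gompertz-type diffusion
# (the stability property the paper establishes analytically).
gamma = 2.0
y = x ** gamma
print(x[-1], y[-1])
```

Without noise the path would relax toward the carrying level exp(a/b); with multiplicative noise it fluctuates around that sigmoid trend, which is what makes the process popular for growth modelling.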


2020
pp. 875529302097097
Author(s):  
Azad Yazdani
Mohammad-Sadegh Shahidzadeh
Tsuyoshi Takada

In this article, Bayes factors (BFs) are used for selecting and weighting ground motion prediction equations (GMPEs). A BF is defined via the posterior probability of a model being the best model describing the data. The Bayesian framework allows information gathered from available seismic data to be merged with expert opinion, thus providing a bridge between data-driven and non-data-driven methods. A multi-dimensional likelihood function is used to account for earthquake-to-earthquake and record-to-record variability. A study is performed using simulated data to identify the effects of model uncertainty and dataset variations on the Bayesian weights. It was found that, for a given median prediction, the relative weight increases with increasing standard deviation until it reaches a maximum and then decreases. The standard deviation at the maximum weight matches the scatter of the data used to calculate the weights. The method was applied to a local region with nine preselected local and regional GMPEs. The ranking, selection, and weighting are performed using a local dataset, and the results are compared with four available ranking methods. While the various methods may yield similar or different ranking results, the proposed method is the only one that provides a scientific means of selecting appropriate models from a set of initially selected GMPEs.
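The weighting machinery itself is compact: given the marginal likelihood of the observed data under each candidate GMPE and a prior over models, Bayes' rule yields posterior model weights, and the Bayes factor between two models is their likelihood ratio. The log-likelihood values below are illustrative numbers, not from the article.

```python
import numpy as np

# Log-likelihoods of the observed data under each candidate GMPE
# (illustrative numbers) and a uniform prior over the three models.
log_lik = np.array([-120.3, -118.7, -125.1])
log_prior = np.log(np.full(3, 1.0 / 3.0))

# Posterior model weights via Bayes' rule (shift by the max for stability):
log_post = log_lik + log_prior
log_post -= log_post.max()
weights = np.exp(log_post) / np.exp(log_post).sum()

# Bayes factor of model 0 vs model 1 is their likelihood ratio:
bf_01 = np.exp(log_lik[0] - log_lik[1])
print(weights, bf_01)
```

Working in log space and subtracting the maximum before exponentiating avoids underflow, which matters because realistic multi-record likelihoods are far smaller than machine precision.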


2021
Author(s):  
Jan Boelts
Jan-Matthis Lueckmann
Richard Gao
Jakob H. Macke

Identifying parameters of computational models that capture experimental data is a central task in cognitive neuroscience. Bayesian statistical inference aims not only to find a single configuration of best-fitting parameters, but to recover all model parameters that are consistent with the data and prior knowledge. Statistical inference methods usually require the ability to evaluate the likelihood of the model; however, for many models of interest in cognitive neuroscience, the associated likelihoods cannot be computed efficiently. Simulation-based inference (SBI) offers a solution to this problem by requiring only access to simulations produced by the model. Here, we provide an efficient SBI method for models of decision-making. Our approach, Mixed Neural Likelihood Estimation (MNLE), trains neural density estimators on model simulations to emulate the simulator. The likelihoods of the emulator can then be used to perform Bayesian parameter inference on experimental data using standard approximate inference methods like Markov Chain Monte Carlo sampling. While most neural likelihood estimation methods target continuous data, MNLE works with mixed data types, as typically obtained in decision-making experiments (e.g., binary decisions and associated continuous reaction times). We demonstrate MNLE on the classical drift-diffusion model (DDM) and compare its performance to a recently proposed method for SBI on DDMs, called likelihood approximation networks (LAN, Fengler et al. 2021). We show that MNLE is substantially more efficient than LANs, requiring up to six orders of magnitude fewer model simulations to achieve comparable likelihood accuracy and evaluation time while providing the same level of flexibility. We include an implementation of our algorithm in the user-friendly open-source package sbi.
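The mixed data type at the heart of MNLE is easy to see from the simulator itself: each DDM trial yields a binary choice together with a continuous reaction time. A minimal Euler-discretized simulator of a symmetric two-boundary DDM is sketched below (drift rate, boundary separation, and step size are assumed for the demo; the paper's emulator is trained on simulations like these, which are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate_ddm(v, a, dt=1e-3, max_t=5.0):
    """One trial of a simple symmetric drift-diffusion model.

    v: drift rate; a: boundary separation (start at a/2, absorb at 0 and a).
    Returns (choice, reaction_time): the mixed discrete/continuous data
    type that MNLE is designed to handle.
    """
    x, t = a / 2.0, 0.0
    while 0.0 < x < a and t < max_t:
        x += v * dt + np.sqrt(dt) * rng.normal()   # unit diffusion coefficient
        t += dt
    return (1 if x >= a else 0), t

trials = [simulate_ddm(v=1.0, a=2.0) for _ in range(200)]
choices = np.array([c for c, _ in trials])
rts = np.array([t for _, t in trials])
print(choices.mean(), rts.mean())
```

A neural likelihood estimator for such data must combine a discrete head for the choice with a continuous density for the reaction time, which is exactly the architectural point of MNLE.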


Sensors
2021
Vol 21 (6)
pp. 2085
Author(s):  
Xue-Bo Jin
Ruben Jonhson Robert RobertJeremiah
Ting-Li Su
Yu-Ting Bai
Jian-Lei Kong

State estimation is widely used in various automated systems, including IoT systems, unmanned systems, robots, etc. In traditional state estimation, measurement data are instantaneous and processed in real time. As modern systems develop, sensors can acquire and store more and more signals, so how to use these big measurement data to improve the performance of state estimation has become a hot research issue in this field. This paper reviews the development of state estimation and future development trends. First, we review model-based state estimation methods, including the Kalman filter and its variants such as the extended Kalman filter (EKF), the unscented Kalman filter (UKF), and the cubature Kalman filter (CKF). Particle filters and Gaussian mixture filters that can handle mixed Gaussian noise are discussed too. These methods place high demands on the model, whereas accurate system models are not easy to obtain in practice; the emergence of robust filters, the interacting multiple model (IMM), and adaptive filters is also covered. Second, the current research status of data-driven state estimation methods based on network learning is introduced. Finally, the main research results for hybrid filters obtained in recent years, which combine model-based and data-driven methods, are summarized and discussed. Building on these research results, this paper provides a detailed overview of model-driven, data-driven, and hybrid-driven approaches. The main algorithm of each method is provided so that beginners can gain a clearer understanding, and future development trends for researchers in state estimation are discussed.
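The baseline that all the model-based variants build on is the linear Kalman filter's predict/correct cycle. A minimal sketch for a constant-velocity model with position-only measurements follows (all noise levels and the track itself are assumed for the demo):

```python
import numpy as np

rng = np.random.default_rng(7)
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity motion model
H = np.array([[1.0, 0.0]])                # we observe position only
Q = 0.01 * np.eye(2)                      # process noise covariance (assumed)
R = np.array([[1.0]])                     # measurement noise covariance (assumed)

# Simulate a true track and noisy position measurements.
x_true = np.array([0.0, 1.0])             # initial position 0, velocity 1
xs, zs = [], []
for _ in range(100):
    x_true = F @ x_true
    xs.append(x_true.copy())
    zs.append(H @ x_true + rng.normal(scale=1.0, size=1))

# Standard Kalman filter: predict, then correct with each measurement.
x, P = np.zeros(2), 10.0 * np.eye(2)
for z in zs:
    x, P = F @ x, F @ P @ F.T + Q                     # predict
    S = H @ P @ H.T + R                               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
    x = x + K @ (z - H @ x)                           # correct the state
    P = (np.eye(2) - K @ H) @ P                       # correct the covariance
print(x, xs[-1])
```

The EKF, UKF, and CKF mentioned above all replace the linear predict/correct equations with different approximations of the same cycle for nonlinear F and H, which is why this loop is the natural starting point of the review.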

