Johnson Distribution
Recently Published Documents


TOTAL DOCUMENTS: 12 (five years: 0)
H-INDEX: 3 (five years: 0)

Author(s): Kun S. Marhadi, Georgios Alexandros Skrimpas

Setting optimal alarm thresholds in vibration-based condition monitoring systems is inherently difficult. There are no established thresholds for many vibration-based measurements, so thresholds are usually set from statistics of the collected data, and often the underlying probability distribution that describes the data is not known. Choosing an incorrect distribution to describe the data and then setting thresholds based on that distribution can result in sub-optimal thresholds. Moreover, in wind turbine applications the available data may not represent the full range of a turbine's operating conditions, which introduces uncertainty into the parameters of the fitted probability distribution and the thresholds calculated from it. In this study, the Johnson, normal, and Weibull distributions are investigated to determine which best fits vibration data collected over a period of time. The false-alarm rate resulting from the threshold derived from each distribution is used as the measure of which distribution is most appropriate. The study shows that using the Johnson distribution can eliminate the need to test or fit various distributions to the data and offers a more direct approach to obtaining optimal thresholds. To quantify uncertainty in the thresholds due to limited data, implementations with the bootstrap method and with Bayesian inference are investigated.
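As a rough sketch of the workflow this abstract describes, the following Python fragment fits a Johnson SU distribution to healthy-condition vibration features, sets an alarm threshold at a high quantile, and uses a percentile bootstrap to quantify threshold uncertainty from limited data. The function names, the 0.999 quantile, and the synthetic lognormal data are illustrative assumptions; SciPy's johnsonsu stands in for whichever Johnson form the authors selected.

    import numpy as np
    from scipy import stats

    def johnson_threshold(data, quantile=0.999):
        # Fit a Johnson SU law (shapes a, b; location; scale) and
        # return the alarm threshold at the requested quantile.
        a, b, loc, scale = stats.johnsonsu.fit(data)
        return stats.johnsonsu.ppf(quantile, a, b, loc=loc, scale=scale)

    def bootstrap_thresholds(data, n_boot=200, quantile=0.999, seed=0):
        # Refit on resampled data sets (percentile bootstrap) to see how
        # much the threshold moves when the data are limited.
        rng = np.random.default_rng(seed)
        return np.array([johnson_threshold(rng.choice(data, size=data.size), quantile)
                         for _ in range(n_boot)])

    data = stats.lognorm.rvs(0.4, size=2000, random_state=1)  # synthetic features
    threshold = johnson_threshold(data)
    lo, hi = np.percentile(bootstrap_thresholds(data), [2.5, 97.5])

A wide bootstrap interval is a signal that more healthy-condition data are needed before the threshold can be trusted.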


Author(s): Selim Gündüz, Mustafa Ç. Korkmaz

This paper proposes a new probability distribution, a member of the exponential family, defined on the unit interval (0,1). The new unit model is defined by transforming a random variable with unbounded support through the standard logistic function. Some basic statistical properties of the newly defined distribution are derived and studied. Different estimation methods and inference procedures for the model parameters are derived, and the performance of the resulting estimators is assessed under three different simulation scenarios. Analyses of three real data examples, one of which concerns coronavirus data, show that the proposed distribution fits better than many known distributions on the unit interval under several comparison criteria.
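The abstract does not give the new density in closed form, but the construction it describes can be sketched with a change of variables: if X lives on the whole real line, then Y = 1/(1 + e^(-X)) lies in (0,1) and has density f_X(logit(y)) / (y(1 - y)). The normal base law below is a placeholder only (it yields the logit-normal, not the authors' new model).

    import numpy as np
    from scipy import stats
    from scipy.special import expit, logit

    def unit_pdf(y, base=stats.norm):
        # Density of Y = expit(X) on (0, 1) when X ~ base:
        # f_Y(y) = f_X(logit(y)) * |d logit/dy| = f_X(logit(y)) / (y * (1 - y)).
        y = np.asarray(y, dtype=float)
        return base.pdf(logit(y)) / (y * (1.0 - y))

    # Sampling is just the forward transform of draws from the base law:
    x = stats.norm.rvs(size=1000, random_state=0)
    y = expit(x)  # every value falls in (0, 1)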


2019, pp. 483-494
Author(s): Vitaliy Babak, Volodymyr Eremenko, Artur Zaporozhets

In this paper, a preliminary normalization of diagnostic parameters using the Johnson distribution is proposed; with its three basic distribution groups (SL, SB, SU), the family covers a wide class of empirical distributions. The mathematical description of the family makes it possible to find the approximating probability density function in explicit form, to determine the distribution parameters needed to obtain the corresponding function (curve), and to find the inverse function for computing quantiles of specified levels. To assess the accuracy of the normalized data, they were compared with data obtained by replacing the resulting law with a Gaussian one; the comparison considered the percentage of values in the studied realization concentrated within the limits of the estimated quantiles. Realizations were obtained by simulation, and the same method was used to evaluate the correctness (relative systematic error) of the quantile values at the specified levels. The error δ was estimated between the conditionally true quantile value, calculated from the generated pseudo-general population, and the value estimated using the methods considered in the paper. The results show that the relative error in calculating quantiles using the Johnson distribution does not exceed 0.07% and is two orders of magnitude smaller than that of the currently accepted procedure of replacing sample laws with a Gaussian one.
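The comparison the authors report can be reproduced in outline: draw a large pseudo-general population, fit both a Johnson-family law and a Gaussian to a small sample from it, and measure each estimated quantile against the conditionally true one. The gamma population, the sample size, and the 0.99 level below are arbitrary stand-ins for the simulated data used in the paper.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    population = stats.gamma.rvs(3.0, size=200_000, random_state=rng)
    sample = rng.choice(population, size=500, replace=False)
    level = 0.99
    true_q = np.quantile(population, level)         # conditionally true quantile

    a, b, loc, scale = stats.johnsonsu.fit(sample)  # SU stands in for the family
    q_johnson = stats.johnsonsu.ppf(level, a, b, loc=loc, scale=scale)
    mu, sigma = stats.norm.fit(sample)              # Gaussian replacement
    q_gauss = stats.norm.ppf(level, loc=mu, scale=sigma)

    rel_err = lambda q: 100.0 * abs(q - true_q) / true_q
    print(f"delta Johnson: {rel_err(q_johnson):.3f}%  delta Gauss: {rel_err(q_gauss):.3f}%")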


Author(s): Ирина Карловна Васильева, Анатолий Владиславович Попов

The subject matter of the article is the process of constructing analytical descriptions of objects' attribute features for solving applied problems of statistical recognition of objects in multi-channel images. The goal is to develop a multicomponent mathematical model for representing statistical information about the combination of geometric, colour, and structural parameters of observed objects. The tasks to be solved are: to formalize the procedure of statistical image segmentation under incomplete a priori information about object classes and unknown distribution densities of the classification characteristics; to build effective algorithms for detecting and linking contour points; to choose a universal mathematical model for describing the geometric shape of both the object and its structural components; and to develop a robust method for estimating the model parameters. The methods used are: statistical methods of pattern recognition, methods of probability theory and mathematical statistics, methods of contour analysis, and numerical methods for constrained optimization. The following results were obtained. A method of multicomponent model synthesis for describing colour, geometric, and structural attributes of objects in multi-channel images is proposed. In terms of the model, the object is represented by a hierarchical set of nested contours, selected using information about the colour characteristics of statistically homogeneous regions of the image. Methods for detecting and linking contour points have been developed, which make it possible to obtain the coordinates of a circular sweep of the boundaries for both convex and concave geometric objects. As a universal basis for describing the model components, the Johnson SB distribution is adopted, which can describe practically any unimodal distribution and a wide class of bimodal distributions. A method for estimating the Johnson distribution parameters from sample data, based on the method of moments and using optimization procedures for a non-linear objective function with constraints, is given. Conclusions. The scientific novelty of the results is as follows: methods for describing object images as a combination of several brightness-geometric elements and the structural connections between them have been further developed, which makes it possible to comprehensively take into account the attribute features of objects in procedures for analyzing and interpreting images and for automatically detecting and locating objects with specified characteristics.
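The moment-based estimation step mentioned in this abstract can be sketched as a constrained fit: choose Johnson SB parameters so that the model's first four moments match the sample's, subject to the bounded support [loc, loc + scale] enclosing the data. The starting point, the SLSQP solver, and the numerically integrated model moments below are illustrative choices, not the authors' exact procedure.

    import numpy as np
    from scipy import stats
    from scipy.optimize import minimize

    def fit_johnsonsb_moments(data):
        # Method-of-moments sketch: match mean, variance, skewness and
        # excess kurtosis of the Johnson SB model to the sample values.
        data = np.asarray(data, dtype=float)
        target = np.array([np.mean(data), np.var(data),
                           stats.skew(data), stats.kurtosis(data)])

        def objective(theta):
            a, b, loc, scale = theta
            model = np.array(stats.johnsonsb.stats(a, b, loc=loc, scale=scale,
                                                   moments='mvsk'), dtype=float)
            return np.sum((model - target) ** 2)

        x0 = np.array([0.0, 1.0, np.min(data) - 0.1, np.ptp(data) + 0.2])
        constraints = [  # support [loc, loc + scale] must contain the data
            {'type': 'ineq', 'fun': lambda t: np.min(data) - t[2]},
            {'type': 'ineq', 'fun': lambda t: t[2] + t[3] - np.max(data)},
            {'type': 'ineq', 'fun': lambda t: t[1] - 1e-3},   # shape b > 0
            {'type': 'ineq', 'fun': lambda t: t[3] - 1e-3},   # scale > 0
        ]
        return minimize(objective, x0, constraints=constraints, method='SLSQP').x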


Author(s): Joshua Mullins, Sankaran Mahadevan

This paper proposes a comprehensive approach to prediction under uncertainty by application to the Sandia National Laboratories verification and validation challenge problem. In this problem, legacy data and experimental measurements of different levels of fidelity and complexity (e.g., coupon tests, material and fluid characterizations, and full system tests/measurements) compose a hierarchy of information where fewer observations are available at higher levels of system complexity. This paper applies a Bayesian methodology in order to incorporate information at different levels of the hierarchy and include the impact of sparse data in the prediction uncertainty for the system of interest. Since separation of aleatory and epistemic uncertainty sources is a pervasive issue in calibration and validation, maintaining this separation in order to perform these activities correctly is the primary focus of this paper. Toward this goal, a Johnson distribution family approach to calibration is proposed in order to enable epistemic and aleatory uncertainty to be separated in the posterior parameter distributions. The model reliability metric approach to validation is then applied, and a novel method of handling combined aleatory and epistemic uncertainty is introduced. The quality of the validation assessment is used to modify the parameter uncertainty and add conservatism to the prediction of interest. Finally, this prediction with its associated uncertainty is used to assess system-level reliability (a prediction goal for the challenge problem).
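The paper's Bayesian machinery is not reproduced here, but the separation it insists on can be illustrated with a double-loop Monte Carlo sketch: an outer epistemic loop over draws of the Johnson parameters (standing in for posterior samples) and an inner aleatory evaluation of the Johnson variable itself. The Johnson SU form and the parameter draws below are hypothetical placeholders.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    # Outer (epistemic) loop: hypothetical draws of the Johnson SU shape
    # parameters, standing in for calibrated posterior samples.
    n_epistemic = 50
    draws_a = rng.normal(0.0, 0.2, n_epistemic)
    draws_b = np.abs(rng.normal(1.5, 0.1, n_epistemic))

    # Inner (aleatory) variability: the Johnson CDF on a common grid,
    # evaluated once per epistemic draw.
    grid = np.linspace(-5.0, 5.0, 201)
    cdfs = np.array([stats.johnsonsu.cdf(grid, a, b)
                     for a, b in zip(draws_a, draws_b)])

    # The envelope of CDFs (a probability box) keeps the epistemic spread
    # visible instead of averaging it into a single aleatory distribution.
    lower, upper = cdfs.min(axis=0), cdfs.max(axis=0)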


2010, Vol. 2010 (1), pp. 000131-000136
Author(s): Mark Plucinski, Mark Hoffmeyer

Evaluating and predicting the failure rate of interconnect is a time-consuming and expensive process. Non-parametric techniques for analyzing qualification data, such as those employing the Chi-square distribution, require large sample sizes to achieve an accurate estimate. Recently there has been a resurgence in the use of extreme value theory (EVT); increases in temperature records, the number of strong storms, and flooding events have fueled this interest. A novel method based on EVT and an accelerated degradation model for estimating the failure rate from a set of stress data is proposed and described. The advantages of this technique are discussed, along with recommendations on sample size and advice on how the total sample should be sectioned before the maximum is taken of each subset. Interconnect examples, generated from Monte Carlo simulations of known distributions, are used to compare the extreme value technique with the Chi-square and Johnson distribution methods.
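A minimal block-maxima sketch of the EVT step this abstract describes: section the stress data into equal subsets, take the maximum of each subset, and fit a generalized extreme value (GEV) distribution to those maxima. The Weibull degradation data, the block size of 50, and the stress limit are invented for illustration and are not the authors' accelerated degradation model.

    import numpy as np
    from scipy import stats

    # Invented stress/degradation measurements (illustrative only):
    degradation = stats.weibull_min.rvs(1.5, scale=10.0, size=4000, random_state=3)

    # Section the total sample into subsets, then take each subset's maximum.
    block_size = 50
    maxima = degradation.reshape(-1, block_size).max(axis=1)

    # Fit a GEV distribution to the block maxima.
    c, loc, scale = stats.genextreme.fit(maxima)

    # Probability that a block maximum exceeds a given stress limit:
    limit = 40.0
    p_exceed = stats.genextreme.sf(limit, c, loc=loc, scale=scale)

Larger blocks push the maxima closer to the GEV limit but leave fewer of them to fit, which is exactly the sample-sectioning trade-off the abstract refers to.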

