Computational Procedure of Performance Assessment of Lifetime Index of Products for the Weibull Distribution with the Progressive First-Failure-Censored Sampling Plan

2012 ◽  
Vol 2012 ◽  
pp. 1-13 ◽  
Author(s):  
Ching-Wen Hong ◽  
Wen-Chuan Lee ◽  
Jong-Wuu Wu

Process capability analysis has been widely applied in the field of quality control to monitor the performance of industrial processes. In practice, the lifetime performance index C_L is a popular means of assessing the performance and potential of a process, where L is the lower specification limit. This study applies large-sample theory to construct a maximum likelihood estimator (MLE) of C_L under the Weibull distribution with the progressive first-failure-censored sampling plan. The MLE of C_L is then used to develop a new hypothesis testing procedure under the condition of known L.
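The abstract does not give the estimator in closed form, but the construction can be sketched numerically. The following Python sketch fits a Weibull model to a progressive first-failure-censored sample by maximizing the standard censored log-likelihood, then plugs the fitted mean into an exponential-style index C_L = 1 - L/mu. The group size k, the removal scheme R_i, the toy data, and this particular definition of C_L are illustrative assumptions, not the paper's exact formulas.

```python
# Hedged sketch: MLE of Weibull parameters from a progressive
# first-failure-censored sample, then a plug-in estimate of the
# lifetime performance index. The definition C_L = 1 - L/mu is an
# assumption; the paper's exact C_L definition may differ.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gamma

def neg_log_lik(params, x, R, k):
    """Negative log-likelihood of a Weibull(shape=beta, scale=lam)
    progressive first-failure-censored sample.
    x: ordered observed first-failure times, R: removal counts, k: group size."""
    beta, lam = params
    if beta <= 0 or lam <= 0:
        return np.inf
    z = (x / lam) ** beta
    log_f = np.log(beta / lam) + (beta - 1) * np.log(x / lam) - z  # log pdf
    log_S = -z                                                     # log survival
    # each observed first failure carries k*(R_i + 1) - 1 surviving items
    return -np.sum(log_f + (k * (R + 1) - 1) * log_S)

def estimate_CL(x, R, k, L):
    """Fit (beta, lam) by numerical MLE and plug into C_L = 1 - L/mu,
    where mu = lam * Gamma(1 + 1/beta) is the fitted Weibull mean."""
    res = minimize(neg_log_lik, x0=[1.0, float(np.mean(x))],
                   args=(np.asarray(x, float), np.asarray(R, float), k),
                   method="Nelder-Mead")
    beta_hat, lam_hat = res.x
    mu_hat = lam_hat * gamma(1.0 + 1.0 / beta_hat)
    return 1.0 - L / mu_hat, beta_hat, lam_hat

# toy example: m = 5 observed first failures, k = 2 items per group
x = [0.8, 1.3, 1.9, 2.6, 3.4]
R = [1, 0, 1, 0, 2]
CL_hat, beta_hat, lam_hat = estimate_CL(x, R, k=2, L=0.5)
print(f"C_L estimate: {CL_hat:.3f}")
```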

2021 ◽  
Vol 25 (8) ◽  
pp. 1477-1482
Author(s):  
O.F. Odeyinka ◽  
F.O. Ogunwolu ◽  
O.P. Popoola ◽  
T.O. Oyedokun

Process capability analysis combines statistical tools and control charts with sound engineering judgment to interpret and analyze the data representing a process. This work analyzes the process capability of a polypropylene bag producing company. The case study organization runs two production plants, and data were collected over a period of nine months. Analysis showed that the output spread of plant 1 was greater than the specification interval spread, which implies poor capability: there are non-conforming parts below the Lower Specification Limit (LSL: 500,000 metres) and above the Upper Specification Limit (USL: 600,000 metres), so the output requires improvement. Similarly, the capability analysis of plant 2 shows that its overall output spread is greater than the specification interval spread (poor capability). The output centres of the specification and overall intervals are vertically aligned, indicating that the output from plant 2 is also process centered but still requires improvement. Recommendations were made to improve the output of each production plant.
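The spread-versus-specification comparison described above is, in effect, a computation of the standard capability indices Cp and Cpk. A minimal Python sketch, using the LSL and USL quoted in the abstract but synthetic placeholder data (the study's real monthly outputs are not reproduced here):

```python
# Hedged sketch of the capability indices behind the plant comparison.
# LSL/USL come from the abstract; the sample data are placeholders.
import numpy as np

LSL, USL = 500_000, 600_000  # specification limits in metres

def capability(output):
    mu, sigma = np.mean(output), np.std(output, ddof=1)
    Cp = (USL - LSL) / (6 * sigma)               # potential capability
    Cpk = min(USL - mu, mu - LSL) / (3 * sigma)  # penalizes off-centre mean
    return Cp, Cpk

# example with synthetic monthly outputs for one plant
rng = np.random.default_rng(1)
plant = rng.normal(550_000, 40_000, size=9)  # 9 months of data
Cp, Cpk = capability(plant)
print(f"Cp={Cp:.2f}, Cpk={Cpk:.2f}")
```

Cp below 1 corresponds to the finding that the output spread exceeds the specification interval spread, while a Cpk well below Cp would additionally flag an off-centre process.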


2010 ◽  
Vol 3 (S1) ◽  
pp. 531-534
Author(s):  
Maja Rujnić-Sokele ◽  
Mladen Šercer ◽  
Damir Godec

2021 ◽  
Vol 66 (3) ◽  
pp. 7-21
Author(s):  
Mirosław Szreder

Increasing numbers of non-random errors are observed in contemporary sample surveys, in particular those resulting from nonresponse or faulty measurements (imprecise statistical observation). Until recently, the consequences of these kinds of errors had not been widely discussed in the context of hypothesis testing; researchers focused almost entirely on sampling errors (random errors), whose magnitude decreases as the size of the random sample grows. In consequence, researchers working with very large samples tend to overlook the influence that random and non-random errors have on their results. The aim of this paper is to show how non-random errors can affect decision-making based on the classical hypothesis testing procedure, with particular attention devoted to cases in which researchers work with large samples. The study supports the thesis that large samples make statistical tests more sensitive to non-random errors: systematic errors, as a special case of non-random errors, increase the probability of wrongly rejecting a true hypothesis as the sample size grows. Supplementing hypothesis testing with the analysis of confidence intervals may in this context provide substantive support for the researcher in drawing accurate inferences.
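The paper's central claim, that a fixed systematic error inflates the rejection rate of a true hypothesis as the sample grows, is easy to illustrate by simulation. A minimal sketch, assuming a one-sample t-test and an arbitrary bias added to every measurement; all numeric values are illustrative, not taken from the paper:

```python
# Hedged simulation of the thesis: a fixed systematic measurement
# bias makes a classical test reject a true H0 more often as n grows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
mu0, bias, alpha, reps = 100.0, 0.5, 0.05, 2000

for n in (30, 300, 3000):
    rejections = 0
    for _ in range(reps):
        # the true mean equals mu0, but every measurement carries the bias
        sample = rng.normal(mu0, 10.0, size=n) + bias
        t, p = stats.ttest_1samp(sample, mu0)
        rejections += (p < alpha)
    print(f"n={n:5d}: empirical rejection rate = {rejections / reps:.3f}")
```

With the bias fixed, the empirical rejection rate climbs from roughly the nominal 5% level toward 1 as n increases, even though the hypothesis is true: exactly the large-sample sensitivity the paper describes.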


Author(s):  
Stoyan Stoyanov ◽  
Ying Kit Tang ◽  
Chris Bailey ◽  
Robert Evans ◽  
Silvia Marson ◽  
...  
