Decoupling the Confounded Effect of Machine Error and Geometric Characteristics of Artifacts in Precision Measurement and Machine Calibration

1999 ◽  
Vol 122 (2) ◽  
pp. 331-337 ◽  
Author(s):  
G. Lee ◽  
J. Mou ◽  
Y. Shen

Inspection is commonly used to scrutinize the quality of manufactured products against established standards and specifications. However, the quality and reliability of many inspection processes are contaminated by various measurement errors. One of the prominent sources of measurement error is the imperfection of the measuring device and its interaction with the geometric characteristics of a measured feature. To ensure the quality and reliability of any inspection process, measurement errors need to be addressed for all data acquisition activities. A method is also needed to identify and decouple the effects of confounded errors; if this can be done, the collected data can be adjusted properly to allow a more meaningful analysis. In this paper, the issue of measurement error identification and reduction for machine calibration and dimension measurement using artifacts with spherical features is discussed. Analytical models are derived to first assess and then decouple the confounded effect of the imperfect measuring device and its interaction with the geometric characteristics of a measured feature. Finally, case studies are used to illustrate the use and effectiveness of the methodology. [S1087-1357(00)00402-0]
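
A minimal numerical sketch of the confounding the paper addresses: a least-squares sphere fit to points probed on a spherical artifact, where the fit residuals mix random machine/probe error with the artifact's form deviation. The lobed form term, the error magnitudes, and the fit itself are illustrative assumptions, not the authors' analytical models.

```python
import numpy as np

rng = np.random.default_rng(7)

# Probe directions sampled uniformly on the sphere.
n = 200
u, v = rng.uniform(0, 2 * np.pi, n), np.arccos(rng.uniform(-1, 1, n))
dirs = np.column_stack([np.sin(v) * np.cos(u), np.sin(v) * np.sin(u), np.cos(v)])

r_true, c_true = 12.7, np.array([100.0, 50.0, 30.0])
form = 0.002 * np.sin(3 * u)            # lobed form deviation of the artifact
probe = rng.normal(0, 0.001, n)         # random machine/probe error
pts = c_true + (r_true + form + probe)[:, None] * dirs

# Linearized least-squares fit: |p - c|^2 = r^2  =>  2 p.c + (r^2 - |c|^2) = |p|^2
A = np.column_stack([2 * pts, np.ones(n)])
w, *_ = np.linalg.lstsq(A, np.sum(pts**2, axis=1), rcond=None)
c = w[:3]
r = np.sqrt(w[3] + c @ c)
resid = np.linalg.norm(pts - c, axis=1) - r
print(r, np.std(resid))                 # residuals confound form + machine error
```

Because both error sources enter the residuals identically in a single setup, separating them requires extra structure, such as the analytical models derived in the paper or remeasurement of the artifact in multiple orientations.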

Author(s):  
Oleksandr Kupriyanov

The influence of measuring-device error on the consumer's and manufacturer's risks was studied for three ways of organizing completing: complete interchangeability, selective completing, and completing with ranking. The presence of measurement error makes the risks unavoidable; their values must nevertheless be estimated so that they do not significantly affect the manufactured products. The study was carried out for a "shaft-hole" connection by statistical modeling, with both the dimension distributions and the measurement-error distributions taken to be normal. For completing with complete interchangeability, the accuracy of two-stage control was studied; it is recommended to set the accuracy of the initial measurements at 20–25 % of the tolerance field and of the repeated measurements at 10–12 % of the tolerance field, in which case the manufacturer's risk does not exceed 0.2 % and the consumer's risk is practically zero. In the case of selective completing, the requirements for measuring-device accuracy are significantly higher than for completing with complete interchangeability, since errors are possible not only at the limits of the tolerance field but also at the limits of the selection groups. The measurement error should therefore not exceed 5 % of the tolerance-field width, and it is also advisable to limit the number of selection groups. With completing with ranking, measuring-device accuracy has the least impact on the risks, especially if the number of parts in the batch is large enough and the measurement error complies with mechanical-engineering standards. It was established that for more than 10 sets, almost complete assemblability is achieved and the risks associated with measurement error become insignificant. Thus, if product accuracy must be increased at the assembly stage, completing with ranking is recommended instead of selective completing.
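
A hedged Monte Carlo sketch of the two-stage control scenario under complete interchangeability, using the recommended error levels. The normalization of the tolerance field, the reading of the error spans as ±2σ, the borderline-zone rule, and the risk definitions are illustrative assumptions layered on the abstract, not the study's actual model.

```python
import numpy as np

rng = np.random.default_rng(6)

# Tolerance field normalized to [-T, T]; dimensions and measurement errors
# normal, as in the study. Error spans as a fraction of the field width 2T:
# ~25 % at the first (coarse) stage, ~12 % at the repeat stage.
n, T = 1_000_000, 1.0
dim = rng.normal(0.0, T / 3.0, n)                    # true dimensions

def measured(x, span_frac):
    # Treat the error span as +/-2 sigma, i.e. sigma = span_frac * 2T / 4.
    return x + rng.normal(0.0, span_frac * 2 * T / 4.0, x.size)

m1 = measured(dim, 0.25)                             # first-stage measurement
border = np.abs(np.abs(m1) - T) < 0.25 * T           # near a tolerance limit
m2 = np.where(border, measured(dim, 0.12), m1)       # remeasure borderline parts
accepted = np.abs(m2) <= T

good = np.abs(dim) <= T
risk_manufacturer = np.mean(good & ~accepted)        # good parts rejected
risk_consumer = np.mean(~good & accepted)            # bad parts accepted
print(risk_manufacturer, risk_consumer)
```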


2017 ◽  
Vol 928 (10) ◽  
pp. 58-63 ◽  
Author(s):  
V.I. Salnikov

The initial subject of study is sums of measurement errors. The errors are assumed to follow the normal law, but with a limit on the marginal error, Δpred = 2m. It is known that for each confidence interval there is a number of terms n_i at which the sum equals zero; the paradox is that the probability of such an event is zero, so the value of n_i at which the sum becomes zero cannot be determined. The article proposes instead to consider the event that a sum of errors remains within the ±2m limits with a confidence level of 0.954. Within the group, all the sums then have a limit error, and it is proposed to use these tolerances for the discrepancies in geodesy instead of 2m√n_i. The concept of "the law of the truncated normal distribution with Δpred = 2m" is suggested for introduction.
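
A small simulation under the abstract's assumptions, comparing the empirical 0.954-level bound on a sum of errors drawn from a normal law truncated at ±2m with the classical geodetic tolerance 2m√n_i; the rejection sampler and the choice m = 1 are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

m, reps = 1.0, 200_000

def trunc_normal(size):
    # Normal errors truncated at the marginal error +/-2m, by rejection.
    e = rng.normal(0, m, size)
    bad = np.abs(e) > 2 * m
    while bad.any():
        e[bad] = rng.normal(0, m, bad.sum())
        bad = np.abs(e) > 2 * m
    return e

for n in (4, 9, 16, 25):
    s = trunc_normal((reps, n)).sum(axis=1)
    # Empirical 0.954-level bound on |sum| vs the classical 2m*sqrt(n).
    print(n, np.quantile(np.abs(s), 0.954), 2 * m * np.sqrt(n))
```

The truncated law yields a tighter bound than 2m√n, which is consistent with the article's proposal to replace the classical tolerance.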


2021 ◽  
pp. 1-22
Author(s):  
Daisuke Kurisu ◽  
Taisuke Otsu

This paper studies the uniform convergence rates of Li and Vuong’s (1998, Journal of Multivariate Analysis 65, 139–165; hereafter LV) nonparametric deconvolution estimator and its regularized version by Comte and Kappus (2015, Journal of Multivariate Analysis 140, 31–46) for the classical measurement error model, where repeated noisy measurements on the error-free variable of interest are available. In contrast to LV, our assumptions allow unbounded supports for the error-free variable and measurement errors. Compared to Bonhomme and Robin (2010, Review of Economic Studies 77, 491–533) specialized to the measurement error model, our assumptions do not require existence of the moment generating functions of the square and product of repeated measurements. Furthermore, by utilizing a maximal inequality for the multivariate normalized empirical characteristic function process, we derive uniform convergence rates that are faster than the ones derived in these papers under such weaker conditions.
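
For intuition, here is a minimal numerical sketch of the Kotlarski-type identity on which LV's estimator is built, with a plain spectral cutoff standing in for regularization of the Comte–Kappus kind. The simulated distributions, the grid, and the cutoff T are illustrative assumptions, and no claim is made about matching the papers' rate-optimal tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two noisy repeated measurements of the error-free variable X:
# Y1 = X + e1, Y2 = X + e2, with X, e1, e2 independent and E[e1] = 0.
n = 5000
x = rng.gamma(2.0, 1.0, n)               # unbounded support, as the assumptions allow
y1 = x + rng.normal(0.0, 0.5, n)
y2 = x + rng.normal(0.0, 0.5, n)

# Kotlarski-type identity: (log phi_X)'(s) = E[i Y1 e^{isY2}] / E[e^{isY2}].
T, m = 3.0, 301                          # spectral cutoff = regularization knob
t = np.linspace(0.0, T, m)
e_ty2 = np.exp(1j * t[:, None] * y2[None, :])
psi = (1j * y1[None, :] * e_ty2).mean(axis=1) / e_ty2.mean(axis=1)

# Cumulative trapezoid integration recovers log phi_X, then phi_X itself.
log_phi = np.concatenate(([0.0], np.cumsum(0.5 * (psi[1:] + psi[:-1]) * np.diff(t))))
phi_x = np.exp(log_phi)

# Density estimate by truncated Fourier inversion:
# f(v) = (1/pi) * Re integral_0^T phi_X(t) e^{-itv} dt.
xs = np.linspace(0.0, 8.0, 200)
integrand = np.real(phi_x[None, :] * np.exp(-1j * np.outer(xs, t)))
f_hat = (0.5 * (integrand[:, 1:] + integrand[:, :-1]) * np.diff(t)).sum(axis=1) / np.pi
print("estimated density peaks near", xs[np.argmax(f_hat)])   # Gamma(2,1) mode is 1
```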


2000 ◽  
Vol 30 (2) ◽  
pp. 306-310 ◽  
Author(s):  
M S Williams ◽  
H T Schreuder

Assuming volume equations with multiplicative errors, we derive simple conditions for determining when measurement error in total height is large enough that using only tree diameter, rather than both diameter and height, is more reliable for predicting tree volume. Based on data for different tree species of excurrent form, we conclude that measurement errors of up to ±40% of the true height can be tolerated before inclusion of estimated height in volume prediction is no longer warranted.
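
A schematic Monte Carlo illustration of the trade-off, assuming a generic combined-variable volume equation V = a·D^b·H^c with multiplicative error; the coefficients, the height-diameter curve, and the error levels are invented for illustration and are not the authors' fitted models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Schematic allometric volume model with multiplicative log-normal error.
a, b, c = 0.00007, 1.8, 1.1

n = 10_000
d = rng.uniform(10, 60, n)                    # diameter at breast height, cm
h = 1.3 + 25 * (1 - np.exp(-0.05 * d))        # "true" height, m (assumed curve)
v_true = a * d**b * h**c * rng.lognormal(0, 0.1, n)

def rmse(pred, obs):
    return np.sqrt(np.mean((pred - obs) ** 2))

# D-and-H predictor, with height measured subject to a +/- fractional error.
for err in (0.0, 0.2, 0.4, 0.6):
    h_meas = h * (1 + rng.uniform(-err, err, n))
    print(err, rmse(a * d**b * h_meas**c, v_true))

# D-only predictor: regress log V on log D and predict from diameter alone.
coef = np.polyfit(np.log(d), np.log(v_true), 1)
v_d = np.exp(np.polyval(coef, np.log(d)))
print("D-only", rmse(v_d, v_true))
```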


2002 ◽  
pp. 323-332 ◽  
Author(s):  
A Sartorio ◽  
G De Nicolao ◽  
D Liberati

OBJECTIVE: The quantitative assessment of gland responsiveness to exogenous stimuli is typically carried out using the peak value of the hormone concentrations in plasma, the area under its curve (AUC), or through deconvolution analysis. However, none of these methods is satisfactory, due to either sensitivity to measurement errors or various sources of bias. The objective was to introduce and validate an easy-to-compute responsiveness index, robust in the face of measurement errors and interindividual variability of kinetics parameters. DESIGN: The new method has been tested on responsiveness tests for the six pituitary hormones (using GH-releasing hormone, thyrotrophin-releasing hormone, gonadotrophin-releasing hormone and corticotrophin-releasing hormone as secretagogues), for a total of 174 tests. Hormone concentrations were assayed in six to eight samples between -30 min and 120 min from the stimulus. METHODS: An easy-to-compute direct formula has been worked out to assess the 'stimulated AUC', that is, the part of the AUC of the response curve depending on the stimulus, as opposed to pre- and post-stimulus spontaneous secretion. The weights of the formula have been reported for the six pituitary hormones and some popular sampling protocols. RESULTS AND CONCLUSIONS: The new index is less sensitive to measurement error than the peak value. Moreover, it provides results that cannot be obtained from a simple scaling of either the peak value or the standard AUC. Future studies are needed to show whether the reduced sensitivity to measurement error and the proportionality to the amount of released hormone indeed render the stimulated AUC a valid alternative to the peak value for the diagnosis of different pathophysiological states, such as, for instance, GH deficits.
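
A rough sketch of the idea behind a "stimulated AUC": trapezoidal integration of the post-stimulus samples with the spontaneous (baseline) secretion subtracted. The published index instead folds this into hormone- and protocol-specific weights in a direct formula; the sampling times and concentrations below are invented.

```python
import numpy as np

# Sampling times (min relative to stimulus) and measured concentrations:
# an illustrative GH-like response profile, not data from the study.
t = np.array([-30, 0, 15, 30, 45, 60, 90, 120], dtype=float)
conc = np.array([1.2, 1.0, 8.5, 14.0, 11.0, 7.0, 3.5, 2.0])

# Total AUC over the post-stimulus window via trapezoidal weights.
post = t >= 0
tp, cp = t[post], conc[post]
auc_total = float(np.sum(0.5 * (cp[1:] + cp[:-1]) * np.diff(tp)))

# Spontaneous secretion estimated from the pre-stimulus samples and
# subtracted over the same window; the stimulated AUC is what remains.
baseline = conc[t <= 0].mean()
auc_stimulated = auc_total - baseline * (tp[-1] - tp[0])
print(auc_stimulated)
```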


1999 ◽  
Vol 56 (7) ◽  
pp. 1234-1240
Author(s):  
W R Gould ◽  
L A Stefanski ◽  
K H Pollock

All catch-effort estimation methods implicitly assume catch and effort are known quantities, whereas in many cases they have been estimated and are subject to error. We evaluate the application of a simulation-based estimation procedure for measurement error models (J.R. Cook and L.A. Stefanski. 1994. J. Am. Stat. Assoc. 89: 1314-1328) in catch-effort studies. The technique involves a simulation component and an extrapolation step, hence the name SIMEX estimation. We describe SIMEX estimation in general terms and illustrate its use with applications to real and simulated catch and effort data. Correcting for measurement error with SIMEX estimation resulted in population size and catchability coefficient estimates that were, in some cases, substantially smaller than the naive estimates that ignore measurement error. In a simulation of the procedure, we compared SIMEX estimators with "naive" estimators that ignore measurement errors in catch and effort to determine the ability of SIMEX to produce bias-corrected estimates. The SIMEX estimators were less biased than the naive estimators but in some cases were also more variable. Despite the bias reduction, the SIMEX estimator had a larger mean squared error than the naive estimator for one of two artificial populations studied. However, our results suggest the SIMEX estimator may outperform the naive estimator in terms of bias and precision for larger populations.
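
A generic SIMEX sketch for a linear regression with known measurement-error variance, following the simulation-and-extrapolation recipe of Cook and Stefanski; the catch-effort estimators of the paper are more involved, so treat this only as an illustration of the mechanism.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated "true" predictor and outcome; w is the error-prone measurement.
n, sigma_u = 2000, 0.8
x = rng.normal(0, 1, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)
w = x + rng.normal(0, sigma_u, n)

def slope(xv, yv):
    return np.polyfit(xv, yv, 1)[0]

# Simulation step: inflate the error variance by a factor (1 + lam) and
# record the naive slope at each level, averaged over B remeasurements.
lams, B = np.array([0.0, 0.5, 1.0, 1.5, 2.0]), 200
est = [np.mean([slope(w + np.sqrt(lam) * sigma_u * rng.normal(0, 1, n), y)
                for _ in range(B)]) for lam in lams]

# Extrapolation step: fit a quadratic in lambda and evaluate at lambda = -1,
# i.e., the hypothetical error-free measurement.
coef = np.polyfit(lams, est, 2)
print("naive:", slope(w, y), "SIMEX:", np.polyval(coef, -1.0))
```

The quadratic extrapolant only partially removes the attenuation bias, which mirrors the abstract's finding that SIMEX reduces bias at some cost in variability.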


Dose-Response ◽  
2005 ◽  
Vol 3 (4) ◽  
Author(s):  
Kenny S. Crump

Although statistical analyses of epidemiological data usually treat the exposure variable as being known without error, estimated exposures in epidemiological studies often involve considerable uncertainty. This paper investigates the theoretical effect of random errors in exposure measurement upon the observed shape of the exposure response. The model utilized assumes that true exposures are log-normally distributed, and multiplicative measurement errors are also log-normally distributed and independent of the true exposures. Under these conditions it is shown that whenever the true exposure response is proportional to exposure to a power r, the observed exposure response is proportional to exposure to a power K, where K < r. This implies that the observed exposure response exaggerates risk, and by arbitrarily large amounts, at sufficiently small exposures. It also follows that a truly linear exposure response will appear to be supra-linear—i.e., a linear function of exposure raised to the K-th power, where K is less than 1.0. These conclusions hold generally under the stated log-normal assumptions whenever there is any amount of measurement error, including, in particular, when the measurement error is unbiased either in the natural or log scales. Equations are provided that express the observed exposure response in terms of the parameters of the underlying log-normal distribution. A limited investigation suggests that these conclusions do not depend upon the log-normal assumptions, but hold more widely. Because of this problem, in addition to other problems in exposure measurement, shapes of exposure responses derived empirically from epidemiological data should be treated very cautiously. In particular, one should be cautious in concluding that the true exposure response is supra-linear on the basis of an observed supra-linear form.
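
A Monte Carlo check of the attenuation under the stated log-normal assumptions. For log-normal true exposure X and independent multiplicative log-normal error, normal-theory regression of log X on log Z gives K = r·σx²/(σx² + σu²); this closed form is a standard derivation consistent with the abstract's K < r, not a formula quoted from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Log-normal true exposure X and independent multiplicative log-normal error U.
n, sx, su, r = 200_000, 1.0, 0.7, 1.0
logx = rng.normal(0, sx, n)
z = np.exp(logx + rng.normal(0, su, n))      # observed exposure Z = X * U

# True response proportional to X^r.
resp = np.exp(r * logx)

# Bin by observed exposure and fit the apparent power K on the log-log scale.
bins = np.quantile(np.log(z), np.linspace(0, 1, 21))
idx = np.digitize(np.log(z), bins[1:-1])
mz = np.array([np.log(z)[idx == k].mean() for k in range(20)])
mr = np.array([np.log(resp[idx == k].mean()) for k in range(20)])
K = np.polyfit(mz, mr, 1)[0]
print("theory:", r * sx**2 / (sx**2 + su**2), "simulated:", K)
```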


Author(s):  
Vinodkumar Jacob ◽  
M. Bhasi ◽  
R. Gopikakumari

Measurement is the act, or the result, of a quantitative comparison between a given quantity and a quantity of the same kind chosen as a unit. Measurement underlies scientific and technological investigation, and it is generally agreed that all measurements contain errors. In a measuring system where a human being takes the measurement with an instrument following a preset process, the measurement error may be due to the instrument, the process, or the human. This study is devoted to understanding the human errors in measurement. Work-related and human factors that could affect measurement errors were identified. An experimental study was conducted with different subjects in which the factors were changed one at a time and the subjects' measurements recorded. The measurement errors were then calculated, and the resulting data were subjected to statistical analysis to draw conclusions about the influence of the different factors on human errors in measurement. The findings are presented in the paper.


Author(s):  
Patricia Penabad Durán ◽  
Paolo Di Barba ◽  
Xose Lopez-Fernandez ◽  
Janusz Turowski

Purpose – The purpose of this paper is to describe a parameter identification method based on multiobjective (MO) deterministic and non-deterministic optimization algorithms to compute the temperature distribution on transformer tank covers. Design/methodology/approach – The strategy for implementing the parameter identification process consists of three main steps. The first step is to define the most appropriate objective function, and the identification problem is solved for the chosen parameters using single-objective (SO) optimization algorithms. Then the sensitivity of the computational model to measurement error is assessed, and finally it is included as an additional objective function, making the identification problem a MO one. Findings – Computations with identified/optimal parameters yield accurate results for a wide range of current values and different conductor arrangements. From the numerical solution of the temperature field, decisions on dimensions and materials can be taken to avoid overheating on transformer covers. Research limitations/implications – The accuracy of the model depends on its parameters, such as heat exchange coefficients and material properties, which are difficult to determine from formulae or from the literature. Thus the goal of the presented technique is to achieve the best possible agreement between measured and numerically calculated temperature values. Originality/value – Differing from previous works found in the literature, sensitivity to measurement error is considered in the parameter identification technique as an additional objective function. Thus, solutions less sensitive to measurement errors, at the expense of some degradation in accuracy, are identified by means of MO optimization algorithms.
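
A toy sketch of the three-step strategy with a stand-in forward model: a single-objective fit (w = 0), a crude sensitivity-to-measurement-error proxy, and a weighted-sum scalarization that traces accuracy/robustness trade-offs. The model, the offset used as the error perturbation, and the scalarization are all assumptions; the paper applies MO deterministic and non-deterministic algorithms to a transformer thermal model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Stand-in forward model: temperatures at probe points from two parameters
# p = (amplitude, decay rate), playing the role of heat-exchange coefficients
# and material properties in the paper's thermal model.
pts = np.linspace(0.0, 1.0, 8)
def forward(p):
    return p[0] * np.exp(-p[1] * pts) + 20.0

t_meas = forward((35.0, 2.0)) + rng.normal(0.0, 0.3, pts.size)

def identify(data, w=0.0):
    """Weighted-sum scalarization of (misfit, sensitivity-to-error proxy)."""
    def cost(p):
        mis = np.sum((forward(p) - data) ** 2)          # objective 1: agreement
        # Objective 2 (crude proxy): change in misfit when the measurements
        # are perturbed by a systematic offset of one error standard deviation.
        sens = abs(np.sum((forward(p) - (data + 0.3)) ** 2) - mis)
        return (1.0 - w) * mis + w * sens
    return minimize(cost, x0=[30.0, 1.0], method="Nelder-Mead").x

# w = 0 reproduces the single-objective identification step; sweeping w
# traces accuracy/robustness trade-off points of a Pareto-like front.
for w in (0.0, 0.3, 0.6):
    print(w, identify(t_meas, w))
```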


2012 ◽  
Vol 263-266 ◽  
pp. 501-505
Author(s):  
Xiong Dong Ding ◽  
Guang Qian Chu

In the ocean, owing to the non-ideal channel environment, it is difficult to locate the position of a vessel quickly and accurately. One way to improve the location accuracy for an accident vessel is to analyze the main causes of the different kinds of measurement error and find solutions that decrease those errors. This paper studies the application of Kalman filtering technology to a wireless location system, which can reduce the measurement error and greatly improve the location accuracy.
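
A minimal constant-velocity Kalman filter over noisy one-dimensional position fixes, illustrating the mechanism; a real wireless location system would track two-dimensional position with noise covariances identified from the channel, and all values below are assumed.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])              # only position is measured
Q = np.diag([0.01, 0.01])               # process noise (assumed)
R = np.array([[25.0]])                  # measurement noise variance (assumed)

rng = np.random.default_rng(5)
true_pos = 0.5 * np.arange(100)                 # vessel moving at 0.5 units/step
z = true_pos + rng.normal(0, 5.0, 100)          # noisy fixes from the location system

x = np.array([0.0, 0.0])                        # state estimate
P = np.eye(2) * 10.0                            # estimate covariance
est = []
for zk in z:
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new fix
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([zk]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    est.append(x[0])

# Scatter of raw fixes vs filtered track around the true path.
print(np.std(z - true_pos), np.std(np.array(est) - true_pos))
```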

