Design and Evaluation of a New General-Purpose Device for Calibrating Instrumented Spatial Linkages

2009 ◽  
Vol 131 (3) ◽  
Author(s):  
Joshua A. Nordquist ◽  
M. L. Hull

Because instrumented spatial linkages (ISLs) are commonly used to measure joint rotations and must be calibrated before the device can be used with confidence, a calibration device design and an associated method for quantifying calibration device error would be useful. The objectives of the work reported in this paper were to (1) design an ISL calibration device and demonstrate the design for a specific application, (2) describe a new method for calibrating the device that minimizes measurement error, and (3) quantify the measurement error of the device using the new method. Relative translations and orientations of the device were calculated via a series of transformation matrices containing inherent fixed and variable parameters. These translations and orientations were verified with a coordinate measuring machine, which served as a gold standard. The inherent fixed parameters of the device were optimized to minimize measurement error, and the accuracy was then determined. The root mean squared error (RMSE) was 0.175 deg for orientation and 0.587 mm for position; all RMSE values were less than 0.8% of their respective full-scale ranges. These errors are comparable to published measurement errors of ISLs for position and lower by at least a factor of 2 for orientation. These errors remain in spite of the many steps taken in design and manufacturing to achieve high accuracy. Because it is challenging to achieve the accuracy required for a custom calibration device to serve as a viable gold standard, it is important to verify that a calibration device provides sufficient precision to calibrate an ISL.
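The verification step described above, comparing device-computed poses against CMM reference values and reporting RMSE separately for orientation and position, can be sketched as follows. The numbers are synthetic and the single-axis comparison is only illustrative; the paper's transformation-matrix chain and optimized parameters are not reproduced here.

```python
import numpy as np

def rmse(predicted, reference):
    """Root mean squared error between paired measurements."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((predicted - reference) ** 2)))

# Synthetic example: device readings vs. CMM gold-standard values.
cmm_orientation_deg = np.array([10.0, 25.0, 40.0, 55.0])
isl_orientation_deg = cmm_orientation_deg + np.array([0.1, -0.2, 0.15, -0.1])

cmm_position_mm = np.array([100.0, 150.0, 200.0, 250.0])
isl_position_mm = cmm_position_mm + np.array([0.5, -0.6, 0.4, -0.5])

print(rmse(isl_orientation_deg, cmm_orientation_deg))  # orientation RMSE, deg
print(rmse(isl_position_mm, cmm_position_mm))          # position RMSE, mm
```

In the paper the analogous RMSE values (0.175 deg, 0.587 mm) are computed after optimizing the device's fixed parameters against the CMM data.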

1999 ◽  
Vol 56 (7) ◽  
pp. 1234-1240
Author(s):  
W R Gould ◽  
L A Stefanski ◽  
K H Pollock

All catch-effort estimation methods implicitly assume that catch and effort are known quantities, whereas in many cases they have been estimated and are subject to error. We evaluate the application of a simulation-based estimation procedure for measurement error models (J.R. Cook and L.A. Stefanski. 1994. J. Am. Stat. Assoc. 89: 1314-1328) in catch-effort studies. The technique involves a simulation component and an extrapolation step, hence the name SIMEX estimation. We describe SIMEX estimation in general terms and illustrate its use with applications to real and simulated catch and effort data. Correcting for measurement error with SIMEX estimation resulted in population size and catchability coefficient estimates that were, in some cases, substantially smaller than the naive estimates that ignore measurement error. In a simulation of the procedure, we compared SIMEX estimators with "naive" estimators that ignore measurement errors in catch and effort to determine the ability of SIMEX to produce bias-corrected estimates. The SIMEX estimators were less biased than the naive estimators but in some cases were also more variable. Despite the bias reduction, the SIMEX estimator had a larger mean squared error than the naive estimator for one of the two artificial populations studied. However, our results suggest the SIMEX estimator may outperform the naive estimator in terms of bias and precision for larger populations.
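The simulation-and-extrapolation idea behind SIMEX can be shown on a deliberately simple problem that is not the catch-effort model of the paper: a regression slope attenuated by covariate measurement error with known error variance. Extra noise is added at several levels lambda, the resulting estimates are tracked, and a quadratic extrapolant is evaluated at lambda = -1 (the error-free case).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends on the true covariate x, but only a noisy
# version w = x + u is observed, with known error s.d. sigma_u.
n = 2000
true_slope = 2.0
sigma_u = 0.5
x = rng.normal(0.0, 1.0, n)
y = true_slope * x + rng.normal(0.0, 0.5, n)
w = x + rng.normal(0.0, sigma_u, n)

def ols_slope(x_obs, y_obs):
    x_c = x_obs - x_obs.mean()
    return float(np.dot(x_c, y_obs - y_obs.mean()) / np.dot(x_c, x_c))

naive = ols_slope(w, y)  # attenuated toward zero by measurement error

# SIMEX: simulate additional noise at levels lambda, average the slope
# estimates at each level, then extrapolate back to lambda = -1.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
B = 100  # simulated data sets per noise level
mean_slopes = []
for lam in lambdas:
    slopes = [
        ols_slope(w + rng.normal(0.0, np.sqrt(lam) * sigma_u, n), y)
        for _ in range(B)
    ]
    mean_slopes.append(np.mean(slopes))

# Quadratic extrapolation in lambda, evaluated at the error-free point.
coeffs = np.polyfit(lambdas, mean_slopes, 2)
simex = float(np.polyval(coeffs, -1.0))

print(naive, simex)  # simex should sit closer to the true slope of 2.0
```

The paper applies the same recipe with catch-effort estimators in place of the OLS slope; the quadratic extrapolant is one common choice, not the only one.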


1994 ◽  
Vol 6 (6) ◽  
pp. 485-490
Author(s):  
Yoshio Mizugaki ◽  
Teruyuki Asao ◽  
Masafumi Sakamoto ◽  
Sadao Arai

This paper presents a new method for compensating the sensitivity of a touch probe in a machine tool, where sensitivity is defined as the measurement error in each probing direction. To detect these measurement errors, a reference ball of the kind used in coordinate measuring machines is adopted as a pseudo-datum. After measuring it from various probing directions, the lobing measurement error is modelled by spline interpolation, so that the sensitivity model takes the form of a Coons patch whose parameters are the angles of the probing direction. The effect of probing approach speed on measurement error has also been examined. A finishing process following on-machine, in-process measurement was conducted in order to evaluate the utility of the sensitivity model. Measurement of the finished surfaces confirmed both the utility of the finishing process and the validity of the sensitivity model.
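The compensation scheme, calibrating the directional (lobing) error against a reference ball and subtracting the interpolated error from later readings, can be sketched in one dimension. The paper builds a 2-D spline surface (a Coons patch) over two direction angles; this stand-in uses periodic linear interpolation over a single angle, and all calibration values are made up.

```python
import numpy as np

# Calibration: probing a reference ball from several directions yields a
# directional (lobing) error at each probing angle. (Values are invented.)
cal_angles_deg = np.array([0.0, 60.0, 120.0, 180.0, 240.0, 300.0])
cal_errors_um = np.array([1.2, -0.8, 0.5, 1.0, -0.3, 0.7])

def lobing_error(angle_deg):
    """Interpolate the calibrated error at an arbitrary probing angle.

    1-D periodic linear interpolation only; the paper uses a spline
    model over two direction angles."""
    a = angle_deg % 360.0
    # Make the wrap-around point explicit for np.interp.
    xs = np.append(cal_angles_deg, 360.0)
    ys = np.append(cal_errors_um, cal_errors_um[0])
    return float(np.interp(a, xs, ys))

def compensate(measured_um, angle_deg):
    """Subtract the modelled directional error from a raw reading."""
    return measured_um - lobing_error(angle_deg)

# At a calibrated angle the compensation removes the full modelled error.
print(compensate(10.0 + 1.2, 0.0))
```

Between calibration angles the correction is only as good as the interpolant, which is why the paper's spline model and the probing-speed study matter.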


2021 ◽  
Vol 2137 (1) ◽  
pp. 012027
Author(s):  
Rui Chen ◽  
Bowen Ji ◽  
Chenxi Duan

The light-screen array measurement method is well suited to measuring the impact coordinates of rapid-fire weapons, and its measurement error is determined by the measurement model. In this paper, the separated light-screen array is improved to an integrated light-screen array, which reduces the number of parameters and optimizes the measurement model. Three factors affecting the coordinate measurement error of the projectile under the integrated measurement model are analysed, and their influence on the distribution of coordinate measurement errors is simulated over a selected 1 m × 1 m target area. The error distributions of the separated and integrated measurement models are then simulated under the same conditions, based on the design values and the current technology level. The results show that, compared with the separated measurement model under the same simulation conditions, the comprehensive coordinate measurement error is reduced by about 2.1 mm within the 1 m × 1 m target area. This research can serve as a reference for the design and optimization of light-screen arrays and other similar photoelectric measurement systems, and provides new ideas for improving the coordinate measurement precision of rapid-fire weapons.
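The simulation methodology, propagating several independent error sources through a measurement model and mapping the resulting RMSE over the 1 m × 1 m target area, can be sketched generically. The measurement model and the three error magnitudes below are hypothetical placeholders, not the paper's light-screen geometry.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical error sources (standard deviations); the paper ties its
# factors to the actual light-screen geometry and timing accuracy.
sigma_time_mm = 1.0    # coordinate error from timing resolution
sigma_angle = 1e-3     # screen-inclination error, rad
sigma_offset_mm = 0.5  # screen mounting offset

def measure(x_mm, y_mm):
    """One simulated measurement of a projectile at (x, y) in the target
    plane, perturbed by the three assumed error sources."""
    d_angle = rng.normal(0.0, sigma_angle)
    return (x_mm
            + rng.normal(0.0, sigma_time_mm)
            + y_mm * d_angle              # inclination error grows with y
            + rng.normal(0.0, sigma_offset_mm))

# RMSE map over a 1 m x 1 m target area centred on the origin.
grid = np.linspace(-500.0, 500.0, 11)  # mm
n_trials = 300
rmse_map = np.empty((grid.size, grid.size))
for i, y in enumerate(grid):
    for j, x in enumerate(grid):
        errs = np.array([measure(x, y) - x for _ in range(n_trials)])
        rmse_map[i, j] = np.sqrt(np.mean(errs ** 2))

print(rmse_map.min(), rmse_map.max())
```

Comparing two such maps built from the separated and integrated models, under identical error magnitudes, is how a figure like the paper's 2.1 mm improvement would be obtained.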


Author(s):  
Jan Pablo Burgard ◽  
Joscha Krause ◽  
Dennis Kreber ◽  
Domingo Morales

The connection between regularization and min–max robustification in the presence of unobservable covariate measurement errors in linear mixed models is addressed. We prove that regularized model parameter estimation is equivalent to robust loss minimization under a min–max approach. Using the LASSO, Ridge regression, and the Elastic Net as examples, we derive uncertainty sets that characterize the feasible noise that can be added to a given estimation problem. These sets allow us to determine measurement error bounds without distributional assumptions. A conservative Jackknife estimator of the mean squared error in this setting is proposed. We further derive conditions under which min–max robust estimation of model parameters is consistent. The theoretical findings are supported by a Monte Carlo simulation study under multiple measurement error scenarios.
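Of the three regularizers named above, Ridge has a simple closed form that is easy to verify numerically. The sketch below computes the Ridge estimate two equivalent ways on synthetic data; it illustrates the regularized estimator itself, not the paper's min–max equivalence proof or its uncertainty sets.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 4
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5, 0.0])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

lam = 0.5  # regularization strength

# Ridge closed form: (X'X + lam I)^{-1} X'y.
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Same estimate via an augmented least-squares problem: stack
# sqrt(lam) * I under X and zeros under y, then solve ordinary OLS.
X_aug = np.vstack([X, np.sqrt(lam) * np.eye(p)])
y_aug = np.concatenate([y, np.zeros(p)])
beta_aug, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)

print(np.allclose(beta_ridge, beta_aug))  # the two routes agree
```

The paper's contribution is to show that estimators of this form also solve a min–max problem over a suitable uncertainty set of covariate perturbations.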


2017 ◽  
Vol 928 (10) ◽  
pp. 58-63 ◽  
Author(s):  
V.I. Salnikov

The initial subject of study is consistent sums of measurement errors. The errors are assumed to follow the normal law, but with a limit on the value of the marginal error, Δpred = 2m. It is known that to each number of terms ni there corresponds a confidence interval within which the value of the sum is equal to zero. The paradox is that the probability of such an event is zero; therefore, it is impossible to determine the value of ni at which the sum becomes zero. The article proposes instead to consider the event that a sum of errors stays within the ±2m limits with a confidence level of 0.954. Within the group, all the sums then share a common limit error. These tolerances are proposed for use as discrepancy limits in geodesy instead of 2m·√ni. The concept of a "law of the truncated normal distribution with Δpred = 2m" is suggested.


2021 ◽  
pp. 1-22
Author(s):  
Daisuke Kurisu ◽  
Taisuke Otsu

This paper studies the uniform convergence rates of Li and Vuong’s (1998, Journal of Multivariate Analysis 65, 139–165; hereafter LV) nonparametric deconvolution estimator and its regularized version by Comte and Kappus (2015, Journal of Multivariate Analysis 140, 31–46) for the classical measurement error model, where repeated noisy measurements on the error-free variable of interest are available. In contrast to LV, our assumptions allow unbounded supports for the error-free variable and measurement errors. Compared to Bonhomme and Robin (2010, Review of Economic Studies 77, 491–533) specialized to the measurement error model, our assumptions do not require existence of the moment generating functions of the square and product of repeated measurements. Furthermore, by utilizing a maximal inequality for the multivariate normalized empirical characteristic function process, we derive uniform convergence rates that are faster than the ones derived in these papers under such weaker conditions.


2000 ◽  
Vol 30 (2) ◽  
pp. 306-310 ◽  
Author(s):  
M S Williams ◽  
H T Schreuder

Assuming volume equations with multiplicative errors, we derive simple conditions for determining when measurement error in total height is large enough that using only tree diameter, rather than both diameter and height, is more reliable for predicting tree volume. Based on data for different tree species of excurrent form, we conclude that measurement errors of up to ±40% of the true height can be tolerated before inclusion of estimated height in volume prediction is no longer warranted.
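The trade-off can be explored by simulation: under a volume equation with multiplicative error, compare the fit of a diameter-and-height model against a diameter-only model as the relative height error grows. The model form, coefficients, and error magnitudes below are assumptions for illustration, not the paper's species data or analytical conditions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# Assumed volume equation with multiplicative error: V = b * D^2 * H * eps.
D = rng.uniform(20.0, 60.0, n)             # diameter at breast height, cm
H = rng.uniform(10.0, 35.0, n)             # true total height, m
V = 4e-5 * D**2 * H * rng.lognormal(0.0, 0.05, n)

def pred_rmse(height_error_frac):
    """RMSE of log-volume fits when measured height carries a uniform
    relative error of +/- height_error_frac."""
    H_meas = H * (1.0 + rng.uniform(-height_error_frac,
                                    height_error_frac, n))
    # Diameter-and-height model, fit on logs.
    X = np.column_stack([np.ones(n), np.log(D), np.log(H_meas)])
    coef, *_ = np.linalg.lstsq(X, np.log(V), rcond=None)
    r_dh = np.sqrt(np.mean((X @ coef - np.log(V)) ** 2))
    # Diameter-only model.
    X2 = np.column_stack([np.ones(n), np.log(D)])
    coef2, *_ = np.linalg.lstsq(X2, np.log(V), rcond=None)
    r_d = np.sqrt(np.mean((X2 @ coef2 - np.log(V)) ** 2))
    return r_dh, r_d

for frac in (0.0, 0.2, 0.4):
    r_dh, r_d = pred_rmse(frac)
    print(f"height error ±{frac:.0%}: D+H RMSE={r_dh:.3f}, D-only RMSE={r_d:.3f}")
```

The paper derives analytical conditions for the crossover point instead of simulating it; in this synthetic setup the height term still helps at ±40%, consistent with the paper's tolerance figure.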


1996 ◽  
Vol 29 (1) ◽  
pp. 133-142
Author(s):  
I V Krasovsky ◽  
V I Peresada

2002 ◽  
pp. 323-332 ◽  
Author(s):  
A Sartorio ◽  
G De Nicolao ◽  
D Liberati

OBJECTIVE: The quantitative assessment of gland responsiveness to exogenous stimuli is typically carried out using the peak value of the hormone concentration in plasma, the area under its curve (AUC), or deconvolution analysis. However, none of these methods is satisfactory, due either to sensitivity to measurement errors or to various sources of bias. The objective was to introduce and validate an easy-to-compute responsiveness index that is robust in the face of measurement errors and interindividual variability of kinetics parameters. DESIGN: The new method has been tested on responsiveness tests for the six pituitary hormones (using GH-releasing hormone, thyrotrophin-releasing hormone, gonadotrophin-releasing hormone and corticotrophin-releasing hormone as secretagogues), for a total of 174 tests. Hormone concentrations were assayed in six to eight samples between -30 min and 120 min from the stimulus. METHODS: An easy-to-compute direct formula has been worked out to assess the "stimulated AUC", that is, the part of the AUC of the response curve that depends on the stimulus, as opposed to pre- and post-stimulus spontaneous secretion. The weights of the formula are reported for the six pituitary hormones and some popular sampling protocols. RESULTS AND CONCLUSIONS: The new index is less sensitive to measurement error than the peak value. Moreover, it provides results that cannot be obtained from a simple scaling of either the peak value or the standard AUC. Future studies are needed to show whether the reduced sensitivity to measurement error and the proportionality to the amount of released hormone make the stimulated AUC a valid alternative to the peak value for the diagnosis of different pathophysiological states, such as, for instance, GH deficits.
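A simplified stand-in for the stimulated-AUC idea: estimate the baseline from the pre-stimulus samples and take the trapezoidal AUC of the concentration above that baseline over the post-stimulus window. The paper's actual index uses a weighted direct formula with hormone-specific weights, which are not reproduced here; the sampling times mirror the protocol in the abstract.

```python
import numpy as np

def stimulated_auc(t_min, conc):
    """Trapezoidal AUC of the response above the pre-stimulus baseline.

    Baseline = mean of samples taken before the stimulus (t < 0); a
    simplified sketch, not the paper's weighted formula."""
    t = np.asarray(t_min, dtype=float)
    c = np.asarray(conc, dtype=float)
    baseline = c[t < 0].mean()
    tp = t[t >= 0]
    d = c[t >= 0] - baseline
    # Manual trapezoidal rule over the post-stimulus samples.
    return float(np.sum((d[1:] + d[:-1]) / 2.0 * np.diff(tp)))

# Sampling between -30 min and 120 min, as in the abstract.
t = np.array([-30.0, -15.0, 0.0, 15.0, 30.0, 60.0, 90.0, 120.0])

flat = np.full_like(t, 5.0)                              # no response
peak = np.array([5.0, 5.0, 5.0, 12.0, 10.0, 7.0, 6.0, 5.0])

print(stimulated_auc(t, flat))  # 0.0: a flat curve has no stimulated part
print(stimulated_auc(t, peak))
```

Unlike the raw peak value, an index of this form pools several samples, which is the source of its lower sensitivity to measurement error.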


Author(s):  
H. James de St. Germain ◽  
David E. Johnson ◽  
Elaine Cohen

Reverse engineering (RE) is the process of defining and instantiating a model based on measurements taken from an exemplar object. Traditional RE is costly, requiring extensive time from a domain expert using calipers and/or coordinate measuring machines to create new design drawings or CAD models. Increasingly, RE is being automated through mechanized sensing devices and general-purpose surface-fitting software. This work demonstrates the ability to reverse-engineer parts by combining feature-based techniques with freeform surface fitting to produce more accurate and appropriate CAD models than previously possible.

