Setting Alarm Thresholds in Measurements with Systematic and Random Errors

Stats
2019
Vol 2 (2)
pp. 259-271
Author(s):
Tom Burr
Elisa Bonner
Kamil Krzysztoszek
Claude Norman

For statistical evaluations that involve within-group and between-group variance components (denoted $\sigma_W^2$ and $\sigma_B^2$, respectively), there is sometimes a need to monitor for a shift in the mean of time-ordered data. Because both $\sigma_W^2$ and $\sigma_B^2$ must be estimated, uncertainty in the estimates $\hat{\sigma}_W^2$ and $\hat{\sigma}_B^2$ should be accounted for when setting alarm thresholds to check for a mean shift. One-way random-effects analysis of variance (ANOVA) is the main tool for analysing such grouped data. Nearly all ANOVA applications assume that both the within-group and between-group components are normally distributed. However, depending on the application, the within-group and/or between-group probability distributions might not be well approximated by a normal distribution. This review paper uses the same example throughout to illustrate the possible approaches to setting alarm limits in grouped data, depending on what is assumed about the within-group and between-group probability distributions. The example involves measurement data for which systematic errors are assumed to remain constant within a group and to change between groups. The false alarm probability depends on the assumed measurement error model and its within-group and between-group error variances, which are estimated using historical data, usually with ample within-group data but with a small number of groups (typically three to 10). This paper illustrates parametric, semi-parametric, and non-parametric options for setting alarm thresholds in such grouped data.
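
As a concrete illustration of the fully parametric (normal-normal) case, the following minimal Python sketch estimates $\hat{\sigma}_W^2$ and $\hat{\sigma}_B^2$ by one-way random-effects ANOVA (method of moments) and sets a one-sided normal-theory alarm threshold for a future group mean. The group sizes, alarm level, and simulated data are illustrative assumptions, not the paper's actual example.

```python
import numpy as np
from scipy import stats

def alarm_threshold(groups, alpha=0.05):
    """Parametric alarm threshold for a new group mean, assuming normal
    within-group and between-group errors (one-way random-effects ANOVA)."""
    k = len(groups)                          # number of groups (often 3-10)
    n = np.array([len(g) for g in groups])   # observations per group
    means = np.array([np.mean(g) for g in groups])
    grand = np.concatenate(groups).mean()

    # Within- and between-group mean squares
    ssw = sum(((np.asarray(g) - m) ** 2).sum() for g, m in zip(groups, means))
    msw = ssw / (n.sum() - k)
    msb = (n * (means - grand) ** 2).sum() / (k - 1)

    # Method-of-moments variance-component estimates
    n0 = (n.sum() - (n ** 2).sum() / n.sum()) / (k - 1)
    sigma2_w = msw
    sigma2_b = max((msb - msw) / n0, 0.0)    # truncate negative estimates

    # One-sided threshold for the mean of a future group of m observations
    m = int(round(n.mean()))                 # illustrative future group size
    se = np.sqrt(sigma2_b + sigma2_w / m)
    return grand + stats.norm.ppf(1 - alpha) * se, sigma2_w, sigma2_b

rng = np.random.default_rng(1)
groups = [5 + rng.normal(0, 0.5) + rng.normal(0, 0.2, size=20) for _ in range(5)]
print(alarm_threshold(groups))
```

With only three to 10 groups, $\hat{\sigma}_B^2$ is itself quite uncertain, which is why the paper considers alternatives to this plug-in normal threshold.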

1998
Vol 120 (3)
pp. 489-495
Author(s):
S. J. Hu
Y. G. Liu

Autocorrelation in 100 percent measurement data results in false alarms when traditional control charts, such as X-bar and R charts, are applied in process monitoring. A popular approach proposed in the literature is based on prediction error analysis (PEA), i.e., using time series models to remove the autocorrelation and then applying control charts to the residuals, or prediction errors. This paper uses a step-function mean shift as an example to investigate the effect of prediction error analysis on the speed of mean-shift detection. The use of PEA results in two changes in the 100 percent measurement data: (1) a change in the variance, and (2) a change in the magnitude of the mean shift. Both changes affect the speed of mean-shift detection. These effects are model-parameter dependent and are obtained quantitatively for AR(1) and ARMA(2,1) models. Simulations and examples from automobile body assembly processes are used to demonstrate these effects. It is shown that, depending on the parameters of the ARMA models, the speed of detection can be increased or decreased significantly.
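
A minimal sketch of the PEA idea for an AR(1) process, assuming a simple lag-1 (Yule-Walker) fit and Shewhart-style 3-sigma limits on the residuals; the paper's models and chart settings may differ.

```python
import numpy as np

def ar1_residual_chart(x, L=3.0):
    """Fit AR(1) and chart the one-step prediction errors (residuals)."""
    x = np.asarray(x, float)
    xc = x - x.mean()
    phi = (xc[1:] @ xc[:-1]) / (xc[:-1] @ xc[:-1])  # lag-1 / Yule-Walker fit
    resid = xc[1:] - phi * xc[:-1]                  # prediction errors
    limit = L * resid.std(ddof=1)
    alarms = np.flatnonzero(np.abs(resid) > limit) + 1
    return phi, resid, alarms

# AR(1) data with a step mean shift of size 4 at t = 150 (illustrative)
rng = np.random.default_rng(0)
x = np.zeros(300)
for t in range(1, 300):
    x[t] = 0.7 * x[t - 1] + rng.normal()
x[150:] += 4.0
phi, resid, alarms = ar1_residual_chart(x)
print(f"phi_hat = {phi:.2f}, alarms at t = {alarms[:5]}")
```

Note how a step shift of size d appears in the residuals at full size only at the first post-shift point and at the attenuated size d(1 - phi) thereafter; this change in shift magnitude is what drives the detection-speed effects the paper studies.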


2019
Vol 13 (1)
pp. 14
Author(s):
Hendro Supratikno
David Premana

Parking is the temporarily stationary state of a vehicle that has been left by its driver. The definition of parking includes every vehicle that stops at certain places, whether indicated by traffic signs or not, and not solely for the purpose of picking up and/or dropping off people and/or goods. Campus 3 of the Lumajang State Community Academy has facilities and infrastructure prepared by the Lumajang Regency government. However, the parking lots provided cannot accommodate vehicles optimally because the ratio of the number of vehicles to the area of the parking lot is inadequate. This is because the area of the parking lot was not analysed for measurement errors. Each measurement datum is assumed to contain errors, whether systematic errors, random errors, or blunders, so the measurements of the parking lot certainly contain errors. The authors therefore conducted this research to determine how systematic errors propagate, and how large the resulting systematic error of the area of the Campus 3 parking lot of the Lumajang Community Academy is. The methods used in this study include preparing materials and tools, sketching the site, decomposing it, measuring distances using a theodolite, deriving equations for the land area, and computing the propagation of systematic errors. The final goal of this study is thus to determine the magnitude of the systematic error in the parking area of Campus 3 of the Lumajang State Community Academy.
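
As a generic illustration of the propagation step (not the authors' actual land-area equations), here is a first-order sketch for a rectangular area: systematic errors propagate as a linear sum, while independent random errors combine in quadrature. All numbers are made up.

```python
import numpy as np

# First-order error propagation through an area formula A = length * width.
# Systematic (bias-like) errors add linearly; independent random errors
# combine in quadrature. Values below are purely illustrative.
length, width = 42.0, 18.5          # measured sides (m)
d_sys = 0.03                        # assumed systematic error per distance (m)
s_rand = 0.01                       # assumed random std dev per distance (m)

A = length * width
dA_sys = abs(width) * d_sys + abs(length) * d_sys      # linear sum
sA_rand = np.hypot(width * s_rand, length * s_rand)    # quadrature
print(f"A = {A:.1f} m^2, systematic error ~ {dA_sys:.2f} m^2, "
      f"random std ~ {sA_rand:.2f} m^2")
```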


1998
Vol 120 (4)
pp. 559-564
Author(s):
K. C. Gupta
P. Chutakanonta

The problem of accurately determining object position from imprecise and excess measurement data arises in kinematics, biomechanics, robotics, CAD/CAM, and flight/vehicle simulator design. Several methods described in the literature are reviewed. Two new methods which take advantage of modern matrix-oriented software (e.g., MATLAB, IMSL, EISPACK) are presented and compared with a "basic" method. It is found that both of the proposed decomposition methods (I: SVD/QR and II: SVD/QS) give better absolute results than a "basic" method available from textbooks. On a relative basis, the second method (SVD/QS decomposition) gives slightly better results than the first (SVD/QR decomposition). Examples are presented for the cases when the points chosen are nearly dependent and when the independent points have small random errors in their coordinates.
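
A closely related, readily reproducible baseline is the SVD-based least-squares pose fit (the orthogonal Procrustes/Kabsch solution) for excess, noisy point correspondences; this sketch is not the paper's SVD/QR or SVD/QS method itself, just the standard starting point.

```python
import numpy as np

def fit_pose(P, Q):
    """Least-squares rotation R and translation t with Q ~ R @ P + t,
    via the SVD-based orthogonal Procrustes (Kabsch) solution."""
    p0, q0 = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (Q - q0) @ (P - p0).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # enforce det(R) = +1
    R = U @ D @ Vt
    return R, q0 - R @ p0

# Noisy, over-determined point correspondences (more points than needed)
rng = np.random.default_rng(2)
P = rng.normal(size=(3, 20))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_true *= np.linalg.det(R_true)                     # make it a proper rotation
Q = R_true @ P + np.array([[1.0], [2.0], [0.5]]) + rng.normal(0, 0.01, (3, 20))
R, t = fit_pose(P, Q)
print("rotation error:", np.linalg.norm(R - R_true))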


2010
Vol 14 (10)
pp. 1989-2001
Author(s):
H. Murakami
X. Chen
M. S. Hahn
Y. Liu
M. L. Rockhold
...

Abstract. This study presents a stochastic, three-dimensional characterization of a heterogeneous hydraulic conductivity field within the Hanford 300 Area, Washington, USA, by assimilating large-scale, constant-rate injection test data with small-scale, three-dimensional electromagnetic borehole flowmeter (EBF) measurement data. We first inverted the injection test data to estimate the transmissivity field, using zeroth-order temporal moments of pressure buildup curves. We applied a newly developed Bayesian geostatistical inversion framework, the method of anchored distributions (MAD), to obtain a joint posterior distribution of geostatistical parameters and local log-transmissivities at multiple locations. The unique aspects of MAD that make it suitable for this purpose are its ability to integrate multi-scale, multi-type data within a Bayesian framework and to compute a nonparametric posterior distribution. After combining the distribution of transmissivities with the depth-discrete relative-conductivity profile from the EBF data, we inferred the three-dimensional geostatistical parameters of the log-conductivity field using Bayesian model-based geostatistics. Such consistent use of the Bayesian approach throughout the procedure enabled us to systematically incorporate data uncertainty into the final posterior distribution. The method was tested in a synthetic study and validated using actual data that were not part of the estimation. Results showed broader and skewed posterior distributions of the geostatistical parameters except for the mean, which suggests the importance of inferring the entire distribution to quantify parameter uncertainty.
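
The zeroth-order temporal moment used in the first inversion step is simply the time integral of the pressure-buildup (drawdown) curve, a data-compression step applied before inverting for transmissivity. A minimal sketch with a synthetic curve follows; the real analysis inverts these moments within the MAD framework, which is not reproduced here.

```python
import numpy as np
from scipy.integrate import trapezoid

# Zeroth-order temporal moment m0 = integral of s(t) dt of a drawdown
# curve. The curve below is synthetic and purely illustrative.
t = np.linspace(0.0, 3600.0, 721)              # time (s)
s = 0.8 * (1.0 - np.exp(-t / 600.0))           # drawdown response (m)
m0 = trapezoid(s, t)
print(f"zeroth temporal moment m0 = {m0:.1f} m*s")
```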


2019
Vol 13 (1)
pp. 149-156
Author(s):
Károly Szipka
Andreas Archenti

A detailed description of multi-axis repeatability performance, and the modelling of non-systematic variations in the positioning performance of machine tools, can support the understanding of the root causes of capability variations in manufacturing processes. Kinematic characterization is implemented through repeated measurements, which include variations related to the performance of the machine tool. This paper addresses the integration of positional repeatability into kinematic modelling through the use of direct measurement results. The findings of this research can be used to develop standardized approaches. Characterizing the statistical population of random errors along the multi-axis travel first requires proper management of the experimental data. In this paper, a methodology and its application are presented for determining repeatability under static and unloaded conditions as an inhomogeneous parameter in the workspace. The proposed approach is demonstrated in a case study in which the component errors of a linear axis are investigated with repeated laser interferometer measurements to quantify the estimated repeatability and express it in the composed repeatability budget. The conclusions of the proposed methodology highlight the sensitivity of kinematic models that rely on measurement data, as the repeatability of the system can be of the same order of magnitude as the systematic errors.
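
A minimal sketch of the underlying data reduction, assuming repeated positional-deviation readings at fixed target positions: the per-position mean estimates the systematic error curve, while the per-position spread gives the position-dependent (inhomogeneous) repeatability. All numbers are synthetic, not the paper's case-study data.

```python
import numpy as np

# Position-dependent repeatability from repeated axis measurements.
rng = np.random.default_rng(3)
targets = np.linspace(0, 500, 11)                        # mm along the axis
runs = 5                                                 # repeated cycles
# Simulated deviations: a systematic error curve plus position-dependent noise
dev = (0.002 * targets
       + rng.normal(0, 0.1 + 0.0004 * targets, (runs, targets.size)))

systematic = dev.mean(axis=0)                            # mean deviation curve
repeatability = dev.std(axis=0, ddof=1)                  # per-position spread
for x, m, s in zip(targets, systematic, repeatability):
    print(f"pos {x:5.0f} mm: mean dev {m:+.3f}, repeatability {s:.3f}")
```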


Author(s):  
James E. Short

This paper introduces a new, active methodology for modeling and leak detection intended to mitigate the effects of data uncertainty in such challenging situations, and presents three case studies. The American Petroleum Institute (API) has coined the phrase Computational Pipeline Monitoring (CPM) to encompass several methods of leak detection. The use of real-time transient hydraulic simulation tools, driven by data gathered by a Supervisory Control and Data Acquisition (SCADA) system, is one form of CPM system. Such real-time simulations impose SCADA-gathered data (typically pressures, flows, and temperatures) onto a characterization of the pipeline (the model) and the fluids in the system. In a tuned CPM system, if the SCADA-gathered data cannot be successfully imposed on the model without transgressing the laws of fluid mechanics, this signifies a pipeline anomaly, which may be a release. In reality, however, many pipeline hydraulic anomalies are due to changing uncertainties in the data presented to the model and, if annunciated to the pipeline operators, would constitute a "false leak alarm." While they typically are not large enough to compromise pipeline operations, uncertainties abound in the SCADA-gathered data. Even if the SCADA-gathered pressure and temperature data contained no uncertainty, the fluid properties might not be sufficiently well characterized for the simulation to accurately calculate how the fluid behaves under pressure and/or temperature changes. Measurement failure further complicates the task of the CPM application, as does slack-line flow. Uncertainty in the CPM-driving data is not constant; it changes continually with variations in the pipeline flow rate, the characterization of the fluids in the line, and the quality of the individual measurement data, to mention only a few factors. CPM systems use a variety of methodologies to vary their sensitivity according to the uncertainty in the data used for their calculations. In general terms, however, the more uncertainty there is in the data, the lower the resulting system sensitivity becomes. Active features in a CPM leak detection system can mitigate the performance degradation due to varying data uncertainty.
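
One simple way to picture an uncertainty-aware sensitivity scheme (a sketch only; real CPM systems use far richer, model-based logic) is a threshold on the metered volume imbalance that scales with a rolling estimate of the data's noise level, so that sensitivity degrades gracefully when the data are noisy.

```python
import numpy as np

def adaptive_alarm(imbalance, window=60, k=4.0):
    """Flag a possible release when the volume imbalance exceeds a
    threshold scaled by a rolling estimate of the data uncertainty."""
    imbalance = np.asarray(imbalance, float)
    alarms = []
    for i in range(window, imbalance.size):
        hist = imbalance[i - window:i]
        thresh = hist.mean() + k * hist.std(ddof=1)  # uncertainty-scaled limit
        if imbalance[i] > thresh:
            alarms.append(i)
    return alarms

rng = np.random.default_rng(4)
imb = rng.normal(0, 1.0, 500)        # synthetic metered imbalance
imb[400:] += 6.0                     # sustained imbalance, e.g. a release
print(adaptive_alarm(imb)[:3])
```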


Author(s):  
Edebaldo Peza-Ortiz
José Bernardo Torres-Valle
Enrique García-Trinidad
Alma Delia González Ramos-Gora

In this article, we propose an alternative method for obtaining experimental measurement data, in the absence of laboratory equipment for performing tests, in a format suitable for mathematical operations, so that the data can be used to validate hypotheses, constitutive models, and/or research theories focused on technological development. The proposed method's main tools are region-growing image segmentation by pixel grouping and the normalization of the coordinates of the extracted pixel positions to the axis scale of the corresponding figure. The segmentation separates the pixel coordinates that form the axes from those that form the curves, and the pixel coordinates of the curves are then normalized to the scale of the axes. The method is tested with images of experimental stress-strain test results recovered from [1]. The extracted data are plotted, and the average of each extracted curve, as well as its standard deviation, is obtained. It is verified that the data obtained can be used to corroborate or support hypotheses in a wide range of investigations.
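
The normalization step amounts to a linear map from pixel coordinates to data coordinates, calibrated from two known tick marks per axis. A minimal sketch with made-up calibration values follows; the segmentation step that finds these pixels is not reproduced here.

```python
import numpy as np

def pixels_to_data(px, py, x_axis_px, x_axis_val, y_axis_px, y_axis_val):
    """Map curve pixel coordinates to data coordinates using two
    calibration points per axis (pixel positions of known tick marks)."""
    sx = (x_axis_val[1] - x_axis_val[0]) / (x_axis_px[1] - x_axis_px[0])
    sy = (y_axis_val[1] - y_axis_val[0]) / (y_axis_px[1] - y_axis_px[0])
    x = x_axis_val[0] + (np.asarray(px) - x_axis_px[0]) * sx
    y = y_axis_val[0] + (np.asarray(py) - y_axis_px[0]) * sy
    return x, y

# Illustrative calibration: pixel columns 100 and 600 mark strains 0.0 and
# 0.05; pixel rows 500 and 50 mark stresses 0 and 400 MPa.
x, y = pixels_to_data([100, 350, 600], [500, 300, 120],
                      (100, 600), (0.0, 0.05),
                      (500, 50), (0.0, 400.0))
print(x, y)
```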


Author(s):  
James E. Warner
Geoffrey F. Bomarito
Jacob D. Hochhalter
William P. Leser
Patrick E. Leser
...  

This work presents a computationally efficient, probabilistic approach to model-based damage diagnosis. Given measurement data, probability distributions of unknown damage parameters are estimated using Bayesian inference and Markov chain Monte Carlo (MCMC) sampling. Substantial computational speedup is obtained by replacing a three-dimensional finite element (FE) model with an efficient surrogate model. While the formulation is general for arbitrary component geometry, damage type, and sensor data, it is applied to the problem of strain-based crack characterization and experimentally validated using full-field strain data from digital image correlation (DIC). Access to full-field DIC data facilitates the study of the effectiveness of strain-based diagnosis as the distance between the location of damage and the strain measurements is varied. The ability of the framework to accurately estimate the crack parameters and effectively capture the uncertainty due to measurement proximity and experimental error is demonstrated. Furthermore, surrogate modeling is shown to enable diagnoses on the order of seconds and minutes rather than the several days required with the FE model.
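
A minimal sketch of the inference loop, with a made-up closed-form "surrogate" standing in for both the trained surrogate and the FE model, a flat prior, and a Gaussian likelihood; the parameter names, sensor layout, and values are illustrative only.

```python
import numpy as np

def metropolis(log_post, theta0, steps=5000, prop_std=0.05, seed=0):
    """Random-walk Metropolis sampler for the damage-parameter posterior."""
    rng = np.random.default_rng(seed)
    theta, lp = np.asarray(theta0, float), log_post(theta0)
    chain = []
    for _ in range(steps):
        cand = theta + rng.normal(0, prop_std, theta.size)
        lp_c = log_post(cand)
        if np.log(rng.uniform()) < lp_c - lp:       # accept/reject
            theta, lp = cand, lp_c
        chain.append(theta.copy())
    return np.array(chain)

# Cheap stand-in surrogate: strain at sensor locations as a smooth
# function of hypothetical crack parameters theta = (length, angle).
sensors = np.linspace(0.1, 1.0, 8)
def surrogate(theta):
    length, angle = theta
    return length * np.exp(-sensors / (0.3 + 0.1 * np.cos(angle)))

theta_true = np.array([0.8, 0.4])
data = surrogate(theta_true) + np.random.default_rng(1).normal(0, 0.01, 8)
log_post = lambda th: -0.5 * np.sum((surrogate(th) - data) ** 2) / 0.01 ** 2
chain = metropolis(log_post, [0.5, 0.0])
print("posterior mean:", chain[2500:].mean(axis=0))
```

Because each posterior evaluation calls only the cheap surrogate, thousands of MCMC steps run in seconds; substituting a full FE solve at each step is what would stretch the same loop to days.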


2012
Vol 19 (6)
pp. 1257-1266
Author(s):
Andreas Josefsson
Kjell Ahlin
Göran Broman

Frequency response functions are often used to characterize a system's dynamic response. For a wide range of engineering applications, it is desirable to determine frequency response functions for a system under stochastic excitation. In practice, the measurement data are contaminated by noise, and some form of averaging is needed in order to obtain a consistent estimator. With Welch's method, the discrete Fourier transform is used and the data are segmented into smaller blocks so that averaging can be performed when estimating the spectrum. However, this segmentation introduces leakage effects. As a result, the estimated frequency response function suffers from both systematic (bias) and random errors due to leakage. In this paper, the bias error in the $H_1$ and $H_2$ estimates is studied, and a new method is proposed to derive an approximate expression for the relative bias error at the resonance frequency with different window functions. The method is based on using a sum of real exponentials to describe the window's deterministic autocorrelation function. Simple expressions are derived for a rectangular window and a Hanning window. The theoretical expressions are verified with numerical simulations, and very good agreement is found between the results from the proposed bias expressions and the empirical results.
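
A minimal sketch of the $H_1$ estimate with Welch averaging, $H_1(f) = S_{xy}(f)/S_{xx}(f)$, for a lightly damped single-degree-of-freedom system under stochastic excitation; all system parameters are synthetic. Shrinking `nperseg` increases leakage and visibly biases the resonance peak downward, which is the effect the paper quantifies.

```python
import numpy as np
from scipy.signal import csd, welch

fs, N = 1024, 2 ** 16
rng = np.random.default_rng(5)
x = rng.normal(0, 1, N)                         # stochastic excitation
# Response of a lightly damped SDOF system via its impulse response
f0, zeta = 100.0, 0.01                          # resonance (Hz), damping ratio
wd = 2 * np.pi * f0 * np.sqrt(1 - zeta ** 2)
t = np.arange(0, 1.0, 1 / fs)
h = np.exp(-zeta * 2 * np.pi * f0 * t) * np.sin(wd * t) / wd
y = np.convolve(x, h)[:N] / fs

nperseg = 4096                                  # block size controls leakage bias
f, Sxy = csd(x, y, fs=fs, window="hann", nperseg=nperseg)
_, Sxx = welch(x, fs=fs, window="hann", nperseg=nperseg)
H1 = Sxy / Sxx                                  # H1 estimate
print(f"peak |H1| = {np.abs(H1).max():.4g} near {f[np.abs(H1).argmax()]:.1f} Hz")
```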


2021
Vol 30 (1)
pp. 159-167
Author(s):  
Chunsheng Jiang

Abstract. A new method of orbit determination (OD) is proposed: distribution regression. The paper focuses on the process of using sparse observation data to determine the orbit of a spacecraft without any prior information. The standard regression process learns a map from real numbers to real numbers, but the approach put forward in this paper maps probability distributions to real-valued responses. Under the new algorithm, the orbital elements can be predicted by embedding the probability distributions into a reproducing kernel Hilbert space. While making full use of the advantages of big data, the method also avoids the convergence failures that precise OD suffers when the initial values are chosen poorly. A simulation experiment demonstrates the effectiveness, robustness, and rapidity of the algorithm in the presence of noise in the measurement data.
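
A toy sketch of distribution regression via empirical kernel mean embeddings and kernel ridge regression; the task here (predicting a distribution's mean from noisy samples of it) and all parameters are illustrative, and far simpler than the paper's orbit-determination setting.

```python
import numpy as np

def rbf(A, B, gamma=10.0):
    """Pairwise RBF kernel between two sample sets (rows are samples)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def dist_regression(bags_train, y_train, bags_test, lam=1e-3, gamma=10.0):
    """Kernel ridge regression on mean embeddings: each 'bag' of samples
    represents a distribution; K[i, j] is the mean pairwise kernel between
    bags i and j, i.e. the inner product of their empirical embeddings."""
    def K(bags_a, bags_b):
        return np.array([[rbf(a, b, gamma).mean() for b in bags_b]
                         for a in bags_a])
    Ktr = K(bags_train, bags_train)
    alpha = np.linalg.solve(Ktr + lam * np.eye(len(bags_train)), y_train)
    return K(bags_test, bags_train) @ alpha

rng = np.random.default_rng(6)
mus = rng.uniform(-1, 1, 40)
bags = [rng.normal(m, 0.2, (30, 1)) for m in mus]   # 30 samples per bag
y_pred = dist_regression(bags[:30], mus[:30], bags[30:])
print("test RMSE:", np.sqrt(np.mean((y_pred - mus[30:]) ** 2)))
```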

