A Perspective on the Construction of Nonsymmetric Uncertainty Intervals

1997 ◽  
Vol 119 (4) ◽  
pp. 804-807
Author(s):  
Paul K. Maciejewski

This paper presents a new method for constructing nonsymmetric uncertainty intervals, one that is based on estimates of “expected values” and “variances” associated with deterministic errors that one constructs from estimates of “upper bias limits” and “lower bias limits” for measured variables. On the assumption that upper bias limits and lower bias limits specified by the user correspond to 95 percent confidence intervals for normally distributed deterministic errors, the uncertainty intervals determined by the new method reduce to approximate 95 percent confidence intervals for the true value of the measured variables.
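The mapping from bias limits to an interval can be sketched as follows. This is only one plausible reading of the abstract, in which the lower and upper bias limits bound a normally distributed deterministic error at 95 percent confidence; the paper's exact formulas may differ, and all names below are illustrative:

```python
Z95 = 1.96  # two-sided 95 percent normal quantile

def nonsymmetric_interval(x, lower_bias, upper_bias):
    """Illustrative 95% interval for the true value of a measurement x.

    lower_bias and upper_bias are read here as 95% limits of a normally
    distributed deterministic error: the error lies in
    [-lower_bias, +upper_bias] with 95% probability.
    """
    mu = (upper_bias - lower_bias) / 2.0           # expected value of the error
    sigma = (upper_bias + lower_bias) / (2 * Z95)  # its standard deviation
    # Correct for the expected bias, then widen by the error spread.
    return (x - mu - Z95 * sigma, x - mu + Z95 * sigma)

lo, hi = nonsymmetric_interval(10.0, lower_bias=1.0, upper_bias=2.0)
```

Under these assumptions the interval reduces to (x - upper_bias, x + lower_bias), i.e. it is nonsymmetric whenever the two bias limits differ.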

2010 ◽  
Vol 113-116 ◽  
pp. 137-141
Author(s):  
Dan Di Ma ◽  
Zong Xin Liu

Considering the defects of conventional river health evaluation methods, this paper presents a new evaluation method based on the expected values of triangular fuzzy numbers. First, a grading indicator system for the evaluation is built on triangular fuzzy numbers. The ideal points of each grade are then determined with the triangular fuzzy number expected-value formula, and a standard decision matrix is built from the ideal points and the index values of the evaluation object. Next, the weight of each indicator is determined dynamically by an entropy coupling method based on AHP-PPC. Finally, the chi-square distance between the evaluation object and the ideal points is calculated, and the grade of the evaluation object is determined by cluster analysis. The algorithm and its computations are easily implemented on a computer. The method avoids excessive interference from human factors while still drawing on specialist experience, so the result approaches the true value. Applied to the health evaluation of the Pengxi River in Yunyang County, Chongqing, the new method proved reasonable and convenient.
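Two of the building blocks above can be sketched briefly. The defuzzification (a + 2m + b)/4 is one common convention for the expected value of a triangular fuzzy number (a, m, b), and the chi-square distance shown is one common form; the paper's exact formulas may differ, and the numbers are illustrative only:

```python
def tfn_expected(tfn):
    """Expected value of a triangular fuzzy number (a, m, b),
    using the common defuzzification (a + 2m + b) / 4."""
    a, m, b = tfn
    return (a + 2 * m + b) / 4.0

def chi2_distance(x, y):
    """One common form of the chi-square distance between vectors."""
    return sum((xi - yi) ** 2 / (xi + yi) for xi, yi in zip(x, y))

# Ideal point of one grade, defined by triangular fuzzy scores on two indicators.
grade_ideal = [tfn_expected(t) for t in [(1, 2, 3), (2, 4, 6)]]
# Distance of an evaluation object's index values from that ideal point.
d = chi2_distance([2.5, 3.0], grade_ideal)
```

The evaluation object would be assigned the grade whose ideal point lies at the smallest such distance.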


1997 ◽  
Vol 119 (2) ◽  
pp. 236-242 ◽  
Author(s):  
K. Peleg

The classical calibration problem is primarily concerned with comparing an approximate measurement method with a very precise one. Frequently, both measurement methods are very noisy, so neither method can be regarded as giving the true value of the quantity being measured. Sometimes it is desired to replace a destructive or slow measurement method with a noninvasive, faster, or less expensive one. The simplest solution is to cross-calibrate one measurement method in terms of the other. Common practice is to use regression models as cross-calibration formulas. However, such models do not attempt to discriminate between the clutter and the true functional relationship between the cross-calibrated measurement methods. A new approach is proposed, based on minimizing the sum of squares of the differences between the absolute values of the Fast Fourier Transform (FFT) series derived from the readings of the cross-calibrated measurement methods. The approach is illustrated by cross-calibration examples of simulated linear and nonlinear measurement systems with various levels of additive noise, in which the new method is compared to classical regression techniques. It is shown that the new method better recovers the true functional relationship between two measurement systems when it is occluded by noise.
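The core idea, matching FFT magnitude spectra rather than pointwise readings, can be sketched with NumPy. The scale-only calibration and the grid search below are deliberate simplifications of the paper's procedure:

```python
import numpy as np

t = np.linspace(0, 1, 256, endpoint=False)
y_ref = np.sin(2 * np.pi * 5 * t)  # precise reference method
y_alt = 0.5 * y_ref                # cheap method reading at half scale

def fft_mismatch(a):
    # Sum of squared differences between the FFT magnitude spectra
    # of the reference readings and the rescaled alternative readings.
    return np.sum((np.abs(np.fft.rfft(y_ref))
                   - np.abs(np.fft.rfft(a * y_alt))) ** 2)

# Grid search for the cross-calibration scale factor.
grid = np.linspace(0.5, 3.0, 251)
best_a = grid[np.argmin([fft_mismatch(a) for a in grid])]
```

Because the objective compares magnitude spectra, it is insensitive to phase shifts between the two instruments, which is one way noise-induced clutter is suppressed.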


2018 ◽  
Vol 2018 ◽  
pp. 1-10
Author(s):  
Siyu Ji ◽  
Chenglin Wen

A neural network is a data-driven algorithm: building the network model requires a large amount of training data, so a significant amount of time is spent training the model's parameters. However, the system's modes change from time to time, and predicting with the original model parameters then causes the model output to deviate greatly from the true value. Traditional methods such as gradient descent and least squares are centralized, making it difficult to adaptively update model parameters as the system changes. First, in order to adaptively update the network parameters, this paper introduces an evaluation function and gives a new method for evaluating the function's parameters. The new method updates some parameters of the model in real time, without changing the others, to maintain the model's accuracy. Then, based on the evaluation function, the Mean Impact Value (MIV) algorithm is used to compute feature weights, and the weighted data are fed into the established fault-diagnosis model for fault diagnosis. Finally, the validity of the algorithm is verified on the UCI Combined Cycle Power Plant (UCI-CCPP) standard data set.
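A common reading of the MIV step is: perturb each input feature by ±10 percent, and take the mean change in the model's output as that feature's weight. The linear stand-in model below is purely illustrative; the paper applies the idea to a trained network:

```python
import numpy as np

def mean_impact_values(predict, X, delta=0.10):
    """Weight each feature by the mean change in model output when
    that feature is perturbed by +/- delta (the MIV idea)."""
    mivs = []
    for j in range(X.shape[1]):
        up, down = X.copy(), X.copy()
        up[:, j] *= 1.0 + delta
        down[:, j] *= 1.0 - delta
        mivs.append(np.mean(predict(up) - predict(down)))
    return np.abs(np.array(mivs))

# Stand-in for a trained network; a real model would be fit on CCPP data.
predict = lambda X: X @ np.array([3.0, 0.5])
X = np.random.default_rng(0).uniform(1.0, 2.0, size=(200, 2))
weights = mean_impact_values(predict, X)
```

Features with larger impact on the output receive larger weights before the weighted data enter the fault-diagnosis model.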


Author(s):  
Kerry E. Back

When differences in beliefs are due to differences in information, investors learn from prices. If there are no risk‐sharing motives for trade, then differences in information do not lead to trade (the no‐trade theorem). Equilibrium prices can fully reveal information, but then there is no incentive to gather information (the Grossman‐Stiglitz paradox). Noisy trades or asset supplies facilitate partially revealing equilibria. In the Grossman‐Stiglitz model and the Hellwig model, prices equal discounted expected values minus a risk premium term that depends on the average precision of investors’ information weighted by their risk tolerances. The chapter explains the mechanics of updating beliefs when fundamentals and signals are normally distributed.
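For normally distributed fundamentals and signals, the belief update the chapter describes is the standard precision-weighted average; a minimal sketch:

```python
def update_normal(prior_mean, prior_var, signal, noise_var):
    """Posterior of a normal fundamental after observing
    signal = fundamental + independent normal noise."""
    post_prec = 1.0 / prior_var + 1.0 / noise_var  # precisions add
    post_mean = (prior_mean / prior_var + signal / noise_var) / post_prec
    return post_mean, 1.0 / post_prec

# Prior N(0, 1), signal 2.0 observed with noise variance 1.0.
mean, var = update_normal(0.0, 1.0, signal=2.0, noise_var=1.0)
```

The posterior mean is a precision-weighted average of the prior mean and the signal, which is why, in the Grossman-Stiglitz and Hellwig models, average signal precision (weighted by risk tolerance) shows up in the equilibrium price.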


2020 ◽  
Vol 11 ◽  
Author(s):  
Ivan Jacob Agaloos Pesigan ◽  
Shu Fai Cheung

A SEM-based approach using likelihood-based confidence intervals (LBCIs) has been proposed to form confidence intervals for unstandardized and standardized indirect effects in mediation models. However, when used with maximum likelihood estimation, this approach requires that the variables be multivariate normally distributed, which can affect the LBCIs of unstandardized and standardized effects differently. In the present study, the robustness of this approach when the predictor is not normally distributed but the error terms are conditionally normal (which does not violate the distributional assumption of ordinary least squares, OLS, estimation) is compared to four other approaches: nonparametric bootstrapping, two variants of LBCI (LBCI assuming the predictor is fixed, LBCI-Fixed-X, and LBCI based on ADF estimation, LBCI-ADF), and Monte Carlo. A simulation study was conducted using a simple mediation model and a serial mediation model, manipulating the distribution of the predictor. The Monte Carlo method performed worst among the methods. LBCI and LBCI-Fixed-X had suboptimal performance when the distributions had high kurtosis and the population indirect effects were medium to large; in some conditions the problem was severe even when the sample size was large. LBCI-ADF and nonparametric bootstrapping had coverage probabilities close to the nominal value in nearly all conditions, although coverage was still suboptimal for the serial mediation model when the sample size was small relative to the model. Implications of these findings for this special case of nonnormal data are discussed.
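The Monte Carlo method examined here can be sketched in a few lines: sample the two path coefficients from normal distributions centered at their estimates and take percentiles of the product. The point estimates and standard errors below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
a_hat, se_a = 0.40, 0.10   # X -> M path estimate and standard error
b_hat, se_b = 0.50, 0.12   # M -> Y path estimate and standard error

# Monte Carlo draws of the indirect effect a*b.
draws = rng.normal(a_hat, se_a, 10_000) * rng.normal(b_hat, se_b, 10_000)
ci = np.percentile(draws, [2.5, 97.5])  # 95% Monte Carlo interval
```

The product distribution is skewed even when both coefficients are normal, which is why such intervals are typically nonsymmetric around the point estimate a_hat * b_hat.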


2016 ◽  
Vol 71 (1) ◽  
pp. 70-77 ◽  
Author(s):  
Xingcan Li ◽  
Chengchao Wang ◽  
Junming Zhao ◽  
Linhua Liu

Highly transparent substrates are of interest for a variety of applications, but it is difficult to measure their optical constants precisely, especially the absorption index in the transparent spectral region. In this paper, a combined technique (DOPTM-EM) using both the double optical pathlength transmission method (DOPTM) and the ellipsometry method (EM) is presented to obtain the optical constants of highly transparent substrates, overcoming the deficiencies of both methods. The EM cannot give accurate optical constants when the absorption index is very weak. The DOPTM is suitable for retrieving a weak absorption index; however, two sets of solutions exist for the retrieved refractive index and absorption index, and only one corresponds to the true value, which must be identified. In the DOPTM-EM, the optical constants are first measured by the EM and set as the initial value in the gradient-based inverse method used in the DOPTM, which ensures that only the true optical constants are retrieved. The new method simultaneously obtains the refractive index and the absorption index of highly transparent substrates without relying on the Kramers–Kronig relation. The optical constants of three highly transparent substrates (polycrystalline BaF2, CaF2, and MgF2) were experimentally determined over the wavelength range from the ultraviolet to the infrared (0.2–14 µm). The presented method will facilitate the measurement of optical constants for highly transparent materials.
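The double-pathlength idea can be sketched with simple Beer–Lambert absorption: ratioing the transmittances of two thicknesses of the same material cancels the (identical) surface losses, leaving only the absorption term. This ignores interreflections and is an illustration only, not the paper's full inverse method:

```python
import math

def absorption_index(T1, T2, d1, d2, wavelength):
    """Retrieve absorption index k from transmittances T1, T2 of two
    samples of thickness d1 > d2 (same material, same surfaces).
    Internal transmittance: tau = exp(-4*pi*k*d / wavelength)."""
    return -wavelength * math.log(T1 / T2) / (4 * math.pi * (d1 - d2))

# Round trip with a known weak absorption index (SI units, metres).
k_true, lam = 1e-6, 1e-6
d1, d2 = 2e-3, 1e-3
alpha = 4 * math.pi * k_true / lam            # absorption coefficient
surface = 0.92                                 # common surface-loss factor
T1 = surface * math.exp(-alpha * d1)
T2 = surface * math.exp(-alpha * d2)
k_rec = absorption_index(T1, T2, d1, d2, lam)
```

Because the surface factor divides out in T1/T2, the retrieval depends only on the pathlength difference, which is what makes the method sensitive to very weak absorption.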


2009 ◽  
Vol 33 (2) ◽  
pp. 87-90 ◽  
Author(s):  
Douglas Curran-Everett

Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This third installment of Explorations in Statistics investigates confidence intervals. A confidence interval is a range that we expect, with some level of confidence, to include the true value of a population parameter such as the mean. A confidence interval provides the same statistical information as the P value from a hypothesis test, but it circumvents the drawbacks of that hypothesis test. Even more important, a confidence interval focuses our attention on the scientific importance of some experimental result.
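As a concrete instance, a 95 percent confidence interval for a mean can be computed from the sample mean, the standard error, and a t quantile (2.262 for 9 degrees of freedom, from standard tables); the data values are made up for illustration:

```python
import math
import statistics

data = [4.1, 5.0, 5.5, 6.2, 4.8, 5.9, 5.1, 6.0, 4.4, 5.6]
n = len(data)
mean = statistics.mean(data)
se = statistics.stdev(data) / math.sqrt(n)  # standard error of the mean
t_crit = 2.262                              # two-sided 95% t quantile, df = 9
ci = (mean - t_crit * se, mean + t_crit * se)
```

Any population mean inside this range is compatible with the data at the 5 percent level, which is the sense in which the interval carries the same information as a hypothesis test while keeping attention on effect size.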


Author(s):  
Giulia Dell’Era ◽  
Mehmet Mersinligil ◽  
Jean-François Brouckaert

With the advancements in miniaturization and temperature capabilities of piezo-resistive pressure sensors, pneumatic probes, the long-established standard for flow-path pressure measurements in gas turbine environments, are being replaced with unsteady pressure probes. On the other hand, any measured quantity is by definition inherently different from the "true" value, requiring an estimate of the associated errors to determine the validity of the results and to establish the respective confidence intervals. In the context of pressure measurements, calibration uncertainty values, which differ from measurement uncertainties, are typically the only ones provided. Even then, the lack of a standard methodology is evident, as uncertainties are often reported without appropriate confidence intervals. Moreover, no time-resolved measurement uncertainty analysis has come to the attention of the authors. The objective of this paper is to present a standard method for estimating the uncertainties of measurements performed with single-sensor unsteady pressure probes, using measurements obtained in a one-and-a-half-stage low-pressure high-speed axial compressor test rig as an example. The methodology is also valid for similar applications involving steady or unsteady sensors and instruments. The static calibration uncertainty and the steady and unsteady measurement uncertainties based on phase-locked and ensemble averages are presented by the authors in [1]. Depending on the number of points used for the averaging, different uncertainty values have been observed, underlining the importance of a greater number of samples. For unsteady flows, higher uncertainties have been observed in regions of higher unsteadiness, such as tip leakage vortices, hub corner vortices, and blade wakes.
Unfortunately, the state of the art in single-sensor miniature unsteady pressure probes is comparable in size to multi-hole pneumatic probes, preventing the use of multi-hole unsteady probes in turbomachinery environments. However, the angular calibration properties of a single-sensor probe obtained via an aerodynamic calibration may be further exploited as if a three-hole directional probe were employed, yielding corrected total pressure, unsteady yaw angle, static pressure, and Mach number distributions based on the phase-locked averages, at the expense of losing the time correlation between the virtual ports. This contribution presents the aerodynamic calibration and derivation process together with an assessment of the uncertainties associated with these derived quantities. In the virtual three-hole mode, as with a single-sensor probe, higher uncertainty values are observed in regions of higher unsteadiness.

