Using Bayesian Analysis to Quantify Uncertainty in Radiometer Measurements

Author(s):  
Jennifer Spinti ◽  
Sean T. Smith ◽  
Philip J. Smith ◽  
N. Stanley Harding ◽  
Kaitlyn Scheib ◽  
...  

Abstract. We apply Bayesian inference to the issue of instrument calibration and experimental data uncertainty analysis for the specific application of measuring radiative intensity with a narrow-angle radiometer. We develop a physics-based instrument model that describes intensity, the indirectly measured quantity of interest, as a function of scenario and uncertain model parameters. We identify a set of five uncertain parameters and find their probability distributions (the posterior or inverse problem) given the calibration data by applying Bayes' Theorem. We then employ the instrument model in a new scenario, a 1.5 MW coal-fired furnace. We obtain values for the five uncertain parameters in the model by sampling from the posterior distribution and then compute the intensity with quantifiable uncertainty at the measurement point of interest (the posterior predictive or forward problem).
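The calibrate-then-predict workflow the abstract describes can be sketched in a few lines. The sketch below substitutes a hypothetical two-parameter linear instrument model and a toy Metropolis sampler for the paper's five-parameter physics-based model; all data and values are synthetic.

```python
import random, math

random.seed(0)

# Hypothetical linear instrument model: intensity = gain * signal + offset.
# (A stand-in for the paper's five-parameter physics-based model.)
def model(signal, gain, offset):
    return gain * signal + offset

# Synthetic calibration data with known truth (gain=2.0, offset=0.5).
signals = [0.5, 1.0, 1.5, 2.0, 2.5]
data = [model(s, 2.0, 0.5) + random.gauss(0, 0.05) for s in signals]

def log_post(gain, offset, sigma=0.05):
    # Flat priors; Gaussian measurement-noise likelihood.
    return -sum((d - model(s, gain, offset)) ** 2
                for s, d in zip(signals, data)) / (2 * sigma ** 2)

# Inverse problem: Metropolis random walk over the uncertain parameters.
gain, offset = 1.0, 0.0
lp = log_post(gain, offset)
samples = []
for _ in range(20000):
    g, o = gain + random.gauss(0, 0.02), offset + random.gauss(0, 0.02)
    lp_new = log_post(g, o)
    if math.log(random.random()) < lp_new - lp:
        gain, offset, lp = g, o, lp_new
    samples.append((gain, offset))

# Forward problem: push posterior samples through the model at a new signal.
pred = [model(3.0, g, o) for g, o in samples[5000:]]
mean = sum(pred) / len(pred)
print(round(mean, 2))  # should land near the true 2.0 * 3.0 + 0.5 = 6.5
```

The spread of `pred` is the quantifiable uncertainty on the indirectly measured intensity.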

2011 ◽  
Vol 31 (4) ◽  
pp. 662-674 ◽  
Author(s):  
Christopher H. Jackson ◽  
Laura Bojke ◽  
Simon G. Thompson ◽  
Karl Claxton ◽  
Linda D. Sharples

Decision analytic models used for health technology assessment are subject to uncertainties. These uncertainties can be quantified probabilistically, by placing distributions on model parameters and simulating from these to generate estimates of cost-effectiveness. However, many uncertain model choices, often termed structural assumptions, are usually only explored informally by presenting estimates of cost-effectiveness under alternative scenarios. The authors show how 2 recent research proposals represent parts of a framework to formally account for all common structural uncertainties. First, the model is expanded to include parameters that encompass all possible structural choices. Uncertainty can then arise because these parameters are estimated imprecisely from data, for example, a treatment effect of doubtful significance. Uncertainty can also arise if there are no relevant data. If there are relevant data, uncertainty can be addressed by averaging expected costs and effects generated from probabilistic analysis of the models with and without the parameter. The weights used for averaging are related to the predictive ability of each model, assessed against the data. If there are no data, additional parameters can often be informed by eliciting expert beliefs as probability distributions. These ideas are illustrated in decision models for antiplatelet therapies for vascular disease and new biologic drugs for the treatment of active psoriatic arthritis.
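The model-averaging step can be illustrated with a toy calculation. The log-likelihoods, costs, and effects below are invented stand-ins; the weighting scheme (normalized likelihood weights, akin to Bayesian model averaging) is one plausible reading of weights "related to the predictive ability of each model".

```python
import math

# Hypothetical log-likelihoods of two structural models against validation
# data (invented numbers standing in for an assessment against real data).
loglik = {"with_param": -120.3, "without_param": -123.1}

# Convert to normalized model weights (akin to Bayesian model averaging).
m = max(loglik.values())
w = {k: math.exp(v - m) for k, v in loglik.items()}
total = sum(w.values())
weights = {k: v / total for k, v in w.items()}

# Probabilistic-analysis outputs per model: (expected cost, expected QALYs).
outputs = {"with_param": (10500.0, 6.2), "without_param": (9800.0, 6.0)}

# Structural uncertainty enters by averaging over models, not picking one.
avg_cost = sum(weights[k] * outputs[k][0] for k in outputs)
avg_effect = sum(weights[k] * outputs[k][1] for k in outputs)
print(round(avg_cost, 1), round(avg_effect, 3))
```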


2012 ◽  
Vol 9 (5) ◽  
pp. 6051-6094 ◽  
Author(s):  
J. Kros ◽  
G. B. M. Heuvelink ◽  
G. J. Reinds ◽  
J. P. Lesschen ◽  
V. Ioannidi ◽  
...  

Abstract. To assess the responses of nitrogen and greenhouse gas emissions to pan-European changes in land cover, land management and climate, an integrated dynamic model, INTEGRATOR, has been developed. This model includes both simple process-based descriptions and empirical relationships, and uses detailed GIS-based environmental and farming data in combination with various downscaling methods. This paper analyses the propagation of uncertainties in model inputs and model parameters to outputs of INTEGRATOR, using a Monte Carlo analysis. Uncertain model inputs and parameters were represented by probability distributions, while spatial correlation in these uncertainties was taken into account by assigning correlation coefficients at various spatial scales. The uncertainty propagation was analysed for the emissions of NH3, N2O and NOx and N leaching to groundwater and N surface runoff to surface water for the entire EU27 and for individual countries. Results show large uncertainties for N leaching and N runoff (relative errors of ~19 % for Europe as a whole), and smaller uncertainties for emission of N2O, NH3 and NOx (relative errors of ~12 %). Uncertainties for Europe as a whole were much smaller than uncertainties at the country level, because errors partly cancelled out due to spatial aggregation.
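The effect of spatially correlated input uncertainty on aggregated outputs can be reproduced with a minimal Monte Carlo sketch. The one-line emission model and the two-component error structure (one shared, fully correlated term and one independent local term) are illustrative stand-ins for INTEGRATOR's scale-dependent correlation coefficients.

```python
import random, statistics

random.seed(1)

# Toy emission model per spatial unit: emission = rate * activity, where the
# uncertain rate has a shared (fully correlated) and a local (independent)
# error component, both with standard deviation 0.1 (assumed values).
n_units, n_mc = 50, 2000
activity = [1.0] * n_units

totals_corr, totals_indep = [], []
for _ in range(n_mc):
    shared = random.gauss(0, 0.1)          # one draw, common to all units
    corr = sum((1.0 + shared + random.gauss(0, 0.1)) * a for a in activity)
    indep = sum((1.0 + random.gauss(0, 0.1) + random.gauss(0, 0.1)) * a
                for a in activity)
    totals_corr.append(corr)
    totals_indep.append(indep)

rel_err = lambda xs: statistics.stdev(xs) / statistics.mean(xs)
# Independent errors partly cancel under aggregation (~0.14 / sqrt(50));
# the shared component does not cancel, so the correlated case stays ~0.10.
print(round(rel_err(totals_indep), 3), round(rel_err(totals_corr), 3))
```

This is the mechanism behind the abstract's observation that EU-wide relative errors are much smaller than country-level ones.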


1996 ◽  
Vol 33 (2) ◽  
pp. 79-90 ◽  
Author(s):  
Jian Hua Lei ◽  
Wolfgang Schilling

Physically-based urban rainfall-runoff models are mostly applied without parameter calibration. Given some preliminary estimates of the uncertainty of the model parameters, the associated model output uncertainty can be calculated. Monte-Carlo simulation followed by multi-linear regression is used for this analysis. The calculated model output uncertainty can be compared to the uncertainty estimated by comparing model output and observed data. Based on this comparison, systematic or spurious errors can be detected in the observation data, the validity of the model structure can be confirmed, and the most sensitive parameters can be identified. If the calculated model output uncertainty is unacceptably large, the most sensitive parameters should be calibrated to reduce the uncertainty. Observation data for which systematic and/or spurious errors have been detected should be discarded from the calibration data. This procedure is referred to as preliminary uncertainty analysis; it is illustrated with an example. The HYSTEM program is applied to predict the runoff volume from an experimental catchment with a total area of 68 ha and an impervious area of 20 ha. Based on the preliminary uncertainty analysis, for 7 of 10 events the measured runoff volume is within the calculated uncertainty range, i.e. less than or equal to the calculated model predictive uncertainty. The remaining 3 events most likely include systematic or spurious errors in the observation data (either in the rainfall or the runoff measurements). These events are then discarded from further analysis. After calibrating the model, the predictive uncertainty of the model is estimated.
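A minimal sketch of the Monte-Carlo-plus-regression sensitivity step, using an invented three-parameter runoff model in place of HYSTEM; with independent inputs, the regression coefficients reduce to covariance ratios.

```python
import random, statistics

random.seed(2)

# Toy runoff model (a stand-in for HYSTEM): volume depends on imperviousness,
# depression storage, and a routing coefficient (all invented).
def runoff(imperv, storage, routing):
    return imperv * 100.0 - storage * 5.0 + routing * 1.0

# Preliminary parameter uncertainty as independent distributions.
n = 2000
X = [(random.gauss(0.3, 0.05), random.gauss(2.0, 0.5), random.gauss(1.0, 0.2))
     for _ in range(n)]
y = [runoff(*x) for x in X]

# Multi-linear regression of output on inputs; because the sampled inputs
# are independent, each coefficient is just cov(x_j, y) / var(x_j).
def coef(j):
    xs = [x[j] for x in X]
    mx, my = statistics.mean(xs), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, y)) / (n - 1)
    return cov / statistics.variance(xs)

# Contribution of parameter j to output variance: (coefficient * input sd)^2.
contrib = [(coef(j) * statistics.stdev([x[j] for x in X])) ** 2
           for j in range(3)]
print(max(range(3), key=lambda j: contrib[j]))  # index of most sensitive parameter
```

Ranking `contrib` identifies the parameters worth calibrating when the calculated output uncertainty is unacceptably large.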


2020 ◽  
Vol 77 (8) ◽  
pp. 2765-2791 ◽  
Author(s):  
Matthew R. Kumjian ◽  
Kelly Lombardo

Abstract A detailed microphysical model of hail growth is developed and applied to idealized numerical simulations of deep convective storms. Hailstone embryos of various sizes and densities may be initialized in and around the simulated convective storm updraft, and then are tracked as they are advected and grow through various microphysical processes. Application to an idealized squall line and supercell storm results in a plausibly realistic distribution of maximum hailstone sizes for each. Simulated hail growth trajectories through idealized supercell storms exhibit many consistencies with previous hail trajectory work that used observed storms. Systematic tests of uncertain model parameters and parameterizations are performed, with results highlighting the sensitivity of hail size distributions to these changes. A set of idealized simulations is performed for supercells in environments with varying vertical wind shear to extend and clarify our prior work. The trajectory calculations reveal that, with increased zonal deep-layer shear, broader updrafts lead to increased residence time and thus larger maximum hail sizes. For cases with increased meridional low-level shear, updraft width is also increased, but hailstone sizes are smaller. This is a result of decreased residence time in the updraft, owing to faster northward flow within the updraft that advects hailstones through the growth region more rapidly. The results suggest that environments leading to weakened horizontal flow within supercell updrafts may lead to larger maximum hailstone sizes.
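The residence-time argument can be made concrete with a toy calculation; the growth rate, region width, and flow speeds below are illustrative, not values from the simulations.

```python
# Toy residence-time argument from the trajectory results: a hailstone crossing
# a growth region of width W at horizontal speed v grows for t = W / v.
growth_rate = 5e-5     # m of diameter per second in the growth region (assumed)
W = 8000.0             # width of the updraft growth region, m (assumed)

def max_diameter(flow_speed, embryo=5e-3):
    residence = W / flow_speed           # seconds spent in the growth region
    return embryo + growth_rate * residence

slow, fast = max_diameter(10.0), max_diameter(20.0)
print(round(slow, 3), round(fast, 3))  # faster in-updraft flow -> smaller hail
```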


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5549
Author(s):  
Ossi Kaltiokallio ◽  
Roland Hostettler ◽  
Hüseyin Yiğitler ◽  
Mikko Valkama

Received signal strength (RSS) changes of static wireless nodes can be used for device-free localization and tracking (DFLT). Most RSS-based DFLT systems require access to calibration data, either RSS measurements from a time period when the area was not occupied by people, or measurements while a person stands in known locations. Such calibration periods can be very expensive in terms of time and effort, making system deployment and maintenance challenging. This paper develops an Expectation-Maximization (EM) algorithm based on Gaussian smoothing for estimating the unknown RSS model parameters, liberating the system from supervised training and calibration periods. To fully use the EM algorithm’s potential, a novel localization-and-tracking system is presented to estimate a target’s arbitrary trajectory. To demonstrate the effectiveness of the proposed approach, it is shown that: (i) the system requires no calibration period; (ii) the EM algorithm improves the accuracy of existing DFLT methods; (iii) it is computationally very efficient; and (iv) the system outperforms a state-of-the-art adaptive DFLT system in terms of tracking accuracy.
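The idea of learning model parameters without a calibration period can be illustrated with a much simpler EM than the paper's Gaussian-smoothing version: a two-component Gaussian-mixture EM that separates baseline from person-affected RSS readings in an unlabeled stream. All RSS values here are synthetic.

```python
import random, math

random.seed(3)

# Synthetic RSS stream: baseline readings near -50 dBm, perturbed readings
# near -55 dBm when a person affects the link (assumed values, no labels).
data = ([random.gauss(-50, 1.0) for _ in range(300)] +
        [random.gauss(-55, 1.5) for _ in range(200)])
random.shuffle(data)

# Minimal two-component Gaussian-mixture EM: learns the model parameters
# unsupervised, i.e. with no calibration period. (The paper's EM smooths a
# state-space model; this is only a toy analogue of the E/M iteration.)
mu, sd, pi = [-48.0, -57.0], [2.0, 2.0], [0.5, 0.5]
for _ in range(50):
    # E-step: responsibility of each component for each sample.
    resp = []
    for x in data:
        p = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * sd[k] ** 2)) / sd[k]
             for k in range(2)]
        s = sum(p)
        resp.append([pk / s for pk in p])
    # M-step: re-estimate weights, means, and standard deviations.
    for k in range(2):
        nk = sum(r[k] for r in resp)
        pi[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        sd[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                              for r, x in zip(resp, data)) / nk)
print([round(m, 1) for m in mu])  # component means recover the two RSS levels
```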


2019 ◽  
Vol 9 (14) ◽  
pp. 2811
Author(s):  
Choi ◽  
Yun ◽  
Kim ◽  
Jin ◽  
Kim

Real wars involve considerable uncertainty when determining firing schedules. This study proposes a robust optimization model that accounts for these uncertainties. In this model, parameters that are affected by the enemy's behavior and will, i.e., the threats posed by enemy targets and the threat times of enemy targets, are treated as uncertain parameters. The robust optimization model with these parameters is intractable because of its semi-infinite constraints. Thus, this study proposes an approach that reformulates the model into a tractable problem: a robust optimization model is developed using the scenario concept, and a solution is found for that model. Here, combinations of the uncertain parameter values are represented by scenarios. The approach divides the problem into a master problem and a subproblem to find a robust solution. A genetic algorithm is used in the master problem to overcome the complexity of the global search, thereby obtaining a solution within a reasonable time. In the subproblem, the worst scenarios for a candidate solution are searched, so that the solution remains robust across all scenarios. Numerical experiments compare robust and nominal solutions at various uncertainty levels to verify the superiority of the robust solution.
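The master/subproblem decomposition can be sketched on a tiny invented instance: three shooters, two targets, three threat scenarios. The instance is small enough to enumerate assignments exhaustively in place of the genetic algorithm.

```python
import itertools

# Toy robust firing problem: assign each of 3 shooters to one of 2 targets.
# Scenarios carry the uncertain parameters (base threat per target); the
# values and the single-shot miss probability below are illustrative.
scenarios = [[10.0, 6.0], [7.0, 9.0], [12.0, 4.0]]
miss = 0.4

def worst_case(assign):
    # Subproblem: search the worst scenario for a fixed assignment.
    def residual(threats):
        # Each target's threat decays with the number of shooters on it.
        return sum(t * miss ** sum(1 for a in assign if a == j)
                   for j, t in enumerate(threats))
    return max(residual(s) for s in scenarios)

# Master problem: the paper uses a genetic algorithm; this instance is small
# enough to enumerate all 2**3 assignments exhaustively instead.
best = min(itertools.product([0, 1], repeat=3), key=worst_case)
print(best, round(worst_case(best), 3))
```

The robust solution splits fire two-and-one rather than massing on the nominally biggest threat, because the min-max objective hedges against the scenario where the other target matters more.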


Stats ◽  
2019 ◽  
Vol 2 (2) ◽  
pp. 259-271 ◽  
Author(s):  
Tom Burr ◽  
Elisa Bonner ◽  
Kamil Krzysztoszek ◽  
Claude Norman

For statistical evaluations that involve within-group and between-group variance components (denoted σ_W² and σ_B², respectively), there is sometimes a need to monitor for a shift in the mean of time-ordered data. Uncertainty in the estimates σ̂_W² and σ̂_B² should be accounted for when setting alarm thresholds to check for a mean shift, as both σ_W² and σ_B² must be estimated. One-way random effects analysis of variance (ANOVA) is the main tool for analysing such grouped data. Nearly all ANOVA applications assume that both the within-group and between-group components are normally distributed. However, depending on the application, the within-group and/or between-group probability distributions might not be well approximated by a normal distribution. This review paper uses the same example throughout to illustrate the possible approaches to setting alarm limits in grouped data, depending on what is assumed about the within-group and between-group probability distributions. The example involves measurement data, for which systematic errors are assumed to remain constant within a group and to change between groups. The false alarm probability depends on the assumed measurement error model and its within-group and between-group error variances, which are estimated using historical data, usually with ample within-group data but a small number of groups (three to 10 typically). This paper illustrates the parametric, semi-parametric, and non-parametric options for setting alarm thresholds in such grouped data.
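A minimal sketch of the parametric (normal-theory) option: estimate σ_W² and σ_B² from grouped historical data via one-way random-effects ANOVA mean squares, then set an alarm threshold for a new observation. The data, group counts, and three-sigma rule below are illustrative choices.

```python
import random, statistics

random.seed(4)

# Historical grouped data: few groups (here 5) with ample within-group data,
# mirroring the setting in the abstract. True sigma_B = 0.5, sigma_W = 1.0
# (assumed); the group-level draw models a constant systematic error.
groups = []
for _ in range(5):
    bias = random.gauss(0, 0.5)
    groups.append([10.0 + bias + random.gauss(0, 1.0) for _ in range(30)])

n = len(groups[0])
group_means = [statistics.mean(g) for g in groups]
grand = statistics.mean(group_means)

# One-way random-effects ANOVA estimators via mean squares (equal group sizes).
ms_within = statistics.mean(statistics.variance(g) for g in groups)
ms_between = n * statistics.variance(group_means)
sigma2_w = ms_within
sigma2_b = max(0.0, (ms_between - ms_within) / n)  # truncate at zero

# Parametric alarm threshold for a single new observation: flag a mean shift
# when |x - grand| exceeds three estimated total standard deviations.
threshold = 3.0 * (sigma2_w + sigma2_b) ** 0.5
print(round(sigma2_w, 2), round(sigma2_b, 2), round(threshold, 2))
```

With only three to 10 groups, σ̂_B² is very imprecise, which is exactly why the abstract insists the estimation uncertainty must be folded into the alarm threshold.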


1997 ◽  
Vol 64 (3) ◽  
pp. 413-421 ◽  
Author(s):  
A. B. Pleasants

Abstract. A model of the birthdate distribution for a herd of beef cows is constructed using the probability distributions of the variables that affect reproduction in the cow: anoestrous interval, oestrous cycle length, conception at each oestrus, gestation length, period of mating, and the prior calving frequency distribution. The model is general and can be reparameterized to deal with issues such as intervention to synchronize oestrous cycles among cows in the herd by changing the form of the relevant probability distributions. The model is applied to the question of when to begin mating in a herd of beef cows. The average calf live weight at day 200, herd conception rate, and proportion of cows calving before the planned start of calving were calculated from the model output. The model parameters (the anoestrous period, conception rate at each oestrus, and the regression between prior calving date and anoestrous period) were varied in a factorial design to investigate a range of circumstances found on a farm. Prior calving distributions were generated by random sampling from eight actual calving frequency distributions. Generally, starting mating earlier produced an advantage in terms of extra calf live weight and herd conception rate. However, the proportion of the herd calving earlier than expected increased with early mating. Thus, the feasibility of early mating depends on the cost to the farmer of dealing with early-calving cows as well as the advantage of heavier, older calves. Altering the fixed parameters in the model (variances and covariances, prior calving distributions, mating period) to accommodate the circumstances of herds run under different conditions may produce different results. The model structure allows easy alteration of these parameters and also the introduction of different probability distributions for some variables. This might be necessary to model oestrous synchronization and artificial insemination, issues not considered in this paper.
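A stripped-down Monte Carlo version of such a herd model can show how the mating start date shifts the calving distribution. All distributions and parameter values below are illustrative, not those fitted in the paper.

```python
import random, statistics

random.seed(5)

# Illustrative reproduction parameters: 21-day oestrous cycle, 282-day
# gestation, 63-day mating period, 0.6 conception probability per oestrus.
CYCLE, GESTATION, MATING_DAYS, P_CONCEIVE = 21, 282, 63, 0.6

def calving_day(mating_start, prior_calving=0):
    # Anoestrous interval after the prior calving (assumed distribution).
    oestrus = prior_calving + max(25, random.gauss(60, 15))
    while oestrus < mating_start:
        oestrus += CYCLE                     # cycle until mating begins
    day = oestrus
    while day < mating_start + MATING_DAYS:
        if random.random() < P_CONCEIVE:     # conception at this oestrus?
            return day + GESTATION
        day += CYCLE
    return None                              # failed to conceive: empty cow

def herd_stats(mating_start, n=5000):
    calvings = [calving_day(mating_start) for _ in range(n)]
    conceived = [c for c in calvings if c is not None]
    return len(conceived) / n, statistics.mean(conceived)

rate_early, mean_early = herd_stats(40)
rate_late, mean_late = herd_stats(80)
print(round(rate_early, 2), round(rate_late, 2))
```

Earlier mating starts shift the mean calving date forward, the mechanism behind the heavier-calf advantage; swapping in other distributions for the anoestrous interval or conception probability is how such a model would be reparameterized for interventions like oestrous synchronization.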

