A New Interval Area Metric for Model Validation With Limited Experimental Data

2018 ◽  
Vol 140 (6) ◽  
Author(s):  
Ning Wang ◽  
Wen Yao ◽  
Yong Zhao ◽  
Xiaoqian Chen ◽  
Xiang Zhang ◽  
...  

Various stochastic validation metrics have been developed for validating models, among which the area metric is frequently used in practical problems. However, the existing area metric does not consider the experimental epistemic uncertainty caused by a lack of sufficient physical observations. Therefore, it cannot provide a confidence level associated with the amount of experimental data, which is a desirable characteristic of a validation metric. In this paper, the concept of the area metric is extended to a new metric, namely the interval area metric, for single-site model validation with limited experimental data. The core of the proposed metric is the definition of two boundary distribution functions based on the Dvoretzky–Kiefer–Wolfowitz inequality, which provide an interval, at a given confidence level, that covers the true cumulative distribution function (CDF) of the physical observations. Based on this interval area metric, the validity of a model can be quantitatively measured at a specified confidence level that accounts for the scarcity of experimental information. The new metric is examined and compared with existing metrics through numerical case studies to demonstrate its validity and explore its properties. Furthermore, an engineering example illustrates the effectiveness of the proposed metric in a practical satellite structure application.

Author(s):  
Aniruddha Choudhary ◽  
Ian T. Voyles ◽  
Christopher J. Roy ◽  
William L. Oberkampf ◽  
Mayuresh Patil

Our approach to the Sandia Verification and Validation Challenge Problem is to use probability bounds analysis (PBA) based on a probabilistic representation for aleatory uncertainties and an interval representation for (most) epistemic uncertainties. The nondeterministic model predictions thus take the form of p-boxes, or bounding cumulative distribution functions (CDFs) that contain the family of all CDFs that could exist within the uncertainty bounds. The scarcity of experimental data provides little support for treating all uncertain inputs as purely aleatory uncertainties and also precludes significant calibration of the models. We instead seek to estimate the model form uncertainty at conditions where experimental data are available, then extrapolate this uncertainty to conditions where no data exist. The modified area validation metric (MAVM) is employed to estimate the model form uncertainty, which is important because the model involves significant simplifications (of both a geometric and a physical nature) of the true system. The results of the verification and validation processes are treated as additional interval-based uncertainties applied to the nondeterministic model predictions, from which the failure prediction is made. Based on this approach, we estimate the probability of failure to be as large as 0.0034 and conclude that the tanks are unsafe.
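The key ingredient of the MAVM, as opposed to the plain area metric, is that the area between the model and experimental CDFs is split by sign, so the model form uncertainty can be applied asymmetrically above and below the prediction. The sketch below shows only that signed-area split on empirical CDFs; it is a hedged illustration of the idea, not the paper's MAVM implementation, and the function name is mine.

```python
import numpy as np

def split_area_metric(model_samples, exp_samples):
    """Signed split of the area between two empirical CDFs.

    d_plus accumulates area where the model CDF lies above the experimental
    CDF (experiment shifted toward larger values), d_minus the opposite.
    """
    grid = np.sort(np.concatenate([model_samples, exp_samples]))
    Fm = np.searchsorted(np.sort(model_samples), grid, side="right") / len(model_samples)
    Fe = np.searchsorted(np.sort(exp_samples), grid, side="right") / len(exp_samples)
    widths = np.diff(grid)
    diff = (Fm - Fe)[:-1]
    d_plus = np.sum(np.clip(diff, 0.0, None) * widths)
    d_minus = np.sum(np.clip(-diff, 0.0, None) * widths)
    return d_plus, d_minus
```

In an MAVM-style workflow the two areas would then be attached as one-sided interval uncertainties to the model prediction, e.g. roughly [y - d_minus, y + d_plus].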


2010 ◽  
Vol 29-32 ◽  
pp. 1252-1257 ◽  
Author(s):  
Hui Xin Guo ◽  
Lei Chen ◽  
Hang Min He ◽  
Dai Yong Lin

A method for handling hybrid uncertainties, including aleatory and epistemic uncertainty, is proposed for the computation of reliability. The aleatory uncertainty is modeled as a random variable and the epistemic uncertainty is modeled with evidence theory. The two types of uncertainty are first transformed into a random set, and the limit-state function of a product is mapped into a random set using the extension principle of random sets. Then, the belief and plausibility functions of the safety event are determined. These two functions are viewed as the lower and upper cumulative distribution functions of reliability, respectively. The reliability of a product is thus bounded by two cumulative distribution functions, yielding an interval estimate of reliability. The proposed method is demonstrated with an example.
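The belief/plausibility bounds at the heart of this construction are simple to state: belief of an event sums the masses of focal elements entirely contained in it, plausibility sums the masses of focal elements that merely intersect it. A minimal sketch for interval focal elements and an interval safety event (illustrative names and data, not the paper's code):

```python
def belief_plausibility(focal_elements, event):
    """Bel/Pl of an interval event under a body of evidence.

    focal_elements: list of ((lo, hi), mass) interval focal elements.
    event: (lo, hi) interval, e.g. the safe region of the limit state.
    Bel counts focal elements fully inside the event; Pl counts those
    that intersect it, so Bel <= Pl always holds.
    """
    e_lo, e_hi = event
    bel = sum(m for (lo, hi), m in focal_elements if e_lo <= lo and hi <= e_hi)
    pl = sum(m for (lo, hi), m in focal_elements if hi >= e_lo and lo <= e_hi)
    return bel, pl
```

With masses summing to 1, [Bel, Pl] of the safety event is exactly the interval estimate of reliability the abstract refers to.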


Author(s):  
Anastasia Soloveva ◽  
Sergey Solovev

Reliability is one of the main indicators of the mechanical safety of structural elements. The choice of stochastic models is an important task in reliability analysis for describing the variability of random variables with aleatory and epistemic uncertainty. The article proposes a method for the reliability analysis of RHS (rectangular hollow section) steel truss joints based on the p-box approach. A p-box consists of two boundary distribution functions that enclose the area of possible distribution functions of a random variable. Using p-boxes makes it possible to model random variables without unreasonable assumptions about the exact cumulative distribution function (CDF) or the exact values of the CDF parameters. The developed approach yields an interval estimate of the non-failure probability of the truss joints, which is necessary for a comprehensive (system) reliability analysis of the entire truss.
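A concrete way to build such a p-box is to leave the distribution parameters interval-valued and take the envelope of the resulting CDFs; the interval non-failure probability is then read off the two bounding curves. The sketch below does this for a normal variable with interval mean and standard deviation (for the normal family the envelope over the four parameter corners suffices, since the CDF is monotone in each parameter at every fixed point). All names are illustrative; this is not the article's implementation.

```python
import numpy as np
from scipy import stats

def pbox_normal(mu_lo, mu_hi, sg_lo, sg_hi, x):
    """Lower/upper bounding CDFs over the parameter box [mu_lo, mu_hi] x [sg_lo, sg_hi]."""
    corners = np.array([stats.norm(m, s).cdf(x)
                        for m in (mu_lo, mu_hi) for s in (sg_lo, sg_hi)])
    return corners.min(axis=0), corners.max(axis=0)

def nonfailure_interval(mu_lo, mu_hi, sg_lo, sg_hi, limit):
    """Interval for P(X <= limit), e.g. a load effect staying below capacity."""
    lo, up = pbox_normal(mu_lo, mu_hi, sg_lo, sg_hi, np.atleast_1d(float(limit)))
    return float(lo[0]), float(up[0])
```

The width of the returned interval directly reflects the epistemic uncertainty in the CDF parameters, which is the point of avoiding a single assumed distribution.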


2017 ◽  
Vol 139 (3) ◽  
Author(s):  
Min-Yeong Moon ◽  
K. K. Choi ◽  
Hyunkyoo Cho ◽  
Nicholas Gaul ◽  
David Lamb ◽  
...  

Conventional reliability-based design optimization (RBDO) methods assume that a simulation model represents the real physics accurately. However, this assumption may not always hold, as the simulation model could be biased. Accordingly, a product designed at the conventional RBDO optimum may either fail to satisfy the target reliability or be overly conservative. Therefore, simulation model validation using output experimental data, which corrects the model bias, should be integrated into the RBDO process. With particular focus on RBDO, the model validation needs to account for the uncertainty induced by insufficient experimental data as well as the inherent variability of the products. In this paper, a confidence-based model validation method has been developed that captures both the variability and the uncertainty and corrects the model bias at a user-specified target confidence level. The developed model validation helps RBDO obtain a conservative optimum design at the target confidence level. RBDO with model validation may have a convergence issue because the feasible domain changes as the design moves (i.e., a moving-target problem). To resolve this issue, a practical optimization procedure is proposed. Furthermore, efficiency is achieved by carrying out deterministic design optimization (DDO) and RBDO without model validation, followed by RBDO with confidence-based model validation. Finally, we demonstrate that the proposed RBDO approach achieves a conservative and practical optimum design given limited experimental data.
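One simple way to turn limited validation data into a conservative bias correction at a chosen confidence level, in the spirit of the abstract, is a one-sided Student-t confidence bound on the mean discrepancy between experiment and model. This is a generic statistical sketch under my own assumptions (normally distributed discrepancies), not the paper's confidence-based method.

```python
import numpy as np
from scipy import stats

def conservative_bias_bound(exp_minus_model, confidence=0.90):
    """One-sided upper confidence bound on the mean model bias.

    exp_minus_model: paired discrepancies (experiment - model) from a small
    validation campaign. Shifting predictions by this bound makes the
    design conservative at the given confidence level.
    """
    d = np.asarray(exp_minus_model, dtype=float)
    n = len(d)
    t = stats.t.ppf(confidence, n - 1)
    return d.mean() + t * d.std(ddof=1) / np.sqrt(n)
```

As more experiments are added, the t-quantile and the standard-error term both shrink, so the correction relaxes toward the plain mean bias, mirroring how confidence-based validation tightens with data.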


2020 ◽  
Vol 501 (1) ◽  
pp. 994-1001
Author(s):  
Suman Sarkar ◽  
Biswajit Pandey ◽  
Snehasish Bhattacharjee

We use an information theoretic framework to analyse data from the Galaxy Zoo 2 project and study whether there are any statistically significant correlations between the presence of bars in spiral galaxies and their environment. We measure the mutual information between the barredness of galaxies and their environments in a volume limited sample (Mr ≤ −21) and compare it with the same in data sets where (i) the bar/unbar classifications are randomized and (ii) the spatial distribution of galaxies is shuffled on different length scales. We assess the statistical significance of the differences in the mutual information using a t-test and find that neither randomization of morphological classifications nor shuffling of the spatial distribution alters the mutual information in a statistically significant way. The non-zero mutual information between barredness and environment arises from the finite and discrete nature of the data set and can be entirely explained by mock Poisson distributions. We also separately compare the cumulative distribution functions of the barred and unbarred galaxies as a function of their local density. Using a Kolmogorov–Smirnov test, we find that the null hypothesis cannot be rejected even at the 75 per cent confidence level. Our analysis indicates that environments do not play a significant role in the formation of a bar, which is largely determined by the internal processes of the host galaxy.
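The mutual-information estimate used in such analyses comes straight from the joint contingency table of the two label sets. A minimal sketch (illustrative, not the authors' pipeline):

```python
import numpy as np

def mutual_information(labels_a, labels_b):
    """MI in bits between two discrete label arrays, via the joint table.

    Zero iff the empirical joint distribution factorizes; finite-sample
    noise makes it slightly positive for independent labels, which is why
    the paper compares against randomized and mock catalogues.
    """
    a_vals, a_idx = np.unique(labels_a, return_inverse=True)
    b_vals, b_idx = np.unique(labels_b, return_inverse=True)
    joint = np.zeros((len(a_vals), len(b_vals)))
    np.add.at(joint, (a_idx, b_idx), 1.0)   # fill the contingency table
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

The CDF comparison in the abstract maps onto `scipy.stats.ks_2samp` applied to the local-density samples of the barred and unbarred subsets.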


2021 ◽  
Vol 13 (6) ◽  
pp. 1096
Author(s):  
Soi Ahn ◽  
Sung-Rae Chung ◽  
Hyun-Jong Oh ◽  
Chu-Yong Chung

This study aimed to generate a near-real-time composite of aerosol optical depth (AOD) to improve predictive model ability and provide current conditions of aerosol spatial distribution and transport across Northeast Asia. AOD, a proxy for aerosol loading, is estimated remotely by various spaceborne imaging sensors capturing visible and infrared spectra. Nevertheless, differences in satellite-based retrieval algorithms, spatiotemporal resolution, sampling, radiometric calibration, and cloud-screening procedures create significant variability among AOD products. Satellite products, however, can be complementary in terms of their accuracy and spatiotemporal comprehensiveness. Thus, composite AOD products were derived for Northeast Asia based on data from four sensors: the Advanced Himawari Imager (AHI), Geostationary Ocean Color Imager (GOCI), Moderate Resolution Imaging Spectroradiometer (MODIS), and Visible Infrared Imaging Radiometer Suite (VIIRS). Cumulative distribution functions were employed to estimate error statistics using measurements from the Aerosol Robotic Network (AERONET). In order to apply the AERONET point-specific error, coefficients for each satellite were calculated using inverse distance weighting. Finally, the root mean square error (RMSE) for each satellite AOD product was calculated based on inverse composite weighting (ICW). Hourly AOD composites were generated (00:00–09:00 UTC, 2017) using the regression equation derived from comparing the composite AOD error statistics to AERONET measurements, and the results showed that the correlation coefficient and RMSE values of the composite were close to those of the low Earth orbit satellite products (MODIS and VIIRS). The methodology and resulting dataset demonstrate the successful merging of multi-sensor retrievals to produce long-term satellite-based climate data records.
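The inverse-distance-weighting step mentioned above spreads station-specific error statistics from AERONET sites onto a regular grid. A generic sketch of that interpolation (names and the power-2 kernel are my assumptions, not the study's exact configuration):

```python
import numpy as np

def idw_weights(station_xy, grid_xy, power=2.0, eps=1e-12):
    """Normalized inverse-distance weights of each station at each grid point.

    station_xy: (S, 2) station coordinates; grid_xy: (G, 2) grid coordinates.
    Returns a (G, S) matrix whose rows sum to 1; multiplying it by per-station
    error statistics interpolates those statistics onto the grid.
    """
    d = np.linalg.norm(grid_xy[:, None, :] - station_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power    # eps guards against zero distance
    return w / w.sum(axis=1, keepdims=True)
```

The ICW composite then weights each sensor's AOD inversely to its interpolated RMSE, so the more accurate product dominates locally.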


2021 ◽  
Vol 11 (8) ◽  
pp. 3310
Author(s):  
Marzio Invernizzi ◽  
Federica Capra ◽  
Roberto Sozzi ◽  
Laura Capelli ◽  
Selena Sironi

For environmental odor nuisance, it is extremely important to characterize the instantaneous concentration statistics. In this work, a fluctuating plume model for different statistical moments is proposed. It provides data in terms of mean concentration, variance, and intensity of concentration. The 90th-percentile peak-to-mean factor, R90, was tested here by comparing it with experimental results (the Uttenweiler field experiment), considering different probability distribution functions (PDFs): the Gamma and the Modified Weibull. Seventy-two percent of the simulated mean concentration values fell within a factor of 2 of the experimental ones, so the model was judged acceptable. The modelled results for both the standard deviation, σC, and the concentration intensity, Ic, overestimate the experimental data. This may be due to the non-ideality of the measurement system. The propagation of these errors to the estimation of R90 is complex, but the ranges covered are quite repeatable: the obtained values are 1–3 for the Gamma PDF and 1.5–4 for the Modified Weibull PDF, compared with experimental values from 1.4 to 3.6.


Author(s):  
Rama Subba Reddy Gorla

Heat transfer from a nuclear fuel rod bumper support was computationally simulated by a finite element method and probabilistically evaluated in view of the several uncertainties in the performance parameters. Cumulative distribution functions and sensitivity factors were computed for the overall heat transfer rates due to the thermodynamic random variables. These results can be used to quickly identify the most critical design variables in order to optimize the design and make it cost effective. The analysis leads to the selection of the appropriate measurements to be used in heat transfer and to the identification of the most critical measurements and parameters.
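The generic recipe behind such a probabilistic evaluation is Monte Carlo sampling of the random inputs through the response model, giving both the output CDF and correlation-based sensitivity factors. A minimal sketch under my own assumptions (independent normal inputs, Pearson-correlation sensitivities), standing in for the paper's finite-element-based procedure:

```python
import numpy as np

def mc_cdf_and_sensitivity(model, means, stds, n=20000, seed=0):
    """Monte Carlo CDF of a scalar response plus per-input sensitivity factors.

    model: callable mapping an input vector to a scalar response.
    Returns sorted responses, their empirical CDF values, and the Pearson
    correlation of each input with the response as a sensitivity factor.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(means, stds, size=(n, len(means)))
    y = np.array([model(row) for row in x])
    order = np.argsort(y)
    cdf = np.arange(1, n + 1) / n
    sens = np.array([np.corrcoef(x[:, i], y)[0, 1] for i in range(len(means))])
    return y[order], cdf, sens
```

Ranking inputs by |sensitivity| is what singles out the most critical design variables and measurements.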

