Bayesian Approach to Quantifying Epistemic Uncertainty in a Processor Availability Model

2012 ◽  
Vol 49 (6) ◽  
pp. 1019-1031 ◽  
Author(s):  
Chester J. Everline
2010 ◽  
Vol 132 (5) ◽  
Author(s):  
Jooho Choi ◽  
Dawn An ◽  
Junho Won

An efficient method for structural reliability analysis is proposed under the Bayesian framework, which can deal with the epistemic uncertainty arising from a limited amount of data. Until recently, conventional reliability analyses dealt mostly with aleatory uncertainty, which is related to inherent physical randomness and whose statistical properties are assumed to be completely known. In reality, however, epistemic uncertainties are prevalent, which makes the existing methods less useful. In the Bayesian approach, the probability itself is treated as a random variable following a beta distribution conditional on the provided data, which is determined by conducting a double loop of reliability analyses. The Kriging dimension reduction method is employed to implement the reliability analysis efficiently; it can construct the PDF of the limit state function with favorable accuracy using a small number of analyses. Mathematical examples are used to demonstrate the proposed method. An engineering design problem is also addressed: finding an optimum design of a pigtail spring in a vehicle suspension while taking into account material uncertainty due to limited test data.
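The core of the Bayesian treatment described above, treating the failure probability itself as a beta-distributed random variable conditional on limited data, can be sketched in a few lines. The test counts and the uniform prior below are illustrative assumptions, not values from the paper, and the double-loop reliability analysis and Kriging surrogate are omitted:

```python
import numpy as np

n_tests, n_failures = 20, 2        # limited test data (illustrative, not from the paper)
alpha0, beta0 = 1.0, 1.0           # uniform Beta(1, 1) prior on the failure probability

# Conjugate update: the posterior of the failure probability is
# Beta(alpha0 + failures, beta0 + successes).
a_post = alpha0 + n_failures
b_post = beta0 + (n_tests - n_failures)

rng = np.random.default_rng(0)
samples = rng.beta(a_post, b_post, size=100_000)   # draws from the posterior

mean_pf = samples.mean()               # posterior mean of the failure probability
pf_95 = np.quantile(samples, 0.95)     # conservative 95th-percentile bound
print(f"posterior mean p_f = {mean_pf:.3f}, 95% bound = {pf_95:.3f}")
```

With more test data the posterior narrows, shrinking the gap between the mean estimate and the conservative bound; with scarce data the bound stays wide, making the epistemic uncertainty explicit.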


Author(s):  
Min-Yeong Moon ◽  
K. K. Choi ◽  
Nicholas Gaul ◽  
David Lamb

To accurately predict the reliability of a physical system under aleatory (i.e., irreducible) uncertainty in system performance, a very large amount of physical output test data is required. Alternatively, a simulation-based method can be used to assess reliability, but this remains a challenge as it involves epistemic (i.e., reducible) uncertainties due to imperfections in input distribution models, simulation models, and surrogate models. In practical engineering applications, only a limited number of tests are used to model the input distribution. Thus, estimated input distribution models are uncertain. As a result, estimated output distributions, which are the outcomes of input distributions and biased simulation models, and the corresponding reliabilities also become uncertain. Furthermore, only a limited number of output tests are conducted because of their cost, which results in additional epistemic uncertainty. To deal with epistemic uncertainties in the prediction of reliability, a confidence concept is introduced to properly assess conservative reliability by considering all epistemic uncertainties due to the limited numbers of both input test data (i.e., input uncertainty) and output test data (i.e., output uncertainty), biased simulation models, and surrogate models. One way to treat the epistemic uncertainties due to limited input and output test data and biased models is a hierarchical Bayesian approach. However, the hierarchical Bayesian approach could result in an overly conservative reliability assessment by integrating over all possible candidate input distributions in the Bayesian analysis. To tackle this issue, a new confidence-based reliability assessment method that reduces unnecessary conservativeness is developed in this paper. In the developed method, the epistemic uncertainty induced by a limited number of input data is treated by approximating the input distribution model using a bootstrap method.
Two engineering examples are used to demonstrate that 1) the proposed method can predict the reliability of a physical system that satisfies the user-specified target confidence level and 2) the proposed confidence-based reliability is less conservative than the one that fully integrates possible candidates of input distribution models in the hierarchical Bayesian analysis.
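The bootstrap treatment of input-distribution uncertainty can be sketched as follows. The normal input model, the placeholder limit-state function g(x) = 12 - x, and all sample sizes are illustrative assumptions, not the paper's engineering examples:

```python
import numpy as np

rng = np.random.default_rng(42)
input_data = rng.normal(10.0, 2.0, size=15)   # limited input test data (synthetic)

def reliability(mu, sigma, n=10_000):
    """Monte Carlo reliability for a placeholder limit state g(x) = 12 - x.

    The system survives when g(x) > 0, i.e. when x < 12.
    """
    x = rng.normal(mu, sigma, n)
    return np.mean(12.0 - x > 0.0)

# Bootstrap: each resample of the limited data yields one candidate input
# distribution (here, refit normal parameters), hence one candidate reliability.
boot_rel = []
for _ in range(200):
    resample = rng.choice(input_data, size=input_data.size, replace=True)
    boot_rel.append(reliability(resample.mean(), resample.std(ddof=1)))

# Conservative, confidence-style assessment: a low quantile of the
# bootstrap reliabilities rather than an integral over all candidates.
conservative = np.quantile(boot_rel, 0.05)
print(f"conservative reliability estimate = {conservative:.3f}")
```

Each bootstrap resample plays the role of one candidate input distribution; taking a low quantile of the resulting reliabilities gives a conservative estimate without integrating over every candidate, which is the source of the excess conservativeness the paper attributes to the hierarchical Bayesian approach.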


2020 ◽  
Author(s):  
Laetitia Zmuda ◽  
Charlotte Baey ◽  
Paolo Mairano ◽  
Anahita Basirat

It is well known that individuals can identify novel words in the speech stream of an artificial language using statistical dependencies. While the underlying computations are thought to be similar from one stream to another (e.g., transitional probabilities between syllables), performance is not. According to the "linguistic entrenchment" hypothesis, this is because individuals have prior knowledge about co-occurrences of elements in speech, which intervenes during verbal statistical learning. The focus of previous studies was on task performance. The goal of the current study is to examine the extent to which prior knowledge impacts metacognition (i.e., the ability to evaluate one's own cognitive processes). Participants were exposed to two different artificial languages. Using a fully Bayesian approach, we estimated an unbiased measure of metacognitive efficiency and compared the two languages in terms of task performance and metacognition. While task performance was higher in one of the languages, metacognitive efficiency was similar in both. In addition, a model assuming no correlation between the two languages accounted for our results better than a model where correlations were introduced. We discuss the implications of our findings for the computations that underlie the interaction between input and prior knowledge during verbal statistical learning.
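The transitional-probability statistic assumed to drive word segmentation can be computed directly from a syllable stream. The mini-language below is an illustrative stand-in, not the stimuli actually used in the study:

```python
import random
from collections import Counter

random.seed(0)
words = ["tupiro", "golabu", "bidaku"]               # illustrative trisyllabic "words"
stream = [random.choice(words) for _ in range(200)]  # randomized familiarization stream
# Split each 6-character word into its three 2-character syllables.
syllables = [w[i:i + 2] for w in stream for i in range(0, 6, 2)]

pair_counts = Counter(zip(syllables, syllables[1:]))
first_counts = Counter(syllables[:-1])

def tp(s1, s2):
    """Transitional probability P(s2 | s1) estimated from the stream."""
    return pair_counts[(s1, s2)] / first_counts[s1]

# Within-word TPs are high (1.0 in this toy language), while TPs across
# word boundaries hover near 1/3, which is the cue for segmentation.
print(tp("tu", "pi"), tp("ro", "go"))
```

Dips in transitional probability mark candidate word boundaries; the entrenchment hypothesis discussed above concerns how listeners' prior co-occurrence knowledge modulates the use of exactly this statistic.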

