Approximation and Uncertainty Quantification of Systems with Arbitrary Parameter Distributions Using Weighted Leja Interpolation

Algorithms ◽  
2020 ◽  
Vol 13 (3) ◽  
pp. 51
Author(s):  
Dimitrios Loukrezis ◽  
Herbert De Gersem

Approximation and uncertainty quantification methods based on Lagrange interpolation are typically abandoned in cases where the probability distributions of one or more system parameters are not normal, uniform, or closely related distributions, due to the computational issues that arise when one wishes to define interpolation nodes for general distributions. This paper examines the use of the recently introduced weighted Leja nodes for that purpose. Weighted Leja interpolation rules are presented, along with a dimension-adaptive sparse interpolation algorithm, to be employed in the case of high-dimensional input uncertainty. The performance and reliability of the suggested approach are verified by four numerical experiments, where the respective models feature extreme value and truncated normal parameter distributions. Furthermore, the suggested approach is compared with a well-established polynomial chaos method and found to be either comparable or superior in terms of approximation and statistics estimation accuracy.
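As a rough illustration of the node construction the abstract refers to (a generic sketch, not the authors' code), the following greedily selects weighted Leja nodes for an arbitrary input density: each new node maximizes the weighted product of distances to the nodes chosen so far. The Gumbel density (an extreme value distribution) and the grid bounds are illustrative choices:

```python
import numpy as np

def weighted_leja(pdf, candidates, n_nodes):
    """Greedily select weighted Leja nodes from a candidate grid.

    At each step, pick the candidate x maximizing
    sqrt(pdf(x)) * prod_j |x - x_j| over the nodes chosen so far.
    """
    w = np.sqrt(pdf(candidates))
    nodes = [candidates[np.argmax(w)]]     # start at the weighted maximum
    for _ in range(n_nodes - 1):
        obj = w.copy()
        for xj in nodes:
            obj *= np.abs(candidates - xj)
        nodes.append(candidates[np.argmax(obj)])
    return np.array(nodes)

# Example: nodes weighted by the standard Gumbel density (mode at 0)
grid = np.linspace(-5.0, 10.0, 5001)
pdf = lambda x: np.exp(-x - np.exp(-x))
nodes = weighted_leja(pdf, grid, 7)
```

The first node lands at the density's mode, and later nodes spread out where the weighted distance product is largest, which is what makes the sequence usable as a nested interpolation grid.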

Author(s):  
Djamalddine Boumezerane

Abstract In this study, we use possibility distributions as a basis for parameter uncertainty quantification in one-dimensional consolidation problems. A possibility distribution is the one-point coverage function of a random set and is viewed as containing both partial ignorance and uncertainty. Vagueness and scarcity of the information needed for characterizing the coefficient of consolidation in clay can be handled using possibility distributions. Possibility distributions can be constructed from existing data, or based on transformation of probability distributions. An attempt is made to set out a systematic approach for estimating uncertainty propagation during the consolidation process. The measure of uncertainty is based on Klir's definition (1995). We make comparisons with results obtained from other approaches (probabilistic…) and discuss the importance of using possibility distributions in this type of problem.
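For illustration only (not taken from the paper), a triangular possibility distribution for the coefficient of consolidation c_v can be propagated through the consolidation-time relation t = T_v·H²/c_v using alpha-cuts and interval arithmetic; all numbers, the time factor T_v, and the drainage path H are hypothetical placeholders:

```python
def alpha_cut(a, b, c, alpha):
    """Interval {x : pi(x) >= alpha} of a triangular possibility
    distribution with support [a, c] and core b."""
    return (a + alpha * (b - a), c - alpha * (c - b))

def consolidation_time_cut(cv_cut, Tv=0.197, H=5.0):
    """Propagate an alpha-cut of c_v through t = Tv*H^2/c_v.

    t is monotone decreasing in c_v, so the interval endpoints swap.
    Tv = 0.197 is the time factor for 50% consolidation; H is the
    drainage path length (illustrative value).
    """
    lo, hi = cv_cut
    return (Tv * H**2 / hi, Tv * H**2 / lo)

cut = alpha_cut(1.0, 2.0, 4.0, 0.5)     # c_v in [1.5, 3.0] at alpha = 0.5
t_cut = consolidation_time_cut(cut)     # possibility interval for t
```

Sweeping alpha from 0 to 1 yields a nested family of intervals for t, i.e., the induced possibility distribution of the consolidation time.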


1984 ◽  
Vol 1 (19) ◽  
pp. 164 ◽  
Author(s):  
A. Mol ◽  
R.L. Groeneveld ◽  
A.J. Waanders

This paper discusses the need to incorporate a reliability analysis in the design procedures for rubble mound breakwaters. Such an analysis is defined and a suggested approach is outlined. Failure mechanisms are analysed and categorized in Damage Event Trees. The probability of failure is computed using a level III simulation method to include time and cumulative effects and to account for skewed probability distributions. Typical outputs of the computer program are shown and compared with results according to traditional design approaches. The paper concludes that there is a definite need to include reliability analysis in the design procedures for larger breakwaters, and that such an analysis must consider the accuracy of design parameters and methods.
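A minimal sketch of a level III (direct simulation) failure-probability estimate with skewed distributions, assuming a simple resistance-versus-load limit state; the lognormal parameters are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Structural resistance R and hydraulic load S, both skewed
# (lognormal); parameters and units are hypothetical.
R = rng.lognormal(mean=1.0, sigma=0.2, size=n)
S = rng.lognormal(mean=0.5, sigma=0.4, size=n)

# Level III estimate: count realizations where the limit state R - S < 0
p_fail = np.mean(R < S)
```

Unlike level I/II approximations, this directly samples the full distributions, so skewness and tail behaviour enter the failure probability without any normality assumption.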


Author(s):  
D. Bigoni ◽  
A. P. Engsig-Karup ◽  
H. True

This paper describes the results of the application of Uncertainty Quantification methods to a simple railroad vehicle dynamical example. Uncertainty Quantification methods take into account the probability distributions of the system parameters that stem from the parameter tolerances. In this paper the methods are applied to a low-dimensional vehicle dynamical model composed of a two-axle truck that is connected to a car body by a lateral spring, a lateral damper and a torsional spring, all with linear characteristics. These characteristics are not deterministically defined; they are defined by probability distributions. The model — but with deterministically defined parameters — was studied in [1] and [2], and this article will focus on the calculation of the critical speed of the model when the distributions of the parameters are taken into account. Results of the application of the traditional Monte Carlo sampling method will be compared with the results of the application of advanced Uncertainty Quantification methods [3]. The computational performance and fast convergence that result from the application of advanced Uncertainty Quantification methods are highlighted. Generalized Polynomial Chaos will be presented in the Collocation form, with emphasis on the pros and cons of each approach.
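To illustrate the convergence contrast the abstract highlights, the following sketch estimates the mean of a smooth response under one standard normal parameter, both by Monte Carlo sampling and by a Gauss-Hermite collocation rule (the quadrature backbone of gPC collocation). The response function g is a stand-in, not the vehicle model:

```python
import numpy as np

def mean_collocation(g, n_nodes):
    """Mean of g(xi), xi ~ N(0,1), via probabilists' Gauss-Hermite
    collocation: evaluate g only at the quadrature nodes."""
    x, w = np.polynomial.hermite_e.hermegauss(n_nodes)
    w = w / w.sum()                      # normalize weights to sum to 1
    return np.dot(w, g(x))

def mean_monte_carlo(g, n_samples, seed=0):
    """Plain Monte Carlo estimate of the same mean."""
    rng = np.random.default_rng(seed)
    return g(rng.standard_normal(n_samples)).mean()

g = lambda x: np.sin(x) + x**2           # smooth stand-in response; E[g] = 1
m_coll = mean_collocation(g, 5)          # 5 deterministic "solver runs"
m_mc = mean_monte_carlo(g, 10_000)       # 10,000 "solver runs"
```

For smooth responses the collocation estimate converges spectrally in the number of nodes, while Monte Carlo error decays only like one over the square root of the sample count, which is the trade-off the paper's comparison turns on.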


2020 ◽  
Vol 6 ◽  
pp. e298
Author(s):  
Fernando Rojas ◽  
Peter Wanke ◽  
Giuliani Coluccio ◽  
Juan Vega-Vargas ◽  
Gonzalo F. Huerta-Canepa

This paper proposes a slow-moving-item management method based on intermittent demand per unit time and lead-time demand of items in service enterprise inventory models. Our method uses a zero-inflated truncated normal statistical distribution, which makes it possible to model intermittent demand per unit time with a mixed statistical distribution. We conducted numerical experiments, based on an algorithm used to forecast intermittent demand over a fixed lead time, to show that our proposed distributions improved the performance of the continuous review inventory model with shortages. We evaluated multi-criteria elements (total cost, fill rate, shortage quantity per cycle, and the adequacy of the statistical distribution of the lead-time demand) for decision analysis using the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). We confirmed that our method improved the performance of the inventory model in comparison to other commonly used approaches such as simple exponential smoothing and Croston's method. We found an interesting association between the intermittency of demand per unit of time, the square root of this same parameter, and reorder point decisions, which could be explained using a classical multiple linear regression model. We confirmed that the variability parameter of the zero-inflated truncated normal statistical distribution used to model intermittent demand was positively related to the reorder point decision. Our study examined decision analysis using an illustrative example. Our suggested approach is original, valuable, and, in the case of slow-moving item management for service companies, allows for the verification of decision-making using multiple criteria.
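A minimal sketch (not the authors' implementation) of sampling per-period demand from a zero-inflated, positively truncated normal distribution; the values of p0, mu and sigma are illustrative:

```python
import numpy as np

def zi_truncnorm_demand(p0, mu, sigma, size, rng):
    """Zero-inflated truncated normal demand per period.

    With probability p0 the period's demand is exactly 0; otherwise it
    is drawn from normal(mu, sigma) truncated to positive values
    (simple rejection sampling), giving the mixed distribution.
    """
    zero = rng.random(size) < p0
    draws = np.empty(size)
    for i in range(size):
        d = rng.normal(mu, sigma)
        while d <= 0:                    # reject non-positive draws
            d = rng.normal(mu, sigma)
        draws[i] = d
    return np.where(zero, 0.0, draws)

rng = np.random.default_rng(1)
demand = zi_truncnorm_demand(p0=0.6, mu=5.0, sigma=2.0, size=10_000, rng=rng)
lead_time_demand = demand[:5].sum()      # demand over a hypothetical 5-period lead time
```

Summing consecutive per-period draws gives lead-time demand realizations, from which reorder points for a continuous review policy can be read off empirically.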


2020 ◽  
Vol 8 ◽  
Author(s):  
Brioch Hemmings ◽  
Matthew J. Knowling ◽  
Catherine R. Moore

Effective decision making for resource management is often supported by combining predictive models with uncertainty analyses. This combination allows quantitative assessment of management strategy effectiveness and risk. Typically, history matching is undertaken to increase the reliability of model forecasts. However, the question of whether the potential benefit of history matching will be realized, or outweigh its cost, is seldom asked. History matching adds complexity to the modeling effort, as information from historical system observations must be appropriately blended with the prior characterization of the system. Consequently, the cost of history matching is often significant. When it is not implemented appropriately, history matching can corrupt model forecasts. Additionally, the available data may offer little decision-relevant information, particularly where data and forecasts are of different types, or represent very different stress regimes. In this paper, we present a decision support modeling workflow where early quantification of model uncertainty guides ongoing model design and deployment decisions. This includes providing justification for undertaking (or forgoing) history matching, so that unnecessary modeling costs can be avoided and model performance can be improved. The workflow is demonstrated using a regional-scale modeling case study in the Wairarapa Valley (New Zealand), where assessments of stream depletion and nitrate-nitrogen contamination risks are used to support water-use and land-use management decisions. The probability of management success/failure is assessed by evaluating the proximity of model forecast probability distributions to ecologically motivated decision thresholds.
This study highlights several important insights that can be gained by undertaking early uncertainty quantification, including: i) validation of the prior numerical characterization of the system, in terms of its consistency with historical observations; ii) validation of model design or indication of areas of model shortcomings; iii) evaluation of the relative proximity of management decision thresholds to forecast probability distributions, providing a justifiable basis for stopping the modeling effort.
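The threshold-proximity assessment can be sketched as follows, assuming a hypothetical prior forecast ensemble; the numbers are placeholders, not the Wairarapa Valley results:

```python
import numpy as np

rng = np.random.default_rng(2)

# Prior (pre-history-matching) forecast ensemble of, e.g., stream
# depletion; the distribution parameters are hypothetical.
forecast = rng.normal(loc=0.8, scale=0.3, size=5000)

threshold = 1.0                          # ecologically motivated decision threshold

# Probability of management failure: forecast exceeds the threshold
p_exceed = np.mean(forecast > threshold)
```

If p_exceed is already comfortably near 0 or 1 under the prior, history matching may add cost without changing the management decision, which is the workflow's justification for forgoing it.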


2011 ◽  
Vol 10 (1) ◽  
pp. 140-160 ◽  
Author(s):  
Akil Narayan ◽  
Dongbin Xiu

Abstract In this work we consider a general notion of distributional sensitivity, which measures the variation in solutions of a given physical/mathematical system with respect to the variation of the probability distribution of the inputs. This is distinctively different from classical sensitivity analysis, which studies the changes of solutions with respect to the values of the inputs. The general idea is to measure the sensitivity of outputs with respect to probability distributions, a well-studied concept in related disciplines. We adapt these ideas to present a quantitative framework, in the context of uncertainty quantification, for measuring such sensitivity, along with a set of efficient algorithms to approximate the distributional sensitivity numerically. A remarkable feature of the algorithms is that they incur no computational effort beyond a one-time stochastic solve. Therefore, an accurate stochastic computation with respect to a prior input distribution is needed only once, and the ensuing distributional sensitivity computation for different input distributions is a post-processing step. We prove that an accurate numerical model leads to accurate calculations of this sensitivity, which applies not just to slowly converging Monte Carlo estimates, but also to exponentially convergent spectral approximations. We provide computational examples to demonstrate the ease of applicability and to verify the convergence claims.


Author(s):  
Hesham K. Alfares ◽  
Salih O. Duffuaa

This paper presents a simulation study to assess the performance of five known methods for converting ranks of several criteria into weights in multi-criteria decision-making. The five methods assessed are: rank-sum (RS) weights, rank-reciprocal (RR) weights, rank-order-centroid (ROC) weights, geometric weights (GW), and variable-slope linear (VSL) weights. The methods are compared in terms of weight estimation accuracy, considering different numbers of criteria and decision makers' preference structures. Alternative preference structures are represented by different probability distributions of randomly generated criteria weights, namely the uniform, normal, and exponential distributions. Results of the simulation experiments indicate that no single method is consistently superior to all others. On average, RS is best for uniform weights, VSL is best for normal weights, and ROC is best for exponential weights. However, for any multi-criteria decision-making (MCDM) problem, the best method for converting criteria ranks into weights depends on both the number of criteria and the weight distribution.
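Three of the five rank-to-weight conversions have standard closed forms (GW and VSL involve extra parameters and are omitted here); the following sketch uses the commonly cited formulas rather than anything specific to this paper, with rank 1 denoting the most important criterion:

```python
def rank_sum(n):
    """RS: w_i = 2(n + 1 - i) / (n(n + 1)) for ranks i = 1..n."""
    return [2 * (n + 1 - i) / (n * (n + 1)) for i in range(1, n + 1)]

def rank_reciprocal(n):
    """RR: w_i proportional to 1/i, normalized to sum to 1."""
    s = sum(1 / j for j in range(1, n + 1))
    return [(1 / i) / s for i in range(1, n + 1)]

def rank_order_centroid(n):
    """ROC: w_i = (1/n) * sum_{j=i}^{n} 1/j."""
    return [sum(1 / j for j in range(i, n + 1)) / n for i in range(1, n + 1)]
```

All three produce decreasing weights that sum to 1; they differ in how steeply weight concentrates on the top-ranked criteria, which is why their accuracy depends on the true weight distribution.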


RBRH ◽  
2020 ◽  
Vol 25 ◽  
Author(s):  
Álvaro José Back

ABSTRACT The objective of this work was to propose an alternative model of IDF equation based on the disaggregation of daily rainfall. Data from rainfall gauge station code 02649018 of the Brazilian National Water Agency, for the period 1968 to 2011, were used. Several extreme-value probability distributions were fitted. Rainfall intensities with return periods of 2 to 100 years and durations of 5 to 1440 minutes were estimated using the relationships between rainfall depths for different durations. The four coefficients of the traditionally used IDF equation were fitted, yielding a sum of squared deviations of 695.1 (mm·h⁻¹)² and a standard error of estimate of 2.69 mm·h⁻¹. For the proposed heavy-rainfall equation model, which relates rainfall intensity and duration to the maximum daily rainfall, the sum of squared deviations was 33.6 (mm·h⁻¹)² and the standard error of estimate was 0.59 mm·h⁻¹. Including the return period in the model, we obtained a sum of squared deviations of 129.4 (mm·h⁻¹)², with a standard error of estimate of 1.16 mm·h⁻¹. Besides better estimation accuracy, the proposed models have the advantages that they can be updated simply by updating the maximum daily precipitation, and that they support spatial representation.
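The traditionally used four-coefficient IDF equation has the form i = K·T^a / (t + b)^c; a sketch with placeholder coefficients (the fitted values from the paper are not reproduced here):

```python
def idf_intensity(T, t, K=1000.0, a=0.2, b=10.0, c=0.75):
    """Traditional four-coefficient IDF equation: i = K * T**a / (t + b)**c.

    T: return period (years), t: duration (minutes), i: intensity (mm/h).
    K, a, b, c are illustrative placeholders, not the paper's fitted values.
    """
    return K * T**a / (t + b) ** c

i_10y_30min = idf_intensity(T=10, t=30)
```

Intensity decreases with duration and increases with return period, which is the qualitative behaviour any fitted coefficient set must reproduce.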


2021 ◽  
Author(s):  
Y. Curtis Wang ◽  
Nirvik Sinha ◽  
Johann Rudi ◽  
James Velasco ◽  
Gideon Idumah ◽  
...  

Experimental data-based parameter search for Hodgkin-Huxley-style (HH) neuron models is a major challenge for neuroscientists and neuroengineers. Current search strategies are often computationally expensive, are slow to converge, have difficulty handling nonlinearities or multimodalities in the objective function, or require good initial parameter guesses. Most important, many existing approaches lack quantification of uncertainties in parameter estimates even though such uncertainties are of immense biological significance. We propose a novel method for parameter inference and uncertainty quantification in a Bayesian framework using the Markov chain Monte Carlo (MCMC) approach. This approach incorporates prior knowledge about model parameters (as probability distributions) and aims to map the prior to a posterior distribution of parameters informed by both the model and the data. Furthermore, using the adaptive parallel tempering strategy for MCMC, we tackle the highly nonlinear, noisy, and multimodal loss function, which depends on the HH neuron model. We tested the robustness of our approach using the voltage trace data generated from a 9-parameter HH model using five levels of injected currents (0.0, 0.1, 0.2, 0.3, and 0.4 nA). Each test consisted of running the ground truth with its respective currents to estimate the model parameters. To simulate the condition for fitting a frequency-current (F-I) curve, we also introduced an aggregate objective that runs MCMC against all five levels simultaneously. We found that MCMC was able to produce many solutions with acceptable loss values (e.g., for 0.0 nA, 889 solutions were within 0.5% of the best solution and 1,595 solutions within 1% of the best solution). Thus, an adaptive parallel tempering MCMC search provides a "landscape" of the possible parameter sets with acceptable loss values in a tractable manner. 
Our approach is able to obtain an intelligently sampled global view of the solution distributions within a search range in a single computation. Additionally, the advantage of uncertainty quantification allows for exploration of further solution spaces, which can serve to better inform future experiments.
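A toy version of parallel-tempering Metropolis sampling on a bimodal target illustrates why tempering helps with multimodal loss surfaces; this is a generic fixed-ladder sketch, far simpler than the adaptive scheme and 9-parameter HH model used in the paper:

```python
import numpy as np

def log_target(x):
    """Bimodal toy 'posterior': mixture of unit Gaussians at -3 and +3."""
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

def parallel_tempering(n_steps, temps=(1.0, 4.0, 16.0), step=1.0, seed=0):
    """Minimal parallel-tempering Metropolis sampler, one chain per temperature."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(temps))
    samples = []
    for _ in range(n_steps):
        # Metropolis update within each tempered chain (target pi^(1/T))
        for k, T in enumerate(temps):
            prop = x[k] + step * np.sqrt(T) * rng.standard_normal()
            if np.log(rng.random()) < (log_target(prop) - log_target(x[k])) / T:
                x[k] = prop
        # attempt a swap between a random adjacent pair of temperatures
        k = rng.integers(len(temps) - 1)
        dlog = (log_target(x[k + 1]) - log_target(x[k])) * (
            1 / temps[k] - 1 / temps[k + 1]
        )
        if np.log(rng.random()) < dlog:
            x[k], x[k + 1] = x[k + 1], x[k]
        samples.append(x[0])             # keep only the T = 1 (target) chain
    return np.array(samples)

samples = parallel_tempering(20_000)
```

The hot chains cross between modes easily and the swap moves propagate those crossings down to the target-temperature chain, so both modes are visited; a single-temperature Metropolis chain would typically stay trapped in one.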

