A High-Dimensional Reliability Analysis Method for Simulation-Based Design Under Uncertainty

2018
Vol 140 (7)
Author(s):
Mohammad Kazem Sadoughi
Meng Li
Chao Hu
Cameron A. MacKenzie
Soobum Lee
...

Reliability analysis involving high-dimensional, computationally expensive, highly nonlinear performance functions is a notoriously challenging problem in simulation-based design under uncertainty. In this paper, we tackle this problem by proposing a new method, high-dimensional reliability analysis (HDRA), in which a surrogate model is built to approximate a performance function that is high dimensional, computationally expensive, implicit, and unknown to the user. HDRA first employs the adaptive univariate dimension reduction (AUDR) method to construct a global surrogate model by adaptively tracking the important dimensions or regions. Then, the sequential exploration–exploitation with dynamic trade-off (SEEDT) method is utilized to locally refine the surrogate model by identifying additional sample points that are close to the critical region (i.e., the limit-state function (LSF)) with high prediction uncertainty. The HDRA method has three advantages: (i) alleviating the curse of dimensionality and adaptively detecting important dimensions; (ii) capturing the interactive effects among variables on the performance function; and (iii) flexibility in choosing the locations of sample points. The performance of the proposed method is tested through three mathematical examples and a real-world problem, the results of which suggest that the method can achieve an accurate and computationally efficient estimation of reliability even when the performance function exhibits high dimensionality, high nonlinearity, and strong interactions among variables.
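The quantity such surrogate-based methods ultimately target — the probability that the performance function stays above its limit state — can be illustrated with a plain Monte Carlo sketch (stdlib-only; the quadratic stand-in function g, the standard-normal inputs, and the sample size are assumptions for illustration, not from the paper):

```python
import random

# Hypothetical stand-in for an expensive performance function g(x);
# the system "fails" when g(x) < 0 (here: when the mean square of x exceeds 3).
def g(x):
    return 3.0 - sum(xi ** 2 for xi in x) / len(x)

def reliability_monte_carlo(g, dim, n_samples, seed=0):
    """Crude Monte Carlo estimate of reliability = 1 - P(g(X) < 0),
    with independent standard-normal inputs."""
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(n_samples)
        if g([rng.gauss(0.0, 1.0) for _ in range(dim)]) < 0.0
    )
    return 1.0 - failures / n_samples

r = reliability_monte_carlo(g, dim=5, n_samples=20000)
```

In practice the expensive g would be replaced by the cheap surrogate before running the loop, which is exactly why surrogate accuracy near the limit-state function matters most.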


2021
Vol 144 (3)
Author(s):
Dequan Zhang
Yunfei Liang
Lixiong Cao
Jie Liu
Xu Han

Abstract It is generally understood that the intractable computational cost of repeatedly calling the performance function when evaluating the contribution of joint focal elements hinders the application of evidence theory in practical engineering. To promote the practicability of evidence theory for the reliability evaluation of engineering structures, an efficient reliability analysis method based on an active learning Kriging model is proposed in this study. To start with, a basic variable is selected according to the basic probability assignment (BPA) of the evidence variables to divide the evidence space into sub-evidence spaces. Intersection points between the performance function and the sub-evidence spaces are then determined by solving a univariate root-finding problem. Sample points are randomly identified to enhance the accuracy of the surrogate model established next. An initial Kriging model with high approximation accuracy is then built from these intersection points and additional sample points generated by Latin hypercube sampling. An active learning function is employed to sequentially refine the Kriging model with minimal sample points. As a result, the belief (Bel) and plausibility (Pl) measures are derived efficiently via the surrogate model in the evidence-theory-based reliability analysis. The proposed method is exemplified with three numerical examples to demonstrate its efficiency and is applied to the reliability analysis of positioning accuracy for an industrial robot.
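The univariate root-finding step used to locate intersection points between the performance function and a sub-evidence interval can be sketched with a simple bisection (a stdlib-only illustration, not the authors' code; the example function, interval, and tolerance are assumptions):

```python
def find_lsf_crossing(g, lo, hi, tol=1e-8):
    """Bisection search for a root of g on [lo, hi], i.e. a point where
    the performance function crosses the limit state within this
    sub-evidence interval. Assumes g changes sign on the interval."""
    glo, ghi = g(lo), g(hi)
    if glo * ghi > 0:
        return None  # no sign change detected: no crossing in this interval
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        gm = g(mid)
        if abs(gm) < tol or hi - lo < tol:
            return mid
        if glo * gm <= 0:
            hi = mid            # root lies in the lower half
        else:
            lo, glo = mid, gm   # root lies in the upper half
    return 0.5 * (lo + hi)

# Example: g(x) = x^2 - 2 crosses zero at sqrt(2) on [0, 2].
x_star = find_lsf_crossing(lambda x: x ** 2 - 2.0, 0.0, 2.0)
```

Points found this way sit exactly on the limit-state surface, which is why they make informative seeds for the initial Kriging model.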


Author(s):  
Ungki Lee ◽  
Ikjin Lee

Abstract Reliability analysis that evaluates a probabilistic constraint is an important part of reliability-based design optimization (RBDO). Inverse reliability analysis evaluates the percentile value of the performance function that satisfies the target reliability. To compute the percentile value, analytical methods, surrogate-model-based methods, and sampling-based methods are commonly used. When the dimension or nonlinearity of the performance function is high, sampling-based methods such as Monte Carlo simulation, Latin hypercube sampling, and importance sampling can be applied directly, since they require no analytical formulation or surrogate model. Sampling-based methods are highly accurate but require a large number of samples, which can be very time-consuming. Therefore, this paper proposes methods that improve the accuracy of reliability analysis when the number of samples is insufficient and sampling-based methods are the better candidates. The study starts from the idea of learning the relationship between the realization of the performance function at a small sample size and the corresponding true percentile value of the performance function. A deep feedforward neural network (DFNN), a promising artificial neural network model that approximates high-dimensional models using deep layered structures, is trained using realizations of various performance functions at a small sample size as input data and the corresponding true percentile values as target data. Various polynomial functions and random variables are used to create training data sets consisting of such realizations and their true percentile values.
A method is also presented that approximates the realization of the performance function through kernel density estimation and trains the DFNN on discrete points representing the shape of the kernel distribution, reducing the dimension of the training input data. Along with the proposed reliability analysis methods, a strategy that reuses samples from the previous design point is described to enhance the efficiency of the percentile value estimation. The results show that reliability analysis using the DFNN is more accurate than the method using only samples. In addition, compared with training the DFNN on raw realizations of the performance function, training it on the discrete points representing the shape of the kernel distribution improves the accuracy of reliability analysis and reduces the training time. The proposed sample reuse strategy is verified to reduce the burden of function evaluation at a new design point by reusing the samples of the previous design point when the design point changes during RBDO.
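The kernel-density step — turning a small sample of performance-function values into a fixed-length feature vector of discrete density points for the network input — can be sketched as follows (stdlib-only; the grid, the Gaussian kernel, and the Silverman bandwidth rule are assumptions, not necessarily the authors' choices):

```python
import math
import random
import statistics

def kde_features(samples, grid, bandwidth=None):
    """Gaussian kernel density estimate of the sample, evaluated at fixed
    grid points, giving a fixed-length feature vector (hypothetical DFNN
    input) regardless of the sample size."""
    n = len(samples)
    if bandwidth is None:
        # Silverman's rule of thumb for a Gaussian kernel
        bandwidth = 1.06 * statistics.stdev(samples) * n ** (-1 / 5)
    feats = []
    for t in grid:
        dens = sum(math.exp(-0.5 * ((t - s) / bandwidth) ** 2) for s in samples)
        feats.append(dens / (n * bandwidth * math.sqrt(2 * math.pi)))
    return feats

# A small (50-point) sample standing in for realizations of a performance function.
rng = random.Random(1)
small_sample = [rng.gauss(0.0, 1.0) for _ in range(50)]
grid = [-3.0 + 0.5 * k for k in range(13)]  # 13 fixed evaluation points
x = kde_features(small_sample, grid)
```

The appeal of this representation is that the network input dimension is set by the grid length, not the sample size, so the same trained DFNN can consume samples of varying sizes.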


Author(s):  
Qian Wang

Engineering reliability analysis has long been an active research area. Surrogate models, or metamodels, are approximate models that can be created to replace implicit performance functions in the probabilistic analysis of engineering systems. Traditional first-order or second-order high-dimensional model representation (HDMR) methods have been shown to construct accurate surrogate models of response functions in engineering reliability analysis. Although very efficient and easy to implement, first-order HDMR models may not be accurate, since the cross-effects of variables are neglected. Second-order HDMR models are more accurate; however, they are more complicated to implement. Moreover, they require many more sample points, i.e., finite element (FE) simulations, if FE analyses are employed to compute values of a performance function. In this work, a new probabilistic analysis approach combining iterative HDMR and the first-order reliability method (FORM) is investigated. Once a performance function is replaced by a first-order HDMR model, an alternate FORM is applied. To include higher-order contributions, additional sample points are generated and the HDMR models are updated before FORM is reapplied. The iteration continues until the reliability index converges. The novelty of the proposed iterative strategy is that it greatly improves the efficiency of the numerical algorithm. As numerical examples, two engineering problems are studied and reliability analyses are performed. Reliability indices are obtained within a few iterations and are found to have good accuracy. The proposed method combining iterative HDMR and FORM provides a useful tool for practical engineering applications.
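The FORM half of the loop can be sketched with the classic HL-RF iteration in standard-normal space (a minimal stdlib sketch of FORM itself, not of the paper's HDMR coupling; the linear limit state in the example is an assumption chosen so the exact reliability index is known):

```python
import math

def form_hlrf(g, grad_g, dim, max_iter=50, tol=1e-8):
    """HL-RF iteration for the first-order reliability index beta:
    repeatedly projects onto the linearized limit state g(u) = 0
    in standard-normal space until beta stabilizes."""
    u = [0.0] * dim
    beta = 0.0
    for _ in range(max_iter):
        gv = g(u)
        gr = grad_g(u)
        norm2 = sum(c * c for c in gr)
        dot = sum(c * ui for c, ui in zip(gr, u))
        # Next iterate of the most probable point (MPP)
        u = [(dot - gv) / norm2 * c for c in gr]
        beta_new = math.sqrt(sum(c * c for c in u))
        if abs(beta_new - beta) < tol:
            return beta_new
        beta = beta_new
    return beta

# Linear limit state g(u) = 3 - u1 - u2 has the exact beta = 3 / sqrt(2).
beta = form_hlrf(lambda u: 3.0 - u[0] - u[1],
                 lambda u: [-1.0, -1.0], dim=2)
```

In the iterative scheme described above, g would be the current HDMR surrogate, so each FORM run costs only surrogate evaluations rather than FE simulations.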


Author(s):  
Wei Chen ◽  
Ruichen Jin ◽  
Agus Sudjianto

The importance of sensitivity analysis in engineering design cannot be over-emphasized. In design under uncertainty, sensitivity analysis is performed with respect to the probabilistic characteristics. Global sensitivity analysis (GSA), in particular, is used to study the impact of variations in input variables on the variation of a model output. One of the most challenging issues for GSA is the intensive computational demand for assessing the impact of probabilistic variations. Existing variance-based GSA methods are developed for general functional relationships but require a large number of samples. In this work, we develop an efficient and accurate approach to GSA that employs analytic formulations derived from metamodels of engineering simulation models. We examine the types of GSA needed for design under uncertainty and derive generalized analytical formulations of GSA based on a variety of metamodels commonly used in engineering applications. The benefits of our proposed techniques are demonstrated and verified through both illustrative mathematical examples and the robust design for improving vehicle handling performance.
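As a concrete instance of the analytic-formulation idea, the first-order and interaction Sobol' indices of a bilinear metamodel with independent uniform inputs can be written in closed form (a hand-derived sketch under stated assumptions — the bilinear form and U(0,1) inputs — not the paper's generalized formulas):

```python
def sobol_indices_bilinear(b1, b2, b12):
    """Analytic Sobol' indices for the metamodel
        g(x1, x2) = b1*x1 + b2*x2 + b12*x1*x2,
    with x1, x2 independent and uniform on (0, 1)."""
    v1 = (b1 + b12 / 2) ** 2 / 12   # Var over x1 of E[g | x1]
    v2 = (b2 + b12 / 2) ** 2 / 12   # Var over x2 of E[g | x2]
    v12 = b12 ** 2 / 144            # pure interaction variance
    total = v1 + v2 + v12           # exact variance decomposition
    return v1 / total, v2 / total, v12 / total

s1, s2, s12 = sobol_indices_bilinear(1.0, 2.0, 0.5)
```

Because the indices come from the metamodel's coefficients directly, no sampling of the simulation model is needed once the metamodel is fitted — which is precisely the computational benefit the abstract describes.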


2021
Vol 2021
pp. 1-16
Author(s):
Yixuan Dong
Shijie Wang

Structural reliability analysis is usually realized based on a multivariate performance function that depicts the failure mechanisms of a structural system. The intensive computational cost of brute-force Monte Carlo simulation motivates proposing a Gegenbauer polynomial-based surrogate model for effective structural reliability analysis in this paper. By first utilizing the orthogonal matching pursuit algorithm to detect significant explanatory variables, a small number of samples is used to determine a reliable approximation of the structural performance function. Several numerical examples from the literature are presented to demonstrate potential applications of the Gegenbauer polynomial-based sparse surrogate model. Accurate results justify the effectiveness of the proposed approach in dealing with various structural reliability problems.
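The Gegenbauer basis underlying such a surrogate can be evaluated with its standard three-term recurrence (a stdlib-only sketch of the basis only, not of the authors' sparse-regression pipeline; alpha = 1/2 recovers the Legendre polynomials, which gives an easy correctness check):

```python
def gegenbauer(n, alpha, x):
    """Gegenbauer polynomial C_n^(alpha)(x) via the three-term recurrence:
        C_0 = 1,  C_1 = 2*alpha*x,
        n*C_n = 2*x*(n + alpha - 1)*C_{n-1} - (n + 2*alpha - 2)*C_{n-2}.
    These polynomials form the orthogonal basis of the surrogate."""
    if n == 0:
        return 1.0
    c_prev, c_curr = 1.0, 2.0 * alpha * x
    for k in range(2, n + 1):
        c_prev, c_curr = c_curr, (2.0 * x * (k + alpha - 1.0) * c_curr
                                  - (k + 2.0 * alpha - 2.0) * c_prev) / k
    return c_curr

# alpha = 1/2 gives the Legendre polynomials: P_2(x) = (3x^2 - 1) / 2.
val = gegenbauer(2, 0.5, 0.3)  # P_2(0.3) = -0.365
```

A sparse surrogate then keeps only the few basis terms that orthogonal matching pursuit selects, which is what keeps the required sample count small.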


2016
Vol 138 (11)
Author(s):
Mian Li
Sankaran Mahadevan
Samy Missoum
Zissimos P. Mourelatos
