Concurrent Design Optimization and Calibration-Based Validation Using Local Domains Sized by Bootstrapping

Author(s):  
Dorin Drignei ◽  
Zissimos P. Mourelatos ◽  
Vijitashwa Pandey ◽  
Michael Kokkolaras

The design optimization process often relies on computational models for analysis or simulation. These models must be validated to quantify the expected accuracy of the obtained design solutions. It can be argued that validation of computational models over the entire design space is neither affordable nor required. In previous work, motivated by the fact that most numerical optimization algorithms generate a sequence of candidate designs, we proposed a paradigm in which design optimization and calibration-based model validation are performed concurrently in a sequence of variable-size local domains that are relatively small compared to the entire design space. A key element of this approach is how to account for variability in test data and model predictions in order to determine the size of the local domains at each stage of the sequential design optimization process. In this paper, we discuss two alternative techniques for accomplishing this: parametric and nonparametric bootstrapping. Parametric bootstrapping assumes a Gaussian distribution for the error between test and model data and uses maximum likelihood estimation to calibrate the prediction model. Nonparametric bootstrapping does not rely on the Gaussian assumption and therefore provides a more general way to size the local domains for applications where distributional assumptions are difficult to verify, or not met at all. If distributional assumptions are met, parametric methods are preferable to nonparametric methods. We use a benchmark problem from the validation literature to demonstrate the application of the two techniques, emphasizing that their results cannot be compared directly. Which technique to use depends on whether the Gaussian distribution assumption is appropriate based on available information.
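To illustrate the distinction the abstract draws, the following is a minimal sketch (not the authors' implementation) of parametric versus nonparametric bootstrapping of the test-minus-model error; the function name, sample sizes, and residual values are all hypothetical, and the resulting spread is the kind of variability measure that could drive the size of the next local domain.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_spread(errors, n_boot=2000, parametric=True, rng=rng):
    """Bootstrap the mean test-vs-model error and return its spread.

    Parametric: resample from a Gaussian fitted to the errors (MLE fit,
    matching the Gaussian-error assumption described in the abstract).
    Nonparametric: resample the observed errors with replacement, with
    no distributional assumption.
    """
    errors = np.asarray(errors, dtype=float)
    n = errors.size
    if parametric:
        mu, sigma = errors.mean(), errors.std()  # Gaussian MLE
        samples = rng.normal(mu, sigma, size=(n_boot, n))
    else:
        samples = rng.choice(errors, size=(n_boot, n), replace=True)
    # Std. dev. of the bootstrapped means: a variability measure for
    # sizing the local validation domain.
    return samples.mean(axis=1).std()

# Hypothetical test-minus-model residuals at 30 candidate designs
residuals = rng.normal(0.5, 2.0, size=30)
s_par = bootstrap_spread(residuals, parametric=True)
s_npar = bootstrap_spread(residuals, parametric=False)
```

When the Gaussian assumption holds, the two spreads are close; when it does not, only the nonparametric estimate remains trustworthy, which mirrors the trade-off stated above.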

2012 ◽  
Vol 134 (10) ◽  


Author(s):  
Ahmed H. Bayoumy ◽  
Michael Kokkolaras

We consider the problem of selecting among different computational models of various fidelity for evaluating the objective and constraint functions in numerical design optimization. Typically, higher-fidelity models are associated with higher computational cost. Therefore, it is desirable to employ them only when necessary. We introduce a reference error formulation that aims at determining whether lower-fidelity models (that are computationally cheaper) can be used in certain areas of the design space as the latter is being explored during the optimization process. The proposed approach is implemented using an existing trust region model management framework. We demonstrate the link between feasibility and fidelity and the key features of the proposed approach using the design example of a cantilever flexible beam subject to high accelerations.
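As a rough sketch of the idea (not the paper's trust-region model-management algorithm), one can compare a cheap and an expensive model at a reference point and keep the cheap one only while its relative deviation stays below a tolerance; the function names, the beam models, and the 5% tolerance below are all hypothetical.

```python
def select_model(x, f_lo, f_hi, tol=0.05):
    """Crude fidelity check: evaluate both models at a reference design
    point and keep the cheaper one while its relative deviation from the
    high-fidelity value stays below tol."""
    hi = f_hi(x)
    lo = f_lo(x)
    rel_err = abs(hi - lo) / max(abs(hi), 1e-12)
    return (f_lo if rel_err <= tol else f_hi), rel_err

# Hypothetical tip-deflection models of a cantilever beam (two fidelities):
f_high = lambda L: L**3 / 3.0          # expensive "truth" model
f_low  = lambda L: L**3 / 3.0 * 1.02   # cheap surrogate, ~2% off
model, err = select_model(2.0, f_low, f_high, tol=0.05)
```

In the actual approach the tolerance and the region where the cheap model is trusted would be managed adaptively by the trust-region framework as the design space is explored.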


Author(s):  
Po Ting Lin ◽  
Wei-Hao Lu ◽  
Shu-Ping Lin

In the past few years, researchers have begun to investigate the existence of arbitrary uncertainties in design optimization problems. Most traditional reliability-based design optimization (RBDO) methods transform the design space to the standard normal space for reliability analysis but may not work well when the random variables are arbitrarily distributed, because the transformation to the standard normal space cannot be determined or the distribution type is unknown. The methods of Ensemble of Gaussian-based Reliability Analyses (EoGRA) and Ensemble of Gradient-based Transformed Reliability Analyses (EGTRA) have been developed to estimate the joint probability density function using an ensemble of kernel functions. EoGRA performs a series of Gaussian-based kernel reliability analyses and merges them to compute the reliability of the design point. EGTRA transforms the design space to a single-variate design space along the constraint gradient, where the kernel reliability analyses become much less costly. In this paper, a series of comprehensive investigations was performed to study the similarities and differences between EoGRA and EGTRA. The results showed that EGTRA performs accurate and effective reliability analyses for both linear and nonlinear problems. When the constraints are highly nonlinear, EGTRA may encounter some difficulty but can still be effective when starting from deterministic optimal points. On the other hand, the sensitivity analyses of EoGRA may be ineffective when the random distribution lies completely inside the feasible space or the infeasible space. However, EoGRA can find acceptable design points when starting from deterministic optimal points. Moreover, EoGRA is capable of delivering the estimated failure probability of each constraint during the optimization process, which may be convenient for some applications.
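To make the kernel-ensemble idea concrete, here is a minimal sketch, assuming an equal-weight mixture of Gaussian kernels centered at the data points as the density model; it estimates the failure probability of a single limit state by Monte Carlo sampling from that mixture. The function name, bandwidth, data, and limit state are all hypothetical and much simpler than the EoGRA/EGTRA formulations.

```python
import numpy as np

rng = np.random.default_rng(1)

def kernel_failure_prob(centers, h, g, n_mc=20000, rng=rng):
    """Estimate P[g(X) > 0] when the density of X is modeled as an
    equal-weight mixture of isotropic Gaussian kernels, one per data
    point, with bandwidth h (the spirit of an ensemble of Gaussian-based
    reliability analyses)."""
    centers = np.atleast_2d(centers)
    n, d = centers.shape
    idx = rng.integers(0, n, size=n_mc)                 # pick a kernel
    x = centers[idx] + rng.normal(0.0, h, size=(n_mc, d))  # sample it
    return float(np.mean(g(x) > 0.0))

# Hypothetical 2-D data and a linear limit state g(x) = x1 + x2 - 3
data = rng.normal(1.0, 0.5, size=(200, 2))
pf = kernel_failure_prob(data, h=0.3, g=lambda x: x[:, 0] + x[:, 1] - 3.0)
```

Because the estimate is available constraint-by-constraint at every design point, this kind of quantity is what the abstract refers to as the per-constraint failure probability delivered during optimization.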


Author(s):  
Myung-Jin Choi ◽  
Min-Geun Kim ◽  
Seonho Cho

We developed a shape-design optimization method for thermo-elastoplasticity problems that is applicable to the welding or thermal deformation of hull structures. The goal is to determine the shape-design parameters such that the deformed shape after welding closely fits a desired design. The geometric parameters of curved surfaces are selected as the design parameters. Shell finite elements, forward-finite-difference sensitivities, the modified method of feasible directions algorithm, and the ANSYS Parametric Design Language in the established ANSYS code are employed in the shape optimization. The objective function is the weighted summation of differences between the deformed and the target geometries. The proposed method is effective even though new design variables are added to the design space during the optimization process, since multiple design optimization steps are used throughout the whole process. To obtain a better optimal design, the weights for the next design optimization step are determined based on the previous optimal results. Numerical examples demonstrate that severe localized deviations from the target design are effectively prevented in the optimal design.
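One plausible form of the weighted objective described above is sketched below, assuming the "difference" is the nodal distance between the deformed and target geometries; the function name, nodal coordinates, and uniform weights are hypothetical, and raising the weights where the previous optimum deviated most would mimic the multi-step re-weighting the abstract mentions.

```python
import numpy as np

def shape_objective(deformed, target, weights=None):
    """Weighted sum of nodal deviations between the deformed geometry
    (e.g. after welding) and the target geometry. Per-node weights allow
    later optimization steps to penalize regions that deviated most."""
    deformed = np.asarray(deformed, dtype=float)
    target = np.asarray(target, dtype=float)
    if weights is None:
        weights = np.ones(len(target))
    dist = np.linalg.norm(deformed - target, axis=1)  # per-node deviation
    return float(np.sum(weights * dist))

# Hypothetical nodal coordinates (3 nodes, 2-D)
target   = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
deformed = np.array([[0.0, 0.1], [1.0, 0.0], [2.0, -0.2]])
obj = shape_objective(deformed, target)
```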


2018 ◽  
Vol 10 (1) ◽  
pp. 168781401875472 ◽  
Author(s):  
Wei Sun ◽  
Xiaobang Wang ◽  
Maolin Shi ◽  
Zhuqing Wang ◽  
Xueguan Song

A multidisciplinary design optimization model is developed in this article to optimize the performance of the hard rock tunnel boring machine using the collaborative optimization architecture. The tunnel boring machine is a complex piece of engineering equipment with many coupled subsystems. In the multidisciplinary design optimization process established in this article, four sub-disciplines/subsystems are taken into account: the cutterhead system, the thrust system, the cutterhead driving system, and the economic model. Technology models of the tunnel boring machine's subsystems are built, and the objective of the multidisciplinary design optimization is to minimize the construction period at the system level of the hard rock tunnel boring machine. To further analyze the established multidisciplinary design optimization, the correlation between the design variables and the tunnel boring machine's performance is also explored. Results indicate that the multidisciplinary design optimization process has significantly improved the performance of the tunnel boring machine. Based on the optimization results, two additional excavating processes under different geological conditions are also optimized using the collaborative optimization architecture, and the corresponding optimum performance of the hard rock tunnel boring machine, such as cost and energy consumption, is compared and analyzed. Results demonstrate that the proposed multidisciplinary design optimization method for the tunnel boring machine is reliable and flexible in dealing with different geological conditions in practical engineering.


2000 ◽  
Author(s):  
Xiaoshi Jin

Runner system design for injection molds with multiple gates or multiple cavities often requires iterative analyses to obtain optimized results, because the gate locations or cavity shapes may not be naturally balanced. In addition, in molds with symmetrical layouts, the required injection pressure may be unnecessarily high if the runners are poorly sized. In this paper, a scheme for quickly optimizing runner system design is presented. The objective of the design optimization is to minimize the required injection pressure within the design space defined by a given total runner volume. Each runner segment can be given upper and lower limits to define the range of its cross-sectional dimensions. Application examples are included to demonstrate the effectiveness of the scheme.
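As a toy illustration of the formulation (not the paper's scheme), one can size two serial runner segments under a fixed total volume, using the laminar Hagen-Poiseuille pressure drop as the cost; the function names, lengths, flow rate, volume, and radius bounds below are all hypothetical.

```python
import math

def poiseuille_dp(r, L, Q, mu=1.0):
    """Laminar pressure drop in a circular runner segment (Hagen-Poiseuille)."""
    return 8.0 * mu * L * Q / (math.pi * r**4)

def size_two_runners(L1, L2, Q, V_total, r_lo=0.5, r_hi=3.0, steps=2000):
    """Brute-force the radius split of two serial runner segments that
    minimizes total pressure drop under a fixed total runner volume,
    with per-segment bounds on the radius."""
    best = None
    for i in range(1, steps):
        r1 = r_lo + (r_hi - r_lo) * i / steps
        v2 = V_total - math.pi * r1**2 * L1   # volume left for segment 2
        if v2 <= 0:
            break
        r2 = math.sqrt(v2 / (math.pi * L2))   # radius that uses it up
        if not (r_lo <= r2 <= r_hi):
            continue
        dp = poiseuille_dp(r1, L1, Q) + poiseuille_dp(r2, L2, Q)
        if best is None or dp < best[0]:
            best = (dp, r1, r2)
    return best

dp, r1, r2 = size_two_runners(L1=10.0, L2=10.0, Q=1.0, V_total=100.0)
```

For equal segment lengths the optimal split is symmetric (r1 ≈ r2), which is a useful sanity check; a real runner network would branch, carry different flow rates per branch, and use a proper optimizer rather than a grid search.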


Author(s):  
Konstantinos E. Parsopoulos ◽  
Michael N. Vrahatis

This chapter presents the fundamental concepts regarding the application of PSO to machine learning problems. The main objective in such problems is the training of computational models to perform classification and simulation tasks. It is not our intention to provide a literature review of the numerous related applications. Instead, we aim to provide guidelines for the application and adaptation of PSO to this problem type. To achieve this, we focus on two representative cases, namely the training of artificial neural networks and learning in fuzzy cognitive maps. In each case, the problem is first defined in a general framework, and then an illustrative example is provided to familiarize readers with the main procedures and possible obstacles that may arise during the optimization process.
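The neural-network training case can be sketched as follows: a global-best PSO minimizes the mean-squared error of a tiny network over its flattened weight vector. This is a minimal illustration, not the chapter's code; the network size, PSO coefficients, and regression task are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def nn_loss(w, X, y):
    """Mean-squared error of a 2-2-1 tanh network, weights unpacked from w."""
    W1 = w[:4].reshape(2, 2); b1 = w[4:6]
    W2 = w[6:8];              b2 = w[8]
    h = np.tanh(X @ W1 + b1)
    return float(np.mean((h @ W2 + b2 - y) ** 2))

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, rng=rng):
    """Minimal global-best PSO: inertia w, cognitive c1, social c2."""
    x = rng.uniform(-1, 1, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pval
        pbest[improved], pval[improved] = x[improved], vals[improved]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

# Hypothetical regression task: learn y = x1 * x2 on a few points
X = rng.uniform(-1, 1, (40, 2))
y = X[:, 0] * X[:, 1]
w_best, loss = pso(lambda w: nn_loss(w, X, y), dim=9)
```

Because PSO needs only function values, the same loop applies unchanged to non-differentiable objectives such as the fuzzy cognitive map learning problem mentioned above.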

