A Reference Error Formulation for Multi-Fidelity Design Optimization

Author(s):  
Ahmed H. Bayoumy ◽  
Michael Kokkolaras

We consider the problem of selecting among different computational models of various fidelity for evaluating the objective and constraint functions in numerical design optimization. Typically, higher-fidelity models are associated with higher computational cost. Therefore, it is desirable to employ them only when necessary. We introduce a reference error formulation that aims at determining whether lower-fidelity models (that are computationally cheaper) can be used in certain areas of the design space as the latter is being explored during the optimization process. The proposed approach is implemented using an existing trust region model management framework. We demonstrate the link between feasibility and fidelity and the key features of the proposed approach using the design example of a cantilever flexible beam subject to high accelerations.

2019 ◽  
Vol 142 (2) ◽  
Author(s):  
Ahmed H. Bayoumy ◽  
Michael Kokkolaras

Abstract We consider the problem of selecting among different physics-based computational models of varying, and oftentimes not assessed, fidelity for evaluating the objective and constraint functions in numerical design optimization. Typically, higher-fidelity models are associated with higher computational cost. Therefore, it is desirable to employ them only when necessary. We introduce a relative adequacy framework that aims at determining whether lower-fidelity models (that are typically associated with lower computational cost) can be used in certain areas of the design space as the latter is being explored during the optimization process. We implement our approach by means of a trust-region management framework that utilizes the mesh adaptive direct search derivative-free optimization algorithm. We demonstrate the link between feasibility and fidelity and the key features of the proposed approach using two design optimization examples: a cantilever flexible beam subject to high accelerations and an airfoil in transonic flow conditions.
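The trust-region model-management idea underlying this abstract can be sketched in a few lines: a candidate step proposed by the cheap model is accepted only if the expensive model confirms the predicted decrease. The toy objective, the two fidelity models, and the acceptance thresholds below are hypothetical illustrations, not the authors' actual relative-adequacy criterion.

```python
import numpy as np

def f_high(x):
    # Hypothetical "high-fidelity" objective (stand-in for an expensive simulation).
    return (x - 1.0) ** 2 + 0.1 * np.sin(5 * x)

def f_low(x):
    # Cheaper low-fidelity model: captures the trend, misses the oscillation.
    return (x - 1.0) ** 2

def trust_region_mf(x, radius=1.0, tol=1e-3, max_iter=50):
    """Accept low-fidelity-proposed steps only when the high-fidelity model
    confirms the predicted decrease (the classic trust-region rho test)."""
    for _ in range(max_iter):
        # Poll the cheap model at the two trust-region endpoints (1-D case).
        x_new = min([x - radius, x + radius], key=f_low)
        pred = f_low(x) - f_low(x_new)       # decrease predicted by the cheap model
        actual = f_high(x) - f_high(x_new)   # decrease measured by the expensive model
        rho = actual / pred if pred > 0 else -1.0
        if rho > 0.1:                        # cheap model adequate here: accept step
            x = x_new
            if rho > 0.75:                   # very good agreement: expand the region
                radius = min(2.0 * radius, 4.0)
        else:                                # inadequate: shrink and re-poll
            radius *= 0.5
        if radius < tol:
            break
    return x

x_star = trust_region_mf(x=4.0)
```

Note that the expensive model is evaluated only once per iteration, to audit the cheap model's prediction; this is the sense in which higher-fidelity models are employed "only when necessary."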


2000 ◽  
Vol 124 (1-2) ◽  
pp. 139-154 ◽  
Author(s):  
José F. Rodríguez ◽
John E. Renaud ◽  
Brett A. Wujek ◽  
Ravindra V. Tappeta

Author(s):  
Brett A. Wujek ◽  
John E. Renaud

Abstract Approximations play an important role in multidisciplinary design optimization (MDO) by offering system behavior information at relatively low cost. Most approximate optimization strategies are sequential: an optimization of an approximate problem, subject to design-variable move limits, is repeated iteratively until convergence. The move limits are imposed to restrict the optimization to regions of the design space in which the approximations provide meaningful information. To ensure convergence of the sequence of approximate optimizations to a Karush-Kuhn-Tucker (KKT) solution, a move-limit management strategy is required. In this paper, issues of move-limit management are reviewed and a new adaptive strategy for move-limit management is developed. With its basis in the provably convergent trust-region methodology, the TRAM (Trust Region Ratio Approximation Method) strategy utilizes available gradient information and employs a backtracking process using various two-point approximation techniques to provide a flexible move-limit adjustment factor. The new strategy is successfully applied to a suite of multidisciplinary design optimization test problems. These implementation studies highlight the ability of the TRAM strategy to control the amount of approximation error and to efficiently manage convergence to a KKT solution.
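The trust-region ratio at the heart of such move-limit strategies compares the improvement predicted by the approximation with the improvement actually achieved. The thresholds and factors below follow the generic textbook rule and are only illustrative; the actual TRAM adjustment factor additionally uses two-point approximations and backtracking.

```python
def move_limit_update(actual_decrease, predicted_decrease, limit,
                      shrink=0.5, expand=2.0):
    """Generic trust-region-ratio move-limit rule (illustrative thresholds;
    TRAM itself derives a flexible adjustment factor from two-point
    approximations and backtracking)."""
    ratio = actual_decrease / predicted_decrease
    if ratio < 0.25:       # approximation over-predicted: tighten move limits
        return shrink * limit
    if ratio > 0.75:       # approximation tracked the true functions: relax limits
        return expand * limit
    return limit           # acceptable agreement: keep current limits

# e.g. a decrease of 1.0 was predicted but only 0.1 achieved -> halve the limits
new_limit = move_limit_update(0.1, 1.0, limit=0.2)
```

The same ratio test governs step acceptance in classical trust-region methods; here it controls only the size of the region in which the next approximate subproblem is solved.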


Author(s):  
Christopher Chahine ◽  
Joerg R. Seume ◽  
Tom Verstraete

Aerodynamic turbomachinery component design is a very complex task. Although modern CFD solvers allow for a detailed investigation of the flow, the interaction between design changes and the three-dimensional flow field is highly complex and difficult to understand. Thus, a trial-and-error approach is often applied, and a design relies heavily on the experience of the designer and on empirical correlations. Moreover, the simultaneous satisfaction of aerodynamic and mechanical requirements very often leads to tedious iterations between the different disciplines. Modern optimization algorithms can support the designer in finding high-performing designs. However, many optimization methods require performance evaluations of a large number of different geometries. In the context of turbomachinery design, this often involves computationally expensive computational fluid dynamics (CFD) and computational structural mechanics (CSM) calculations. Thus, in order to reduce the total computational time, optimization algorithms are often coupled with approximation techniques, often referred to as metamodels in the literature. Metamodels approximate the performance of a design at very low computational cost and thus allow a time-efficient automatic optimization. However, experience from past optimizations suggests that metamodel predictions are often not reliable and can even result in designs that violate the imposed constraints. In the present work, the impact of the inaccuracy of a metamodel on the design optimization of a radial compressor impeller is investigated, and it is examined whether an optimization without a metamodel delivers better results. A multidisciplinary, multiobjective optimization system based on a Differential Evolution algorithm, developed at the von Karman Institute for Fluid Dynamics, is applied.
The results show that the metamodel can be used efficiently to explore the design space at low computational cost and to guide the search towards a global optimum. However, better-performing designs can be found when the metamodel is excluded from the optimization. Completely avoiding the metamodel, on the other hand, results in a very high computational cost. Based on the results obtained in the present work, a method is proposed which combines the advantages of both approaches by first using the metamodel as a rapid exploration tool and then switching to the accurate optimization without a metamodel for further exploitation of the design space.
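The proposed two-phase strategy can be sketched as follows. The one-variable "expensive" function and the polynomial metamodel below are hypothetical stand-ins for CFD/CSM evaluations and the surrogate actually used; the point is the hand-off from cheap exploration to direct exploitation.

```python
import numpy as np

def expensive_eval(x):
    # Hypothetical stand-in for a costly CFD/CSM performance evaluation.
    return np.sin(3.0 * x) + (x / 2.0) ** 2

def two_phase_optimize(lo=-2.0, hi=2.0, n_samples=9):
    # Phase 1: build a cheap polynomial metamodel from a few expensive samples
    # and use it to explore the design space and pick a promising start point.
    xs = np.linspace(lo, hi, n_samples)
    ys = np.array([expensive_eval(x) for x in xs])
    coeffs = np.polyfit(xs, ys, deg=4)
    grid = np.linspace(lo, hi, 1001)
    x0 = grid[np.argmin(np.polyval(coeffs, grid))]  # metamodel optimum (may be biased)
    # Phase 2: switch the metamodel off and exploit locally with direct,
    # accurate evaluations only (step-halving neighborhood search).
    x, step = x0, (hi - lo) / n_samples
    while step > 1e-4:
        x = min([x - step, x, x + step], key=expensive_eval)
        step *= 0.5
    return x0, x

x0, x_refined = two_phase_optimize()
```

Phase 2 only ever accepts a neighbor if it improves the true objective, so the refined design is never worse than the metamodel's suggestion, mirroring the paper's observation that direct optimization improves on the surrogate's answer.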


2016 ◽  
Vol 33 (7) ◽  
pp. 2007-2018 ◽  
Author(s):  
Slawomir Koziel ◽  
Adrian Bekasiewicz

Purpose: Development of techniques for expedited design optimization of complex and numerically expensive electromagnetic (EM) simulation models of antenna structures, validated both numerically and experimentally.
Design/methodology/approach: The optimization task is performed using a technique that combines gradient search with adjoint sensitivities, a trust-region framework, and EM simulation models of various fidelity levels (coarse, medium and fine). An adaptive procedure for switching between models of increasing accuracy in the course of the optimization process is implemented. Numerical and experimental case studies are provided to validate the correctness of the design approach.
Findings: An appropriate combination of a suitable design optimization algorithm embedded in a trust-region framework with model selection techniques allows considerable reduction of the antenna optimization cost compared to conventional methods.
Research limitations/implications: The study demonstrates the feasibility of EM-simulation-driven design optimization of antennas at low computational cost. The presented techniques reach beyond common design approaches based on direct optimization of EM models using conventional gradient-based or derivative-free methods, particularly in terms of reliability and reduction of the computational cost of the design process.
Originality/value: Simulation-driven design optimization of contemporary antenna structures is very challenging when high-fidelity EM simulations are utilized for performance evaluation of the structure at hand. The proposed variable-fidelity optimization technique with adjoint sensitivities and trust regions permits rapid optimization of numerically demanding antenna designs (here, a dielectric resonator antenna and a compact monopole), which cannot be achieved when conventional methods are used. The design cost of the proposed strategy is up to 60 percent lower than that of direct optimization exploiting adjoint sensitivities. Experimental validation of the results is also provided.
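The coarse-to-fine model switching described here can be illustrated schematically. Everything below is hypothetical (simple analytic "models" and a derivative-free line search in place of the gradient/adjoint optimizer of the paper); the point is only the warm-start hand-off from one fidelity level to the next.

```python
def local_minimize(f, x, step=0.5, tol=1e-5):
    # Simple step-halving search (stand-in for the gradient-based optimizer
    # with adjoint sensitivities used in the paper).
    while step > tol:
        if f(x + step) < f(x):
            x += step
        elif f(x - step) < f(x):
            x -= step
        else:
            step *= 0.5
    return x

def optimize_variable_fidelity(models, x0):
    """Converge on the cheapest model first, then hand the result to the next
    fidelity level as a warm start. (The paper switches adaptively during the
    run; promoting only after convergence is a simplification.)"""
    x = x0
    for model in models:          # ordered coarse -> medium -> fine
        x = local_minimize(model, x)
    return x

# Hypothetical fidelity ladder: each level shifts the response slightly.
coarse = lambda x: (x - 1.0) ** 2
medium = lambda x: (x - 1.1) ** 2
fine = lambda x: (x - 1.12) ** 2 + 0.01

x_opt = optimize_variable_fidelity([coarse, medium, fine], x0=3.0)
```

Most of the iterations are spent on the coarse model, so the fine model is only asked to polish an already-good design; this is where the reported cost reduction comes from.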


Author(s):  
Noriyasu Hirokawa ◽  
Kikuo Fujita

This paper proposes a mini-max formulation for strict robust design optimization under correlative variation, based on a design-variation hypersphere and quadratic polynomial approximation. While various formulations and techniques have been developed for computational robust design, they face a compromise among the modeling of parameter variation, feasibility assessment, the definition of optimality (e.g., sensitivity), and computational cost. The formulation in this paper aims to ensure that all points within the distribution region are optimized. For this purpose, the design space with correlative variation is diagonalized and isoparameterized into a hypersphere, and the nominal constraint and objective functions are modeled as quadratic polynomials. This transformation and approximation enable analytical discrimination of whether the worst design lies in the interior or on the boundary, together with quantification of its value, at low computational cost under certain conditions; they also yield a procedural definition of the strict robust optimality of a design as a maximization problem. Minimization of this formulation, that is, mini-max optimization, finds the robust design in the above sense. Its validity is demonstrated through numerical examples.
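A brute-force version of the mini-max idea can be sketched as follows; the objective, the variation radius, and the sampled inner maximization are all hypothetical illustrations. The paper's contribution is precisely to replace the sampled inner loop with an analytical treatment via quadratic models on the hypersphere.

```python
import numpy as np

def g(x):
    # Hypothetical nominal objective (the paper fits quadratic polynomials;
    # here we evaluate it directly for clarity).
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

def worst_case(x, r, n=64):
    """Inner maximization: worst objective over a variation hypersphere of
    radius r around x, sampled on the boundary. For this convex objective the
    worst point lies on the boundary, so the circle plus the center suffices."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    pts = x + r * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    return max(g(x), max(g(p) for p in pts))

def minimax(x0, r=0.3, step=0.5, tol=1e-4):
    """Outer minimization of the worst-case value via a simple coordinate poll."""
    x = np.array(x0, dtype=float)
    while step > tol:
        moved = False
        for d in [np.array([step, 0.0]), np.array([-step, 0.0]),
                  np.array([0.0, step]), np.array([0.0, -step])]:
            if worst_case(x + d, r) < worst_case(x, r):
                x = x + d
                moved = True
        if not moved:
            step *= 0.5
    return x

x_rob = minimax([3.0, 2.0])
```

Every outer-loop evaluation costs a full sweep of the variation sphere here, which is exactly the expense the paper's analytical inner-problem solution avoids.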



1998 ◽  
Vol 120 (1) ◽  
pp. 58-66 ◽  
Author(s):  
J. F. Rodríguez ◽
J. E. Renaud ◽  
L. T. Watson

A common engineering practice is the use of approximation models in place of expensive computer simulations to drive a multidisciplinary design process based on nonlinear programming techniques. The use of approximation strategies is designed to reduce the number of detailed, costly computer simulations required during optimization while maintaining the pertinent features of the design problem. To date, the primary focus of most approximate optimization strategies is that application of the method should lead to improved designs. This is a laudable attribute and certainly relevant for practicing designers. However, few researchers have focused on the development of approximate optimization strategies that are assured of converging to a solution of the original problem. Recent work based on trust-region model management strategies has shown promise in managing convergence in unconstrained approximate minimization. In this research, we extend these well-established notions from the literature on trust-region methods to manage the convergence of the more general approximate optimization problem, in which equality, inequality, and variable-bound constraints are present. The primary concern addressed in this study is how to manage the interaction between the optimization and the fidelity of the approximation models to ensure that the process converges to a solution of the original constrained design problem. Using a trust-region model management strategy coupled with an augmented Lagrangian approach for constrained approximate optimization, one can show that the optimization process converges to a solution of the original problem. In this research, an approximate optimization strategy is developed in which a cumulative response surface approximation of the augmented Lagrangian is sequentially optimized subject to a trust-region constraint. Results for several test problems are presented in which convergence to a Karush-Kuhn-Tucker (KKT) point is observed.
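The augmented-Lagrangian-plus-trust-region scheme can be sketched on a toy equality-constrained quadratic. Everything below is an assumption for illustration: the problem, the fixed penalty and radius, and the closed-form subproblem solve, which stands in for the paper's sequential optimization of a cumulative response surface of the augmented Lagrangian.

```python
import numpy as np

# Toy problem: minimize f(x) = (x0-2)^2 + (x1-1)^2
# subject to h(x) = x0 + x1 - 2 = 0. Optimum: x* = (1.5, 0.5), lambda* = 1.
def h(x):
    return x[0] + x[1] - 2.0

def al_step(x, lam, mu, radius):
    """One subproblem: minimize the (quadratic) augmented Lagrangian
    L_A = f + lam*h + (mu/2)*h^2 exactly, then clip the step to the
    trust-region radius. (A closed-form stand-in for optimizing a response
    surface of L_A inside the trust region.)"""
    # Stationarity of L_A gives the linear system A @ x = b.
    A = np.array([[2.0 + mu, mu], [mu, 2.0 + mu]])
    b = np.array([4.0 - lam + 2.0 * mu, 2.0 - lam + 2.0 * mu])
    step = np.linalg.solve(A, b) - x
    n = np.linalg.norm(step)
    if n > radius:
        step *= radius / n          # honor the trust-region constraint
    return x + step

x, lam, mu = np.zeros(2), 0.0, 10.0
for _ in range(20):                 # outer augmented-Lagrangian iterations
    x = al_step(x, lam, mu, radius=1.0)
    lam += mu * h(x)                # first-order multiplier update
```

The first iterations hit the trust-region boundary; once the iterates are close, the multiplier update drives the constraint violation to zero, which is the convergence behavior the paper establishes for the general constrained case.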


2012 ◽  
Vol 134 (10) ◽  
Author(s):  
Dorin Drignei ◽  
Zissimos P. Mourelatos ◽  
Vijitashwa Pandey ◽  
Michael Kokkolaras

The design optimization process often relies on computational models for analysis or simulation. These models must be validated to quantify the expected accuracy of the obtained design solutions. It can be argued that validation of computational models over the entire design space is neither affordable nor required. In previous work, motivated by the fact that most numerical optimization algorithms generate a sequence of candidate designs, we proposed a new paradigm in which design optimization and calibration-based model validation are performed concurrently in a sequence of variable-size local domains that are relatively small compared to the entire design space. A key element of this approach is how to account for variability in test data and model predictions in order to determine the size of the local domains at each stage of the sequential design optimization process. In this article, we discuss two alternative techniques for accomplishing this: parametric and nonparametric bootstrapping. Parametric bootstrapping assumes a Gaussian distribution for the error between test and model data and uses maximum likelihood estimation to calibrate the prediction model. Nonparametric bootstrapping does not rely on the Gaussian assumption, therefore providing a more general way to size the local domains for applications where distributional assumptions are difficult to verify or are not met at all; if the distributional assumptions are met, parametric methods are preferable to nonparametric methods. We use a benchmark problem from the validation literature to demonstrate the application of the two techniques. Which technique to use depends on whether the Gaussian distribution assumption is appropriate based on available information.
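The contrast between the two bootstrapping techniques can be shown on synthetic model-error data (hypothetical numbers; in the paper the errors come from physical tests versus model predictions, and the resulting uncertainty measure sizes the local validation domains).

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "test minus model" errors at a candidate design (hypothetical data).
errors = rng.normal(loc=0.5, scale=1.0, size=20)
B = 2000  # number of bootstrap replicates

# Parametric bootstrap: assume Gaussian errors, calibrate mean and spread by
# maximum likelihood, then resample from the fitted distribution.
mu_hat, sigma_hat = errors.mean(), errors.std()
param_reps = rng.normal(mu_hat, sigma_hat, size=(B, errors.size)).mean(axis=1)

# Nonparametric bootstrap: resample the observed errors with replacement,
# making no distributional assumption.
idx = rng.integers(0, errors.size, size=(B, errors.size))
nonparam_reps = errors[idx].mean(axis=1)

# The spread of the replicated means quantifies uncertainty in the model
# error; an interval like this could be used to size a local domain.
ci_param = np.percentile(param_reps, [2.5, 97.5])
ci_nonparam = np.percentile(nonparam_reps, [2.5, 97.5])
```

With genuinely Gaussian data, as here, the two intervals nearly coincide; the nonparametric version earns its keep when the error distribution is skewed or heavy-tailed and the Gaussian assumption cannot be verified.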

