Making the Most Out of Surrogate Models: Tricks of the Trade

Author(s):  
Felipe A. C. Viana ◽  
Christian Gogu ◽  
Raphael T. Haftka

Design analysis and optimization based on high-fidelity computer experiments is commonly expensive. Surrogate modeling is often the tool of choice for reducing the computational burden. However, even after years of intensive research, surrogate modeling still involves a struggle to achieve maximum accuracy within limited resources. This work summarizes advanced yet simple statistical tools that help. We focus on four techniques with increasing popularity in the design automation community: (i) screening and variable reduction in both the input and the output spaces, (ii) simultaneous use of multiple surrogates, (iii) sequential sampling and optimization, and (iv) conservative estimators.
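As a concrete illustration of technique (ii), the sketch below compares candidate surrogates by leave-one-out cross-validation and keeps the one with the smallest error. Polynomial degrees stand in for genuinely different surrogate families (e.g., polynomial response surfaces vs. Kriging); the data and degrees are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def loo_rmse(x, y, degree):
    """Leave-one-out RMSE of a polynomial surrogate of the given degree."""
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        coeffs = np.polyfit(x[mask], y[mask], degree)   # refit without point i
        errs.append((np.polyval(coeffs, x[i]) - y[i]) ** 2)
    return float(np.sqrt(np.mean(errs)))

def select_surrogate(x, y, degrees=(1, 2)):
    """Fit several candidate surrogates; keep the one with the lowest LOO error."""
    scores = {d: loo_rmse(x, y, d) for d in degrees}
    return min(scores, key=scores.get), scores

x = np.linspace(0.0, 1.0, 8)
y = 3.0 * x**2 - x + 0.5          # toy "expensive" response (quadratic)
best, scores = select_surrogate(x, y)
```

Since the toy response is quadratic, the degree-2 surrogate wins by a wide margin; with real data the same loop would simply iterate over heterogeneous model families.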

Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5332
Author(s):  
Carlos A. Duchanoy ◽  
Hiram Calvo ◽  
Marco A. Moreno-Armendáriz

Surrogate Modeling (SM) is often used to reduce the computational burden of time-consuming system simulations. However, continuous advances in Artificial Intelligence (AI) and the spread of embedded sensors have led to the creation of Digital Twins (DT), Design Mining (DM), and Soft Sensors (SS). These methodologies pose a new challenge for the generation of surrogate models, since they require elaborate artificial intelligence algorithms while minimizing the number of physical experiments. To reduce the number of assessments of a physical system, several adaptive sequential sampling methodologies have been developed; however, they are for the most part limited to Kriging models and Kriging-model-based Monte Carlo simulation. In this paper, we integrate a distinct adaptive sampling methodology into an automated machine learning (AutoML) methodology to assist in model selection while minimizing the number of system evaluations and maximizing performance for surrogate models based on artificial intelligence algorithms. In each iteration, the framework uses a grid search algorithm to determine the best candidate models and performs leave-one-out cross-validation to calculate the performance at each sampled point. A Voronoi diagram is applied to partition the sampling region into local cells, and the Voronoi vertices are considered as new candidate points. The performance at the sampled points is used to estimate the accuracy of the model over the set of candidate points, so as to select those that will most improve the model's accuracy. Then, the number of candidate models is reduced. Finally, the performance of the framework is tested on two examples to demonstrate the applicability of the proposed method.
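A simplified sketch of the sampling step described above: leave-one-out errors are computed at the current samples, the Voronoi vertices of those samples serve as candidate points, and the candidate nearest the worst-predicted sample is chosen next. An inverse-distance-weighting surrogate stands in for the AutoML-selected model; it, and all numerical values, are assumptions for illustration only.

```python
import numpy as np
from scipy.spatial import Voronoi

def idw_predict(X, y, Xq):
    """Inverse-distance-weighting surrogate (stand-in for the selected model)."""
    d = np.linalg.norm(X[None, :, :] - Xq[:, None, :], axis=2)
    w = 1.0 / (d + 1e-9)
    return (w * y).sum(axis=1) / w.sum(axis=1)

def loo_errors(X, y):
    """Leave-one-out prediction error at each sampled point."""
    idx = np.arange(len(X))
    return np.array([
        abs(idw_predict(X[idx != i], y[idx != i], X[i:i + 1])[0] - y[i])
        for i in idx
    ])

def next_sample(X, y, lo=0.0, hi=1.0):
    """Candidates are the Voronoi vertices of the current samples; pick the
    in-domain vertex closest to the sample with the largest LOO error."""
    verts = Voronoi(X).vertices
    verts = verts[np.all((verts >= lo) & (verts <= hi), axis=1)]
    worst = X[loo_errors(X, y).argmax()]
    return verts[np.linalg.norm(verts - worst, axis=1).argmin()]

X = np.array([[0.1, 0.1], [0.9, 0.1], [0.5, 0.5],
              [0.1, 0.9], [0.9, 0.9], [0.3, 0.7]])
y = np.sin(3.0 * X[:, 0]) * X[:, 1]      # toy expensive response
pt = next_sample(X, y)
```

Because a Voronoi vertex is equidistant from several existing samples, the selected point is guaranteed to lie away from all of them, combining exploration with the error information.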


2016 ◽  
Vol 33 (4) ◽  
pp. 1095-1113 ◽  
Author(s):  
Slawomir Koziel ◽  
Adrian Bekasiewicz

Purpose – The purpose of this paper is to investigate strategies for expedited dimension scaling of electromagnetic (EM)-simulated microwave and antenna structures, exploiting the concept of variable-fidelity inverse surrogate modeling. Design/methodology/approach – A fast inverse surrogate modeling technique is described for dimension scaling of microwave and antenna structures. The model is established using reference designs obtained from a cheap underlying low-fidelity model and corrected so that the structure can be scaled with high accuracy. Numerical and experimental case studies are provided, demonstrating the feasibility of the proposed approach. Findings – It is possible, by an appropriate combination of surrogate modeling techniques, to establish an inverse model for explicit determination of the geometry dimensions of the structure at hand so as to re-design it for various operating frequencies. The scaling process can be completed at a low computational cost corresponding to just a few evaluations of the high-fidelity computational model of the structure. Research limitations/implications – The present study is a step toward the development of procedures for rapid dimension scaling of microwave and antenna structures at high-fidelity EM-simulation accuracy. Originality/value – The proposed modeling framework proved useful for fast geometry scaling of microwave and antenna structures, which is very laborious when using conventional methods. To the authors' knowledge, this is one of the first attempts at surrogate-assisted dimension scaling of microwave components at the EM-simulation level.
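The inverse-modeling idea can be pictured with a toy example (not the authors' actual formulation): reference designs from a cheap model give frequency-dimension pairs, an inverse polynomial surrogate maps a target operating frequency directly to a geometry dimension, and a single high-fidelity evaluation supplies an additive correction. The half-wavelength-like scaling law and all numerical values are assumptions.

```python
import numpy as np

# Reference designs from a cheap low-fidelity model (assumed values):
# operating frequency in GHz -> patch length in mm, roughly inverse scaling.
freqs_ghz = np.array([2.0, 2.4, 3.0, 3.5, 4.0])
lengths_lofi = 30.0 / freqs_ghz

# Inverse surrogate: frequency -> dimension, fitted once from the reference set.
inv_model = np.polyfit(freqs_ghz, lengths_lofi, 2)

def scale_design(target_ghz, hifi_correction=0.0):
    """Explicit dimension scaling: evaluate the inverse surrogate, then shift
    by the low-/high-fidelity discrepancy observed at one reference design."""
    return np.polyval(inv_model, target_ghz) + hifi_correction

# A single (assumed) high-fidelity EM simulation corrects the cheap model.
hifi_len_at_3ghz = 10.4
corr = hifi_len_at_3ghz - np.polyval(inv_model, 3.0)
new_length = scale_design(2.7, corr)
```

The expensive model is touched only once here, matching the spirit of "a few evaluations of the high-fidelity computational model" claimed in the abstract.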


Author(s):  
Roxanne A. Moore ◽  
David A. Romero ◽  
Christiaan J. J. Paredis

Computer models and simulations are essential system design tools that allow for improved decision making and cost reductions during all phases of the design process. However, the most accurate models tend to be computationally expensive and can therefore only be used sporadically. Consequently, designers are often forced to choose between exploring many design alternatives with less accurate, inexpensive models and evaluating fewer alternatives with the most accurate models. To achieve both broad exploration of the design space and accurate determination of the best alternatives, surrogate modeling and variable accuracy modeling are gaining in popularity. A surrogate model is a mathematically tractable approximation of a more expensive model based on a limited sampling of that model. Variable accuracy modeling involves a collection of different models of the same system with different accuracies and computational costs. We hypothesize that designers can determine the best solutions more efficiently using surrogate and variable accuracy models. This hypothesis is based on the observation that very poor solutions can be eliminated inexpensively by using only less accurate models. The most accurate models are then reserved for discerning the best solution from the set of good solutions. In this paper, a new approach for global optimization is introduced, which uses variable accuracy models in conjunction with a kriging surrogate model and a sequential sampling strategy based on a Value of Information (VOI) metric. There are two main contributions. The first is a novel surrogate modeling method that accommodates data from any number of different models of varying accuracy and cost. The proposed surrogate model is Gaussian process-based, much like classic kriging modeling approaches. However, in this new approach, the error between the model output and the unknown truth (the real world process) is explicitly accounted for.
When variable accuracy data are used, the resulting response surface does not interpolate the data points but provides an approximate fit that gives the most weight to the most accurate data. The second contribution is a new method for sequential sampling. Information from the current surrogate model is combined with the cost and accuracy of the underlying variable accuracy models to determine, via the VOI metric, both where to sample next and with which model. In this manner, the cost of further analysis is explicitly taken into account during the optimization process.
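The non-interpolating, accuracy-weighted fit described above can be sketched as Gaussian-process regression in which each observation carries its own noise variance, so outputs from less accurate models are down-weighted rather than interpolated. The kernel, length scale, and data below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def gp_mean(X, y, noise_var, Xq, length=0.3):
    """GP posterior mean with per-point noise variances encoding model accuracy."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length) ** 2)    # squared-exponential kernel
    K = k(X, X) + np.diag(noise_var)               # noisy Gram matrix: no interpolation
    return k(Xq, X) @ np.linalg.solve(K, y)

# Low-fidelity samples carry a +0.3 bias and large noise variance;
# high-fidelity samples are nearly noise-free.  The unknown truth is f(x) = x.
X  = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y  = np.array([0.3, 0.25, 0.8, 0.75, 1.3])
nv = np.array([0.1, 1e-6, 0.1, 1e-6, 0.1])        # small variance = trusted model

pred = gp_mean(X, y, nv, np.array([0.25, 0.5]))
```

The fit passes essentially through the trusted high-fidelity points while smoothing over the biased low-fidelity ones, which is the qualitative behavior the abstract describes.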


Technometrics ◽  
2014 ◽  
Vol 56 (3) ◽  
pp. 372-380 ◽  
Author(s):  
Rui Tuo ◽  
C. F. Jeff Wu ◽  
Dan Yu

2015 ◽  
Vol 138 (1) ◽  
Author(s):  
Haitao Liu ◽  
Shengli Xu ◽  
Ying Ma ◽  
Xudong Chen ◽  
Xiaofang Wang

Computer simulations have been increasingly used to study physical problems in various fields. To reduce computational budgets, cheap-to-run metamodels, constructed from a finite set of experiment points in the design space using the design of computer experiments (DOE), are employed to replace the costly simulation models. A key issue in DOE is designing sequential computer experiments that achieve an accurate metamodel with as few points as possible. This article investigates the performance of current Bayesian sampling approaches and proposes an adaptive maximum entropy (AME) approach. In the proposed approach, the leave-one-out (LOO) cross-validation error provides error information cheaply, a local space-filling exploration strategy avoids the clustering problem, and a search pattern that moves from global to local improves sampling efficiency. A comparison study of six examples with different types of initial points demonstrated that the AME approach is very promising for global metamodeling.
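The AME criterion itself is more elaborate, but the two ingredients named above can be sketched as follows: each candidate point is scored by its distance to the nearest sample (space-filling exploration) scaled by that sample's LOO error (accuracy information). One-dimensional linear interpolation stands in for the metamodel; all values are illustrative assumptions.

```python
import numpy as np

def next_point(x, y, candidates):
    """Adaptive sampling sketch: score = distance to the nearest sample
    (space-filling) times (1 + that sample's LOO error)."""
    idx = np.arange(len(x))
    loo = np.array([
        abs(np.interp(x[i], x[idx != i], y[idx != i]) - y[i]) for i in idx
    ])
    dist = np.abs(candidates[:, None] - x[None, :])
    nearest = dist.argmin(axis=1)                  # which sample each candidate is closest to
    score = dist.min(axis=1) * (1.0 + loo[nearest])
    return candidates[score.argmax()]

x = np.array([0.0, 0.3, 0.6, 1.0])     # current experiment points (sorted)
y = np.sin(5.0 * x)                     # expensive response sampled so far
new_x = next_point(x, y, np.linspace(0.0, 1.0, 101))
```

On this toy problem the criterion lands inside the widest gap, on the side whose endpoint has the largest LOO error, illustrating the exploration/accuracy trade-off.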


Author(s):  
Yong Hoon Lee ◽  
R. E. Corman ◽  
Randy H. Ewoldt ◽  
James T. Allison

A novel multiobjective adaptive surrogate modeling-based optimization (MO-ASMO) framework is proposed to make efficient use of a minimal number of training samples for sequential model updates. All sample points are enforced to be feasible and to provide coverage of sparsely explored design regions using a new optimization subproblem. The MO-ASMO method evaluates high-fidelity functions only at feasible sample points. During the exploitation sampling phase, samples are selected to enhance solution accuracy rather than global exploration. Sampling tasks are especially challenging for multiobjective optimization; for an n-dimensional design space, a strategy is required for generating model update sample points near an (n − 1)-dimensional hypersurface corresponding to the Pareto set in the design space. This is addressed here using a force-directed layout algorithm, adapted from graph visualization strategies, to distribute feasible sample points evenly near the estimated Pareto set. Model validation samples are chosen uniformly on the Pareto set hypersurface, and surrogate model estimates at these points are compared to high-fidelity model responses. All high-fidelity model evaluations are stored for later use in training an updated surrogate model. The MO-ASMO algorithm, along with the set of new sampling strategies, is tested using two mathematical problems and one realistic engineering problem. The second mathematical test problem is specifically designed to test the limits of the algorithm in coping with very narrow, non-convex feasible domains; it involves oscillatory objective functions, giving rise to a discontinuous set of Pareto-optimal solutions. The third test problem demonstrates that the MO-ASMO algorithm can handle a practical engineering problem with more than 10 design variables and black-box simulations.
The efficiency of the MO-ASMO algorithm is demonstrated by comparing its results on the two mathematical problems to those of the NSGA-II algorithm in terms of the number of high-fidelity function evaluations; MO-ASMO is shown to reduce total function evaluations by several orders of magnitude while converging to the same Pareto sets.
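The force-directed layout step can be pictured with a minimal repulsion-only relaxation: each point repeatedly moves a small fixed step away from its neighbors, which evens out clustered samples. The real method operates on feasible points near the estimated Pareto hypersurface; here plain points in the unit square are used, and the step size and iteration count are arbitrary assumptions.

```python
import numpy as np

def spread(pts, iters=200, step=0.01):
    """Repulsion-only force-directed relaxation: every point takes a fixed-size
    step away from the net inverse-square push of all others, clipped to [0,1]^2."""
    pts = pts.copy()
    n = len(pts)
    for _ in range(iters):
        diff = pts[:, None, :] - pts[None, :, :]          # pairwise displacement
        dist = np.linalg.norm(diff, axis=2) + np.eye(n)   # avoid divide-by-zero on diagonal
        force = (diff / dist[:, :, None] ** 3).sum(axis=1)
        norm = np.linalg.norm(force, axis=1, keepdims=True) + 1e-12
        pts = np.clip(pts + step * force / norm, 0.0, 1.0)
    return pts

start = np.array([[0.10, 0.10], [0.12, 0.10], [0.11, 0.12], [0.90, 0.90]])
out = spread(start)
```

After relaxation the initially clustered points are pushed apart, so the minimum pairwise spacing grows, which is exactly the "evenly distributed sample points" property the abstract attributes to the layout algorithm.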


Author(s):  
Alexandros Taflanidis ◽  
Jize Zhang ◽  
Aikaterini Kyprioti ◽  
Andrew Kennedy ◽  
Tracy Kijewski-Correa

Numerical advances in storm surge prediction over the past couple of decades have produced high-fidelity simulation models that permit a detailed representation of hydrodynamic processes and therefore support high-accuracy forecasting. Unfortunately, the computational burden of such numerical models is large, requiring thousands of CPU hours per simulation, which limits their applicability for hurricane risk assessment. The use of Kriging-based surrogate modeling techniques has been examined to address this challenge [Jia et al. 2016; Zhang et al. 2018]. This approach can provide fast predictions using a database of high-fidelity, synthetic storms, with the goal of maintaining the accuracy of the numerical model used to produce this database while offering computational efficiency. This contribution initially overviews recent research developments in the application of Kriging for storm surge prediction. Topics discussed include: enhancement of the initial database for nodes (i.e., geographical locations) that have remained dry in some of the database storms; adaptive selection of the storms forming the initial database; use of different surrogate model tuning techniques and their impact on the metamodel's predictive capabilities for storm surge estimation; and implementation for estimating impact due to near-shore processes (breaking waves), which requires coupling of different numerical models.
Recorded presentation from the vICCE (YouTube link): https://youtu.be/vL38Kv3kLDM

