Extending Expected Improvement for High-Dimensional Stochastic Optimization of Expensive Black-Box Functions

2016 ◽  
Vol 138 (11) ◽  
Author(s):  
Piyush Pandita ◽  
Ilias Bilionis ◽  
Jitesh Panchal

Design optimization under uncertainty is notoriously difficult when the objective function is expensive to evaluate. State-of-the-art techniques, e.g., stochastic optimization or sample average approximation, fail to learn exploitable patterns from the collected data and, as a result, tend to require an excessive number of objective function evaluations. There is a need for techniques that alleviate the high cost of information acquisition and select sequential simulations optimally. In the field of deterministic single-objective unconstrained global optimization, the Bayesian global optimization (BGO) approach has been relatively successful in addressing the information acquisition problem. BGO builds a probabilistic surrogate of the expensive objective function and uses it to define an information acquisition function (IAF) that quantifies the merit of making new objective evaluations. Specifically, BGO iterates between making the observation with the largest expected IAF and rebuilding the probabilistic surrogate, until a convergence criterion is met. In this work, we extend the expected improvement (EI) IAF to the case of design optimization under uncertainty. This involves a reformulation of the EI policy that filters out parametric and measurement uncertainties. We bypass the curse of dimensionality, since the method does not require learning the response surface as a function of the stochastic parameters. To increase the robustness of our approach in the low-sample regime, we employ a fully Bayesian interpretation of Gaussian processes, constructing a particle approximation of the posterior of the hyperparameters using adaptive Markov chain Monte Carlo (MCMC). A further benefit of our approach is that it quantifies the epistemic uncertainty on the location of the optimum and on the optimal value, as induced by the limited number of objective evaluations used in obtaining them. We verify and validate our approach by solving two synthetic optimization problems under uncertainty, and we demonstrate it on a challenging engineering problem: the oil-well placement problem (OWPP) with uncertainties in the permeability field and the oil price time series.
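
To make the acquisition step concrete, the following is a minimal sketch of the classical EI criterion for a GP surrogate inside a BGO loop. It shows only the deterministic core; the paper's extension additionally filters out parametric and measurement uncertainties and averages EI over an MCMC particle approximation of the GP hyperparameter posterior. All function names here are illustrative, not the authors' code.

```python
# Minimal sketch of the expected-improvement (EI) acquisition used in
# Bayesian global optimization (deterministic core only; the paper's
# extension averages over stochastic parameters and GP hyperparameters).
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, y_best):
    """EI for minimization: E[max(y_best - Y, 0)] with Y ~ N(mu, sigma^2)."""
    sigma = np.maximum(sigma, 1e-12)           # guard against zero predictive std
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Hypothetical BGO loop (gp_predict stands in for any GP library):
# while not converged:
#     mu, sigma = gp_predict(gp, candidate_designs)
#     x_next = candidate_designs[np.argmax(expected_improvement(mu, sigma, y_best))]
#     evaluate the expensive objective at x_next and refit the GP
```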


2020 ◽  
Author(s):  
Alberto Bemporad ◽  
Dario Piga

This paper proposes a method for solving optimization problems in which the decision-maker cannot evaluate the objective function, but can only express a preference such as “this is better than that” between two candidate decision vectors. The algorithm described in this paper aims at reaching the global optimizer by iteratively proposing to the decision-maker a new comparison to make, based on actively learning a surrogate of the latent (unknown and perhaps unquantifiable) objective function from past sampled decision vectors and pairwise preferences. A radial-basis-function surrogate is fit via linear or quadratic programming, satisfying, if possible, the preferences expressed by the decision-maker on the existing samples. The surrogate is used to propose a new sample of the decision vector for comparison with the current best candidate, based on two possible criteria: minimize a combination of the surrogate and an inverse-distance-weighting function that balances exploitation of the surrogate against exploration of the decision space, or maximize a function related to the probability that the new candidate will be preferred. Compared to active preference learning based on Bayesian optimization, we show that our approach is competitive: within the same number of comparisons, it usually approaches the global optimum more closely and is computationally lighter. Applications of the proposed algorithm to a set of benchmark global optimization problems, to multi-objective optimization, and to optimal tuning of a cost-sensitive neural network classifier for object recognition from images are described in the paper. MATLAB and Python implementations of the algorithms described in the paper are available at http://cse.lab.imtlucca.it/~bemporad/glis.
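
As a rough illustration of the surrogate-fitting step, the sketch below fits RBF weights to pairwise preferences with a linear program: each preference becomes a margin constraint with a slack variable, and an L1 penalty regularizes the weights. The Gaussian kernel, margin sigma, and regularization lam are assumptions for illustration, not the paper's exact formulation (which also offers a quadratic-programming variant).

```python
# Hedged sketch: fitting an RBF surrogate to pairwise preferences via a
# linear program. Constraint form (margin, slacks) is an assumption.
import numpy as np
from scipy.optimize import linprog

def fit_preference_rbf(X, prefs, gamma=1.0, sigma=1.0, lam=1e-3):
    """X: (n, d) sampled decision vectors; prefs: list of (i, j) meaning
    x_i was preferred to x_j. Returns RBF weights beta."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-gamma * d2)                   # Gaussian RBF kernel matrix
    m = len(prefs)
    # Variables: beta+ (n), beta- (n), slacks eps (m); beta = beta+ - beta-.
    c = np.concatenate([lam * np.ones(2 * n), np.ones(m)])
    A = np.zeros((m, 2 * n + m))
    b = np.full(m, -sigma)
    for h, (i, j) in enumerate(prefs):
        row = Phi[i] - Phi[j]                   # f(x_i) - f(x_j) <= -sigma + eps_h
        A[h, :n], A[h, n:2 * n], A[h, 2 * n + h] = row, -row, -1.0
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (2 * n + m))
    return res.x[:n] - res.x[n:2 * n]
```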


Author(s):  
M. Hoffhues ◽  
W. Römisch ◽  
T. M. Surowiec

The vast majority of stochastic optimization problems require the approximation of the underlying probability measure, e.g., by sampling or by using observations. It is therefore crucial to understand how the optimal value and optimal solutions depend on these approximations as the sample size increases or more data become available. Due to the weak convergence properties of sequences of probability measures, there is no guarantee that these quantities will exhibit favorable asymptotic properties. We consider a class of infinite-dimensional stochastic optimization problems inspired by recent work on PDE-constrained optimization as well as functional data analysis. For this class of problems, we provide both qualitative and quantitative stability results for the optimal value and optimal solutions. In both cases, we make use of the method of probability metrics. The optimal values are shown to be Lipschitz continuous with respect to a minimal information metric and, consequently, under further regularity assumptions, with respect to certain Fortet–Mourier and Wasserstein metrics. We prove that even in the most favorable setting, the solutions are at best Hölder continuous with respect to changes in the underlying measure. The theoretical results are tested in the context of Monte Carlo approximation for a numerical example involving PDE-constrained optimization under uncertainty.
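
Schematically, the quantitative stability result for optimal values takes the following Lipschitz form (the precise metric $d$, constant $L$, and regularity assumptions are those developed in the paper):

\[
v(P) := \inf_{x \in X} \int_{\Xi} f(x,\xi)\, P(\mathrm{d}\xi), \qquad |v(P) - v(Q)| \le L\, d(P,Q),
\]

so that replacing the true measure $P$ by a sampling-based approximation $P_n$ perturbs the optimal value by at most $L\, d(P, P_n)$.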


Author(s):  
Adriana Kiszka ◽  
David Wozabal

In this paper, we propose a semi-metric for Markov processes that makes it possible to bound the optimal values of linear Markovian stochastic optimization problems. Similar to existing notions of distance for general stochastic processes, our distance is based on transportation metrics. In contrast to the extant literature, the proposed distance is problem specific, i.e., it depends on the data of the problem whose objective value we want to bound. As a result, we are able to consider problems with randomness in the constraints as well as in the objective function, thereby relaxing an assumption in the extant literature. We derive several properties of the proposed semi-metric and demonstrate its use in a stylized numerical example.
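
As a schematic of the construction (not the paper's exact definition), a transportation semi-metric between the laws $P$ and $Q$ of two Markov processes has the form

\[
d_c(P,Q) = \inf_{\pi \in \Pi(P,Q)} \int c(x,y)\, \pi(\mathrm{d}x,\mathrm{d}y),
\]

where $\Pi(P,Q)$ is the set of couplings of $P$ and $Q$, and the ground cost $c$ is built from the data of the specific optimization problem, so that $|v(P) - v(Q)| \le d_c(P,Q)$ bounds the gap in optimal values.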


Author(s):  
Charanraj Thimmisetty ◽  
Panagiotis Tsilifis ◽  
Roger Ghanem

A new method is proposed for efficient optimization under uncertainty that addresses the curse of dimensionality as it pertains to the evaluation of probabilistic objectives and constraints. A basis adaptation strategy previously introduced by the authors is integrated into a design optimization framework that construes the optimization cost function as the quantity of interest and computes stochastic adapted bases as functions of the design-space parameters. With these adapted bases, the stochastic integrals at each design point are evaluated as low-dimensional integrals (mostly one-dimensional). The proposed approach is demonstrated on a well-placement problem in which the uncertainty takes the form of a stochastic process describing the permeability of the subsurface. An analysis of the method is carried out to better understand the effect of the design parameters on the smoothness of the adaptation isometry.
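
The computational payoff of basis adaptation is that a high-dimensional expectation collapses to a one-dimensional quadrature in an adapted Gaussian coordinate. The sketch below illustrates that reduction under simplifying assumptions (a linear first-order surrogate defines the adapted direction); all names are illustrative, not the authors' implementation.

```python
# Hedged sketch of the basis-adaptation idea: rotate the Gaussian germ so
# that a first-order surrogate of the cost depends mainly on one adapted
# coordinate, then evaluate the probabilistic objective by 1D quadrature.
import numpy as np

def adapted_direction(linear_coeffs):
    """First row of the adaptation isometry: the normalized gradient of a
    linear PCE surrogate of the quantity of interest."""
    a = np.asarray(linear_coeffs, dtype=float)
    return a / np.linalg.norm(a)

def mean_by_1d_quadrature(q_of_eta, n_pts=20):
    """E[q(eta)] for standard normal eta via Gauss-Hermite quadrature."""
    x, w = np.polynomial.hermite_e.hermegauss(n_pts)  # probabilists' Hermite
    return (w * np.array([q_of_eta(xi) for xi in x])).sum() / np.sqrt(2 * np.pi)

# Given a high-dimensional germ xi ~ N(0, I_d), the adapted variable is
# eta = a @ xi, and the objective's mean reduces to a 1D integral over eta.
```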


Water ◽  
2020 ◽  
Vol 12 (10) ◽  
pp. 2688 ◽  
Author(s):  
Milad Hooshyar ◽  
S. Jamshid Mousavi ◽  
Masoud Mahootchi ◽  
Kumaraswamy Ponnambalam

Stochastic dynamic programming (SDP) is a widely used method for reservoir operations optimization under uncertainty, but it suffers from the dual curses of dimensionality and modeling. Reinforcement learning (RL), a simulation-based stochastic optimization approach, can nullify the curse of modeling that arises from the need to calculate a very large transition probability matrix. RL mitigates the curse of dimensionality but cannot eliminate it, as it remains computationally intensive in complex multi-reservoir systems. This paper presents a multi-agent RL approach combined with an aggregation/decomposition method (AD-RL) for reducing the curse of dimensionality in multi-reservoir operation optimization problems. In this model, each reservoir is individually managed by a specific operator (agent) while cooperating systematically with the other agents to find a near-optimal operating policy for the whole system. Each agent makes a decision (release) based on its current state and the feedback it receives from the states of all upstream and downstream reservoirs. The method, along with an efficient artificial-neural-network-based procedure for tuning the Q-learning parameters, has been applied to a real-world five-reservoir problem, the Parambikulam–Aliyar Project (PAP) in India. We demonstrate that the proposed AD-RL approach derives operating policies that are better than or comparable to those obtained by other stochastic optimization methods, at a lower computational cost.
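
For concreteness, the core of the per-agent learning step can be sketched as a tabular Q-learning update in which each reservoir agent's state couples its own storage with an aggregate of the other reservoirs' states. The state encoding, reward, and parameter values below are illustrative assumptions, not the paper's calibrated setup.

```python
# Hedged sketch of the per-agent Q-learning update in an
# aggregation/decomposition scheme for multi-reservoir operation.
import numpy as np

def q_update(Q, s, a, reward, s_next, alpha=0.1, gamma=0.95):
    """One tabular Q-learning step for a single reservoir agent.
    s encodes (own storage level, aggregated up/downstream state)."""
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])

# Each agent chooses a release (e.g., epsilon-greedy on its own Q-table),
# the simulator routes flows through the cascade, and every agent then
# updates its table from its local reward and the new aggregated state.
```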


Author(s):  
K.V. Pushkarev ◽  
V.D. Koshur

The problem of finding the global minimum of a continuous objective function of many variables over a multidimensional parallelepiped is considered. To solve complicated global optimization problems, a hybrid heuristic parallel method of global optimization is proposed, based on combining and hybridizing various methods within a multi-agent system. The method incorporates both new techniques (for example, neural-network approximation of inverse coordinate mappings, which uses Generalized Regression Neural Networks (GRNN) to map objective-function values to coordinate values) and modified classical ones (for example, a modified Hooke-Jeeves method). A software implementation of the method as a cross-platform (at the source-code level) C++ library that communicates via the Message Passing Interface (MPI) is briefly described. The method is compared with 21 modern global optimization methods and with a genetic algorithm on 28 test objective functions of 50 variables.
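
One of the classical building blocks named above, the Hooke-Jeeves pattern search, is sketched below in its textbook form (exploratory coordinate probes followed by a pattern move); the authors' modification is not reproduced here.

```python
# Textbook Hooke-Jeeves pattern search (sketch, not the modified variant).
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    x_base = np.asarray(x0, dtype=float)
    f_base = f(x_base)
    for _ in range(max_iter):
        # Exploratory move: probe +/- step along each coordinate axis.
        x, fx = x_base.copy(), f_base
        for i in range(len(x)):
            for d in (step, -step):
                trial = x.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    break
        if fx < f_base:
            # Pattern move: extrapolate along the successful direction.
            x_pattern = x + (x - x_base)
            x_base, f_base = x, fx
            f_pat = f(x_pattern)
            if f_pat < f_base:
                x_base, f_base = x_pattern, f_pat
        elif step > tol:
            step *= shrink        # no improvement: shrink the step size
        else:
            break
    return x_base, f_base
```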


Author(s):  
Jacob A. Freeman ◽  
Christopher J. Roy

Using a global optimization evolutionary algorithm (EA), propagating aleatory and epistemic uncertainty within the optimization loop, and using computational fluid dynamics (CFD), this study determines a design for a 3D tractor-trailer base (back-end) drag reduction device that reduces the wind-averaged drag coefficient by 41% at 57 mph (92 km/h). Because it is optimized under uncertainty, this design is relatively insensitive to uncertain wind speed and direction and to uncertain deflection angles caused by mounting accuracy and static aeroelastic loading. The model includes five design variables with generous constraints, and the study additionally includes the uncertain effects on drag prediction due to truck speed and elevation, the steady Reynolds-averaged Navier–Stokes (RANS) approximation, and numerical approximation. The study uses the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA) optimization and uncertainty quantification (UQ) framework to interface the RANS flow solver, grid generator, and optimization algorithm. The computational model is a simplified full-scale tractor-trailer with flow at highway speed. For the optimized design, the estimate of total predictive uncertainty is +15/−42%; 8–10% of this uncertainty comes from model form (computation versus experiment), 3–7% from model input (wind speed and direction, flap angle, and truck speed), and +0.0/−28.5% from numerical approximation (due to the relatively coarse grid of 6 × 10⁶ cells). Relative comparisons of designs against the no-flaps baseline should carry considerably less uncertainty, because numerical error and input variation are nearly eliminated and model-form differences are reduced. The total predictive uncertainty is also presented as a probability box, which may be used to decide how to improve the model and reduce uncertainty.
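
For orientation, a wind-averaged drag coefficient can be sketched as an average of the yaw-dependent drag coefficient weighted by the relative dynamic pressure over wind directions. The uniform wind-direction assumption and the weighting below are illustrative, not necessarily the procedure used in the study.

```python
# Hedged sketch of a wind-averaged drag coefficient: average C_d over
# relative-wind (yaw) angles induced by an assumed wind distribution.
import numpy as np

def wind_averaged_cd(cd_of_yaw, v_truck, v_wind_mean, n=360):
    """cd_of_yaw: function yaw_angle_deg -> C_d (e.g., from CFD); winds are
    assumed uniform in direction with a fixed mean speed."""
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)  # wind direction
    # Relative wind vector seen by the truck moving along +x.
    u = v_truck + v_wind_mean * np.cos(theta)
    v = v_wind_mean * np.sin(theta)
    yaw = np.degrees(np.arctan2(v, u))
    q = u**2 + v**2                     # dynamic-pressure weighting
    return (q * np.vectorize(cd_of_yaw)(yaw)).mean() / v_truck**2
```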


2014 ◽  
Vol 2014 ◽  
pp. 1-12 ◽  
Author(s):  
Seif-Eddeen K. Fateen ◽  
Adrián Bonilla-Petriciolet

One of the major advantages of stochastic global optimization methods is that they do not require the gradient of the objective function. In some cases, however, this gradient is readily available and can be used to improve the numerical performance of stochastic optimization methods, especially the quality and precision of the global optimal solution. In this study, we propose a gradient-based modification of the cuckoo search algorithm, a nature-inspired, swarm-based stochastic global optimization method. We introduce the gradient-based cuckoo search (GBCS) and evaluate its performance against the original algorithm on twenty-four benchmark functions. GBCS improved the reliability and effectiveness of the algorithm on all but four of the tested benchmark problems. GBCS proved to be a strong candidate for solving difficult optimization problems for which the gradient of the objective function is readily available.
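
One plausible form of a gradient-based modification is to bias the random step applied to the best nest so that each coordinate moves downhill, using only the sign of the gradient. The sketch below captures that idea with illustrative parameters; it is not the authors' exact GBCS update.

```python
# Hedged sketch of a gradient-informed cuckoo-search step: the random
# step magnitude is kept, but its per-coordinate direction opposes the
# gradient sign. Step-size and Levy-flight details are assumptions.
import numpy as np

def gradient_biased_step(x_best, grad, alpha=0.01, rng=np.random):
    """Move the best nest by a random-magnitude step whose per-coordinate
    direction is chosen to descend the objective."""
    step = alpha * np.abs(rng.standard_normal(x_best.shape))
    return x_best - step * np.sign(grad)
```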

