Augmented Lagrangian Method for Maximizing Expectation and Minimizing Risk for Optimal Well-Control Problems With Nonlinear Constraints

SPE Journal ◽  
2016 ◽  
Vol 21 (05) ◽  
pp. 1830-1842 ◽  
Author(s):  
Xin Liu ◽  
Albert C. Reynolds

Summary Robust waterflooding optimization commonly refers to the problem of estimating well controls (wellbore pressures or rates at specified control steps) that maximize the expectation of the net present value (NPV) of life-cycle production over an ensemble of given reservoir models. Unfortunately, if the reservoir is operated under the “optimal” well controls obtained, the variance in NPV may be large; more importantly, if the smallest NPV obtained is close to the one that would be obtained for the true reservoir, development of the reservoir might not be commercially viable. Liu and Reynolds (2015b) suggested that one way to manage risk is to consider the problem in which the dual objectives are to maximize the expected value of NPV and to minimize risk, where risk is defined as the minimum NPV over an ensemble of models spanning the uncertainty in the reservoir description. However, the algorithms presented in Liu and Reynolds (2015b) considered only bound constraints. Here, we develop algorithms to generate points on the Pareto front when nonlinear state (output) constraints are present. The Pareto front is generated by either a constrained weighted-sum (WS) method or a constrained normal-boundary-intersection (NBI) method. In this paper, we extend the augmented-Lagrangian approach given in Liu and Reynolds (2015b) for biobjective optimization with bound constraints to biobjective optimization problems in which nonlinear state constraints are present. We provide a detailed derivation of how to incorporate nonlinear constraints into multiobjective optimization problems (MOOPs) and illustrate by example that the methodology is viable for biobjective optimization with nonlinear constraints.
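The constrained weighted-sum machinery described above can be sketched in miniature. The following Python toy is a hedged stand-in, not the paper's method: the two quadratic objectives and the single inequality constraint are hypothetical replacements for the simulator-based NPV objectives and state constraints, and the inner subproblem is solved by plain gradient descent instead of an adjoint-gradient solver.

```python
def augmented_lagrangian_ws(w, lam=0.0, mu=10.0, outer=20, inner=2000, lr=1e-3):
    """Weighted-sum scalarization of two toy objectives,
    f1(x) = (x - 1)^2 and f2(x) = (x + 1)^2, with one nonlinear
    inequality constraint g(x) = 0.5 - x <= 0 (i.e., x >= 0.5)
    handled by an augmented Lagrangian."""
    df = lambda x: 2 * w * (x - 1.0) + 2 * (1 - w) * (x + 1.0)  # grad of w*f1 + (1-w)*f2
    g = lambda x: 0.5 - x
    x = 0.0
    for _ in range(outer):
        for _ in range(inner):                       # inner unconstrained minimization
            s = g(x) + lam / mu
            grad = df(x) - (mu * s if s > 0 else 0.0)  # penalty grad: mu*s*g'(x), g' = -1
            x -= lr * grad
        lam = max(0.0, lam + mu * g(x))              # first-order multiplier update
    return x

x_star = augmented_lagrangian_ws(w=0.7)
```

With w = 0.7, the unconstrained weighted-sum minimizer x = 0.4 violates x >= 0.5, so the multiplier converges to its KKT value (0.2 here) and the iterates settle at the constrained optimum x = 0.5.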

Author(s):  
Christian Kanzow ◽  
Andreas B. Raharja ◽  
Alexandra Schwartz

Abstract A reformulation of cardinality-constrained optimization problems into continuous nonlinear optimization problems with an orthogonality-type constraint has gained some popularity during the last few years. Due to the special structure of the constraints, the reformulation violates many standard assumptions and therefore is often solved using specialized algorithms. In contrast to this, we investigate the viability of using a standard safeguarded multiplier penalty method without any problem-tailored modifications to solve the reformulated problem. We prove global convergence towards an (essentially strongly) stationary point under a suitable problem-tailored quasinormality constraint qualification. Numerical experiments illustrating the performance of the method in comparison to regularization-based approaches are provided.
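A safeguarded multiplier penalty method of the kind investigated here can be illustrated on a toy equality-constrained problem. This is only a sketch of the safeguarding and penalty-update logic; the orthogonality-type cardinality reformulation itself is more involved, and all problem data below are hypothetical.

```python
def safeguarded_alm(x=2.0, mu=1.0, lam=0.0, lam_max=1e3,
                    outer=30, inner=8000, lr=1e-4, tau=0.5, gamma=10.0):
    """Safeguarded augmented Lagrangian sketch for
    minimize (x - 2)^2  subject to  h(x) = x^2 - 1 = 0,
    whose solution is x = 1 with multiplier 1."""
    df = lambda x: 2.0 * (x - 2.0)
    h  = lambda x: x * x - 1.0
    dh = lambda x: 2.0 * x
    h_prev = abs(h(x))
    for _ in range(outer):
        lam_s = max(-lam_max, min(lam, lam_max))   # safeguarding: project multiplier
        for _ in range(inner):                     # inner unconstrained solve
            x -= lr * (df(x) + (lam_s + mu * h(x)) * dh(x))
        lam = lam_s + mu * h(x)                    # multiplier update
        if abs(h(x)) > tau * h_prev:               # infeasibility not reduced enough:
            mu *= gamma                            #   increase the penalty parameter
        h_prev = abs(h(x))
    return x, lam

x_star, lam_star = safeguarded_alm()
```

The safeguarding step (projecting the multiplier onto a bounded box before each inner solve) is what distinguishes this scheme from a classical augmented Lagrangian method and underpins convergence under weak constraint qualifications such as quasinormality.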


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Yu Gao ◽  
Jingzhi Li ◽  
Yongcun Song ◽  
Chao Wang ◽  
Kai Zhang

Abstract We consider optimal control problems constrained by the Stokes equations. It has been shown in the literature that such problems can be discretized by the finite element method to generate a discrete system, and the corresponding error estimate has been established. In this paper, we focus on solving the discrete system by the alternating splitting augmented Lagrangian method, which is a direct extension of the alternating direction method of multipliers (ADMM) and possesses a global O(1/k) convergence rate. In addition, we propose an acceleration scheme based on the alternating splitting augmented Lagrangian method to improve the efficiency of the algorithm. Error estimates and convergence analyses of our algorithms are presented for several different types of optimization problems. Finally, numerical experiments verify the efficiency of the algorithms.
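For readers unfamiliar with the splitting-plus-dual-update structure that the alternating splitting augmented Lagrangian method extends, here is a minimal ADMM sketch on a toy 1-D lasso-type problem, not the Stokes-constrained system of the paper:

```python
def admm_lasso_1d(rho=1.0, iters=200):
    """Toy ADMM: minimize (x - 3)^2 + |z| subject to x = z,
    in scaled dual form. The minimizer of (x - 3)^2 + |x| is x = 2.5."""
    x = z = u = 0.0
    for _ in range(iters):
        # x-step: exact minimizer of (x - 3)^2 + (rho/2)(x - z + u)^2
        x = (6.0 + rho * (z - u)) / (2.0 + rho)
        # z-step: soft-thresholding, the prox of |.| with parameter 1/rho
        a = x + u
        z = max(abs(a) - 1.0 / rho, 0.0) * (1 if a > 0 else -1)
        # dual ascent on the consensus constraint x = z
        u += x - z
    return x, z

x_star, z_star = admm_lasso_1d()
```

Each step is cheap because the splitting separates the smooth term from the nonsmooth one; the alternating splitting augmented Lagrangian method applies the same idea to the discretized optimality system, with the general O(1/k) rate mentioned above.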


SPE Journal ◽  
2021 ◽  
pp. 1-28 ◽ 
Author(s):  
Faruk Alpak ◽  
Vivek Jain ◽  
Yixuan Wang ◽  
Guohua Gao

Summary We describe the development and validation of a novel algorithm for field-development optimization problems and document field-testing results. Our algorithm is founded on recent developments in bound-constrained multiobjective optimization of nonsmooth functions for problems in which the structure of the objective functions either cannot be exploited or is nonexistent. Such situations typically arise when the functions are computed as the result of numerical modeling, such as reservoir-flow simulation within the context of field-development planning and reservoir management. We propose an efficient implementation of a novel parallel algorithm, namely BiMADS++, for the biobjective optimization problem. Biobjective optimization is a special case of multiobjective optimization with the property that Pareto points may be ordered, which is extensively exploited by the BiMADS++ algorithm. The optimization algorithm generates an approximation of the Pareto front by solving a series of single-objective formulations of the biobjective optimization problem. These single-objective problems are solved using a new and more efficient implementation of the mesh adaptive direct search (MADS) algorithm, developed for nonsmooth optimization problems that arise within reservoir-simulation-based optimization workflows. The MADS algorithm is extensively benchmarked against alternative single-objective optimization techniques before being integrated into the BiMADS++ implementation. Both the MADS optimization engine and the master BiMADS++ algorithm are implemented from the ground up by resorting to a distributed parallel computing paradigm using message passing interface (MPI) for efficiency in industrial-scale problems. BiMADS++ is validated and field tested on well-location optimization (WLO) problems.
We first validate and benchmark the accuracy and computational performance of the MADS implementation against a number of alternative parallel optimizers [e.g., particle-swarm optimization (PSO), genetic algorithm (GA), and simultaneous perturbation and multivariate interpolation (SPMI)] within the context of single-objective optimization. We also validate the BiMADS++ implementation using a challenging analytical problem that gives rise to a discontinuous Pareto front. We then present BiMADS++ WLO applications on two simple, intuitive, and yet realistic problems, and on a model of a real problem with a known Pareto front. Finally, we discuss the results of the field-testing work on three real-field deepwater models. The BiMADS++ implementation enables the user to identify various compromise solutions of the WLO problem with a single optimization run, without resorting to ad hoc adjustments of penalty weights in the objective function. Elimination of this “trial-and-error” procedure and the distributed parallel implementation render BiMADS++ easy to use and significantly more efficient in terms of the computational speed needed to determine alternative compromise solutions of a given WLO problem. In a field-testing example, BiMADS++ delivered a workflow speedup of greater than fourfold with a single biobjective optimization run over the weighted-sums objective-function approach, which requires multiple single-objective-function optimization runs.
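The poll-and-refine loop at the heart of direct-search methods such as MADS can be sketched as follows. Real MADS uses an adaptive mesh and asymptotically dense poll directions, so this axis-aligned "compass search" variant, run on a hypothetical smooth objective standing in for a simulator response, captures only the core structure:

```python
def compass_search(f, x, delta=1.0, tol=1e-6, max_iter=10_000):
    """Direct-search sketch: poll f at +/- delta along each coordinate;
    move to the first improving point, or halve delta when no poll
    point improves (the 'mesh refinement' step)."""
    n = len(x)
    fx = f(x)
    for _ in range(max_iter):
        if delta < tol:
            break
        improved = False
        for i in range(n):                 # poll step
            for s in (delta, -delta):
                y = list(x)
                y[i] += s
                fy = f(y)
                if fy < fx:                # accept first improving poll point
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            delta *= 0.5                   # refine on poll failure
    return x, fx

# Hypothetical smooth stand-in for a simulator-based objective:
xs, fs = compass_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                        [0.0, 0.0])
```

Because the method uses only function values, never gradients, it tolerates the nonsmooth, noisy responses typical of reservoir-simulation-based objectives; the poll evaluations are also independent, which is what the MPI parallelization above exploits.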


SPE Journal ◽  
2016 ◽  
Vol 21 (05) ◽  
pp. 1813-1829 ◽  
Author(s):  
Xin Liu ◽  
Albert C. Reynolds

Summary We consider two procedures for multiobjective optimization: the classical weighted-sum (WS) method and the normal-boundary-intersection (NBI) method. To enhance computational efficiency, the methods use gradients calculated with the adjoint method. Our objective is to develop implementations that one can apply for waterflooding optimization under geological uncertainty when we wish to develop well controls that satisfy two objectives: the first is to maximize the expectation of life-cycle net present value (NPV) (commonly referred to as robust optimization), and the second is either to minimize the standard deviation of NPV over the set of plausible reservoir descriptions or to minimize risk, where risk refers to downside risk. Specifically, minimizing risk means maximizing the minimum value of the life-cycle NPV (i.e., it is equivalent to a maximum/minimum (max/min) problem). To avoid nondifferentiability issues, we recast the max/min problem as a constrained optimization problem and apply a gradient-based version of either WS or NBI to construct a point on the Pareto front. To deal with the constraints introduced, we derive an augmented-Lagrangian algorithm to find points on the Pareto front. To the best of our knowledge, the resulting versions of “constrained” WS and “constrained” NBI have not been presented previously in the scientific literature. The methodology is demonstrated for two synthetic reservoirs. We consider only bound constraints in this paper.
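The recast of the max/min problem as a constrained problem can be made concrete on a toy two-scenario example (hypothetical quadratics standing in for per-realization NPVs): introduce an auxiliary variable t, maximize t subject to f_j(x) >= t, and treat the resulting inequalities with an augmented Lagrangian, as in the sketch below.

```python
def maxmin_via_alm(mu=10.0, outer=40, inner=5000, lr=1e-3):
    """Max/min recast sketch: maximize t subject to f_j(x) >= t for
    f1(x) = -(x - 1)^2 and f2(x) = -(x + 1)^2 (toy stand-ins).
    The solution is x = 0, t = -1, with both constraints active.
    Inequalities g_j = t - f_j(x) <= 0 get augmented-Lagrangian terms."""
    fs  = [lambda x: -(x - 1.0) ** 2, lambda x: -(x + 1.0) ** 2]
    dfs = [lambda x: -2.0 * (x - 1.0), lambda x: -2.0 * (x + 1.0)]
    x, t = 0.5, -5.0
    lams = [0.0, 0.0]
    for _ in range(outer):
        for _ in range(inner):                    # inner minimization over (t, x)
            gt, gx = -1.0, 0.0                    # objective is -t (maximize t)
            for f, dfx, lam in zip(fs, dfs, lams):
                s = (t - f(x)) + lam / mu
                if s > 0:                         # active augmented-Lagrangian term
                    gt += mu * s
                    gx -= mu * s * dfx(x)
            t -= lr * gt
            x -= lr * gx
        lams = [max(0.0, lam + mu * (t - f(x)))   # multiplier updates
                for f, lam in zip(fs, lams)]
    return x, t

x_star, t_star = maxmin_via_alm()
```

The epigraph variable t removes the nondifferentiability of min_j f_j(x), which is exactly why the papers above recast the max/min problem before applying gradient-based WS or NBI.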


Author(s):  
Joachim Giesen ◽  
Soeren Laue

Many machine learning methods entail minimizing a loss function that is the sum of the losses for each data point. The form of the loss function is exploited algorithmically, for instance in stochastic gradient descent (SGD) and in the alternating direction method of multipliers (ADMM). However, there are also machine learning methods where the entailed optimization problem features the data points not in the objective function but in the form of constraints, typically one constraint per data point. Here, we address the problem of solving convex optimization problems with many convex constraints. Our approach is an extension of ADMM. The straightforward implementation of ADMM for solving constrained optimization problems in a distributed fashion solves constrained subproblems on different compute nodes, whose solutions are aggregated until a consensus solution is reached. Hence, the straightforward approach has three nested loops: one for reaching consensus, one for the constraints, and one for the unconstrained problems. Here, we show that solving the costly constrained subproblems can be avoided. In our approach, we combine the ability of ADMM to solve convex optimization problems in a distributed setting with the ability of the augmented Lagrangian method to solve constrained optimization problems. Consequently, our algorithm needs only two nested loops. We prove that it inherits the convergence guarantees of both ADMM and the augmented Lagrangian method. Experimental results corroborate our theoretical findings.
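A two-loop sketch in the spirit described here, with a toy problem and all parameters hypothetical: consensus ADMM across "nodes", where each node owns one inequality constraint and handles it with an augmented Lagrangian term, so every local subproblem is unconstrained.

```python
def consensus_admm_constrained(rho=1.0, rc=10.0, outer=500, inner=300, lr=0.01):
    """Toy two-node problem: minimize x^2 subject to x >= 1 and x >= 0.5
    (solution x = 1). Each node i holds a local copy x_i, an equal share
    x^2/2 of the objective, its own constraint g_i(x) <= 0 as an augmented
    Lagrangian term, and a scaled ADMM dual u_i for the consensus x_i = z."""
    g = [lambda x: 1.0 - x, lambda x: 0.5 - x]   # one constraint per node
    n = len(g)
    xs = [0.0] * n; us = [0.0] * n; ys = [0.0] * n
    z = 0.0
    for _ in range(outer):
        for i in range(n):
            x = xs[i]
            for _ in range(inner):               # unconstrained local subproblem
                s = g[i](x) + ys[i] / rc
                grad = x + rho * (x - z + us[i])  # grad of x^2/n share + ADMM term
                if s > 0:
                    grad -= rc * s                # augmented-Lagrangian term, g' = -1
                x -= lr * grad
            xs[i] = x
        z = sum(x + u for x, u in zip(xs, us)) / n           # consensus step
        us = [u + x - z for x, u in zip(xs, us)]             # ADMM dual step
        ys = [max(0.0, y + rc * gi(x))                       # constraint multipliers
              for gi, y, x in zip(g, ys, xs)]
    return z

z_star = consensus_admm_constrained()
```

Because the constraint multipliers ys are updated once per outer sweep rather than driven to convergence inside each node, the costly inner constrained solve (and with it the third nested loop) disappears, which is the structural point of the approach described above.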

