Solving Optimization Problems: Linear and Quadratic Programming

Author(s):  
Eihab B. M. Bashier
Author(s):  
Krešimir Mihić ◽  
Mingxi Zhu ◽  
Yinyu Ye

Abstract The Alternating Direction Method of Multipliers (ADMM) has gained considerable attention for solving large-scale, objective-separable constrained optimization problems. However, the two-block variable structure of the ADMM still limits its practical computational efficiency, because at least one large matrix factorization is needed, even for linear and convex quadratic programming. This drawback may be overcome by enforcing a multi-block structure of the decision variables in the original optimization problem. Unfortunately, the multi-block ADMM, with more than two blocks, is not guaranteed to converge. On the other hand, two positive developments have been made. First, if in each cyclic loop one randomly permutes the updating order of the multiple blocks, then the method converges in expectation for solving any system of linear equations with any number of blocks. Second, such a randomly permuted ADMM also works for equality-constrained convex quadratic programming even when the objective function is not separable. The goal of this paper is twofold. First, we add more randomness into the ADMM by developing a randomly assembled cyclic ADMM (RAC-ADMM) in which the decision variables in each block are randomly assembled. We discuss the theoretical properties of RAC-ADMM, show when random assembling helps and when it hurts, and develop a criterion that guarantees almost-sure convergence. Second, using this theoretical guidance on RAC-ADMM, we conduct multiple numerical tests on solving both randomly generated and large-scale benchmark quadratic optimization problems, including continuous problems, binary graph-partition and quadratic-assignment problems, and selected machine-learning problems. Our numerical tests show that RAC-ADMM, with a variable-grouping strategy, can significantly improve computational efficiency on most quadratic optimization problems.
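For intuition, the random-assembly idea in this abstract can be sketched in its simplest setting, solving a system of linear equations: each cycle shuffles the variable indices into fresh blocks and performs exact block (Gauss-Seidel-style) updates. This toy sketch is our own illustration of the random-assembly step, not the paper's RAC-ADMM implementation:

```python
import numpy as np

def rac_block_solve(A, b, n_blocks=3, n_cycles=200, seed=0):
    """Toy sketch of randomly assembled cyclic block updates for A x = b:
    each cycle, the variable indices are shuffled and partitioned into
    blocks, and each block is updated by solving its block subsystem."""
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(n)
    for _ in range(n_cycles):
        perm = rng.permutation(n)               # random assembly of variables
        for blk in np.array_split(perm, n_blocks):
            # residual with this block's own contribution removed
            r = b - A @ x + A[:, blk] @ x[blk]
            x[blk] = np.linalg.solve(A[np.ix_(blk, blk)], r[blk])
    return x
```

For symmetric positive definite systems a fixed cyclic order already converges; the abstract's point is that random permutation or assembly keeps multi-block schemes convergent (in expectation or almost surely) in settings where a fixed cyclic order can fail.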


SPE Journal ◽  
2020 ◽  
Vol 25 (04) ◽  
pp. 1938-1963 ◽  
Author(s):  
Zhe Liu ◽  
Albert C. Reynolds

Summary Solving a large-scale optimization problem with nonlinear state constraints is challenging when adjoint gradients are not available for computing the derivatives needed in the basic optimization algorithm used. Here, we present a methodology for the solution of an optimization problem with nonlinear and linear constraints, where the true gradients that cannot be computed analytically are approximated by ensemble-based stochastic gradients using an improved stochastic simplex approximate gradient (StoSAG). Our discussion is focused on the application of our procedure to waterflooding optimization where the optimization variables are the well controls and the cost function is the life-cycle net present value (NPV) of production. The optimization algorithm used for solving the constrained-optimization problem is sequential quadratic programming (SQP) with constraints enforced using the filter method. We introduce modifications to StoSAG that improve its fidelity [i.e., the improvements give a more accurate approximation to the true gradient (assumed here to equal the gradient computed with the adjoint method) than the approximation obtained using the original StoSAG algorithm]. The modifications to StoSAG vastly improve the performance of the optimization algorithm; in fact, we show that if the basic StoSAG is applied without the improvements, then the SQP might yield a highly suboptimal result for optimization problems with nonlinear state constraints. For robust optimization, each constraint should be satisfied for every reservoir model, which is highly computationally intensive. However, the computationally viable alternative of letting the reservoir simulation enforce the nonlinear state constraints using its internal heuristics yields significantly inferior results. 
Thus, we develop an alternative procedure for handling nonlinear state constraints, which avoids explicit enforcement of nonlinear constraints for each reservoir model yet yields results where any constraint violation for any model is extremely small.
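The ensemble-based gradient idea above can be sketched in a few lines. This is a generic smoothed-gradient estimator in the StoSAG spirit; the function name, Gaussian perturbations, and normalization are chosen for illustration and are not the authors' improved algorithm:

```python
import numpy as np

def stosag_gradient(J, u, n_ens=50, sigma=0.1, seed=0):
    """Sketch of an ensemble-based stochastic gradient estimate of J at
    controls u: correlate an ensemble of Gaussian control perturbations
    with the resulting changes in the objective (e.g., life-cycle NPV)."""
    rng = np.random.default_rng(seed)
    base = J(u)
    g = np.zeros_like(u)
    for _ in range(n_ens):
        du = sigma * rng.standard_normal(u.shape)  # control perturbation
        g += (J(u + du) - base) * du               # perturbation-response correlation
    return g / (n_ens * sigma**2)                  # unbiased for quadratic J
```

Each ensemble member costs one simulation run, which is why such estimators are attractive when adjoint gradients are unavailable; the paper's modifications improve the fidelity of this kind of approximation.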


1974 ◽  
Vol 7 (3) ◽  
pp. 311-322 ◽  
Author(s):  
H. Schmitter ◽  
E. Straub

Abstract and Introduction Quadratic programming means maximizing or minimizing a quadratic function of one or more variables subject to linear restrictions, i.e., linear equations and/or inequalities. Among the numerous insurance problems which can be formulated as quadratic programs we shall only discuss four, namely the credibility, retention, IBNR, and cost distribution problems. Generally, there is no explicit solution to quadratic optimization problems; only statements about the existence of a solution can be made, or some algorithm may be recommended in order to obtain exact or approximate numerical solutions. Restricting ourselves to typical problems of the above-mentioned type, however, enables us to give an explicit solution in terms of general formulae for quite a number of cases, such as the one-dimensional credibility problem, the retention problem and, under relatively weak assumptions, the IBNR problem. The results given here are by no means new. The only goal of this paper is to describe a few fundamental insurance problems from a common mathematical standpoint, namely that of quadratic programming, and at the same time to draw attention to a few special aspects and open questions in this field.
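The definition above, restricted to equality constraints, reduces to a single linear (KKT) system; a minimal sketch, with inequality constraints left to an active-set or interior-point method:

```python
import numpy as np

def eq_qp(Q, c, A, b):
    """Minimize 0.5 x'Qx + c'x subject to A x = b by solving the KKT
    system [[Q, A'], [A, 0]] [x; lam] = [-c; b]; returns the primal
    solution x and the Lagrange multipliers lam."""
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-c, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]
```

For instance, minimizing 0.5(x1² + x2²) subject to x1 + x2 = 2 gives x = (1, 1); the explicit formulae the authors derive for the credibility and retention problems are closed-form solutions of systems of this kind.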


1991 ◽  
Vol 113 (3) ◽  
pp. 280-285 ◽  
Author(s):  
T. J. Beltracchi ◽  
G. A. Gabriele

The Recursive Quadratic Programming (RQP) method has become known as one of the most effective and efficient algorithms for solving engineering optimization problems. The RQP method uses variable metric updates to build approximations of the Hessian of the Lagrangian. If the approximation converges to the true Hessian of the Lagrangian, then the RQP method converges quadratically. The choice of variable metric update therefore has a direct effect on the convergence of the Hessian approximation. Most of the research performed with the RQP method uses some modification of the Broyden-Fletcher-Shanno (BFS) variable metric update. This paper describes a hybrid variable metric update that yields good approximations to the Hessian of the Lagrangian. The hybrid update combines the best features of the Symmetric Rank One (SR1) and BFS updates: it is less sensitive to inexact line searches than the BFS update and more stable than the SR1 update. Testing of the method shows that the efficiency of the RQP method is unaffected by the new update, while more accurate Hessian approximations are produced. This should increase the accuracy of the solutions obtained with the RQP method and, more importantly, provide more reliable information for post-optimality analyses, such as parameter sensitivity studies.
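A hybrid of this kind can be sketched as a guarded switch between the two updates; the switching rule and tolerance below are illustrative assumptions, not the authors' exact criterion:

```python
import numpy as np

def hybrid_update(B, s, y, r=1e-8):
    """Sketch of a hybrid variable metric update: apply the SR1 update
    when its denominator (y - Bs)'s is numerically safe, otherwise fall
    back to a BFS (BFGS-type) update.  B approximates the Hessian of the
    Lagrangian; s is the step and y the change in its gradient."""
    Bs = B @ s
    v = y - Bs
    if abs(v @ s) > r * np.linalg.norm(s) * np.linalg.norm(v):
        return B + np.outer(v, v) / (v @ s)          # SR1 step
    return (B - np.outer(Bs, Bs) / (s @ Bs)
              + np.outer(y, y) / (y @ s))            # BFS/BFGS step
```

Both branches satisfy the secant condition B_new s = y; the guard is the standard safeguard against a vanishing SR1 denominator, which is the usual source of the SR1 update's instability.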


2015 ◽  
Vol 1113 ◽  
pp. 370-375 ◽  
Author(s):  
Nur Amirah Mohd Ali ◽  
Norashid Aziz

In this work, two types of optimization problems that are crucially related to batch reactor operation are considered: the first is to maximize the conversion and the second is to minimize the batch time. Both problems are solved using the sequential quadratic programming (SQP) available in Aspen Plus. The manipulated variables, i.e., the reactor temperature and the amount of palm oil, are optimized simultaneously based on the specified objective function and equality constraint. The effect of the number of intervals on both optimization problems is also evaluated in this paper. The results show that, in maximizing conversion, the number of intervals does not significantly affect the amount of conversion. Meanwhile, in minimizing batch time, the introduction of intervals helped reduce the reactor temperature but had an adverse effect on minimizing the batch time.


1978 ◽  
Vol 18 (1) ◽  
pp. 65-75
Author(s):  
C.H. Scott ◽  
T.R. Jefferson

Duality is now a widely accepted and useful idea in the analysis of optimization problems posed in real finite-dimensional vector spaces. Although similar ideas have carried over to the analysis of optimization problems in complex space, these have mainly been concerned with problems of the linear and quadratic programming variety. In this paper we present a general duality theory for convex mathematical programs in finite-dimensional complex space and, by means of an example, show that this formulation captures all previous results in the area.


2012 ◽  
Vol 134 (10) ◽  
Author(s):  
Jianhua Zhou ◽  
Shuo Cheng ◽  
Mian Li

Uncertainty plays a critical role in engineering design, as even a small amount of uncertainty could make an optimal design solution infeasible. The goal of robust optimization is to find a solution that is both optimal and insensitive to the uncertainty that may exist in parameters and design variables. In this paper, a novel approach, sequential quadratic programming for robust optimization (SQP-RO), is proposed to solve single-objective continuous nonlinear optimization problems with interval uncertainty in parameters and design variables. This new SQP-RO is developed from a classic SQP procedure with additional calculations for constraints on objective robustness, feasibility robustness, or both. The obtained solution is locally optimal and robust. Eight numerical and engineering examples with different levels of complexity are used to demonstrate the applicability and efficiency of the proposed SQP-RO in comparison with its deterministic SQP counterpart and with RO approaches using genetic algorithms. The objective and/or feasibility robustness are verified via Monte Carlo simulations.
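The Monte Carlo verification step mentioned at the end can be sketched as sampling the interval uncertainty and recording the worst constraint value; the function name and uniform sampling are illustrative assumptions:

```python
import numpy as np

def feasibility_robustness(g, x, dx, n_samples=10000, seed=0):
    """Sketch of a Monte Carlo feasibility-robustness check: sample the
    design variables uniformly in the box x +/- dx and return the worst
    value of the constraint g; g <= 0 over all samples suggests the
    design is feasibility robust under the interval uncertainty."""
    rng = np.random.default_rng(seed)
    lo, hi = x - dx, x + dx
    samples = rng.uniform(lo, hi, size=(n_samples, len(x)))
    return max(g(u) for u in samples)
```

The same sampling loop, applied to the objective instead of a constraint, gives the corresponding check on objective robustness.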

