Optimization With Discrete Variables Via Recursive Quadratic Programming: Part 1—Concepts and Definitions

1989 ◽  
Vol 111 (1) ◽  
pp. 124-129 ◽  
Author(s):  
J. Z. Cha ◽  
R. W. Mayne

Although a variety of algorithms for discrete nonlinear programming have been proposed, techniques for solving discrete optimization problems remain far less mature than those for continuous optimization. This paper focuses on the recursive quadratic programming strategy, which has proven efficient and robust for continuous optimization. The procedure is adapted to a class of mixed discrete nonlinear programming problems and utilizes the analytical properties of the functions and constraints. This first part of the paper considers definitions, concepts, and possible convergence criteria. Part II includes the development and testing of the algorithm.
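The class of mixed discrete nonlinear programming problems treated in the paper can be stated, in a generic form (the paper's own notation may differ), as:

```latex
\begin{aligned}
\min_{x}\quad & f(x), \qquad x = (x_d,\, x_c) \\
\text{s.t.}\quad & g_j(x) \le 0, \quad j = 1,\dots,m, \\
& h_k(x) = 0, \quad k = 1,\dots,l, \\
& x_d \in D \ \text{(discrete components, each from a finite set)}, \\
& x_c \in \mathbb{R}^{n_c} \ \text{(continuous components)}.
\end{aligned}
```

The "mixed" character comes from the split of $x$ into discrete and continuous parts; a recursive quadratic programming method must handle the discrete components outside the standard continuous machinery.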


1989 ◽  
Vol 111 (1) ◽  
pp. 130-136 ◽  
Author(s):  
J. Z. Cha ◽  
R. W. Mayne

A discrete recursive quadratic programming algorithm is developed for a class of mixed discrete constrained nonlinear programming (MDCNP) problems. The symmetric rank one (SR1) Hessian update formula is used to generate second-order information. Strategies such as the watchdog technique (WT), monotonicity analysis (MA), contour analysis (CA), and restoration of feasibility are also considered. Heuristic aspects of handling discrete variables are treated via the concepts and convergence discussions of Part I. This paper summarizes the details of the algorithm and its implementation. Test results for 25 different problems are presented to allow evaluation of the approach and provide a basis for performance comparison. The results show that the suggested method is promising: efficient and robust for MDCNP problems.
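The SR1 Hessian update named in the abstract can be sketched as follows (plain-Python linear algebra on small vectors; the safeguard tolerance and skip rule are illustrative assumptions, not the paper's exact rule):

```python
def matvec(B, v):
    """Multiply a square matrix (list of rows) by a vector."""
    return [sum(b * x for b, x in zip(row, v)) for row in B]

def sr1_update(B, s, y, tol=1e-8):
    """Symmetric rank one update:
        B+ = B + (y - B s)(y - B s)^T / ((y - B s)^T s),
    where s is the step and y the change in the Lagrangian gradient.
    The update is skipped when the denominator is near zero (the
    standard SR1 safeguard against instability)."""
    r = [yi - Bsi for yi, Bsi in zip(y, matvec(B, s))]
    denom = sum(ri * si for ri, si in zip(r, s))
    if abs(denom) < tol:
        return B  # skip: the rank-one correction would blow up
    return [[Bij + ri * rj / denom for Bij, rj in zip(row, r)]
            for row, ri in zip(B, r)]

# When the update is taken, B+ satisfies the secant condition B+ s = y.
B = [[1.0, 0.0], [0.0, 1.0]]
B1 = sr1_update(B, [1.0, 0.0], [2.0, 1.0])
```

Unlike rank-two updates, SR1 does not force the approximation to stay positive definite, which is one reason the paper pairs it with safeguarding strategies.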


1996 ◽  
Vol 4 (1) ◽  
pp. 1-32 ◽  
Author(s):  
Zbigniew Michalewicz ◽  
Marc Schoenauer

Evolutionary computation techniques have received a great deal of attention for their potential as optimization techniques for complex numerical functions. However, they have not produced a significant breakthrough in nonlinear programming because they have not addressed the issue of constraints in a systematic way. Only recently have several methods been proposed for handling nonlinear constraints in evolutionary algorithms for numerical optimization problems; these methods have several drawbacks, however, and the experimental results on many test cases have been disappointing. In this paper we (1) discuss difficulties connected with solving the general nonlinear programming problem; (2) survey several approaches that have emerged in the evolutionary computation community; and (3) provide a set of 11 interesting test cases that may serve as a handy reference for future methods.
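The simplest constraint-handling approach in this line of work is a static penalty: constraint violation is folded into the fitness so an unconstrained evolutionary algorithm can be applied directly. A minimal sketch (the penalty weight `R` and the quadratic penalty form are illustrative choices, not a method from the survey):

```python
def penalized_fitness(f, gs, x, R=1e3):
    """Static-penalty fitness for minimization:
        f(x) + R * sum(max(0, g_j(x))**2)
    over inequality constraints g_j(x) <= 0.  Feasible points are
    charged nothing; violations are charged quadratically."""
    violation = sum(max(0.0, g(x)) ** 2 for g in gs)
    return f(x) + R * violation

# Toy problem: minimize x0 + x1 subject to x0 + x1 >= 1,
# written as g(x) = 1 - x0 - x1 <= 0.
f = lambda x: x[0] + x[1]
gs = [lambda x: 1.0 - x[0] - x[1]]
feasible = penalized_fitness(f, gs, [0.6, 0.6])    # no penalty term
infeasible = penalized_fitness(f, gs, [0.0, 0.0])  # heavily penalized
```

The drawbacks the paper alludes to show up immediately: a fixed `R` that is too small lets infeasible points win, while one that is too large flattens the landscape near the boundary, which motivates the more systematic techniques surveyed.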


1991 ◽  
Vol 113 (3) ◽  
pp. 280-285 ◽  
Author(s):  
T. J. Beltracchi ◽  
G. A. Gabriele

The Recursive Quadratic Programming (RQP) method has become known as one of the most effective and efficient algorithms for solving engineering optimization problems. The RQP method uses variable metric updates to build approximations of the Hessian of the Lagrangian. If the approximation converges to the true Hessian of the Lagrangian, then the RQP method converges quadratically. The choice of variable metric update therefore has a direct effect on the convergence of the Hessian approximation. Most research on the RQP method uses some modification of the Broyden-Fletcher-Shanno (BFS) variable metric update. This paper describes a hybrid variable metric update that yields good approximations to the Hessian of the Lagrangian. The hybrid update combines the best features of the Symmetric Rank One (SR1) and BFS updates: it is less sensitive to inexact line searches than the BFS update and more stable than the SR1 update. Testing of the method shows that the efficiency of the RQP method is unaffected by the new update, but more accurate Hessian approximations are produced. This should increase the accuracy of the solutions obtained with the RQP method and, more importantly, provide more reliable information for post-optimality analyses, such as parameter sensitivity studies.
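A plausible sketch of such a hybrid rule, assuming SR1 is used whenever its denominator is safely bounded away from zero and the rank-two BFS formula is used otherwise (the paper's actual switching criterion may differ):

```python
def matvec(B, v):
    """Multiply a square matrix (list of rows) by a vector."""
    return [sum(b * x for b, x in zip(row, v)) for row in B]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def hybrid_update(B, s, y, eps=1e-8):
    """Hybrid variable metric update (illustrative switching rule):
    use the SR1 formula when its denominator (y - B s)^T s is well
    conditioned, otherwise fall back to the BFS rank-two formula
        B+ = B - (B s)(B s)^T / (s^T B s) + y y^T / (y^T s)."""
    Bs = matvec(B, s)
    r = [yi - bi for yi, bi in zip(y, Bs)]
    sr1_denom = dot(r, s)
    if abs(sr1_denom) >= eps * max(dot(s, s), 1.0):
        # SR1 branch: symmetric rank-one correction
        return [[Bij + ri * rj / sr1_denom for Bij, rj in zip(row, r)]
                for row, ri in zip(B, r)]
    # BFS branch: standard rank-two correction
    sBs, ys = dot(s, Bs), dot(y, s)
    return [[Bij - Bsi * Bsj / sBs + yi * yj / ys
             for Bij, Bsj, yj in zip(row, Bs, y)]
            for row, Bsi, yi in zip(B, Bs, y)]
```

Both branches satisfy the secant condition when taken, so the hybrid keeps the curvature information while avoiding the denominators that make plain SR1 unstable.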


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Akemi Gálvez ◽  
Andrés Iglesias

A classical issue in many applied fields is to obtain an approximating surface to a given set of data points. This problem arises in Computer-Aided Design and Manufacturing (CAD/CAM), virtual reality, medical imaging, computer graphics, computer animation, and many others. Very often, the preferred approximating surface is polynomial, usually described in parametric form. This leads to the problem of determining suitable parametric values for the data points, the so-called surface parameterization. In real-world settings, data points are generally irregularly sampled and subjected to measurement noise, leading to a very difficult nonlinear continuous optimization problem, unsolvable with standard optimization techniques. This paper solves the parameterization problem for polynomial Bézier surfaces by applying the firefly algorithm, a powerful nature-inspired metaheuristic algorithm introduced recently to address difficult optimization problems. The method has been successfully applied to some illustrative examples of open and closed surfaces, including shapes with singularities. Our results show that the method performs very well, being able to yield the best approximating surface with a high degree of accuracy.
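The core firefly move, in which each candidate solution is attracted toward brighter (better) ones with an attractiveness that decays with distance, can be sketched as follows (the parameters `beta0`, `gamma`, `alpha`, the search box, and the toy objective are illustrative assumptions, not the paper's settings):

```python
import math
import random

def firefly_minimize(f, dim, n=12, iters=60, beta0=1.0, gamma=1.0,
                     alpha=0.2, seed=0):
    """Minimal firefly algorithm for continuous minimization.
    Firefly i moves toward every brighter firefly j by
        x_i += beta0 * exp(-gamma * r_ij^2) * (x_j - x_i) + alpha * (rand - 0.5).
    Returns the best point ever visited and the best initial fitness."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-2.0, 2.0) for _ in range(dim)] for _ in range(n)]
    best = list(min(xs, key=f))
    f0 = f(best)  # best fitness in the initial population
    for _ in range(iters):
        fit = [f(x) for x in xs]
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:  # j is "brighter" (lower objective)
                    r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    xs[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                             for a, b in zip(xs[i], xs[j])]
                    fit[i] = f(xs[i])
                    if fit[i] < f(best):
                        best = list(xs[i])
    return best, f0

best, f0 = firefly_minimize(lambda x: sum(v * v for v in x), dim=2)
```

In the surface-parameterization setting, each firefly would encode a candidate vector of parameter values for the data points, and `f` would measure the Bézier fitting error.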


Author(s):  
Haoyu Wang ◽  
Nan Shao ◽  
Defu Lian

Fast item recommendation based on implicit feedback is vital in practical scenarios because of the abundance of data, but challenging because of the lack of negative samples and the large number of candidate items. Recent adversarial methods unifying generative and discriminative models are promising, since the generative model, acting as a negative sampler, gradually improves as iteration continues. However, binary-valued generative models remain unexplored within the min-max framework, despite their importance for accelerating item recommendation. Optimizing binary-valued models is difficult because they are non-smooth and non-differentiable. To this end, we propose two novel methods that relax the binarization, based on the error function and the Gumbel trick, so that the generative model can be optimized by many popular solvers, such as SGD and ADMM. The binary-valued generative model is then evaluated within the min-max framework on four real-world datasets and shown to be superior to competing hashing-based recommendation algorithms. In addition, our proposed framework can approximate discrete variables precisely and can be applied to other discrete optimization problems.
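The Gumbel-trick relaxation of a binary variable mentioned above is commonly realized as a "Gumbel-sigmoid": Gumbel noise is added to the logit and squashed through a temperature-controlled sigmoid, giving a differentiable surrogate for a {0, 1} sample. A minimal sketch (the temperature value and usage are illustrative; the paper's exact construction may differ):

```python
import math
import random

def _sigmoid(z):
    """Numerically stable logistic function."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def gumbel_sigmoid(logit, tau=0.5, rng=random):
    """Differentiable relaxation of a Bernoulli(sigmoid(logit)) sample.
    g1 and g0 are independent Gumbel(0, 1) noises; as tau -> 0 the
    output hardens toward 0/1, while tau > 0 keeps it smooth enough
    for gradient-based solvers such as SGD."""
    u1 = max(rng.random(), 1e-12)  # guard against log(0)
    u0 = max(rng.random(), 1e-12)
    g1 = -math.log(-math.log(u1))
    g0 = -math.log(-math.log(u0))
    return _sigmoid((logit + g1 - g0) / tau)
```

During training the relaxed value stands in for the hard bit, so gradients flow through `logit`; at inference the bit can be taken as `logit > 0` (or by thresholding the relaxed sample).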


2019 ◽  
Vol 10 (1) ◽  
pp. 62-74
Author(s):  
Rashmi Welekar ◽  
Nileshsingh V. Thakur

This article describes memetic algorithms (MAs), a category of methods modeling cultural evolution in much the way genetic algorithms are inspired by the natural process of evolution. The MA concept has been actively discussed over the last few years, adding new dimensions to the computational capabilities of such algorithms. Many optimization algorithms fully exploit the structure of the problem under consideration; this article presents a heuristic approach to an improved algorithm that considers various optimization parameters in isolation and integrates the self-learning character of MAs. The general MA structure proposed here is intended to parallel neurologically observed brain activity, placing maximum emphasis on local search and context-based predictive approaches rather than mathematically computing every event and picking solutions from the results of a fixed formula. The article goes a step beyond the conventional range of problem domains (discrete, continuous, constrained, and multi-objective optimization) in which MAs have been successfully implemented. These optimization techniques are processed using the outcomes of predictive optimization, applying a method of elimination to make the search set progressively smaller as the search deepens. Comprehensive reviews of MAs remain scarce in the literature. The proposed technique is well suited to solving combinatorial optimization problems. The article gives an overview of the domains and problem types in which MAs can be used and, in addition, discusses character recognition using predictive optimization and an elimination-theory MA.
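The defining MA ingredient, a local search ("individual learning") step applied to offspring inside an evolutionary loop, can be sketched on a toy bit-string problem (one-max; the operators, rates, and population sizes are illustrative assumptions, not the article's algorithm):

```python
import random

def one_max(bits):
    """Toy fitness: number of 1 bits (to be maximized)."""
    return sum(bits)

def local_search(bits, f):
    """Bit-flip hill climbing -- the individual-learning ('meme') step."""
    best = list(bits)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            cand = list(best)
            cand[i] ^= 1
            if f(cand) > f(best):
                best, improved = cand, True
    return best

def memetic(f, n_bits=20, pop=8, gens=10, seed=0):
    """Skeleton memetic algorithm: evolutionary loop + local refinement."""
    rng = random.Random(seed)
    P = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop)]
    for _ in range(gens):
        a, b = sorted(rng.sample(P, 4), key=f)[-2:]  # tournament selection
        child = [x if rng.random() < 0.5 else y      # uniform crossover
                 for x, y in zip(a, b)]
        child[rng.randrange(n_bits)] ^= 1            # point mutation
        child = local_search(child, f)               # memetic refinement
        P.remove(min(P, key=f))                      # replace the worst
        P.append(child)
    return max(P, key=f)

best = memetic(one_max)
```

The interplay is the point: the evolutionary operators explore, while the hill-climbing "meme" exploits, which is exactly the local-search emphasis the article argues for.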




