Heuristic Concentration: A Study of Stage One

10.1068/b2526 ◽  
2000 ◽  
Vol 27 (1) ◽  
pp. 137-150 ◽  
Author(s):  
Kenneth E Rosing

Heuristic concentration (HC) is a metaheuristic for the solution of certain combinatorial problems. In stage one, a concentration set (CS), consisting of nodes likely to be in the optimal solution, is developed by multiple runs of an interchange heuristic. In stage two, a good, usually optimal, solution is constructed by selecting the best nodes from the CS. The CS is effective when it is small but comprehensive. Both of these characteristics depend upon: (1) the robustness of the heuristic; (2) the number of times it is run, q; and (3) the number of "best" solutions used to create the CS, m. Stage two is thus entirely dependent upon the efficiency of stage one for the improved, and at least potentially optimal, solution. Proper values for the parameters m and q increase the probability of selecting the correct elements for constructing the optimal solution in stage two and decrease the work needed to identify it. After considering the robustness of two alternative interchange heuristics, I concentrate on appropriate values for the parameters m and q. This is an empirical examination, and the p-median problem is used throughout.
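Stage one of HC can be sketched in a few lines. The sketch below is an illustration of the general scheme on the p-median problem, not Rosing's published implementation: a simple vertex-substitution (interchange) heuristic is run q times from random starts, and the concentration set is the union of the m best solutions found. The instance and parameter values are invented for illustration.

```python
import random

def pmedian_cost(medians, dist):
    """Total distance from each node to its nearest chosen median."""
    return sum(min(dist[i][j] for j in medians) for i in range(len(dist)))

def interchange(dist, p, rng):
    """One run of a vertex-substitution heuristic from a random start:
    swap a median for a non-median while total cost improves."""
    n = len(dist)
    sol = set(rng.sample(range(n), p))
    improved = True
    while improved:
        improved = False
        for out in list(sol):
            for inn in range(n):
                if inn in sol:
                    continue
                cand = (sol - {out}) | {inn}
                if pmedian_cost(cand, dist) < pmedian_cost(sol, dist):
                    sol = cand
                    improved = True
                    break
            if improved:
                break
    return sol

def concentration_set(dist, p, q, m, seed=0):
    """Stage one of HC: run the interchange heuristic q times and take
    the union of the m best solutions as the concentration set."""
    rng = random.Random(seed)
    runs = sorted((interchange(dist, p, rng) for _ in range(q)),
                  key=lambda s: pmedian_cost(s, dist))
    cs = set()
    for sol in runs[:m]:
        cs |= sol
    return cs
```

Stage two would then solve the p-median problem restricted to the (much smaller) concentration set, exactly or heuristically.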

Author(s):  
Seamus M. McGovern ◽  
Surendra M. Gupta

NP-complete combinatorial problems often necessitate the use of near-optimal solution techniques including heuristics and metaheuristics. The addition of multiple optimization criteria can further complicate comparison of these solution techniques due to the decision-maker’s weighting schema potentially masking search limitations. In addition, many contemporary problems lack quantitative assessment tools, including benchmark data sets. This chapter proposes the use of lexicographic goal programming for use in comparing combinatorial search techniques. These techniques are implemented here using a recently formulated problem from the area of production analysis. The development of a benchmark data set and other assessment tools is demonstrated, and these are then used to compare the performance of a genetic algorithm and an H-K general-purpose heuristic as applied to the production-related application.
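The lexicographic comparison at the heart of such a goal-programming scheme fits in a few lines. This is a generic illustration of lexicographic goal programming, not the chapter's exact formulation; the goal values and objective tuples used below are made up. Python's built-in tuple ordering already compares lexicographically, so higher-priority goals are satisfied first and lower priorities only break ties.

```python
def lex_deviations(objectives, goals):
    """Deviation of each objective from its goal, in priority order.
    Assumes minimization: only shortfalls above the goal count."""
    return tuple(max(0, obj - goal) for obj, goal in zip(objectives, goals))

def lex_best(candidates, goals):
    """The candidate whose deviation vector is lexicographically smallest."""
    return min(candidates, key=lambda objs: lex_deviations(objs, goals))
```

Because the comparison is lexicographic rather than a weighted sum, no weighting schema can mask a failure on a higher-priority criterion, which is the property the chapter exploits for comparing search techniques.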


Author(s):  
Laurens Bliek ◽  
Sicco Verwer ◽  
Mathijs de Weerdt

When a black-box optimization objective can only be evaluated with costly or noisy measurements, most standard optimization algorithms are unsuited to finding the optimal solution. Specialized algorithms for exactly this situation make use of surrogate models. These models are usually continuous and smooth, which is beneficial for continuous optimization problems but not necessarily for combinatorial ones. However, we show that by choosing the basis functions of the surrogate model in a certain way, the optimal solution of the surrogate model can be guaranteed to be integer. This approach outperforms random search, simulated annealing, and a Bayesian optimization algorithm on the problem of finding robust routes for a noise-perturbed traveling salesman benchmark problem, performs similarly to another Bayesian optimization algorithm, and outperforms all compared algorithms on a convex binary optimization problem with a large number of variables.


Author(s):  
Joan Escamilla ◽  
Miguel A Salido

Manufacturing systems involve a huge number of combinatorial problems that must be optimized efficiently. One of these is the task scheduling problem. These problems are NP-hard, so most complete techniques cannot obtain an optimal solution efficiently. Furthermore, most real manufacturing problems are dynamic, so the main objective is not only to obtain a solution optimized in terms of makespan, tardiness, and so on, but also one able to absorb the minor incidences and disruptions that arise in daily operation. Most of these industries are also focused on improving the energy efficiency of their industrial processes. In this article, we propose a knowledge-based model that analyses previous incidences in the machines with the aim of modelling the problem to obtain robust and energy-aware solutions. The resultant model (called the dual model) protects the more dynamic and disrupted tasks by assigning buffer times. These buffers are used to absorb incidences during execution and to reduce the machine rate to minimize energy consumption. The model is solved by a memetic algorithm that combines a genetic algorithm with a local search to obtain robust and energy-aware solutions able to absorb further disruptions. The proposed dual model has proven efficient in terms of energy consumption, robustness, and stability on several well-known benchmarks.
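The memetic template (a genetic algorithm whose offspring are refined by embedded local search) can be sketched on a toy single-machine objective. This is a generic illustration of the algorithmic pattern only, not the paper's dual model: it has no buffer times or energy terms, and the problem instance is invented.

```python
import random

def total_completion(order, proc):
    """Sum of job completion times for a processing sequence."""
    t = total = 0
    for j in order:
        t += proc[j]
        total += t
    return total

def local_search(order, proc):
    """Pairwise-swap improvement: the 'local' half of the memetic scheme."""
    order = list(order)
    improved = True
    while improved:
        improved = False
        cost = total_completion(order, proc)
        for i in range(len(order) - 1):
            for k in range(i + 1, len(order)):
                order[i], order[k] = order[k], order[i]
                c = total_completion(order, proc)
                if c < cost:
                    cost, improved = c, True
                else:
                    order[i], order[k] = order[k], order[i]  # undo the swap
    return order

def memetic(proc, pop_size=8, gens=20, seed=0):
    """Genetic algorithm whose every offspring is polished by local search."""
    rng = random.Random(seed)
    n = len(proc)
    pop = [local_search(rng.sample(range(n), n), proc) for _ in range(pop_size)]
    for _ in range(gens):
        a, b = rng.sample(pop, 2)
        cut = rng.randrange(1, n)
        child = a[:cut] + [j for j in b if j not in a[:cut]]  # order crossover
        child = local_search(child, proc)                     # memetic step
        pop.sort(key=lambda o: total_completion(o, proc))
        pop[-1] = child                                       # replace the worst
    return min(pop, key=lambda o: total_completion(o, proc))
```

On this toy objective the shortest-processing-time order is optimal, so the sketch can be checked against a known answer; the paper's robustness and energy criteria would replace `total_completion` with a richer fitness function.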


2014 ◽  
Vol 2014 ◽  
pp. 1-12 ◽  
Author(s):  
Ľuboš Buzna ◽  
Michal Koháni ◽  
Jaroslav Janáček

We present a new approximation algorithm for the discrete facility location problem that provides solutions close to the lexicographic minimax optimum. The lexicographic minimax optimum is a concept that allows finding an equitable location of facilities serving a large number of customers. The algorithm is independent of general-purpose solvers and instead uses algorithms originally designed to solve the p-median problem. Through numerical experiments, we demonstrate that our algorithm increases the size of solvable problems and provides high-quality solutions. It found an optimal solution for all tested instances for which we could compare the results with an exact algorithm.
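The lexicographic minimax ordering itself is easy to state: sort each placement's customer distances worst-first and compare the resulting vectors lexicographically, so the largest distance is minimized first, then the second largest, and so on. A minimal sketch of that ordering (illustrative only; the paper's algorithm for searching the placement space is far more involved, and the instance below is invented):

```python
def lex_minimax_key(medians, dist):
    """Distances from each customer to its nearest open facility,
    sorted worst-first. Comparing these tuples lexicographically
    ranks placements by equity."""
    nearest = (min(dist[i][j] for j in medians) for i in range(len(dist)))
    return tuple(sorted(nearest, reverse=True))

def lex_minimax_best(placements, dist):
    """The most equitable placement among the given candidates."""
    return min(placements, key=lambda m: lex_minimax_key(m, dist))
```

Note how this differs from the plain p-median objective: a placement with a slightly larger total distance can still win if its worst-served customer is better off.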


Author(s):  
Julien Baste ◽  
Michael R. Fellows ◽  
Lars Jaffke ◽  
Tomáš Masařík ◽  
Mateus de Oliveira Oliveira ◽  
...  

When modeling an application of practical relevance as an instance of a combinatorial problem X, we are often interested not merely in finding one optimal solution for that instance, but in finding a sufficiently diverse collection of good solutions. In this work we initiate a systematic study of diversity from the point of view of fixed-parameter tractability theory. We consider an intuitive notion of diversity of a collection of solutions which suits a large variety of combinatorial problems of practical interest. Our main contribution is an algorithmic framework which automatically converts a tree-decomposition-based dynamic programming algorithm for a given combinatorial problem X into a dynamic programming algorithm for the diverse version of X. Surprisingly, our algorithm has a polynomial dependence on the diversity parameter.
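One natural instantiation of such a diversity measure, offered here as an illustration rather than as the authors' exact definition, is the sum of pairwise Hamming distances over solutions represented as sets of elements (e.g. vertex sets):

```python
from itertools import combinations

def diversity(solutions):
    """Sum of pairwise symmetric-difference sizes (Hamming distances)
    over a collection of solutions given as sets of elements."""
    return sum(len(a ^ b) for a, b in combinations(solutions, 2))
```

Under a measure of this shape, the diverse version of X asks for k good solutions maximizing `diversity`, and the framework's contribution is that the dynamic programming tables need only grow polynomially in the diversity target.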


2005 ◽  
Vol 15 (1) ◽  
pp. 53-63 ◽  
Author(s):  
Yuri Kochetov ◽  
Tatyana Levanova ◽  
Ekaterina Alekseeva ◽  
Maxim Loresh

In this paper we consider the well-known p-median problem. We introduce a new large neighborhood based on the ideas of S. Lin and B. W. Kernighan for the graph partition problem. We study the behavior of local improvement and Ant Colony algorithms with the new neighborhood. Computational experiments show that the local improvement algorithm with this neighborhood is fast and finds feasible solutions with small relative error. The Ant Colony algorithm with the new neighborhood, as a rule, finds an optimal solution for computationally difficult test instances.


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Haibin Wang ◽  
Jiaojiao Zhao ◽  
Bosi Wang ◽  
Lian Tong

A quantum approximate optimization algorithm (QAOA) is a polynomial-time approximate optimization algorithm used to solve combinatorial optimization problems. However, existing QAOA algorithms generalize poorly when searching for an optimal solution in the feasible solution set of a combinatorial problem. To address this, a quantum approximate optimization algorithm with metalearning for the MaxCut problem (MetaQAOA) is proposed. Specifically, a quantum neural network (QNN) is constructed in the form of a parameterized quantum circuit to detect different topological phases of matter, and a classical long short-term memory (LSTM) neural network is used as a black-box optimizer, which can quickly help the QNN find near-optimal QAOA parameters. Experimental simulation with TensorFlow Quantum (TFQ) shows that MetaQAOA requires fewer iterations to reach the threshold of the loss function, and that the loss value after training is smaller than that of the comparison methods. In addition, our algorithm can learn parameter-update heuristics that generalize to larger system sizes and, at this scale, still outperform other initialization strategies.


Author(s):  
Xiang Li ◽  
Christophe Claramunt ◽  
Xihui Zhang ◽  
Yingping Huang

Finding solutions to the p-median problem is one of the primary research issues in location theory. Since the p-median problem has been proved NP-hard, several heuristic and approximation methods have been proposed to find near-optimal solutions in acceptable computational time. This study introduces a computationally efficient and deterministic algorithm whose objective is to return a near-optimal solution to the p-median problem. The merit of the proposed approach, called Relocation Median (RLM), lies in solving the p-median problem in superior computational time with a tiny percentage deviation from the optimal solution. This is especially relevant when the problem is enormous and, even with a heuristic method, the computational time is high. RLM consists of two parts: the first uses a greedy search method to find an initial solution; the second sequentially substitutes medians in the initial solution with other vertices to reduce the total travel cost. Experiments show that, for the same p-median problem, RLM can significantly shorten the computational time needed by a genetic-algorithm-based approach to obtain solutions of similar quality.
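The two-part structure described in the abstract can be sketched as follows. This is a schematic reading of the abstract (greedy construction, then sequential median substitution), not the authors' published RLM code, and the instance in the usage check is invented.

```python
def cost(medians, dist):
    """Total travel cost: each node is served by its nearest median."""
    return sum(min(dist[i][j] for j in medians) for i in range(len(dist)))

def greedy_medians(dist, p):
    """Part one: greedily add the vertex that most reduces total cost."""
    sol = []
    while len(sol) < p:
        sol.append(min((v for v in range(len(dist)) if v not in sol),
                       key=lambda v: cost(sol + [v], dist)))
    return sol

def relocate(sol, dist):
    """Part two: sequentially substitute each median in the initial
    solution with any vertex that reduces the total travel cost."""
    sol = list(sol)
    for idx in range(len(sol)):
        for v in range(len(dist)):
            if v in sol:
                continue
            cand = sol[:idx] + [v] + sol[idx + 1:]
            if cost(cand, dist) < cost(sol, dist):
                sol = cand
    return sol
```

On a toy line instance the greedy phase lands near the optimum and a single relocation pass closes the gap, which mirrors the division of labor the abstract describes: a cheap deterministic start, then targeted substitutions.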

