Acceleration of Global Optimization Algorithm by Detecting Local Extrema Based on Machine Learning

Entropy ◽  
2021 ◽  
Vol 23 (10) ◽  
pp. 1272
Author(s):  
Konstantin Barkalov ◽  
Ilya Lebedev ◽  
Evgeny Kozinov

This paper features the study of global optimization problems and numerical methods for their solution. Such problems are computationally expensive, since the objective function can be multi-extremal, nondifferentiable, and, as a rule, given in the form of a “black box”. This study used a deterministic algorithm for finding the global extremum, based neither on the multistart concept nor on nature-inspired heuristics. The article provides the computational rules of the one-dimensional algorithm and the nested optimization scheme that extends it to multidimensional problems. Note that the complexity of solving global optimization problems depends essentially on the presence of multiple local extrema. In this paper, we apply machine learning methods to identify the regions of attraction of local minima. Using local optimization algorithms in the selected regions can significantly accelerate the convergence of the global search, as it reduces the number of search trials in the vicinity of local minima. The results of computational experiments carried out on several hundred global optimization problems of different dimensionalities confirm the effect of accelerated convergence (in terms of the number of search trials required to solve a problem with a given accuracy).
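
As an illustration of the basin-detection idea, consider a minimal sketch (not the authors' deterministic algorithm: the detection rule, the test function, and the SciPy local solver are assumptions): a coarse pass of search trials is screened for points lying below both neighbors, and local optimization is launched only in those neighborhoods.

```python
# Minimal sketch: screen coarse global trials for candidate local-minimum
# neighbourhoods, then refine only there with a cheap bounded local search.
# The test function and detection rule are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

def f(x):
    return np.sin(3 * x) + 0.1 * x**2          # multi-extremal "black box"

xs = np.linspace(-5, 5, 101)                    # coarse global search trials
ys = f(xs)

# Interior trials below both neighbours mark candidate regions of attraction.
idx = [i for i in range(1, len(xs) - 1) if ys[i] < ys[i - 1] and ys[i] < ys[i + 1]]

# Local optimization only inside the detected regions saves global trials.
results = [minimize_scalar(f, bounds=(xs[i - 1], xs[i + 1]), method="bounded")
           for i in idx]
best = min(results, key=lambda r: r.fun)
print(f"estimated global minimum: x = {best.x:.4f}, f = {best.fun:.4f}")
```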

Author(s):  
Vladimir Norkin

The paper presents the results of testing the stochastic smoothing method for the global optimization of a multiextremal function over a convex feasible subset of Euclidean space. First, the objective function is extended outside the admissible region so that its global minimum does not change and the function becomes coercive. The smoothing of the function at a point is carried out by averaging its values over a neighborhood of that point; the size of the neighborhood is the smoothing parameter. Smoothing eliminates small local extrema of the original function, and for a sufficiently large value of the smoothing parameter the averaged function can have only one minimum. The smoothing method consists of replacing the original function with a sequence of smoothed approximations whose smoothing parameter vanishes to zero, and optimizing these functions by contemporary stochastic optimization methods. Passing from the minimum of one smoothed function to a nearby minimum of the next, one can gradually arrive at the region of the global minimum of the original function. The smoothing method is also applicable to the optimization of nonsmooth nonconvex functions. It is shown that the method reliably solves low-dimensional global optimization test problems from the literature.
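
A minimal one-dimensional sketch of the scheme, assuming Gaussian averaging and a simple stochastic gradient step (the test function, step sizes, and sample counts are illustrative assumptions, not the tested implementation):

```python
# Sketch: optimize a sequence of smoothed approximations F_sigma with a
# vanishing smoothing parameter sigma, using a Monte Carlo gradient
# estimator  grad F_sigma(x) ~ E[(f(x + sigma*u) - f(x)) * u] / sigma.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return x**2 + 2.0 * np.sin(8 * x)        # small local extrema on a parabola

def smoothed_grad(x, sigma, n=2000):
    u = rng.standard_normal(n)                # Gaussian neighbourhood samples
    return np.mean((f(x + sigma * u) - f(x)) * u) / sigma

x = 3.0
for sigma in [1.0, 0.5, 0.2, 0.05]:          # smoothing parameter -> 0
    for _ in range(200):                      # stochastic descent per stage
        x -= 0.02 * sigma * smoothed_grad(x, sigma)
print(f"approximate global minimizer: x = {x:.3f}, f(x) = {f(x):.3f}")
```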


2017 ◽  
Vol 26 (2) ◽  
pp. 287-300
Author(s):  
Shikha Mehta

Nature-inspired algorithms are seen as potential tools for solving large-scale global optimization problems. Memetic algorithms (MAs) are nature-inspired techniques based on evolutionary computation; they can be viewed as genetic algorithms integrated with a local search mechanism. Conventional MAs perform well for small dimensions; however, their performance starts declining as the dimensionality increases, a phenomenon popularly known as the “curse of dimensionality”. To address this problem, an MA with constrained local search (MACLS) is proposed for single-objective optimization problems. MACLS restricts the local search rather than performing it after every generation; this controlled local search enhances the optimization capability of the MA. MACLS is evaluated against GS-MPSO (the latest modification of MA) and against the MLCC, EPUS-PSO, JDEdynNP-F, MTS, DewSAcc, DMS-PSO, LSEDA-gl, UEP, ALPSEA, classical DE (differential evolution), and real-coded CHC algorithms that participated in the Congress on Evolutionary Computation 2008 competition. The results establish that MACLS significantly outperforms these algorithms in attaining global optima for unimodal and multimodal single-objective optimization problems in small as well as large dimensions.
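
The gating idea behind constrained local search can be sketched as follows (a toy genetic loop with an illustrative objective and operators; MACLS itself uses different operators and benchmarks): the costly local refinement runs only every k generations instead of after each one.

```python
# Sketch of a memetic loop with constrained (gated) local search: the
# Nelder-Mead refinement fires only every LS_EVERY generations.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
DIM, POP, GENS, LS_EVERY = 20, 40, 100, 10

def f(x):
    return float(np.sum(x**2))                       # illustrative objective

pop = rng.uniform(-5, 5, (POP, DIM))
for gen in range(GENS):
    fit = np.array([f(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:POP // 2]]        # truncation selection
    children = parents + rng.normal(0, 0.3, parents.shape)  # Gaussian mutation
    pop = np.vstack([parents, children])
    if gen % LS_EVERY == 0:                          # constrained local search
        best = pop[np.argmin([f(ind) for ind in pop])]
        res = minimize(f, best, method="Nelder-Mead", options={"maxfev": 200})
        pop[0] = res.x                               # Lamarckian write-back
print("best fitness:", min(f(ind) for ind in pop))
```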


2010 ◽  
Vol 145 ◽  
pp. 533-536
Author(s):  
Yan Dong Song

The motor is the power source of the windscreen wiper and the core of the entire wiper system, and it is subject to very high quality requirements. The wiper motor mounted at the front windshield and its worm gear are generally built as a single mechanical part, and the four-bar linkage of the automobile windscreen wiper is driven by the motor through the worm gear. Simulated annealing is a technique that has attracted significant attention as suitable for large-scale optimization problems, especially ones where a desired global extremum is hidden among many poorer local extrema. A simulated annealing algorithm with a penalty strategy is adopted to solve the optimal model of the windscreen wiper mechanism; as a result, the motion accuracy of the wiper mechanism is greatly increased and the number of search steps is greatly decreased.
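
A minimal sketch of the penalty strategy (the toy objective and constraint stand in for the wiper-linkage accuracy model, which is not reproduced here): constraint violations are added to the cost as a weighted penalty, so infeasible moves are discouraged rather than forbidden.

```python
# Sketch: simulated annealing with a penalty term and geometric cooling.
import math, random

random.seed(0)

def objective(x):
    return math.sin(5 * x) + 0.2 * x * x        # multi-extremal toy cost

def penalty(x):
    return max(0.0, x - 2.0) ** 2               # soft constraint x <= 2

def cost(x, w=50.0):
    return objective(x) + w * penalty(x)        # penalized objective

x, T = 4.0, 1.0
best_x, best_c = x, cost(x)
while T > 1e-3:
    cand = x + random.uniform(-0.5, 0.5)        # random neighbour move
    d = cost(cand) - cost(x)
    if d < 0 or random.random() < math.exp(-d / T):  # Metropolis acceptance
        x = cand
        if cost(x) < best_c:
            best_x, best_c = x, cost(x)
    T *= 0.995                                  # geometric cooling schedule
print(f"best x = {best_x:.3f}, cost = {best_c:.3f}")
```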


2013 ◽  
Vol 32 (4) ◽  
pp. 981-985
Author(s):  
Ya-fei HUANG ◽  
Xi-ming LIANG ◽  
Yi-xiong CHEN

2020 ◽  
Author(s):  
Alberto Bemporad ◽  
Dario Piga

This paper proposes a method for solving optimization problems in which the decision-maker cannot evaluate the objective function, but can only express a preference such as “this is better than that” between two candidate decision vectors. The algorithm aims at reaching the global optimizer by iteratively proposing to the decision maker a new comparison to make, based on actively learning a surrogate of the latent (unknown and perhaps unquantifiable) objective function from past sampled decision vectors and pairwise preferences. A radial-basis-function surrogate is fit via linear or quadratic programming, satisfying, if possible, the preferences expressed by the decision maker on existing samples. The surrogate is used to propose a new sample of the decision vector for comparison with the current best candidate, based on two possible criteria: minimize a combination of the surrogate and an inverse distance weighting function, to balance exploitation of the surrogate against exploration of the decision space, or maximize a function related to the probability that the new candidate will be preferred. Compared to active preference learning based on Bayesian optimization, we show that our approach is competitive: within the same number of comparisons, it usually approaches the global optimum more closely and is computationally lighter. Applications of the proposed algorithm to a set of benchmark global optimization problems, to multi-objective optimization, and to the optimal tuning of a cost-sensitive neural network classifier for object recognition from images are described in the paper. MATLAB and Python implementations of the algorithms are available at http://cse.lab.imtlucca.it/~bemporad/glis.
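
The first acquisition criterion can be sketched as follows (assumptions: the surrogate here interpolates stand-in scalar scores instead of being fit from preferences via LP/QP, and the sample data are fabricated for illustration); the inverse-distance-weighting term rewards sampling far from existing points.

```python
# Sketch: next query = argmin of surrogate(x) - delta * IDW exploration term.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

X = np.array([[-2.0], [0.5], [1.5], [3.0]])     # past decision vectors
y = np.array([4.1, 0.3, 1.2, 8.9])              # stand-in latent scores

surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")

def idw(x):
    d2 = np.sum((X - x) ** 2, axis=1)           # squared distances to samples
    if np.any(d2 == 0):
        return 0.0                              # no exploration value at samples
    return (2 / np.pi) * np.arctan(1.0 / np.sum(1.0 / d2))

def acquisition(x, delta=1.0):
    return float(surrogate(x.reshape(1, -1))[0]) - delta * idw(x)

res = differential_evolution(acquisition, bounds=[(-3, 4)], seed=0)
print("next sample to compare with the incumbent:", res.x)
```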


Mathematics ◽  
2021 ◽  
Vol 9 (13) ◽  
pp. 1477
Author(s):  
Chun-Yao Lee ◽  
Guang-Lin Zhuo

This paper proposes a hybrid whale optimization algorithm (WOA), derived from the genetic and thermal exchange optimization-based whale optimization algorithm (GWOA-TEO), to enhance global optimization capability. First, a high-quality initial population is generated to improve the performance of GWOA-TEO. Then, thermal exchange optimization (TEO) is applied to improve exploitation performance. Next, a memory that stores historical best-so-far solutions is introduced, achieving higher performance without additional computational cost. Finally, a memory-based crossover operator and a memory-based position update mechanism for the leading solution are proposed to improve exploration performance. The GWOA-TEO algorithm is then compared with five state-of-the-art optimization algorithms on the CEC 2017 benchmark test functions and 8 UCI repository datasets. The statistical results on the CEC 2017 benchmark test functions show that the GWOA-TEO algorithm has good accuracy for global optimization, and the classification results on the 8 UCI repository datasets show that it is competitive with the comparison algorithms in recognition rate. The proposed algorithm thus demonstrates excellent performance in solving optimization problems.
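
For orientation, here is a sketch of the baseline WOA position update that GWOA-TEO builds on (the paper's initialization, TEO step, memory, and crossover are omitted; the sphere function and parameter choices are assumptions):

```python
# Sketch of the core WOA update: agents encircle the best-so-far solution
# or spiral toward it, with coefficient a decaying linearly from 2 to 0.
import numpy as np

rng = np.random.default_rng(2)
DIM, POP, ITERS = 10, 30, 200

def f(x):
    return float(np.sum(x**2))                  # sphere test function

pos = rng.uniform(-10, 10, (POP, DIM))
best = min(pos, key=f).copy()
for t in range(ITERS):
    a = 2 * (1 - t / ITERS)                     # linearly decreasing coefficient
    for i in range(POP):
        A = 2 * a * rng.random(DIM) - a
        C = 2 * rng.random(DIM)
        if rng.random() < 0.5:                  # shrinking encircling move
            pos[i] = best - A * np.abs(C * best - pos[i])
        else:                                   # logarithmic spiral move
            l = rng.uniform(-1, 1)
            pos[i] = np.abs(best - pos[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
        if f(pos[i]) < f(best):
            best = pos[i].copy()
print("best fitness:", f(best))
```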


Energies ◽  
2021 ◽  
Vol 14 (4) ◽  
pp. 1055
Author(s):  
Qian Sun ◽  
William Ampomah ◽  
Junyu You ◽  
Martha Cather ◽  
Robert Balch

Machine-learning technologies have demonstrated robust capabilities in solving many petroleum engineering problems. Their accurate predictions and fast computation enable large volumes of time-consuming engineering processes, such as history matching and field development optimization. The Southwest Regional Partnership on Carbon Sequestration (SWP) project requires rigorous history-matching and multi-objective optimization processes, which suit the strengths of machine-learning approaches. Although machine-learning proxy models are trained and validated before being applied to practical problems, their error margin inevitably introduces uncertainty into the results. In this paper, a hybrid numerical machine-learning workflow for solving various optimization problems is presented. By coupling expert machine-learning proxies with a global optimizer, the workflow solves the history-matching and CO2 water-alternating-gas (WAG) design problems with low computational overhead. The history-matching work accounts for the heterogeneity of multiphase relative permeability characteristics, and the CO2-WAG injection design takes multiple techno-economic objective functions into account. This work trained an expert response surface, a support vector machine, and a multi-layer neural network as proxy models to effectively learn the high-dimensional nonlinear data structure. The proposed workflow also revisits the high-fidelity numerical simulator for validation purposes. The experience gained from this work provides valuable guidance for similar CO2 enhanced oil recovery (EOR) projects.
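
The proxy-plus-optimizer pattern can be sketched as follows (assumptions: an analytic toy function stands in for the reservoir simulator, a single MLP stands in for the expert proxies, and the variable bounds are illustrative); note the final high-fidelity check the workflow recommends.

```python
# Sketch: train a proxy on simulator samples, optimize over the proxy with a
# global optimizer, then re-validate the optimum on the "simulator" itself.
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import differential_evolution

rng = np.random.default_rng(3)

def simulator(x):                                # stand-in full-physics model
    return np.sum((x - 0.3) ** 2, axis=-1) + 0.1 * np.sin(10 * x[..., 0])

X = rng.uniform(0, 1, (500, 4))                  # e.g. WAG control variables
y = simulator(X)

proxy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                     random_state=0).fit(X, y)

res = differential_evolution(lambda x: proxy.predict(x.reshape(1, -1))[0],
                             bounds=[(0, 1)] * 4, seed=0)

# Revisit the high-fidelity model to validate the proxy optimum.
print("proxy optimum:", res.fun, " simulator check:", simulator(res.x))
```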


Mathematics ◽  
2021 ◽  
Vol 9 (16) ◽  
pp. 1840
Author(s):  
Nicolás Caselli ◽  
Ricardo Soto ◽  
Broderick Crawford ◽  
Sergio Valdivia ◽  
Rodrigo Olivares

Metaheuristics are intelligent problem-solvers that have been very efficient at solving huge optimization problems for more than two decades. However, the main drawback of these solvers is the need for problem-dependent and complex parameter setting to reach good results. This paper presents a new cuckoo search algorithm able to self-adapt its configuration, particularly its population size and the abandonment probability. The self-tuning process is governed by machine learning: cluster analysis is employed to autonomously and properly compute the number of agents needed at each step of the solving process. The goal is to explore the space of possible solutions efficiently while alleviating the human effort of parameter configuration. We illustrate the experimental results on the well-known set covering problem, where the proposed approach competes against various state-of-the-art algorithms, achieving better results in a single run than 20 different configurations. In addition, the results are compared with those of similar hybrid bio-inspired algorithms, further illustrating the merits of this proposal.
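
One way a cluster signal could drive population sizing is sketched below (the resizing rule, thresholds, and use of k-means are illustrative assumptions, not the paper's exact mechanism): the fewer well-populated clusters the swarm occupies, the fewer agents are retained.

```python
# Sketch: k-means on agent positions; the count of well-populated clusters
# scales the population size kept for the next step.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

def adapt_population(pop, k=5, min_size=10):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pop)
    counts = np.bincount(labels, minlength=k)
    crowded = int(np.sum(counts >= len(pop) // (2 * k)))  # well-populated clusters
    target = max(min_size, len(pop) * crowded // k)
    return pop[:target]          # keep the leading agents (assumes fitness order)

pop = rng.uniform(0, 1, (40, 8))                 # 40 agents, 8-dim problem
pop = adapt_population(pop)
print("population resized to", len(pop))
```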

