GBUO: “The Good, the Bad, and the Ugly” Optimizer

2021 ◽  
Vol 11 (5) ◽  
pp. 2042
Author(s):  
Hadi Givi ◽  
Mohammad Dehghani ◽  
Zeinab Montazeri ◽  
Ruben Morales-Menendez ◽  
Ricardo A. Ramirez-Mendoza ◽  
...  

Optimization problems in various fields of science and engineering must be solved using appropriate methods. Stochastic search-based optimization algorithms are a widely used approach for solving such problems. In this paper, a new optimization algorithm called “the good, the bad, and the ugly” optimizer (GBUO) is introduced, in which three members of the population drive the population updates. In the proposed GBUO, the population moves towards the good member and away from the bad member. A new member, called the ugly member, is also introduced and plays an essential role in updating the population: in a challenging move, the ugly member leads the population toward positions contrary to the population’s overall movement. GBUO is mathematically modeled, and its equations are presented. GBUO is implemented on a set of twenty-three standard objective functions to evaluate the proposed optimizer’s performance. These standard objective functions fall into three groups: unimodal, high-dimensional multimodal, and fixed-dimension multimodal functions. A further comparative analysis was carried out against eight well-known optimization algorithms. The simulation results show that the proposed algorithm performs well on different optimization problems and is superior to the compared algorithms.
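The abstract names the roles of the three guiding members but does not reproduce the update equations. A minimal sketch of such a three-member update, assuming attraction to the best (“good”) member, repulsion from the worst (“bad”) member, a contrarian pull from a randomly chosen “ugly” member, and illustrative coefficients (none of which are the paper’s actual equations), could look as follows:

```python
import numpy as np

def gbuo_sketch(f, bounds, pop_size=30, iters=200, seed=0):
    """Illustrative population update: attraction to the best ("good")
    member, repulsion from the worst ("bad") member, and a contrarian
    pull from a third ("ugly") member. Coefficients and the choice of
    the ugly member are assumptions, not the paper's equations."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.apply_along_axis(f, 1, pop)
    for _ in range(iters):
        good = pop[fit.argmin()]            # best member
        bad = pop[fit.argmax()]             # worst member
        ugly = pop[rng.integers(pop_size)]  # assumed: a random member
        r1, r2, r3 = rng.random((3, pop_size, dim))
        cand = (pop
                + r1 * (good - pop)           # move toward the good member
                - 0.5 * r2 * (bad - pop)      # move away from the bad member
                + 0.5 * r3 * (ugly - pop))    # contrarian pull of the ugly member
        cand = np.clip(cand, lo, hi)
        cfit = np.apply_along_axis(f, 1, cand)
        better = cfit < fit                   # greedy selection, as is common
        pop[better], fit[better] = cand[better], cfit[better]
    return pop[fit.argmin()], fit.min()

# Example: 10-dimensional sphere function
best, val = gbuo_sketch(lambda x: (x ** 2).sum(),
                        (np.full(10, -5.0), np.full(10, 5.0)))
```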

Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4567
Author(s):  
Mohammad Dehghani ◽  
Pavel Trojovský

Population-based optimization algorithms are among the most widely used and popular methods for solving optimization problems. In this paper, a new population-based optimization algorithm called the Teamwork Optimization Algorithm (TOA) is presented to solve various optimization problems. The main idea in designing the TOA is to simulate the teamwork behaviors of the members of a team in pursuit of their desired goal. The TOA is mathematically modeled for use in solving optimization problems. The capability of the TOA is evaluated on a set of twenty-three standard objective functions. Additionally, the performance of the proposed TOA is compared with eight well-known optimization algorithms in providing a suitable quasi-optimal solution. The results of optimizing the objective functions indicate the ability of the TOA to solve various optimization problems. Analysis and comparison of the simulation results show that the proposed TOA is superior to, and far more competitive than, the eight compared algorithms.
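The abstract describes the teamwork metaphor without its stage equations. A skeleton of such a scheme, assuming the team’s supervisor corresponds to the current best member and that each member also makes a small individual effort (both stage rules and coefficients are illustrative assumptions), might look like this:

```python
import numpy as np

def toa_sketch(f, bounds, team_size=30, iters=200, seed=0):
    """Teamwork-style update loosely following the abstract: members are
    guided by a supervisor (assumed here to be the current best member)
    and also perform individual activity. Stage details are assumptions."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    team = rng.uniform(lo, hi, (team_size, dim))
    fit = np.apply_along_axis(f, 1, team)
    for _ in range(iters):
        supervisor = team[fit.argmin()]
        for i in range(team_size):
            # Stage 1 (assumed): guidance by the supervisor.
            cand = np.clip(team[i] + rng.random(dim)
                           * (supervisor - rng.integers(1, 3) * team[i]), lo, hi)
            cf = f(cand)
            if cf < fit[i]:
                team[i], fit[i] = cand, cf
            # Stage 2 (assumed): individual activity, a small local move.
            cand = np.clip(team[i] + 0.1 * (hi - lo)
                           * rng.standard_normal(dim), lo, hi)
            cf = f(cand)
            if cf < fit[i]:
                team[i], fit[i] = cand, cf
    return team[fit.argmin()], fit.min()
```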


2015 ◽  
Vol 2015 ◽  
pp. 1-11 ◽  
Author(s):  
Rahib H. Abiyev ◽  
Mustafa Tunay

A novel learning algorithm for solving global numerical optimization problems is proposed. The proposed learning algorithm is an intensive stochastic search method based on the evaluation and optimization of a hypercube, called the hypercube optimization (HO) algorithm. The HO algorithm comprises an initialization and evaluation process, a displacement-shrink process, and a searching space process. The initialization and evaluation process initializes the solution and evaluates candidate solutions in a given hypercube. The displacement-shrink process determines the displacement and evaluates the objective functions at new points, and the searching space process determines the next hypercube using certain rules and evaluates the new solutions. The algorithms for these processes are designed and presented in the paper. The designed HO algorithm is tested on specific benchmark functions. Simulations of the HO algorithm have been performed for the optimization of functions of 1000, 5000, and even 10,000 dimensions. Comparative simulation results with other approaches demonstrate that the proposed algorithm is a potential candidate for the optimization of both low- and high-dimensional functions.
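The three processes map naturally onto a sample–displace–shrink loop. A minimal sketch, assuming uniform sampling inside the current hypercube and a fixed shrink rate (both assumptions, not the paper’s exact rules), is:

```python
import numpy as np

def hypercube_opt_sketch(f, center, radius, n_points=50, iters=300,
                         shrink=0.95, seed=0):
    """Minimal sketch in the spirit of the HO algorithm: sample candidates
    in the current hypercube, displace the hypercube toward the best
    sample, and shrink it. Sampling scheme and shrink rate are assumed."""
    rng = np.random.default_rng(seed)
    center = np.asarray(center, float)
    best_x, best_f = center, f(center)
    for _ in range(iters):
        # Initialization/evaluation: candidates in the current hypercube.
        pts = center + rng.uniform(-radius, radius, (n_points, len(center)))
        vals = np.apply_along_axis(f, 1, pts)
        i = vals.argmin()
        if vals[i] < best_f:
            best_x, best_f = pts[i], vals[i]
            center = pts[i]          # displacement toward the improvement
        radius *= shrink             # shrink the searching space
    return best_x, best_f

# Example: 1000-dimensional sphere, matching the high-dimensional tests
x, fx = hypercube_opt_sketch(lambda v: (v ** 2).sum(),
                             center=np.full(1000, 3.0), radius=5.0)
```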


2021 ◽  
Author(s):  
Yixuan Wang ◽  
Faruk Alpak ◽  
Guohua Gao ◽  
Chaohui Chen ◽  
Jeroen Vink ◽  
...  

Although it is possible to apply traditional optimization algorithms to determine the Pareto front of a multi-objective optimization problem, the computational cost is extremely high when the objective-function evaluation requires solving a complex reservoir simulation problem and the optimization cannot benefit from adjoint-based gradients. This paper proposes a novel workflow to solve bi-objective optimization problems using the distributed quasi-Newton (DQN) method, which is a well-parallelized and derivative-free optimization (DFO) method. Numerical tests confirm that the DQN method performs efficiently and robustly. The efficiency of the DQN optimizer stems from a distributed computing mechanism which effectively shares the information discovered in prior iterations. Rather than performing multiple quasi-Newton optimization tasks in isolation, simulation results are shared among distinct DQN optimization tasks or threads. In this paper, the DQN method is applied to the optimization of a weighted average of two objectives, using different weighting factors for different optimization threads. In each iteration, the DQN optimizer generates an ensemble of search points (or simulation cases) in parallel, and a set of non-dominated points is updated accordingly. Different DQN optimization threads, which use the same set of simulation results but different weighting factors in their objective functions, converge to different optima of the weighted-average objective function. The non-dominated points found in the last iteration form a set of Pareto optimal solutions. Robustness as well as efficiency of the DQN optimizer originates from its reliance on a large, shared set of intermediate search points. On the one hand, this set of search points is (much) smaller than the combined sets needed if all optimizations with different weighting factors were executed separately; on the other hand, the size of this set provides high fault tolerance. Even if some simulations fail at a given iteration, DQN’s distributed-parallel information-sharing protocol is designed and implemented such that the optimization process can still proceed to the next iteration. The proposed DQN optimization method is first validated on synthetic examples with analytical objective functions. Then, it is tested on well-location optimization problems, maximizing oil production while minimizing water production. Furthermore, the proposed method is benchmarked against a bi-objective implementation of the MADS (Mesh Adaptive Direct Search) method, and the numerical results reinforce the auspicious computational attributes of DQN observed for the test problems. To the best of our knowledge, this is the first time that a well-parallelized and derivative-free DQN optimization method has been developed and tested on bi-objective optimization problems. The proposed methodology can help improve efficiency and robustness in solving complicated bi-objective optimization problems by taking advantage of model-based search optimization algorithms with an effective information-sharing mechanism.
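The core ideas — one scalarized optimization thread per weighting factor, a shared pool of expensive simulation results, and a non-dominated filter over that pool — can be sketched compactly. In the toy sketch below, a standard quasi-Newton routine (L-BFGS-B) stands in for the paper’s distributed DQN optimizer, and the cache stands in for the shared simulation results; everything else is an illustrative assumption:

```python
import numpy as np
from scipy.optimize import minimize

def bi_objective_sketch(f1, f2, x0, weights):
    """One optimization "thread" per weight w minimizes w*f1 + (1-w)*f2;
    all threads draw on one shared pool of evaluated points, and the
    Pareto set is the non-dominated subset of that pool."""
    pool = {}                                    # shared evaluation cache

    def evaluate(x):
        key = tuple(np.round(x, 8))
        if key not in pool:                      # reuse prior "simulations"
            pool[key] = (f1(x), f2(x))
        return pool[key]

    for w in weights:                            # one thread per weight
        minimize(lambda x: w * evaluate(x)[0] + (1 - w) * evaluate(x)[1],
                 x0, method="L-BFGS-B")

    pts = np.array(list(pool.values()))          # non-dominated filter
    nd = [p for p in pts
          if not any((q <= p).all() and (q < p).any() for q in pts)]
    return np.array(nd)

# Toy example with two competing quadratics
front = bi_objective_sketch(lambda x: ((x - 1) ** 2).sum(),
                            lambda x: ((x + 1) ** 2).sum(),
                            x0=np.zeros(2), weights=np.linspace(0.1, 0.9, 5))
```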


Mathematics ◽  
2021 ◽  
Vol 9 (21) ◽  
pp. 2832
Author(s):  
Petr Coufal ◽  
Štěpán Hubálovský ◽  
Marie Hubálovská ◽  
Zoltan Balogh

Numerous optimization problems defined in different disciplines of science must be optimized using effective techniques. Optimization algorithms are an effective and widely used method of solving optimization problems, able to provide suitable solutions. In this paper, a new nature-based optimization algorithm called the Snow Leopard Optimization Algorithm (SLOA) is designed that mimics the natural behaviors of snow leopards. SLOA simulates four phases: travel routes, hunting, reproduction, and mortality. The different phases of the proposed algorithm are described, and then the mathematical model of the SLOA is presented so that it can be implemented on different optimization problems. A standard set of twenty-three objective functions is used to evaluate the ability of the proposed algorithm to optimize and provide appropriate solutions. The optimization results obtained from the proposed SLOA are also compared with those of eight other well-known optimization algorithms. The results show that the proposed SLOA has a high ability to solve various optimization problems. Analysis and comparison of the results also show that the SLOA provides quasi-optimal solutions closer to the global optimum and, with its better performance, is much more competitive than similar algorithms.
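The abstract names the four phases but not their equations. A skeleton loop with one plausible, clearly assumed interpretation per phase (none of these update rules come from the paper) could look like this:

```python
import numpy as np

def sloa_sketch(f, bounds, pop_size=30, iters=200, seed=0):
    """Skeleton following the four phases named in the abstract: travel
    routes, hunting, reproduction, and mortality. Every concrete rule
    below is an illustrative assumption, not the paper's model."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, len(lo)))
    fit = np.apply_along_axis(f, 1, pop)
    for _ in range(iters):
        best = pop[fit.argmin()]
        # Travel routes (assumed): a small random walk.
        pop = np.clip(pop + 0.05 * (hi - lo)
                      * rng.standard_normal(pop.shape), lo, hi)
        # Hunting (assumed): move toward the best leopard.
        pop = np.clip(pop + rng.random(pop.shape) * (best - pop), lo, hi)
        fit = np.apply_along_axis(f, 1, pop)
        # Reproduction (assumed): blend two random parents into the worst slot.
        i, j, w = rng.integers(pop_size), rng.integers(pop_size), fit.argmax()
        pop[w] = 0.5 * (pop[i] + pop[j])
        fit[w] = f(pop[w])
        # Mortality (assumed): the worst member is reinitialized.
        w = fit.argmax()
        pop[w] = rng.uniform(lo, hi)
        fit[w] = f(pop[w])
    return pop[fit.argmin()], fit.min()
```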


Author(s):  
Tomoyuki Miyashita ◽  
Hiroshi Yamakawa

Abstract Many optimization methods and practical software packages have been developed over many years, and most of them are very effective, especially for solving practical problems. However, the nonlinearity of objective and constraint functions, which is frequently seen in practical problems, makes optimization difficult. This difficulty mainly lies in the existence of several local optima. In this study, we propose a new global optimization methodology that provides an information exchange mechanism within the nearest neighbor method. We have developed a simple software system that treats each design point in the optimization as an agent. Many agents can search for optima simultaneously while exchanging their information. We define two roles for the agents. Local search agents search for local optima using an existing method such as steepest descent. Stochastic search agents investigate the design space by making use of the information from other agents. Through several simple and structural optimization problems, we have confirmed the advantages of the method.
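A compact sketch of the two agent roles, assuming numerical steepest descent for the local agents and sampling around the best shared point for the stochastic agents (step sizes and the sharing rule are illustrative assumptions):

```python
import numpy as np

def agents_sketch(f, bounds, n_local=5, n_stoch=10, iters=100, seed=0):
    """Two agent roles as in the abstract: local search agents take
    steepest-descent steps; stochastic search agents explore around the
    best point reported by any agent (the shared information)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    local = rng.uniform(lo, hi, (n_local, dim))
    stoch = rng.uniform(lo, hi, (n_stoch, dim))

    def num_grad(x, h=1e-6):        # central-difference gradient
        e = np.eye(dim)
        return np.array([(f(x + h * e[k]) - f(x - h * e[k])) / (2 * h)
                         for k in range(dim)])

    for _ in range(iters):
        allpts = np.vstack([local, stoch])
        best = allpts[np.apply_along_axis(f, 1, allpts).argmin()]
        local = np.clip(local - 0.05 * np.array([num_grad(x) for x in local]),
                        lo, hi)                    # steepest-descent step
        stoch = np.clip(best + 0.3 * (hi - lo)
                        * rng.standard_normal(stoch.shape),
                        lo, hi)                    # explore around shared best
    allpts = np.vstack([local, stoch])
    best = allpts[np.apply_along_axis(f, 1, allpts).argmin()]
    return best, f(best)
```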


2020 ◽  
Vol 10 (21) ◽  
pp. 7683 ◽  
Author(s):  
Mohammad Dehghani ◽  
Zeinab Montazeri ◽  
Ali Dehghani ◽  
Haidar Samet ◽  
Carlos Sotelo ◽  
...  

In recent decades, many optimization algorithms have been proposed by researchers to solve optimization problems in various branches of science. Optimization algorithms are designed based on various phenomena in nature, the laws of physics, the rules of individual and group games, and the behaviors of animals, plants, and other living things. Implementation of optimization algorithms on some objective functions has been successful, while on others it has failed. Improving the optimization process and adding modification phases to optimization algorithms can lead to more acceptable and appropriate solutions. In this paper, a new method called the Dehghani method (DM) is introduced to improve optimization algorithms. DM adjusts the location of the best member of the population using location information from the whole population. In fact, DM shows that all members of a population, even the worst one, can contribute to the development of the population. DM has been mathematically modeled, and its effect has been investigated on several optimization algorithms, including the genetic algorithm (GA), particle swarm optimization (PSO), the gravitational search algorithm (GSA), teaching-learning-based optimization (TLBO), and the grey wolf optimizer (GWO). In order to evaluate the ability of the proposed method to improve the performance of optimization algorithms, the mentioned algorithms have been implemented, in both their original versions and versions improved by DM, on a set of twenty-three standard objective functions. The simulation results show that the optimization algorithms modified with DM provide more acceptable and competitive performance than the original versions in solving optimization problems.
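The abstract states the mechanism (update the best member from the whole population’s locations, with even the worst member contributing) without its equations. A hedged sketch of one such modification step, using an assumed fitness-based weighting, is below; in the paper’s setup a step like this would run after each iteration of the base algorithm (GA, PSO, GSA, TLBO, or GWO):

```python
import numpy as np

def dm_step(f, pop, fit, rng):
    """Sketch of a DM-style modification phase: propose a new location
    for the best member from every member's location (here, a
    fitness-weighted combination) and keep it only if it improves.
    The weighting scheme is an assumption for illustration."""
    b = fit.argmin()
    # Worse members get smaller weights, but all members contribute.
    w = 1.0 / (1.0 + fit - fit.min())
    w /= w.sum()
    cand = pop[b] + rng.random(pop.shape[1]) * (w @ pop - pop[b])
    cf = f(cand)
    if cf < fit[b]:                 # greedy acceptance
        pop[b], fit[b] = cand, cf
    return pop, fit
```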


2019 ◽  
Vol 2 (3) ◽  
pp. 508-517
Author(s):  
Ferda Nur Arıcı ◽  
Ersin Kaya

Optimization is the process of searching for the most suitable solution to a problem within an acceptable time interval. The algorithms that solve optimization problems are called optimization algorithms. In the literature, there are many optimization algorithms with different characteristics. Optimization algorithms can exhibit different behaviors depending on the size, characteristics, and complexity of the optimization problem. In this study, six well-known population-based optimization algorithms (artificial algae algorithm - AAA, artificial bee colony algorithm - ABC, differential evolution algorithm - DE, genetic algorithm - GA, gravitational search algorithm - GSA, and particle swarm optimization - PSO) were used. These six algorithms were run on the CEC’17 test functions. According to the experimental results, the algorithms were compared and their performances were evaluated.


Author(s):  
Pengfei (Taylor) Li ◽  
Peirong (Slade) Wang ◽  
Farzana Chowdhury ◽  
Li Zhang

Traditional formulations for transportation optimization problems mostly build complicating attributes into constraints while keeping the succinctness of objective functions. A popular solution is Lagrangian decomposition: relaxing the complicating constraints and then solving iteratively. Although this approach is effective for many problems, it leads to intractability in others. To address this issue, this paper presents an alternative formulation for transportation optimization problems in which the complicating attributes of target problems are partially or entirely built into the objective function instead of into the constraints. Many mathematically complicating constraints in transportation problems, such as various road or vehicle capacity constraints or “IF-THEN” type constraints, can be efficiently modeled in dynamic network loading (DNL) models based on the demand-supply equilibrium. After “pre-building” complicating constraints into the objective function, the objective function can be approximated well with customized high-fidelity DNL models. Three types of computing benefits can be achieved in the alternative formulation: (a) the original problem is kept the same; (b) the computing complexity of the new formulation may be significantly reduced because the hard constraints disappear; (c) efficiency loss on the objective-function side can be mitigated via multiple high-performance computing techniques. Under this new framework, high-fidelity and problem-specific DNL models are critical to maintaining the attributes of the original problems. Therefore, the authors’ recent efforts in enhancing the DNL’s fidelity and computing efficiency are also described in the second part of this paper. Finally, a demonstration case study is conducted to validate the new approach.
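To make the “constraints built into the objective” idea concrete, here is a toy stand-in (all numbers and the loading rule are invented for illustration): a road-capacity cap is enforced inside a simplified network-loading evaluation, so the optimizer never sees it as an explicit hard constraint:

```python
import numpy as np

def dnl_like_objective(split, demand=1200.0, capacity=900.0):
    """Toy stand-in for embedding a complicating constraint in the
    objective: the capacity cap acts inside the (highly simplified)
    loading model rather than as a hard constraint on the optimizer.
    `split` is the fraction of demand routed to route 1."""
    flows = demand * np.array([split, 1.0 - split])
    served = np.minimum(flows, capacity)   # capacity enforced internally
    queued = flows - served                # excess flow queues up
    travel_time = np.array([10.0, 12.0])   # free-flow times per route
    return float((served * travel_time).sum() + 60.0 * queued.sum())

# A gradient-free scan over the single decision variable
splits = np.linspace(0.0, 1.0, 101)
best_split = min(splits, key=dnl_like_objective)
```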


Author(s):  
Umit Can ◽  
Bilal Alatas

The classical optimization algorithms are not efficient at solving complex search and optimization problems, so heuristic optimization algorithms have been proposed. In this paper, association rules are mined from numerical databases with the Gravitational Search Algorithm (GSA) for the first time. GSA is designed as a search method for quantitative association rules in databases, which can be regarded as the search space. Furthermore, the hard task of determining minimum confidence and support values for every database is eliminated by GSA. Apart from this, the fitness function used for GSA is very flexible: depending on the problem of interest, parameters can be removed from or added to the fitness function. The range values of the attributes are automatically adjusted while the rules are mined, so no pre-processing of the data is required. The attribute interaction problem has also been eliminated with the designed GSA. GSA has been tested on four real databases, and promising results have been obtained. GSA appears to be an effective search method for the complex data-mining tasks of numerical sequential pattern mining, numerical classification rule mining, and clustering rule mining.
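The flexible fitness function is the part a reader is most likely to reimplement. A sketch in the spirit of the abstract, rewarding support and confidence while penalizing overly wide attribute intervals (the weights and the amplitude penalty are assumptions; terms can be added or removed per problem, as the abstract notes):

```python
import numpy as np

def rule_fitness(data, intervals, antecedent, consequent,
                 w_sup=0.7, w_conf=0.8, w_amp=0.2):
    """Flexible fitness for a quantitative association rule: reward
    support and confidence, penalize wide intervals. `intervals` maps
    attribute index -> (low, high); antecedent/consequent are lists of
    attribute indices. Weights are illustrative assumptions."""
    def covered(rows, attrs):
        ok = np.ones(len(rows), dtype=bool)
        for a in attrs:
            lo, hi = intervals[a]
            ok &= (rows[:, a] >= lo) & (rows[:, a] <= hi)
        return ok
    ante = covered(data, antecedent)
    both = ante & covered(data, consequent)
    support = both.mean()
    confidence = both.sum() / max(ante.sum(), 1)
    spans = data.max(0) - data.min(0)
    amplitude = np.mean([(intervals[a][1] - intervals[a][0]) / spans[a]
                         for a in antecedent + consequent])
    return w_sup * support + w_conf * confidence - w_amp * amplitude

# Toy usage: 200 rows, 3 numeric attributes; rule: attr0 in [0.2, 0.6] -> attr2 in [0.1, 0.5]
rng = np.random.default_rng(0)
data = rng.random((200, 3))
score = rule_fitness(data, {0: (0.2, 0.6), 2: (0.1, 0.5)}, [0], [2])
```

Each GSA agent would encode one candidate rule (its interval bounds) as its position, with this score as the mass-determining fitness.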


2021 ◽  
Vol 11 (10) ◽  
pp. 4382
Author(s):  
Ali Sadeghi ◽  
Sajjad Amiri Doumari ◽  
Mohammad Dehghani ◽  
Zeinab Montazeri ◽  
Pavel Trojovský ◽  
...  

Optimization is the science of finding the best solution among the available solutions, subject to an optimization problem’s constraints. Optimization algorithms have been introduced as efficient tools for solving optimization problems. These algorithms are designed based on various natural phenomena, behaviors, the lifestyles of living beings, physical laws, rules of games, etc. In this paper, a new optimization algorithm called the good and bad groups-based optimizer (GBGBO) is introduced to solve various optimization problems. In GBGBO, population members update under the influence of two groups, named the good group and the bad group. The good group consists of a certain number of population members with better fitness values than the other members, and the bad group consists of a number of population members with worse fitness values than the other members of the population. GBGBO is mathematically modeled, and its performance in solving optimization problems was tested on a set of twenty-three different objective functions. In addition, for further analysis, the results obtained from the proposed algorithm were compared with those of eight optimization algorithms: genetic algorithm (GA), particle swarm optimization (PSO), gravitational search algorithm (GSA), teaching–learning-based optimization (TLBO), gray wolf optimizer (GWO), whale optimization algorithm (WOA), tunicate swarm algorithm (TSA), and marine predators algorithm (MPA). The results show that the proposed GBGBO algorithm has a good ability to solve various optimization problems and is more competitive than other similar algorithms.
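The group mechanism can be sketched directly from the abstract: pull members toward the mean of the good group and push them away from the mean of the bad group. Group size and coefficients below are assumptions, not the paper’s values:

```python
import numpy as np

def gbgbo_sketch(f, bounds, pop_size=30, n_group=5, iters=200, seed=0):
    """Sketch of the good/bad-group update: attraction to the mean of
    the best n_group members, repulsion from the mean of the worst
    n_group members, with greedy acceptance."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.apply_along_axis(f, 1, pop)
    for _ in range(iters):
        order = fit.argsort()
        good = pop[order[:n_group]].mean(axis=0)   # mean of the good group
        bad = pop[order[-n_group:]].mean(axis=0)   # mean of the bad group
        r1, r2 = rng.random((2, pop_size, dim))
        cand = np.clip(pop + r1 * (good - pop) - r2 * (bad - pop), lo, hi)
        cfit = np.apply_along_axis(f, 1, cand)
        improve = cfit < fit
        pop[improve], fit[improve] = cand[improve], cfit[improve]
    return pop[fit.argmin()], fit.min()
```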

