A New Two-Stage Algorithm for Solving Optimization Problems

Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 491
Author(s):  
Sajjad Amiri Doumari ◽  
Hadi Givi ◽  
Mohammad Dehghani ◽  
Zeinab Montazeri ◽  
Victor Leiva ◽  
...  

Optimization seeks to find inputs to an objective function that result in a maximum or minimum. Optimization methods are divided into exact methods and approximate algorithms. Several optimization algorithms imitate natural phenomena, laws of physics, and the behavior of living organisms. Algorithm-based optimization is the challenge that underlies machine learning, from logistic regression to training neural networks for artificial intelligence. In this paper, a new algorithm called two-stage optimization (TSO) is proposed. The TSO algorithm updates population members in two steps at each iteration. For this purpose, a group of good population members is selected, and two members of this group are then drawn at random to update the position of each population member. The update is based on the first selected good member in the first stage, and on the second selected good member in the second stage. We describe the stages of the TSO algorithm and model them mathematically. The performance of the TSO algorithm is evaluated on twenty-three standard objective functions. To compare the optimization results of the TSO algorithm, eight competing algorithms are considered, including the genetic, gravitational search, grey wolf, marine predators, particle swarm, teaching-learning-based, tunicate swarm, and whale approaches. The numerical results show that the new algorithm is superior and more competitive in solving optimization problems when compared with the other algorithms.
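As a rough illustration of the two-stage update described above, the following Python sketch selects a group of good members and moves each individual toward two of them in turn. The size of the good group, the attraction coefficients, and the greedy acceptance rule are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

def tso_step(pop, fitness, objective, good_fraction=0.2, rng=np.random.default_rng()):
    """One illustrative TSO-style iteration: each member is updated twice,
    first toward one randomly chosen 'good' member, then toward another."""
    n, d = pop.shape
    order = np.argsort(fitness)                      # ascending: best first (minimization)
    good = pop[order[:max(2, int(good_fraction * n))]]

    for i in range(n):
        g1, g2 = good[rng.choice(len(good), size=2, replace=False)]
        for guide in (g1, g2):                       # stage 1 uses g1, stage 2 uses g2
            candidate = pop[i] + rng.random(d) * (guide - pop[i])
            f_cand = objective(candidate)
            if f_cand < fitness[i]:                  # greedy acceptance (assumption)
                pop[i], fitness[i] = candidate, f_cand
    return pop, fitness
```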

Author(s):  
Lu Chen ◽  
Handing Wang ◽  
Wenping Ma

Abstract Real-world optimization applications in complex systems always contain multiple factors to be optimized, which can be formulated as multi-objective optimization problems. These problems have been solved by many evolutionary algorithms such as MOEA/D, NSGA-III, and KnEA. However, when the numbers of decision variables and objectives increase, the computation cost of these algorithms becomes unaffordable. To reduce this high computation cost on large-scale many-objective optimization problems, we propose a two-stage framework. The first stage of the proposed algorithm combines a multi-tasking optimization strategy with a bi-directional search strategy, where the original problem is reformulated as a multi-tasking optimization problem in the decision space to enhance convergence. To improve diversity, in the second stage the proposed algorithm applies multi-tasking optimization to a number of sub-problems defined by reference points in the objective space. To show the effectiveness of the proposed algorithm, we test it on the DTLZ and LSMOP problems and compare it with existing algorithms; it outperforms the compared algorithms in most cases, showing an advantage in both convergence and diversity.
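As a minimal sketch of how sub-problems can be defined from reference points in the objective space (the second stage), the following Python fragment generates uniform reference points with the standard Das–Dennis construction and assigns each normalized objective vector to its nearest reference direction. This is a generic construction assumed for illustration, not the paper's exact procedure.

```python
import itertools
import numpy as np

def das_dennis_points(n_obj, divisions):
    """Uniform reference points on the unit simplex (Das-Dennis construction)."""
    pts = []
    for c in itertools.combinations(range(divisions + n_obj - 1), n_obj - 1):
        coords = np.diff([-1, *c, divisions + n_obj - 1]) - 1
        pts.append(coords / divisions)
    return np.array(pts)

def associate(objs, refs):
    """Assign each normalized objective vector to its nearest reference direction,
    defining one sub-problem per reference point."""
    dirs = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    proj = objs @ dirs.T                                  # projection onto each direction
    d2 = (objs ** 2).sum(axis=1, keepdims=True) - proj ** 2
    return np.argmin(np.sqrt(np.maximum(d2, 0.0)), axis=1)
```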


2014 ◽  
Vol 984-985 ◽  
pp. 419-424
Author(s):  
P. Sabarinath ◽  
M.R. Thansekhar ◽  
R. Saravanan

Arriving at optimal solutions is one of the important tasks in engineering design. Many real-world design optimization problems involve multiple conflicting objectives, and the design variables may be continuous or discrete in nature. Traditionally, the weighted-sum method is used to solve multi-objective optimization problems: all the objective functions are converted into a single objective function by assigning suitable weights to each objective function. Its main drawback lies in the selection of proper weights. More recently, evolutionary algorithms have been used to find the non-dominated optimal solutions, known as the Pareto optimal front, in a single run. In recent years, the Non-dominated Sorting Genetic Algorithm II (NSGA-II) has found increasing application in solving multi-objective problems with conflicting objectives because of its low computational requirements, elitism, and parameter-less sharing approach. In this work, we propose a methodology that integrates NSGA-II and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) for solving a two-bar truss problem. NSGA-II searches for the Pareto set, where the two-bar truss is evaluated in terms of minimizing the weight of the truss and the total displacement of the joint under the given load. Subsequently, TOPSIS selects the best compromise solution.
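For reference, a minimal TOPSIS sketch for picking the best compromise solution from a Pareto set is shown below; equal weights and minimization of all objectives (truss weight and joint displacement) are assumptions for illustration.

```python
import numpy as np

def topsis_select(pareto_objs, weights=None):
    """Rank Pareto solutions by relative closeness to the ideal point (TOPSIS).
    Assumes every objective is to be minimized; returns the index of the
    best compromise solution."""
    F = np.asarray(pareto_objs, dtype=float)
    n, m = F.shape
    w = np.full(m, 1.0 / m) if weights is None else np.asarray(weights, float)
    V = w * F / np.linalg.norm(F, axis=0)          # weighted, vector-normalized matrix
    ideal, anti = V.min(axis=0), V.max(axis=0)     # best / worst values for minimization
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)
    return int(np.argmax(closeness))

# Example (hypothetical values), columns: [truss weight, joint displacement]
# best = topsis_select([[120.0, 0.8], [150.0, 0.5], [200.0, 0.3]])
```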


Author(s):  
Mohsen Bayani ◽  
Casper Wickman ◽  
Lars Lindkvist ◽  
Rikard Söderberg

Abstract Squeak and rattle are annoying sounds that are often regarded as failure indicators by car users. Geometric variation is a key contributor to the generation of squeak and rattle sounds, and optimisation of the connection configuration in assemblies can be a means to minimise this risk. However, the optimisation process for large assemblies can be computationally expensive. The focus of this work is to propose a two-stage evolutionary optimisation scheme that finds the fittest connection configurations to minimise the risk of squeak and rattle. This was done by defining the objective functions as the measured variation and deviation in the rattle direction and the squeak plane. In the first stage, the locations of the fasteners primarily contributing to the rattle direction measures are identified. In the second stage, fasteners primarily contributing to the squeak plane measures are added to the fittest configuration from stage one. It was assumed that the fasteners from the squeak plane group have a lower-order effect on the rattle direction measures compared to the fasteners from the rattle direction group. This assumption was falsified for a set of simplified geometries. Also, a new uniform space-filler algorithm was introduced to efficiently generate an inclusive and feasible starting population for the optimisation process by incorporating the problem constraints in the algorithm. For two industrial cases, it was shown that by using the proposed two-stage optimisation scheme the variation and deviation measures in critical interfaces for squeak and rattle improved compared to the baseline results.
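A hedged sketch of one way to build a feasible, space-filling starting population is given below: uniform candidates are rejected if they violate the problem constraints or fall too close to already accepted points. The constraint check and spacing rule are illustrative assumptions, not the uniform space-filler algorithm of the paper.

```python
import numpy as np

def feasible_space_filling_population(n_points, bounds, is_feasible,
                                      min_dist=0.0, rng=np.random.default_rng()):
    """Draw uniform candidates inside the bounds, keeping only those that satisfy
    the constraints and are not too close to already accepted points."""
    lo, hi = np.asarray(bounds, dtype=float).T          # bounds: [(lo, hi), ...]
    accepted, attempts = [], 0
    while len(accepted) < n_points and attempts < 10000 * n_points:
        attempts += 1
        x = lo + rng.random(len(lo)) * (hi - lo)
        if not is_feasible(x):
            continue                                    # violates a problem constraint
        if accepted and min(np.linalg.norm(x - a) for a in accepted) < min_dist:
            continue                                    # too close to an existing point
        accepted.append(x)
    return np.array(accepted)
```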


Mathematics ◽  
2020 ◽  
Vol 8 (7) ◽  
pp. 1092
Author(s):  
Héctor Migallón ◽  
Akram Belazi ◽  
José-Luis Sánchez-Romero ◽  
Héctor Rico ◽  
Antonio Jimeno-Morenilla

Several population-based metaheuristic optimization algorithms have been proposed in recent decades, none of which is able either to outperform all existing algorithms or to solve all optimization problems, in accordance with the No Free Lunch (NFL) theorem. Many of these algorithms behave effectively, under a correct setting of the control parameter(s), when solving different engineering problems. The optimization behavior of these algorithms is boosted by applying various strategies, including hybridization and the use of chaotic maps instead of pseudo-random number generators (PRNGs). Hybrid algorithms are suitable for a large number of engineering applications in which they behave more effectively than the pure (non-hybrid) optimization algorithms. However, they increase the difficulty of correctly setting control parameters, and sometimes they are designed to solve particular problems. This paper presents three hybridizations, dubbed HYBPOP, HYBSUBPOP, and HYBIND, of up to seven algorithms free of control parameters. Each hybrid proposal uses a different strategy to switch the algorithm charged with generating each new individual. These algorithms are Jaya, the sine cosine algorithm (SCA), Rao's algorithms, teaching-learning-based optimization (TLBO), and chaotic Jaya. The experimental results show that the proposed algorithms perform better than the original algorithms, which implies an optimal use of these algorithms according to the problem to be solved. A further advantage of the hybrid algorithms is that no prior process of control parameter tuning is needed.
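A minimal sketch of the switching idea follows: each new individual is produced by one of several control-parameter-free update rules chosen at random. The Jaya and Rao-1 update formulas are the published rules, but the random switching shown here is only an illustrative stand-in for the HYBPOP, HYBSUBPOP, and HYBIND strategies.

```python
import numpy as np

rng = np.random.default_rng()

def jaya_update(x, best, worst):
    """Jaya rule: move toward the best and away from the worst solution."""
    r1, r2 = rng.random(x.size), rng.random(x.size)
    return x + r1 * (best - np.abs(x)) - r2 * (worst - np.abs(x))

def rao1_update(x, best, worst):
    """Rao-1 rule: move along the direction from the worst to the best solution."""
    return x + rng.random(x.size) * (best - worst)

def hybrid_generate(pop, fitness):
    """Generate one trial individual per member, switching the generating
    algorithm at random (illustrative stand-in for the hybrid switching rule)."""
    best, worst = pop[np.argmin(fitness)], pop[np.argmax(fitness)]
    rules = (jaya_update, rao1_update)
    return np.array([rules[rng.integers(len(rules))](x, best, worst) for x in pop])
```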


2017 ◽  
Vol 19 (6) ◽  
pp. 890-899 ◽  
Author(s):  
Apostolos Chondronasios ◽  
Konstantinos Gonelas ◽  
Vasilis Kanakoudis ◽  
Menelaos Patelis ◽  
Panagiota Korkana

Abstract Dividing a water distribution network (WDN) into district metered areas (DMAs) is the first vital step towards pressure management and real losses reduction. However, other water quality factors, such as water age, must also be taken into account when forming DMAs. The current study uses genetic algorithm (GA) optimization methods to achieve the desired WDN segmentation conditions in terms of: (a) reducing the operating pressure, thus reducing the system's real losses; and (b) reducing the water age, thus improving the feeling of water freshness and limiting the formation of disinfection by-products. Techniques based on GAs are a proven way to provide very good solutions to optimization problems. The solution is obtained using an objective function and setting boundary constraints. The formulation of the objective functions is tested through MATLAB's optimization toolbox. The logic of the objective functions' formulation, for both the operating pressure and the water age optimization, is recorded and analyzed. The method was applied to a sample network model assisted by the EPANET and Bentley WaterGEMS software tools. The morphology of the DMAs is presented for each scenario, as well as the results of the network's segmentation according to the operating pressure and the water age.
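As an illustration of how such an objective function with boundary constraints might be posed for the GA, the sketch below combines excess operating pressure and water age with penalty terms for violating a minimum-pressure limit. All weights, limits, and node data are placeholder assumptions, not values from the study.

```python
def dma_objective(pressures, water_ages, p_min=20.0, age_max=24.0,
                  w_pressure=1.0, w_age=1.0, penalty=1e6):
    """Illustrative GA objective for DMA design: minimize operating pressure and
    water age, with penalties for nodes below the minimum pressure or above the
    maximum allowed age. Units and limits are placeholder assumptions."""
    cost = 0.0
    for p in pressures:
        cost += w_pressure * max(p - p_min, 0.0)      # excess pressure above the minimum
        if p < p_min:
            cost += penalty * (p_min - p)             # infeasible: pressure too low
    for age in water_ages:
        cost += w_age * age
        if age > age_max:
            cost += penalty * (age - age_max)         # infeasible: water too old
    return cost
```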


Author(s):  
Tomoyuki Miyashita ◽  
Hiroshi Yamakawa

Abstract Many optimization methods and practical software packages have been developed over the years, and most of them are very effective, especially for solving practical problems. However, the non-linearity of objective functions and constraint functions, which is frequently seen in practical problems, causes difficulty in optimization. This difficulty mainly lies in the existence of several local optimum solutions. In this study, we propose a new global optimization methodology that provides an information exchange mechanism within the nearest neighbor method. We have developed a simple software system that treats each design point in the optimization as an agent. Many agents can search for the optima simultaneously while exchanging their information. We define two roles for the agents. Local search agents search for local optima using an existing method such as the steepest descent method. Stochastic search agents investigate the design space by making use of the information from other agents. Through several simple structural optimization problems, we have confirmed the advantages of the method.
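A loose sketch of the two agent roles is given below: local search agents take small downhill (finite-difference) steps, while stochastic search agents jump around the best point reported by any agent. The step rules and sharing mechanism are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng()

def multi_agent_search(f, x0s, n_local, iters=100, step=0.05, sigma=0.3):
    """Illustrative sketch of the two agent roles: the first n_local agents take
    small downhill steps (finite-difference descent), the remaining agents jump
    stochastically around the best point reported by any agent."""
    agents = [np.array(x, dtype=float) for x in x0s]
    best = min(agents, key=f).copy()
    for _ in range(iters):
        for i, x in enumerate(agents):
            if i < n_local:                              # local search agent
                grad = np.array([(f(x + step * e) - f(x)) / step
                                 for e in np.eye(x.size)])
                cand = x - step * grad
            else:                                        # stochastic search agent
                cand = best + sigma * rng.standard_normal(x.size)
            if f(cand) < f(x):
                agents[i] = cand
        best = min(agents + [best], key=f).copy()        # shared information
    return best
```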


2020 ◽  
Vol 10 (21) ◽  
pp. 7683 ◽  
Author(s):  
Mohammad Dehghani ◽  
Zeinab Montazeri ◽  
Ali Dehghani ◽  
Haidar Samet ◽  
Carlos Sotelo ◽  
...  

In recent decades, many optimization algorithms have been proposed by researchers to solve optimization problems in various branches of science. Optimization algorithms are designed based on various phenomena in nature, the laws of physics, the rules of individual and group games, and the behaviors of animals, plants, and other living things. Implementation of optimization algorithms has been successful on some objective functions and has led to failure on others. Improving the optimization process and adding modification phases to optimization algorithms can lead to more acceptable and appropriate solutions. In this paper, a new method called the Dehghani method (DM) is introduced to improve optimization algorithms. DM adjusts the location of the best member of the population using information about the locations of the population members. In fact, DM shows that all members of a population, even the worst one, can contribute to the development of the population. DM has been mathematically modeled, and its effect has been investigated on several optimization algorithms, including the genetic algorithm (GA), particle swarm optimization (PSO), the gravitational search algorithm (GSA), teaching-learning-based optimization (TLBO), and the grey wolf optimizer (GWO). In order to evaluate the ability of the proposed method to improve the performance of optimization algorithms, the mentioned algorithms have been implemented, in both their original and DM-improved versions, on a set of twenty-three standard objective functions. The simulation results show that the optimization algorithms modified with DM provide more acceptable and competitive performance than the original versions in solving optimization problems.
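The abstract does not state the DM update formula, so the following Python fragment is only an assumed illustration of the stated idea: propose a new location for the best member using the positions of all population members, and keep it only if it improves the objective.

```python
import numpy as np

rng = np.random.default_rng()

def dm_like_modification(pop, fitness, objective):
    """Assumed illustration (not the published DM formula): nudge the best member
    using a random convex combination of all population positions, accepting the
    move only if it improves the objective value."""
    i_best = int(np.argmin(fitness))
    best = pop[i_best]
    w = rng.random(len(pop))
    w /= w.sum()
    info = w @ pop                                    # population-location information
    candidate = best + rng.random(best.size) * (info - best)
    f_cand = objective(candidate)
    if f_cand < fitness[i_best]:
        pop[i_best], fitness[i_best] = candidate, f_cand
    return pop, fitness
```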


Author(s):  
Hong-Shuang Li ◽  
Qiao-Yue Dong ◽  
Jiao-Yang Yuan

Stochastic optimization methods have been widely employed to find solutions to structural design optimization problems over the past two decades, especially for truss structures. The primary aim of this study is to introduce a design optimization method combining an augmented Lagrangian function and teaching–learning-based optimization for truss and non-truss structural design optimization. The augmented Lagrangian function serves as a constraint-handling tool in the proposed method and converts a constrained optimization problem into an unconstrained one. Teaching–learning-based optimization is then employed to solve the transformed, unconstrained optimization problems. Since proper values of the Lagrangian multipliers and penalty factors are unknown in advance, the proposed method is implemented iteratively to avoid the issue of selecting them; that is, the Lagrangian multipliers and penalty factors are automatically updated according to the violation level of all constraints. To examine the performance of the proposed method, it is applied to a group of benchmark truss optimization problems and a group of non-truss optimization problems involving aircraft wing structures. The computational results obtained by the proposed method are compared to the results produced by other versions of teaching–learning-based optimization and by other stochastic optimization methods.
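For reference, a minimal sketch of the augmented Lagrangian transformation and the iterative multiplier update is shown below for inequality constraints g_i(x) <= 0. The functional form and update rule are the standard ones and are assumed here; the abstract does not spell out these details.

```python
import numpy as np

def augmented_lagrangian(f, gs, lambdas, r_p):
    """Standard augmented Lagrangian for inequality constraints g_i(x) <= 0:
    returns the unconstrained function to be minimized by the inner optimizer
    (e.g. teaching-learning-based optimization)."""
    def phi(x):
        val = f(x)
        for g, lam in zip(gs, lambdas):
            h = max(g(x), -lam / (2.0 * r_p))
            val += lam * h + r_p * h * h
        return val
    return phi

def update_multipliers(x_star, gs, lambdas, r_p):
    """Multiplier update applied after each inner optimization, driven by the
    violation level of every constraint at the current solution x_star."""
    return [lam + 2.0 * r_p * max(g(x_star), -lam / (2.0 * r_p))
            for g, lam in zip(gs, lambdas)]
```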


Author(s):  
Yeh-Liang Hsu ◽  
Tzyh-Li Sun ◽  
Li-Hwang Leu

Abstract A two-stage sequential approximation method is developed for non-linear discrete-variable optimization. The concept of this technique is similar to that of sequential linear programming (SLP), except that in each iteration the linear programming subproblem of the first stage is modified into a discrete programming subproblem in the second stage in order to obtain a discrete solution. SLP is often impractical when applied to engineering optimization problems with implicit constraints because of the difficulty of choosing proper move limits. For this reason, in the second stage a "boundary control factor" is introduced to augment the function of the move limits. Several mechanical design optimization problems are presented to demonstrate this algorithm.
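For context, the first stage resembles the standard SLP subproblem with move limits, written below in generic form (not the paper's exact statement); the second stage replaces it with a discrete programming subproblem in which the boundary control factor augments the role of the move limits.

```latex
\begin{aligned}
\min_{\Delta x}\quad & f\!\left(x^{(k)}\right) + \nabla f\!\left(x^{(k)}\right)^{T} \Delta x \\
\text{s.t.}\quad     & g_j\!\left(x^{(k)}\right) + \nabla g_j\!\left(x^{(k)}\right)^{T} \Delta x \le 0,
                       \qquad j = 1,\dots,m, \\
                     & -\delta^{(k)} \le \Delta x \le \delta^{(k)} \quad \text{(move limits)}.
\end{aligned}
```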


Author(s):  
Marc Goerigk ◽  
Adam Kasperski ◽  
Paweł Zieliński

Abstract In this paper, a class of robust two-stage combinatorial optimization problems is discussed. It is assumed that the uncertain second-stage costs are specified in the form of a convex uncertainty set, in particular a polyhedral or ellipsoidal one. It is shown that the robust two-stage versions of basic network optimization and selection problems are NP-hard, even in very restrictive cases. Some exact and approximation algorithms for the general problem are constructed. Polynomial and approximation algorithms for the robust two-stage versions of basic problems, such as the selection and shortest path problems, are also provided.
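The generic robust two-stage problem discussed here can be written as follows (a standard formulation; the paper's notation may differ):

```latex
\min_{x \in \mathcal{X}} \; \Big( C^{T} x \;+\; \max_{c \in \mathcal{U}} \; \min_{y \in \mathcal{Y}(x)} \; c^{T} y \Big),
```

where x is the first-stage decision with known costs C, the convex set U (polyhedral or ellipsoidal) contains the possible second-stage cost vectors c, and Y(x) is the set of feasible second-stage completions of x.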

