A Nondifferentiable Optimization Algorithm for Constrained Minimax Linkage Function Generation

1993 ◽  
Vol 115 (4) ◽  
pp. 978-987 ◽  
Author(s):  
K. Kurien Issac

This paper describes a nondifferentiable optimization (NDO) algorithm for constrained minimax linkage synthesis. The use of a proper characterization of minima makes the algorithm superior to smooth optimization algorithms for minimax linkage synthesis, and the concept of following the curved ravines of the objective function makes it very effective. The results obtained are superior to some reported solutions and demonstrate the algorithm's ability to consistently arrive at actual minima from widely separated starting points. The results also indicate that Chebyshev's characterization is not a necessary condition for minimax linkages, whereas the characterization used in the algorithm is a proper necessary condition.

Author(s):  
Nelson Ricardo Coelho Flores Zuniga

Although previous works have studied the accuracy and objective functions of several nonhyperbolic multiparametric travel-time approximations for velocity analysis, they lack tests concerning different optimization algorithms and how these influence accuracy and processing time. Once many approximations had been tested and the multimodal one with the best accuracy had been identified, it became possible to perform velocity analysis with different global-search optimization algorithms. For each optimization algorithm selected in this work, the curve calculated with the converted-wave moveout equation is fitted to the observed curve by minimization. The travel-time curves tested here are the PP and PS reflection events from the interface at the top of an offshore ultra-deep reservoir. After the inversion routine has been performed, the processing time and accuracy of each optimization algorithm can be determined for this kind of problem.
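The fitting step described above can be sketched as a global search over moveout parameters. The converted-wave moveout equation itself is not given in the abstract, so the sketch below substitutes a simple hyperbolic moveout and a plain random-search optimizer; the parameter names, values, and offsets are illustrative assumptions only.

```python
import math
import random

def moveout(t0, v, x):
    # Hyperbolic moveout t(x) = sqrt(t0^2 + (x/v)^2): a stand-in for the
    # nonhyperbolic converted-wave equation, which is not given here.
    return math.sqrt(t0 * t0 + (x / v) ** 2)

def misfit(params, offsets, observed):
    # Least-squares misfit between the calculated and observed curves.
    t0, v = params
    return sum((moveout(t0, v, x) - t) ** 2 for x, t in zip(offsets, observed))

def random_search(offsets, observed, bounds, n_iter=20000, seed=0):
    # A minimal global-search optimizer: uniform sampling within bounds.
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(n_iter):
        cand = tuple(rng.uniform(lo, hi) for lo, hi in bounds)
        err = misfit(cand, offsets, observed)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

# Synthetic "observed" travel times for t0 = 2.0 s, v = 1500 m/s.
offsets = [i * 100.0 for i in range(30)]
observed = [moveout(2.0, 1500.0, x) for x in offsets]
params, err = random_search(offsets, observed,
                            bounds=[(1.0, 3.0), (1000.0, 2000.0)])
```

Swapping the optimizer (e.g., for a population-based method) while keeping the same misfit function is exactly the kind of comparison the paper performs, with processing time and accuracy recorded for each choice.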


Author(s):  
Arnulf Jentzen ◽  
Benno Kuckuck ◽  
Ariel Neufeld ◽  
Philippe von Wurstemberger

Abstract: Stochastic gradient descent (SGD) optimization algorithms are key ingredients in a series of machine learning applications. In this article we perform a rigorous strong error analysis for SGD optimization algorithms. In particular, we prove for every arbitrarily small $\varepsilon \in (0,\infty)$ and every arbitrarily large $p \in (0,\infty)$ that the considered SGD optimization algorithm converges in the strong $L^p$-sense with order $1/2-\varepsilon$ to the global minimum of the objective function of the considered stochastic optimization problem under standard convexity-type assumptions on the objective function and relaxed assumptions on the moments of the stochastic errors appearing in the employed SGD optimization algorithm. The key ideas in our convergence proof are, first, to employ techniques from the theory of Lyapunov-type functions for dynamical systems to develop a general convergence machinery for SGD optimization algorithms based on such functions, then, to apply this general machinery to concrete Lyapunov-type functions with polynomial structures and, thereafter, to perform an induction argument along the powers appearing in the Lyapunov-type functions in order to achieve for every arbitrarily large $p \in (0,\infty)$ strong $L^p$-convergence rates.
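The convergence behavior analyzed above can be observed numerically in one dimension. The sketch below is an illustration only, not the paper's general machinery: it runs SGD with the classical decaying step size $c/n$ on a strongly convex quadratic with additive zero-mean Gaussian gradient noise (all of these modeling choices are assumptions made for the example).

```python
import random

def sgd(grad_oracle, x0, steps, c=1.0):
    # SGD with the decaying step size c/n.  Under convexity-type
    # assumptions this is the regime in which strong L^p convergence
    # with order 1/2 - eps, as analyzed in the paper, is obtained.
    x = x0
    for n in range(1, steps + 1):
        x -= (c / n) * grad_oracle(x)
    return x

# One-dimensional toy objective f(x) = (x - 3)^2 / 2: the exact gradient
# is x - 3, perturbed here by zero-mean Gaussian noise.
rng = random.Random(42)
noisy_grad = lambda x: (x - 3.0) + rng.gauss(0.0, 0.5)
x_final = sgd(noisy_grad, x0=0.0, steps=200_000)
```

After many steps the iterate concentrates around the global minimum at 3, with fluctuations that shrink at roughly the $n^{-1/2}$ rate suggested by the strong $L^p$ estimates.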


Author(s):  
Łukasz Knypiński

Purpose: The purpose of this paper is to carry out an efficiency analysis of selected metaheuristic algorithms (MAs) based on the investigation of analytical functions and of optimization processes for a permanent magnet motor.
Design/methodology/approach: A comparative performance analysis was conducted for the selected MAs. Optimization calculations were performed for the genetic algorithm (GA), the particle swarm optimization algorithm (PSO), the bat algorithm, the cuckoo search algorithm (CS), and the only-best-individual algorithm (OBI). All of the optimization algorithms were implemented as computer scripts. Next, all optimization procedures were applied to search for the optimum of a line-start permanent magnet synchronous motor using a multi-objective objective function.
Findings: The results show that the best statistical efficiency (mean objective function value and standard deviation, SD) is obtained for the PSO and CS algorithms, while the best results over several runs are obtained for PSO and GA. The type of optimization algorithm should be selected taking into account the duration of a single optimization process; for time-consuming processes, algorithms with a low SD should be used.
Originality/value: The newly proposed simple nondeterministic algorithm can also be applied to simple optimization calculations. On the basis of the presented simulation results, it is possible to assess the quality of the compared MAs.
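The statistical efficiency measure used in the findings (mean objective value and SD over repeated runs) can be sketched as a small benchmarking harness. The optimizer below is a plain random search standing in for the paper's MAs, and the sphere function stands in for the analytical test functions; both are assumptions for illustration.

```python
import random
import statistics

def sphere(x):
    # Analytical benchmark function with global minimum 0 at the origin.
    return sum(v * v for v in x)

def random_search(f, dim, bounds, n_iter, rng):
    # Stand-in stochastic optimizer (the paper's MAs: GA, PSO, bat, CS, OBI).
    best = float("inf")
    for _ in range(n_iter):
        best = min(best, f([rng.uniform(*bounds) for _ in range(dim)]))
    return best

def efficiency_stats(optimizer, f, runs=30, seed=0):
    # Statistical efficiency over independent runs: the mean objective
    # value and the standard deviation used to rank the algorithms.
    results = [optimizer(f, random.Random(seed + r)) for r in range(runs)]
    return statistics.mean(results), statistics.stdev(results)

opt = lambda f, rng: random_search(f, dim=5, bounds=(-5.0, 5.0),
                                   n_iter=2000, rng=rng)
mean_val, sd_val = efficiency_stats(opt, sphere)
```

Running the same harness for each algorithm gives exactly the mean/SD pairs the paper compares; a low SD indicates the repeatability that matters for time-consuming single runs.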


Author(s):  
YUPING WANG

In this paper, we propose a uniform enhancement approach called the smoothing function method, which can be combined with any optimization algorithm to improve its performance. The method has two phases. In the first phase, a smoothing function is constructed using a properly truncated Fourier series. It preserves the overall shape of the original objective function but eliminates many of its local optima, and thus approximates the objective function well. The optimal solution of the smoothing function is then searched for by an optimization algorithm (e.g., a traditional algorithm or an evolutionary algorithm), so that the search becomes much easier. In the second phase, we switch to optimizing the original function for some iterations, using the best solution(s) obtained in phase 1 as the initial point (population). Thereafter, the smoothing function is updated in order to approximate the original function more accurately. These two phases are repeated until the best solutions obtained in several successive second phases show no obvious improvement. In this manner, any optimization algorithm will find it much easier to search for optimal solutions. Finally, we use the proposed approach to enhance two typical optimization algorithms: Powell's direct algorithm and a simple genetic algorithm. Simulation results on ten challenging benchmarks indicate that the proposed approach can effectively improve the performance of these two algorithms.
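Phase 1 of the method can be illustrated in one dimension: sample the objective, keep only the low-order Fourier modes, and minimize the smoothed landscape. This is a minimal sketch under assumptions of my own (the test function, the sample count, and the truncation level are not from the paper, and the phase-2 refinement and update schedule are omitted):

```python
import cmath
import math

def dft(samples):
    # Naive discrete Fourier transform (O(n^2); adequate for a sketch).
    n = len(samples)
    return [sum(samples[k] * cmath.exp(-2j * math.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def smooth_by_truncation(samples, keep):
    # Keep only the `keep` lowest Fourier modes: the smoothed function
    # preserves the overall shape of f but loses its fast ripples,
    # and with them many shallow local optima.
    n = len(samples)
    coeffs = dft(samples)
    for j in range(n):
        if min(j, n - j) > keep:      # frequency index on the circle
            coeffs[j] = 0.0
    return [sum(coeffs[j] * cmath.exp(2j * math.pi * j * k / n)
                for j in range(n)).real / n for k in range(n)]

# Multimodal test function on [0, 1): a slow trend plus fast ripples.
n = 128
xs = [k / n for k in range(n)]
f = [math.cos(2 * math.pi * x) + 0.3 * math.sin(40 * math.pi * x) for x in xs]
g = smooth_by_truncation(f, keep=4)

# The smoothed landscape keeps only the broad minimum near x = 0.5, which
# would then serve as the starting point for phase 2 on f itself.
x_star = xs[min(range(n), key=lambda k: g[k])]
```

The ripple term creates many local minima in f, yet the truncated series retains only the slow trend, so a local search on g lands near the basin of the global optimum.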


2021 ◽  
Vol 11 (10) ◽  
pp. 4382
Author(s):  
Ali Sadeghi ◽  
Sajjad Amiri Doumari ◽  
Mohammad Dehghani ◽  
Zeinab Montazeri ◽  
Pavel Trojovský ◽  
...  

Optimization is the science of selecting the best solution from among the available solutions while respecting an optimization problem's limitations. Optimization algorithms have been introduced as efficient tools for solving optimization problems. These algorithms are designed based on various natural phenomena, the behavior and lifestyles of living beings, physical laws, rules of games, etc. In this paper, a new optimization algorithm called the good and bad groups-based optimizer (GBGBO) is introduced to solve various optimization problems. In GBGBO, population members are updated under the influence of two groups, named the good group and the bad group. The good group consists of a certain number of population members with better fitness values than the other members, and the bad group consists of a number of population members with worse fitness values than the other members of the population. GBGBO is mathematically modeled, and its performance in solving optimization problems was tested on a set of twenty-three different objective functions. In addition, for further analysis, the results obtained with the proposed algorithm were compared with those of eight optimization algorithms: the genetic algorithm (GA), particle swarm optimization (PSO), the gravitational search algorithm (GSA), teaching–learning-based optimization (TLBO), the grey wolf optimizer (GWO), the whale optimization algorithm (WOA), the tunicate swarm algorithm (TSA), and the marine predators algorithm (MPA). The results show that the proposed GBGBO algorithm has a good ability to solve various optimization problems and is more competitive than other similar algorithms.
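The good/bad-group mechanism can be sketched as follows. The abstract does not give the exact GBGBO update equations, so this is a hedged simplification of my own: each member is pulled toward the mean of the good group, pushed away from the mean of the bad group, and a candidate is kept only if it improves the fitness.

```python
import random

def sphere(x):
    # Simple unimodal test objective (minimum 0 at the origin).
    return sum(v * v for v in x)

def gbgbo_like(f, dim, bounds, pop_size=30, group=5, iters=200, seed=1):
    # Illustrative good/bad-group update (not the published GBGBO rule):
    # pull members toward the good-group mean, push them away from the
    # bad-group mean, and replace a member only when its candidate is better.
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        pop.sort(key=f)
        good = [sum(p[d] for p in pop[:group]) / group for d in range(dim)]
        bad = [sum(p[d] for p in pop[-group:]) / group for d in range(dim)]
        for i, x in enumerate(pop):
            r1, r2 = rng.random(), rng.random()
            cand = [min(hi, max(lo, x[d] + r1 * (good[d] - x[d])
                                        - r2 * (bad[d] - x[d])))
                    for d in range(dim)]
            if f(cand) < f(x):        # greedy replacement
                pop[i] = cand
    return min(pop, key=f)

best = gbgbo_like(sphere, dim=5, bounds=(-10.0, 10.0))
```

Because both group means are recomputed every generation, the population is steered by collective information rather than by a single best individual, which is the core idea the paper develops.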


2017 ◽  
Vol 65 (4) ◽  
pp. 479-488 ◽  
Author(s):  
A. Boboń ◽  
A. Nocoń ◽  
S. Paszek ◽  
P. Pruski

Abstract: The paper presents a method for determining the electromagnetic parameters of different synchronous generator models based on dynamic waveforms measured at power rejection. Such a test can be performed safely under normal operating conditions of a generator working in a power plant. The investigated models comprise a model expressed by the reactances and time constants of the steady, transient, and subtransient states in the d and q axes, as well as circuit models (types (3,3) and (2,2)) expressed by the resistances and inductances of the stator, excitation, and equivalent rotor damping circuit windings. All these models approximately take into account the influence of magnetic core saturation. The least squares method was used for parameter estimation: the objective function, defined as the mean square error between the measured waveforms and the waveforms calculated from the mathematical models, was minimized. A method of determining the initial values of those state variables which also depend on the searched parameters is presented. To minimize the objective function, a gradient optimization algorithm finding local minima from a selected starting point was used; to get closer to the global minimum, the calculations were repeated many times, taking into account the inequality constraints on the searched parameters. The paper presents the parameter estimation results and a comparison of the waveforms measured and calculated from the final parameters for 200 MW and 50 MW turbogenerators.
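The estimation scheme described above (least squares objective, local gradient search, repeated starts, inequality constraints) can be sketched on a deliberately simplified model. The two-parameter exponential decay below is a hypothetical stand-in for the generator waveforms; the real models involve d- and q-axis reactances and time constants, and the bounds and step sizes here are assumptions.

```python
import math
import random

def model(params, t):
    # Hypothetical decay a * exp(-t / tau): a stand-in for the measured
    # dynamic waveform, used only to illustrate the fitting machinery.
    a, tau = params
    return a * math.exp(-t / tau)

def mse(params, times, measured):
    # Objective: mean square error between measured and calculated waveforms.
    return sum((model(params, t) - y) ** 2
               for t, y in zip(times, measured)) / len(times)

def projected_gradient(f, x0, bounds, lr=0.05, iters=2000, h=1e-6):
    # Gradient search for a local minimum from one starting point, with
    # numerical gradients, projected onto the inequality constraints.
    x = list(x0)
    for _ in range(iters):
        fx = f(x)
        g = []
        for d in range(len(x)):
            xp = list(x)
            xp[d] += h
            g.append((f(xp) - fx) / h)
        x = [min(hi, max(lo, xd - lr * gd))
             for xd, gd, (lo, hi) in zip(x, g, bounds)]
    return x

def multistart(f, bounds, starts=8, seed=0):
    # Repeat the local search from several starting points to get
    # closer to the global minimum, as done in the paper.
    rng = random.Random(seed)
    runs = [projected_gradient(f, [rng.uniform(lo, hi) for lo, hi in bounds],
                               bounds) for _ in range(starts)]
    return min(runs, key=f)

times = [0.1 * k for k in range(50)]
measured = [model((2.0, 1.5), t) for t in times]    # noise-free synthetic data
bounds = [(0.5, 5.0), (0.5, 5.0)]
fit = multistart(lambda p: mse(p, times, measured), bounds)
```

The projection step enforces the inequality constraints after every update, and taking the best of several starts plays the role of the repeated calculations used to escape local minima.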


Mathematics ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1190
Author(s):  
Mohammad Dehghani ◽  
Zeinab Montazeri ◽  
Štěpán Hubálovský

There are many optimization problems in the different disciplines of science that must be solved using an appropriate method. Population-based optimization algorithms are among the most efficient ways to solve such problems: they can provide appropriate solutions based on a random search of the problem-solving space, without the need for gradient or derivative information. In this paper, a new optimization algorithm called the Group Mean-Based Optimizer (GMBO) is presented; it can be applied to solve optimization problems in various fields of science. The main idea in designing the GMBO is to use the information of different population members more effectively, based on two selected groups termed the good group and the bad group. Two new composite members are obtained by averaging each of these groups, and these are used to update the population members. The various stages of the GMBO are described and mathematically modeled with the aim of solving optimization problems. The performance of the GMBO in providing a suitable quasi-optimal solution is evaluated on a set of 23 standard objective functions of the unimodal, high-dimensional multimodal, and fixed-dimensional multimodal types. In addition, the optimization results obtained with the proposed GMBO were compared with those of eight other widely used optimization algorithms: the Marine Predators Algorithm (MPA), the Tunicate Swarm Algorithm (TSA), the Whale Optimization Algorithm (WOA), the Grey Wolf Optimizer (GWO), Teaching–Learning-Based Optimization (TLBO), the Gravitational Search Algorithm (GSA), Particle Swarm Optimization (PSO), and the Genetic Algorithm (GA). The optimization results indicated the acceptable performance of the proposed GMBO and, based on the analysis and comparison of the results, showed that the GMBO is superior to and much more competitive than the other eight algorithms.


2021 ◽  
Vol 13 (16) ◽  
pp. 8703
Author(s):  
Andrés Alfonso Rosales-Muñoz ◽  
Luis Fernando Grisales-Noreña ◽  
Jhon Montano ◽  
Oscar Danilo Montoya ◽  
Alberto-Jesus Perea-Moreno

This paper addresses the optimal power flow problem in direct current (DC) networks employing a master–slave solution methodology that combines an optimization algorithm based on the multiverse theory (master stage) and the numerical method of successive approximation (slave stage). The master stage proposes power levels to be injected by each distributed generator in the DC network, and the slave stage evaluates the impact of each power configuration (proposed by the master stage) on the objective function and the set of constraints that compose the problem. In this study, the objective function is the reduction of electrical power losses associated with energy transmission. In addition, the constraints are the global power balance, nodal voltage limits, current limits, and a maximum level of penetration of distributed generators. In order to validate the robustness and repeatability of the solution, this study used four other optimization methods that have been reported in the specialized literature to solve the problem addressed here: ant lion optimization, particle swarm optimization, continuous genetic algorithm, and black hole optimization algorithm. All of them employed the method based on successive approximation to solve the load flow problem (slave stage). The 21- and 69-node test systems were used for this purpose, enabling the distributed generators to inject 20%, 40%, and 60% of the power provided by the slack node in a scenario without distributed generation. The results revealed that the multiverse optimizer offers the best solution quality and repeatability in networks of different sizes with several penetration levels of distributed power generation.
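The slave stage can be sketched on a hypothetical three-node DC feeder (the paper uses 21- and 69-node test systems; the tiny network, conductances, and power values below are illustrative assumptions). The successive-approximation method solves the nonlinear nodal equations G_dd·V + G_ds·V_s = P/V by fixed-point iteration:

```python
def solve2(A, b):
    # Direct solution of the 2x2 reduced conductance system (Cramer's rule).
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def power_flow(P, G_dd, G_ds, V_s=1.0, iters=50):
    # Slave stage: successive approximations V <- G_dd^{-1}(P/V - G_ds*V_s)
    # for constant-power nodes (negative P = load, positive P = DG), in pu.
    V = [1.0, 1.0]                                   # flat start
    for _ in range(iters):
        rhs = [P[i] / V[i] - G_ds[i] * V_s for i in range(2)]
        V = solve2(G_dd, rhs)
    return V

# Hypothetical feeder: slack -- node 1 -- node 2, branch conductances 10 pu.
G_dd = [[20.0, -10.0], [-10.0, 10.0]]                # reduced Laplacian
G_ds = [-10.0, 0.0]                                  # coupling to the slack
P = [-0.5, 0.2]                                      # 0.5 pu load, 0.2 pu DG
V = power_flow(P, G_dd, G_ds)

# Master-stage objective: power losses = slack injection + sum of injections.
P_slack = 10.0 * (1.0 - V[0])
losses = P_slack + sum(P)
```

In the full methodology, the master stage (here it would be the multiverse optimizer) repeatedly proposes the DG injection vector P, calls this slave routine to obtain the voltages, and uses the resulting losses as the objective while checking the voltage, current, and penetration constraints.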


2007 ◽  
Vol 7 (7) ◽  
pp. 624-638
Author(s):  
J. de Vicente

We study the separability of bipartite quantum systems in arbitrary dimensions using the Bloch representation of their density matrix. This approach enables us to find an alternative characterization of the separability problem, from which we derive a necessary condition and sufficient conditions for separability. For a certain class of states the necessary condition and a sufficient condition turn out to be equivalent, therefore yielding a necessary and sufficient condition. The proofs of the sufficient conditions are constructive, thus providing decompositions in pure product states for the states that satisfy them. We provide examples that show the ability of these conditions to detect entanglement. In particular, the necessary condition is proved to be strong enough to detect bound entangled states.


2014 ◽  
Vol 986-987 ◽  
pp. 1954-1957
Author(s):  
Hai Feng Sun ◽  
Xiao Ming Wu ◽  
Chen Da Zheng ◽  
Xiao Qian Liu

This paper presents a new modeling method, based on the physical structure of the IGBT module, that takes the high-frequency distribution parameters into account. The method is simple, and the resulting wideband model has physical meaning: suitable initial model parameters and an objective function are chosen, and an iterative optimization algorithm is used to solve for the model parameters. Compared with measured results, the wideband model maintains high accuracy over the range of 100 kHz to 20 MHz.
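The parameter-solving step can be sketched as follows. The series R-L branch below is a hypothetical stand-in for the physics-based wideband model (which has more elements), and the compass-search optimizer, initial guesses, and step sizes are assumptions; only the 100 kHz to 20 MHz band comes from the abstract.

```python
import math

def z_model(R, L, f):
    # |Z| of a series R-L branch: a hypothetical stand-in for one branch
    # of the physics-based wideband IGBT model.
    return math.hypot(R, 2.0 * math.pi * f * L)

def fit_rl(freqs, measured, iters=100):
    # Iterative optimization of the model parameters: compass search on
    # the squared |Z| error, shrinking the steps when no move improves.
    err = lambda R, L: sum((z_model(R, L, f) - m) ** 2
                           for f, m in zip(freqs, measured))
    R, L = 1.0, 1e-7              # initial parameter guess
    dR, dL = 0.5, 2e-8            # coordinate step sizes
    for _ in range(iters):
        improved = False
        for cand in ((R + dR, L), (R - dR, L), (R, L + dL), (R, L - dL)):
            if cand[0] > 0 and cand[1] > 0 and err(*cand) < err(R, L):
                R, L = cand
                improved = True
        if not improved:
            dR, dL = dR / 2.0, dL / 2.0
    return R, L

# Synthetic "measurement" over 100 kHz - 20 MHz for R = 2 ohm, L = 50 nH.
freqs = [10.0 ** (5.0 + 2.301 * k / 29.0) for k in range(30)]
measured = [z_model(2.0, 50e-9, f) for f in freqs]
R_fit, L_fit = fit_rl(freqs, measured)
```

With suitable initial values the iteration recovers the branch parameters from the frequency response, mirroring how the paper fits its wideband model to measurements over the same band.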

