Particle Swarm Optimization With Crossover and Mutation Operators Using the Diversity Criteria

Author(s):  
Guang Dong ◽  
John Cooper

Particle swarm optimization (PSO) is a population-based global search algorithm that mimics the behavior of swarms. It belongs to the larger class of evolutionary algorithms and is a widely used stochastic technique in global optimization. Since PSO is population based, it requires no auxiliary information, such as the gradient of the problem. Each particle in PSO uses only two pieces of information, the personal best position and the global best position, to update its velocity and position from generation to generation. One disadvantage of this algorithm is that it is easily trapped in local optima because of premature convergence, which can be an issue when solving complex multimodal functions with multiple local minima. Hence, a global optimization algorithm should be able to avoid being trapped in local optima by keeping a wide search space and maintaining population diversity. To improve the performance of PSO on complex global optimization problems, this paper introduces both crossover and mutation operators into the basic PSO algorithm. In the proposed algorithm, all particles in the current iteration undergo crossover and mutation operations whenever the diversity criterion of the swarm falls below a predefined limit. PSO with both crossover and mutation operators can therefore maintain population diversity and enhance the search ability so as to obtain better results on complex problems. This study adopts the average distance around the swarm center as the diversity measure and extends the distance metric to both the L1 norm and the L∞ norm. To verify the usability and effectiveness of the proposed algorithm, it is applied to 12 widely used nonlinear benchmark functions.
These examples show that the proposed PSO with crossover and mutation operators using the diversity criterion achieves better optimization performance than the basic PSO by maintaining swarm diversity. Moreover, PSO using the L1 norm distance diversity gives better results than both the L2 and L∞ norm distances in most cases.
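The diversity measure described in this abstract, the average distance of particles from the swarm center under a selectable norm, can be sketched as follows. The function names and the trigger mechanism are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def swarm_diversity(positions, ord=2):
    """Average distance of particles from the swarm center.

    positions: (n_particles, n_dims) array; ord selects the norm
    (1 for L1, 2 for L2, np.inf for L-infinity).
    """
    center = positions.mean(axis=0)
    return np.mean(np.linalg.norm(positions - center, ord=ord, axis=1))

def needs_diversification(positions, limit, ord=2):
    """True when diversity has fallen below the predefined limit, i.e.
    when the crossover/mutation operations would be triggered."""
    return bool(swarm_diversity(positions, ord=ord) < limit)
```

In this sketch the swarm would apply crossover and mutation to all particles whenever `needs_diversification` returns `True`, restoring spread before the search stagnates.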

2013 ◽  
Vol 427-429 ◽  
pp. 1934-1938
Author(s):  
Zhong Rong Zhang ◽  
Jin Peng Liu ◽  
Ke De Fei ◽  
Zhao Shan Niu

The aim is to improve the convergence of the algorithm and to increase population diversity. Particles in groups that have fallen into local optima are adaptively adjusted, by judging the groups' spatial concentration and fitness variance, so that the global optimum can be reached. At the same time, the global factors are adjusted dynamically according to the current particle fitness. Four typical function optimization problems are used in simulation experiments. The results show that the improved particle swarm optimization algorithm is convergent, robust, and accurate.
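The fitness variance mentioned above is a common indicator of premature convergence: when all particles cluster around one optimum, their fitness values become nearly identical and the variance collapses toward zero. A minimal sketch of one widely used normalized form (the normalization and function name are assumptions, not taken from this paper):

```python
import numpy as np

def normalized_fitness_variance(fitness):
    """Scale-free fitness variance of the swarm.

    Deviations from the mean fitness are divided by their maximum
    (when that maximum exceeds 1) so the value is comparable across
    problems; a near-zero result suggests the swarm has clustered
    around a single (possibly local) optimum.
    """
    f = np.asarray(fitness, dtype=float)
    dev = np.abs(f - f.mean())
    scale = dev.max() if dev.max() > 1.0 else 1.0
    return float(np.mean((dev / scale) ** 2))
```

A swarm whose normalized fitness variance stays near zero for several iterations would then be a candidate for the adaptive adjustment the abstract describes.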


2013 ◽  
Vol 2013 ◽  
pp. 1-12 ◽  
Author(s):  
Martins Akugbe Arasomwan ◽  
Aderemi Oluyinka Adewumi

The linear decreasing inertia weight (LDIW) strategy was introduced to improve the performance of the original particle swarm optimization (PSO). However, the LDIW-PSO algorithm is known to suffer from premature convergence when solving complex (multipeak) optimization problems, because particles lack enough momentum for exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants, some of which have been claimed to outperform LDIW-PSO. The major goal of this paper is to establish experimentally that LDIW-PSO is highly efficient if its parameters are properly set. First, an experiment was conducted to acquire a percentage value of the search-space limits for computing the particle velocity limits in LDIW-PSO, based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors that have previously claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO, with the latter outperforming both in the simulation experiments conducted.
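The two parameters the abstract discusses, the linearly decreasing inertia weight and the velocity limit expressed as a percentage of the search-space range, can be sketched as below. The default weight bounds (0.9 down to 0.4) are the values commonly used in the PSO literature, not necessarily those tuned in this paper, and the parameter names are assumptions.

```python
def ldiw(t, t_max, w_max=0.9, w_min=0.4):
    """Linear decreasing inertia weight: w_max at t = 0, decreasing
    linearly to w_min at t = t_max."""
    return w_max - (w_max - w_min) * t / t_max

def velocity_limit(lower, upper, pct):
    """Maximum velocity magnitude as a percentage of the search-space
    range, the quantity tuned experimentally in the paper."""
    return pct * (upper - lower)
```

In each iteration `t`, the velocity update would use `ldiw(t, t_max)` as the inertia coefficient, and each velocity component would be clamped to ±`velocity_limit(lower, upper, pct)`.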


2014 ◽  
Vol 2014 ◽  
pp. 1-16 ◽  
Author(s):  
Xiaobing Yu ◽  
Jie Cao ◽  
Haiyan Shan ◽  
Li Zhu ◽  
Jun Guo

Particle swarm optimization (PSO) and differential evolution (DE) are both efficient and powerful population-based stochastic search techniques for solving optimization problems, and both have been widely applied in many scientific and engineering fields. Unfortunately, both are easily trapped in local optima and lack the ability to jump out of them. A novel adaptive hybrid algorithm based on PSO and DE (HPSO-DE) is formulated by developing a balanced parameter between PSO and DE. Adaptive mutation is carried out on the current population when the population clusters around local optima. HPSO-DE enjoys the advantages of PSO and DE and maintains the diversity of the population. Compared with PSO, DE, and their variants, the performance of HPSO-DE is competitive. The sensitivity of the balanced parameter is discussed in detail.
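One way such a hybrid can be organized is to let a balance parameter decide, per particle, whether the PSO velocity rule or a DE mutation-and-crossover step produces the candidate position. The sketch below illustrates that idea only; the fixed 50/50 balance, the DE/rand/1 scheme, and all parameter names are assumptions, not the HPSO-DE formulation itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def hybrid_step(pos, vel, pbest, gbest, p=0.5,
                w=0.7, c1=1.5, c2=1.5, F=0.5, CR=0.9):
    """One illustrative hybrid update: with probability p apply the
    standard PSO velocity rule, otherwise a DE/rand/1 mutation with
    binomial crossover against the particle's current position."""
    n, d = pos.shape
    new_pos = pos.copy()
    for i in range(n):
        if rng.random() < p:  # PSO branch
            r1, r2 = rng.random(d), rng.random(d)
            vel[i] = (w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            new_pos[i] = pos[i] + vel[i]
        else:                 # DE branch: three distinct peer particles
            a, b, c = rng.choice([j for j in range(n) if j != i],
                                 3, replace=False)
            mutant = pos[a] + F * (pos[b] - pos[c])
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True  # at least one mutated gene
            new_pos[i] = np.where(cross, mutant, pos[i])
    return new_pos, vel
```

A full optimizer would follow each step with fitness evaluation, personal/global best updates, and, as the abstract notes, an adaptive mutation applied when the population clusters around local optima.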


Author(s):  
Hrvoje Markovic ◽  
Fangyan Dong ◽  
Kaoru Hirota

A parallel multi-population metaheuristic optimization framework, called Concurrent Societies, inspired by human intellectual evolution, is proposed. It uses population-based metaheuristics to evolve its populations, and fitness function approximations as representations of knowledge. By utilizing iteratively refined approximations it reduces the number of required evaluations and, as a byproduct, produces models of the fitness function. The proposed framework is implemented as two Concurrent Societies, one based on a genetic algorithm and one based on particle swarm optimization, both using k-nearest neighbor regression as the fitness approximation. The performance is evaluated on 10 standard test problems and compared to other commonly used metaheuristics. Results show that use of the framework considerably increases efficiency (by a factor of 7.6 to 977) and effectiveness (absolute error reduced by more than a few orders of magnitude). The proposed framework is intended for optimization problems with expensive fitness functions, such as optimization in design and interactive optimization.
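The k-nearest neighbor regression surrogate used in both Concurrent Societies can be sketched minimally: store every point where the expensive fitness function was actually evaluated, and predict the fitness of a new candidate as the average over its k closest stored points. The class name and interface here are illustrative assumptions.

```python
import numpy as np

class KnnSurrogate:
    """Minimal k-nearest-neighbor regression surrogate for an
    expensive fitness function."""

    def __init__(self, k=3):
        self.k = k
        self.X = []  # evaluated points
        self.y = []  # their true fitness values

    def add(self, x, fx):
        """Record a true (expensive) evaluation."""
        self.X.append(np.asarray(x, dtype=float))
        self.y.append(float(fx))

    def predict(self, x):
        """Cheap approximation: mean fitness of the k nearest
        stored points."""
        X = np.array(self.X)
        d = np.linalg.norm(X - np.asarray(x, dtype=float), axis=1)
        idx = np.argsort(d)[: self.k]
        return float(np.mean(np.array(self.y)[idx]))
```

During the search, most candidates would be screened with `predict`, and only the most promising ones passed to the true fitness function, whose results are fed back via `add` to iteratively refine the approximation.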


2012 ◽  
Vol 239-240 ◽  
pp. 1027-1032 ◽  
Author(s):  
Qing Guo Wei ◽  
Yan Mei Wang ◽  
Zong Wu Lu

Applying many electrodes is undesirable in real-life brain-computer interface (BCI) applications, since the recording preparation can be troublesome and time-consuming. Multi-objective particle swarm optimization (MOPSO) has been widely utilized to solve multi-objective optimization problems and can thus be employed for channel selection. This paper presents a novel method named cultural-based MOPSO (CMOPSO) for channel selection in motor imagery based BCI. The CMOPSO method introduces a cultural framework to adapt the personalized flight parameters of the mutated particles. A comparison between the proposed algorithm and a typical L1-norm algorithm was conducted, and the results showed that the proposed approach is more effective at selecting a smaller subset of channels without reducing classification accuracy.


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Waqas Haider Bangyal ◽  
Abdul Hameed ◽  
Wael Alosaimi ◽  
Hashem Alyami

The particle swarm optimization (PSO) algorithm is a population-based intelligent stochastic search technique inspired by the food-searching behavior of swarming bees, and it is widely used to solve diverse optimization problems. Initialization of the population is a critical factor in PSO, considerably influencing diversity and convergence during the search. Quasirandom sequences are useful for initializing the population to improve diversity and convergence, rather than applying a random distribution for initialization. This paper extends the performance of PSO by introducing a new initialization technique named WELL, based on a low-discrepancy sequence. To solve optimization problems in large-dimensional search spaces, the proposed solution is termed WE-PSO. The suggested solution has been verified on fifteen well-known unimodal and multimodal benchmark test problems extensively used in the literature. Moreover, the performance of WE-PSO is compared with standard PSO and two other initialization approaches, Sobol-based PSO (SO-PSO) and Halton-based PSO (H-PSO). The findings indicate that WE-PSO is better than the standard multimodal problem-solving techniques, validating the efficacy and effectiveness of our approach. In addition, the proposed approach is applied to artificial neural network (ANN) learning and contrasted with the standard backpropagation algorithm, standard PSO, H-PSO, and SO-PSO, respectively. Our technique achieves a higher accuracy score and outperforms the traditional methods. The outcome of our work also offers insight into how the proposed initialization technique strongly affects the quality of the cost function, convergence, and diversity.
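The idea of low-discrepancy initialization is that quasirandom points cover the search space more evenly than uniform random draws, giving the initial swarm better diversity. As a concrete illustration (using the Halton sequence, one of the comparison baselines in the abstract, rather than the WELL-based sequence itself):

```python
import numpy as np

def halton(n, dims):
    """First n points of the Halton low-discrepancy sequence: a van der
    Corput sequence in a distinct prime base per dimension, in [0, 1)."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

    def vdc(i, base):
        x, denom = 0.0, 1.0
        while i > 0:
            denom *= base
            i, rem = divmod(i, base)
            x += rem / denom
        return x

    return np.array([[vdc(i + 1, primes[d]) for d in range(dims)]
                     for i in range(n)])

def init_swarm(n, lower, upper):
    """Initialize particle positions on a Halton sequence scaled to the
    search bounds (a stand-in for the WELL-based initialization)."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    return lower + halton(n, lower.size) * (upper - lower)
```

Replacing `halton` with a Sobol sampler, or with draws from a WELL generator, yields the SO-PSO and WE-PSO initializations the abstract compares; the rest of the PSO loop is unchanged.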

