A Novel Bat Algorithm Based on Differential Operator and Lévy Flights Trajectory

2013 · Vol 2013 · pp. 1-13 · Author(s): Jian Xie, Yongquan Zhou, Huan Chen

To address the slow convergence rate and low accuracy of the bat algorithm, a novel bat algorithm based on a differential operator and a Lévy flights trajectory is proposed. In this paper, a differential operator similar to the "DE/best/2" mutation strategy in differential evolution is introduced to accelerate the convergence of the proposed algorithm. The Lévy flights trajectory preserves population diversity against premature convergence and helps the algorithm escape local minima effectively. Fourteen typical benchmark functions and an instance of nonlinear equations are tested; the simulation results show not only that the proposed algorithm is feasible and effective, but also that it has superior approximation capabilities in high-dimensional space.
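As a hedged illustration (not the authors' exact formulation), the sketch below shows the two building blocks the abstract names: a "DE/best/2"-style differential mutation and a Lévy-flight step drawn with Mantegna's algorithm. The function names, the scale factor F, and the stability index beta are assumptions for illustration only.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw a Levy-flight step via Mantegna's algorithm (beta: stability index)."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)   # numerator ~ N(0, sigma^2)
    v = rng.normal(0.0, 1.0, dim)     # denominator ~ N(0, 1)
    return u / np.abs(v) ** (1 / beta)

def de_best_2(x_best, pop, F=0.5, rng=None):
    """DE/best/2 mutation: x_best + F*(x_r1 - x_r2) + F*(x_r3 - x_r4)."""
    rng = rng or np.random.default_rng()
    r1, r2, r3, r4 = rng.choice(len(pop), size=4, replace=False)
    return x_best + F * (pop[r1] - pop[r2]) + F * (pop[r3] - pop[r4])
```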

Symmetry · 2019 · Vol 11 (7) · pp. 925 · Author(s): Li, Li, Liu, Ruan

This paper proposes an improved bat algorithm based on Lévy flights and adjustment factors (LAFBA). A dynamically decreasing inertia weight is added to the velocity update, which effectively balances the global and local search of the algorithm; a Lévy flight search strategy is added to the position update, so that the algorithm maintains good population diversity and improved global search ability; and a speed adjustment factor is added, which effectively improves the speed and accuracy of the algorithm. The proposed algorithm was then tested on 10 benchmark functions and 2 classical engineering design optimization problems. The simulation results show that LAFBA has stronger optimization performance and higher optimization efficiency than the basic bat algorithm and other bio-inspired algorithms. Furthermore, the results on the real-world engineering problems demonstrate the superiority of LAFBA in solving challenging problems with constrained and unknown search spaces.
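A minimal sketch of how the three modifications described above could fit into a single bat update, assuming a linearly decreasing inertia weight and reusing the `levy_step` helper from the previous sketch. The parameter names (w_max, w_min, the speed adjustment factor c, the 0.01 Lévy scale) and the exact update forms are assumptions, not the paper's equations.

```python
import numpy as np

def lafba_step(x, v, x_best, t, T, f_min=0.0, f_max=2.0,
               w_max=0.9, w_min=0.4, c=0.1, rng=None):
    """One hypothetical LAFBA update for a single bat (reuses levy_step above)."""
    rng = rng or np.random.default_rng()
    w = w_max - (w_max - w_min) * t / T           # dynamically decreasing inertia weight
    f = f_min + (f_max - f_min) * rng.random()    # pulse frequency, as in the basic bat algorithm
    v_new = w * v + c * (x - x_best) * f          # velocity update with inertia and adjustment factor
    x_new = x + v_new + 0.01 * levy_step(x.size)  # position update perturbed by a Levy flight
    return x_new, v_new
```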


2021 · pp. 1-12 · Author(s): Jian Zheng, Jianfeng Wang, Yanping Chen, Shuping Chen, Jingjin Chen, ...

Neural networks can approximate data because they contain many compact nonlinear layers. In high-dimensional space, the curse of dimensionality makes the data distribution sparse, so the data cannot provide sufficient information, and the approximation task becomes even harder for neural networks. To address this issue, two deviations are derived from the Lipschitz condition: the deviation of neural networks trained using high-dimensional functions, and the deviation of high-dimensional functions approximating the data. The purpose is to improve the ability of neural networks to approximate data in high-dimensional space. Experimental results show that neural networks trained using high-dimensional functions outperform those trained directly on data when approximating data in high-dimensional space. We find that neural networks trained using high-dimensional functions are better suited to high-dimensional space than those trained on data, so there is no need to retain large amounts of data for training. Our findings suggest that in high-dimensional space, tuning the hidden layers of a neural network has little positive effect on the precision of data approximation.
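The abstract's argument rests on the Lipschitz condition; one plausible reading of the two deviations, in assumed notation (N for the trained network, g for the high-dimensional training function, f for the data-generating map; none of these symbols come from the paper), is the triangle-inequality split below.

```latex
% Lipschitz condition on the target (L is the Lipschitz constant):
\[
  |f(x) - f(y)| \le L \,\|x - y\| \quad \text{for all } x, y .
\]
% Assumed decomposition of the overall approximation error into the
% two deviations named in the abstract (notation hypothetical):
\[
  \|N - f\| \;\le\; \underbrace{\|N - g\|}_{\text{network vs. function}}
            \;+\; \underbrace{\|g - f\|}_{\text{function vs. data}} .
\]
```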


IEEE Access · 2021 · Vol 9 · pp. 20100-20116 · Author(s): Xianjin Zhou, Fei Gao, Xi Fang, Zehong Lan

2017 · Vol 50 (46) · pp. 465002 · Author(s): Satya N Majumdar, Philippe Mounaix, Grégory Schehr
