A Multi-Strategy Marine Predator Algorithm and Its Application in Joint Regularization Semi-Supervised ELM

Mathematics ◽  
2021 ◽  
Vol 9 (3) ◽  
pp. 291
Author(s):  
Wenbiao Yang ◽  
Kewen Xia ◽  
Tiejun Li ◽  
Min Xie ◽  
Fei Song

A novel semi-supervised learning method is proposed to better utilize labeled and unlabeled samples and thereby improve classification performance. The Laplacian regularization used in the semi-supervised extreme learning machine (SSELM) tends to yield poor generalization ability and ignores the role of labeled information. To solve these problems, a Joint Regularized Semi-Supervised Extreme Learning Machine (JRSSELM) is proposed, which replaces Laplacian regularization with Hessian regularization and adds a supervised-information regularization term. To address the slow convergence of the marine predator algorithm (MPA) and its tendency to fall into local optima, a multi-strategy marine predator algorithm (MSMPA) is proposed: it first uses a chaotic opposition learning strategy to generate a high-quality initial population, then uses adaptive inertia weights and an adaptive step control factor to improve exploration, exploitation, and convergence speed, and finally uses a neighborhood dimensional learning strategy to maintain population diversity. The parameters of JRSSELM are then optimized with MSMPA, and the resulting MSMPA-JRSSELM is applied to oil formation identification in well logging. The experimental results show that MSMPA is clearly superior and strongly competitive in terms of convergence accuracy and convergence speed, and that the classification performance of MSMPA-JRSSELM is better than that of other classification methods, with remarkable results in practical application.
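The chaotic opposition learning initialization mentioned above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the choice of a logistic map, the seed value, and the pairwise selection rule are assumptions.

```python
import numpy as np

def chaotic_opposition_init(pop_size, dim, lb, ub, x0=0.7):
    """Chaotic opposition-based initialisation (a sketch; MSMPA's exact
    chaotic map and selection rule may differ). A logistic map generates
    candidates in (0, 1); each candidate is paired with its opposition
    point lb + ub - x."""
    x = x0
    chaos = np.empty((pop_size, dim))
    for i in range(pop_size):
        for j in range(dim):
            x = 4.0 * x * (1.0 - x)          # logistic chaotic map
            chaos[i, j] = x
    pop = lb + chaos * (ub - lb)             # chaotic candidates
    opp = lb + ub - pop                      # opposition points
    return pop, opp

def select_fitter(pop, opp, objective):
    """Keep whichever of each candidate/opposition pair scores lower."""
    keep_pop = (objective(pop) <= objective(opp))[:, None]
    return np.where(keep_pop, pop, opp)
```

For a minimisation problem, `select_fitter(*chaotic_opposition_init(50, 10, -5, 5), objective)` yields an initial population that is, point by point, at least as good as either half alone.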

2017 ◽  
Vol 2017 ◽  
pp. 1-13 ◽  
Author(s):  
Lei Zhao ◽  
Zhicheng Jia ◽  
Lei Chen ◽  
Yanju Guo

The backtracking search algorithm (BSA) is a relatively new evolutionary algorithm with good optimization performance, like other population-based algorithms. However, BSA still falls short in convergence speed and convergence precision. To address this, this article proposes an improved BSA named COBSA. Inspired by the particle swarm optimization (PSO) algorithm, a population control factor is added to the variation equation to improve the convergence speed of BSA and give the algorithm a better ability to escape local optima. In addition, inspired by the differential evolution (DE) algorithm, this article proposes a novel evolutionary equation in which the disadvantaged group searches only around the best individual chosen from the previous iteration, enhancing the ability of local search. Simulation experiments on a set of 18 benchmark functions show that, in general, COBSA displays clear superiority in convergence speed and convergence precision compared with BSA and the comparison algorithms.
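The idea that the disadvantaged group searches only around the previous best individual can be sketched as below. The fraction, step scale, and Gaussian perturbation are illustrative assumptions; COBSA's actual evolutionary equation may differ.

```python
import numpy as np

rng = np.random.default_rng(42)

def refine_disadvantaged(pop, objective, best, frac=0.2, scale=0.1):
    """Move the worst `frac` of the population (the 'disadvantaged
    group') to random points near the best individual of the previous
    iteration, strengthening local search. (Illustrative constants;
    COBSA's actual equation may differ.)"""
    fit = objective(pop)
    k = max(1, int(frac * len(pop)))
    worst = np.argsort(fit)[-k:]             # indices of the worst individuals
    new_pop = pop.copy()
    new_pop[worst] = best + scale * rng.standard_normal((k, pop.shape[1]))
    return new_pop
```

The rest of the population is left untouched, so exploration elsewhere in the search space is preserved.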


2020 ◽  
Vol 90 (17-18) ◽  
pp. 2007-2021 ◽  
Author(s):  
Zhiyu Zhou ◽  
Ruoxi Zhang ◽  
Jianxin Zhang ◽  
Yaming Wang ◽  
Zefei Zhu ◽  
...  

Because fabric wrinkle levels are difficult to classify, this paper proposes a fabric wrinkle level classification model based on an online sequential extreme learning machine (OSELM) optimized by an improved sine cosine algorithm (SCA). The SCA has excellent global optimization ability: it can explore different search spaces and effectively avoid falling into local optima. Because the initial population of the SCA affects its optimization speed and quality, the SCA is initialized by differential evolution (DE) to avoid local optima, and the output weights and hidden layer biases are then optimized; that is, the improved SCA is used to select the optimal parameters of the OSELM to improve the generalization performance of the algorithm. To verify the performance of the proposed model, DE-SCA-OSELM, it is compared with other algorithms on a fabric wrinkle dataset collected under standard conditions. The experimental results indicate that the proposed model can effectively find the optimal parameter values of the OSELM. The average classification accuracy increased by 6.95%, 3.62%, 6.67%, and 3.34% compared with the OSELM, SCAELM, RVFL, and PSOSVM algorithms, respectively, which meets expectations.
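The core SCA position update that the paper improves can be sketched as follows (this follows the standard SCA formulation; the paper's variant additionally seeds the population via DE):

```python
import numpy as np

rng = np.random.default_rng(7)

def sca_step(pop, best, t, t_max, a=2.0):
    """One sine cosine algorithm update: each candidate moves along a
    sine or cosine trajectory toward (or past) the current best, with
    the step amplitude r1 shrinking linearly over iterations."""
    r1 = a * (1.0 - t / t_max)
    r2 = 2.0 * np.pi * rng.random(pop.shape)
    r3 = 2.0 * rng.random(pop.shape)
    r4 = rng.random(pop.shape)
    trig = np.where(r4 < 0.5, np.sin(r2), np.cos(r2))
    return pop + r1 * trig * np.abs(r3 * best - pop)
```

Early in the run (large r1) the oscillation ranges far from the best solution for exploration; by the final iteration r1 reaches zero and the population freezes around it.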


2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Qiang Cai ◽  
Fenghai Li ◽  
Yifan Chen ◽  
Haisheng Li ◽  
Jian Cao ◽  
...  

Thanks to the strong representation power of the convolutional neural network (CNN), image classification tasks have achieved considerable progress. However, the majority of works focus on designing complicated and redundant architectures for extracting informative features to improve classification performance. In this study, we concentrate instead on rectifying the incomplete outputs of a CNN. Concretely, we propose an innovative image classification method based on Label Rectification Learning (LRL) through a kernel extreme learning machine (KELM). It consists of two steps: (1) preclassification, extracting incomplete labels through a pretrained CNN, and (2) label rectification, rectifying the generated incomplete labels with the KELM to obtain the rectified labels. Experiments conducted on publicly available datasets demonstrate the effectiveness of our method. Notably, our method is extensible and can be easily integrated with off-the-shelf networks to improve performance.
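The rectification step relies on the KELM closed-form solution, which can be sketched as below. The RBF kernel and the hyperparameters C and gamma are illustrative assumptions, not the paper's settings; T would hold the incomplete CNN label vectors.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_fit(X, T, C=100.0, gamma=1.0):
    """Kernel ELM in closed form: solve (K + I/C) beta = T."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + np.eye(len(X)) / C, T)

def kelm_predict(X_train, beta, X_new, gamma=1.0):
    """Rectified label scores for new inputs."""
    return rbf_kernel(X_new, X_train, gamma) @ beta
```

Because the solution is a single linear solve, the rectifier adds negligible cost on top of the pretrained CNN.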


Mathematics ◽  
2020 ◽  
Vol 8 (2) ◽  
pp. 152 ◽  
Author(s):  
Su-qi Zhang ◽  
Kuo-Ping Lin

Short-term traffic flow forecasting is the technical basis of the intelligent transportation system (ITS). Higher-precision short-term traffic flow forecasting plays an important role in alleviating road congestion and improving traffic management efficiency. To improve the accuracy of short-term traffic flow forecasting, an improved bird swarm algorithm (IBSA) is used to optimize the random parameters of the extreme learning machine (ELM), and the resulting improved bird swarm optimization extreme learning machine (IBSAELM) model is established to predict short-term traffic flow. The main contributions of this paper are as follows: (1) Because the bird swarm algorithm (BSA) is prone to falling into local optima, its distribution mechanism is improved: the top five percent of particles with better fitness values are selected as producers, and the bottom ten percent with worse fitness values are selected as beggars. (2) One-day and two-day traffic flows are predicted by the support vector machine (SVM), particle swarm optimization support vector machine (PSOSVM), bird swarm optimization extreme learning machine (BSAELM), and IBSAELM models, respectively. (3) The prediction results of the models are evaluated. For the one-day traffic flow sequence, the mean absolute percentage error (MAPE) values of the IBSAELM model are smaller than those of the SVM, PSOSVM, and BSAELM models. The experimental analysis shows that the proposed IBSAELM model can meet actual engineering requirements.
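The "random parameters of the ELM" that the IBSA tunes are the input weights and hidden biases; a baseline ELM fixes them randomly and solves the output weights by pseudoinverse. A minimal sketch (sigmoid activation and layer size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden=50):
    """Basic ELM: random input weights W and biases b (the parameters
    an optimizer such as IBSA would tune), output weights beta by
    Moore-Penrose pseudoinverse."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden layer
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

An optimizer-wrapped variant would replace the random draw of W and b with candidate vectors scored by validation error, keeping the same closed-form solve for beta.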


2020 ◽  
Vol 10 (21) ◽  
pp. 7488
Author(s):  
Yutu Yang ◽  
Xiaolin Zhou ◽  
Ying Liu ◽  
Zhongkang Hu ◽  
Fenglong Ding

A deep learning feature extraction method and an extreme learning machine (ELM) classifier are combined to establish a deep extreme learning machine model for wood image defect detection. The convolutional neural network (CNN) algorithm alone tends to provide inaccurate defect locations, incomplete defect contour and boundary information, and inaccurate recognition of defect types. The nonsubsampled shearlet transform (NSST) is therefore used to preprocess the wood images, which reduces the complexity and computation of the image processing. The CNN then handles the deep-feature design for the wood images, and the simple linear iterative clustering algorithm is used to improve the initial model; the obtained image features are used as ELM classification inputs. The ELM has faster training speed and stronger generalization ability than comparable neural networks, but its random selection of input weights and thresholds degrades classification accuracy. A genetic algorithm is used here to optimize the initial parameters of the ELM and stabilize the network's classification performance. The deep extreme learning machine can extract high-level abstract information from the data, does not require iterative adjustment of the network weights, has high computational efficiency, and allows the CNN to effectively extract wood defect contours. The distributed input data features are automatically expressed in layered form by deep learning pre-training. The wood defect recognition accuracy reached 96.72% with a test time of only 187 ms.
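A genetic algorithm of the kind used to tune the ELM's initial parameters can be sketched as follows. The operators and rates here (tournament selection, uniform crossover, Gaussian mutation, elitism) are generic assumptions, not the paper's settings, and the demo objective stands in for the ELM validation error.

```python
import numpy as np

rng = np.random.default_rng(3)

def ga_minimize(fitness, dim, pop_size=30, gens=100, lb=-1.0, ub=1.0,
                mut_rate=0.1, mut_scale=0.1):
    """Minimal real-coded GA: tournament selection, uniform crossover,
    Gaussian mutation, and elitism. `fitness` maps a parameter vector
    (e.g. flattened ELM input weights and biases) to a scalar cost."""
    pop = rng.uniform(lb, ub, (pop_size, dim))
    for _ in range(gens):
        fit = np.apply_along_axis(fitness, 1, pop)
        # tournament selection: pair random individuals, keep the fitter
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
        # uniform crossover between consecutive parents
        mask = rng.random((pop_size, dim)) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation on a random subset of genes
        mut = rng.random((pop_size, dim)) < mut_rate
        children = np.clip(
            children + mut * mut_scale * rng.standard_normal((pop_size, dim)),
            lb, ub)
        children[0] = pop[np.argmin(fit)]    # elitism: carry over the best
        pop = children
    fit = np.apply_along_axis(fitness, 1, pop)
    return pop[np.argmin(fit)], float(fit.min())
```

In the paper's setting the fitness evaluation would train the ELM output layer for each candidate parameter vector and return its classification error.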


2019 ◽  
Vol 9 (14) ◽  
pp. 2879 ◽  
Author(s):  
Jun Dong ◽  
Chunming Ye

Production scheduling of semiconductor wafer manufacturing is a challenging research topic in industrial engineering. On this basis, the green manufacturing collaborative optimization problem of the semiconductor wafer distributed heterogeneous factory is first proposed; it is a typical NP-hard problem with practical application value and significance. To solve it, an effective algorithm is needed for the rational allocation of jobs among factories and for the production scheduling of the allocated jobs within each factory, so as to realize the collaborative optimization of the manufacturing process. In this paper, a scheduling model for green manufacturing collaborative optimization of the semiconductor wafer distributed heterogeneous factory is constructed. By designing a new learning strategy for the initial population and the leadership hierarchy, and a new search strategy for the predatory behavior of the grey wolf algorithm (a swarm intelligence optimization algorithm proposed in recent years), the diversity of the population is expanded and the local optima of the algorithm are avoided. In the experimental stage, test cases with two factories and with three factories are generated. The effectiveness and feasibility of the proposed algorithm are verified through comparison with the improved grey wolf algorithms MODGWO and MOGWO and with the fast and elitist multi-objective genetic algorithm NSGA-II.
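The baseline predatory-behavior update of the grey wolf algorithm, which the paper's new search strategy modifies, can be sketched as below (this follows the standard GWO formulation, where each wolf averages the positions proposed by the three leading wolves):

```python
import numpy as np

rng = np.random.default_rng(1)

def gwo_step(pop, alpha, beta, delta, a):
    """Standard grey wolf update: each wolf moves toward the mean of
    the positions proposed by the three leaders; `a` decays from 2 to
    0 over the run, shifting from exploration to exploitation."""
    def toward(leader):
        A = a * (2.0 * rng.random(pop.shape) - 1.0)
        C = 2.0 * rng.random(pop.shape)
        D = np.abs(C * leader - pop)
        return leader - A * D
    return (toward(alpha) + toward(beta) + toward(delta)) / 3.0
```

When |A| > 1 wolves are pushed away from the leaders (exploration); when |A| < 1 they close in (exploitation), which is why stagnation appears late in the run if the population diversity is not maintained.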


2020 ◽  
Vol 21 (1) ◽  
Author(s):  
Liang-Rui Ren ◽  
Ying-Lian Gao ◽  
Jin-Xing Liu ◽  
Junliang Shang ◽  
Chun-Hou Zheng

Background: As a machine learning method with high performance and excellent generalization ability, the extreme learning machine (ELM) is gaining popularity in various studies, and various ELM-based methods have been proposed for different fields. However, robustness to noise and outliers remains the main problem affecting the performance of ELM.

Results: In this paper, an integrated method named correntropy-induced-loss-based sparse robust graph regularized extreme learning machine (CSRGELM) is proposed. The introduction of the correntropy-induced loss improves the robustness of ELM and weakens the negative effects of noise and outliers. Constraining the output weight matrix with the L2,1-norm tends to yield a sparse output weight matrix and thus a simpler single-hidden-layer feedforward neural network model. Introducing graph regularization to preserve the local structural information of the data further improves the classification performance of the new method. Besides, an iterative optimization method based on the idea of half-quadratic optimization is designed to solve the non-convex problem of CSRGELM.

Conclusions: The classification results on benchmark datasets show that CSRGELM obtains better classification results than other methods. More importantly, applying the new method to the classification of cancer samples also yields a good classification effect.
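The L2,1-norm used to constrain the output weight matrix sums the Euclidean norms of the matrix's rows; minimizing it drives entire rows to zero, which is what prunes hidden neurons. A minimal sketch:

```python
import numpy as np

def l21_norm(B):
    """L2,1 norm of a matrix: sum over rows of each row's L2 norm.
    Unlike the Frobenius norm, penalising it favours row-sparse
    solutions (whole hidden neurons switched off)."""
    return float(np.sqrt((B ** 2).sum(axis=1)).sum())
```

Two matrices with identical Frobenius norm can have very different L2,1 norms: concentrating the same energy in fewer rows lowers the L2,1 value, so the penalty rewards row sparsity.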


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Chang-Jian Sun ◽  
Fang Gao

The marine predators algorithm (MPA) is a novel population-based optimization method that has been widely used in real-world optimization applications. However, MPA can easily fall into a local optimum because of the lack of population diversity in the late stage of optimization. To overcome this shortcoming, this paper proposes an MPA variant that hybridizes an estimation of distribution algorithm (EDA) and a Gaussian random walk strategy, named HEGMPA. The initial population is constructed using a cubic map to enhance the diversity of individuals. The EDA is then adapted into MPA to modify the evolutionary direction using population distribution information, improving the convergence performance of the algorithm. In addition, a Gaussian random walk strategy applied to medium-quality solutions helps the algorithm escape stagnation. The proposed algorithm is verified by simulation on the CEC2014 test suite. The results show that HEGMPA is more competitive than the comparison algorithms, with significant improvements in convergence accuracy and convergence speed.
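The cubic-map initialization step can be sketched as follows. The coefficient rho = 2.595 and the seed are common choices from the chaotic-map literature, assumed here rather than taken from the paper:

```python
import numpy as np

def cubic_map_init(pop_size, dim, lb, ub, rho=2.595, x0=0.3):
    """Initial population from the cubic chaotic map
    x_{k+1} = rho * x_k * (1 - x_k**2), which stays in (0, 1) for
    these parameters and is then scaled onto [lb, ub]."""
    x = x0
    seq = np.empty(pop_size * dim)
    for k in range(seq.size):
        x = rho * x * (1.0 - x * x)
        seq[k] = x
    return lb + seq.reshape(pop_size, dim) * (ub - lb)
```

Compared with uniform random sampling, the chaotic sequence spreads points more evenly across the search space without clustering, which is the diversity benefit the paper exploits.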


2021 ◽  
Vol 8 (3) ◽  
pp. 533
Author(s):  
Budi Nugroho ◽  
Eva Yulia Puspaningrum

This research developed a pneumonia detection system based on lung images from X-rays, using a Convolutional Neural Network (CNN) with an architecture different from several previous studies. In addition, the CNN model is modified so that the classification stage uses an Extreme Learning Machine (ELM); this combination is called CNN-ELM. The experimental dataset is a collection of lung X-ray images on Kaggle consisting of 1,583 normal images and 4,237 pneumonia images. The original image sizes in the Kaggle dataset vary, but almost all are larger than 1000x1000 pixels. Because such large images make classification processing less effective, CNN pipelines usually resize images to smaller dimensions. In this research, experiments were carried out with various input image sizes to determine their effect on classifier performance. The experimental results show that the input image size has a significant effect on pneumonia classification performance for both the CNN and CNN-ELM methods. At an input size of 200x200, both methods showed their highest performance. Comparing the two, the CNN-ELM method outperformed CNN in all test scenarios: at the highest performance conditions, the difference in accuracy between CNN-ELM and CNN reached 8.81%, and the difference in F1-score reached 0.0729.

