The Optimisation for Local Coupled Extreme Learning Machine Using Differential Evolution

2015 ◽  
Vol 2015 ◽  
pp. 1-9 ◽  
Author(s):  
Yanpeng Qu ◽  
Ansheng Deng

Many strategies have been exploited to reinforce the effectiveness and efficiency of the extreme learning machine (ELM), from both methodology and structure perspectives. By activating all the hidden nodes with different degrees, the local coupled extreme learning machine (LC-ELM) decouples the link architecture between the input layer and the hidden layer of ELM. These activation degrees are jointly determined by the addresses and fuzzy membership functions assigned to the hidden nodes. To further refine the weight search space of LC-ELM, this paper implements an optimisation entitled evolutionary local coupled extreme learning machine (ELC-ELM). The method uses the differential evolution (DE) algorithm to optimise the hidden node addresses and the radii of the fuzzy membership functions until a qualifying fitness value or the maximum number of iterations is reached. The efficacy of the presented work is verified through systematic experiments on both regression and classification applications. Experimental results demonstrate that the proposed technique outperforms three ELM alternatives, namely the classical ELM, LC-ELM, and OSFuzzyELM, across a series of performance measures.
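The DE loop the abstract describes can be sketched in a few lines. This is a minimal DE/rand/1/bin implementation; the fitness function below is a hypothetical stand-in (a simple quadratic) for the LC-ELM validation error that the paper would actually minimise over hidden-node addresses and membership radii.

```python
import numpy as np

def differential_evolution(fitness, dim, pop_size=20, F=0.5, CR=0.9,
                           max_iter=100, seed=0):
    """Minimal DE/rand/1/bin: evolve a population toward lower fitness."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, (pop_size, dim))   # candidate parameter vectors
    scores = np.array([fitness(ind) for ind in pop])
    for _ in range(max_iter):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = a + F * (b - c)                # mutation
            cross = rng.random(dim) < CR            # binomial crossover mask
            cross[rng.integers(dim)] = True         # guarantee one mutant gene
            trial = np.where(cross, mutant, pop[i])
            s = fitness(trial)
            if s < scores[i]:                       # greedy selection
                pop[i], scores[i] = trial, s
    best = np.argmin(scores)
    return pop[best], scores[best]

# Toy fitness standing in for LC-ELM validation error: distance to a known optimum.
best, err = differential_evolution(lambda x: np.sum((x - 0.3) ** 2), dim=4)
```

In the paper's setting, the candidate vector would encode the hidden-node addresses and fuzzy-membership radii, and the fitness would be the network's error on a validation set.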

2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Zhike Zhao ◽  
Xiaoguang Zhang

An improved classification approach based on the extreme learning machine (ELM) is proposed to address the open research problem of complex multiclass samples. ELM was proposed for the single-hidden-layer feed-forward neural network (SLFNN) and is characterized by simple parameter selection rules, fast convergence, and little human intervention. To further improve the classification precision of ELM, an improved method for generating the ELM network structure is developed by dynamically adjusting the number of hidden nodes, where the change in the number of hidden nodes serves as the update step length of the algorithm. The improved algorithm is called the variable step incremental extreme learning machine (VSI-ELM). To verify the effect of the hidden-layer nodes on the performance of ELM, the performance test data sets are taken from the open-source University of California, Irvine (UCI) machine learning repository. Regression and classification experiments are used to study the performance of the VSI-ELM model, and the results show that the VSI-ELM algorithm is valid. Classifying different degrees of broken wires remains a problem in the nondestructive testing of hoisting wire rope; the magnetic flux leakage (MFL) method is an efficient nondestructive method which plays an important role in safety evaluation. To verify that the proposed VSI-ELM model is effective and reliable on real application data, it is used to classify different types of samples from MFL signals. The final experimental results show that the VSI-ELM algorithm achieves faster classification speed and higher classification accuracy for different broken wires.
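The incremental idea can be sketched as follows. The abstract does not give the exact step-update rule, so the sketch below makes a simplifying assumption: the hidden layer grows by a fixed step until training error falls below a tolerance, with output weights recomputed by the Moore-Penrose pseudoinverse each time.

```python
import numpy as np

def vsi_elm_train(X, y, step=5, max_nodes=100, tol=1e-3, seed=0):
    """Grow the hidden layer by `step` nodes until the training MSE < tol."""
    rng = np.random.default_rng(seed)
    n_feats = X.shape[1]
    L = 0
    while L < max_nodes:
        L += step                                    # enlarge the hidden layer
        W = rng.standard_normal((n_feats, L))        # random input weights
        b = rng.standard_normal(L)                   # random biases
        H = np.tanh(X @ W + b)                       # hidden-layer output matrix
        beta = np.linalg.pinv(H) @ y                 # least-squares output weights
        if np.mean((H @ beta - y) ** 2) < tol:
            break
    return W, b, beta

# Toy regression: fit y = sin(x) on [0, pi].
X = np.linspace(0, np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = vsi_elm_train(X, y)
pred = np.tanh(X @ W + b) @ beta
```

A real incremental ELM would reuse the already-trained nodes instead of re-drawing all weights at each step; the re-draw here only keeps the sketch short.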


2020 ◽  
Vol 2020 ◽  
pp. 1-7
Author(s):  
Jie Lai ◽  
Xiaodan Wang ◽  
Rui Li ◽  
Yafei Song ◽  
Lei Lei

To prevent overfitting and improve the generalization performance of the Extreme Learning Machine (ELM), this paper proposes a new regularization method, Biased DropConnect, and a new regularized ELM (BD-ELM) that uses both Biased DropConnect and Biased Dropout. Like Biased Dropout applied to hidden nodes, Biased DropConnect exploits differences among connection weights to retain more of the network's information after dropping. Regular Dropout and DropConnect set the hidden-layer outputs and connection weights to 0 with a single fixed probability. Biased DropConnect and Biased Dropout instead divide the connection weights and hidden nodes into high and low groups by a threshold and set each group to 0 with a different probability. Connection weights with high values and hidden nodes with high activations, which contribute more to network performance, are kept with a lower drop probability, while low-valued weights and hidden nodes are given a higher drop probability, keeping the overall drop probability of the network at a fixed constant. With Biased DropConnect and Biased Dropout regularization, BD-ELM enhances the sparsity of parameters and reduces structural complexity. Experiments on various benchmark datasets show that Biased DropConnect and Biased Dropout effectively address overfitting, and that BD-ELM provides higher classification accuracy than ELM, R-ELM, and Drop-ELM.
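The threshold-and-two-probabilities mechanism described above can be sketched directly. The particular drop probabilities and threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def biased_dropconnect(W, threshold, p_high=0.1, p_low=0.5, rng=None):
    """Drop weights group-wise: high-magnitude weights survive more often."""
    rng = rng or np.random.default_rng(0)
    high = np.abs(W) >= threshold                 # split weights into two groups
    p_drop = np.where(high, p_high, p_low)        # lower drop prob for high group
    mask = rng.random(W.shape) >= p_drop          # keep each weight with 1 - p_drop
    return W * mask

W = np.random.default_rng(1).standard_normal((4, 4))
W_dropped = biased_dropconnect(W, threshold=np.median(np.abs(W)))
```

With the median as threshold, half the weights fall in each group, so the expected overall drop rate is (p_high + p_low) / 2 — a fixed constant, as the abstract requires.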


2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Fei Gao ◽  
Jiangang Lv

A Single-Stage Extreme Learning Machine (SS-ELM) is presented in this paper to address mechanical fault diagnosis. In SS-ELM, the traditional mapping of the extreme learning machine (ELM) is changed: the eigenvectors extracted by signal processing methods are directly regarded as the outputs of the network's hidden layer. This avoids both the uncertainty introduced when training data are transformed from the input space to the ELM feature space by the ELM mapping and the problem of selecting the number of hidden nodes. Experimental results on diesel engine fault diagnosis show the good performance of the SS-ELM algorithm.


2020 ◽  
Vol 2020 ◽  
pp. 1-10 ◽  
Author(s):  
Qinwei Fan ◽  
Ting Liu

The extreme learning machine (ELM) was put forward for single-hidden-layer feedforward networks. Because of its powerful modeling ability and its need for little human intervention, the ELM algorithm has been widely used in both regression and classification experiments. However, to achieve the required accuracy, it needs many more hidden nodes than conventional neural networks typically do. This paper considers a new efficient learning algorithm for ELM with smoothing L0 regularization. The algorithm updates the weights in the direction along which the overall squared error is reduced the most, and can therefore sparsify the network structure very efficiently. Numerical experiments show that ELM with smoothing L0 regularization uses fewer hidden nodes yet achieves better generalization performance than the original ELM and ELM with L1 regularization.
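The L0 "norm" (the count of nonzero weights) is not differentiable, which is why a smoothing surrogate is needed before gradient-based updates can use it. The paper's exact smoothing function is not given in the abstract; a common Gaussian-based surrogate, shown below as an illustrative assumption, approaches the 0/1 indicator as sigma shrinks.

```python
import numpy as np

def smoothed_l0(w, sigma):
    """Differentiable surrogate for the number of nonzero entries of w:
    1 - exp(-w^2 / (2 sigma^2)) tends to 0 at w = 0 and to 1 elsewhere."""
    return np.sum(1.0 - np.exp(-w**2 / (2.0 * sigma**2)))

w = np.array([0.0, 0.0, 1.5, -2.0])       # true L0 count is 2
approx = smoothed_l0(w, sigma=0.01)       # close to 2 for small sigma
```

Adding such a penalty to the squared error lets the training procedure drive redundant hidden-node weights toward exactly zero, which is how the sparser network structure arises.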


The Extreme Learning Machine (ELM) is an efficient and effective least-squares-based learning algorithm for classification and regression problems, built on the single-hidden-layer feed-forward neural network (SLFN). It has been shown in the literature to have fast convergence and good generalization ability on moderate datasets. However, computing the pseudoinverse is a great challenge when there is a large number of hidden nodes, or a large number of training instances, in complex pattern recognition problems. To address this problem, approaches such as EM-ELM and DF-ELM have been proposed in the literature. In this paper, a new rank-based decomposition of the hidden-layer matrix is introduced to achieve optimal training time and reduce the computational complexity for a large number of hidden nodes. The results show that its training time is nearly constant, close to the minimal training time and far from the worst-case training time of the DF-ELM algorithm, which has been shown to be efficient in the recent literature.


2021 ◽  
pp. 107482
Author(s):  
Carlos Perales-González ◽  
Francisco Fernández-Navarro ◽  
Javier Pérez-Rodríguez ◽  
Mariano Carbonero-Ruz

Author(s):  
Jia-Bin Zhou ◽  
Yan-Qin Bai ◽  
Yan-Ru Guo ◽  
Hai-Xiang Lin

Abstract: In general, data contain noise arising from faulty instruments, flawed measurements, or faulty communication. Learning with data for classification or regression is inevitably affected by such noise. To remove or greatly reduce its impact, we introduce the ideas of fuzzy membership functions and the Laplacian twin support vector machine (Lap-TSVM). A formulation of the linear intuitionistic fuzzy Laplacian twin support vector machine (IFLap-TSVM) is presented, and we extend the linear IFLap-TSVM to the nonlinear case via kernel functions. The proposed IFLap-TSVM resolves the negative impact of noise and outliers by using fuzzy membership functions, and is a more accurate and reasonable classifier because it uses the geometric distribution information of labeled and unlabeled data via manifold regularization. Experiments with constructed artificial datasets, several UCI benchmark datasets, and the MNIST dataset show that IFLap-TSVM achieves better classification accuracy than the state-of-the-art twin support vector machine (TSVM), the intuitionistic fuzzy twin support vector machine (IFTSVM), and Lap-TSVM.
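The fuzzy-membership device for down-weighting noisy samples can be sketched simply. The exact membership function of IFLap-TSVM is not given in the abstract; the distance-to-class-centre form below is the standard construction used by fuzzy SVM variants and is an illustrative assumption here.

```python
import numpy as np

def fuzzy_membership(X, delta=1e-6):
    """Assign each sample a membership in (0, 1]: points far from the
    class centre (likely noise/outliers) get memberships near 0."""
    centre = X.mean(axis=0)
    d = np.linalg.norm(X - centre, axis=1)        # distance to class centre
    r = d.max()                                   # class radius
    return 1.0 - d / (r + delta)

# Three tight points plus one outlier: the outlier gets the lowest membership.
X = np.array([[0.0, 0.0], [0.1, 0.0], [-0.1, 0.0], [5.0, 0.0]])
m = fuzzy_membership(X)
```

These memberships then weight each sample's slack term in the twin-SVM objectives, so misfitting an outlier costs little while misfitting a confident point costs a lot.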


2014 ◽  
Vol 989-994 ◽  
pp. 3679-3682 ◽  
Author(s):  
Meng Meng Ma ◽  
Bo He

The extreme learning machine (ELM), a relatively novel machine learning algorithm for single-hidden-layer feed-forward neural networks (SLFNs), has shown competitive performance thanks to its simple structure and superior training speed. To improve the effectiveness of ELM on noisy datasets, a deep-structure ELM (DS-ELM) is proposed in this paper. DS-ELM contains three levels of networks: the first level is an auto-associative neural network (AANN) trained to filter out noise and reduce dimension when necessary; the second level is another AANN used to fix the input weights and biases of the ELM; and the last level is the ELM itself. Experiments on four noisy datasets are carried out to examine the proposed DS-ELM algorithm, and the results show that DS-ELM outperforms ELM when dealing with noisy data.
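The first DS-ELM level uses an AANN to denoise inputs. A linear AANN with a k-unit bottleneck reconstructs inputs through their top-k principal directions, so the sketch below uses that equivalence (an assumption for brevity; the paper trains an actual neural AANN).

```python
import numpy as np

def linear_aann_denoise(X, k):
    """Reconstruct X through its top-k principal directions, discarding
    the off-subspace component where isotropic noise mostly lives."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:k].T @ Vt[:k]                         # projector onto top-k subspace
    return Xc @ P + mean

# Rank-1 signal corrupted by small isotropic noise.
rng = np.random.default_rng(0)
clean = rng.standard_normal((100, 1)) @ rng.standard_normal((1, 5))
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = linear_aann_denoise(noisy, k=1)
```

The denoised output would then feed the second-level AANN and finally the ELM classifier in the three-level pipeline the abstract describes.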

