BD-ELM: A Regularized Extreme Learning Machine Using Biased DropConnect and Biased Dropout

2020 ◽  
Vol 2020 ◽  
pp. 1-7
Author(s):  
Jie Lai ◽  
Xiaodan Wang ◽  
Rui Li ◽  
Yafei Song ◽  
Lei Lei

To prevent overfitting and improve the generalization performance of the Extreme Learning Machine (ELM), this paper proposes a new regularization method, Biased DropConnect, together with a regularized ELM that uses both Biased DropConnect and Biased Dropout (BD-ELM). Just as Biased Dropout acts on hidden nodes, Biased DropConnect exploits differences among connection weights so that more network information is retained after dropping. Standard Dropout and DropConnect zero the hidden-layer outputs and connection weights with a single fixed probability. Biased DropConnect and Biased Dropout instead divide the connection weights and hidden nodes into high and low groups by a threshold, and zero each group with a different probability: high-valued weights and highly activated hidden nodes, which contribute more to network performance, are kept with a lower drop probability, while low-valued weights and nodes receive a higher drop probability, so that the overall drop probability of the network remains a fixed constant. With Biased DropConnect and Biased Dropout regularization, BD-ELM has sparser parameters and lower structural complexity. Experiments on various benchmark datasets show that Biased DropConnect and Biased Dropout effectively alleviate overfitting, and that BD-ELM achieves higher classification accuracy than ELM, R-ELM, and Drop-ELM.
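The grouped dropping scheme described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: the median is assumed as the high/low threshold, and the high-group probability is solved from the constraint that the expected overall drop rate stays fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

def biased_dropconnect(W, p_total=0.5, p_low=0.7):
    """Biased DropConnect sketch: split weights into high/low groups by a
    magnitude threshold (the median here, an assumption) and drop the low
    group with probability p_low, choosing p_high so that the expected
    overall drop rate stays p_total."""
    low = np.abs(W) < np.median(np.abs(W))
    n_low = low.sum()
    n_high = W.size - n_low
    # p_total * N = p_low * n_low + p_high * n_high  =>  solve for p_high
    p_high = np.clip((p_total * W.size - p_low * n_low) / max(n_high, 1), 0.0, 1.0)
    keep = np.where(low,
                    rng.random(W.shape) >= p_low,    # low group: dropped more often
                    rng.random(W.shape) >= p_high)   # high group: kept more often
    return W * keep, float(p_high)
```

The same split-by-threshold logic applies to Biased Dropout, with hidden-layer activations in place of the weight matrix.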

2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Zhike Zhao ◽  
Xiaoguang Zhang

An improved classification approach based on the extreme learning machine (ELM) is proposed for complex multiclass samples, a current research focus. ELM was developed for single-hidden-layer feed-forward neural networks (SLFNNs) and is characterized by simple parameter-selection rules, fast convergence, and little need for human intervention. To further improve the classification precision of ELM, an improved method of generating the network structure is developed that dynamically adjusts the number of hidden nodes; the change in the number of hidden nodes serves as the update step length of the algorithm. The resulting algorithm is called the variable step incremental extreme learning machine (VSI-ELM). To examine the effect of the hidden-layer nodes on ELM performance, the test data sets are drawn from the open-source University of California, Irvine (UCI) machine learning repository, and both regression and classification experiments are used to study the VSI-ELM model. The experimental results show that the VSI-ELM algorithm is valid. Classifying different degrees of broken wires remains an open problem in the nondestructive testing of hoisting wire rope, where the magnetic flux leakage (MFL) method is an efficient nondestructive technique that plays an important role in safety evaluation. The proposed VSI-ELM model is shown to be effective and reliable on real application data and is used to classify different types of samples from MFL signals. The final experimental results show that the VSI-ELM algorithm delivers faster classification and higher accuracy for different broken wires.
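The idea of growing the hidden layer with a variable step can be sketched as follows. The growth rule here (double the step while the error is far from the target, halve it as the target nears) is a guess for illustration, not necessarily the authors' exact update.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_elm(X, y, n_hidden):
    """Basic ELM: random input weights and biases, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    rmse = float(np.sqrt(np.mean((H @ beta - y) ** 2)))
    return (W, b, beta), rmse

def vsi_elm(X, y, target_rmse=0.05, step=2, max_hidden=200):
    """Incrementally enlarge the hidden layer; the step length itself is
    adapted from the current training error (hypothetical rule)."""
    n = step
    model, err = train_elm(X, y, n)
    while err > target_rmse and n + step <= max_hidden:
        step = step * 2 if err > 2 * target_rmse else max(step // 2, 1)
        n += step
        model, err = train_elm(X, y, n)
    return model, n, err
```

Retraining from scratch at each size keeps the sketch short; an incremental ELM would instead update the existing solution when nodes are added.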


2015 ◽  
Vol 2015 ◽  
pp. 1-9 ◽  
Author(s):  
Yanpeng Qu ◽  
Ansheng Deng

Many strategies have been exploited to reinforce the effectiveness and efficiency of the extreme learning machine (ELM), from both methodology and structure perspectives. By activating all the hidden nodes to different degrees, the local coupled extreme learning machine (LC-ELM) is capable of decoupling the link architecture between the input layer and the hidden layer in ELM. These activation degrees are jointly determined by the addresses and fuzzy membership functions assigned to the hidden nodes. To further refine the weight search space of LC-ELM, this paper proposes an optimised variant, the evolutionary local coupled extreme learning machine (ELC-ELM). The method uses the differential evolution (DE) algorithm to optimise the hidden-node addresses and the radiuses of the fuzzy membership functions until the qualified fitness or the maximum iteration step is reached. The efficacy of the presented work is verified through systematic simulation experiments on both regression and classification applications. Experimental results demonstrate that the proposed technique outperforms three ELM alternatives, namely the classical ELM, LC-ELM, and OSFuzzyELM, across a series of reliable performance measures.
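The evolutionary search at the core of ELC-ELM follows the standard DE/rand/1/bin loop. The sketch below is generic: the fitness function stands in for whatever validation error the method evaluates over candidate node addresses and radiuses, and the hyperparameters are illustrative defaults.

```python
import numpy as np

rng = np.random.default_rng(2)

def differential_evolution(fitness, dim, pop_size=20, F=0.5, CR=0.9,
                           iters=150, lo=-1.0, hi=1.0):
    """Minimal DE/rand/1/bin: mutation, binomial crossover, greedy selection."""
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([fitness(p) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
            cross = rng.random(dim) < CR                # binomial crossover mask
            cross[rng.integers(dim)] = True             # force at least one gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = fitness(trial)
            if f_trial < fit[i]:                        # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], float(fit[best])
```

In ELC-ELM the candidate vector would concatenate the hidden-node addresses and membership-function radiuses, and the loop would stop early once the qualified fitness is reached.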


2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Fei Gao ◽  
Jiangang Lv

A Single-Stage Extreme Learning Machine (SS-ELM) is presented in this paper for mechanical fault diagnosis. It changes the traditional mapping of the extreme learning machine (ELM): the eigenvectors extracted by signal-processing methods are regarded directly as the outputs of the network's hidden layer. This effectively avoids both the uncertainty introduced when training data are transformed from the input space to the ELM feature space by the ELM mapping, and the problem of selecting the number of hidden nodes. Experimental results on diesel engine fault diagnosis show the good performance of the SS-ELM algorithm.
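With the extracted feature vectors treated as the hidden-layer output matrix H, training reduces to one least-squares solve for the output weights. A minimal sketch, with a small ridge term added for numerical stability (an assumption, not stated in the abstract):

```python
import numpy as np

def ss_elm_fit(features, targets, reg=1e-6):
    """SS-ELM sketch: extracted eigenvectors serve directly as the hidden
    matrix H, so training is a single regularized least-squares solve."""
    H = np.asarray(features, dtype=float)
    A = H.T @ H + reg * np.eye(H.shape[1])
    return np.linalg.solve(A, H.T @ np.asarray(targets, dtype=float))

def ss_elm_predict(features, beta):
    """Outputs are a linear map of the extracted features."""
    return np.asarray(features, dtype=float) @ beta
```

Because no random hidden mapping is applied, the result is deterministic for a given feature extractor, which is exactly what removes the uncertainty the abstract refers to.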


2020 ◽  
Vol 2020 ◽  
pp. 1-10 ◽  
Author(s):  
Qinwei Fan ◽  
Ting Liu

The extreme learning machine (ELM) was put forward for single-hidden-layer feedforward networks. Because of its powerful modeling ability and the little human intervention it requires, the ELM algorithm has been used widely in both regression and classification. However, to achieve the required accuracy it needs many more hidden nodes than conventional neural networks typically do. This paper considers a new efficient learning algorithm for ELM with smoothing L0 regularization. The algorithm updates the weights along the direction in which the overall squared error decreases most, and can therefore sparsify the network structure very efficiently. Numerical experiments show that ELM with smoothing L0 regularization uses fewer hidden nodes yet achieves better generalization performance than the original ELM and ELM with L1 regularization.
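A common smooth surrogate for the L0 norm is the Gaussian family sum(1 - exp(-beta^2 / (2 sigma^2))), which counts large weights but is differentiable at zero. The sketch below minimizes squared error plus this penalty by plain gradient descent; the paper's exact smoothing function and update rule may differ.

```python
import numpy as np

rng = np.random.default_rng(3)

def train_elm_smooth_l0(H, y, lam=1e-3, sigma=0.1, lr=1e-2, steps=3000):
    """Gradient descent on ||H beta - y||^2 / n plus a Gaussian-type smooth
    L0 penalty; small weights are driven to zero, large ones barely penalized."""
    beta = rng.normal(scale=0.1, size=H.shape[1])
    for _ in range(steps):
        grad = H.T @ (H @ beta - y) / len(y)                              # data term
        grad += lam * beta / sigma**2 * np.exp(-beta**2 / (2 * sigma**2)) # penalty
        beta -= lr * grad
    return beta
```

Note how the penalty gradient vanishes for |beta| much larger than sigma, so useful weights keep their magnitude while near-zero weights are pushed to exactly zero, which is what sparsifies the hidden layer.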


The Extreme Learning Machine (ELM) is an efficient and effective least-squares-based learning algorithm for classification and regression problems, built on the single-hidden-layer feed-forward neural network (SLFN). The literature shows that it converges quickly and generalizes well on moderate-sized datasets. However, computing the pseudoinverse is highly challenging when there are large numbers of hidden nodes, or large numbers of training instances, in complex pattern recognition problems. A few approaches, such as EM-ELM and DF-ELM, have been proposed in the literature to address this problem. In this paper, a new rank-based matrix decomposition of the hidden-layer matrix is introduced to achieve near-optimal training time and reduce the computational complexity for a large number of hidden nodes. The results show that its training time is nearly constant, close to the minimal training time and far from the worst-case training time of the DF-ELM algorithm, which recent literature has shown to be efficient.


2016 ◽  
Vol 25 (4) ◽  
pp. 555-566 ◽  
Author(s):  
Saif F. Mahmood ◽  
Mohammad H. Marhaban ◽  
Fakhrul Z. Rokhani ◽  
Khairulmizam Samsudin ◽  
Olasimbo Ayodeji Arigbabu

The Extreme Learning Machine provides very competitive performance relative to other classical predictive models for problems such as regression, clustering, and classification, and has the advantage of faster computation in both training and testing. However, one of the main challenges of an ELM is selecting the optimal number of hidden nodes. This paper presents a new approach to ELM node selection based on a 1-norm support vector machine (SVM). In this method, the SVM targets yi ∈ {+1, –1} are derived using the mean or median of the ELM training errors as a threshold for separating the training data, which are projected into the SVM dimensions. We present an integrated architecture that exploits the sparseness of the SVM solution to prune inactive hidden nodes from the ELM. Several experiments are conducted on real-world benchmark datasets, and the results attest to the efficiency of the proposed method.


Author(s):  
Yuan Xu ◽  
Liang-Liang Ye ◽  
Qun-Xiong Zhu

In this paper, a new fault-recognition approach combining a dynamic recurrent online sequential extreme learning machine (DROS-ELM) with differential-vector kernel principal component analysis (DV-KPCA) is proposed to reconstruct process features and detect process faults in real-time nonlinear systems. To this end, the differential vector combined with KPCA is first proposed to reduce the dimension of the process data and enlarge the feature differences. In DV-KPCA, the differential vector is the difference between an input sample and the common sample, which is obtained from historical data and represents the common invariant properties of the class. The optimal feature vectors of the input sample and the common sample are obtained by applying the KPCA procedure to the difference vectors. Through the differential operation between the input vectors and the common vectors, the reconstructed feature is derived by computing the two-norm distance of the result. The reconstructed features are then used to detect process faults that may occur. To enhance the accuracy of fault recognition, a new DROS-ELM is developed by adding a self-feedback unit from the output of the hidden layer back to its input to record sequential information; the output weight of the feedback layer is updated dynamically according to the rate of change of the hidden-layer output. DV-KPCA feature reconstruction is exemplified on the UCI handwriting dataset ("Pen-Based Recognition of Handwritten Digits," Department of Computer Engineering, Bogazici University, Istanbul 80815, Turkey, 1998), where classification accuracy is clearly improved. Meanwhile, DROS-ELM is tested for process prediction on sunspot data from 1700 to 1987 and also shows better prediction accuracy than common methods.
Finally, the joint DROS-ELM with DV-KPCA method is demonstrated on the complicated Tennessee Eastman (TE) benchmark process. The results show that DROS-ELM with DV-KPCA is superior not only in detection sensitivity and stability but also in timely fault recognition.
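The base learner underneath DROS-ELM is the standard online sequential ELM, which updates its output weights chunk by chunk with recursive least squares. A plain OS-ELM sketch is shown below; the paper's contribution, the hidden-layer self-feedback unit with a dynamically updated feedback weight, is omitted here.

```python
import numpy as np

rng = np.random.default_rng(4)

class OSELM:
    """Online sequential ELM trained chunk by chunk (RLS update)."""
    def __init__(self, n_in, n_hidden, n_out):
        self.W = rng.normal(size=(n_in, n_hidden))   # fixed random input weights
        self.b = rng.normal(size=n_hidden)
        self.P = np.eye(n_hidden) * 1e3              # inverse covariance estimate
        self.beta = np.zeros((n_hidden, n_out))

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def partial_fit(self, X, T):
        H = self._hidden(X)
        # RLS update: P <- P - P H^T (I + H P H^T)^{-1} H P
        K = self.P @ H.T @ np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P -= K @ H @ self.P
        self.beta += self.P @ H.T @ (T - H @ self.beta)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

DROS-ELM would additionally feed the previous hidden-layer output back into the hidden layer at each step, so that sequential information influences the current activation.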


2021 ◽  
pp. 107482
Author(s):  
Carlos Perales-González ◽  
Francisco Fernández-Navarro ◽  
Javier Pérez-Rodríguez ◽  
Mariano Carbonero-Ruz

2014 ◽  
Vol 989-994 ◽  
pp. 3679-3682 ◽  
Author(s):  
Meng Meng Ma ◽  
Bo He

The extreme learning machine (ELM), a relatively novel machine learning algorithm for single-hidden-layer feed-forward neural networks (SLFNs), has shown competitive performance thanks to its simple structure and superior training speed. To improve the effectiveness of ELM on noisy datasets, a deep-structured ELM, DS-ELM for short, is proposed in this paper. DS-ELM contains three network levels: the first is an auto-associative neural network (AANN) trained to filter out noise and, when necessary, reduce dimensionality; the second is another AANN used to fix the input weights and biases of the ELM; and the last level is the ELM itself. Experiments on four noisy datasets examine the proposed DS-ELM algorithm, and the results show that DS-ELM outperforms ELM when dealing with noisy data.
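The three-level layout can be sketched with ELM-style auto-associative nets standing in for the AANN levels (the paper trains true AANNs; the random-encoder/least-squares-decoder shortcut here is an assumption for brevity).

```python
import numpy as np

rng = np.random.default_rng(5)

def elm_autoencoder(X, n_hidden):
    """ELM-style auto-associative net: random encoder, least-squares decoder."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    D, *_ = np.linalg.lstsq(H, X, rcond=None)   # decoder reconstructs X
    return W, b, D

def ds_elm_fit(X, y, n1=20, n3=60):
    """Three-level sketch of DS-ELM: level 1 denoises the input via an
    auto-associative reconstruction, level 2 supplies the ELM's fixed
    input weights, level 3 is a standard ELM least-squares solve."""
    W1, b1, D1 = elm_autoencoder(X, n1)
    X_hat = np.tanh(X @ W1 + b1) @ D1            # level 1: denoised input
    W2, b2, _ = elm_autoencoder(X_hat, n3)       # level 2: fixes ELM input weights
    H = np.tanh(X_hat @ W2 + b2)                 # level 3: ELM hidden layer
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)

    def predict(Xn):
        Xh = np.tanh(Xn @ W1 + b1) @ D1
        return np.tanh(Xh @ W2 + b2) @ beta
    return predict
```

The key design point survives the simplification: noise is attenuated before the ELM ever sees the data, and the ELM's input weights come from a net fitted to the denoised inputs rather than being drawn blindly at random.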

