Fractional-Order Extreme Learning Machine With Mittag-Leffler Distribution

Author(s):  
Haoyu Niu ◽  
Yuquan Chen ◽  
YangQuan Chen

Abstract Extreme Learning Machine (ELM) has a powerful capability to approximate regression and classification problems on large amounts of data. Because ELM does not need to learn the parameters of its hidden neurons, it can train thousands of times faster than conventional popular learning algorithms. Since the hidden-layer parameters are randomly generated, a natural question arises: what is the optimal randomness? The Lévy distribution, a heavy-tailed distribution, has been shown to be the optimal randomness for finding targets in an unknown environment. Thus, the Lévy distribution has been used to generate the hidden-layer parameters (making it more likely to reach the optimal parameters), and better computational results were derived. Since the Lévy distribution is a special case of the Mittag-Leffler distribution, in this paper the Mittag-Leffler distribution is used in order to obtain better performance. We show the procedure for generating Mittag-Leffler random numbers and then give the training algorithm that uses the Mittag-Leffler distribution. The experimental results show that the Mittag-Leffler distribution performs similarly to the Lévy distribution, and both reach better performance than the conventional method. Detailed discussions are finally presented to explain the experimental results.
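A minimal sketch of the idea described above: Mittag-Leffler random numbers can be generated with the well-known inversion formula of Kozubowski (as used by Fulger, Scalas and Germano), and those draws can seed the hidden layer of a basic ELM. The sign-symmetrization of the draws and the parameter values here are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def mittag_leffler_rvs(beta, gamma, size, rng=None):
    """Mittag-Leffler random numbers via the inversion formula
    tau = -gamma * ln(u) * (sin(beta*pi)/tan(beta*pi*v) - cos(beta*pi))**(1/beta)
    with u, v ~ Uniform(0, 1); for beta = 1 this reduces to the
    exponential distribution with scale gamma."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=size)
    v = rng.uniform(size=size)
    return -gamma * np.log(u) * (np.sin(beta * np.pi) / np.tan(beta * np.pi * v)
                                 - np.cos(beta * np.pi)) ** (1.0 / beta)

def elm_train(X, T, n_hidden, beta=0.9, rng=0):
    """Basic ELM whose hidden weights are drawn from a (sign-symmetrized,
    assumed) Mittag-Leffler distribution; output weights in closed form."""
    r = np.random.default_rng(rng)
    n_features = X.shape[1]
    signs = r.choice([-1.0, 1.0], size=(n_features, n_hidden))
    W = signs * mittag_leffler_rvs(beta, 1.0, (n_features, n_hidden), r)
    b = r.uniform(-1, 1, n_hidden)
    H = np.tanh(X @ W + b)            # hidden-layer output matrix
    beta_out = np.linalg.pinv(H) @ T  # least-squares output weights
    return W, b, beta_out

def elm_predict(X, W, b, beta_out):
    return np.tanh(X @ W + b) @ beta_out
```

Only the output weights are computed, via a pseudoinverse; the hidden-layer draws are never trained, which is what makes ELM fast.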

2021 ◽  
Vol 11 (9) ◽  
pp. 3867
Author(s):  
Zhewei Liu ◽  
Zijia Zhang ◽  
Yaoming Cai ◽  
Yilin Miao ◽  
Zhikun Chen

Extreme Learning Machine (ELM) is characterized by simplicity, generalization ability, and computational efficiency. However, previous ELMs fail to consider the inherent high-order relationships among data points, leaving them ineffective on structured data and poorly robust to noisy data. This paper presents a novel semi-supervised ELM, termed Hypergraph Convolutional ELM (HGCELM), which uses hypergraph convolution to extend ELM into the non-Euclidean domain. The method inherits all the advantages of ELM and consists of a random hypergraph convolutional layer followed by a hypergraph convolutional regression layer, enabling it to model complex intraclass variations. We show that the traditional ELM is a special case of the HGCELM model in the regular Euclidean domain. Extensive experimental results show that HGCELM remarkably outperforms eight competitive methods on 26 classification benchmarks.
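The two-layer structure described above can be sketched as follows, assuming the standard HGNN-style hypergraph convolution operator Dv^(-1/2) H De^(-1) H^T Dv^(-1/2) X Θ with unit hyperedge weights; the function names and the use of a plain pseudoinverse for the regression layer are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def hypergraph_conv(X, H, Theta):
    """One hypergraph convolution step on incidence matrix H (n x m,
    H[i, e] = 1 if node i belongs to hyperedge e), with tanh activation."""
    Dv = H.sum(axis=1)                                  # node degrees
    De = H.sum(axis=0)                                  # hyperedge degrees
    Dv_is = np.diag(1.0 / np.sqrt(Dv))
    De_i = np.diag(1.0 / De)
    A = Dv_is @ H @ De_i @ H.T @ Dv_is                  # normalized adjacency
    return np.tanh(A @ X @ Theta)

def hgcelm_fit(X, H, T, n_hidden=64, rng=0):
    """HGCELM-style forward pass: a random hypergraph convolutional layer
    followed by a closed-form hypergraph regression layer (sketch)."""
    r = np.random.default_rng(rng)
    Theta1 = r.normal(size=(X.shape[1], n_hidden))      # random, never trained
    Z = hypergraph_conv(X, H, Theta1)
    Dv_is = np.diag(1.0 / np.sqrt(H.sum(axis=1)))
    De_i = np.diag(1.0 / H.sum(axis=0))
    A = Dv_is @ H @ De_i @ H.T @ Dv_is
    Theta2 = np.linalg.pinv(A @ Z) @ T                  # least-squares solve
    return A @ Z @ Theta2
```

Note that when every node forms its own singleton hyperedge, H = I and the operator collapses to tanh(XΘ), which is consistent with the abstract's claim that the traditional ELM is a Euclidean special case.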


2016 ◽  
Vol 2016 ◽  
pp. 1-9 ◽  
Author(s):  
Derya Avci ◽  
Akif Dogantekin

Parkinson disease is a major public health problem all around the world. This paper proposes an expert disease diagnosis system for Parkinson disease based on genetic algorithm- (GA-) wavelet kernel- (WK-) Extreme Learning Machines (ELM). The classifier used in this paper is a single-layer neural network (SLNN) trained by the ELM learning method. The Parkinson disease datasets are obtained from the UCI machine learning repository. In the wavelet kernel-Extreme Learning Machine (WK-ELM) structure, the wavelet kernel has three adjustable parameters. These parameters and the number of hidden neurons play a major role in the performance of ELM. In this study, the optimal values of these parameters and the number of hidden neurons were obtained using a genetic algorithm (GA). The performance of the proposed GA-WK-ELM method is evaluated using statistical measures such as classification accuracy, sensitivity and specificity analysis, and ROC curves. The highest classification accuracy achieved by the proposed GA-WK-ELM method is 96.81%.
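A minimal sketch of the kernel machinery involved, assuming a Morlet-type translation-invariant wavelet kernel with three adjustable parameters (a: dilation, b: Gaussian width, c: frequency) and the standard closed-form kernel-ELM solve; the exact kernel parameterization in the paper may differ, and the GA search over (a, b, c) and the regularization constant C is omitted.

```python
import numpy as np

def wavelet_kernel(X, Y, a=1.0, b=1.0, c=1.75):
    """Morlet-type wavelet kernel (assumed form):
    K(x, y) = prod_i cos(c * (x_i - y_i) / a) * exp(-(x_i - y_i)^2 / b)."""
    D = X[:, None, :] - Y[None, :, :]        # pairwise coordinate differences
    return np.prod(np.cos(c * D / a) * np.exp(-D ** 2 / b), axis=2)

def kernel_elm_fit(X, T, C=100.0, **kparams):
    """Kernel ELM output weights: solve (I/C + Omega) beta = T."""
    Omega = wavelet_kernel(X, X, **kparams)
    return np.linalg.solve(np.eye(len(X)) / C + Omega, T)

def kernel_elm_predict(Xnew, X, beta, **kparams):
    return wavelet_kernel(Xnew, X, **kparams) @ beta
```

In a GA-WK-ELM setup, a genetic algorithm would evaluate candidate (a, b, c, C) tuples by cross-validated classification accuracy and keep the fittest; only the inner closed-form solve is shown here.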


Symmetry ◽  
2019 ◽  
Vol 11 (10) ◽  
pp. 1284
Author(s):  
Licheng Cui ◽  
Huawei Zhai ◽  
Hongfei Lin

An extreme learning machine (ELM) is an innovative algorithm for single hidden layer feed-forward neural networks; essentially, it only has to find the optimal output weights that minimize the output error, via least-squares regression from the hidden layer to the output layer. Focusing on the output weights, we introduce an orthogonality constraint on the output weight matrix and propose a novel orthogonal extreme learning machine (NOELM) based on column-by-column optimization, whose main characteristic is that the optimization of the full output weight matrix is decomposed into optimizing its individual column vectors. The complex orthogonal Procrustes problem is thus transformed into a simple least-squares regression with an orthogonality constraint, which preserves more information from the ELM feature space in the output subspace and gives NOELM stronger regression and discrimination ability. Experiments show that NOELM outperforms ELM and OELM in training time, testing time, and accuracy.
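For context, the orthogonal Procrustes problem mentioned above has a classical closed-form solution when the output weight matrix is square (equal hidden and output dimensions): B = UV^T from the SVD of H^T T. The sketch below shows only that textbook solution; the rectangular, column-by-column scheme is NOELM's own contribution and is not reproduced here.

```python
import numpy as np

def orthogonal_output_weights(H, T):
    """Closed-form solution of  min ||H B - T||_F  s.t.  B^T B = I
    for square B: B = U V^T, where U S V^T = SVD(H^T T)."""
    U, _, Vt = np.linalg.svd(H.T @ T, full_matrices=False)
    return U @ Vt
```

Because ||HB||_F is constant over orthogonal B, minimizing the residual reduces to maximizing trace(B^T H^T T), which the SVD solves exactly.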


Author(s):  
Asım Balbay ◽  
Engin Avci ◽  
Ömer Şahin ◽  
Resul Coteli

Abstract Artificial neural networks (ANNs) have been widely used in the modeling of various systems. Training of ANNs is commonly performed by backpropagation with a gradient-based learning rule. However, it is well known that such a learning rule has several shortcomings, such as slow convergence and training failures. This paper proposes a modeling technique for the drying of bittim (Pistacia terebinthus) based on Extreme Learning Machine (ELM), which eliminates the disadvantages of gradient-based backpropagation. The samples for the ELM-based model are obtained from experimental studies, in which the sample mass loss rate as a function of time was investigated at different air velocities (0.5 and 1 m/s) and air temperatures (40, 60 and 80°C) in a designed dryer system. The samples obtained from the experiments are used for training and testing of the ELM. Further, parameters of the ELM such as the type of activation function and the number of hidden neurons are tuned to obtain the best possible modeling results. The prediction results show that an ELM with a tangent sigmoid activation function and 20 hidden neurons is the optimal topology, since it yields the maximum R2 and the minimum rms (0.0500) and cov (0.2256) values. Thus, it is concluded that ELM can be used as an effective modeling tool for the drying of bittim (Pistacia terebinthus) in a fixed bed dryer system.
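The topology selection described above (tangent sigmoid activation, hidden-neuron count chosen by the R2/rms metrics) can be sketched as below on a synthetic exponential-decay curve standing in for the drying data; the data, weight ranges, and helper names are illustrative assumptions only.

```python
import numpy as np

def elm_fit_predict(Xtr, ytr, Xte, n_hidden, rng=0):
    """Basic ELM with tangent-sigmoid hidden units (the activation the
    paper reports as optimal); output weights via pseudoinverse."""
    r = np.random.default_rng(rng)
    W = r.uniform(-1, 1, (Xtr.shape[1], n_hidden))
    b = r.uniform(-1, 1, n_hidden)
    beta = np.linalg.pinv(np.tanh(Xtr @ W + b)) @ ytr
    return np.tanh(Xte @ W + b) @ beta

def rms(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

def r_squared(y, yhat):
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)
```

A sweep over n_hidden, keeping the count with the smallest validation rms, mirrors how the 20-neuron topology would be selected.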


2020 ◽  
Vol 21 (1) ◽  
Author(s):  
Liang-Rui Ren ◽  
Ying-Lian Gao ◽  
Jin-Xing Liu ◽  
Junliang Shang ◽  
Chun-Hou Zheng

Abstract Background As a machine learning method with high performance and excellent generalization ability, the extreme learning machine (ELM) is gaining popularity in various studies, and various ELM-based methods have been proposed for different fields. However, robustness to noise and outliers has always been the main problem limiting the performance of ELM. Results In this paper, an integrated method named correntropy induced loss based sparse robust graph regularized extreme learning machine (CSRGELM) is proposed. The introduction of the correntropy induced loss improves the robustness of ELM and weakens the negative effects of noise and outliers. By using the L2,1-norm to constrain the output weight matrix, we tend to obtain a sparse output weight matrix and thus a simpler single hidden layer feedforward neural network model. By introducing graph regularization to preserve the local structural information of the data, the classification performance of the new method is further improved. In addition, we design an iterative optimization method based on the idea of half-quadratic optimization to solve the non-convex problem of CSRGELM. Conclusions The classification results on benchmark datasets show that CSRGELM obtains better classification results than other methods. More importantly, we also apply the new method to the classification of cancer samples and obtain good classification results.
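The half-quadratic idea behind such correntropy-based ELMs can be sketched as alternating between Gaussian sample weights w_i = exp(-||e_i||^2 / (2σ^2)) and a weighted ridge solve for the output weights. This simplified sketch deliberately omits CSRGELM's L2,1 sparsity and graph-regularization terms (a plain ridge penalty stands in for them), so it illustrates only the robustness mechanism, not the full method.

```python
import numpy as np

def correntropy_elm(H, T, sigma=1.0, lam=1e-3, n_iter=20):
    """Half-quadratic sketch: samples with large residuals get
    exponentially small weights, so outliers barely influence the
    weighted least-squares update of the output weights."""
    n = H.shape[0]
    beta = np.linalg.pinv(H) @ T                  # ordinary ELM initialization
    for _ in range(n_iter):
        E = (H @ beta - T).reshape(n, -1)
        r2 = np.sum(E ** 2, axis=1)               # squared residual per sample
        w = np.exp(-r2 / (2.0 * sigma ** 2))      # correntropy sample weights
        Hw = H * w[:, None]
        beta = np.linalg.solve(H.T @ Hw + lam * np.eye(H.shape[1]), Hw.T @ T)
    return beta
```

On data with gross outliers, this iteration typically recovers the clean solution that a plain least-squares ELM misses, which is the effect the abstract attributes to the correntropy induced loss.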


2018 ◽  
Vol 22 (11) ◽  
pp. 3487-3494 ◽  
Author(s):  
Weipeng Cao ◽  
Jinzhu Gao ◽  
Zhong Ming ◽  
Shubin Cai ◽  
Zhiguang Shan
