Transfer Extreme Learning Machine with Output Weight Alignment

2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Shaofei Zang ◽  
Yuhu Cheng ◽  
Xuesong Wang ◽  
Yongyi Yan

The Extreme Learning Machine (ELM), a fast and efficient neural network model for pattern recognition and machine learning, degrades when labeled training samples are insufficient. Transfer learning helps the target task learn a reliable model by using plentiful labeled samples from a different but related domain. In this paper, we propose a supervised Extreme Learning Machine with knowledge transferability, called Transfer Extreme Learning Machine with Output Weight Alignment (TELM-OWA). Firstly, it reduces the distribution difference between domains by aligning the output weight matrices of the ELMs trained on labeled samples from the source and target domains. Secondly, a term that draws the inter-domain ELM output weight matrices toward each other is added to the objective function to further realize cross-domain knowledge transfer. Thirdly, we cast the objective function as a least-squares problem and transform it into a standard ELM model that can be solved efficiently. Finally, the effectiveness of the proposed algorithm is verified by classification experiments on 16 image datasets and 6 text datasets; the results demonstrate the competitive performance of our method with respect to other ELM models and transfer learning approaches.
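The alignment idea in this abstract can be sketched as a regularized least-squares problem: target-domain output weights are pulled toward source-domain weights learned with the same random hidden layer. The following is a minimal illustrative sketch, not the authors' exact TELM-OWA objective; the function names and the single alignment regularizer `mu` are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden(X, W, b):
    """Random-feature hidden layer shared by both domains."""
    return np.tanh(X @ W + b)

def elm_weights(H, T, lam=1e-2):
    """Ridge ELM output weights: min ||H B - T||^2 + lam ||B||^2."""
    d = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(d), H.T @ T)

def aligned_weights(H_t, T_t, B_src, lam=1e-2, mu=1.0):
    """Target weights pulled toward the source weights:
    min ||H_t B - T_t||^2 + lam ||B||^2 + mu ||B - B_src||^2."""
    d = H_t.shape[1]
    A = H_t.T @ H_t + (lam + mu) * np.eye(d)
    return np.linalg.solve(A, H_t.T @ T_t + mu * B_src)

# Toy two-domain data: same labeling rule, shifted target inputs.
W, b = rng.normal(size=(5, 50)), rng.normal(size=50)
X_s = rng.normal(size=(200, 5)); y_s = (X_s[:, 0] > 0).astype(float)
X_t = rng.normal(size=(20, 5)) + 0.3; y_t = (X_t[:, 0] > 0).astype(float)

T_s = np.stack([1 - y_s, y_s], axis=1)  # one-hot targets
T_t = np.stack([1 - y_t, y_t], axis=1)

B_src = elm_weights(hidden(X_s, W, b), T_s)
B_tgt = aligned_weights(hidden(X_t, W, b), T_t, B_src)
pred = hidden(X_t, W, b) @ B_tgt
```

The closed form in `aligned_weights` follows directly from setting the gradient of the quadratic objective to zero; both domains must share the same random hidden-layer parameters `W, b` for the alignment to be meaningful.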

Author(s):  
Shu Jiang ◽  
Zuchao Li ◽  
Hai Zhao ◽  
Bao-Liang Lu ◽  
Rui Wang

In recent years, research on dependency parsing has focused on improving accuracy on domain-specific (in-domain) test datasets and has made remarkable progress. However, there are innumerable real-world scenarios not covered by such datasets, namely out-of-domain data. As a result, parsers that perform well on in-domain data usually suffer significant performance degradation on out-of-domain data. Therefore, to adapt existing high-performance in-domain parsers to a new domain, cross-domain transfer learning methods are essential. This paper examines two scenarios for cross-domain transfer learning: semi-supervised and unsupervised. Specifically, we adopt the pre-trained language model BERT for training on the source-domain (in-domain) data at the subword level and introduce self-training methods derived from tri-training for these two scenarios. Evaluation results on the NLPCC-2019 shared task and the universal dependency parsing task indicate the effectiveness of the adopted approaches for cross-domain transfer learning and show the potential of self-training for cross-lingual transfer learning.
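The self-training recipe described here (train on source data, pseudo-label target data, keep confident predictions, retrain) can be sketched generically. In the toy sketch below, a tiny nearest-centroid classifier stands in for the BERT-based parser, and the confidence threshold is an assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

class NearestCentroid:
    """Stand-in model: classify by distance to each class mean."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.stack([X[y == c].mean(axis=0) for c in self.classes])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None] - self.mu[None], axis=2)
        return self.classes[d.argmin(axis=1)], -d.min(axis=1)  # labels, confidences

def self_train(X_src, y_src, X_unlab, rounds=3, thresh=-1.5):
    """Iteratively add confident pseudo-labels from the unlabeled domain."""
    model = NearestCentroid().fit(X_src, y_src)
    for _ in range(rounds):
        pseudo, conf = model.predict(X_unlab)
        keep = conf > thresh            # keep only confident pseudo-labels
        if not keep.any():
            break
        X = np.vstack([X_src, X_unlab[keep]])
        y = np.concatenate([y_src, pseudo[keep]])
        model = NearestCentroid().fit(X, y)
    return model

# Source domain: two clusters; unlabeled "out-of-domain" data is shifted.
X0 = rng.normal(size=(100, 2)) + np.array([[2.0, 0.0]])
X1 = rng.normal(size=(100, 2)) + np.array([[-2.0, 0.0]])
X_src = np.vstack([X0, X1])
y_src = np.concatenate([np.zeros(100, dtype=int), np.ones(100, dtype=int)])
X_unlab = X_src + np.array([[0.0, 1.0]])  # same classes, shifted domain

model = self_train(X_src, y_src, X_unlab)
```

Tri-training, as used in the paper, extends this idea to three models that label data for one another; the single-model loop above shows only the core pseudo-labeling mechanism.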


2020 ◽  
Vol 5 (3) ◽  
pp. 4148-4155
Author(s):  
Dandan Zhang ◽  
Zicong Wu ◽  
Junhong Chen ◽  
Anzhu Gao ◽  
Xu Chen ◽  
...  

Symmetry ◽  
2019 ◽  
Vol 11 (10) ◽  
pp. 1284
Author(s):  
Licheng Cui ◽  
Huawei Zhai ◽  
Hongfei Lin

An extreme learning machine (ELM) is an algorithm for single-hidden-layer feed-forward neural networks that, in essence, only needs to find the optimal output weights minimizing the output error, via least-squares regression from the hidden layer to the output layer. Focusing on these output weights, we introduce an orthogonality constraint on the output weight matrix and propose a novel orthogonal extreme learning machine (NOELM) based on column-by-column optimization, whose main characteristic is that optimizing the full output weight matrix is decomposed into optimizing the matrix's individual column vectors. The complex orthogonal Procrustes problem is thus transformed into simple least-squares regression with an orthogonality constraint, which preserves more information when mapping from the ELM feature space to the output subspace and gives NOELM stronger regression and discrimination ability. Experiments show that NOELM outperforms ELM and OELM in training time, testing time, and accuracy.
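For context, the classical orthogonal Procrustes step that NOELM builds on has a textbook SVD solution: the semi-orthogonal matrix maximizing alignment with H^T T is the product of the SVD's outer factors. This sketch shows only that standard closed form, not the abstract's column-by-column scheme; the variable names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def elm_hidden(X, W, b):
    """Random ELM feature map."""
    return np.tanh(X @ W + b)

def orthogonal_output_weights(H, T):
    """Semi-orthogonal B maximizing trace(B^T H^T T) subject to B^T B = I,
    the classical Procrustes alignment solved via SVD of H^T T."""
    U, _, Vt = np.linalg.svd(H.T @ T, full_matrices=False)
    return U @ Vt

X = rng.normal(size=(80, 6))
T = np.stack([np.sin(X[:, 0]), np.cos(X[:, 1])], axis=1)  # two outputs
W, b = rng.normal(size=(6, 30)), rng.normal(size=30)
H = elm_hidden(X, W, b)
B = orthogonal_output_weights(H, T)  # shape (30, 2), orthonormal columns
```

The orthonormal columns of `B` are what "preserve more information" from feature space to output subspace: the map neither stretches nor collapses directions, unlike an unconstrained least-squares solution.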


2014 ◽  
Vol 548-549 ◽  
pp. 1735-1738 ◽  
Author(s):  
Jian Tang ◽  
Dong Yan ◽  
Li Jie Zhao

Modeling concrete compressive strength is useful for ensuring quality in civil engineering. This paper compares several extreme learning machine (ELM)-based modeling approaches for predicting concrete compressive strength. The standard ELM algorithm, the partial least squares-based ELM (PLS-ELM) algorithm, and the kernel ELM (KELM) algorithm are applied and evaluated. Results indicate that the standard ELM algorithm has the highest modeling speed, while KELM has the best prediction accuracy. Each method is validated for modeling concrete compressive strength; the appropriate modeling approach should be selected according to the purpose at hand.
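The KELM variant compared above replaces the random hidden layer with a kernel matrix, so training reduces to one regularized linear solve. A minimal regression sketch, with an RBF kernel and toy data standing in for concrete-mix features (the regularization constant `C` and kernel width `gamma` are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf(A, B, gamma=0.5):
    """RBF (Gaussian) kernel matrix between row sets A and B."""
    d2 = ((A[:, None] - B[None]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def kelm_fit(X, y, C=100.0, gamma=0.5):
    """Kernel ELM training: alpha = (K + I/C)^{-1} y."""
    K = rbf(X, X, gamma)
    return np.linalg.solve(K + np.eye(len(X)) / C, y)

def kelm_predict(X_train, alpha, X_new, gamma=0.5):
    """Prediction: f(x) = k(x, X_train) @ alpha."""
    return rbf(X_new, X_train, gamma) @ alpha

# Toy stand-in for mix-design features -> compressive strength.
X = rng.uniform(-2, 2, size=(60, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

alpha = kelm_fit(X, y)
pred = kelm_predict(X, alpha, X)
```

The speed/accuracy trade-off reported in the paper is visible in this form: standard ELM solves a small system in the hidden-layer dimension, while KELM solves an n-by-n system in the number of samples but avoids choosing the number of hidden nodes.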


Sensors ◽  
2020 ◽  
Vol 20 (13) ◽  
pp. 3643
Author(s):  
Haining Liu ◽  
Yuping Wu ◽  
Yingchang Cao ◽  
Wenjun Lv ◽  
Hongwei Han ◽  
...  

Recent years have witnessed the application of machine learning technologies to well logging-based lithology identification. Most existing work assumes that the well logs gathered from different wells share the same probability distribution; however, variations in sedimentary environment and well-logging technique can cause a data drift problem, i.e., data from different wells follow different probability distributions. Consequently, a model trained on old wells does not perform well when predicting lithologies in newly drilled wells, which motivates us to propose a transfer learning method, the data drift joint adaptation extreme learning machine (DDJA-ELM), to increase the accuracy of an old model applied to new wells. The method incorporates three key components into the extreme learning machine: the project mean maximum mean discrepancy, joint distribution domain adaptation, and manifold regularization. As found experimentally on multiple wells in the Jiyang Depression, Bohai Bay Basin, DDJA-ELM significantly increases the accuracy of an old model when identifying lithologies in new wells.
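The maximum mean discrepancy (MMD) underlying the adaptation terms above quantifies the distance between two domains' feature distributions. A minimal biased empirical estimator with an RBF kernel, on toy data standing in for well-log features (the kernel width and data dimensions are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf(A, B, gamma=1.0):
    """RBF kernel matrix between row sets A and B."""
    d2 = ((A[:, None] - B[None]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    """Biased empirical squared MMD between samples X and Y:
    E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]."""
    return (rbf(X, X, gamma).mean()
            + rbf(Y, Y, gamma).mean()
            - 2 * rbf(X, Y, gamma).mean())

old_well  = rng.normal(size=(200, 3))        # logs from an "old" well
same_dist = rng.normal(size=(200, 3))        # another sample, same distribution
new_well  = rng.normal(size=(200, 3)) + 1.5  # drifted "new" well logs

m_same  = mmd2(old_well, same_dist)
m_drift = mmd2(old_well, new_well)
```

A domain-adaptive ELM adds a term like `m_drift`, computed on hidden-layer features of the two domains, to its training objective so that minimizing it pulls the domains' feature distributions together.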


2020 ◽  
Vol 21 (1) ◽  
Author(s):  
Liang-Rui Ren ◽  
Ying-Lian Gao ◽  
Jin-Xing Liu ◽  
Junliang Shang ◽  
Chun-Hou Zheng

Abstract
Background: As a machine learning method with high performance and excellent generalization ability, the extreme learning machine (ELM) is gaining popularity in various studies, and many ELM-based methods for different fields have been proposed. However, robustness to noise and outliers remains the main problem affecting ELM's performance.
Results: In this paper, an integrated method named correntropy-induced-loss-based sparse robust graph regularized extreme learning machine (CSRGELM) is proposed. The correntropy-induced loss improves the robustness of ELM and weakens the negative effects of noise and outliers. By using the L2,1-norm to constrain the output weight matrix, we obtain a sparse output weight matrix and thus a simpler single-hidden-layer feedforward neural network model. Introducing graph regularization to preserve the local structural information of the data further improves classification performance. Besides, we design an iterative optimization method based on half-quadratic optimization to solve the non-convex CSRGELM problem.
Conclusions: Classification results on benchmark datasets show that CSRGELM obtains better results than other methods. More importantly, we also apply the new method to the classification of cancer samples and achieve a good classification effect.
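The half-quadratic trick referred to in this abstract alternates between computing Gaussian-kernel sample weights from the current residuals and solving a weighted least-squares problem, so outliers are progressively down-weighted. A minimal robust-regression sketch (linear features stand in for the ELM hidden layer; the names and the bandwidth `sigma` are assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

def correntropy_fit(H, t, sigma=1.0, lam=1e-3, iters=10):
    """Half-quadratic optimization of a correntropy-induced loss:
    alternate w_i = exp(-e_i^2 / (2 sigma^2)) with weighted ridge solves."""
    n, d = H.shape
    beta = np.zeros(d)
    for _ in range(iters):
        e = H @ beta - t
        w = np.exp(-e ** 2 / (2 * sigma ** 2))  # outliers get near-zero weight
        A = H.T @ (w[:, None] * H) + lam * np.eye(d)
        beta = np.linalg.solve(A, H.T @ (w * t))
    return beta

# A noisy line with gross outliers.
x = rng.uniform(-1, 1, 100)
H = np.stack([x, np.ones_like(x)], axis=1)   # slope and intercept features
t = 2.0 * x + 0.1 * rng.normal(size=100)
t[:10] += 20.0                               # ten gross outliers

beta_robust = correntropy_fit(H, t)
beta_ls = np.linalg.lstsq(H, t, rcond=None)[0]  # plain least squares, for contrast
```

Each reweighting step is the closed-form minimizer of the half-quadratic surrogate, which is why the loop converges without gradient tuning; the plain least-squares fit, by contrast, lets the ten outliers drag the intercept far from zero.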

