Predicting international roughness index by deep neural networks with Levenberg-Marquardt backpropagation learning algorithm

2021 ◽  
Author(s):  
Nicholas Fiorentini ◽  
Pietro Leandri ◽  
Massimo Losa

Author(s):  
Yun-Peng Liu ◽  
Ning Xu ◽  
Yu Zhang ◽  
Xin Geng

The performances of deep neural networks (DNNs) crucially rely on the quality of labeling. In some situations, labels are easily corrupted, and therefore some labels become noisy labels. Thus, designing algorithms that deal with noisy labels is of great importance for learning robust DNNs. However, it is difficult to distinguish between clean labels and noisy labels, which becomes the bottleneck of many methods. To address the problem, this paper proposes a novel method named Label Distribution based Confidence Estimation (LDCE). LDCE estimates the confidence of the observed labels based on label distribution. Then, the boundary between clean labels and noisy labels becomes clear according to confidence scores. To verify the effectiveness of the method, LDCE is combined with the existing learning algorithm to train robust DNNs. Experiments on both synthetic and real-world datasets substantiate the superiority of the proposed algorithm against state-of-the-art methods.
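The abstract does not give LDCE's exact formulation, but the general idea it describes, reading off a confidence score for each observed label from a predicted label distribution and thresholding to separate clean from noisy samples, can be sketched as follows. All function names and the threshold value are hypothetical, not taken from the paper.

```python
import numpy as np

def confidence_scores(pred_dist, observed_labels):
    """Confidence of each observed label: the probability mass that the
    predicted label distribution assigns to that label."""
    return pred_dist[np.arange(len(observed_labels)), observed_labels]

def split_clean_noisy(pred_dist, observed_labels, threshold=0.5):
    """Split sample indices into clean / noisy by confidence score."""
    conf = confidence_scores(pred_dist, observed_labels)
    clean = np.where(conf >= threshold)[0]
    noisy = np.where(conf < threshold)[0]
    return clean, noisy

# Toy example: 3 samples, 3 classes; the last two labels disagree
# with the predicted distribution and should be flagged as noisy.
pred = np.array([[0.8, 0.1, 0.1],
                 [0.2, 0.7, 0.1],
                 [0.3, 0.3, 0.4]])
labels = np.array([0, 2, 1])
clean, noisy = split_clean_noisy(pred, labels, threshold=0.5)
```

Once the split is available, the clean subset can be fed to any standard learning algorithm, which is how the paper combines LDCE with existing training procedures.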


2021 ◽  
pp. 1-29
Author(s):  
Shanshan Qin ◽  
Nayantara Mudur ◽  
Cengiz Pehlevan

We propose a novel biologically plausible solution to the credit assignment problem motivated by observations in the ventral visual pathway and trained deep neural networks. In both, representations of objects in the same category become progressively more similar, while objects belonging to different categories become less similar. We use this observation to motivate a layer-specific learning goal in a deep network: each layer aims to learn a representational similarity matrix that interpolates between previous and later layers. We formulate this idea using a contrastive similarity matching objective function and derive from it deep neural networks with feedforward, lateral, and feedback connections and neurons that exhibit biologically plausible Hebbian and anti-Hebbian plasticity. Contrastive similarity matching can be interpreted as an energy-based learning algorithm, but with significant differences from others in how a contrastive function is constructed.
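The layer-specific goal described above can be illustrated with a minimal sketch, assuming the representational similarity matrix is the Gram matrix of layer activations and the per-layer target interpolates linearly between the neighboring layers' matrices. The actual objective and its contrastive construction in the paper may differ; the interpolation weight `alpha` is illustrative.

```python
import numpy as np

def similarity_matrix(X):
    """Representational similarity (Gram) matrix of layer activations,
    with X of shape (samples, units)."""
    return X @ X.T

def layer_target(S_prev, S_next, alpha=0.5):
    """Layer target: interpolate between the similarity matrices of the
    previous and the later layer."""
    return (1 - alpha) * S_prev + alpha * S_next

def similarity_matching_loss(X_layer, S_prev, S_next, alpha=0.5):
    """Squared Frobenius distance between a layer's similarity matrix
    and its interpolated target."""
    S = similarity_matrix(X_layer)
    T = layer_target(S_prev, S_next, alpha)
    return np.sum((S - T) ** 2)
```

In the paper, minimizing an objective of this flavor in a contrastive form yields networks with lateral and feedback connections and Hebbian/anti-Hebbian plasticity; the sketch above only shows the similarity-matching core.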


Machine learning has become an integral part of our day-to-day life in recent years, and its ease of use has improved considerably over the past decade. There are various ways to make a model work on smaller devices. A modest method for adapting any machine learning algorithm to smaller devices is to provide the output of large, complex models as input to smaller models that can be easily deployed on mobile phones. We provide a framework in which the large model can also learn domain knowledge, integrated as first-order logic rules, and explicitly transfer that knowledge into the smaller model by training both models simultaneously. This can be achieved by transfer learning, where the knowledge learned by one model is used to teach the other. Domain knowledge integration is the most critical part here; it can be done using constraint principles, where the scope of the data is reduced according to the stated constraints. One of the best representations of domain knowledge is logic rules, in which knowledge is encoded as predicates. This framework provides a way to integrate human knowledge into deep neural networks that can be easily deployed on any device.
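The abstract does not specify its simultaneous-training objective, but the teacher-to-student transfer it builds on is typically expressed as a knowledge distillation loss: cross-entropy against the large model's softened outputs mixed with cross-entropy against the hard label. The sketch below shows that standard loss only (not the paper's rule-integration step); the temperature and mixing weight are illustrative.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, true_label,
                      T=2.0, alpha=0.7):
    """Weighted sum of cross-entropy with the teacher's softened output
    (knowledge transfer) and cross-entropy with the hard label."""
    p_teacher = softmax(teacher_logits, T)
    p_student_soft = softmax(student_logits, T)
    soft_ce = -np.sum(p_teacher * np.log(p_student_soft + 1e-12))
    p_student = softmax(student_logits)
    hard_ce = -np.log(p_student[true_label] + 1e-12)
    return alpha * soft_ce + (1 - alpha) * hard_ce
```

Minimizing this loss while also training the teacher is one way to realize the simultaneous-training scheme the abstract describes.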


2013 ◽  
Vol 341-342 ◽  
pp. 856-860
Author(s):  
Hao Ming Yang ◽  
Lan Qing Zhang

An experimental control platform for neural network decoupling control is constructed for a glass furnace fueled by heavy oil. For the dual control loop, an improved Levenberg-Marquardt learning algorithm is discussed in order to increase learning speed and meet real-time control requirements. The neural network decoupling control, implemented in C-Script on a PLC S7-400 hardware system under WinCC, achieves satisfactory control results.
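For reference, the standard Levenberg-Marquardt weight update (on which the improved variant above is presumably based; the paper's specific modifications are not given here) blends Gauss-Newton and gradient-descent steps via a damping factor. A minimal sketch:

```python
import numpy as np

def lm_step(J, e, w, mu=0.01):
    """One Levenberg-Marquardt update:
        w_new = w - (J^T J + mu I)^{-1} J^T e
    where J is the Jacobian of the residuals e with respect to w.
    Small mu approaches Gauss-Newton; large mu approaches gradient descent."""
    H = J.T @ J + mu * np.eye(len(w))
    return w - np.linalg.solve(H, J.T @ e)
```

In practice mu is adapted each iteration: decreased when a step reduces the error (trusting the Gauss-Newton direction) and increased otherwise, which is the usual source of LM's fast convergence on network training problems.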

