weight learning algorithm
Recently Published Documents


TOTAL DOCUMENTS: 3 (five years: 0)

H-INDEX: 2 (five years: 0)

2010 ◽  Vol. 24 (12) ◽  pp. 1217–1227 ◽  Author(s): Choon Ki Ahn

In this letter, we propose a new weight learning algorithm, called an H∞ learning law (HLL), for recurrent neural networks with time-delay. Based on Lyapunov–Krasovskii stability theory, the HLL not only guarantees asymptotic stability but also attenuates the effect of external disturbance to within an H∞ norm constraint. An existence condition for the HLL is given in terms of a linear matrix inequality (LMI). An illustrative example demonstrates the effectiveness of the proposed HLL.
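The abstract states the existence condition as an LMI. As a hedged illustration only (the paper's exact inequality is not given here), the classical delay-independent Lyapunov–Krasovskii condition for a linear delayed system x'(t) = A x(t) + A_d x(t − τ) asks for P, Q ≻ 0 such that the block matrix [[AᵀP + PA + Q, P A_d], [A_dᵀ P, −Q]] is negative definite; a candidate pair can be checked numerically:

```python
import numpy as np

# Hypothetical system matrices (illustrative, not from the paper):
# a stable linear delayed system x'(t) = A x(t) + A_d x(t - tau).
A  = np.array([[-2.0, 0.0], [0.0, -3.0]])
Ad = np.array([[0.1, 0.0], [0.0, 0.1]])

# Candidate Lyapunov-Krasovskii certificates, chosen by hand here;
# in general they would come from an SDP/LMI solver.
P = np.eye(2)
Q = np.eye(2)

# Delay-independent stability LMI (classical form):
# [[A'P + PA + Q, P Ad], [Ad' P, -Q]] must be negative definite.
M = np.block([[A.T @ P + P @ A + Q, P @ Ad],
              [Ad.T @ P,            -Q]])

eigs = np.linalg.eigvalsh(M)        # M is symmetric by construction
feasible = eigs.max() < 0.0
print("max eigenvalue:", eigs.max(), "-> LMI feasible:", feasible)
```

In practice the unknowns P and Q are decision variables and feasibility is established by a semidefinite-programming solver rather than by hand, which is what makes the LMI formulation of the existence condition computationally attractive.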


1995 ◽  Vol. 117 (3) ◽  pp. 411–415 ◽  Author(s): C. James Li, Taehee Kim

A fully automatic feedforward neural network structural and weight learning algorithm is described. Augmentation by Training with Residuals (ATR) requires from the user neither initial weight values nor the number of neurons in the hidden layer. The algorithm takes an incremental approach: a hidden neuron is trained to model the mapping between the input and output of the current exemplars and is then added to the existing network. The exemplars are then made orthogonal to the newly identified hidden neuron and used to train the next hidden neuron. This process continues until the desired accuracy is reached. The new structural and weight learning algorithm is applied to the identification of a two-degree-of-freedom planar robot, a Van der Pol oscillator, and the Mackey–Glass equation. The algorithm is shown to model all three systems effectively and is far superior to a linear modeling scheme in the case of the robot.
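The incremental idea described above can be sketched in a few lines of numpy. This is a minimal illustration of training one hidden neuron at a time on the residual left by the existing network, not the authors' implementation; the function names, the gradient-descent inner loop, and the toy target are all assumptions made for the example:

```python
import numpy as np

def fit_hidden_neuron(X, r, epochs=2000, lr=0.1, seed=0):
    """Fit one tanh neuron to the current residual r by gradient descent."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ w + b)
        g = (h - r) * (1.0 - h ** 2)       # chain rule through tanh
        w -= lr * (X.T @ g) / len(r)
        b -= lr * g.mean()
    return w, b

def atr_train(X, y, max_neurons=5, tol=1e-3):
    """Grow the hidden layer neuron by neuron, each trained on the
    residual of the exemplars (illustrative sketch of the ATR idea)."""
    neurons, r = [], y.astype(float).copy()
    for _ in range(max_neurons):
        w, b = fit_hidden_neuron(X, r)
        h = np.tanh(X @ w + b)
        c = (h @ r) / (h @ h)              # least-squares output weight
        neurons.append((w, b, c))
        r = r - c * h                      # residual now orthogonal to h
        if np.sqrt(np.mean(r ** 2)) < tol:
            break
    return neurons

def predict(neurons, X):
    return sum(c * np.tanh(X @ w + b) for w, b, c in neurons)

# Toy demo: identify a 1-D nonlinear map.
X = np.linspace(-2.0, 2.0, 50).reshape(-1, 1)
y = np.sin(X[:, 0])
net = atr_train(X, y)
mse = np.mean((predict(net, X) - y) ** 2)
```

Because the output weight c is the least-squares coefficient of h against r, subtracting c·h leaves a residual orthogonal to the new neuron's output, which is the orthogonalization step the abstract refers to; the network stops growing once the residual falls below the accuracy target.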

