Doubly stochastic Poisson processes in artificial neural learning

1998 ◽  
Vol 9 (1) ◽  
pp. 229-231 ◽  
Author(s):  
H.C. Card

2012 ◽  
Vol 57 (7) ◽  
pp. 1843-1848
Author(s):  
Rosa María Fernandez-Alcala ◽  
Jesús Navarro-Moreno ◽  
Juan Carlos Ruiz-Molina

Author(s):  
Darryl Charles ◽  
Colin Fyfe ◽  
Daniel Livingstone ◽  
Stephen McGlinchey

In this chapter we will look at supervised learning in more detail, beginning with one of the simplest (and earliest) supervised neural learning algorithms – the Delta Rule. The objectives of this chapter are to provide a solid grounding in the theory and practice of problem solving with artificial neural networks – and an appreciation of some of the challenges and practicalities involved in their use.
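As an illustrative sketch only (not code from the chapter), the Delta Rule for a single linear unit updates each weight in proportion to the output error, w ← w + η(t − y)x. A minimal NumPy version, with hypothetical data and parameter names, might look like:

```python
import numpy as np

def delta_rule_train(X, t, eta=0.1, epochs=100):
    """Train a single linear unit with the Delta (Widrow-Hoff) rule.

    X : array of input patterns, one per row (first column can be a bias of 1)
    t : array of target outputs
    eta, epochs : illustrative hyperparameters, not values from the chapter
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, ti in zip(X, t):
            y = np.dot(w, xi)            # linear output of the unit
            w += eta * (ti - y) * xi     # delta-rule weight update
    return w

# Example: learn the linear mapping t = 2*x + 1 (bias in first column)
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
t = np.array([1.0, 3.0, 5.0, 7.0])
w = delta_rule_train(X, t)               # w approaches [1, 2]
```

Because the targets here are an exact linear function of the inputs, the per-pattern updates drive the error toward zero and the weights converge to the underlying coefficients.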


Author(s):  
Lluís A. Belanche Muñoz

The view of artificial neural networks as adaptive systems has led to the development of ad hoc generic procedures known as learning rules. The first of these was the Perceptron Rule (Rosenblatt, 1962), useful for single-layer feed-forward networks and linearly separable problems. Its simplicity and beauty, together with the existence of a convergence theorem, made it a basic departure point for neural learning algorithms. This algorithm is a particular case of the Widrow-Hoff or delta rule (Widrow & Hoff, 1960), applicable to continuous networks with no hidden layers, where the error function is quadratic in the parameters.
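As a hedged sketch of the Perceptron Rule described above (the data and function names are illustrative, not from the chapter): a hard-threshold unit updates its weights only on misclassified patterns, and the convergence theorem guarantees it halts on linearly separable data such as logical AND.

```python
import numpy as np

def perceptron_train(X, t, eta=1.0, max_epochs=50):
    """Rosenblatt's perceptron rule for a single hard-threshold unit.

    X : input patterns, one per row (first column is a bias of 1)
    t : binary targets (0 or 1)
    """
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for xi, ti in zip(X, t):
            y = 1 if np.dot(w, xi) >= 0 else 0   # hard threshold output
            if y != ti:
                w += eta * (ti - y) * xi          # update only on mistakes
                errors += 1
        if errors == 0:      # convergence theorem: a full error-free pass
            break            # means the rule has found a separating plane
    return w

# Example: the linearly separable AND problem (bias in first column)
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]])
t = np.array([0, 0, 0, 1])
w = perceptron_train(X, t)
preds = [1 if np.dot(w, xi) >= 0 else 0 for xi in X]   # matches t
```

Note the contrast with the delta rule: the perceptron update uses the thresholded output, so weights change only on classification mistakes, whereas the delta rule uses the continuous output and adjusts weights on every pattern.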

