Perceptron learning for reuse prediction

Author(s): Elvira Teran, Zhe Wang, Daniel A. Jimenez

2020, pp. 251-264
Author(s): David Brady, Demetri Psaltis

2015, Vol 9
Author(s): Hesham Mostafa, Ali Khiat, Alexander Serb, Christian G. Mayr, Giacomo Indiveri, ...

1992, Vol 4 (6), pp. 946-957
Author(s): Marcus Frean

The thermal perceptron is a simple extension to Rosenblatt's perceptron learning rule for training individual linear threshold units. It finds stable weights for nonseparable problems as well as separable ones. Experiments indicate that if a good initial setting for a temperature parameter, T0, has been found, then the thermal perceptron outperforms the Pocket algorithm and methods based on gradient descent. The learning rule stabilizes the weights (learns) over a fixed training period. For separable problems it finds separating weights much more quickly than the usual rules.
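Based only on the description above, a minimal Python sketch of such a thermal rule might look like the following. The linear annealing schedule, the bias handling, and the function name are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def thermal_perceptron(X, y, T0=1.0, epochs=100, seed=0):
    """Train one linear threshold unit with a thermal perceptron rule.

    X: (n, d) inputs; y: (n,) labels in {-1, +1}.
    Misclassified examples update the weights as in Rosenblatt's rule,
    damped by exp(-|phi|/T), where phi is the net input; the temperature
    T is annealed from T0 toward 0 over the fixed training period, so
    updates fade out and the weights stabilize.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])   # absorb the bias into the inputs
    w = np.zeros(d + 1)
    for epoch in range(epochs):
        T = T0 * (1.0 - epoch / epochs)    # assumed linear annealing schedule
        for i in rng.permutation(n):
            phi = w @ Xb[i]
            if y[i] * phi <= 0:            # misclassified example
                w += y[i] * Xb[i] * np.exp(-abs(phi) / max(T, 1e-12))
    return w
```

Because T falls toward zero by the end of the training period, late updates shrink to almost nothing, which is what lets the rule settle on stable weights even when the data are not linearly separable.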


1993, Vol 2 (4), pp. 385-387
Author(s): Martin Anthony, John Shawe-Taylor

The perceptron learning algorithm quite naturally yields an algorithm for finding a linearly separable boolean function consistent with a sample of such a function. Using the idea of a specifying sample, we give a simple proof that, in general, this algorithm is not efficient.
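For concreteness, here is a sketch of the consistency algorithm the abstract has in mind: run perceptron updates until the hypothesis agrees with every labelled example. The function name and the update budget are assumptions added for illustration.

```python
import numpy as np

def perceptron_consistent(X, y, max_updates=1_000_000):
    """Find a halfspace consistent with a linearly separable sample.

    X: (n, d) boolean points; y: (n,) labels in {-1, +1}.
    Repeatedly applies the perceptron update to a misclassified example
    until none remain. The update budget is only an illustrative guard:
    termination is guaranteed for separable data, but the number of
    updates can be very large.
    """
    n, d = X.shape
    Xb = np.hstack([X.astype(float), np.ones((n, 1))])  # absorb threshold
    w = np.zeros(d + 1)
    for _ in range(max_updates):
        wrong = [i for i in range(n) if y[i] * (w @ Xb[i]) <= 0]
        if not wrong:
            return w          # hypothesis is consistent with the sample
        i = wrong[0]
        w += y[i] * Xb[i]     # standard perceptron update
    return None               # budget exhausted
```

The classical convergence theorem bounds the number of updates by roughly (R/γ)², where γ is the sample's best achievable margin; some linearly separable boolean samples force a margin exponentially small in the number of variables, which gives the flavour of the inefficiency the paper proves via a specifying sample.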


1997, Vol 31 (SI), pp. 67-73
Author(s): Hwee Tou Ng, Wei Boon Goh, Kok Leong Low

1992, Vol 03 (01), pp. 83-101
Author(s): D. Saad

The Minimal Trajectory (MINT) algorithm for training recurrent neural networks with a stable end point is based on an algorithmic search for the system's representations in the neighbourhood of the minimal trajectory connecting the input-output representations. These representations appear to be the most probable set for solving the global perceptron problem posed by the common weight matrix, which connects the representations of successive time steps in a recurrent discrete neural network. The search for a proper set of system representations is aided by representation modification rules similar to those presented in our former paper [1], which support contributing hidden and non-end-point representations while suppressing non-contributing ones. Similar representation modification rules, based on modification of the internal representations, were used in other training methods for feed-forward networks [2-4]. A feed-forward version of the MINT algorithm will be presented in another paper [5]. Once a proper set of system representations is chosen, the weight matrix is modified accordingly via the Perceptron Learning Rule (PLR) to obtain the proper input-output relation. Computer simulations carried out for the restricted cases of parity and teacher-net problems show rapid convergence of the algorithm in comparison with existing algorithms, together with modest memory requirements.
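The representation search is the substance of the paper and is not reproduced here. The sketch below, under assumed bipolar (+/-1) units and hypothetical names, shows only the final stage described above: fitting the common weight matrix with the PLR so that each chosen representation maps to the next one, treating each unit as an independent perceptron.

```python
import numpy as np

def fit_weights_plr(states, max_epochs=500):
    """Fit a common weight matrix with the Perceptron Learning Rule.

    states: (T+1, N) array of +/-1 representations along the chosen
    trajectory. Seeks one matrix W with sign(W @ s_t) = s_{t+1} at
    every time step; each row of W is a perceptron trained with the
    standard rule on the step-to-step transition pairs.
    """
    steps, N = states.shape
    W = np.zeros((N, N))
    for _ in range(max_epochs):
        stable = True
        for t in range(steps - 1):
            s, target = states[t], states[t + 1]
            wrong = target * (W @ s) <= 0               # units with bad sign
            if wrong.any():
                stable = False
                W[wrong] += np.outer(target[wrong], s)  # PLR row updates
        if stable:
            return W      # all transitions reproduced exactly
    return W              # best effort within the epoch budget
```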

