ENERGY RELAXATION FOR HOPFIELD NETWORK WITH THE NEW LEARNING RULE

Author(s):  
Saratha Sathasivam ◽  
Abdul Halim Hakim ◽  
Pandian Vasant ◽  
Nader Barsoum


2020 ◽  
Vol 34 (02) ◽  
pp. 1316-1323 ◽  
Author(s):  
Zuozhu Liu ◽  
Thiparat Chotibut ◽  
Christopher Hillar ◽  
Shaowei Lin

Motivated by the celebrated discrete-time model of nervous activity outlined by McCulloch and Pitts in 1943, we propose a novel continuous-time model, the McCulloch-Pitts network (MPN), for sequence learning in spiking neural networks. Our model has a local learning rule, such that synaptic weight updates depend only on information directly accessible to the synapse. By exploiting asymmetry in the connections between binary neurons, we show that the MPN can be trained to robustly memorize multiple spatiotemporal patterns of binary vectors, generalizing the ability of the symmetric Hopfield network to memorize static spatial patterns. In addition, we demonstrate that the model can efficiently learn sequences of binary pictures as well as generative models for experimental neural spike-train data. Our learning rule is consistent with spike-timing-dependent plasticity (STDP), thus providing theoretical grounding for the systematic design of biologically inspired networks with large and robust long-range sequence storage capacity.
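The asymmetric storage idea described above can be illustrated with a minimal discrete-time sketch (not the paper's continuous-time MPN): weights built from an asymmetric Hebbian-style outer-product rule map each binary pattern onto its successor, so the network dynamics replay the stored sequence. The network size, sequence length, and synchronous-update choice here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5  # neurons, sequence length (illustrative sizes)

# Random +/-1 spatiotemporal patterns, to be memorized as a cycle
X = rng.choice([-1.0, 1.0], size=(P, N))

# Asymmetric Hebbian-style rule: W maps pattern mu onto pattern mu+1.
# Each weight update uses only pre- and postsynaptic activity (local).
W = np.zeros((N, N))
for mu in range(P):
    W += np.outer(X[(mu + 1) % P], X[mu]) / N

# Synchronous binary dynamics: x <- sign(W x)
x = X[0].copy()
recalled = []
for _ in range(P):
    x = np.sign(W @ x)
    recalled.append(x.copy())

# Starting from the first pattern, the network replays the whole sequence
ok = all(np.array_equal(recalled[t], X[(t + 1) % P]) for t in range(P))
print(ok)
```

With a symmetric weight matrix this construction reduces to the classic Hopfield attractor network; it is precisely the asymmetry that turns fixed points into a stored trajectory.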


2018 ◽  
Vol 41 (5) ◽  
pp. 1447-1457 ◽  
Author(s):  
Zeinab Aslipour ◽  
Alireza Yazdizadeh

The Damavand tokamak is a small-sized research machine for fusion-related studies. This paper is motivated by the need for an accurate nonlinear subspace model that can be used for controller design. The system is identified with a newly introduced Fractional Order Dynamic Neural Network (FODNN) optimized by evolutionary computation. The proposed method, owing to its rich structure, is well suited to modeling the complicated behavior of the plasma and its instabilities. A Lyapunov-like analysis is used to derive a stable new learning rule for updating the FODNN weights. To find the optimal value of the network's fractional order, Particle Swarm Optimization (PSO) is employed. The performance of the proposed identifier is verified on experimental data, and the results are compared with those of an integer-order dynamic neural network identifier. The results show that the identification error is bounded and vanishes as time tends to infinity. Furthermore, the comparison shows that the proposed FODNN achieves higher accuracy than the integer-order dynamic neural network.
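The PSO step described above optimizes a single scalar, the fractional order. A minimal sketch of that idea follows; the objective function is a hypothetical quadratic surrogate for the identification error (the real objective would require simulating the FODNN against plasma data), and the swarm size, bounds, and coefficients are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical surrogate: pretend the identification error is minimized
# at fractional order q = 0.7; the true objective is not this simple.
def identification_error(q):
    return (q - 0.7) ** 2

# Minimal particle swarm over the scalar fractional order q in (0, 2)
n_particles, n_iter = 20, 100
pos = rng.uniform(0.01, 1.99, n_particles)   # particle positions
vel = np.zeros(n_particles)                  # particle velocities
pbest = pos.copy()                           # personal bests
pbest_val = identification_error(pbest)
gbest = pbest[np.argmin(pbest_val)]          # global best

w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
for _ in range(n_iter):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.01, 1.99)
    val = identification_error(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print(gbest)  # converges near the surrogate optimum 0.7
```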


2006 ◽  
Vol 18 (6) ◽  
pp. 1380-1412 ◽  
Author(s):  
Bernd Porr ◽  
Florentin Wörgötter

Currently, all important low-level, unsupervised network learning algorithms follow Hebb's paradigm, in which input and output activity are correlated to change the connection strength of a synapse. As a consequence, however, classical Hebbian learning always carries a potentially destabilizing autocorrelation term, because every input is reflected, in weighted form, in the neuron's output. This self-correlation can lead to positive feedback, where increasing weights increase the output and vice versa, which may result in divergence. Divergence can be avoided by strategies such as weight normalization or weight saturation, which, however, introduce problems of their own. Consequently, in most cases high learning rates cannot be used for Hebbian learning, leading to relatively slow convergence. Here we introduce a novel correlation-based learning rule that is related to our isotropic sequence order (ISO) learning rule (Porr & Wörgötter, 2003a) but replaces the derivative of the output in the learning rule with the derivative of the reflex input. Hence, the new rule uses input correlations only, effectively implementing strict heterosynaptic learning. This looks like a minor modification but leads to dramatically improved properties. Eliminating the output from the learning rule removes the unwanted, destabilizing autocorrelation term, allowing us to use high learning rates. As a consequence, we can show mathematically that the theoretical optimum of one-shot learning can be reached under ideal conditions with the new rule. This result is then tested against four different experimental setups, and we show that in all of them very few (and sometimes only one) learning experiences are needed to achieve the learning goal. As a consequence, the new learning rule is up to 100 times faster and in general more stable than ISO learning.
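The input-correlation rule described above can be sketched in a few lines: the weight of a predictive input changes with the temporal derivative of the reflex input, so the output (and with it the destabilizing autocorrelation term) never enters the update. The Gaussian pulse shapes, timing, and learning rate are illustrative assumptions, not the paper's experimental settings.

```python
import numpy as np

dt, T = 0.01, 10.0
t = np.arange(0.0, T, dt)

def pulse(t, t0, width=0.2):
    # smooth Gaussian activity pulse centered at t0
    return np.exp(-0.5 * ((t - t0) / width) ** 2)

# Predictive input x1 fires 0.5 s *before* the reflex input x0
x1 = pulse(t, 3.0)
x0 = pulse(t, 3.5)

mu, w1 = 0.5, 0.0                 # learning rate, plastic weight
dx0 = np.gradient(x0, dt)         # derivative of the reflex input
for k in range(len(t)):
    # heterosynaptic update: inputs only, the output never appears
    w1 += mu * x1[k] * dx0[k] * dt

print(w1 > 0)  # predictive pairing strengthens the synapse
```

Because x1 overlaps mainly with the rising flank of x0, the integral of x1 times dx0/dt is positive and the weight grows; reversing the temporal order would weaken it, as expected for a sequence-order rule.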


2018 ◽  
Author(s):  
James M. Murray

A longstanding challenge for computational neuroscience has been the development of biologically plausible learning rules for recurrent neural networks (RNNs) that enable the production and processing of time-dependent signals such as those that might drive movement or facilitate working memory. Classic gradient-based algorithms for training RNNs have been available for decades, but they are inconsistent with known biological features of the brain, such as causality and locality. In this work we derive an approximation to gradient-based learning that comports with these biologically motivated constraints. Specifically, the online learning rule for the synaptic weights involves only local information about the pre- and postsynaptic activities, in addition to a random feedback projection of the RNN output error. Beyond providing mathematical arguments for the effectiveness of the new learning rule, we show through simulations that it can be used to train an RNN to successfully perform a variety of tasks. Finally, to overcome the difficulty of training an RNN over a very large number of time steps, we propose an augmented circuit architecture that allows the RNN to concatenate short-duration patterns into sequences of longer duration.
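The ingredients named above, a per-synapse eligibility trace built from local pre- and postsynaptic quantities plus a fixed random feedback projection of the output error, can be sketched as follows. The network size, time constant, task, and learning rates are illustrative assumptions, not the paper's settings, and the eligibility trace here is a simplified variant.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, tau = 30, 200, 5.0          # units, steps per epoch, time constant
lr_out, lr_rec = 0.05, 0.001      # readout and recurrent learning rates

steps = np.arange(T)
x = np.cos(2 * np.pi * steps / 100)       # input drive
target = np.sin(2 * np.pi * steps / 100)  # desired time-dependent output

W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))  # recurrent weights (learned)
w_in = rng.normal(0, 1.0, N)                 # input weights (fixed)
w_out = np.zeros(N)                          # readout (learned)
B = rng.normal(0, 1.0, N)                    # fixed random feedback vector

def run_epoch(learn):
    global W, w_out
    h = np.zeros(N)
    p = np.zeros((N, N))   # per-synapse eligibility trace
    loss = 0.0
    for t in range(T):
        u = W @ h + w_in * x[t]
        h = (1 - 1 / tau) * h + (1 / tau) * np.tanh(u)
        e = target[t] - w_out @ h            # scalar output error
        loss += e * e
        # Eligibility: filtered product of postsynaptic gain and
        # presynaptic activity -- information local to each synapse.
        p = (1 - 1 / tau) * p + (1 / tau) * np.outer(1 - np.tanh(u) ** 2, h)
        if learn:
            w_out += lr_out * e * h              # delta rule at the readout
            W += lr_rec * (B * e)[:, None] * p   # random feedback, not W_out^T
    return loss / T

loss_before = run_epoch(learn=False)
for _ in range(30):
    run_epoch(learn=True)
loss_after = run_epoch(learn=False)
print(loss_after < loss_before)
```

The key biological point is in the last update line: the error reaches each recurrent synapse through a fixed random vector B rather than through the transpose of the readout weights, so no weight transport is required.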


2017 ◽  
Vol 25 (2) ◽  
pp. 44-59 ◽  
Author(s):  
Peter Krcah

Many animals are able to modify their morphology during their lifetime in response to changes in the environment. Such modifications are often adaptive: they can improve an individual's chances of survival and reproduction. In this paper we explore the effects of such morphological plasticity on the body–brain coevolution of virtual creatures. We propose a method in which morphological plasticity is achieved through learning during an individual's lifetime, allowing each individual to quickly adapt its morphology to the current environment. We show that the resulting plasticity enables the evolution of creatures better adapted to different simulated environments. We also show that evolution combined with the new learning rule reduces the total computational cost required to evolve an individual with a given target fitness, compared to evolution without learning.


1995 ◽  
Vol 06 (04) ◽  
pp. 425-433
Author(s):  
TAKAMASA KOSHIZEN ◽  
JOHN FULCHER

Classical optimal control methods, notably Pontryagin's Maximum (Minimum) Principle (PMP), can be employed, together with Hamiltonians, to determine optimal system weights in artificial neural dynamical systems. A new learning rule based on weight equations derived using PMP is shown to be suitable for both discrete- and continuous-time systems and, moreover, can also be applied to feedback networks. Preliminary testing shows that this PMP learning rule compares favorably with Standard BackPropagation (SBP) on the XOR problem.
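The PMP-derived weight equations are not reproduced in this listing, but the SBP baseline the authors compare against is easy to sketch: a small feedforward network trained by gradient descent on the XOR problem. The layer sizes, activations, learning rate, and iteration count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# XOR truth table
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# 2-8-1 network: tanh hidden layer, sigmoid output
W1 = rng.normal(0, 1.0, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 1.0, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (mean-squared error), full-batch gradient descent
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(float)
print(np.array_equal(preds, y))
```

In the discrete-time case, the backward pass above is exactly what PMP's costate (adjoint) equations produce when the Hamiltonian is minimized with respect to the weights, which is why the XOR problem makes a natural point of comparison.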

