MEMORY RETRIEVAL IN OPTIMAL SUBSPACES

1992 ◽  
Vol 03 (supp01) ◽  
pp. 71-77 ◽  
Author(s):  
G. Boffetta ◽  
R. Monasson ◽  
R. Zecchina

A simple dynamical scheme for attractor neural networks with non-monotonic three-state effective neurons is discussed. For the unsupervised Hebb learning rule, we present basic numerical results, which we interpret in terms of a combinatorial task realized by the dynamical process (the dynamical selection of optimal subspaces). An analytical estimate of optimal performance is obtained from two different simplified versions of the model. We show that replica symmetry breaking is required, since the replica-symmetric solutions are unstable.
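The scheme is straightforward to prototype. Below is a minimal sketch (ours, not the authors' code) of a Hopfield-type network with Hebbian couplings and a non-monotonic three-state response: a neuron outputs ±1 for moderate local fields but is silenced when the field exceeds a cutoff. The cutoff theta, the sizes, and the parallel update schedule are illustrative assumptions.

```python
# Minimal sketch of retrieval with non-monotonic three-state neurons.
import numpy as np

rng = np.random.default_rng(0)
N, P, theta = 200, 10, 1.2          # neurons, stored patterns, field cutoff (assumed)

xi = rng.choice([-1, 1], size=(P, N))          # random binary patterns
J = (xi.T @ xi).astype(float) / N              # Hebb rule: J_ij = (1/N) sum_mu xi_i xi_j
np.fill_diagonal(J, 0.0)                       # no self-couplings

def g(h, theta):
    """Non-monotonic three-state response: +/-1 for moderate |h|, 0 for large |h|."""
    s = np.sign(h)
    s[np.abs(h) > theta] = 0
    return s

# Start from a noisy version of pattern 0 and iterate the parallel dynamics.
s = xi[0] * rng.choice([1, -1], size=N, p=[0.9, 0.1])
for _ in range(50):
    s = g(J @ s, theta)

# Neurons frozen at 0 at the fixed point are one concrete reading of the
# "optimal subspace" selected by the dynamics.
print("overlap m =", xi[0] @ s / N, "| active fraction =", np.mean(s != 0))
```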

1996 ◽  
Vol 07 (01) ◽  
pp. 19-32 ◽  
Author(s):  
A. Lamura ◽  
C. Marangi ◽  
G. Nardulli

In this paper we analyze replica symmetry breaking in attractor neural networks with a non-monotone activation function. We study the non-monotone version of the Edinburgh model, which allows the domains of attraction to be controlled through the stability parameter K, and we compute the storage capacity at one step of replica symmetry breaking and, for the strongly diluted model, the domains of attraction of the stable fixed points.
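The stability parameter K can be made concrete with a short sketch (our illustration, with arbitrary sizes and plain Hebbian couplings rather than the optimized Edinburgh/Gardner ones): a pattern is stored with stability K when every aligned, normalized local field is at least K, and larger K enforces larger domains of attraction.

```python
# Measuring pattern stabilities; the construction in the paper *optimizes*
# the couplings so that min Delta >= K, here we only evaluate Delta.
import numpy as np

rng = np.random.default_rng(1)
N, P, K = 100, 5, 0.5        # sizes and stability threshold (illustrative)

xi = rng.choice([-1, 1], size=(P, N))
J = (xi.T @ xi).astype(float) / N      # plain Hebb couplings, for illustration
np.fill_diagonal(J, 0.0)

# Stability of site i on pattern mu:
#   Delta_i^mu = xi_i^mu * (sum_j J_ij xi_j^mu) / ||J_i||.
row_norm = np.linalg.norm(J, axis=1)
deltas = xi * (xi @ J.T) / row_norm
print("min stability:", deltas.min(), "| all >= K:", bool((deltas >= K).all()))
```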


2020 ◽  
Vol 53 (41) ◽  
pp. 415005 ◽  
Author(s):  
Elena Agliari ◽  
Linda Albanese ◽  
Adriano Barra ◽  
Gabriele Ottaviani

Author(s):  
Elena Agliari ◽  
Linda Albanese ◽  
Francesco Alemanno ◽  
Alberto Fachechi

We consider a multi-layer Sherrington-Kirkpatrick spin glass as a model for deep restricted Boltzmann machines with quenched random weights and solve for its free energy in the thermodynamic limit by means of Guerra's interpolating techniques under the replica-symmetric (RS) and one-step replica-symmetry-breaking (1RSB) ansätze. In particular, we recover the expression already known for the replica-symmetric case. Further, we drop the restriction constraint by introducing intra-layer connections among spins and show that the resulting system can be mapped into a modular Hopfield network, which is also addressed via the same techniques up to the first step of replica symmetry breaking.
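The final mapping rests on a known correspondence between restricted Boltzmann machines and Hopfield networks, which is easy to verify numerically. The sketch below (our own check, assuming Hebbian weights and Gaussian hidden units) confirms that integrating the hidden layer out of the RBM reproduces the Hopfield energy up to a constant.

```python
# Numeric check of the RBM <-> Hopfield correspondence.
import numpy as np

rng = np.random.default_rng(2)
N, P = 12, 3
xi = rng.choice([-1, 1], size=(P, N))   # Hebbian (pattern) weights, an assumption
s = rng.choice([-1, 1], size=N)         # arbitrary visible configuration

# Overlaps rescaled as in the Hopfield model: m_mu = (xi^mu . s) / sqrt(N).
m = xi @ s / np.sqrt(N)

# Integrating out a Gaussian hidden unit z_mu coupled as z_mu * m_mu gives
# -log \int dz exp(-z^2/2 + z m) = -m^2/2 (dropping the constant).
E_rbm_effective = -0.5 * np.sum(m ** 2)
E_hopfield = -0.5 * np.sum((xi @ s) ** 2) / N   # standard Hopfield energy
print(np.isclose(E_rbm_effective, E_hopfield))  # True: the two energies agree
```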


1995 ◽  
Vol 28 (24) ◽  
pp. 7105-7111 ◽  
Author(s):  
W. Whyte ◽  
D. Sherrington ◽  
K. Y. M. Wong

2006 ◽  
Vol 17 (10) ◽  
pp. 1501-1520 ◽  
Author(s):  
Frank Emmert-Streib

In this article we introduce a novel stochastic Hebb-like learning rule for neural networks that is neurobiologically motivated. This learning rule combines features of unsupervised (Hebbian) and supervised (reinforcement) learning and is stochastic with respect to the selection of the time points at which a synapse is modified. Moreover, the learning rule affects not only the synapse between the pre- and postsynaptic neurons, which is called homosynaptic plasticity, but also more remote synapses of the pre- and postsynaptic neurons. This more complex form of synaptic plasticity has recently come under investigation in neurobiology and is called heterosynaptic plasticity. We demonstrate that this learning rule is useful for training neural networks by learning parity functions, including the exclusive-or (XOR) mapping, in a multilayer feed-forward network. We find that our stochastic learning rule works well even in the presence of noise. Importantly, the mean learning time increases only polynomially with the number of patterns to be learned, indicating efficient learning.
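The abstract does not spell out the rule, so the following sketch is only a stand-in, not the author's algorithm: a stochastic, reward-gated update that shares two ingredients of the description (a randomly selected synapse and time point for each modification, and a reinforcement signal derived from the task error), applied to XOR in a small feed-forward network. The heterosynaptic spread of plasticity is omitted and all parameters are assumptions.

```python
# Stand-in for a stochastic, reward-gated learning rule on the XOR task.
import numpy as np

rng = np.random.default_rng(3)
# XOR task; the last input column is a constant bias feed (an assumption).
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

W1 = rng.normal(0, 1, (3, 2))   # input (+bias) -> 2 hidden units
W2 = rng.normal(0, 1, 3)        # hidden (+bias) -> output

def forward(W1, W2):
    h = np.tanh(X @ W1)                                     # hidden activities
    hb = np.concatenate([h, np.ones((len(X), 1))], axis=1)  # append bias unit
    return 1 / (1 + np.exp(-(hb @ W2)))                     # sigmoid output

def error(W1, W2):
    return np.mean((forward(W1, W2) - y) ** 2)

err = error(W1, W2)
for _ in range(20000):
    W = W1 if rng.integers(2) == 0 else W2         # stochastic choice of layer...
    idx = tuple(rng.integers(d) for d in W.shape)  # ...and of a single synapse
    old = W[idx]
    W[idx] += rng.normal(0, 0.3)          # trial modification at this time point
    new_err = error(W1, W2)
    if new_err <= err:                    # reinforcement: keep improvements,
        err = new_err
    else:                                 # otherwise undo the change
        W[idx] = old

print("final MSE:", round(float(err), 4), "| outputs:", np.round(forward(W1, W2), 2))
```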

