Modified STDP Triplet Rule Significantly Increases Neuron Training Stability in the Learning of Spatial Patterns

2016 ◽  
Vol 2016 ◽  
pp. 1-12 ◽  
Author(s):  
Dalius Krunglevicius

Spike-timing-dependent plasticity (STDP) is a set of Hebbian learning rules based firmly on biological evidence. STDP learning is capable of detecting spatiotemporal patterns highly obscured by noise, a feature that is attractive from the point of view of machine learning. In this paper, three different additive STDP models of spike interaction were compared with respect to training performance when the neuron is exposed to a recurrent spatial pattern injected into Poisson noise. The STDP models compared were all-to-all interaction, nearest-neighbor interaction, and nearest-neighbor triplet interaction. The parameters of the neuron model and the STDP training rules were optimized for a range of spatial patterns of different sizes by means of a heuristic algorithm. The size of the pattern, that is, the number of synapses containing the pattern, was gradually decreased from what amounted to a relatively easy task down to a single synapse, and optimization was performed for each pattern size, with the parameters allowed to evolve freely. In most cases the triplet rule performed far better than the other two rules; notably, the evolutionary algorithm immediately switched the polarity of the triplet update. The all-to-all rule achieved moderate results.
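The three update rules compared above can be sketched as follows. This is a minimal illustration of the standard additive pair-based and Pfister–Gerstner-style triplet potentiation terms, not the optimized parameters from the paper; all constants are illustrative.

```python
import math

# Illustrative time constants and amplitudes (assumed, not the paper's
# evolved values): pair traces and the slow triplet trace of past
# postsynaptic spikes.
TAU_PLUS, TAU_MINUS, TAU_Y = 16.8, 33.7, 114.0   # ms
A2_PLUS, A2_MINUS, A3_PLUS = 0.005, 0.005, 0.006

def pair_ltp(dt):
    """Potentiation for a pre->post interval dt = t_post - t_pre > 0."""
    return A2_PLUS * math.exp(-dt / TAU_PLUS)

def pair_ltd(dt):
    """Depression for a post->pre interval dt = t_pre - t_post > 0."""
    return -A2_MINUS * math.exp(-dt / TAU_MINUS)

def triplet_ltp(dt_pre_post, dt_post_post):
    """Triplet potentiation: the pair term is boosted by a slow trace of
    the PREVIOUS postsynaptic spike (interval dt_post_post > 0)."""
    pair = math.exp(-dt_pre_post / TAU_PLUS)
    boost = A3_PLUS * math.exp(-dt_post_post / TAU_Y)
    return pair * (A2_PLUS + boost)

# At high postsynaptic rates the triplet boost dominates potentiation:
print(pair_ltp(10.0))            # pair rule, pre 10 ms before post
print(triplet_ltp(10.0, 20.0))   # same pairing, previous post 20 ms earlier
```

In the nearest-neighbor variants only the most recent pre/post spikes enter these formulas; in the all-to-all variant the contributions are summed over all past spikes.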

2012 ◽  
Vol 588-589 ◽  
pp. 1547-1551 ◽  
Author(s):  
Xiu Qing Wang ◽  
Zeng Guang Hou ◽  
Min Tan ◽  
Yong Ji Wang ◽  
Fei Xie

This paper focuses on the third generation of neural networks, spiking neural networks (SNNs); a novel spiking neuron model, the probabilistic spiking neuron model (pSNM); and their applications. The pSNM is applied to mobile robot behavior control, and a novel pSNM-based wall-following controller for mobile robots is proposed. In the pSNM controller, spiking time-delayed coding is used for the sensory neurons of the input layer, and the pSNM is used for the motor neurons of the output layer. Thorpe and Hebbian learning rules are used in the controller. Experimental results show that the controller can successfully drive a mobile robot to follow a wall both clockwise and counterclockwise. The structure of the controller is simple, and it can learn online.
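The "spiking time-delayed coding" used for the sensory layer is a latency code: a stronger sensor reading fires earlier within a coding window. A minimal sketch, assuming a linear mapping and a 10 ms window (both assumptions for illustration; the paper's exact encoding may differ):

```python
# Latency (time-to-first-spike) coding sketch. T_WINDOW and the linear
# mapping are assumptions, not taken from the paper.
T_WINDOW = 10.0  # ms coding window (illustrative)

def latency_code(x, x_max):
    """Map a sensor reading x in [0, x_max] to a first-spike time:
    maximal input -> spike at t = 0; zero input -> no spike (None)."""
    if x <= 0:
        return None
    return T_WINDOW * (1.0 - x / x_max)

# A near wall (large range-sensor reading) produces an early spike:
readings = [0.0, 0.3, 0.9]
print([latency_code(r, 1.0) for r in readings])
```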


2019 ◽  
Vol 34 (33) ◽  
pp. 1950224
Author(s):  
J. D. García-Aguilar ◽  
Juan Carlos Gómez-Izquierdo

From the point of view of mass textures, we present a comparative study of the [Formula: see text] flavor symmetry in the left–right symmetric model (LRSM) and the baryon-minus-lepton model (BLM), taking into account their predictions for the CKM mixing matrix. To do this, we revisit the quark mass matrices already studied in the literature and show that, under certain strong assumptions, there are predictive scenarios in the LRSM and BLM in which the modified Fritzsch and nearest-neighbor interaction (NNI) textures, respectively, drive the quark mixings. As the main result, the CKM mixing matrix is in good agreement with the latest experimental data in the flavored BLM.
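For orientation, the two texture forms named above are commonly written as follows. These are the standard forms from the texture-zero literature; the precise parametrization used by the authors may differ.

```latex
% NNI texture: nonzero entries only in nearest-neighbor positions
M_{\text{NNI}} =
\begin{pmatrix}
0 & A & 0 \\
A' & 0 & B \\
0 & B' & C
\end{pmatrix},
\qquad
% Modified Fritzsch texture: a nonzero (2,2) entry is allowed
M_{\text{mF}} =
\begin{pmatrix}
0 & A & 0 \\
A^{*} & D & B \\
0 & B^{*} & C
\end{pmatrix}.
```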


2015 ◽  
Vol 27 (8) ◽  
pp. 1673-1685 ◽  
Author(s):  
Dalius Krunglevicius

Spike-timing-dependent plasticity (STDP) is a set of Hebbian learning rules firmly based on biological evidence. It has been demonstrated that one of the STDP learning rules is suited for learning spatiotemporal patterns, and when multiple neurons are organized in a simple competitive spiking neural network, the network is capable of learning multiple distinct patterns. If patterns overlap significantly (i.e., patterns are mutually inclusive), however, competition would not preclude a trained neuron from responding to a new pattern and adjusting its synaptic weights accordingly. This letter presents a simple neural network that combines vertical inhibition with a Euclidean distance-dependent synaptic strength factor. This approach helps to solve the problem of pattern-size-dependent parameter optimality and significantly reduces the probability of a neuron forgetting an already learned pattern. For demonstration purposes, the network was trained on the first ten letters of the Braille alphabet.
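The distance-dependent strength factor can be pictured as follows: each output neuron has a preferred position, and the effective weight of a synapse is attenuated with the Euclidean distance between the input's position and that center. The Gaussian form and the width SIGMA are assumptions for illustration; the letter only specifies that the factor depends on Euclidean distance.

```python
import math

SIGMA = 2.0  # width of the attenuation profile (assumed)

def strength_factor(input_pos, neuron_center):
    """Gaussian attenuation by squared Euclidean distance (assumed form)."""
    d2 = sum((a - b) ** 2 for a, b in zip(input_pos, neuron_center))
    return math.exp(-d2 / (2 * SIGMA ** 2))

def effective_weight(w, input_pos, neuron_center):
    return w * strength_factor(input_pos, neuron_center)

# A synapse near the neuron's center keeps most of its weight, while a
# distant one is attenuated, which limits cross-pattern interference.
print(effective_weight(1.0, (0, 0), (0, 0)))
print(effective_weight(1.0, (4, 0), (0, 0)))
```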


1997 ◽  
Vol 08 (03) ◽  
pp. 301-315 ◽  
Author(s):  
Marcel J. Nijman ◽  
Hilbert J. Kappen

A Radial Basis Boltzmann Machine (RBBM) is a specialized Boltzmann Machine architecture that combines feed-forward mapping with probability estimation in the input space, and for which very efficient learning rules exist. The hidden representation of the network displays symmetry breaking as a function of the noise in the dynamics, so generalization can be studied as a function of the noise in the neuron dynamics instead of the number of hidden units. We show that the RBBM can be seen as an elegant alternative to k-nearest neighbor, achieving comparable performance without the need to store all data, and that it has good classification performance compared to the multilayer perceptron (MLP). The main advantage of the RBBM is that, simultaneously with the input–output mapping, a model of the input space is obtained that can be used for learning with missing values. We derive learning rules for the case of incomplete data and show that they perform better on incomplete data than the traditional learning rules applied to a 'repaired' data set.
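Why does a model of the input space help with missing values? For density models built from axis-aligned (diagonal-covariance) Gaussian components, marginalizing out a missing feature amounts to simply dropping that feature's factor from each component likelihood. The sketch below illustrates this generic property; it is not the RBBM learning rule itself.

```python
import math

def component_loglik(x, mean, var):
    """Log-likelihood of x under one diagonal Gaussian component.
    A None entry marks a missing feature, which is marginalized out
    by skipping its factor."""
    ll = 0.0
    for xi, mu, v in zip(x, mean, var):
        if xi is None:   # missing value: integrate out this dimension
            continue
        ll += -0.5 * (math.log(2 * math.pi * v) + (xi - mu) ** 2 / v)
    return ll

mean, var = [0.0, 1.0], [1.0, 1.0]
full = component_loglik([0.0, 1.0], mean, var)      # both features seen
partial = component_loglik([0.0, None], mean, var)  # second feature missing
print(full, partial)
```

With symmetric unit-variance components centered on the data, the partial likelihood is exactly the full one with the missing dimension's term removed, so no imputation ("repair") step is needed.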


F1000Research ◽  
2017 ◽  
Vol 6 ◽  
pp. 1222 ◽  
Author(s):  
Gabriele Scheler

In this paper, we present data for the lognormal distributions of spike rates, synaptic weights, and intrinsic excitability (gain) for neurons in various brain areas, such as auditory or visual cortex, hippocampus, cerebellum, striatum, and midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights, and gains in all brain areas examined. The difference between strongly recurrent and feed-forward connectivity (cortex vs. striatum and cerebellum), neurotransmitter (GABA in striatum vs. glutamate in cortex), or the level of activation (low in cortex, high in Purkinje cells and midbrain nuclei) turns out to be irrelevant to this feature. Logarithmic-scale distribution of weights and gains appears to be a general, functional property in all cases analyzed. We then created a generic neural model to investigate adaptive learning rules that create and maintain lognormal distributions. We conclusively demonstrate that not only weights but also intrinsic gains need strong Hebbian learning in order to produce and maintain the experimentally attested distributions. This provides a solution to the long-standing question about the type of plasticity exhibited by intrinsic excitability.
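The connection between Hebbian (multiplicative) updates and lognormal distributions can be caricatured in a few lines: if each update changes a weight by a random proportional factor, the log-weight performs a random walk and the weights become lognormal by the central limit theorem. This toy random walk is purely illustrative and is not the authors' generic neural model.

```python
import math
import random
import statistics

random.seed(0)

# Multiplicative (log-space) updates with random Hebbian sign: each step
# multiplies the weight by exp(eta * noise), so log(w) accumulates
# independent increments and the weights end up lognormally distributed.
N, STEPS, ETA = 2000, 200, 0.05
weights = [1.0] * N
for _ in range(STEPS):
    for i in range(N):
        weights[i] *= math.exp(ETA * random.gauss(0.0, 1.0))

logs = [math.log(w) for w in weights]
print(statistics.mean(logs), statistics.stdev(logs))  # roughly 0 and 0.7
```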


Sensors ◽  
2020 ◽  
Vol 20 (2) ◽  
pp. 500 ◽  
Author(s):  
Sergey A. Lobov ◽  
Andrey V. Chernyshov ◽  
Nadia P. Krilova ◽  
Maxim O. Shamshin ◽  
Victor B. Kazantsev

One of the modern trends in the design of human–machine interfaces (HMI) is to involve so-called spiking neural networks (SNNs) in signal processing. SNNs can be trained by simple and efficient biologically inspired algorithms. In particular, we have shown that sensory neurons in the input layer of an SNN can simultaneously encode the input signal both in the spiking frequency rate and in the latency of spike generation. In the case of such mixed temporal-rate coding, the SNN should implement learning that works properly for both types of coding. Based on this, we investigate how a single neuron can be trained with pure rate and temporal patterns, and then build a universal SNN that is trained using mixed coding. In particular, we study Hebbian and competitive learning in SNNs in the context of temporal and rate coding problems. We show that Hebbian learning through pair-based and triplet-based spike-timing-dependent plasticity (STDP) rules is feasible for temporal coding, but not for rate coding. Synaptic competition that depresses poorly used synapses is required to ensure neural selectivity in rate coding. This kind of competition can be implemented by a so-called forgetting function that depends on neuron activity. We show that coherent use of the triplet-based STDP rule and synaptic competition with the forgetting function is sufficient for rate coding. Next, we propose an SNN capable of classifying electromyographic (EMG) patterns using an unsupervised learning procedure. Neuron competition achieved via lateral inhibition ensures the "winner takes all" principle among classifier neurons, and the SNN provides a gradual output response dependent on muscular contraction strength. Furthermore, we modify the SNN to implement a supervised learning method based on stimulation of the target classifier neuron synchronously with the network input. In a problem of discriminating three EMG patterns, the SNN with supervised learning shows a median accuracy of 99.5%, close to the result demonstrated by a multilayer perceptron trained by error backpropagation.
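The synaptic competition described above can be sketched as an STDP term minus an activity-scaled forgetting term: synapses that keep being potentiated settle at a high weight, while rarely used synapses of an active neuron decay. The specific forgetting function below is an assumption; only its dependence on postsynaptic activity is taken from the text.

```python
# Sketch of STDP plus an activity-dependent forgetting function. The
# linear-in-weight forgetting term and its strength are assumptions.
TAU_FORGET = 0.1  # forgetting strength per unit of postsynaptic activity

def update(w, stdp_dw, post_activity):
    """One weight update: STDP term minus activity-scaled forgetting."""
    w = w + stdp_dw - TAU_FORGET * post_activity * w
    return max(0.0, min(1.0, w))  # clip to [0, 1]

# A frequently potentiated synapse stays strong; an unused synapse of the
# same active neuron decays, which yields rate-coding selectivity.
w_used, w_unused = 0.5, 0.5
for _ in range(50):
    w_used = update(w_used, stdp_dw=0.08, post_activity=1.0)
    w_unused = update(w_unused, stdp_dw=0.0, post_activity=1.0)
print(w_used, w_unused)
```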


1999 ◽  
Vol 16 (6) ◽  
pp. 434-436
Author(s):  
Yun-zhong Lai ◽  
Ai-zhen Zhang ◽  
Zhan-ning Hu ◽  
Jiu-qing Liang ◽  
Fu-ke Pu (Pu Fu-cho)
