Generalized Memory of STDP-Driven Spiking Neural Network

Author(s):  
S.A. Lobov

We propose a memory model based on a spiking neural network with spike-timing-dependent plasticity (STDP). In the model, information is recorded using local external stimulation, and memory decoding is a functional response in the form of population bursts of action potentials synchronized with the applied stimuli. STDP-mediated weight rearrangements encode the localization of the applied stimulation, with the stimulation focus forming the source of a vector field of synaptic connections. Based on the characteristics of this field, we propose a measure of generalized network memory. With repeated stimulation, the time until synchronous activity occurs decreases. In this case, the resulting average learning curve and the dependence of the generalized memory on the stimulation number both follow a power law. We show that the maximum time to reach a functional response is determined by the generalized memory remaining from previous stimulations. Thus, the power-law learning curves arise from incomplete forgetting of previous influences. We also study the reliability of generalized network memory, defined as the storage time of memory traces after the termination of external stimulation. The reliability depends on the level of neural noise, and this dependence is also a power law. We found that hubs (neurons that can initiate the generation of population bursts in the absence of noise) play a key role in maintaining generalized network memory. Adding neural noise leads to random bursts initiated by non-hub neurons. This noise activity destroys memory traces and reduces the reliability of generalized network memory.
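For illustration, a minimal sketch of the pair-based asymmetric STDP update implied by the abstract; the amplitudes, time constants, and exponential window are standard textbook choices, not parameters taken from the paper:

```python
import numpy as np

# Pair-based asymmetric STDP window; all constants are textbook-style
# assumptions, not values from the paper.
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # exponential time constants, ms

def stdp_dw(dt_ms):
    """Weight change for a spike pair with dt_ms = t_post - t_pre."""
    if dt_ms >= 0:
        return A_PLUS * np.exp(-dt_ms / TAU_PLUS)   # pre before post: potentiate
    return -A_MINUS * np.exp(dt_ms / TAU_MINUS)     # post before pre: depress

print(stdp_dw(5.0), stdp_dw(-5.0))  # approx. +0.0078, -0.0093
```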

Sensors, 2021, Vol. 21 (8), pp. 2678
Author(s):
Sergey A. Lobov
Alexey I. Zharinov
Valeri A. Makarov
Victor B. Kazantsev

Cognitive maps and spatial memory are fundamental paradigms of brain functioning. Here, we present a spiking neural network (SNN) capable of generating an internal representation of the external environment and implementing spatial memory. The SNN initially has a non-specific architecture, which is then shaped by Hebbian-type synaptic plasticity. The network receives stimuli at specific loci, while memory retrieval operates as a functional SNN response in the form of population bursts. The SNN function is explored through its embodiment in a robot moving in an arena with safe and dangerous zones. We propose a measure of the global network memory using the synaptic vector field approach to validate results and calculate information characteristics, including learning curves. We show that after training, the SNN can effectively control the robot’s cognitive behavior, allowing it to avoid dangerous regions in the arena. The learning is not perfect, however: the robot eventually visits dangerous areas. Such behavior, also observed in animals, enables relearning in time-evolving environments. If a dangerous zone moves to another place, the SNN remaps positive and negative areas, thereby escaping the catastrophic interference phenomenon known for some AI architectures. Thus, the robot adapts to a changing world.
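A hedged sketch of a synaptic-vector-field memory measure of the kind the abstract describes: each neuron's outgoing weights define a resultant vector, and the measure is the mean cosine alignment of these vectors with the direction toward the stimulation focus. The exact formula below is our reading, not the authors' published definition:

```python
import numpy as np

def global_memory(pos, w, focus):
    """Mean alignment of outgoing synaptic vectors with the direction toward
    the stimulation focus (pos: N x 2 coordinates, w: N x N weights).
    Values near 1 indicate a field converging on the stimulated locus."""
    scores = []
    for i in range(len(pos)):
        v = ((pos - pos[i]) * w[i][:, None]).sum(axis=0)  # resultant synaptic vector
        d = focus - pos[i]                                # direction to the focus
        norm = np.linalg.norm(v) * np.linalg.norm(d)
        if norm > 0:
            scores.append(float(v @ d) / norm)            # cosine alignment
    return float(np.mean(scores)) if scores else 0.0
```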


Author(s):  
Alexander N. Busygin
Andrey N. Bobylev
Alexey A. Gubin
Alexander D. Pisarev
Sergey Yu. Udovichenko

This article presents the results of a numerical simulation and an experimental study of the electrical circuit of a hardware spiking perceptron based on a memristor-diode crossbar. This required developing and manufacturing a measuring bench whose electrical circuit consists of the hardware perceptron circuit and an input peripheral circuit that implements the neurons' activation functions and ensures operation of the memory matrix in spiking mode. The authors studied the operation of the hardware spiking neural network with memristor synapses, organized as a memory matrix, in single-layer perceptron mode. The perceptron can be considered the first layer of a biomorphic neural network that performs primary processing of incoming information in a biomorphic neuroprocessor. The experimental and simulated learning curves show the expected increase in the proportion of correct classifications as the number of training epochs grows. The authors demonstrate the generation of a new association during retraining, caused by the arrival of new input information. Comparing the modeling results with the experiment on training a small neural network with a small crossbar will make it possible to build adequate models of hardware neural networks with a large memristor-diode crossbar. The arrival of new, unknown information at the input of the hardware spiking neural network can be associated with the generation of new associations in the biomorphic neuroprocessor. With further improvement of the neural network, this information will be comprehended, enabling the transition from weak to strong artificial intelligence.
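An idealized sketch of how a memristor crossbar realizes a single-layer perceptron: stored conductances multiply input voltages (Ohm's law) and column currents sum them (Kirchhoff's law). Device nonidealities, the diodes, and the actual bench circuit are omitted; all values are assumptions:

```python
import numpy as np

def crossbar_currents(G, v_in):
    """Column currents of an ideal crossbar: I_j = sum_i G[i, j] * V[i]."""
    return G.T @ v_in

def perceptron_spikes(G, v_in, threshold=1.0):
    """Output neuron j fires when its summed column current crosses threshold."""
    return (crossbar_currents(G, v_in) >= threshold).astype(int)

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))  # 4 input rows x 3 output columns
v = np.array([1.0, 0.0, 1.0, 0.0])      # spike-coded input voltages
print(perceptron_spikes(G, v))
```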


2021, Vol. 14
Author(s):
Xueyuan She
Saurabh Dash
Daehyun Kim
Saibal Mukhopadhyay

This paper introduces a heterogeneous spiking neural network (H-SNN), a novel feedforward SNN structure capable of learning complex spatiotemporal patterns with spike-timing-dependent plasticity (STDP) based unsupervised training. Within H-SNN, hierarchical spatial and temporal patterns are constructed with convolution connections and memory pathways containing spiking neurons with different dynamics. We demonstrate analytically the formation of long- and short-term memory in H-SNN and the distinct response functions of its memory pathways. In simulation, the network is tested on visual input of moving objects to simultaneously predict object class and motion dynamics. Results show that H-SNN achieves prediction accuracy at a level similar to or higher than supervised deep neural networks (DNNs). Compared to SNNs trained with back-propagation, H-SNN effectively utilizes STDP to learn spatiotemporal patterns that generalize better to unknown motion and/or object classes encountered during inference. In addition, the improved performance is achieved with 6x fewer parameters than complex DNNs, making H-SNN an efficient approach for applications with constrained computational resources.
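A toy illustration of "memory pathways with different dynamics": the same spike train leaky-integrated with a short and a long time constant yields short- and long-term responses. The constants are illustrative assumptions, not H-SNN's parameters:

```python
import numpy as np

def leaky_trace(spikes, tau, dt=1.0):
    """Leaky integration of a spike train; larger tau = longer memory."""
    v, out = 0.0, []
    for s in spikes:
        v += dt * (-v / tau) + s
        out.append(v)
    return np.round(np.array(out), 3)

spikes = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]
print(leaky_trace(spikes, tau=2.0))    # fast pathway: forgets quickly
print(leaky_trace(spikes, tau=50.0))   # slow pathway: accumulates history
```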


2008, Vol. 20 (2), pp. 415-435
Author(s):
Ryosuke Hosaka
Osamu Araki
Tohru Ikeguchi

Spike-timing-dependent synaptic plasticity (STDP), which depends on the temporal difference between pre- and postsynaptic action potentials, is observed in the cortices and hippocampus. Although several theoretical and experimental studies have revealed its fundamental aspects, its functional role remains unclear. To examine how an input spatiotemporal spike pattern is altered by STDP, we observed the output spike patterns of a spiking neural network model with an asymmetric STDP rule when the input spatiotemporal pattern is applied repeatedly. The network comprises excitatory and inhibitory neurons that exhibit local interactions. Numerical experiments show that the network generates a single global synchrony whose relative timing depends on the input spatiotemporal pattern and the network structure. This result implies that the spiking neural network learns the transformation from spatiotemporal to temporal information. The origin of the synfire chain has received little attention in the literature; our results indicate that spiking neural networks with STDP can ignite synfire chains in the cortices.
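A small sketch of how one might read out the "single global synchrony" the abstract reports: bin the population spike times and take the peak bin as the synchrony time if enough neurons participate. Bin width and participation threshold are assumptions, not the authors' criteria:

```python
import numpy as np

def synchrony_time(spike_times, n_neurons, bin_ms=2.0, frac=0.5):
    """Time of the population-rate peak if at least `frac` of the network
    fires within one bin; otherwise None (no global synchrony)."""
    if len(spike_times) == 0:
        return None
    edges = np.arange(0.0, max(spike_times) + 2 * bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times, bins=edges)
    k = int(np.argmax(counts))
    if counts[k] >= frac * n_neurons:
        return 0.5 * (edges[k] + edges[k + 1])  # center of the burst bin
    return None
```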


2020
Author(s):
Larry Shupe
Eberhard E. Fetz

We describe an integrate-and-fire (IF) spiking neural network that incorporates spike-timing dependent plasticity (STDP) and simulates the experimental outcomes of four different conditioning protocols that produce cortical plasticity. The original conditioning experiments were performed in freely moving non-human primates with an autonomous head-fixed bidirectional brain-computer interface. Three protocols involved closed-loop stimulation triggered by (a) spike activity of single cortical neurons, (b) EMG activity from forearm muscles, and (c) cycles of spontaneous cortical beta activity. A fourth protocol involved open-loop delivery of pairs of stimuli at neighboring cortical sites. The IF network that replicates the experimental results consists of 360 units whose simulated membrane potentials are produced by synaptic inputs and trigger a spike on reaching threshold. The 240 cortical units produce either excitatory or inhibitory post-synaptic potentials in their target units. In addition to the experimentally observed conditioning effects, the model also allows computation of underlying network behavior not originally documented. Furthermore, the model makes predictions about outcomes from protocols not yet investigated, including spike-triggered inhibition, gamma-triggered stimulation, and disynaptic conditioning. The success of the simulations suggests that a simple voltage-based IF model incorporating STDP can capture the essential mechanisms mediating targeted plasticity with closed-loop stimulation.
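A minimal voltage-based IF unit in the spirit of the abstract, with a callback standing in for the closed-loop, spike-triggered stimulation of protocol (a); decay, threshold, and reset are generic assumptions, not the paper's parameters:

```python
def simulate_if(psp_input, threshold=1.0, decay=0.9, on_spike=None):
    """Voltage-based IF unit: PSPs sum on a leaky membrane; a spike fires
    and resets the membrane at threshold. on_spike(t) stands in for a
    closed-loop, spike-triggered stimulus."""
    v, spike_times = 0.0, []
    for t, psp in enumerate(psp_input):
        v = v * decay + psp
        if v >= threshold:
            spike_times.append(t)
            v = 0.0
            if on_spike is not None:
                on_spike(t)
    return spike_times

print(simulate_if([0.4, 0.4, 0.4, 0.0, 0.8, 0.5],
                  on_spike=lambda t: print("stim at", t)))  # spikes at t = 2, 5
```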


2021
Author(s):
Jacopo Bono
Sara Zannone
Victor Pedrosa
Claudia Clopath

We describe a framework in which a biologically plausible spiking neural network mimicking hippocampal layers learns a cognitive map known as the successor representation. We show analytically how, at the algorithmic level, the learning follows the TD(λ) algorithm, which emerges from the underlying spike-timing dependent plasticity rule. We then analyze the implications of this framework, uncovering how behavioural activity and experience replays can play complementary roles when learning the representation of the environment, how relations over behavioural timescales can be learned with synaptic plasticity acting on the range of milliseconds, and how the learned representation can be flexibly encoded by allowing state-dependent delay discounting through neuromodulation and altered firing rates.
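For concreteness, a compact sketch of the TD(λ) successor-representation update the abstract says emerges from STDP; the discrete state indexing, learning rate, and discount values are illustrative assumptions:

```python
import numpy as np

def td_lambda_sr_step(M, e, s, s_next, alpha=0.1, lam=0.9, gamma=0.95):
    """One TD(lambda) update of the successor matrix M after a transition
    s -> s_next, with per-state eligibility traces e."""
    n = M.shape[0]
    delta = np.eye(n)[s] + gamma * M[s_next] - M[s]  # vector-valued TD error
    e = gamma * lam * e
    e[s] += 1.0                                       # mark the visited state
    M = M + alpha * np.outer(e, delta)                # credit traced states
    return M, e

n = 4
M, e = np.eye(n), np.zeros(n)          # SR initialised to the identity
for s in range(n - 1):                 # one pass along a chain 0 -> 1 -> 2 -> 3
    M, e = td_lambda_sr_step(M, e, s, s + 1)
print(np.round(M, 3))
```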


2021, Vol. 11 (5), pp. 2059
Author(s):
Sungmin Hwang
Hyungjin Kim
Byung-Gook Park

Hardware-based spiking neural networks (SNNs) have attracted many researchers' attention due to their energy efficiency. When implementing a hardware-based SNN, offline training is most commonly used: weights trained by a software-based artificial neural network (ANN) are transferred to synaptic devices. However, mapping all the synaptic weights becomes time-consuming as the scale of the neural network increases. In this paper, we propose a method for quantized weight transfer using spike-timing-dependent plasticity (STDP) for hardware-based SNNs. STDP is an online learning algorithm for SNNs, but here we utilize it as the weight transfer method. First, we train an SNN on the Modified National Institute of Standards and Technology (MNIST) dataset and perform weight quantization. Next, the quantized weights are mapped to the synaptic devices using STDP, whereby all the synaptic weights connected to a neuron are transferred simultaneously, reducing the number of pulse steps. The performance of the proposed method is confirmed: accuracy drops little above a certain quantization level, while the number of pulse steps for weight transfer decreases substantially. In addition, the effect of device variation is verified.
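A hedged sketch of the quantization half of the pipeline: offline-trained weights are snapped to a small number of levels so each synapse can be programmed with a bounded number of identical pulses. The uniform scheme and level count are assumptions; the paper's STDP pulse protocol is not reproduced here:

```python
import numpy as np

def quantize(w, levels=8):
    """Uniformly snap weights to `levels` values; the integer level index is
    the number of identical programming pulses a synapse would need."""
    lo, hi = float(w.min()), float(w.max())
    idx = np.round((w - lo) / (hi - lo) * (levels - 1)).astype(int)
    return lo + idx * (hi - lo) / (levels - 1), idx

w = np.random.default_rng(1).normal(size=(5, 4))  # stand-in trained weights
w_q, pulse_idx = quantize(w, levels=4)
print(pulse_idx)                                  # per-synapse pulse counts
```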


2021, Vol. 15
Author(s):
Abderazek Ben Abdallah
Khanh N. Dang

Spiking neuromorphic systems have been introduced as promising platforms for energy-efficient spiking neural network (SNN) execution. SNNs incorporate neuronal and synaptic states, in addition to varying time scales, into their computational model. Since each neuron in these networks is connected to many others, high bandwidth is required. Moreover, since spike times are used to encode information in an SNN, precise communication latency is also needed, although an SNN as a whole tolerates spike delay variation within certain limits. The two-dimensional packet-switched network-on-chip (NoC) was proposed as a solution to provide a scalable interconnect fabric for large-scale spike-based neural networks. 3D-ICs have also attracted much attention as a potential solution to the interconnect bottleneck. Combining these two emerging technologies opens a new horizon for IC design that satisfies the requirements of low power and small footprint in emerging AI applications. Moreover, although fault tolerance is a natural feature of biological systems, integrating many computation and memory units into neuromorphic chips raises a reliability issue, where a defective part can affect the overall system's performance. This paper presents the design and simulation of R-NASH, a reliable three-dimensional digital neuromorphic system geared explicitly toward mimicking the biological brain's three-dimensional structure in 3D-ICs, where information in the network is represented by sparse patterns of spike timing and learning is based on the local spike-timing-dependent plasticity rule. Our platform enables high integration density and small spike delay in spiking networks and features a scalable design. R-NASH is based on Through-Silicon-Via (TSV) technology, facilitating spiking neural network implementation on clustered neurons connected by a Network-on-Chip. We provide a memory interface with the host CPU, allowing online training and inference of spiking neural networks. Moreover, R-NASH supports fault recovery with graceful performance degradation.
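To illustrate the 3D NoC fabric such a system rests on, a generic sketch of deterministic dimension-ordered (XYZ) routing on a 3D mesh, where a z hop corresponds to a TSV crossing between layers; this is the textbook scheme, not R-NASH's actual router:

```python
def xyz_route(src, dst):
    """Hop sequence of dimension-ordered routing on a 3D mesh: resolve x,
    then y, then z (vertical z hops cross TSVs between stacked layers)."""
    cur, hops = list(src), []
    for axis in range(3):
        while cur[axis] != dst[axis]:
            cur[axis] += 1 if dst[axis] > cur[axis] else -1
            hops.append(tuple(cur))
    return hops

print(xyz_route((0, 0, 0), (2, 1, 1)))
# [(1, 0, 0), (2, 0, 0), (2, 1, 0), (2, 1, 1)]
```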

