Sparse Coding with a Somato-Dendritic Rule

2018
Author(s): Damien Drix, Verena V. Hafner, Michael Schmuker

Cortical neurons are silent most of the time. This sparse activity is energy efficient, and the resulting neural code has favourable properties for associative learning. Most neural models of sparse coding use some form of homeostasis to ensure that each neuron fires infrequently. But homeostatic plasticity acting on a fast timescale may not be biologically plausible, and could lead to catastrophic forgetting in embodied agents that learn continuously. We set out to explore whether inhibitory plasticity could play that role instead, regulating both the population sparseness and the average firing rates. We put the idea to the test in a hybrid network where rate-based dendritic compartments integrate the feedforward input, while spiking somas compete through recurrent inhibition. A somato-dendritic learning rule allows somatic inhibition to modulate nonlinear Hebbian learning in the dendrites. Trained on MNIST digits and natural images, the network discovers independent components that form a sparse encoding of the input and support linear decoding. These findings confirm that intrinsic plasticity is not strictly required for regulating sparseness: inhibitory plasticity can have the same effect, although that mechanism comes with its own stability-plasticity dilemma. Going beyond point neuron models, the network illustrates how a learning rule can make use of dendrites and compartmentalised inputs; it also suggests a functional interpretation for clustered somatic inhibition in cortical neurons.
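
As a rough illustration of the idea, the sketch below pairs rate-based dendritic integration with a hard winner-take-all stand-in for spiking somatic competition. The Oja-style weight decay, the choice of k active somas, and all constants are assumptions made for this sketch, not the authors' exact rule.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 64, 16
W = rng.normal(scale=0.1, size=(n_out, n_in))  # dendritic weights

def step(x, W, eta=0.01, k=2):
    """One presentation: dendrites integrate, somas compete, winners learn."""
    d = W @ x                        # rate-based dendritic drive
    winners = np.argsort(d)[-k:]     # recurrent inhibition silences the rest
    for i in winners:
        # Somatic activity gates a nonlinear Hebbian update in the dendrite;
        # the Oja-like decay term keeps the weights bounded.
        W[i] += eta * d[i] * (x - d[i] * W[i])
    return winners

step(rng.random(n_in), W)
```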

2019
Author(s): Toshitake Asabuki, Tomoki Fukai

The brain identifies potentially salient features within continuous information streams to appropriately process external and internal temporal events. This requires the compression or abstraction of information streams, for which no effective information-processing principles are known. Here, we propose conditional entropy minimization learning as the fundamental principle of such temporal processing. We show that this learning rule resembles Hebbian learning with backpropagating action potentials in dendritic neuron models. Moreover, networks of the dendritic neurons can perform a surprisingly wide variety of complex unsupervised learning tasks. Our model not only accounts for the mechanisms of chunking of temporal inputs in the human brain but also accomplishes blind source separation of correlated mixed signals, which cannot be solved by conventional machine learning methods such as independent-component analysis.

One Sentence Summary: Neurons use soma-dendrite interactions to self-supervise the learning of characteristic features of various temporal inputs.
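
The soma-teaches-dendrite flavour of such a rule can be conveyed in a few lines. The actual objective is conditional entropy minimization; the sketch below replaces it with a much simpler mismatch-gated update in which the soma's nonlinear output acts as the dendrite's teaching signal. The tanh nonlinearity and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in = 32
w = rng.normal(scale=0.1, size=n_in)   # dendritic weights

def update(w, x, eta=0.005):
    """Soma-dendrite self-supervision: the dendrite learns to match the soma."""
    v_dend = w @ x                     # dendritic prediction
    v_soma = np.tanh(v_dend)           # somatic response, backpropagated
    err = v_soma - v_dend              # mismatch the rule tries to shrink
    w += eta * err * x                 # Hebbian-like, gated by the mismatch
    return v_soma

for _ in range(100):
    update(w, rng.standard_normal(n_in))
```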


2021
pp. 1-34
Author(s): Xiaolin Hu, Zhigang Zeng

The functional properties of neurons in the primary visual cortex (V1) are thought to be closely related to the structural properties of this network, but the specific relationships remain unclear. Previous theoretical studies have suggested that sparse coding, an energy-efficient coding method, might underlie the orientation selectivity of V1 neurons. We thus aimed to delineate how the neurons are wired to produce this feature. We constructed a model and endowed it with a simple Hebbian learning rule to encode images of natural scenes. The excitatory neurons fired sparsely in response to images and developed strong orientation selectivity. After learning, the connectivity between excitatory neuron pairs, inhibitory neuron pairs, and excitatory-inhibitory neuron pairs depended on firing pattern and receptive field similarity between the neurons. The receptive fields (RFs) of excitatory neurons and inhibitory neurons were well predicted by the RFs of presynaptic excitatory neurons and inhibitory neurons, respectively. The excitatory neurons formed a small-world network, in which certain local connection patterns were significantly overrepresented. Bidirectionally manipulating the firing rates of inhibitory neurons caused linear transformations of the firing rates of excitatory neurons, and vice versa. These wiring properties and modulatory effects were congruent with a wide variety of data measured in V1, suggesting that the sparse coding principle might underlie both the functional and wiring properties of V1 neurons.
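
A minimal sketch of an excitatory-inhibitory rate circuit of the kind the study trains is given below: one feedforward matrix from the image to excitatory cells, recurrent E-to-I and I-to-E weights, and a Hebbian update with a decay term for stability. The architecture reduction and all constants are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
n_px, n_e, n_i = 64, 32, 8
F = rng.normal(scale=0.1, size=(n_e, n_px))            # image -> E cells
W_ie = np.abs(rng.normal(scale=0.1, size=(n_i, n_e)))  # E -> I
W_ei = np.abs(rng.normal(scale=0.1, size=(n_e, n_i)))  # I -> E

def respond(img, steps=50, dt=0.1):
    """Relax the E-I rate dynamics; rectification keeps rates non-negative."""
    r_e, r_i = np.zeros(n_e), np.zeros(n_i)
    drive = F @ img
    for _ in range(steps):
        r_e += dt * (-r_e + np.maximum(drive - W_ei @ r_i, 0.0))
        r_i += dt * (-r_i + np.maximum(W_ie @ r_e, 0.0))
    return r_e, r_i

def hebb(img, r_e, eta=0.01):
    """Simple Hebbian rule on the feedforward weights, with decay."""
    F[:] += eta * (np.outer(r_e, img) - r_e[:, None] * F)

img = rng.random(n_px)
r_e, r_i = respond(img)
hebb(img, r_e)
```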


2020
Vol 117 (47), pp. 29948-29958
Author(s): Maxwell Gillett, Ulises Pereira, Nicolas Brunel

Sequential activity has been observed in multiple neuronal circuits across species, neural structures, and behaviors. It has been hypothesized that sequences could arise from learning processes. However, it is still unclear whether biologically plausible synaptic plasticity rules can organize neuronal activity to form sequences whose statistics match experimental observations. Here, we investigate temporally asymmetric Hebbian rules in sparsely connected recurrent rate networks and develop a theory of the transient sequential activity observed after learning. These rules transform a sequence of random input patterns into synaptic weight updates. After learning, recalled sequential activity is reflected in the transient correlation of network activity with each of the stored input patterns. Using mean-field theory, we derive a low-dimensional description of the network dynamics and compute the storage capacity of these networks. Multiple temporal characteristics of the recalled sequential activity are consistent with experimental observations. We find that the degree of sparseness of the recalled sequences can be controlled by nonlinearities in the learning rule. Furthermore, sequences maintain robust decoding, but display highly labile dynamics, when synaptic connectivity is continuously modified due to noise or storage of other patterns, similar to recent observations in hippocampus and parietal cortex. Finally, we demonstrate that our results also hold in recurrent networks of spiking neurons with separate excitatory and inhibitory populations.
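
A toy version of the construction: a temporally asymmetric Hebbian matrix pairs each random pattern with its successor, and recall is read out as the transient overlap of the network state with each stored pattern. The bilinear (identity) learning-rule nonlinearities and the tanh transfer function are simplifications chosen for this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
N, P = 200, 10                       # neurons, patterns in the sequence
xi = rng.standard_normal((P, N))     # random input patterns

# Temporally asymmetric Hebbian rule: postsynaptic pattern mu+1 is paired
# with presynaptic pattern mu (the simplest, bilinear case).
W = sum(np.outer(xi[m + 1], xi[m]) for m in range(P - 1)) / N

def recall(r0, steps=200, dt=0.1):
    """Rate dynamics; the overlap with each pattern tracks sequence recall."""
    r, overlaps = r0.copy(), []
    for _ in range(steps):
        r += dt * (-r + np.tanh(W @ r))
        overlaps.append(xi @ r / N)
    return np.array(overlaps)        # (steps, P): transient, ordered peaks

m = recall(xi[0])                    # cue the network with the first pattern
```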


2011
Vol 106 (1), pp. 361-373
Author(s): Srdjan Ostojic

Interspike interval (ISI) distributions of cortical neurons exhibit a range of different shapes. Wide ISI distributions are believed to stem from a balance of excitatory and inhibitory inputs that leads to a strongly fluctuating total drive. An important question is whether the full range of experimentally observed ISI distributions can be reproduced by modulating this balance. To address this issue, we investigate the shape of the ISI distributions of spiking neuron models receiving fluctuating inputs. Using analytical tools to describe the ISI distribution of a leaky integrate-and-fire (LIF) neuron, we identify three key features: 1) the ISI distribution displays an exponential decay at long ISIs independently of the strength of the fluctuating input; 2) as the amplitude of the input fluctuations is increased, the ISI distribution evolves progressively between three types, a narrow distribution (suprathreshold input), an exponential with an effective refractory period (subthreshold but suprareset input), and a bursting exponential (subreset input); 3) the shape of the ISI distribution is approximately independent of the mean ISI and determined only by the coefficient of variation. Numerical simulations show that these features are not specific to the LIF model but are also present in the ISI distributions of the exponential integrate-and-fire model and a Hodgkin-Huxley-like model. Moreover, we observe that for a fixed mean and coefficient of variation of ISIs, the full ISI distributions of the three models are nearly identical. We conclude that the ISI distributions of spiking neurons in the presence of fluctuating inputs are well described by gamma distributions.
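
The dependence of the ISI distribution on the input regime is easy to reproduce by simulation. The sketch below integrates a standard LIF neuron driven by white noise (Euler-Maruyama discretization); the threshold, time constant, and drive values are illustrative choices, not the paper's parameters.

```python
import numpy as np

def lif_isis(mu, sigma, n_spikes=2000, dt=1e-4, tau=0.02,
             v_th=1.0, v_reset=0.0):
    """ISIs of a LIF neuron: tau dV = (mu - V) dt + sigma sqrt(tau) dW."""
    rng = np.random.default_rng(4)
    v, t, t_last, isis = v_reset, 0.0, 0.0, []
    while len(isis) < n_spikes:
        v += (dt / tau) * (mu - v) + sigma * np.sqrt(dt / tau) * rng.standard_normal()
        t += dt
        if v >= v_th:                # spike: record the interval and reset
            isis.append(t - t_last)
            t_last, v = t, v_reset
    return np.asarray(isis)

isi = lif_isis(mu=0.8, sigma=0.5)    # subthreshold mean drive
cv = isi.std() / isi.mean()          # coefficient of variation
```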


2019
Vol 6 (4), pp. 181098
Author(s): Le Zhao, Jie Xu, Xiantao Shang, Xue Li, Qiang Li, ...

Non-volatile memristors are promising for future hardware-based neurocomputation because they are capable of emulating biological synaptic functions. Various material strategies have been studied in pursuit of better device performance, such as lower energy cost and better biological plausibility. In this work, we show a novel design for a non-volatile memristor based on a CoO/Nb:SrTiO3 heterojunction. We found that the memristor intrinsically exhibits resistive switching behaviour, which can be ascribed to the migration of oxygen vacancies and to charge trapping and detrapping at the heterojunction interface. The carrier trapping/detrapping level can be finely adjusted by regulating voltage amplitudes, so gradual conductance modulation can be realized with appropriate voltage pulse stimulation. Spike-timing-dependent plasticity, an important Hebbian learning rule, has also been implemented in the device. Our results indicate the possibility of achieving artificial synapses with the CoO/Nb:SrTiO3 heterojunction. Compared with filamentary synaptic devices, our device has the potential to reduce energy consumption, enable large-scale neuromorphic systems, and operate more reliably, since no structural distortion occurs.
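
For readers unfamiliar with the protocol, the sketch below shows the pairing-based STDP window such a device is programmed to emulate: the conductance change depends on the relative timing of pre- and post-synaptic pulses. The amplitudes and time constants are generic textbook values, not parameters fitted to the CoO/Nb:SrTiO3 junction.

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """STDP window: pre-before-post (dt > 0) potentiates, the reverse depresses."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_ms)
    return -a_minus * np.exp(dt_ms / tau_ms)

g = 0.5                              # normalised device conductance
for dt in (5.0, 10.0, -5.0):         # pre/post spike-time differences (ms)
    g = float(np.clip(g + stdp_dw(dt), 0.0, 1.0))
```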


1989
Vol 03 (07), pp. 555-560
Author(s): M.V. TSODYKS

We consider the Hopfield model with the simplest form of the Hebbian learning rule, in which only simultaneous activity of pre- and post-synaptic neurons leads to modification of the synapse. An extra inhibition proportional to the full network activity is needed. Both symmetric nondiluted and asymmetric diluted networks are considered. The model performs well at extremely low levels of activity, $p < K^{-1/2}$, where $K$ is the mean number of synapses per neuron.
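
A minimal sketch of the construction: sparse binary patterns stored with the coincidence-only Hebbian rule, plus a global inhibition proportional to the total activity. The network size, pattern count, threshold, and inhibition gain are illustrative choices, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(5)
N, P, p = 500, 10, 0.02              # neurons, patterns, activity level
xi = (rng.random((P, N)) < p).astype(float)   # sparse binary patterns

# Simplest Hebbian rule: only coincident pre- and post-synaptic activity
# modifies the synapse.
J = xi.T @ xi / N
np.fill_diagonal(J, 0.0)

def update(s, g_inh=0.5, theta=0.0):
    """Parallel update with extra inhibition proportional to total activity."""
    h = J @ s - g_inh * s.sum() / N
    return (h > theta).astype(float)

s = xi[0].copy()                     # cue with a stored pattern
for _ in range(10):
    s = update(s)
overlap = s @ xi[0] / xi[0].sum()    # retrieval quality for pattern 0
```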


2010
Vol 22 (3), pp. 689-729
Author(s): Vilson Luiz Dalle Mole, Aluizio Fausto Ribeiro Araújo

The growing self-organizing surface map (GSOSM) is a novel map model that learns a folded surface immersed in 3D space. Starting from a dense point cloud, the surface is reconstructed as an incremental mesh composed of approximately equilateral triangles. Unlike other models such as neural meshes (NM), the GSOSM builds a surface topology while accepting samples presented in any order. The GSOSM introduces a novel connection learning rule called competitive connection Hebbian learning (CCHL), which produces a complete triangulation. GSOSM reconstructions are accurate and often free of false or overlapping faces. This letter presents and discusses the GSOSM model, analyzes a set of results, and compares GSOSM with other models.
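
The flavour of the connection rule can be conveyed in a few lines: for each input sample, the two best-matching nodes are joined by an edge. The full CCHL rule adds competition conditions that guarantee a proper triangulation; this reduced version, and its data structures, are a simplification made for illustration.

```python
import numpy as np

def cchl_step(nodes, edges, x):
    """Reduced CCHL step: connect the two nodes closest to the sample x."""
    d = np.linalg.norm(nodes - x, axis=1)
    i, j = np.argsort(d)[:2]          # best- and second-best-matching nodes
    edges.add((min(i, j), max(i, j))) # undirected edge, stored once
    return i, j

rng = np.random.default_rng(6)
nodes = rng.random((20, 3))           # node positions in 3D space
edges = set()
for _ in range(200):
    cchl_step(nodes, edges, rng.random(3))
```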

