Localist plasticity identified by mutual information

2019 ◽  
Author(s):  
Gabriele Scheler ◽  
Johann Schumann

The issue of memory is difficult for standard neural network models. Ubiquitous synaptic plasticity introduces the problem of interference, which limits pattern recall and introduces conflation errors. We present a lognormal recurrent neural network, load patterns (MNIST) into it, and test the resulting neural representation for information content with an output classifier. We identify neurons that ‘compress’ the pattern information into their own adjacency network, and achieve recall by stimulating these neurons. Learning is limited to intrinsic plasticity and the output synapses of these pattern neurons (localist plasticity), which prevents interference. Our first experiments show that this form of storage and recall is possible, with the caveat of a ‘lossy’ recall similar to human memory. Comparing our results with a standard Gaussian network model, we find that this effect breaks down for the Gaussian model.
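The role of the lognormal weight distribution can be illustrated with a small sketch: a heavy-tailed weight matrix concentrates most synaptic strength in a few strong connections, the substrate for the ‘pattern neurons’ described above. This is a minimal illustration, not the paper's model; the distribution parameters and network size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Recurrent weights drawn from a heavy-tailed lognormal vs. a (rectified)
# Gaussian distribution; parameter values are illustrative, not the paper's.
w_lognormal = rng.lognormal(mean=-1.0, sigma=1.0, size=(n, n))
w_gaussian = np.abs(rng.normal(loc=0.5, scale=0.15, size=(n, n)))

def top_share(w, frac=0.05):
    """Fraction of total weight carried by the strongest `frac` of synapses."""
    flat = np.sort(w.ravel())[::-1]
    return flat[: int(frac * flat.size)].sum() / flat.sum()

# The lognormal network puts a far larger share of its total synaptic
# strength into its top 5% of synapses than the Gaussian one does.
share_ln, share_g = top_share(w_lognormal), top_share(w_gaussian)
```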

2013 ◽  
Vol 109 (1) ◽  
pp. 202-215 ◽  
Author(s):  
Jordan A. Taylor ◽  
Laura L. Hieber ◽  
Richard B. Ivry

Generalization provides a window into the representational changes that occur during motor learning. Neural network models have been integral in revealing how the neural representation constrains the extent of generalization. Specifically, two key features are thought to define the pattern of generalization. First, generalization is constrained by the properties of the underlying neural units; with directionally tuned units, the extent of generalization is limited by the width of the tuning functions. Second, error signals are used to update a sensorimotor map to align the desired and actual output, with a gradient-descent learning rule ensuring that the error produces changes in those units responsible for the error. In prior studies, task-specific effects in generalization have been attributed to differences in neural tuning functions. Here we ask whether differences in generalization functions may arise from task-specific error signals. We systematically varied visual error information in a visuomotor adaptation task and found that this manipulation led to qualitative differences in generalization. A neural network model suggests that these differences are the result of error feedback processing operating on a homogeneous and invariant set of tuning functions. Consistent with novel predictions derived from the model, increasing the number of training directions led to specific distortions of the generalization function. Taken together, the behavioral and modeling results offer a parsimonious account of generalization that is based on the utilization of feedback information to update a sensorimotor map with stable tuning functions.
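The two ingredients named above, directionally tuned units and a gradient-descent (delta-rule) update driven by error feedback, can be sketched in a few lines. All parameter values (tuning width, learning rate, error size) are illustrative assumptions, not taken from the study.

```python
import numpy as np

directions = np.linspace(-180, 180, 361)              # probe directions (deg)
centers = np.linspace(-180, 180, 36, endpoint=False)  # preferred directions
sigma = 30.0                                          # tuning width (deg)

def activity(theta):
    d = (theta - centers + 180) % 360 - 180           # circular distance
    return np.exp(-d ** 2 / (2 * sigma ** 2))

w = np.zeros_like(centers)        # sensorimotor map weights
error, lr = 30.0, 0.1             # error observed after training at 0 deg
w += lr * error * activity(0.0)   # delta rule: credit goes to active units

# Predicted adaptation at every probe direction: peaked at the trained
# direction, falling off with the width of the tuning functions.
adaptation = np.array([w @ activity(t) for t in directions])
```

Because the update is confined to units active at the trained direction, the generalization function inherits its shape from the tuning curves, which is the constraint the abstract describes.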


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Jianming Zheng ◽  
Yupu Guo ◽  
Chong Feng ◽  
Honghui Chen

Document representation is widely used in practical applications, for example, sentiment classification, text retrieval, and text classification. Previous work is mainly based on statistics and on neural networks, which suffer from data sparsity and limited model interpretability, respectively. In this paper, we propose a general framework for document representation with a hierarchical architecture. In particular, we incorporate the hierarchical architecture into three traditional neural-network models for document representation, resulting in three hierarchical neural representation models for document classification: TextHFT, TextHRNN, and TextHCNN. Our comprehensive experimental results on two public datasets, Yelp 2016 and Amazon Reviews (Electronics), show that our proposals with the hierarchical architecture outperform the corresponding neural-network models for document classification, yielding a significant improvement ranging from 4.65% to 35.08% in accuracy at a comparable (or substantially lower) cost in time. In addition, we find that long documents benefit more from the hierarchical architecture than short ones, as the improvement in accuracy on long documents is greater than that on short documents.
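A minimal sketch of the hierarchical idea: words are pooled into sentence vectors and sentences into a document vector, versus a flat average over all words. The toy vocabulary and random embeddings are assumptions for illustration; the paper's models (TextHFT, TextHRNN, TextHCNN) use learned encoders rather than plain averaging.

```python
import numpy as np

rng = np.random.default_rng(1)
vocab = {"good": 0, "food": 1, "slow": 2, "service": 3}
emb = rng.normal(size=(len(vocab), 8))   # toy word embeddings (assumed)

def flat_doc_vector(doc):
    """Flat baseline: average every word vector in the document."""
    return emb[[vocab[w] for s in doc for w in s]].mean(axis=0)

def hierarchical_doc_vector(doc):
    """Hierarchical: words -> sentence vectors -> document vector."""
    return np.mean([emb[[vocab[w] for w in s]].mean(axis=0) for s in doc], axis=0)

# Sentences of unequal length: the flat average over-weights the long
# sentence, while the hierarchical version gives each sentence equal weight.
doc = [["good", "food", "good"], ["slow", "service"]]
v_flat, v_hier = flat_doc_vector(doc), hierarchical_doc_vector(doc)
```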


MRS Bulletin ◽  
1988 ◽  
Vol 13 (8) ◽  
pp. 30-35 ◽  
Author(s):  
Dana Z. Anderson

From the time of their conception, holography and holograms have evolved as a metaphor for human memory. Holograms can be made so that the information they contain is distributed throughout the holographic medium: destroy part of the hologram and the stored information remains intact, apart from a loss of detail. In this property holograms evidently have something in common with human memory, which is to some extent resilient against physical damage to the brain. There is much more to the metaphor than simply that information is stored in a distributed manner. Research in the optics community is now looking to holography, in particular dynamic holography, not only for information storage but for information processing as well. The ideas are based upon neural network models. Neural networks are models of processing that are inspired by the apparent architecture of the brain; this is a processing paradigm that is new to optics. From within this network paradigm we look to build machines that can store and recall information associatively, play back a chain of recorded events, undergo learning and possibly forgetting, make decisions, adapt to a particular environment, and self-organize to evolve some desirable behavior. We hope that neural network models will give rise to optical machines for memory, speech processing, visual processing, language acquisition, motor control, and so on.
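Associative storage and recall of the kind described here is often illustrated with a Hopfield-style network, in which information is distributed across all weights and a damaged cue still retrieves the full pattern, much like the damaged hologram. This sketch is a generic illustration, not a model of the optical systems discussed; sizes and patterns are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
patterns = rng.choice([-1, 1], size=(2, n))   # two stored +/-1 patterns

# Hebbian storage: information is distributed over every weight.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)

def recall(cue, steps=5):
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)       # threshold update
    return s

# Damage the cue (flip 15 of 100 units); recall still restores the whole
# pattern -- the 'destroy part, keep the whole' property of the metaphor.
cue = patterns[0].copy()
cue[rng.choice(n, size=15, replace=False)] *= -1
restored = recall(cue)
```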


2007 ◽  
Vol 19 (1) ◽  
pp. 194-217 ◽  
Author(s):  
J. V. Stone ◽  
P. E. Jupp

After a language has been learned and then forgotten, relearning some words appears to facilitate spontaneous recovery of other words. More generally, relearning partially forgotten associations induces recovery of other associations in humans, an effect we call free-lunch learning (FLL). Using neural network models, we prove that FLL is a necessary consequence of storing associations as distributed representations. Specifically, we prove that (1) FLL becomes increasingly likely as the number of synapses (connection weights) increases, suggesting that FLL contributes to memory in neurophysiological systems, and (2) the magnitude of FLL is greatest if inactive synapses are removed, suggesting a computational role for synaptic pruning in physiological systems. We also demonstrate that FLL is different from generalization effects conventionally associated with neural network models. As FLL is a generic property of distributed representations, it may constitute an important factor in human memory.
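A minimal sketch of the FLL setup in a linear network with distributed weights: store associations, perturb ("forget") the weights, relearn only half of the associations, and check that error on the never-relearned half also drops. To make the effect clean in this tiny example, the forgetting perturbation is deliberately chosen within the span of the relearned inputs; all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in = 20
X = rng.normal(size=(10, n_in))          # 10 input patterns
w_true = rng.normal(size=n_in)
y = X @ w_true                           # the stored associations

relearn, held_out = slice(0, 5), slice(5, 10)

# 'Forget' by perturbing the weights (perturbation lies in the span of the
# relearned inputs, so the effect is clean in this toy example).
w = w_true + X[relearn].T @ (0.3 * np.ones(5))

def err(w, idx):
    return np.mean((X[idx] @ w - y[idx]) ** 2)

err_before = err(w, held_out)
for _ in range(300):                     # gradient descent on the relearned half only
    grad = 2 * X[relearn].T @ (X[relearn] @ w - y[relearn]) / 5
    w -= 0.05 * grad
err_after = err(w, held_out)             # error on never-relearned items drops too
```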


2016 ◽  
Vol 26 (05) ◽  
pp. 1650040 ◽  
Author(s):  
Francisco Javier Ropero Peláez ◽  
Mariana Antonia Aguiar-Furucho ◽  
Diego Andina

In this paper, we use the neural property known as intrinsic plasticity to develop neural network models that resemble the koniocortex, the fourth layer of sensory cortices. These models evolved from a very basic two-layered neural network to a complex associative koniocortex network. In the initial network, intrinsic and synaptic plasticity govern the shifting of the activation function, and the modification of synaptic weights, respectively. In this first version, competition is forced, so that the most activated neuron is arbitrarily set to one and the others to zero, while in the second, competition occurs naturally due to inhibition between second layer neurons. In the third version of the network, whose architecture is similar to the koniocortex, competition also occurs naturally owing to the interplay between inhibitory interneurons and synaptic and intrinsic plasticity. A more complex associative neural network was developed based on this basic koniocortex-like neural network, capable of dealing with incomplete patterns and ideally suited to operating similarly to a learning vector quantization network. We also discuss the biological plausibility of the networks and their role in a more complex thalamocortical model.
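Intrinsic plasticity as used here, a shifting of the activation function, can be sketched as a sigmoid whose midpoint adapts so that the neuron's mean firing rate approaches a homeostatic target. The target rate, learning rate, and input statistics are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x, theta):
    return 1.0 / (1.0 + np.exp(-(x - theta)))

theta = 0.0        # midpoint of the activation function (shifted by IP)
target = 0.1       # desired mean firing rate (assumed value)
lr = 0.1

rates = []
for x in rng.normal(loc=1.0, scale=0.5, size=2000):
    r = sigmoid(x, theta)
    theta += lr * (r - target)   # firing above target raises the threshold
    rates.append(r)
# After adaptation, the neuron's mean rate settles near the target.
```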


2021 ◽  
Author(s):  
Zhenglong Zhou ◽  
Dhairyya Singh ◽  
Marlie C Tandoc ◽  
Anna C Schapiro

Neural representations can be characterized as falling along a continuum, from distributed representations, in which neurons are responsive to many related features of the environment, to localist representations, where neurons orthogonalize activity patterns despite any input similarity. Distributed representations support powerful learning in neural network models and have been posited to exist throughout the brain, but it is unclear under what conditions humans acquire these representations and what computational advantages they may confer. In a series of behavioral experiments, we present evidence that interleaved exposure to new information facilitates the rapid formation of distributed representations in humans. As in neural network models with distributed representations, interleaved learning supports fast and automatic recognition of item relatedness, affords efficient generalization, and is especially critical for inference when learning requires statistical integration of noisy information over time. We use the data to adjudicate between several existing computational models of human memory and inference. The results demonstrate the power of interleaved learning and implicate the use of distributed representations in human inference.
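The contrast between interleaved and blocked exposure can be sketched with a simple delta-rule associator: trained blocked, the second item overwrites part of the first (interference); trained interleaved, both survive. This is a generic illustration of the network principle the experiments build on, not a model of the human data; patterns and learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
a, b = rng.normal(size=n), rng.normal(size=n)    # two input patterns
items = [(a, 1.0), (b, -1.0)]                    # pattern -> target association

def train(schedule, lr=0.01):
    w = np.zeros(n)
    for x, t in schedule:
        w += lr * (t - w @ x) * x                # delta rule
    return w

w_inter = train(items * 100)                     # a, b, a, b, ...
w_block = train([items[0]] * 100 + [items[1]] * 100)  # all a, then all b

recall_error = lambda w: abs(w @ a - 1.0)        # error on the first item
```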


2020 ◽  
Vol 5 ◽  
pp. 140-147 ◽  
Author(s):  
T.N. Aleksandrova ◽  
◽  
E.K. Ushakov ◽  
A.V. Orlova ◽  
◽  
...  

The series of neural network models used in developing an aggregated digital twin of equipment as a cyber-physical system is presented. The twins of machining accuracy, chip formation, and tool wear are examined in detail. On their basis, systems for stabilizing the chip-formation process during cutting and for diagnosing cutting tool wear are developed. Keywords: cyber-physical system; neural network model of equipment; big data; digital twin of chip formation; digital twin of tool wear; digital twin of nanostructured coating choice

