NDDN: A Cloud-Based Neuroinformation Database for Developing Neuronal Networks

2018 ◽  
Vol 2018 ◽  
pp. 1-8
Author(s):  
Jiangbo Pu ◽  
Xiangning Li

Electrical activity of developing dissociated neuronal networks is of immense significance for understanding the general properties of neural information processing and storage. In addition, the complexity and diversity of network activity patterns make them ideal candidates for developing novel computational models and evaluating algorithms. However, few databases focus on the changing network dynamics during development. Here, we describe the design and implementation of the Neuroinformation Database for Developing Networks (NDDN), a repository for electrophysiological data collected from long-term cultured hippocampal networks. The NDDN contains over 15 terabytes of multielectrode array data, consisting of 25,380 items collected from 105 culture batches. Metadata, including culturing and recording information and stimulation/drug application protocols, are linked to each data item. A Matlab toolbox named MEAKit is also provided with the NDDN to ease the analysis of downloaded data items. We expect NDDN to contribute to the fields of both experimental and computational neuroscience.
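As a concrete illustration of the kind of first-pass analysis a downloaded recording might receive: MEAKit itself is a Matlab toolbox distributed with the NDDN, so the Python sketch below is a hypothetical analogue, not its actual API. It computes per-electrode mean firing rates from spike timestamps.

```python
import numpy as np

# Hypothetical Python analogue of a basic MEA analysis step (MEAKit is
# Matlab; names and data layout here are illustrative assumptions):
# compute per-electrode mean firing rates from spike timestamps.
def mean_firing_rates(spike_times, duration_s):
    """spike_times: dict of electrode id -> array of spike times in seconds."""
    return {ch: len(t) / duration_s for ch, t in spike_times.items()}

spikes = {0: np.array([0.1, 0.5, 0.9]), 1: np.array([0.2, 0.7])}
rates = mean_firing_rates(spikes, duration_s=1.0)   # {0: 3.0, 1: 2.0} in Hz
```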

2016 ◽  
Vol 113 (35) ◽  
pp. 9898-9903 ◽  
Author(s):  
Jonathan Mapelli ◽  
Daniela Gandolfi ◽  
Antonietta Vilella ◽  
Michele Zoli ◽  
Albertino Bigiani

Dynamic changes of the strength of inhibitory synapses play a crucial role in processing neural information and in balancing network activity. Here, we report that the efficacy of GABAergic connections between Golgi cells and granule cells in the cerebellum is persistently altered by the activity of glutamatergic synapses. This form of plasticity is heterosynaptic and is expressed as an increase (long-term potentiation, LTP-GABA) or a decrease (long-term depression, LTD-GABA) of neurotransmitter release. LTP-GABA is induced by postsynaptic NMDA receptor activation, leading to calcium increase and retrograde diffusion of nitric oxide, whereas LTD-GABA depends on presynaptic NMDA receptor opening. The sign of plasticity is determined by the activation state of target granule and Golgi cells during the induction processes. By controlling the timing of spikes emitted by granule cells, this form of bidirectional plasticity provides a dynamic control of the granular layer encoding capacity.


2019 ◽  
Author(s):  
Matt Udakis ◽  
Victor Pedrosa ◽  
Sophie E.L. Chamberlain ◽  
Claudia Clopath ◽  
Jack R Mellor

Summary: The formation and maintenance of spatial representations within hippocampal cell assemblies is strongly dictated by patterns of inhibition from diverse interneuron populations. Although it is known that inhibitory synaptic strength is malleable, the induction of long-term plasticity at distinct inhibitory synapses and its regulation of hippocampal network activity are not well understood. Here, we show that inhibitory synapses from parvalbumin- and somatostatin-expressing interneurons undergo long-term depression and potentiation, respectively (PV-iLTD and SST-iLTP), during physiological activity patterns. Both forms of plasticity rely on T-type calcium channel activation to confer synapse specificity but otherwise employ distinct mechanisms. Since parvalbumin and somatostatin interneurons preferentially target the perisomatic and distal dendritic regions of CA1 pyramidal cells, respectively, PV-iLTD and SST-iLTP coordinate a reprioritisation of excitatory inputs from the entorhinal cortex and CA3. Furthermore, circuit-level modelling reveals that PV-iLTD and SST-iLTP cooperate to stabilise place cells while facilitating representation of multiple unique environments within the hippocampal network.


2010 ◽  
Vol 22 (7) ◽  
pp. 1899-1926 ◽  
Author(s):  
Huajin Tang ◽  
Haizhou Li ◽  
Rui Yan

Memory is a fundamental part of computational systems like the human brain. Theoretical models identify memories as attractors of neural network activity patterns, based on the theory that attractor (recurrent) neural networks can capture crucial characteristics of memory such as encoding, storage, retrieval, and long-term and working memory. In such networks, long-term storage of the memory patterns is enabled by synaptic strengths that are adjusted according to activity-dependent plasticity mechanisms (of which the most widely recognized is the Hebbian rule) such that the attractors of the network dynamics represent the stored memories. Most previous studies of associative memory focus on Hopfield-like binary networks, and the learned patterns are often assumed to be uncorrelated so that interactions between memories are minimized. In this letter, we restrict our attention to a more biologically plausible attractor network model and study the neuronal representations of correlated patterns. We examine the role of saliency weights in memory dynamics. Our results demonstrate that the retrieval process of the memorized patterns is characterized by the saliency distribution, which shapes the landscape of the attractors. We establish the conditions under which the network state converges to a unique memory or to multiple memories. The analytical result also holds for variable coding levels and nonbinary activity levels, indicating a general property emerging from correlated memories. Our results confirm the advantage of computing with graded-response neurons over binary neurons (i.e., a reduction of spurious states). We also find that a nonuniform saliency distribution can contribute to the disappearance of spurious states when they exist.
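The Hopfield-style baseline the letter departs from can be sketched briefly. The per-pattern saliency scaling below is an illustrative guess at how saliency weights enter the Hebbian storage rule; the letter's actual model uses graded-response, not binary, neurons.

```python
import numpy as np

# Minimal Hopfield-style attractor sketch with Hebbian outer-product
# storage. The "saliency" weighting of each pattern is an illustrative
# assumption, not the letter's exact model.
def store(patterns, saliency):
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for s, p in zip(saliency, patterns):
        W += s * np.outer(p, p)        # Hebbian rule, scaled by pattern saliency
    np.fill_diagonal(W, 0.0)           # no self-connections
    return W / n

def retrieve(W, state, steps=20):
    for _ in range(steps):
        state = np.sign(W @ state)     # synchronous binary update
        state[state == 0] = 1
    return state

rng = np.random.default_rng(0)
P = rng.choice([-1, 1], size=(3, 64))          # three random +/-1 patterns
W = store(P, saliency=[1.0, 1.0, 1.0])
noisy = P[0].copy()
noisy[:5] *= -1                                 # corrupt 5 of 64 bits
recovered = retrieve(W, noisy)                  # dynamics fall into the attractor
```

Making the saliency entries unequal biases retrieval toward the more salient patterns, which is the dynamic the letter analyzes.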


2018 ◽  
Vol 115 (49) ◽  
pp. 12531-12536 ◽  
Author(s):  
Xiaoyan Gao ◽  
Sergio Castro-Gomez ◽  
Jasper Grendel ◽  
Sabine Graf ◽  
Ute Süsens ◽  
...  

During early postnatal development, sensory regions of the brain undergo periods of heightened plasticity that sculpt neural networks and lay the foundation for adult sensory perception. Such critical periods have also been postulated for learning and memory but remain elusive and poorly understood. Here, we present evidence that the activity-regulated, memory-linked gene Arc/Arg3.1 is transiently up-regulated in the hippocampus during the first postnatal month. Conditional removal of Arc/Arg3.1 during this period permanently alters hippocampal oscillations and diminishes spatial learning capacity throughout adulthood. In contrast, post-developmental removal of Arc/Arg3.1 leaves learning and network activity patterns intact. Long-term memory storage continues to rely on Arc/Arg3.1 expression throughout life. These results demonstrate that Arc/Arg3.1 mediates a critical period for spatial learning, during which it fosters maturation of the hippocampal network activity necessary for future learning and memory storage.


1993 ◽  
Vol 16 (3) ◽  
pp. 417-451 ◽  
Author(s):  
Lokendra Shastri ◽  
Venkat Ajjanagadde

Abstract: Human agents draw a variety of inferences effortlessly, spontaneously, and with remarkable efficiency, as though these inferences were a reflexive response of their cognitive apparatus. Furthermore, these inferences are drawn with reference to a large body of background knowledge. This remarkable human ability seems paradoxical given the complexity of reasoning reported by researchers in artificial intelligence. It also poses a challenge for cognitive science and computational neuroscience: How can a system of simple and slow neuronlike elements represent a large body of systematic knowledge and perform a range of inferences with such speed? We describe a computational model that takes a step toward addressing the cognitive science challenge and resolving the artificial intelligence paradox. We show how a connectionist network can encode millions of facts and rules involving n-ary predicates and variables and perform a class of inferences in a few hundred milliseconds. Efficient reasoning requires the rapid representation and propagation of dynamic bindings. Our model (which we refer to as SHRUTI) achieves this by representing (1) dynamic bindings as the synchronous firing of appropriate nodes, (2) rules as interconnection patterns that direct the propagation of rhythmic activity, and (3) long-term facts as temporal pattern-matching subnetworks. The model is consistent with recent neurophysiological evidence that synchronous activity occurs in the brain and may play a representational role in neural information processing. The model also makes specific psychologically significant predictions about the nature of reflexive reasoning. It identifies constraints on the form of rules that may participate in such reasoning and relates the capacity of the working memory underlying reflexive reasoning to biological parameters such as the lowest frequency at which nodes can sustain synchronous oscillations and the coarseness of synchronization.
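The core mechanism, binding variables by firing phase, can be sketched in a few lines. Everything below (entity names, role nodes, the single rule) is an illustrative toy, not the published SHRUTI network.

```python
# Toy sketch of SHRUTI's central mechanism: representing dynamic variable
# bindings as synchronous firing, i.e. shared phase slots within a rhythmic
# cycle. The entities, role names, and the single rule are illustrative
# assumptions, not the published network.

# Each active entity occupies its own phase slot in the current cycle.
bindings = {"John": 0, "Mary": 1, "Book": 2}

# Role nodes of the active fact give(John, Mary, Book) fire in the phase
# of the entity bound to them.
roles = {"give-giver": "John", "give-recipient": "Mary", "give-object": "Book"}

# The rule give(x, y, z) -> own(y, z), encoded as argument correspondences
# that direct the propagation of rhythmic activity.
rule = {"own-owner": "give-recipient", "own-object": "give-object"}

# Propagating activity along the rule copies phases, and with them bindings.
derived = {target: bindings[roles[source]] for target, source in rule.items()}
# own-owner now fires in Mary's phase and own-object in Book's phase, so
# the inference own(Mary, Book) is represented without copying any symbols.
```

Because a binding is just a shared phase, the number of simultaneously active bindings is bounded by how many distinct phases a cycle can hold, which is the working-memory constraint the abstract mentions.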


2004 ◽  
Vol 04 (01) ◽  
pp. L97-L106 ◽  
Author(s):  
HANS LILJENSTRÖM ◽  
GEIR HALNES

The issue of noise in biological systems is primarily a question of relations: between order and disorder, between stability and flexibility, and between processes at different temporal and spatial scales. In this paper, we use computational models of cortical structures to investigate relations between structure, dynamics, and function of such systems. In particular, we investigate the nature and role of noise at different organizational levels of the nervous system, emphasizing the neural network level. We show that microscopic noise can induce global network oscillations and pseudo-chaos, which make neural information processing more efficient. We find optimal noise levels for which the convergence to stored memory attractor states reaches a maximum, akin to stochastic resonance. We finally discuss the relation between neural and mental processes, and how computational models may relate to real neural systems.
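The noise-sweep experiment behind the stochastic-resonance result can be illustrated with a single threshold unit instead of the paper's cortical network model; the signal amplitude, threshold, and noise levels below are illustrative assumptions.

```python
import numpy as np

# Sketch of a stochastic-resonance noise sweep on a single threshold unit
# (a stand-in for the paper's cortical network). Signal amplitude,
# threshold, and noise levels are illustrative assumptions.
def output_signal_correlation(noise_sigma, seed=1):
    """Correlation between a subthreshold periodic input and the unit's output."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 10.0, 5000)
    s = 0.7 * np.sin(2.0 * np.pi * t)                       # peak 0.7 < threshold 1.0
    out = (s + rng.normal(0.0, noise_sigma, t.size) > 1.0).astype(float)
    if out.std() == 0.0:
        return 0.0                                          # unit never fires: nothing transmitted
    return float(np.corrcoef(out, s)[0, 1])

levels = [0.0, 0.3, 1.0, 5.0]
corrs = [output_signal_correlation(sigma) for sigma in levels]
# Without noise the subthreshold signal is never transmitted; moderate noise
# lifts it over threshold near its peaks; very strong noise drowns it out.
```

The intermediate maximum of this input-output correlation over noise level is the same signature the paper reports for convergence to stored memory attractors.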


e-Neuroforum ◽  
2013 ◽  
Vol 19 (2) ◽  
Author(s):  
Fritjof Helmchen ◽  
Mark Hübener

Abstract: Neuronal networks in the spotlight: deciphering cellular activity patterns with fluorescent proteins. The brain’s astounding achievements regarding movement control and sensory processing are based on complex spatiotemporal activity patterns in the relevant neuronal networks. Our understanding of neuronal network activity is, however, still poor, not least because of the experimental difficulty of directly observing neural circuits at work in the living brain (in vivo). Over the last decade, new opportunities have emerged, especially utilizing 2-photon microscopy, to investigate neuronal networks in action. Central to this progress was the development of fluorescent proteins that change their emission depending on cell activity, enabling the visualization of dynamic activity patterns in local neuronal populations. Currently, genetically encoded calcium indicators, proteins which report neuronal activity based on action potential-evoked calcium influx, are increasingly being used. Long-term expression of these indicators allows repeated monitoring of the same neurons over weeks and months, such that the stability and plasticity of their functional properties can be characterized. Furthermore, permanent indicator expression facilitates the correlation of cellular activity patterns and behavior in awake animals. Using examples from recent studies of information processing in mouse neocortex, we review in this article these fascinating new possibilities and discuss the great potential of fluorescent proteins to elucidate the mysteries of neural circuits.


2020 ◽  
Author(s):  
Jules Brochard ◽  
Jean Daunizeau

Abstract: Computational investigations of learning and decision making suggest that systematic deviations from adaptive behavior may be the incidental outcome of biological constraints imposed on neural information processing. In particular, recent studies indicate that range adaptation, i.e., the mechanism by which neurons dynamically tune their output firing properties to match the changing statistics of their inputs, may drive plastic changes in the brain’s decision system that induce systematic deviations from rationality. Here, we ask whether behaviorally relevant neural information processing may be distorted by other incidental, hard-wired biological constraints, in particular Hebbian plasticity. One of our main contributions is to propose a simple computational method for identifying (and comparing) the neural signatures of such biological mechanisms or constraints. Using ANNs (artificial neural network models) and RSA (representational similarity analysis), we compare the neural signatures of two types of hard-wired biological mechanisms/constraints: range adaptation and Hebbian plasticity. We apply the approach to two open fMRI datasets acquired while people made decisions under risk. In both cases, we show that although people’s apparently inconsistent choices are well explained by biologically constrained ANNs, choice data alone do not discriminate between range adaptation and Hebbian plasticity. However, RSA shows that neural activity patterns in the bilateral striatum and amygdala are more compatible with Hebbian plasticity. Finally, the strength of evidence for Hebbian plasticity in these structures predicts inter-individual differences in choice inconsistency.
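The RSA step, comparing the geometry of two sets of activity patterns, can be sketched compactly. The random "neural" and "model" patterns below are stand-ins, not the paper's fMRI data or ANN activations.

```python
import numpy as np

# Minimal RSA sketch: build a representational dissimilarity matrix (RDM)
# for each data source and compare RDMs with a rank correlation. The
# "neural" and "model" patterns here are random stand-ins.
def rdm(patterns):
    """1 - Pearson correlation between every pair of condition patterns."""
    return 1.0 - np.corrcoef(patterns)

def upper_triangle(m):
    return m[np.triu_indices_from(m, k=1)]

def spearman(a, b):
    """Rank correlation via ranking followed by Pearson (no ties expected)."""
    rank = lambda x: np.argsort(np.argsort(x))
    return float(np.corrcoef(rank(a), rank(b))[0, 1])

rng = np.random.default_rng(0)
neural = rng.normal(size=(6, 50))                        # 6 conditions x 50 units/voxels
model = neural + 0.1 * rng.normal(size=neural.shape)     # a model close to the data
similarity = spearman(upper_triangle(rdm(neural)), upper_triangle(rdm(model)))
```

Comparing the neural RDM against the RDMs of two candidate models (here, range-adapting versus Hebbian ANN layers) is what lets the method adjudicate between mechanisms that choice data alone cannot separate.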


2013 ◽  
Vol 109 (2) ◽  
pp. 296-305 ◽  
Author(s):  
Michael S. Carroll ◽  
Jan-Marino Ramirez

Rhythmically active networks are typically composed of neurons that can be classified as silent, tonic spiking, or rhythmic bursting based on their intrinsic activity patterns. Within these networks, neurons are thought to discharge in distinct phase relationships with their overall network output, and it has been hypothesized that bursting pacemaker neurons may lead and potentially trigger cycle onsets. We used multielectrode recording from 72 experiments to test these ideas in rhythmically active slices containing the pre-Bötzinger complex, a region critical for breathing. Following synaptic blockade, respiratory neurons exhibited a gradient of intrinsic spiking to rhythmic bursting activities and thus defied an easy classification into bursting pacemaker and nonbursting categories. Features of their firing activity within the functional network were analyzed for correlation with subsequent rhythmic bursting in synaptic isolation. Higher firing rates through all phases of fictive respiration statistically predicted bursting pacemaker behavior. However, a cycle-by-cycle analysis indicated that respiratory neurons were stochastically activated with each burst. Intrinsically bursting pacemakers led some population bursts and followed others. This variability was not reproduced in traditional fully interconnected computational models, while sparsely connected network models reproduced these results both qualitatively and quantitatively. We hypothesize that pacemaker neurons do not act as clock-like drivers of the respiratory rhythm but rather play a flexible and dynamic role in the initiation and stabilization of each burst. Thus, at the behavioral level, each breath can be thought of as de novo assembly of a stochastic collaboration of network topology and intrinsic properties.
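The stochastic burst-leader idea can be illustrated with a toy race model: when recruitment into each burst is probabilistic, the first neuron to fire changes from cycle to cycle. All parameters below are illustrative assumptions, not fitted to the pre-Bötzinger data.

```python
import numpy as np

# Toy race model of stochastic burst leadership: each neuron's latency to
# join a burst is an intrinsic bias plus random synaptic recruitment delay,
# so the burst "leader" varies cycle to cycle. Parameters are illustrative.
rng = np.random.default_rng(42)
n_neurons, n_cycles = 50, 200
bias = rng.uniform(0.0, 5.0, n_neurons)        # ms; spread of intrinsic excitability
leaders = []
for _ in range(n_cycles):
    # latency this cycle: intrinsic bias + stochastic recruitment jitter
    latency = bias + rng.exponential(5.0, n_neurons)
    leaders.append(int(np.argmin(latency)))
n_distinct = len(set(leaders))
# Many different neurons lead at least one burst: leadership is shared
# rather than fixed to the most excitable cells, matching the cycle-by-cycle
# variability the recordings show.
```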

