Dimensionality in recurrent spiking networks: global trends in activity and local origins in connectivity

2018 ◽  
Author(s):  
Stefano Recanatesi ◽  
Gabriel Koch Ocker ◽  
Michael A. Buice ◽  
Eric Shea-Brown

Abstract
The dimensionality of a network's collective activity is of increasing interest in neuroscience, because dimensionality provides a compact measure of how coordinated network-wide activity is, in terms of the number of modes (or degrees of freedom) that it can independently explore. A low number of modes suggests a compressed, low-dimensional neural code and reveals interpretable dynamics [1], while findings of high dimension may suggest flexible computations [2, 3]. Here, we address the fundamental question of how dimensionality is related to connectivity, in both autonomous and stimulus-driven networks. Working with a simple spiking network model, we derive three main findings. First, the dimensionality of global activity patterns can be strongly, and systematically, regulated by local connectivity structures. Second, dimensionality is a better indicator than average correlations of how constrained neural activity is. Third, stimulus-evoked neural activity interacts systematically with neural connectivity patterns, leading to network responses of either greater or lesser dimensionality than the stimulus.

Author summary
New recording technologies are producing an explosion of data on neural activity, revealing the simultaneous activity of hundreds or even thousands of neurons. In principle, the activity of these neurons could explore a vast space of possible patterns. This is what is meant by high-dimensional activity: the number of degrees of freedom (or "modes") of multineuron activity is large, perhaps as large as the number of neurons themselves. In practice, estimates of dimensionality differ strongly from case to case, and do so in interesting ways across experiments, species, and brain areas. The outcome matters for much more than accurately describing neural activity: findings of low dimension have been proposed to allow data compression, denoising, and easily readable neural codes, while findings of high dimension have been proposed as signatures of powerful and general computations. So what is it about a neural circuit that leads to one case or the other? Here, we derive a set of principles that describe how the connectivity of a spiking neural network determines the dimensionality of the activity it produces. These show that, in some cases, highly localized features of connectivity exert strong control over a network's global dimensionality, an interesting finding in the context of, e.g., learning rules that act locally. We also show how dimension can differ greatly from what first meets the eye with typical "pairwise" measurements, and how stimuli and intrinsic connectivity interact in shaping the overall dimension of a network's response.
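The notion of dimensionality used here can be made concrete with the participation ratio, a standard eigenvalue-based summary of how many covariance modes population activity explores. The sketch below is illustrative only (plain numpy on invented toy data); the paper's exact estimator and network model may differ.

```python
import numpy as np

def participation_ratio(activity):
    """Dimensionality of population activity via the participation ratio
    of covariance eigenvalues: PR = (sum_i l_i)^2 / sum_i l_i^2.
    activity: array of shape (n_timebins, n_neurons)."""
    cov = np.cov(activity, rowvar=False)      # neuron-by-neuron covariance
    eigvals = np.linalg.eigvalsh(cov)         # symmetric matrix -> eigvalsh
    eigvals = np.clip(eigvals, 0.0, None)     # guard tiny negative values
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

# Toy usage: 100 neurons whose activity lives in a 3-dimensional
# latent space plus weak private noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(5000, 3))
mixing = rng.normal(size=(3, 100))
activity = latent @ mixing + 0.1 * rng.normal(size=(5000, 100))
print(participation_ratio(activity))  # near 3, far below the 100 neurons
```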

eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Ni Ji ◽  
Gurrein K Madan ◽  
Guadalupe I Fabre ◽  
Alyssa Dayan ◽  
Casey M Baker ◽  
...  

To adapt to their environments, animals must generate behaviors that are closely aligned to a rapidly changing sensory world. However, behavioral states such as foraging or courtship typically persist over long time scales to ensure proper execution. It remains unclear how neural circuits generate persistent behavioral states while maintaining the flexibility to select among alternative states when the sensory context changes. Here, we elucidate the functional architecture of a neural circuit controlling the choice between roaming and dwelling states, which underlie exploration and exploitation during foraging in C. elegans. By imaging ensemble-level neural activity in freely moving animals, we identify stereotyped changes in circuit activity corresponding to each behavioral state. Combining circuit-wide imaging with genetic analysis, we find that mutual inhibition between two antagonistic neuromodulatory systems underlies the persistence and mutual exclusivity of the neural activity patterns observed in each state. Through machine learning analysis and circuit perturbations, we identify a sensory processing neuron that can transmit information about food odors to both the roaming and dwelling circuits and bias the animal towards different states in different sensory contexts, giving rise to context-appropriate state transitions. Our findings reveal a potentially general circuit architecture that enables flexible, sensory-driven control of persistent behavioral states.
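The mutual-inhibition motif described above can be caricatured with a two-unit rate model: cross-inhibition makes one state persist until an input tips the circuit into the other. This is a generic winner-take-all toy, not the authors' circuit; the units, weights, and noise level are all assumptions chosen for illustration.

```python
import numpy as np

def simulate(sensory_bias, steps=20000, dt=0.001, tau=0.1, seed=0):
    """Two mutually inhibiting units (stand-ins for 'roaming' and
    'dwelling' modules); sensory_bias tilts the drive between them."""
    rng = np.random.default_rng(seed)
    r = np.array([0.5, 0.5])                 # activities of the two modules
    w_inh = 6.0                              # mutual inhibition (assumed)
    drive = np.array([1.0 + sensory_bias, 1.0 - sensory_bias])
    for _ in range(steps):
        inp = drive - w_inh * r[::-1] + 0.3 * rng.normal(size=2)
        r += dt / tau * (-r + np.clip(inp, 0.0, None))
    return r

# With no bias the circuit settles into one state and stays there
# (persistence, mutual exclusivity); a context signal flips the winner.
print(simulate(0.0))    # one unit high, the other suppressed
print(simulate(-0.5))   # the opposite unit wins
```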


2020 ◽  
Author(s):  
Ege Altan ◽  
Sara A. Solla ◽  
Lee E. Miller ◽  
Eric J. Perreault

Abstract
It is generally accepted that the number of neurons in a given brain area far exceeds the information that area encodes. For example, motor areas of the human brain contain tens of millions of neurons that control the activation of tens or at most hundreds of muscles. This massive redundancy implies the covariation of many neurons, which constrains the population activity to a low-dimensional manifold within the space of all possible patterns of neural activity. To gain a conceptual understanding of the complexity of the neural activity within a manifold, it is useful to estimate its dimensionality: the number of degrees of freedom required to describe the observed population activity without significant information loss. While there are many algorithms for dimensionality estimation, we do not know which are well suited for analyzing neural activity. The objective of this study was to evaluate the efficacy of several representative algorithms for estimating linearly and nonlinearly embedded data. We generated synthetic neural recordings with known intrinsic dimensionality and used them to test the algorithms' accuracy and robustness. We emulated some of the important challenges associated with experimental data by adding noise, altering the nature of the embedding from the low-dimensional manifold to the high-dimensional recordings, varying the dimensionality of the manifold, and limiting the amount of available data. We demonstrated that linear algorithms overestimate the dimensionality of nonlinear, noise-free data, and that in cases of high noise, most algorithms overestimate dimensionality. We therefore developed a denoising algorithm based on deep learning, the "Joint Autoencoder," which significantly improved subsequent dimensionality estimation. Critically, we found that all algorithms failed when the dimensionality was high (above 20) or when the amount of data used for estimation was low. Based on the challenges we observed, we formulated a pipeline for estimating the dimensionality of experimental neural data.

Author Summary
The number of neurons we can record from has increased exponentially for decades; today we can simultaneously record from thousands of neurons. However, their individual firing rates are highly redundant. One approach to identifying important features in redundant data is to estimate the dimensionality of the neural recordings: the number of degrees of freedom required to describe the data without significant information loss. A better understanding of dimensionality may also uncover the mechanisms of computation within a neural circuit; circuits carrying out complex computations might be higher-dimensional than those carrying out simpler ones. Typically, studies have quantified neural dimensionality using one of several available methods, despite a lack of consensus on which is most appropriate for neural data. In this work, we tested the accuracy of several methods on simulated neural data with properties mimicking those of actual neural recordings. Based on these results, we devised an analysis pipeline to estimate the dimensionality of neural recordings. Our work will allow scientists to extract informative features from a large number of highly redundant neurons, as well as quantify the complexity of the information these neurons encode.
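The simplest linear baseline among such estimators is variance-threshold PCA. The sketch below (hypothetical function name, plain numpy) illustrates the abstract's central caution: on data sampled from a nonlinearly embedded manifold of known intrinsic dimensionality, a linear estimator overestimates.

```python
import numpy as np

def pca_dimensionality(data, var_threshold=0.95):
    """Linear estimate: number of principal components needed to
    explain var_threshold of the total variance."""
    centered = data - data.mean(axis=0)
    eigvals = np.linalg.svd(centered, compute_uv=False) ** 2
    cumulative = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(cumulative, var_threshold) + 1)

rng = np.random.default_rng(1)
# Known 1-dimensional manifold (a circle) embedded in 50 dimensions.
theta = rng.uniform(0, 2 * np.pi, size=2000)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # intrinsic dim 1
embedding = rng.normal(size=(2, 50))
data = circle @ embedding + 0.05 * rng.normal(size=(2000, 50))

print(pca_dimensionality(data))  # reports 2: linear methods overestimate
```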


2021 ◽  
Author(s):  
Nikolai M. Chapochnikov ◽  
Cengiz Pehlevan ◽  
Dmitri B. Chklovskii

Abstract
One major question in neuroscience is how to relate connectomes to neural activity, circuit function, and learning. We offer an answer in the peripheral olfactory circuit of the Drosophila larva, composed of olfactory receptor neurons (ORNs) connected through feedback loops with interconnected inhibitory local neurons (LNs). We combine structural and activity data and, using a holistic normative framework based on similarity matching, propose a biologically plausible mechanistic model of the circuit. Our model predicts the ORN → LN synaptic weights found in the connectome and demonstrates that they reflect correlations in ORN activity patterns. Additionally, our model explains the relation between ORN → LN and LN–LN synaptic weights and the emergence of different LN types. This global synaptic organization can arise autonomously through Hebbian plasticity, allowing the circuit to adapt to different environments in an unsupervised manner. Functionally, we propose that LNs extract redundant input correlations and dampen them in ORNs, partially whitening and normalizing the stimulus representations in ORNs. Our work proposes a comprehensive framework for combining structure, activity, function, and learning, and uncovers a general and potent circuit motif that can learn and extract significant input features and render stimulus representations more efficient.

Significance
The brain represents information with patterns of neural activity. At the periphery, due to the properties of the external world and of encoding neurons, these patterns contain correlations, which are detrimental for stimulus discrimination. We study the peripheral olfactory neural circuit of the Drosophila larva, which preprocesses neural representations before relaying them to higher brain areas. A comprehensive understanding of this preprocessing has, however, been lacking. Here, we propose a mechanistic and normative framework describing the function of the circuit and predict the circuit's synaptic organization from its input neural activity. We show how the circuit can autonomously adapt to different environments, extract stimulus features, and decorrelate and normalize input representations, facilitating odor discrimination downstream.
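The proposed whitening function of the LNs can be illustrated offline: take the top eigenvectors of the input correlation matrix as stand-ins for the predicted ORN → LN weights and subtract the resulting LN feedback. This is a hedged caricature of the similarity-matching model, not the authors' circuit; the channel count, number of LNs, and dampening gain are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "ORN" activity: 21 channels sharing a redundant fluctuation.
n_orn, n_samples = 21, 5000
shared = rng.normal(size=(n_samples, 1))
x = shared @ rng.normal(size=(1, n_orn)) + rng.normal(size=(n_samples, n_orn))

# Similarity-matching-style prediction: feedforward ORN -> LN weights
# align with the top eigenvectors of the ORN covariance matrix.
cov = np.cov(x, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
k = 2                                  # number of LNs (assumed)
w = eigvecs[:, -k:]                    # ORN -> LN weights, top k modes

# LNs read out the redundant modes and feed back inhibition,
# partially whitening the ORN representation.
gain = 0.9                             # fractional dampening (assumed)
x_whitened = x - gain * (x @ w) @ w.T

print(np.linalg.eigvalsh(np.cov(x, rowvar=False))[-1])           # large mode
print(np.linalg.eigvalsh(np.cov(x_whitened, rowvar=False))[-1])  # dampened
```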


2017 ◽  
Author(s):  
Emil Wärnberg ◽  
Arvind Kumar

Abstract
Several recent studies have shown that neural activity in vivo tends to be constrained to a low-dimensional manifold. Such activity does not arise in simulated neural networks with homogeneous connectivity, and it has been suggested that it is indicative of some other connectivity pattern in neuronal networks. Surprisingly, the structure of the intrinsic manifold of the network activity puts constraints on learning: for instance, animals find it difficult to perform tasks that require a change in the intrinsic manifold. Here, we demonstrate that the Neural Engineering Framework (NEF) can be adapted to design a biologically plausible spiking neuronal network that exhibits low-dimensional activity. Consistent with experimental observations, the resulting synaptic weight distribution is heavy-tailed (log-normal). In our model, a change in the intrinsic manifold of the network activity requires rewiring of the whole network, which may be either impossible or a very slow process. This observation explains why learning is easier when it does not require the neural activity to leave its intrinsic manifold.

Significance statement
A network in the brain consists of thousands of neurons. A priori, we expect that the network will have as many degrees of freedom as it has neurons. Surprisingly, experimental evidence suggests that local brain activity is confined to a space spanned by 10 variables. Here, we describe an approach to constructing spiking neuronal networks that exhibit low-dimensional activity, and we address the question of how the intrinsic dimensionality of network activity restricts learning, as suggested by recent experiments. Specifically, we show that tasks requiring animals to move the network activity outside its intrinsic space would entail large changes in neuronal connectivity; therefore, animals acquire such tasks either slowly or not at all.
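A minimal way to see how connectivity can enforce low-dimensional activity is a rank-k recurrent weight matrix factorized NEF-style into encoders and decoders. The sketch below uses simplified linear rate dynamics rather than the paper's spiking neurons; all sizes and scalings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, k = 200, 3     # many neurons, low intrinsic dimensionality

# Factorized connectivity: W = encoders @ decoders has rank k, so the
# recurrent dynamics can only sustain activity in a k-dim subspace.
encoders = rng.normal(size=(n_neurons, k))
decoders = rng.normal(size=(k, n_neurons)) / n_neurons
w = encoders @ decoders

# Linear rate dynamics driven through the same encoders, so the state
# stays on the k-dimensional intrinsic manifold.
tau, dt, steps = 0.05, 0.001, 5000
r = encoders @ rng.normal(size=k)
history = np.empty((steps, n_neurons))
for t in range(steps):
    latent_drive = encoders @ rng.normal(size=k)
    r += dt / tau * (-r + w @ r + latent_drive)
    history[t] = r

# Covariance spectrum: only ~k eigenvalues are appreciably nonzero.
eigvals = np.sort(np.linalg.eigvalsh(np.cov(history, rowvar=False)))[::-1]
print(eigvals[:6])
```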


2020 ◽  
Vol 6 (9) ◽  
pp. eaay4213 ◽  
Author(s):  
Yang Hu ◽  
Fred Florio ◽  
Zhizhong Chen ◽  
W. Adam Phelan ◽  
Maxime A. Siegler ◽  
...  

Spin and valley degrees of freedom in materials without inversion symmetry promise previously unknown device functionalities, such as spin-valleytronics. Control of material symmetry with electric fields (ferroelectricity), while breaking additional symmetries including mirror symmetry, could yield phenomena in which chirality, spin, valley, and crystal potential are strongly coupled. Here we report the synthesis of a halide perovskite semiconductor that simultaneously exhibits switchable photoferroelectricity and chirality. Spectroscopic and structural analysis, together with first-principles calculations, identifies the material as a previously unknown low-dimensional hybrid perovskite, ((R)-(−)-1-cyclohexylethylammonium/(S)-(+)-1-cyclohexylethylammonium)PbI3. Optical and electrical measurements characterize its semiconducting, ferroelectric, switchable pyroelectric, and switchable photoferroelectric properties. Temperature-dependent structural, dielectric, and transport measurements reveal a ferroelectric-paraelectric phase transition, and circular dichroism spectroscopy confirms the material's chirality. The development of a material with such a combination of properties will facilitate the exploration of phenomena such as electric-field- and chiral-enantiomer-dependent Rashba-Dresselhaus splitting and circular photogalvanic effects.


2016 ◽  
Vol 8 (6) ◽  
Author(s):  
Joshua T. Bryson ◽  
Xin Jin ◽  
Sunil K. Agrawal

Designing an effective cable architecture for a cable-driven robot becomes challenging as the number of cables and degrees of freedom of the robot increases. A methodology has previously been developed to identify the optimal design of a cable-driven robot for a given task using stochastic optimization. This approach is effective in providing an optimal solution for robots with high-dimensional design spaces, but it does not provide insight into the robustness of the optimal solution to the errors in the configuration parameters that arise when a design is implemented. In this work, a methodology is developed to analyze the robustness of an optimal design's performance to changes in the configuration parameters. This robustness analysis can inform the implementation of the optimal design in a robot while taking into account the precision and tolerances of the implementation. An optimized cable-driven robot leg is used as a motivating example to illustrate the application of the configuration robustness analysis. Following the methodology, the effect of design variations on robot performance is analyzed, and a modified design is developed that minimizes the potential performance degradation due to implementation errors in the design parameters. A robot leg is constructed and used to validate the robustness analysis by demonstrating the predicted effects of variations in the design parameters on the performance of the robot.
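A robustness analysis of this kind can be sketched as a Monte Carlo sweep over implementation errors around the optimum. The performance function and tolerances below are hypothetical placeholders, not the paper's cable-robot model; only the general technique (sampling perturbed designs and summarizing the performance spread) is illustrated.

```python
import numpy as np

def performance(params):
    """Hypothetical stand-in for the robot's task-performance metric,
    peaked at the nominal optimal configuration."""
    optimum = np.array([1.0, 0.5, 2.0])
    return -np.sum((params - optimum) ** 2)

def robustness(optimal_params, tolerance, n_samples=10000, seed=0):
    """Monte Carlo robustness: sample implementation errors within the
    stated tolerance and summarize the resulting performance spread."""
    rng = np.random.default_rng(seed)
    perturbed = optimal_params + rng.uniform(
        -tolerance, tolerance, size=(n_samples, optimal_params.size))
    scores = np.array([performance(p) for p in perturbed])
    return scores.mean(), scores.std(), scores.min()

opt = np.array([1.0, 0.5, 2.0])
print(robustness(opt, tolerance=0.01))  # tight tolerance: little degradation
print(robustness(opt, tolerance=0.1))   # loose tolerance: larger spread
```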


2018 ◽  
Vol 37 (10) ◽  
pp. 1233-1252 ◽  
Author(s):  
Jonathan Hoff ◽  
Alireza Ramezani ◽  
Soon-Jo Chung ◽  
Seth Hutchinson

In this article, we present methods to optimize the design and flight characteristics of a biologically inspired bat-like robot. In previous work, we designed the topological structure of this robot's wing kinematics; here we present methods to optimize the geometry of this structure and to compute actuator trajectories such that its wingbeat pattern closely matches that of its biological counterparts. Our approach is motivated by recent studies of biological bat flight showing that the salient aspects of wing motion can be accurately represented in a low-dimensional space. Although bats have over 40 degrees of freedom (DoFs), our robot possesses several biologically meaningful morphing specializations. We use principal component analysis (PCA) to characterize the two most dominant modes of biological bat flight kinematics, and we optimize our robot's parametric kinematics to mimic these. The method yields a robot that is reduced from five degrees of actuation (DoAs) to just three, and that actively folds its wings within a wingbeat period. As a result of mimicking these synergies, the robot produces an average net lift improvement of 89% over the same robot with wings that cannot fold.
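The synergy-extraction step can be illustrated with PCA on a toy joint-angle time series. The data below are synthetic stand-ins for bat wing kinematics; only the method, extracting the dominant principal modes of wing motion, mirrors the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for wing kinematics: 40 joint angles over one
# wingbeat cycle, driven mostly by two underlying motion synergies.
t = np.linspace(0, 2 * np.pi, 500)
synergies = np.stack([np.sin(t), np.sin(2 * t + 0.7)], axis=1)  # (500, 2)
loadings = rng.normal(size=(2, 40))
angles = synergies @ loadings + 0.05 * rng.normal(size=(500, 40))

# PCA: the top principal components are the dominant kinematic modes.
centered = angles - angles.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)

print(explained[:4])   # the first two modes capture nearly all variance
modes = vt[:2]         # dominant modes for the robot to mimic
```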


2017 ◽  
Vol 24 (3) ◽  
pp. 277-293 ◽  
Author(s):  
Selen Atasoy ◽  
Gustavo Deco ◽  
Morten L. Kringelbach ◽  
Joel Pearson

A fundamental characteristic of spontaneous brain activity is coherent oscillation over a wide range of frequencies. Interestingly, these temporal oscillations are highly correlated among spatially distributed cortical areas, forming structured correlation patterns known as the resting state networks, although the brain is never truly at "rest." Here, we introduce the concept of harmonic brain modes: fundamental building blocks of complex spatiotemporal patterns of neural activity. We define these elementary harmonic brain modes as harmonic modes of structural connectivity, that is, connectome harmonics, yielding fully synchronous neural activity patterns with different frequency oscillations emerging on, and constrained by, the particular structure of the brain. This definition implicitly links the hitherto poorly understood dimensions of space and time in brain dynamics and its underlying anatomy. Further, we show how harmonic brain modes can explain the relationship between neurophysiological, temporal, and network-level changes in the brain across different mental states (wakefulness, sleep, anesthesia, psychedelic states). Notably, when neural activity is decoded as activation of connectome harmonics, its spatial and temporal characteristics emerge naturally from the interplay between excitation and inhibition, and this critical relation fits the spatial, temporal, and neurophysiological changes associated with different mental states. Thus, the framework of harmonic brain modes not only establishes a relation between the spatial structure of correlation patterns and temporal oscillations (linking space and time in brain dynamics), but also provides a new set of tools for understanding the fundamental principles underlying brain dynamics in different states of consciousness.
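Concretely, connectome harmonics are eigenmodes of a graph Laplacian built from structural connectivity. The sketch below computes them for a random toy adjacency matrix; the actual analysis uses human connectome data, so everything here is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy symmetric "structural connectivity" (adjacency) for 100 nodes.
n = 100
upper = np.triu(rng.random((n, n)) < 0.1, 1).astype(float)
adjacency = upper + upper.T

# Connectome harmonics: eigenvectors of the graph Laplacian L = D - A,
# ordered by eigenvalue (spatial frequency on the connectivity graph).
degree = np.diag(adjacency.sum(axis=1))
laplacian = degree - adjacency
eigvals, harmonics = np.linalg.eigh(laplacian)

# harmonics[:, 0] is the constant mode (eigenvalue ~0 for a connected
# graph); higher columns oscillate at increasingly fine spatial scales.
print(eigvals[:5])

# Any activity pattern decomposes as a superposition of harmonics.
pattern = rng.normal(size=n)
coefficients = harmonics.T @ pattern
```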


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Kevin A Bolding ◽  
Shivathmihai Nagappan ◽  
Bao-Xia Han ◽  
Fan Wang ◽  
Kevin M Franks

Pattern completion, or the ability to retrieve stable neural activity patterns from noisy or partial cues, is a fundamental feature of memory. Theoretical studies indicate that recurrently connected auto-associative or discrete attractor networks can perform this process. Although pattern completion and attractor dynamics have been observed in various recurrent neural circuits, the role recurrent circuitry plays in implementing these processes remains unclear. In recordings from head-fixed mice, we found that odor responses in the olfactory bulb degrade under ketamine/xylazine anesthesia, while responses immediately downstream, in piriform cortex, remain robust. Recurrent connections are required to stabilize cortical odor representations across states. Moreover, piriform odor representations exhibit attractor dynamics, both within and across trials, and these too are abolished when recurrent circuitry is eliminated. Here, we present converging evidence that recurrently connected piriform populations stabilize sensory representations in response to degraded inputs, consistent with an auto-associative function for piriform cortex supported by recurrent circuitry.
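Auto-associative pattern completion of the kind invoked here is classically illustrated by a Hopfield network, in which a Hebbian weight matrix turns stored patterns into attractors. The sketch below is that textbook toy, not a model of piriform cortex; the sizes and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n, n_patterns = 200, 5

# Store random binary (+/-1) patterns with a Hebbian outer-product rule.
patterns = rng.choice([-1, 1], size=(n_patterns, n))
w = (patterns.T @ patterns) / n
np.fill_diagonal(w, 0.0)                 # no self-connections

# Degrade a stored pattern: flip 20% of its units (a noisy, partial cue).
cue = patterns[0].copy()
flip = rng.choice(n, size=n // 5, replace=False)
cue[flip] *= -1

# Recurrent dynamics pull the state back to the stored attractor.
state = cue.copy()
for _ in range(10):
    state = np.sign(w @ state)
    state[state == 0] = 1

print(np.mean(state == patterns[0]))     # ~1.0: pattern completed
```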

