Comparing average network signals and neural mass signals in systems with low-synchrony

2017 ◽  
Author(s):  
P. Tewarie ◽  
A. Daffertshofer ◽  
B.W. van Dijk

Abstract: Neural mass models are accepted as efficient techniques for modelling empirical observations such as disturbed oscillations or neuronal synchronization. Neural mass models rest on the mean-field assumption, i.e. they capture the mean activity of a neuronal population. However, it is unclear whether neural mass models still describe the mean activity of a neuronal population when the underlying network topology is not homogeneous. Here, we test whether the mean activity of a neuronal population can be described by neural mass models in the presence of neuronal loss and when the connections in the network become sparse. To this end, we derive two neural mass models from a conductance-based leaky integrate-and-fire (LIF) model. We then compare the power spectral densities of the mean activity of a network of inhibitory and excitatory LIF neurons with those of the neural mass models by computing the Kolmogorov-Smirnov test statistic. First, we find that when the number of neurons in a fully connected LIF network exceeds 300, the neural mass model is a good description of the mean activity. Second, if the connection density in the LIF network falls below a critical value, the neurons within the LIF network desynchronize and the neural mass description fails. We therefore conclude that neural mass models can be used to analyse empirical observations if the neuronal network of interest is sufficiently large and its neurons synchronize.
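The spectral comparison described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the two signals are toy stand-ins (a common 10 Hz oscillation plus independent noise), and all parameters are placeholders.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)

def ks_distance(p1, p2):
    """Kolmogorov-Smirnov distance between two power spectral
    densities, each normalized to a probability distribution
    over frequency bins."""
    c1 = np.cumsum(p1) / np.sum(p1)
    c2 = np.cumsum(p2) / np.sum(p2)
    return float(np.max(np.abs(c1 - c2)))

# Toy stand-ins for the two signals being compared: the mean
# activity of the LIF network and the neural mass output.
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
mean_lif = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
mass_out = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)

_, p_lif = welch(mean_lif, fs=fs, nperseg=1024)
_, p_mass = welch(mass_out, fs=fs, nperseg=1024)
d = ks_distance(p_lif, p_mass)  # small d: mass model matches the network
```

A small distance d indicates that the mass model's spectrum is a good surrogate for the network's mean activity; the abstract's criteria (network size, connection density) determine when d stays small.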

2021 ◽  
Author(s):  
Áine Byrne ◽  
James Ross ◽  
Rachel Nicks ◽  
Stephen Coombes

Abstract: Neural mass models have been used since the 1970s to model the coarse-grained activity of large populations of neurons. They have proven especially fruitful for understanding brain rhythms. However, although motivated by neurobiological considerations, they are phenomenological in nature and cannot hope to recreate some of the rich repertoire of responses seen in real neuronal tissue. Here we consider a simple spiking neuron network model that has recently been shown to admit an exact mean-field description for both synaptic and gap-junction interactions. The mean-field model takes a similar form to a standard neural mass model, with an additional dynamical equation describing the evolution of within-population synchrony. As well as reviewing the origins of this next-generation mass model, we discuss its extension to describe an idealised spatially extended planar cortex. To emphasise the usefulness of this model for EEG/MEG modelling, we show how it can be used to uncover the role of local gap-junction coupling in shaping large-scale synaptic waves.
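The exact mean-field description referred to here is closely related to the Montbrió-Pazó-Roxin equations for quadratic integrate-and-fire neurons. The following sketch uses illustrative parameter values and synaptic coupling only (no gap junctions), integrated with forward Euler; it is meant to show the structure of such a model, including the extra synchrony readout, not to reproduce the paper's equations.

```python
import numpy as np

# Illustrative parameters (not taken from the paper)
eta_bar, delta = -5.0, 1.0   # center/width of the Lorentzian excitability
J, I_ext = 15.0, 3.0         # recurrent coupling, external drive
dt, n_steps = 1e-4, 50_000   # forward Euler, 5 s

r, v = 0.1, -2.0             # population firing rate, mean potential
for _ in range(n_steps):
    dr = delta / np.pi + 2.0 * r * v
    dv = v ** 2 + eta_bar + J * r + I_ext - (np.pi * r) ** 2
    r, v = r + dt * dr, v + dt * dv

# The extra macroscopic degree of freedom beyond a standard mass
# model: within-population synchrony as the modulus of the Kuramoto
# order parameter, recovered from (r, v) by a conformal map.
W = np.pi * r + 1j * v
Z = (1.0 - np.conj(W)) / (1.0 + np.conj(W))
sync = abs(Z)
```

Because the state (r, v) determines the order parameter Z exactly, the model tracks not only the mean activity but also how synchronized the population is, which is what makes this class useful for EEG/MEG modelling.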


2021 ◽  
Vol 14 ◽  
Author(s):  
Nicolás Deschle ◽  
Juan Ignacio Gossn ◽  
Prejaas Tewarie ◽  
Björn Schelter ◽  
Andreas Daffertshofer

Modeling the dynamics of neural masses is a common approach in the study of neural populations. Various models have proven useful for describing a plethora of empirical observations, including self-sustained local oscillations and patterns of distant synchronization. We discuss the extent to which mass models really resemble the mean dynamics of a neural population. In particular, we question the validity of neural mass models if the population under study comprises a mixture of excitatory and inhibitory neurons that are densely (inter-)connected. Starting from a network of noisy leaky integrate-and-fire neurons, we formulated two different population dynamics that both fall into the category of seminal Freeman neural mass models. The derivations involved several mean-field assumptions and time-scale separation(s) between membrane and synapse dynamics. Our comparison of these neural mass models with the averaged dynamics of the population reveals bounds on the fraction of excitatory/inhibitory neurons, as well as on the overall network degree, within which a mass model provides adequate estimates. For substantial parameter ranges, our models fail to properly mimic the neural network's dynamics, be it in desynchronized or in (high-frequency) synchronized states. Only around the onset of low-frequency synchronization do our models provide proper estimates of the mean potential dynamics. While this shows their potential for, e.g., studying resting-state dynamics obtained by encephalography with a focus on the transition region, we must accept that predicting the more general dynamic outcome of a neural network via its mass dynamics requires great care.
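The starting point of such comparisons, a network of noisy LIF neurons whose population-mean potential is then held up against a mass model, can be sketched as follows. All parameters, the random connectivity, and the noise model are placeholder choices, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(1)

N, p_conn = 200, 0.2              # neurons, connection probability
tau_m, v_th, v_reset = 20e-3, 1.0, 0.0
dt, n_steps = 1e-4, 10_000        # 1 s at 0.1 ms resolution
J = 0.1 / (N * p_conn)            # weight scaled by the mean in-degree
I_ext, sigma = 1.2, 0.5           # suprathreshold drive + noise level

A = (rng.random((N, N)) < p_conn).astype(float)   # random adjacency
np.fill_diagonal(A, 0.0)

v = rng.random(N) * v_th
mean_v = np.empty(n_steps)
for k in range(n_steps):
    spiking = v >= v_th
    v[spiking] = v_reset                          # spike-and-reset
    syn = A @ spiking.astype(float)               # presynaptic spike count
    noise = sigma * np.sqrt(dt) * rng.standard_normal(N)
    v += dt / tau_m * (-v + I_ext) + J * syn + noise
    mean_v[k] = v.mean()
# mean_v is the population-averaged potential that a neural mass
# model is supposed to reproduce.
```

Comparing a trace like mean_v with the output of a derived mass model, across excitatory/inhibitory fractions and network degrees, is the kind of validation the abstract describes.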


2021 ◽  
Vol 15 ◽  
Author(s):  
Hongjie Bi ◽  
Matteo di Volo ◽  
Alessandro Torcini

Dynamic excitatory-inhibitory (E-I) balance is a paradigmatic mechanism invoked to explain the irregular low-firing activity observed in the cortex. However, we show that E-I balance can also be at the origin of other regimes observable in the brain. The analysis is performed by combining extensive simulations of sparse E-I networks composed of N spiking neurons with analytical investigations of low-dimensional neural mass models. The bifurcation diagrams derived for the neural mass model allow us to classify the possible asynchronous and coherent behaviors emerging in balanced E-I networks with structural heterogeneity for any finite in-degree K. Analytic mean-field (MF) results show that both supra- and sub-threshold balanced asynchronous regimes are observable in our system in the limit N >> K >> 1. Due to the heterogeneity, the asynchronous states are characterized at the microscopic level by a splitting of the neurons into three groups: silent, fluctuation-driven, and mean-driven. These features are consistent with experimental observations reported for heterogeneous neural circuits. The coherent rhythms observed in our system range from periodic and quasi-periodic collective oscillations (COs) to coherent chaos. These rhythms are characterized by regular or irregular temporal fluctuations combined with spatial coherence, similar to the coherent fluctuations observed in the cortex over multiple spatial scales. The COs can emerge via two different mechanisms. The first mechanism is analogous to the pyramidal-interneuron gamma (PING) mechanism usually invoked for the emergence of γ-oscillations. The second is intimately related to the presence of current fluctuations, which sustain COs characterized by essentially simultaneous bursting of the two populations. We observe period-doubling cascades involving the PING-like COs, finally leading to the appearance of coherent chaos.
Fluctuation-driven COs usually appear in our system as quasi-periodic collective motions characterized by two incommensurate frequencies. However, for sufficiently strong current fluctuations these collective rhythms can lock. This represents a novel mechanism of frequency locking in neural populations promoted by intrinsic fluctuations. COs are observable for any finite in-degree K; however, their existence in the limit N >> K >> 1 remains uncertain.
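The balanced-state scaling underlying this analysis, synaptic weights shrinking as 1/√K so that excitation and inhibition cancel to leading order, can be illustrated with a back-of-the-envelope calculation. The values j0, g, and the O(1/√K) rate correction below are hypothetical and purely for illustration.

```python
import numpy as np

def inputs(K, j0=1.0, g=4.0, nu_e=1.0):
    """Mean excitatory and inhibitory drive to a neuron with
    in-degree K when weights scale as j0 / sqrt(K). The inhibitory
    rate carries an O(1/sqrt(K)) correction to the leading-order
    balance condition nu_i = nu_e / g."""
    j = j0 / np.sqrt(K)
    nu_i = (nu_e / g) * (1.0 - 1.0 / np.sqrt(K))
    exc = j * K * nu_e            # grows like sqrt(K)
    inh = -g * j * K * nu_i       # grows like sqrt(K)
    return exc, inh

# Each stream diverges with K, but their sum stays O(1):
for K in (100, 1_000, 10_000):
    e, i = inputs(K)
    print(f"K={K:>6}: exc={e:8.2f}  inh={i:8.2f}  net={e + i:.3f}")
```

This is why a balanced network can sit near threshold for any finite K: the O(√K) excitatory and inhibitory streams cancel, leaving an O(1) net drive whose fluctuations then shape the asynchronous and collective regimes described above.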


2020 ◽  
Author(s):  
Á. Byrne ◽  
James Ross ◽  
Rachel Nicks ◽  
Stephen Coombes

Abstract: Neural mass models have been actively used since the 1970s to model the coarse-grained activity of large populations of neurons. They have proven especially fruitful for understanding brain rhythms. However, although motivated by neurobiological considerations, they are phenomenological in nature and cannot hope to recreate some of the rich repertoire of responses seen in real neuronal tissue. Here we consider a simple spiking neuron network model that has recently been shown to admit an exact mean-field description for both synaptic and gap-junction interactions. The mean-field model takes a similar form to a standard neural mass model, with an additional dynamical equation to describe the evolution of population synchrony. As well as reviewing the origins of this next-generation mass model, we discuss its extension to describe an idealised spatially extended planar cortex. To emphasise the usefulness of this model for EEG/MEG modelling, we show how it can be used to uncover the role of local gap-junction coupling in shaping large-scale synaptic waves.


2020 ◽  
Vol 16 (12) ◽  
pp. e1008533
Author(s):  
Halgurd Taher ◽  
Alessandro Torcini ◽  
Simona Olmi

A synaptic theory of Working Memory (WM) has been developed over the last decade as a possible alternative to the persistent-spiking paradigm. In this context, we have developed a neural mass model able to reproduce exactly the dynamics of heterogeneous spiking neural networks encompassing realistic cellular mechanisms for short-term synaptic plasticity. This population model reproduces the macroscopic dynamics of the network in terms of the firing rate and the mean membrane potential. The latter quantity allows us to gain insight into the Local Field Potential and electroencephalographic signals measured during WM tasks to characterize brain activity. More specifically, synaptic facilitation and depression complement each other to efficiently mimic WM operations via either synaptic reactivation or persistent activity. Memory access and loading are related to stimulus-locked transient oscillations followed by steady-state activity in the β-γ band, thus resembling what is observed in the cortex during vibrotactile stimuli in humans and object recognition in monkeys. Memory juggling and competition emerge already when only two items are loaded. However, more items can be stored in WM by considering neural architectures composed of multiple excitatory populations and a common inhibitory pool. Memory capacity depends strongly on the presentation rate of the items and is maximal within an optimal frequency range. In particular, we provide an analytic expression for the maximal memory capacity. Furthermore, the mean membrane potential turns out to be a suitable proxy for the memory load, analogously to event-related potentials in experiments on humans. Finally, we show that the γ power increases with the number of loaded items, as reported in many experiments, while the θ and β power exhibit non-monotonic behaviours. In particular, the β and γ rhythms are crucially sustained by the inhibitory activity, while the θ rhythm is controlled by excitatory synapses.
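The interplay of facilitation and depression invoked here is commonly modelled with Tsodyks-Markram-style dynamics. The sketch below uses illustrative parameters, not the paper's mass-model equations; it shows how a slow facilitation variable u can hold a memory trace after a stimulus burst ends.

```python
# Tsodyks-Markram-style short-term plasticity: u facilitates slowly,
# x (available resources) depresses and recovers quickly.
tau_f, tau_d, U = 1.5, 0.2, 0.2   # s; facilitation outlives depression
dt = 1e-3

def step(u, x, spike):
    """One Euler step of the facilitation/depression dynamics;
    spike = 1 when the presynaptic population fires."""
    u += dt * (U - u) / tau_f
    x += dt * (1.0 - x) / tau_d
    if spike:
        u += U * (1.0 - u)        # facilitation jump
        x -= u * x                # resource consumption
    return u, x

u, x = U, 1.0
efficacy = []
for k in range(2000):             # 2 s; 10 Hz burst during the first 1 s
    spike = 1 if (k < 1000 and k % 100 == 0) else 0
    u, x = step(u, x, spike)
    efficacy.append(u * x)        # effective synaptic strength
# After the burst, x recovers within ~tau_d while u decays over
# ~tau_f: the lingering elevation of u is the synaptic memory trace.
```

The "synaptic reactivation" mode of WM exploits exactly this: a brief nonspecific input arriving while u is still elevated can selectively reignite the population that was loaded, without any persistent spiking in between.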


2020 ◽  
Author(s):  
Chih-Hsu Huang ◽  
Chou-Ching K. Lin

Abstract: Building low-dimensional mean-field models of neuronal populations remains a critical issue in the computational neuroscience community, because such models are difficult to derive for realistic networks of neurons with conductance-based interactions and spike-frequency adaptation, which generate nonlinear neuronal properties. Here, based on a colored-noise population density method, we derived a novel neural mass model, termed the density-based neural mass model (dNMM), as the mean-field description of the network dynamics of adaptive exponential integrate-and-fire neurons. Our results show that the dNMM correctly estimates firing-rate responses under both steady and dynamic input conditions. It is also able to quantitatively describe the effect of spike-frequency adaptation on the generation of asynchronous irregular activity in excitatory-inhibitory cortical networks. We conclude that, in terms of biological realism and computational efficiency, the dNMM is a suitable candidate for building very large-scale network models involving multiple brain areas.
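The single-neuron model whose population dynamics the dNMM summarises can be sketched as follows, using standard textbook AdEx parameter values rather than those of the paper. The spike-triggered adaptation current w is what produces spike-frequency adaptation: successive inter-spike intervals lengthen under a constant step current.

```python
import numpy as np

# Adaptive exponential integrate-and-fire (AdEx) neuron, forward Euler.
C, gL, EL = 200.0, 10.0, -70.0    # pF, nS, mV
VT, DT = -50.0, 2.0               # spike threshold and slope factor (mV)
a, b, tau_w = 2.0, 60.0, 300.0    # adaptation: nS, pA, ms
v_reset, v_spike = -58.0, 0.0     # mV
dt, I = 0.1, 500.0                # ms, pA (constant step current)

v, w = EL, 0.0
spike_times = []
for k in range(20_000):           # 2 s of simulation
    dv = (-gL * (v - EL) + gL * DT * np.exp((v - VT) / DT) - w + I) / C
    dw = (a * (v - EL) - w) / tau_w
    v_new = v + dt * dv
    w += dt * dw                  # update w with the pre-step v
    if v_new >= v_spike:          # spike: reset and adapt
        v = v_reset
        w += b                    # spike-triggered adaptation jump
        spike_times.append(k * dt)
    else:
        v = v_new

# Spike-frequency adaptation: inter-spike intervals grow over time.
isis = np.diff(spike_times)
```

A mean-field model of AdEx populations has to capture precisely this history dependence (w accumulating across spikes), which is what makes the derivation nontrivial and motivates the population density approach.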


2020 ◽  
Author(s):  
Halgurd Taher ◽  
Alessandro Torcini ◽  
Simona Olmi

Abstract: A synaptic theory of Working Memory (WM) has been developed over the last decade as a possible alternative to the persistent-spiking paradigm. In this context, we have developed a neural mass model able to reproduce exactly the dynamics of heterogeneous spiking neural networks encompassing realistic cellular mechanisms for short-term synaptic plasticity. This population model reproduces the macroscopic dynamics of the network in terms of the firing rate and the mean membrane potential. The latter quantity allows us to gain insight into Local Field Potential and electroencephalographic signals measured during WM tasks to characterize brain activity. More specifically, synaptic facilitation and depression complement each other to efficiently mimic WM operations via either synaptic reactivation or persistent activity. Memory access and loading are associated with stimulus-locked transient oscillations followed by steady-state activity in the β-γ band, thus resembling what is observed in the cortex during vibrotactile stimuli in humans and object recognition in monkeys. Memory juggling and competition emerge already when only two items are loaded. However, more items can be stored in WM by considering neural architectures composed of multiple excitatory populations and a common inhibitory pool. Memory capacity depends strongly on the presentation rate of the items and is maximal within an optimal frequency range. In particular, we provide an analytic expression for the maximal memory capacity. Furthermore, the mean membrane potential turns out to be a suitable proxy for the memory load, analogously to event-related potentials in experiments on humans. Finally, we show that the γ power increases with the number of loaded items, as reported in many experiments, while the θ and β power exhibit non-monotonic behaviours. In particular, the β and γ rhythms are crucially sustained by the inhibitory activity, while the θ rhythm is controlled by excitatory synapses.
Author summary: Working Memory (WM) is the ability to temporarily store and manipulate stimulus representations that are no longer available to the senses. We have developed an innovative coarse-grained population model able to mimic several operations associated with WM. The novelty of the model consists in reproducing exactly the dynamics of spiking neural networks with realistic synaptic plasticity, composed of hundreds of thousands of neurons, in terms of a few macroscopic variables. These variables give access to experimentally measurable quantities such as local field potentials and electroencephalographic signals. Memory operations are accompanied by sustained or transient oscillations emerging in different frequency bands, in accordance with experimental results for primates and humans performing WM tasks. We have designed an architecture composed of many excitatory populations and a common inhibitory pool, able to store and retain several memory items. The capacity of our multi-item architecture is around 3-5 items, a value corresponding to the WM capacities measured in many experiments. Furthermore, the maximal capacity is achievable only for presentation rates within an optimal frequency range. Finally, we have defined a measure of the memory load analogous to the event-related potentials employed to test humans' WM capacity during visual memory tasks.

