Functional determinants of enhanced and depressed inter-areal information flow in NREM sleep between neuronal ensembles in rat cortex and hippocampus

2018 ◽  
Author(s):  
Umberto Olcese ◽  
Jeroen J. Bos ◽  
Martin Vinck ◽  
Cyriel M.A. Pennartz

Abstract
Compared to wakefulness, neuronal activity during non-REM sleep is characterized by a decreased ability to integrate information, but also by the re-emergence of task-related information patterns. To investigate the mechanisms underlying these seemingly opposing phenomena, we measured directed information flow by computing transfer entropy between neuronal spiking activity in three cortical regions and the hippocampus of rats across brain states. State-dependent information flow proved to be jointly determined by the anatomical distance between neurons and by their functional specialization. We distinguished two regimes, operating at short and long time scales, respectively. From wakefulness to non-REM sleep, transfer entropy at short time scales increased for inter-areal connections between neurons showing behavioral task correlates. Conversely, transfer entropy at long time scales became stronger between non-task-modulated neurons and weaker between task-modulated neurons. These results may explain how, during non-REM sleep, a global inter-areal disconnection is compatible with highly specific task-related information transfer.

Author Summary
The brain remains active during deep sleep, yet we still do not know which rules govern information processing between neurons across wakefulness and sleep. Here we provide a first study of how information flow at the level of spiking activity varies as a function of brain state, temporal scale, brain area and behavioral task correlates of single neurons. We found that inter-areal communication at millisecond time scales is enhanced during sleep compared to wakefulness between neurons that code for task information. Conversely, non-modulated neurons showed more prominent communication at longer time scales. These results indicate that multiple, functionally determined communicative architectures coexist in the brain, and provide a novel framework for understanding information processing and its consequences during sleep.
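Several of the studies collected here quantify directed information flow with transfer entropy. As a minimal illustration of the quantity itself (not the authors' analysis pipeline: history length 1, binary spike bins, a plug-in estimator, and hypothetical toy data):

```python
from collections import Counter
from math import log2
import random

def transfer_entropy(source, target, delay=1):
    """Plug-in transfer entropy (bits) from source to target, for binary
    spike trains, with history length 1 and a fixed source-target delay."""
    # triples (y_t, y_past, x_past)
    triples = list(zip(target[delay:], target[:-delay], source[:-delay]))
    n = len(triples)
    p_xyz = Counter(triples)
    p_yz = Counter((y, yp) for y, yp, _ in triples)    # (y_t, y_past)
    p_z = Counter(yp for _, yp, _ in triples)          # (y_past,)
    p_zx = Counter((yp, xp) for _, yp, xp in triples)  # (y_past, x_past)
    te = 0.0
    for (y, yp, xp), c in p_xyz.items():
        # p(y_t | y_past, x_past) / p(y_t | y_past)
        num = c / p_zx[(yp, xp)]
        den = p_yz[(y, yp)] / p_z[yp]
        te += (c / n) * log2(num / den)
    return te

# toy example: y copies x with a one-bin lag, so information flows x -> y
random.seed(0)
x = [random.randint(0, 1) for _ in range(10000)]
y = [0] + x[:-1]
print(transfer_entropy(x, y))  # close to 1 bit
print(transfer_entropy(y, x))  # close to 0 bits
```

Plug-in estimates like this are biased upward for short recordings and long histories; published analyses typically add bias correction and significance testing against surrogate data.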

2016 ◽  
Vol 28 (2) ◽  
pp. 295-307 ◽  
Author(s):  
Alexander Schlegel ◽  
Prescott Alexander ◽  
Peter U. Tse

The brain is a complex, interconnected information processing network. In humans, this network supports a mental workspace that enables high-level abilities such as scientific and artistic creativity. Do the component processes underlying these abilities occur in discrete anatomical modules, or are they distributed widely throughout the brain? How does the flow of information within this network support specific cognitive functions? Current approaches have limited ability to answer such questions. Here, we report novel multivariate methods to analyze information flow within the mental workspace during visual imagery manipulation. We find that mental imagery entails distributed information flow and shared representations throughout the cortex. These findings challenge existing, anatomically modular models of the neural basis of higher-order mental functions, suggesting that such processes may occur at least in part at a fundamentally distributed level of organization. The novel methods we report may be useful in studying other similarly complex, high-level informational processes.


2019 ◽  
Author(s):  
Mike Li ◽  
Yinuo Han ◽  
Matthew J. Aburn ◽  
Michael Breakspear ◽  
Russell A. Poldrack ◽  
...  

Abstract
A key component of the flexibility and complexity of the brain is its ability to dynamically adapt its functional network structure between integrated and segregated brain states, depending on the demands of different cognitive tasks. Integrated states are prevalent when performing tasks of high complexity, such as maintaining items in working memory, consistent with models of a global workspace architecture. Recent work has suggested that the balance between integration and segregation is under the control of ascending neuromodulatory systems, such as the noradrenergic system. In a previous large-scale nonlinear oscillator model of neuronal network dynamics, we showed that manipulating neural gain led to a ‘critical’ transition in phase synchrony that was associated with a shift from segregated to integrated topology, thus confirming our original prediction. In this study, we advance these results by demonstrating that the gain-mediated phase transition is characterized by a shift in the underlying dynamics of neural information processing. Specifically, the dynamics of the subcritical (segregated) regime are dominated by information storage, whereas the supercritical (integrated) regime is associated with increased information transfer (measured via transfer entropy). Operating near the critical regime with respect to modulating neural gain would thus appear to provide computational advantages, offering flexibility in the information processing that can be performed with only subtle changes in gain control. Our results thus link studies of whole-brain network topology and the ascending arousal system with information processing dynamics, and suggest that the ascending arousal system constrains low-dimensional modes of information processing within the brain.

Author summary
Higher brain function relies on a dynamic balance between functional integration and segregation.
Previous work has shown that this balance is mediated in part by alterations in neural gain, which are thought to relate to projections from ascending neuromodulatory nuclei, such as the locus coeruleus. Here, we extend this work by demonstrating that the modulation of neural gain alters the information processing dynamics of the neural components of a biophysical neural model. Specifically, we find that low levels of neural gain are characterized by high Active Information Storage, whereas higher levels of neural gain are associated with an increase in inter-regional Transfer Entropy. Our results suggest that the modulation of neural gain via the ascending arousal system may fundamentally alter the information processing mode of the brain, which in turn has important implications for understanding the biophysical basis of cognition.


2019 ◽  
Vol 362 ◽  
pp. 224-239 ◽  
Author(s):  
Hamid Karimi-Rouzbahani ◽  
Ehsan Vahab ◽  
Reza Ebrahimpour ◽  
Mohammad Bagher Menhaj

Entropy ◽  
2020 ◽  
Vol 22 (11) ◽  
pp. 1231
Author(s):  
Carlos Islas ◽  
Pablo Padilla ◽  
Marco Antonio Prado

We consider brain activity from an information-theoretic perspective. We analyze information processing in the brain, considering the optimality of Shannon entropy transport in the Monge–Kantorovich framework. It is proposed that some of these processes satisfy an optimal-transport condition on informational entropy. This optimality condition allows us to derive an equation of Monge–Ampère type for the information flow, whose linearization accounts for the branching structure of neurons. On this basis, we discuss a version of Murray’s law in this context.
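For context, the classical Monge–Ampère equation of quadratic-cost optimal transport, which equations of this type generalize, reads (a standard textbook form, not the paper's specific entropy-flow equation; here φ is the Brenier potential whose gradient maps the source density f to the target density g):

```latex
% optimal map T(x) = \nabla\varphi(x) pushing the density f forward to g
\det\!\left( D^2 \varphi(x) \right) \;=\; \frac{f(x)}{g\!\left( \nabla \varphi(x) \right)}
```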


2005 ◽  
Vol 94 (2) ◽  
pp. 1528-1540 ◽  
Author(s):  
M. Frerking ◽  
J. Schulte ◽  
S. P. Wiebe ◽  
U. Stäubli

Spike timing is thought to be an important mechanism for transmitting information in the CNS. Recent studies have emphasized millisecond precision in spike timing to allow temporal summation of rapid synaptic signals. However, spike timing over slower time scales could also be important, through mechanisms including activity-dependent synaptic plasticity or temporal summation of slow postsynaptic potentials (PSPs) such as those mediated by kainate receptors. To determine the extent to which these slower mechanisms contribute to information processing, it is first necessary to understand the properties of behaviorally relevant spike timing over this slow time scale. In this study, we examine the activity of CA3 pyramidal cells during the performance of a complex behavioral task in rats. Sustained firing rates vary over a wide range, and the firing rate of a cell is poorly correlated with the behavioral cues to which the cell responds. Nonrandom interactions between successive spikes can last for several seconds, but the nonrandom distribution of interspike intervals (ISIs) can account for the majority of nonrandom multi-spike patterns. During a stimulus, cellular responses are temporally complex, causing a shift in spike timing that favors intermediate ISIs over short and long ISIs. Response discrimination between related stimuli occurs through changes in both response time-course and response intensity. Precise synchrony between cells is limited, but loosely correlated firing between cells is common. This study indicates that spike timing is regulated over long time scales and suggests that slow synaptic mechanisms could play a substantial role in information processing in the CNS.


2020 ◽  
Author(s):  
Lucas Rudelt ◽  
Daniel González Marx ◽  
Michael Wibral ◽  
Viola Priesemann

Abstract
Information processing can leave distinct footprints on the statistical history dependence in single-neuron spiking. Statistical history dependence can be quantified using information theory, but its estimation from experimental recordings is only possible for a reduced representation of past spiking, a so-called past embedding. Here, we present a novel embedding-optimization approach that optimizes the temporal binning of past spiking to capture most history dependence, while regularization ensures a reliable estimation. The approach not only quantifies non-linear and higher-order dependencies, but also provides an estimate of the temporal depth that history dependence reaches into the past. We benchmarked the approach on simulated spike recordings of a leaky integrate-and-fire neuron with long-lasting spike-frequency adaptation, where it accurately estimated history dependence over hundreds of milliseconds. In a diversity of extracellular spike recordings, including highly parallel recordings using a Neuropixels probe, we found some neurons with surprisingly strong history dependence, which could last up to seconds. Both aspects, the magnitude and the temporal depth of history dependence, showed interesting differences between the recorded systems, pointing to systematic differences in information processing between these systems. We provide practical guidelines in this paper and a toolbox for Python 3 at https://github.com/Priesemann-Group/hdestimator for readers interested in applying the method to their data.

Author summary
Even with exciting advances in techniques for recording neural spiking activity, experiments only provide a comparably short glimpse into the activity of only a tiny subset of all neurons. How can we learn from these experiments about the organization of information processing in the brain? To that end, we exploit the fact that different properties of information processing leave distinct footprints on the firing statistics of individual spiking neurons. In our work, we focus on a particular statistical footprint: how much a single neuron’s spiking depends on its own preceding activity, which we call history dependence. By quantifying history dependence in neural spike recordings, one can, in turn, infer some of the properties of information processing. Because recording lengths are limited in practice, a direct estimation of history dependence from experiments is challenging. The embedding-optimization approach that we present in this paper aims to extract a maximum of history dependence within the limits set by a reliable estimation. The approach is highly adaptive and thereby enables a meaningful comparison of history dependence between neurons with vastly different spiking statistics, which we exemplify on a diversity of spike recordings. In conjunction with recent, highly parallel spike-recording techniques, the approach could yield valuable insights into how hierarchical processing is organized in the brain.
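The quantity at the heart of this approach, how much a neuron's next spike depends on its own binned past, can be sketched with a naive plug-in estimator (a generic illustration with hypothetical toy data, not the hdestimator API, and without the embedding optimization and regularization the paper contributes):

```python
from collections import Counter
from math import log2

def history_dependence(spikes, n_bins=5):
    """Plug-in estimate of R = I(y_t ; past) / H(y_t), where the past is
    the tuple of the previous n_bins binary spike counts. Plug-in
    estimates are biased upward for short recordings; the published
    method adds regularization and optimizes the past embedding."""
    samples = [(spikes[t], tuple(spikes[t - n_bins:t]))
               for t in range(n_bins, len(spikes))]
    n = len(samples)
    p_y = Counter(y for y, _ in samples)
    p_past = Counter(p for _, p in samples)
    p_joint = Counter(samples)
    mi = sum((c / n) * log2((c / n) / ((p_y[y] / n) * (p_past[p] / n)))
             for (y, p), c in p_joint.items())
    h_y = -sum((c / n) * log2(c / n) for c in p_y.values())
    return mi / h_y if h_y > 0 else 0.0

# a regular burster: spiking is fully determined by its own recent past
spikes = [1 if (t // 4) % 2 == 0 else 0 for t in range(2000)]
print(history_dependence(spikes))  # ≈ 1.0: the past fully predicts spiking
```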


2019 ◽  
Author(s):  
Jan Bím ◽  
Vito De Feo ◽  
Daniel Chicharro ◽  
Malte Bieler ◽  
Ileana L. Hanganu-Opatz ◽  
...  

Abstract
Quantifying both the amount and the content of the information transferred between neuronal populations is crucial to understanding brain function. Traditional data-driven methods based on Wiener-Granger causality quantify the information transferred between neuronal signals, but do not reveal whether the transmitted information refers to one specific feature of external stimuli or another. Here, we developed a new measure called Feature-specific Information Transfer (FIT) that quantifies the amount of information transferred between neuronal signals about specific stimulus features. The FIT quantifies the feature-related information carried by a receiver that was previously carried by a sender, but that was never carried by the receiver earlier. We tested the FIT on simulated data in various scenarios. We found that, unlike previous measures, the FIT successfully disambiguated genuine feature-specific communication from non-feature-specific communication, from external confounding inputs, and from synergistic interactions. Moreover, the FIT had enhanced temporal sensitivity, which facilitates the estimation of the directionality of transfer and the communication delay between neuronal signals. We validated the FIT’s ability to track feature-specific information flow using neurophysiological data. In human electroencephalographic data acquired during a face-detection task, the FIT demonstrated that information about the eye in face pictures flowed from the hemisphere contralateral to the eye to the ipsilateral one. In multi-unit activity recorded from thalamic nuclei and primary sensory cortices of rats during multimodal stimulation, the FIT, unlike Wiener-Granger methods, credibly detected both the direction of information flow and the sensory features about which information was transmitted. In human cortical high-gamma activity recorded with magnetoencephalography during visuomotor mapping, the FIT showed that visuomotor-related information flowed from superior parietal to premotor areas. Our work suggests that the FIT measure has the potential to uncover previously hidden feature-specific information transfer in neuronal recordings and to provide a better understanding of brain communication.

Author summary
The emergence of coherent percepts and behavior relies on the processing and flow of information about sensory features, such as the color or shape of an object, across different areas of the brain. To understand how computations within the brain lead to the emergence of these functions, we need to map the flow of information about each specific feature. Traditional methods, such as those based on Wiener-Granger causality, quantify whether information is transmitted from one brain area to another, but do not reveal whether the information being transmitted concerns one feature or another. Here, we develop a new mathematical technique for the analysis of brain-activity recordings, called Feature-specific Information Transfer (FIT), that can reveal not only whether any information is being transmitted across areas, but also whether such transmitted information is about a certain sensory feature. We validate the method with both simulated and real neuronal data, showing its power in detecting the presence of feature-specific information transmission, as well as the timing and directionality of this transfer. This work provides a tool of high potential significance for mapping sensory information processing in the brain.


1984 ◽  
Vol 16 (3-4) ◽  
pp. 623-633
Author(s):  
M Loxham ◽  
F Weststrate

It is generally agreed that both the landfill option and the civil-techniques option for the final disposal of contaminated harbour sludge involve isolating the sludge from the environment. On short time scales, engineered barriers such as a bentonite screen, plastic sheets, or pumping strategies can be used. Over long time scales, however, the effectiveness of such measures cannot be counted upon. It is thus necessary to be able to predict the long-term environmental spread of contaminants from a mature landfill. A model is presented that considers diffusion and adsorption in the landfill site and convection and adsorption in the underlying aquifer. A parameter analysis starting from practical values shows that the adsorption behaviour and the molecular diffusion coefficient of the sludge are the key parameters in the near field. The dilution effects of far-field migration patterns are also illustrated.


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Jing Guang ◽  
Halen Baker ◽  
Orilia Ben-Yishay Nizri ◽  
Shimon Firman ◽  
Uri Werner-Reiss ◽  
...  

Abstract
Deep brain stimulation (DBS) is currently a standard procedure for advanced Parkinson’s disease. Many centers employ awake physiological navigation and stimulation assessment to optimize DBS localization and outcome. To enable DBS under sedation, asleep DBS, we characterized the cortico-basal ganglia neuronal network of two nonhuman primates under propofol, ketamine, and interleaved propofol-ketamine (IPK) sedation. Further, we compared these sedation states in the healthy and Parkinsonian condition to those of healthy sleep. Ketamine increases high-frequency power and synchronization while propofol increases low-frequency power and synchronization in polysomnography and neuronal activity recordings. Thus, ketamine does not mask the low-frequency oscillations used for physiological navigation toward the basal ganglia DBS targets. The brain spectral state under ketamine and propofol mimicked rapid eye movement (REM) and non-REM (NREM) sleep activity, respectively, and the IPK protocol resembles the NREM-REM sleep cycle. These promising results are a meaningful step toward asleep DBS with nondistorted physiological navigation.

