The application of Gaussian channel theory to the estimation of information transmission rates in neural systems

Author(s):  
Alexander Nikitin ◽  
Nigel G. Stocks


2019 ◽
Vol 29 (08) ◽  
pp. 1950003 ◽  
Author(s):  
Agnieszka Pregowska ◽  
Ehud Kaplan ◽  
Janusz Szczepanski

The nature of neural codes is central to neuroscience. Do neurons encode information through relatively slow changes in their firing rates (rate code) or through the precise timing of every spike (temporal code)? Here we compare the loss of information due to correlations under these two possible neural codes. The essence of Shannon's definition of information is to tie information to uncertainty: the higher the uncertainty of a given event, the more information is conveyed by that event. Correlations can reduce uncertainty, and hence the amount of information, but by how much? In this paper we address this question by directly comparing the information per symbol conveyed by words coming from a binary Markov source (temporal code) with the information per symbol coming from the corresponding Bernoulli source (uncorrelated, rate code). In a previous paper we found that a crucial role in the relation between information transmission rates (ITRs) and firing rates is played by a parameter s, the sum of the transition probabilities from the no-spike state to the spike state and vice versa. We found that in this case, too, a crucial role is played by the same parameter s. We calculated the maximal and minimal bounds of the quotient of the ITRs for these sources. Next, making use of the entropy grouping axiom, we determined the loss of information in a Markov source relative to the corresponding Bernoulli source for a given word length. Our results show that in the case of correlated signals the loss of information is relatively small, so temporal codes, which are more energetically efficient, can effectively replace rate codes. These results were confirmed by experiments.
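The comparison described above can be made concrete with a short numerical sketch (my illustration, not code from the paper): it computes the entropy per symbol of a binary two-state Markov source with transition probabilities p01 (no-spike to spike) and p10 (spike to no-spike), with s = p01 + p10, and of the Bernoulli source matched to the same firing rate.

```python
from math import log2

def h2(p: float) -> float:
    """Binary Shannon entropy in bits, with the 0*log(0) = 0 convention."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * log2(p) - (1.0 - p) * log2(1.0 - p)

def markov_entropy_rate(p01: float, p10: float) -> float:
    """Entropy rate (bits/symbol) of a binary Markov source.
    p01 = P(spike | no spike), p10 = P(no spike | spike)."""
    pi1 = p01 / (p01 + p10)          # stationary probability of the spike state
    return (1.0 - pi1) * h2(p01) + pi1 * h2(p10)

def bernoulli_entropy(p01: float, p10: float) -> float:
    """Entropy (bits/symbol) of the Bernoulli source with the same firing rate."""
    return h2(p01 / (p01 + p10))

# s = p01 + p10 controls the correlations: s = 1 gives an uncorrelated source,
# and the loss of information grows as s moves away from 1.
for p01, p10 in [(0.1, 0.3), (0.25, 0.75), (0.7, 0.9)]:
    loss = bernoulli_entropy(p01, p10) - markov_entropy_rate(p01, p10)
    print(f"s = {p01 + p10:.2f}: information loss = {loss:.4f} bits/symbol")
```

At s = 1 the Markov source reduces to the matched Bernoulli source and the loss vanishes; the further s is from 1, the stronger the correlations and the larger the loss.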


Entropy ◽  
2021 ◽  
Vol 23 (1) ◽  
pp. 92
Author(s):  
Agnieszka Pregowska

In the nervous system, information is conveyed by sequences of action potentials, called spike trains. As MacKay and McCulloch suggested, spike trains can be represented as bit sequences coming from Information Sources (IS). Previously, we studied the relations between spikes' Information Transmission Rates (ITR) and their correlations and frequencies. Here, I concentrate on the problem of how spike fluctuations affect the ITR. The IS are typically modeled as stationary stochastic processes, which I consider here as two-state Markov processes. As a measure of spike-train fluctuation, I take the standard deviation σ, which measures the average fluctuation of spikes around the average spike frequency. I found that the character of the relation between the ITR and signal fluctuations depends strongly on the parameter s, the sum of the transition probabilities from the no-spike state to the spike state and vice versa. The Information Transmission Rate was estimated by expressions depending on the values of the signal fluctuations and the parameter s. It turned out that for s < 1 the quotient ITR/σ has a maximum and can tend to zero, depending on the transition probabilities, while for 1 < s the quotient ITR/σ is separated from 0. Additionally, it was shown that the quotient of the ITR by the variance behaves in a completely different way. Similar behavior was observed when the classical Shannon entropy terms in the Markov entropy formula were replaced by their polynomial approximations. My results suggest that in a noisier environment (1 < s), to achieve adequate reliability and efficiency of transmission, IS with a higher tendency of transition from the no-spike to the spike state should be applied. Such selection of appropriate parameters plays an important role in designing learning mechanisms to obtain networks with higher performance.
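A minimal numerical sketch of this dependence (my own, under the assumption that the ITR is the entropy rate of the two-state Markov chain and that σ is the stationary per-symbol standard deviation sqrt(π1(1 − π1)); the paper's exact expressions may differ): scanning pairs (p01, p10) at fixed s shows that ITR/σ can become arbitrarily small when s < 1, while for s > 1 the constraint p01 ≥ s − 1 keeps it bounded away from zero.

```python
import numpy as np
from math import log2, sqrt

def h2(p: float) -> float:
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * log2(p) - (1.0 - p) * log2(1.0 - p)

def itr_over_sigma(p01: float, p10: float) -> float:
    """Quotient ITR/sigma for a two-state Markov spike model (assumptions:
    ITR = entropy rate, sigma = stationary per-symbol standard deviation)."""
    pi1 = p01 / (p01 + p10)
    itr = (1.0 - pi1) * h2(p01) + pi1 * h2(p10)
    return itr / sqrt(pi1 * (1.0 - pi1))

for s in (0.5, 1.4):
    # Valid transition probabilities at fixed s: p01 in (max(0, s-1), min(1, s)).
    lo, hi = max(0.0, s - 1.0) + 1e-6, min(1.0, s) - 1e-6
    vals = [itr_over_sigma(p01, s - p01) for p01 in np.geomspace(lo, hi, 200)]
    print(f"s = {s}: min ITR/sigma = {min(vals):.4f}, max = {max(vals):.4f}")
```

For s = 0.5 the minimum of the scan approaches zero as p01 does; for s = 1.4 the quotient stays above roughly 1.5 across all admissible transition probabilities.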


2020 ◽  
Vol 31 (01) ◽  
pp. 2050042
Author(s):  
Xiaoxiao Song ◽  
Luis Valencia-Cabrera ◽  
Hong Peng ◽  
Jun Wang ◽  
Mario J. Pérez-Jiménez

Based on the features and communication of neurons in animal nervous systems, spiking neural P systems (SN P systems) were proposed as a powerful class of computing models. Taking into account the length of axons and the speed of information transmission across synapses, SN P systems with delays on synapses (SNP-DS systems) are proposed in this work. Unlike in traditional SN P systems, where all postsynaptic neurons receive spikes from their presynaptic neuron at the same instant, postsynaptic neurons in SNP-DS systems receive spikes at different instants, depending on the delay times of the synapses connecting them. It is proved that SNP-DS systems are universal as number generators. Two small universal SNP-DS systems, with standard and extended rules, are constructed to compute functions, using 56 and 36 neurons, respectively. Moreover, a simulator is provided to check the correctness of these two SNP-DS systems, thus providing an experimental validation of the universality of the designed systems.
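The delayed-delivery rule is easy to picture with a small event-queue sketch (a loose illustration of the mechanism, not the authors' simulator; real SN P neurons fire according to regular-expression-guarded rules, whereas here every arriving spike immediately triggers firing):

```python
import heapq

def run(synapses, initial_spikes, t_max):
    """synapses: {source: [(target, delay), ...]};
    initial_spikes: [(time, neuron)]. A spike emitted at time t on a synapse
    with delay d reaches its target at time t + d, so different postsynaptic
    neurons see the same presynaptic spike at different instants."""
    events = list(initial_spikes)          # min-heap ordered by arrival time
    heapq.heapify(events)
    while events:
        t, n = heapq.heappop(events)
        if t > t_max:
            break
        print(f"t = {t}: neuron {n} fires")
        for target, delay in synapses.get(n, []):
            heapq.heappush(events, (t + delay, target))

# Neuron a feeds b over a fast synapse (delay 1) and c over a slow one (delay 3),
# so b fires at t = 1 and c only at t = 3.
run({"a": [("b", 1), ("c", 3)]}, [(0, "a")], t_max=5)
```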


2011 ◽  
Vol 09 (02) ◽  
pp. 625-635
Author(s):  
ANTONIO D'ARRIGO ◽  
GIULIANO BENENTI ◽  
GIUSEPPE FALCI

Quantum memory channels are attracting growing interest, motivated both by realistic prospects of transferring information by means of quantum hardware and by the inadequacy of the memoryless approximation. In fact, subsequent uses of the same quantum transmission resource can be significantly correlated. In this paper we review two Hamiltonian models describing memory effects: a purely dephasing spin-boson channel and a channel with damping modeled by a micromaser system. In both cases, we show that the quantum information transmission rates are higher than in the memoryless limit.
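A toy calculation (my own, not the paper's spin-boson or micromaser model) shows why memory can raise the rate for dephasing noise: with perfectly correlated phase flips across pairs of channel uses, the subspace spanned by |01⟩ and |10⟩ is decoherence free, so one qubit per pair of uses goes through noiselessly, beating the memoryless quantum capacity 1 − h2(p) whenever the flip probability p is large enough.

```python
from math import log2

def h2(p: float) -> float:
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * log2(p) - (1.0 - p) * log2(1.0 - p)

def q_memoryless(p: float) -> float:
    """Quantum capacity of the memoryless qubit dephasing (phase-flip) channel."""
    return 1.0 - h2(p)

def q_correlated_dfs() -> float:
    """Toy memory model: Z (x) Z applied jointly to pairs of uses acts
    trivially on span{|01>, |10>}, a decoherence-free subspace, so one qubit
    per two uses is transmitted noiselessly regardless of p."""
    return 0.5

for p in (0.1, 0.3, 0.5):
    print(f"p = {p}: memoryless Q = {q_memoryless(p):.3f} qubits/use, "
          f"correlated DFS rate = {q_correlated_dfs():.3f} qubits/use")
```

In this toy model the correlated encoding already outperforms the memoryless capacity for p above roughly 0.11, where 1 − h2(p) drops below 0.5.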


2019 ◽  
Author(s):  
Dongqi Han ◽  
Erik De Schutter ◽  
Sungho Hong

Feedforward networks (FFNs) are ubiquitous structures in neural systems and have been studied to understand the mechanisms of reliable signal and information transmission. In many FFNs, neurons in one layer have intrinsic properties that are distinct from those in their pre-/postsynaptic layers, but how this affects network-level information processing remains unexplored. Here we show that layer-to-layer heterogeneity arising from lamina-specific cellular properties facilitates signal and information transmission in FFNs. Specifically, we found that the signal transformations made by neighboring layers of neurons on an input-driven spike signal are complementary to each other. This mechanism boosts the information carried by a propagating spike signal and thereby supports reliable spike-signal and information transmission in a deep FFN. Our study suggests that distinct cell types in neural circuits have complementary computational functions and facilitate information processing as a whole.

Significance Statement: Neural systems have many cell types that differ in properties such as size, shape, and cellular mechanisms. Furthermore, neurons often propagate signals to other neurons whose properties are very different from their own. We investigated what this phenomenon implies for neural information processing by using computational network models inspired by a recent experimental study of the olfactory neural pathway of fruit flies. We found that different types of neurons can perform complementary functions in a network, which boosts information transfer overall and supports robust, stable signal propagation in a "deep" network with many layers. Our study demonstrates that diverse cell types with different intrinsic properties are crucial for optimal signal and information transfer in neural networks.
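A rate-model caricature of the "complementary transformations" idea (my sketch, not the paper's spiking model): alternating a compressive and an expansive gain function through a deep FFN keeps input levels distinguishable, while stacking the same compressive nonlinearity drives all nonzero inputs toward the same saturated response.

```python
import numpy as np

def compressive(x):
    return np.tanh(2.0 * x)        # saturating gain: compresses strong inputs

def expansive(x):
    return x ** 2                  # expansive gain on [0, 1]: stretches mid-range

def propagate(x, layers):
    for f in layers:
        x = f(x)
    return x

x = np.linspace(0.0, 1.0, 5)       # five input signal levels
depth = 8
hetero = [compressive if i % 2 == 0 else expansive for i in range(depth)]
homog = [compressive] * depth

print("inputs:       ", np.round(x, 3))
print("heterogeneous:", np.round(propagate(x, hetero), 3))
print("homogeneous:  ", np.round(propagate(x, homog), 3))
# The homogeneous stack pushes every nonzero input toward the same fixed point
# (~0.96), erasing the differences between input levels; the alternating stack
# retains a spread of output levels, i.e., more transmitted information.
```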


2015 ◽  
Vol 62 (1) ◽  
pp. 342-351 ◽  
Author(s):  
Mircea F. Lupu ◽  
Mingui Sun ◽  
Fei-Yue Wang ◽  
Zhi-Hong Mao

2014 ◽  
Vol 108 (3) ◽  
pp. 305-320 ◽  
Author(s):  
Irina Ignatova ◽  
Andrew S. French ◽  
Esa-Ville Immonen ◽  
Roman Frolov ◽  
Matti Weckström

Author(s):  
Agnieszka Pregowska

(1) Background: In the nervous system, information is conveyed by sequences of action potentials, called spike trains. As MacKay and McCulloch proposed, spike trains can be represented as bit sequences coming from Information Sources. Previously, we studied the relations between the Information Transmission Rates (ITR) carried by the spikes, their correlations, and their frequencies. Here, we concentrate on the problem of how spike fluctuations affect the ITR. (2) Methods: The Information Theory Method developed by Shannon is applied. Information Sources are modeled as stationary stochastic processes, which we assume to be two-state Markov processes. As a measure of spike-train fluctuation, we consider the standard deviation σ, which, in fact, measures the average fluctuation of spikes around the average spike frequency. (3) Results: We found that the character of the relation between the ITR and signal fluctuations depends strongly on the parameter s, the sum of the transition probabilities from the no-spike state to the spike state and vice versa. It turned out that for smaller s (s < 1) the quotient ITR/σ has a maximum and can tend to zero, depending on the transition probabilities, while for s large enough (1 < s) the quotient ITR/σ is separated from 0 for each s. Similar behavior was observed when the Shannon entropy terms in the Markov entropy formula were replaced by their polynomial approximations. We also show that the quotient of the ITR by the variance behaves in a completely different way. (4) Conclusions: Our results show that for a large transition parameter s, the ITR divided by σ never decreases to zero. Specifically, for 1 < s < 1.7 the ITR is always above the level of fluctuations, independently of the transition probabilities that form this s, i.e., in this case σ < ITR. Thus, we conclude that in a noisier environment, to achieve adequate reliability and efficiency of transmission, Information Sources with a higher tendency of transition from the no-spike state to the spike state and vice versa should be applied.
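On the polynomial replacement mentioned in the Results: the abstract does not say which polynomials were used, so as an illustration the sketch below fits a degree-4 least-squares polynomial to the binary entropy function and plugs it into the Markov entropy-rate formula in place of the exact Shannon terms.

```python
import numpy as np
from math import log2

def h2(p: float) -> float:
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * log2(p) - (1.0 - p) * log2(1.0 - p)

# Degree-4 least-squares polynomial stand-in for the binary entropy terms
# (an illustrative choice; the paper's polynomials may differ).
grid = np.linspace(0.001, 0.999, 999)
h2_poly = np.poly1d(np.polyfit(grid, [h2(p) for p in grid], deg=4))

def markov_itr(p01, p10, h=h2):
    """Markov entropy rate with a pluggable entropy function h."""
    pi1 = p01 / (p01 + p10)
    return (1.0 - pi1) * h(p01) + pi1 * h(p10)

for p01, p10 in [(0.2, 0.3), (0.6, 0.9)]:
    exact = markov_itr(p01, p10)
    approx = markov_itr(p01, p10, h=h2_poly)
    print(f"p01 = {p01}, p10 = {p10} (s = {p01 + p10:.1f}): "
          f"exact {exact:.4f} bits, polynomial {approx:.4f} bits")
```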

