Theory of neural coding predicts an upper bound on estimates of memory variability

2019 ◽  
Author(s):  
Robert Taylor ◽  
Paul M Bays

Abstract: Observers reproducing elementary visual features from memory after a short delay produce errors consistent with the encoding-decoding properties of neural populations. While inspired by electrophysiological observations of sensory neurons in cortex, the population coding account of these errors is based on a mathematical idealization of neural response functions that abstracts away most of the heterogeneity and complexity of real neuronal populations. Here we examine a more physiologically grounded model based on the tuning of a large set of neurons recorded in macaque V1, and show that key predictions of the idealized model are preserved. Both models predict long-tailed distributions of error when memory resources are taxed, as observed empirically in behavioral experiments and commonly approximated with a mixture of normal and uniform error components. Specifically, for an idealized homogeneous neural population, the width of the fitted normal distribution cannot exceed the average tuning width of the component neurons, and this also holds to a good approximation for more biologically realistic populations. Examining eight published studies of orientation recall, we find a consistent pattern of results suggestive of a median tuning width of approximately 20 degrees, which compares well with neurophysiological observations. The finding that estimates of variability obtained by the normal-plus-uniform mixture method are bounded from above leads us to reevaluate previous studies that interpreted a saturation in the width of the normal component as evidence for fundamental limits on the precision of perception, working memory, and long-term memory.
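The behavioral signature described above, long-tailed error distributions when memory resources (spiking gain) are taxed, can be reproduced with a minimal simulation. The sketch below is not the authors' code: it assumes a homogeneous population with von Mises tuning (orientation mapped onto the full circle for convenience), independent Poisson spiking, and maximum-likelihood decoding on a grid; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_errors(gain, n_neurons=64, kappa=8.0, n_trials=2000):
    """Recall errors of an idealized homogeneous population code.

    Neurons have von Mises tuning curves; spike counts are Poisson with
    mean proportional to `gain`; decoding is maximum likelihood on a grid.
    """
    prefs = np.linspace(-np.pi, np.pi, n_neurons, endpoint=False)
    grid = np.linspace(-np.pi, np.pi, 361)
    tuning = np.exp(kappa * (np.cos(grid[:, None] - prefs) - 1.0))  # (grid, neuron)
    log_rates = np.log(gain * tuning)
    norm = gain * tuning.sum(axis=1)
    errors = np.empty(n_trials)
    for t in range(n_trials):
        theta = rng.uniform(-np.pi, np.pi)
        counts = rng.poisson(gain * np.exp(kappa * (np.cos(theta - prefs) - 1.0)))
        theta_hat = grid[np.argmax(log_rates @ counts - norm)]
        errors[t] = np.angle(np.exp(1j * (theta_hat - theta)))  # wrap to (-pi, pi]
    return errors

err_low = simulate_errors(gain=0.5)   # taxed resources: low spiking gain
err_high = simulate_errors(gain=5.0)  # ample resources: high spiking gain
```

At low gain the error distribution grows a heavy, roughly uniform tail (trials with few or no spikes decode at chance), which is the component the normal-plus-uniform mixture is meant to capture.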

2016 ◽  
Author(s):  
Paul M Bays

Abstract: Simple visual features, such as orientation, are thought to be represented in the spiking of visual neurons using population codes. I show that optimal decoding of such activity predicts characteristic deviations from the normal distribution of errors at low gains. Examining human perception of orientation stimuli, I show that these predicted deviations are present at near-threshold levels of contrast. The findings may provide a neural-level explanation for the appearance of a threshold in perceptual awareness, whereby stimuli are categorized as seen or unseen. As well as varying in error magnitude, perceptual judgments differ in certainty about what was observed. I demonstrate that variations in the total spiking activity of a neural population can account for the empirical relationship between subjective confidence and precision. These results establish population coding and decoding as the neural basis of perception and perceptual confidence.
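The claimed link between total spiking activity and precision is easy to illustrate in the same idealized setting: conditional on the total spike count, decoding precision scales with that count, so high-count trials ("high confidence") should show smaller errors. This is a hedged stand-in for the paper's analysis, with von Mises tuning, Poisson spiking, and illustrative parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, kappa, gain = 64, 8.0, 0.8
prefs = np.linspace(-np.pi, np.pi, n_neurons, endpoint=False)
grid = np.linspace(-np.pi, np.pi, 361)
tuning = np.exp(kappa * (np.cos(grid[:, None] - prefs) - 1.0))
log_rates = np.log(gain * tuning)
norm = gain * tuning.sum(axis=1)

errors, total_spikes = [], []
for _ in range(4000):
    theta = rng.uniform(-np.pi, np.pi)
    counts = rng.poisson(gain * np.exp(kappa * (np.cos(theta - prefs) - 1.0)))
    theta_hat = grid[np.argmax(log_rates @ counts - norm)]
    errors.append(np.angle(np.exp(1j * (theta_hat - theta))))
    total_spikes.append(counts.sum())

errors = np.abs(np.array(errors))
total_spikes = np.array(total_spikes)
# split trials by total population activity, a proxy for confidence
high = total_spikes > np.median(total_spikes)
```

Trials in the high-activity half decode with smaller average error, mirroring the empirical confidence-precision relationship.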


2005 ◽  
Vol 17 (10) ◽  
pp. 2215-2239 ◽  
Author(s):  
Si Wu ◽  
Shun-ichi Amari

Two issues concerning the application of continuous attractors in neural systems are investigated: the computational robustness of continuous attractors with respect to input noise and the implementation of Bayesian online decoding. In a perfect mathematical model for continuous attractors, decoding results for stimuli are highly sensitive to input noise, and this sensitivity is the inevitable consequence of the system's neutral stability. To overcome this shortcoming, we modify the conventional network model by including extra dynamical interactions between neurons. These interactions vary according to the biologically plausible Hebbian learning rule and have the computational role of memorizing and propagating stimulus information accumulated with time. As a result, the new network model responds to the history of external inputs over a period of time, and hence becomes insensitive to short-term fluctuations. Also, since dynamical interactions provide a mechanism to convey the prior knowledge of the stimulus, that is, the information of the stimulus presented previously, the network effectively implements online Bayesian inference. This study also reveals some interesting behavior in neural population coding, such as the trade-off between decoding stability and the speed of tracking time-varying stimuli, and the relationship between neural tuning width and the tracking speed.
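The stabilizing role of interactions that carry input history can be illustrated without the full attractor network. The sketch below is an assumption-laden stand-in, not Wu and Amari's model: it places a leaky temporal integrator in front of a population-vector readout and shows that a readout with memory of the input history is far less sensitive to short-term input fluctuations than an instantaneous one. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, kappa, tau = 64, 4.0, 10.0
prefs = np.linspace(-np.pi, np.pi, n_neurons, endpoint=False)

def pop_vector(r):
    """Population-vector readout of the encoded angle."""
    return np.angle(np.sum(r * np.exp(1j * prefs)))

theta = 0.3                       # constant true stimulus
u = np.zeros(n_neurons)           # slow internal state: memory of input history
raw, smooth = [], []
for t in range(300):
    inp = np.exp(kappa * (np.cos(theta - prefs) - 1.0))
    inp += 0.3 * rng.standard_normal(n_neurons)   # short-term input fluctuations
    u += (inp - u) / tau                          # leaky temporal integration
    raw.append(pop_vector(inp))                   # instantaneous decode
    smooth.append(pop_vector(u))                  # history-aware decode
```

After an initial transient, the history-aware estimate hovers near the true stimulus with far lower variance than the instantaneous one, at the cost of slower tracking of changes, the trade-off the abstract describes.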


2016 ◽  
Vol 28 (2) ◽  
pp. 305-326 ◽  
Author(s):  
Xue-Xin Wei ◽  
Alan A. Stocker

Fisher information is generally believed to represent a lower bound on mutual information (Brunel & Nadal, 1998), a result that is frequently used in the assessment of neural coding efficiency. However, we demonstrate that the relation between these two quantities is more nuanced than previously thought. For example, we find that in the small noise regime, Fisher information actually provides an upper bound on mutual information. Generally, our results show that it is more appropriate to consider Fisher information as an approximation rather than a bound on mutual information. We analytically derive the correspondence between the two quantities and the conditions under which the approximation is good. Our results have implications for neural coding theories and the link between neural population coding and psychophysically measurable behavior. Specifically, they allow us to formulate the efficient coding problem of maximizing mutual information between a stimulus variable and the response of a neural population in terms of Fisher information. We derive a signature of efficient coding expressed as the correspondence between the population Fisher information and the distribution of the stimulus variable. The signature is more general than previously proposed solutions that rely on specific assumptions about the neural tuning characteristics. We demonstrate that it can explain measured tuning characteristics of cortical neural populations that do not agree with previous models of efficient coding.
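For the simplest possible channel the relationship can be checked in closed form. Assuming a scalar Gaussian stimulus s ~ N(0, var_s) observed as x = s + Gaussian noise, the Fisher information is J = 1/var_n, the exact mutual information is ½·log(1 + var_s/var_n), and the Fisher-based approximation H(s) − ½·E[log(2πe/J)] reduces to ½·log(var_s/var_n). The gap vanishes in the small-noise regime and grows when noise is large; this toy channel illustrates the "approximation, not bound" point (in this particular channel the approximation happens to sit below the exact value).

```python
import numpy as np

def mi_exact(var_s, var_n):
    """Exact mutual information (nats) of the Gaussian channel x = s + noise."""
    return 0.5 * np.log(1.0 + var_s / var_n)

def mi_fisher_approx(var_s, var_n):
    """Fisher-information-based approximation: H(s) - 0.5*E[log(2*pi*e / J)],
    with J = 1/var_n constant for additive Gaussian noise."""
    return 0.5 * np.log(var_s / var_n)

gap_small_noise = abs(mi_exact(1.0, 1e-4) - mi_fisher_approx(1.0, 1e-4))
gap_large_noise = abs(mi_exact(1.0, 1.0) - mi_fisher_approx(1.0, 1.0))
```

With var_n = 1e-4 the two quantities agree to within a fraction of a millinat; with var_n = 1 the approximation misses by roughly a third of a nat.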


1999 ◽  
Vol 11 (1) ◽  
pp. 75-84 ◽  
Author(s):  
Kechen Zhang ◽  
Terrence J. Sejnowski

Sensory and motor variables are typically represented by a population of broadly tuned neurons. A coarser representation with broader tuning can often improve coding accuracy, but sometimes the accuracy may also improve with sharper tuning. The theoretical analysis here shows that the relationship between tuning width and accuracy depends crucially on the dimension of the encoded variable. A general rule is derived for how the Fisher information scales with the tuning width, regardless of the exact shape of the tuning function or the probability distribution of spikes, and even allowing for some correlated noise between neurons. These results demonstrate a universal dimensionality effect in neural population coding.
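The dimensionality effect is easy to verify numerically. The sketch below assumes Gaussian tuning curves on a fixed-density grid with independent Poisson spiking, for which the total Fisher information is J = Σᵢ fᵢ′(s)²/fᵢ(s); the scaling J ∝ σ^(D−2) predicts that sharpening helps in 1D while broadening helps in 3D. All parameter values are illustrative.

```python
import numpy as np
from itertools import product

def fisher_info(sigma, dim, peak_rate=10.0, spacing=0.5, extent=6.0):
    """Fisher information about component 0 of a dim-D stimulus at the origin,
    for Gaussian tuning curves on a fixed-density grid with independent
    Poisson spiking: J = sum_i f_i'(0)^2 / f_i(0)."""
    axis = np.arange(-extent, extent + 1e-9, spacing)
    centers = np.array(list(product(axis, repeat=dim)))
    f = peak_rate * np.exp(-np.sum(centers**2, axis=1) / (2.0 * sigma**2))
    return np.sum(f * centers[:, 0] ** 2 / sigma**4)

# 1D: narrower tuning increases information (J ~ 1/sigma)
j1_narrow, j1_broad = fisher_info(1.0, dim=1), fisher_info(2.0, dim=1)
# 3D: broader tuning increases information (J ~ sigma)
j3_narrow, j3_broad = fisher_info(1.0, dim=3), fisher_info(2.0, dim=3)
```

Doubling σ roughly halves the information in 1D and roughly doubles it in 3D, with D = 2 as the indifferent case, matching the σ^(D−2) rule.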


2017 ◽  
Author(s):  
Sander W. Keemink ◽  
Mark C. W. van Rossum

Abstract: Throughout the nervous system, information is typically coded in activity distributed over large populations of neurons with broad tuning curves. In idealized situations where a single, continuous stimulus is encoded in a homogeneous population code, the value of an encoded stimulus can be read out without bias. Here we find that when multiple stimuli are simultaneously coded in the population, biases in the estimates of the stimuli and strong correlations between estimates can emerge. Although bias produced via this novel mechanism can be reduced by competitive coding and disappears in the complete absence of noise, the bias diminishes only slowly as a function of neural noise level. A Gaussian Process framework allows for accurate calculation of the bias and shows that a bimodal estimate distribution underlies it. The results have implications for neural coding and behavioral experiments.
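The core setup, two stimuli superimposed in one population response and decoded jointly, can be sketched without the paper's Gaussian Process machinery. The stand-in below sums two broad von Mises bumps, adds Gaussian noise, and decodes the pair by least squares over all candidate pairs; it shows that estimates of nearby stimuli interfere and degrade relative to well-separated ones. Everything here (tuning shape, noise model, parameters) is an illustrative assumption, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, kappa = 48, 2.0
prefs = np.linspace(-np.pi, np.pi, n_neurons, endpoint=False)
grid = np.linspace(-np.pi, np.pi, 72, endpoint=False)

def bump(theta):
    """Population response profile to one circular stimulus (broad tuning)."""
    return np.exp(kappa * (np.cos(np.asarray(theta)[..., None] - prefs) - 1.0))

# templates for every candidate pair: sum of two single-stimulus bumps
pair_templates = bump(grid)[:, None, :] + bump(grid)[None, :, :]

def circ_dist(a, b):
    return np.abs(np.angle(np.exp(1j * (a - b))))

def mean_pair_error(sep, n_trials=200, noise=0.5):
    """Mean total error of jointly decoding two stimuli `sep` radians apart."""
    s1, s2 = -sep / 2.0, sep / 2.0
    total = 0.0
    for _ in range(n_trials):
        r = bump(s1) + bump(s2) + noise * rng.standard_normal(n_neurons)
        i, j = np.unravel_index(np.argmin(((pair_templates - r) ** 2).sum(-1)),
                                (grid.size, grid.size))
        e1, e2 = grid[i], grid[j]
        # the estimated pair is unordered: match it to the true pair
        total += min(circ_dist(e1, s1) + circ_dist(e2, s2),
                     circ_dist(e1, s2) + circ_dist(e2, s1))
    return total / n_trials

err_close = mean_pair_error(0.5)  # overlapping bumps
err_far = mean_pair_error(2.5)    # well-separated bumps
```

Joint estimates of overlapping stimuli are markedly worse than for separated ones, the regime in which the paper's biases and inter-estimate correlations emerge.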


2005 ◽  
Vol 94 (3) ◽  
pp. 2182-2194 ◽  
Author(s):  
Katja Karmeier ◽  
Holger G. Krapp ◽  
Martin Egelhaaf

Coding of sensory information often involves the activity of neuronal populations. We demonstrate how the accuracy of a population code depends on integration time, the size of the population, and noise correlation between the participating neurons. The population we study consists of 10 identified visual interneurons in the blowfly Calliphora vicina involved in optic flow processing. These neurons are assumed to encode the animal's head or body rotations around horizontal axes by means of graded potential changes. From electrophysiological experiments we obtain parameters for modeling the neurons' responses. From applying a Bayesian analysis to the modeled population response we draw three major conclusions. First, integration of neuronal activities over a time period of only 5 ms after response onset is sufficient to decode the rotation axis accurately. Second, noise correlation between neurons has little impact on the population's performance. Third, although a population of only two neurons would be sufficient to encode any horizontal rotation axis, the population of 10 vertical system neurons is advantageous if the available integration time is short. For the fly, short integration times to decode neuronal responses are important when controlling rapid flight maneuvers.
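The trade-off between population size and integration time can be sketched with a generic stand-in (not the authors' blowfly model or data): graded responses with cosine tuning over rotation axes, Gaussian noise whose effective standard deviation shrinks as the square root of the number of integrated samples, and maximum-likelihood decoding (which, for equal-variance Gaussian noise, is nearest-template matching). Parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_decode_error(n_neurons, t_int, n_trials=400, noise_sd=1.0):
    """Mean error of ML decoding of a rotation axis (period pi) from graded
    responses; averaging `t_int` samples shrinks noise by sqrt(t_int)."""
    prefs = np.arange(n_neurons) * (np.pi / 2) / n_neurons
    grid = np.linspace(0.0, np.pi, 181, endpoint=False)
    templates = np.cos(2.0 * (grid[:, None] - prefs))   # (grid, neuron)
    sd = noise_sd / np.sqrt(t_int)
    total = 0.0
    for _ in range(n_trials):
        phi = rng.uniform(0.0, np.pi)
        r = np.cos(2.0 * (phi - prefs)) + sd * rng.standard_normal(n_neurons)
        # equal-variance Gaussian noise: ML = nearest template in Euclidean distance
        est = grid[np.argmin(((templates - r) ** 2).sum(axis=1))]
        d = abs(est - phi)
        total += min(d, np.pi - d)                       # circular distance, period pi
    return total / n_trials

err_2_short = mean_decode_error(2, t_int=1)    # minimal population, short integration
err_10_short = mean_decode_error(10, t_int=1)  # larger population, short integration
err_10_long = mean_decode_error(10, t_int=25)  # larger population, long integration
```

Two neurons suffice in principle (they span both quadrature components of the doubled angle), but when integration time is short the larger population decodes the axis far more accurately, echoing the abstract's third conclusion.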


2021 ◽  
Vol 44 (1) ◽  
Author(s):  
Rava Azeredo da Silveira ◽  
Fred Rieke

Neurons in the brain represent information in their collective activity. The fidelity of this neural population code depends on whether and how variability in the response of one neuron is shared with other neurons. Two decades of studies have investigated the influence of these noise correlations on the properties of neural coding. We provide an overview of the theoretical developments on the topic. Using simple, qualitative, and general arguments, we discuss, categorize, and relate the various published results. We emphasize the relevance of the fine structure of noise correlation, and we present a new approach to the issue. Throughout this review, we emphasize a geometrical picture of how noise correlations impact the neural code. Expected final online publication date for the Annual Review of Neuroscience, Volume 44 is July 2021.
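The geometrical picture can be made concrete with linear Fisher information, J = f′ᵀ Σ⁻¹ f′, where f′ is the vector of tuning-curve derivatives (the signal direction) and Σ the noise covariance. In the illustrative sketch below, noise concentrated along the signal direction ("differential" correlations) cuts information sharply, while equally strong noise orthogonal to it leaves information untouched; all values are made up for the demonstration.

```python
import numpy as np

n = 50
f_prime = np.ones(n)       # tuning-derivative (signal) direction
identity = np.eye(n)       # independent, unit-variance noise

def linear_fisher(fp, cov):
    """Linear Fisher information J = f'^T cov^{-1} f'."""
    return fp @ np.linalg.solve(cov, fp)

j_independent = linear_fisher(f_prime, identity)      # = n for independent noise

# noise variance added along the signal direction ("differential" correlations)
cov_diff = identity + 0.1 * np.outer(f_prime, f_prime)
j_differential = linear_fisher(f_prime, cov_diff)     # = n / (1 + 0.1 * n)

# equally strong structured noise orthogonal to the signal direction
v = np.zeros(n); v[0], v[1] = 1.0, -1.0               # v is orthogonal to f_prime
cov_orth = identity + 5.0 * np.outer(v, v)
j_orthogonal = linear_fisher(f_prime, cov_orth)       # unchanged from j_independent
```

This is the sense in which the fine structure of correlations, their alignment with the signal direction, rather than their overall strength, determines coding fidelity.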

