A Sparse Unreliable Distributed Code Underlies the Limits of Behavioral Discrimination

2018 ◽  
Author(s):  
Balaji Sriram ◽  
Alberto Cruz-Martin ◽  
Lillian Li ◽  
Pamela Reinagel ◽  
Anirvan Ghosh

Abstract The cortical code that underlies perception must enable subjects to perceive the world at timescales relevant for behavior. We find that mice can integrate visual stimuli very quickly (<100 ms) to reach plateau performance in an orientation discrimination task. To define features of cortical activity that underlie performance at these timescales, we measured single-unit responses in the mouse visual cortex at timescales relevant to this task. In contrast to high-contrast stimuli of longer duration, which elicit reliable activity in individual neurons, stimuli at the threshold of perception elicit extremely sparse and unreliable responses in V1, such that the activity of individual neurons does not reliably report orientation. Integrating information across neurons, however, quickly improves performance. Using a linear decoding model, we estimate that integrating information over 50–100 neurons is sufficient to account for behavioral performance. Thus, at the limits of perception the visual system is able to integrate information across a relatively small number of highly unreliable single units to generate reliable behavior.

2019 ◽  
Vol 30 (3) ◽  
pp. 1040-1055 ◽  
Author(s):  
Balaji Sriram ◽  
Lillian Li ◽  
Alberto Cruz-Martín ◽  
Anirvan Ghosh

Abstract The cortical code that underlies perception must enable subjects to perceive the world at time scales relevant for behavior. We find that mice can integrate visual stimuli very quickly (<100 ms) to reach plateau performance in an orientation discrimination task. To define features of cortical activity that underlie performance at these time scales, we measured single-unit responses in the mouse visual cortex at time scales relevant to this task. In contrast to high-contrast stimuli of longer duration, which elicit reliable activity in individual neurons, stimuli at the threshold of perception elicit extremely sparse and unreliable responses in the primary visual cortex such that the activity of individual neurons does not reliably report orientation. Integrating information across neurons, however, quickly improves performance. Using a linear decoding model, we estimate that integrating information over 50–100 neurons is sufficient to account for behavioral performance. Thus, at the limits of visual perception, the visual system integrates information encoded in the probabilistic firing of unreliable single units to generate reliable behavior.
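The pooling claim in this abstract can be illustrated with a toy simulation. The single-neuron model below (unit-variance Gaussian responses with a weak stimulus-dependent mean shift) and the sign-of-the-mean linear readout are illustrative assumptions for a sketch, not the paper's fitted decoding model or parameters:

```python
# Sketch: a linear decoder pooling many unreliable "neurons".
# Neuron model and all parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate_accuracy(n_neurons, n_trials=2000, signal=0.2):
    """Fraction of trials on which a sign-of-the-mean linear decoder
    recovers the stimulus (+1 or -1) from n_neurons noisy responses."""
    stim = rng.choice([-1.0, 1.0], size=n_trials)  # two orientations
    # each neuron: unit-variance noise plus a weak stimulus-driven mean shift
    responses = signal * stim[:, None] + rng.standard_normal((n_trials, n_neurons))
    decision = np.sign(responses.mean(axis=1))     # pooled linear readout
    return (decision == stim).mean()

acc_single = simulate_accuracy(1)    # one unreliable neuron: near chance
acc_pool = simulate_accuracy(100)    # pooling ~100 neurons: reliable
print(acc_single, acc_pool)
```

With these toy parameters a single neuron performs barely above chance while the 100-neuron pool is nearly perfect, mirroring the qualitative argument that behavior can be reliable even when every contributing unit is not.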


2018 ◽  
Author(s):  
Miaomiao Jin ◽  
Jeffrey M. Beck ◽  
Lindsey L. Glickfeld

Abstract Sensory information is encoded by populations of cortical neurons. Yet it is unknown how this information is used for even simple perceptual choices such as discriminating orientation. To determine the computation underlying this perceptual choice, we took advantage of the robust adaptation in the mouse visual system. We find that adaptation increases animals' thresholds for orientation discrimination. This was unexpected, since optimal computations that take advantage of all available sensory information predict that the shift in tuning and the increase in signal-to-noise ratio in the adapted condition should improve discrimination. Instead, we find that the effects of adaptation on behavior can be explained by perceptual-choice circuits that appropriately rely on target-preferring neurons but fail to discount neurons that prefer the distractor. This suggests that, to solve this task, the circuit has adopted a suboptimal strategy that discards important task-related information to implement a feed-forward visual computation.
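The contrast between an optimal readout and one that fails to discount distractor-preferring neurons can be sketched in a toy model. The two Gaussian neuron pools, the signal strength, and the decision thresholds below are hypothetical choices for illustration, not the study's fitted circuit model:

```python
# Sketch: optimal vs. suboptimal linear readouts of two neuron pools.
# Pool sizes, signal strength, and noise model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, mu, trials = 10, 0.3, 4000

stim = rng.choice([0, 1], size=trials)  # 1 = target, 0 = distractor
# target-preferring pool responds more to the target; distractor pool to the distractor
t_pool = mu * stim[:, None] + rng.standard_normal((trials, n))
d_pool = mu * (1 - stim)[:, None] + rng.standard_normal((trials, n))

# optimal linear readout: weight distractor-preferring neurons negatively
opt_dec = (t_pool.sum(1) - d_pool.sum(1)) > 0
# suboptimal readout: rely on target-preferring neurons, ignore the rest
sub_dec = t_pool.sum(1) > (n * mu / 2)

acc_opt = (opt_dec == stim.astype(bool)).mean()
acc_sub = (sub_dec == stim.astype(bool)).mean()
print(acc_opt, acc_sub)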


2016 ◽  
Vol 23 (5) ◽  
pp. 529-541 ◽  
Author(s):  
Sara Ajina ◽  
Holly Bridge

Damage to the primary visual cortex removes the major input from the eyes to the brain, causing significant visual loss as patients are unable to perceive the side of the world contralateral to the damage. Some patients, however, retain the ability to detect visual information within this blind region; this is known as blindsight. By studying the visual pathways that underlie this residual vision in patients, we can uncover additional aspects of the human visual system that likely contribute to normal visual function but cannot be revealed under physiological conditions. In this review, we discuss the residual abilities and neural activity that have been described in blindsight and the implications of these findings for understanding the intact system.


2021 ◽  
Author(s):  
Aran Nayebi ◽  
Nathan C. L. Kong ◽  
Chengxu Zhuang ◽  
Justin L. Gardner ◽  
Anthony M. Norcia ◽  
...  

Task-optimized deep convolutional neural networks are the most quantitatively accurate models of the primate ventral visual stream. However, such networks are implausible as models of the mouse visual system, because mouse visual cortex has a shallower hierarchy and the supervised objectives these networks are typically trained with are likely ethologically relevant neither in content nor in quantity. Here we develop shallow network architectures that are more consistent with anatomical and physiological studies of mouse visual cortex than current models. We demonstrate that hierarchically shallow architectures trained using contrastive objective functions applied to visual-acuity-adapted images achieve neural prediction performance that exceeds that of the same architectures trained in a supervised manner, and result in the most quantitatively accurate models of the mouse visual system. Moreover, these models' neural predictivity significantly surpasses that of supervised, deep architectures that are known to correspond well to the primate ventral visual stream. Finally, we derive a novel measure of inter-animal consistency and show that the best models closely match this quantity across visual areas. Taken together, our results suggest that contrastive objectives operating on shallow architectures with ethologically motivated image transformations may be a biologically plausible computational theory of visual coding in mice.
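For readers unfamiliar with contrastive objectives, a common instance is the SimCLR-style normalized-temperature cross-entropy (NT-Xent) loss, sketched below in NumPy. This is one representative contrastive loss, not necessarily the exact objective used in the paper; the batch shapes and temperature are illustrative:

```python
# Sketch of a SimCLR-style NT-Xent contrastive loss (one common choice;
# the paper's exact objective may differ).
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """Contrastive loss over a batch of paired embeddings, where z1[i] and
    z2[i] are two augmented views of image i. Lower when pairs align."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize
    sim = z @ z.T / tau                               # scaled cosine similarities
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # the positive for row i is its other view: i+n (or i-n)
    pos = np.concatenate([np.arange(n) + n, np.arange(n)])
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

rng = np.random.default_rng(0)
z1 = rng.standard_normal((8, 16))
loss_matched = nt_xent(z1, z1 + 0.01 * rng.standard_normal((8, 16)))  # aligned views
loss_random = nt_xent(z1, rng.standard_normal((8, 16)))               # unrelated pairs
print(loss_matched, loss_random)
```

The loss is low when the two views of each image map to nearby embeddings and high otherwise, which is the pressure that shapes representations without supervised labels.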


2016 ◽  
Author(s):  
Inbal Ayzenshtat ◽  
Jesse Jackson ◽  
Rafael Yuste

Abstract The response properties of neurons to sensory stimuli have been used to identify their receptive fields and functionally map sensory systems. In primary visual cortex, most neurons are selective to a particular orientation and spatial frequency of the visual stimulus. Using two-photon calcium imaging of neuronal populations from the primary visual cortex of mice, we have characterized the response properties of neurons to various orientations and spatial frequencies. Surprisingly, we found that the orientation selectivity of neurons actually depends on the spatial frequency of the stimulus. This dependence can be easily explained if one assumes spatially asymmetric Gabor-type receptive fields. We propose that receptive fields of neurons in layer 2/3 of visual cortex are indeed spatially asymmetric, and that this asymmetry could be used effectively by the visual system to encode natural scenes.

Significance Statement In this manuscript we demonstrate that the orientation selectivity of neurons in the primary visual cortex of the mouse is highly dependent on the stimulus spatial frequency (SF). This dependence is realized quantitatively in a decrease in the selectivity strength of cells at non-optimal SFs and, more importantly, it is also evident qualitatively in a shift in the preferred orientation of cells at non-optimal SFs. We show that a receptive-field model of a 2D asymmetric Gabor, rather than a symmetric one, can explain this surprising observation. Therefore, we propose that the receptive fields of neurons in layer 2/3 of mouse visual cortex are spatially asymmetric and that this asymmetry could be used effectively by the visual system to encode natural scenes.

Highlights
– Orientation selectivity is dependent on spatial frequency.
– An asymmetric Gabor model can explain this dependence.
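The asymmetric-Gabor argument can be checked numerically with a toy filter. The specific form of asymmetry below (a Gaussian envelope tilted 30° relative to the carrier) and all sizes and frequencies are assumptions for illustration, not the receptive-field parameters fitted in the study:

```python
# Sketch: an asymmetric Gabor filter whose preferred orientation shifts
# away from the carrier orientation at a non-optimal spatial frequency.
# Envelope tilt, sigmas, and frequencies are illustrative assumptions.
import numpy as np

x, y = np.meshgrid(np.arange(-32, 32, dtype=float), np.arange(-32, 32, dtype=float))
phi = np.deg2rad(30.0)  # envelope axis tilted 30 deg relative to the carrier
xr = np.cos(phi) * x + np.sin(phi) * y
yr = -np.sin(phi) * x + np.cos(phi) * y
sigma_long, sigma_short, f0 = 12.0, 5.0, 0.1
envelope = np.exp(-(xr**2 / (2 * sigma_long**2) + yr**2 / (2 * sigma_short**2)))
rf = envelope * np.cos(2 * np.pi * f0 * x)  # asymmetric Gabor receptive field

def preferred_orientation(f):
    """Orientation (deg) of the grating at frequency f that maximally
    drives the filter, using a phase-invariant (complex) response."""
    thetas = np.deg2rad(np.arange(-90, 90, 1.0))
    resp = [abs((rf * np.exp(2j * np.pi * f * (np.cos(th) * x + np.sin(th) * y))).sum())
            for th in thetas]
    return np.rad2deg(thetas[int(np.argmax(resp))])

pref_opt = preferred_orientation(f0)      # at the optimal spatial frequency
pref_low = preferred_orientation(f0 / 2)  # at half the optimal frequency
print(pref_opt, pref_low)
```

At the optimal spatial frequency the preferred orientation matches the carrier (about 0°), while at half that frequency the tilted envelope pulls the peak to a different orientation, reproducing the qualitative effect the abstract describes.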


2005 ◽  
Vol 94 (1) ◽  
pp. 282-294 ◽  
Author(s):  
Alan B Saul ◽  
Peter L Carras ◽  
Allen L Humphrey

Motion in the visual scene is processed by direction-selective neurons in primary visual cortex. These cells receive inputs that differ in space and time. What are these inputs? A previous single-unit recording study in anesthetized monkey V1 proposed that the two major streams arising in the primate retina, the M and P pathways, differed in space and time as required to create direction selectivity. We confirmed that cortical cells driven by P inputs tend to have sustained responses. The M pathway, however, as assessed by recordings in layer 4Cα and from cells with high contrast sensitivity, is not purely transient. The diversity of timing in the M stream suggests that combinations of M inputs, as well as of M and P inputs, create direction selectivity.


2014 ◽  
Vol 26 (10) ◽  
pp. 2187-2200 ◽  
Author(s):  
Hamed Zivari Adab ◽  
Ivo D. Popivanov ◽  
Wim Vanduffel ◽  
Rufin Vogels

Practicing simple visual detection and discrimination tasks improves performance, a signature of adult brain plasticity. The neural mechanisms that underlie these changes in performance are still unclear. Previously, we reported that practice in discriminating the orientation of noisy gratings (coarse orientation discrimination) increased the ability of single neurons in the early visual area V4 to discriminate the trained stimuli. Here, we ask whether practice in this task also changes the stimulus tuning properties of later visual cortical areas, despite the use of simple grating stimuli. To identify candidate areas, we used fMRI to map activations to noisy gratings in trained rhesus monkeys, revealing a region in the posterior inferior temporal (PIT) cortex. Subsequent single unit recordings in PIT showed that the degree of orientation selectivity was similar to that of area V4 and that the PIT neurons discriminated the trained orientations better than the untrained orientations. Unlike in previous single unit studies of perceptual learning in early visual cortex, more PIT neurons preferred trained compared with untrained orientations. The effects of training on the responses to the grating stimuli were also present when the animals were performing a difficult orthogonal task in which the grating stimuli were task-irrelevant, suggesting that the training effect does not need attention to be expressed. The PIT neurons could support orientation discrimination at low signal-to-noise levels. These findings suggest that extensive practice in discriminating simple grating stimuli not only affects early visual cortex but also changes the stimulus tuning of a late visual cortical area.


Perception ◽  
1988 ◽  
Vol 17 (5) ◽  
pp. 597-602 ◽  
Author(s):  
Alan Slater ◽  
Victoria Morison ◽  
Marcia Somers

There is some controversy concerning whether the visual abilities of the newborn are mediated entirely through subcortical pathways or whether the visual cortex is functioning at birth. A critical test of cortical functioning is discrimination of orientation: orientation-selective neurons are found in the visual cortex but not in subcortical parts of the visual system. An experiment is described in which newborn infants were habituated to a square-wave grating oriented 45° from vertical. After habituation, significant preferences for the novel, mirror-image grating were found, a result which argues for some degree of visual cortical functioning at birth.


2020 ◽  
Author(s):  
Jianghong Shi ◽  
Michael A. Buice ◽  
Eric Shea-Brown ◽  
Stefan Mihalas ◽  
Bryan Tripp

Convolutional neural networks trained on object recognition derive some inspiration from the neuroscience of the visual system in primates, and have been used as models of the feedforward computation performed in the primate ventral stream. In contrast to the hierarchical organization of primates, the visual system of the mouse has a flatter hierarchy. Since mice are capable of visually guided behavior, this raises questions about the role of architecture in neural computation. In this work, we introduce a framework for building a biologically constrained convolutional neural network model of lateral areas of the mouse visual cortex. The structural parameters of the network are derived from experimental measurements, specifically estimates of the numbers of neurons in each area and cortical layer, the interareal connectome, and the statistics of connections between cortical layers. This network is constructed to support detailed task-optimized models of mouse visual cortex, with neural populations that can be compared to specific corresponding populations in the mouse brain. The code is freely available to support such research.


2021 ◽  
Author(s):  
Jianghong Shi ◽  
Bryan Tripp ◽  
Eric Shea-Brown ◽  
Stefan Mihalas ◽  
Michael Buice

Convolutional neural networks trained on object recognition derive inspiration from the neural architecture of the visual system in primates, and have been used as models of the feedforward computation performed in the primate ventral stream. In contrast to the deep hierarchical organization of primates, the visual system of the mouse has a shallower arrangement. Since mice and primates are both capable of visually guided behavior, this raises questions about the role of architecture in neural computation. In this work, we introduce a novel framework for building a biologically constrained convolutional neural network model of the mouse visual cortex. The architecture and structural parameters of the network are derived from experimental measurements, specifically the 100-micrometer resolution interareal connectome, estimates of the numbers of neurons in each area and cortical layer, and the statistics of connections between cortical layers. This network is constructed to support detailed task-optimized models of mouse visual cortex, with neural populations that can be compared to specific corresponding populations in the mouse brain. Using a well-studied image classification task as our working example, we demonstrate the computational capability of this mouse-sized network. Given its relatively small size, MouseNet achieves roughly two-thirds of VGG16's performance level on ImageNet. In combination with the large-scale Allen Brain Observatory Visual Coding dataset, we use representational similarity analysis to quantify the extent to which MouseNet recapitulates the neural representation in mouse visual cortex. Importantly, we provide evidence that optimizing for task performance does not improve similarity to the corresponding biological system beyond a certain point. We demonstrate that the distributions of some physiological quantities are closer to the observed distributions in the mouse brain after task training. We encourage the use of the MouseNet architecture by making the code freely available.
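Representational similarity analysis, the comparison method named in this abstract, can be sketched in a few lines: build a representational dissimilarity matrix (RDM) over stimuli for each system, then correlate the RDMs' off-diagonal entries. The toy data below stand in for model and neural responses; they are illustrative, not the Allen Brain Observatory data, and plain Pearson correlation is used for brevity where rank correlation is also common:

```python
# Sketch of representational similarity analysis (RSA) on toy data.
# The "model" and "brain" responses are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the population response vectors for each pair of stimuli."""
    return 1.0 - np.corrcoef(responses)

def rsa_score(resp_a, resp_b):
    """Pearson correlation between the upper triangles of the two RDMs."""
    iu = np.triu_indices(resp_a.shape[0], k=1)
    return np.corrcoef(rdm(resp_a)[iu], rdm(resp_b)[iu])[0, 1]

# toy data: 20 stimuli; "model" and "brain" share a latent structure
latent = rng.standard_normal((20, 5))
model_resp = latent @ rng.standard_normal((5, 40)) + 0.1 * rng.standard_normal((20, 40))
brain_resp = latent @ rng.standard_normal((5, 30)) + 0.1 * rng.standard_normal((20, 30))
unrelated = rng.standard_normal((20, 30))

score_shared = rsa_score(model_resp, brain_resp)
score_unrelated = rsa_score(model_resp, unrelated)
print(score_shared, score_unrelated)
```

Because RSA compares stimulus-by-stimulus geometry rather than unit-by-unit responses, it allows a network and a brain area with different numbers of units to be compared directly, which is why it suits model-to-cortex comparisons like the one described here.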

