How Are Complex Cell Properties Adapted to the Statistics of Natural Stimuli?

2004 ◽  
Vol 91 (1) ◽  
pp. 206-212 ◽  
Author(s):  
Konrad P. Körding ◽  
Christoph Kayser ◽  
Wolfgang Einhäuser ◽  
Peter König

Sensory areas should be adapted to the properties of their natural stimuli. What are the underlying rules that match the properties of complex cells in primary visual cortex to their natural stimuli? To address this issue, we sampled movies from a camera carried by a freely moving cat, capturing the dynamics of image motion as the animal explores an outdoor environment. We use these movie sequences as input to simulated neurons. Following the intuition that many meaningful high-level variables, e.g., identities of visible objects, do not change rapidly in natural visual stimuli, we adapt the neurons to exhibit firing rates that are stable over time. We find that simulated neurons, which have optimally stable activity, display many properties that are observed for cortical complex cells. Their response is invariant with respect to stimulus translation and reversal of contrast polarity. Furthermore, spatial frequency selectivity and the aspect ratio of the receptive field quantitatively match the experimentally observed characteristics of complex cells. Hence, the population of complex cells in the primary visual cortex can be described as forming an optimally stable representation of natural stimuli.
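The temporal-stability objective described in the abstract can be illustrated with a minimal sketch. The objective below (variance of the frame-to-frame change relative to overall variance) is an illustrative "slowness" measure in the spirit of the paper, not its exact optimization criterion:

```python
import numpy as np

def temporal_stability(responses):
    """Slowness objective: variance of the frame-to-frame change divided
    by the variance of the response. Lower values mean the activity is
    more stable over time."""
    responses = np.asarray(responses, dtype=float)
    return np.diff(responses, axis=0).var(axis=0) / responses.var(axis=0)

# A slowly drifting response scores as more stable than a flickering one.
t = np.linspace(0, 2 * np.pi, 200)
slow = np.sin(t)          # slowly varying activity
fast = np.sin(20 * t)     # rapidly varying activity
```

Neurons optimized under such a criterion keep their firing rates stable while the retinal image shifts rapidly, which is what yields translation- and polarity-invariant, complex-cell-like responses.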

2017 ◽  
Author(s):  
Santiago A. Cadena ◽  
George H. Denfield ◽  
Edgar Y. Walker ◽  
Leon A. Gatys ◽  
Andreas S. Tolias ◽  
...  

Abstract Despite great efforts over several decades, our best models of primary visual cortex (V1) still predict spiking activity quite poorly when probed with natural stimuli, highlighting our limited understanding of the nonlinear computations in V1. Recently, two approaches based on deep learning have been successfully applied to neural data: on the one hand, transfer learning from networks trained on object recognition worked remarkably well for predicting neural responses in higher areas of the primate ventral stream, but has not yet been used to model spiking activity in early stages such as V1. On the other hand, data-driven models have been used to predict neural responses in the early visual system (retina and V1) of mice, but not primates. Here, we test the ability of both approaches to predict spiking activity in response to natural images in V1 of awake monkeys. Even though V1 is at an early to intermediate stage of the visual system, we found that the transfer learning approach performed similarly well to the data-driven approach, and both outperformed classical linear-nonlinear and wavelet-based feature representations that build on existing theories of V1. Notably, transfer learning using a pre-trained feature space required substantially less experimental time to achieve the same performance. In conclusion, multi-layer convolutional neural networks (CNNs) set the new state of the art for predicting neural responses to natural images in primate V1, and deep features learned for object recognition are better explanations for V1 computation than all previous filter-bank theories. This finding strengthens the case for V1 models that are multiple nonlinearities away from the image domain and supports the idea of explaining early visual cortex in terms of high-level functional goals.

Author summary Predicting the responses of sensory neurons to arbitrary natural stimuli is of major importance for understanding their function. Arguably the most studied cortical area is primary visual cortex (V1), where many models have been developed to explain its function. However, the most successful models, built on neurophysiologists' intuitions, still fail to account for spiking responses to natural images. Here, we model spiking activity in V1 of monkeys using deep convolutional neural networks (CNNs), which have been successful in computer vision. We both trained CNNs directly to fit the data and used CNNs trained to solve a high-level task (object categorization). With these approaches, we are able to outperform previous models and improve the state of the art in predicting the responses of early visual neurons to natural images. Our results have two important implications. First, since V1 is the result of several nonlinear stages, it should be modeled as such. Second, functional models of entire visual pathways, of which V1 is an early stage, not only account for higher areas of those pathways but also provide useful representations for predicting V1 responses.
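The transfer-learning readout the abstract describes, fixed pre-trained features plus a learned linear mapping to each neuron's spike counts, can be sketched as follows. The feature space here is a stand-in (a fixed random ReLU projection rather than a CNN trained on object recognition), and the ridge-regression readout is a common choice for such fits, not necessarily the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_features, n_images = 64, 32, 500

# Stand-in for the pre-trained feature space: a fixed random ReLU
# projection instead of a CNN trained on object recognition.
W_pretrained = rng.normal(size=(n_pixels, n_features))

def features(images):
    return np.maximum(images @ W_pretrained, 0.0)

# Simulated neuron whose spike counts are a noisy nonlinear function of
# the stimulus (the ground truth lives in the same feature space here
# only to keep the toy problem solvable).
images = rng.normal(size=(n_images, n_pixels))
X = features(images)
true_w = rng.normal(size=n_features)
spikes = np.maximum(X @ true_w, 0.0) + rng.normal(scale=0.5, size=n_images)

# Transfer-learning readout: ridge regression from the fixed, pre-trained
# features to the recorded spike counts.
lam = 1.0
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ spikes)
corr = np.corrcoef(X @ w_hat, spikes)[0, 1]
```

Because only the linear readout is fit to neural data, far fewer stimulus presentations are needed than when every layer must be learned from scratch, which is the experimental-time advantage the abstract reports.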


2012 ◽  
Vol 1470 ◽  
pp. 17-23 ◽  
Author(s):  
Zhen Liang ◽  
Hongxin Li ◽  
Yun Yang ◽  
Guangxing Li ◽  
Yong Tang ◽  
...  

2012 ◽  
Vol 24 (5) ◽  
pp. 1271-1296 ◽  
Author(s):  
Michael Teichmann ◽  
Jan Wiltschut ◽  
Fred Hamker

The human visual system has the remarkable ability to recognize objects largely invariant of their position, rotation, and scale. A good interpretation of neurobiological findings involves a computational model that simulates the signal processing of the visual cortex. Invariant recognition is likely achieved, at least in part, step by step from early to late areas of visual processing. While several algorithms have been proposed for learning feature detectors, few studies address the biologically plausible learning of such invariance. In this study, a set of Hebbian learning rules based on calcium dynamics and the homeostatic regulation of single neurons is proposed. Their performance is verified within a simple model of the primary visual cortex that learns so-called complex cells from a sequence of static images. As a result, the learned complex-cell responses are largely invariant to phase and position.
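The two ingredients named above, a Hebbian weight update and homeostatic regulation of a single neuron's activity, can be caricatured with a toy rule. This is an Oja-style rule with an adaptive threshold, chosen for brevity; it is not the calcium-based rule of the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def train(inputs, eta=0.01, eta_h=0.01, target=0.1, steps=5000):
    """Oja-style Hebbian rule plus a homeostatic threshold that drifts
    to keep the neuron's mean firing rate near `target`."""
    w = rng.normal(scale=0.1, size=inputs.shape[1])
    theta = 0.0
    for t in range(steps):
        x = inputs[t % len(inputs)]
        y = max(w @ x - theta, 0.0)       # rectified response
        w += eta * y * (x - y * w)        # Hebbian growth + Oja decay
        theta += eta_h * (y - target)     # homeostatic regulation
    return w, theta

# Inputs with one dominant direction of variation; the rule should
# align the weight vector with it.
principal = np.array([1.0, 0.0, 0.0, 0.0])
inputs = rng.normal(size=(200, 1)) * principal + 0.1 * rng.normal(size=(200, 4))
w, theta = train(inputs)
alignment = abs(w[0]) / np.linalg.norm(w)
```

The homeostatic term keeps the response rate bounded so Hebbian positive feedback cannot run away, the same stabilizing role the paper assigns to homeostatic regulation.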


2004 ◽  
Author(s):  
Tatyana Sharpee ◽  
Hiroki Sugihara ◽  
A. V. Kurgansky ◽  
S. Rebrik ◽  
M. P. Stryker ◽  
...  

2021 ◽  
Vol 14 ◽  
Author(s):  
Huijun Pan ◽  
Shen Zhang ◽  
Deng Pan ◽  
Zheng Ye ◽  
Hao Yu ◽  
...  

Previous studies indicate that top-down influence plays a critical role in visual information processing and perceptual detection. However, the substrate that carries top-down influence remains poorly understood. Using a combined technique of retrograde neuronal tracing and immunofluorescent double labeling, we characterized the distribution and cell types of feedback neurons in the cat's high-level visual cortical areas that send direct connections to the primary visual cortex (V1: area 17). Our results showed that: (1) the high-level visual cortical area 21a in the ventral stream and the PMLS area in the dorsal stream contain similar proportions of feedback neurons projecting back to V1; (2) the distribution of feedback neurons in the higher-order visual areas 21a and PMLS was significantly denser than in the intermediate visual areas 19 and 18; (3) feedback neurons in all observed high-level visual areas were found in layers II–III, IV, V, and VI, with a higher proportion in layers II–III, V, and VI than in layer IV; and (4) most feedback neurons were CaMKII-positive excitatory neurons, and few were identified as inhibitory GABAergic neurons. These results may argue against a strict segregation of the ventral and dorsal streams during visual information processing, and support the "reverse hierarchy theory" or interactive models proposing that recurrent connections between V1 and higher-order visual areas constitute the functional circuits that mediate visual perception. Moreover, the corticocortical feedback neurons from high-level visual cortical areas to V1 are mostly excitatory in nature.


2017 ◽  
Author(s):  
Maria C. Dadarlat ◽  
Michael P. Stryker

Abstract Neurons in mouse primary visual cortex (V1) are selective for particular properties of visual stimuli. Locomotion causes a change in cortical state that leaves their selectivity unchanged but strengthens their responses. Both locomotion and the change in cortical state are initiated by projections from the mesencephalic locomotor region (MLR), the latter through a disinhibitory circuit in V1. The function served by this change in cortical state is unknown. By recording simultaneously from a large number of single neurons in alert mice viewing moving gratings, we investigated the relationship between locomotion and the information contained within the neural population. We found that locomotion improved the encoding of visual stimuli in V1 by two mechanisms. First, locomotion-induced increases in firing rates enhanced the mutual information between visual stimuli and single-neuron responses over a fixed window of time. Second, stimulus discriminability was improved, even for fixed population firing rates, because of a decrease in noise correlations across the population during locomotion. These two mechanisms contributed differently to improvements in discriminability across cortical layers, with changes in firing rates most important in the upper layers and changes in noise correlations most important in layer V. Together, these changes resulted in a three- to five-fold reduction in the time needed to precisely encode grating direction and orientation. These results support the hypothesis that cortical state shifts during locomotion to accommodate an increased load on the visual system when mice are moving.

Significance Statement This paper contains three novel findings about the representation of information in neurons within the primary visual cortex of the mouse. First, we show that locomotion reduces by at least a factor of three the time needed for information that distinguishes different visual stimuli to accumulate in the visual cortex. Second, we show that the effect of locomotion is to increase information in cells of all layers of the visual cortex. Third, we show that the means by which information is enhanced by locomotion differs between the upper layers, where the major effect is an increase in firing rates, and layer V, where the major effect is a reduction in noise correlations.
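The noise-correlation mechanism can be illustrated numerically: noise shared across a population cannot be averaged away, so removing it improves discriminability even when each neuron's signal and noise are unchanged. A minimal sketch with Gaussian responses and illustrative parameters (not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(2)

def population_dprime(n_neurons, n_trials, shared_sd):
    """d' for discriminating two stimuli from the population-averaged
    response, with noise split into shared and private components."""
    def responses(mu):
        common = rng.normal(size=(n_trials, 1)) * shared_sd   # correlated noise
        private = rng.normal(size=(n_trials, n_neurons))      # independent noise
        return (mu + common + private).mean(axis=1)
    pooled_a, pooled_b = responses(0.0), responses(1.0)
    sd = np.sqrt(0.5 * (pooled_a.var() + pooled_b.var()))
    return abs(pooled_b.mean() - pooled_a.mean()) / sd

d_correlated = population_dprime(50, 2000, shared_sd=1.0)    # shared noise present
d_decorrelated = population_dprime(50, 2000, shared_sd=0.0)  # shared noise removed
```

With the shared component removed, averaging over 50 neurons suppresses the remaining independent noise, so the same stimuli can be discriminated in far fewer trials, mirroring the reduction in encoding time reported above.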


2020 ◽  
Author(s):  
Ali Almasi ◽  
Hamish Meffin ◽  
Shaun L. Cloherty ◽  
Yan Wong ◽  
Molis Yunzab ◽  
...  

Abstract Visual object identification requires both selectivity for specific visual features that are important to the object's identity and invariance to feature manipulations. For example, a hand can be shifted in position, rotated, or contracted but still be recognised as a hand. How are the competing requirements of selectivity and invariance built into the early stages of visual processing? Typically, cells in the primary visual cortex are classified as either simple or complex. Both show selectivity for edge orientation, but complex cells develop invariance to edge position within the receptive field (spatial phase). Using a data-driven model that extracts the spatial structures and nonlinearities associated with neuronal computation, we show that the balance between selectivity and invariance in complex cells is more diverse than previously thought. Phase invariance is frequently partial, thus retaining sensitivity to brightness polarity, while invariance to orientation and spatial frequency is more extensive than expected. The invariance arises from two independent factors: (1) the structure and number of filters and (2) the form of the nonlinearities that act upon the filter outputs. Both vary more than previously considered, so the primary visual cortex forms an elaborate set of generic feature sensitivities, providing the foundation for more sophisticated object processing.
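The filters-plus-nonlinearity structure described here generalizes the classical energy model of a complex cell, in which a quadrature pair of filters feeding a squaring nonlinearity produces phase invariance. A minimal 1-D sketch with illustrative parameters:

```python
import numpy as np

def gabor(x, freq, phase, sigma=2.0):
    """Gaussian-windowed sinusoid: a 1-D Gabor filter."""
    return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x + phase)

x = np.linspace(-5, 5, 101)
f_even = gabor(x, freq=0.5, phase=0.0)       # quadrature pair:
f_odd = gabor(x, freq=0.5, phase=np.pi / 2)  # 90 degrees apart in phase

def complex_cell(stimulus):
    # Energy model: sum of squared filter outputs.
    return (f_even @ stimulus) ** 2 + (f_odd @ stimulus) ** 2

# Gratings at the preferred frequency but different spatial phases
# evoke nearly identical responses: full phase invariance.
phases = np.linspace(0, 2 * np.pi, 8, endpoint=False)
responses = [complex_cell(np.cos(2 * np.pi * 0.5 * x + p)) for p in phases]
```

The paper's point is that real complex cells deviate from this idealization: more (or differently structured) filters and other output nonlinearities yield only partial phase invariance while broadening invariance to orientation and spatial frequency.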


2017 ◽  
Author(s):  
Jan Homann ◽  
Sue Ann Koay ◽  
Alistair M. Glidden ◽  
David W. Tank ◽  
Michael J. Berry

Abstract To explore theories of predictive coding, we presented mice with repeated sequences of images with novel images sparsely substituted. Under these conditions, mice could be rapidly trained to lick in response to a novel image, demonstrating a high level of performance on the first day of testing. Using 2-photon calcium imaging to record from layer 2/3 neurons in the primary visual cortex, we found that novel images evoked excess activity in the majority of neurons. When a new stimulus sequence was repeatedly presented, a majority of neurons had similarly elevated activity for the first few presentations, which then decayed to almost zero activity. The decay time of these transient responses was not fixed, but instead scaled with the length of the stimulus sequence. However, at the same time, we also found a small fraction of the neurons within the population (∼2%) that continued to respond strongly and periodically to the repeated stimulus. Decoding analysis demonstrated that both the transient and sustained responses encoded information about stimulus identity. We conclude that the layer 2/3 population uses a two-channel predictive code: a dense transient code for novel stimuli and a sparse sustained code for familiar stimuli. These results extend and unify existing theories about the nature of predictive neural codes.
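The two-channel code can be caricatured with a toy model: a per-stimulus adaptation variable makes the transient channel decay with familiarity, while the sustained channel keeps responding. This is purely illustrative, with a made-up adaptation rate:

```python
# Toy model of the two-channel predictive code: an adapting "transient"
# channel and a non-adapting "sustained" channel.
def run_sequence(stimuli, tau=0.8):
    adapt = {}                      # per-stimulus familiarity state
    transient, sustained = [], []
    for s in stimuli:
        a = adapt.get(s, 0.0)
        transient.append(1.0 - a)   # strong for novel, decays when repeated
        sustained.append(1.0)       # keeps responding to familiar stimuli
        adapt[s] = a + tau * (1.0 - a)
    return transient, sustained

seq = ["A", "B", "C"] * 10
seq[15] = "X"                       # sparsely substituted novel image
transient, sustained = run_sequence(seq)
```

The substituted novel image evokes a full-strength transient response while repeated images have adapted to near zero, reproducing the qualitative "excess activity for novel stimuli" the abstract reports.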


2020 ◽  
Vol 30 (9) ◽  
pp. 5067-5087
Author(s):  
Ali Almasi ◽  
Hamish Meffin ◽  
Shaun L Cloherty ◽  
Yan Wong ◽  
Molis Yunzab ◽  
...  

Abstract Visual object identification requires both selectivity for specific visual features that are important to the object's identity and invariance to feature manipulations. For example, a hand can be shifted in position, rotated, or contracted but still be recognized as a hand. How are the competing requirements of selectivity and invariance built into the early stages of visual processing? Typically, cells in the primary visual cortex are classified as either simple or complex. Both show selectivity for edge orientation, but complex cells develop invariance to edge position within the receptive field (spatial phase). Using a data-driven model that extracts the spatial structures and nonlinearities associated with neuronal computation, we quantitatively describe the balance between selectivity and invariance in complex cells. Phase invariance is frequently partial, while invariance to orientation and spatial frequency is more extensive than expected. The invariance arises from two independent factors: (1) the structure and number of filters and (2) the form of the nonlinearities that act upon the filter outputs. Both vary more than previously considered, so the primary visual cortex forms an elaborate set of generic feature sensitivities, providing the foundation for more sophisticated object processing.

