Hue tuning curves in V4 change with visual context

2019 ◽  
Author(s):  
Ari S. Benjamin ◽  
Pavan Ramkumar ◽  
Hugo Fernandes ◽  
Matthew Smith ◽  
Konrad P. Kording

Summary
To understand activity in higher visual cortex, researchers typically investigate how parametric changes in stimuli affect neural activity. These experiments reveal neurons’ general response properties only when the effect of a parameter in synthetic stimuli is representative of its effect in other visual contexts. However, in higher visual cortex it is rarely verified how well tuning to parameters of simplified experimental stimuli represents tuning to those parameters in complex or naturalistic stimuli. To evaluate precisely how much tuning curves can change with context, we developed a methodology to estimate tuning from neural responses to natural scenes. For neurons in macaque V4, we then estimated tuning curves for hue both from natural scene responses and from responses to artificial stimuli of varying hue. We found that neurons’ hue tuning on artificial stimuli was not representative of their hue tuning on natural images, even when the neurons were strongly modulated by hue. These neurons thus respond strongly to interactions between hue and other visual features. We argue that such feature interactions are generally to be expected if the cortex adopts an optimal coding strategy. This finding illustrates that tuning curves in higher visual cortex may be accurate only for stimuli similar to those shown in the lab and, for many neurons, do not generalize to naturalistic and behaviorally relevant stimuli.
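The tuning-curve comparison described above can be sketched in a few lines. Everything below is simulated for illustration — the hue values, responses, bin count, and the context-dependent tuning shift are all invented, and the study itself estimated natural-scene tuning with fitted predictive models rather than simple binning:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: per-stimulus dominant hue (radians) and responses of
# one neuron under two conditions. The tuning peak is deliberately shifted
# between contexts to mimic a context-dependent hue tuning curve.
n_stim = 2000
hue_artificial = rng.uniform(0, 2 * np.pi, n_stim)
hue_natural = rng.uniform(0, 2 * np.pi, n_stim)

resp_artificial = np.exp(np.cos(hue_artificial - 1.0)) + rng.normal(0, 0.1, n_stim)
resp_natural = np.exp(np.cos(hue_natural - 2.5)) + rng.normal(0, 0.1, n_stim)

def hue_tuning_curve(hue, resp, n_bins=16):
    """Bin responses by hue and average within each bin."""
    bins = np.linspace(0, 2 * np.pi, n_bins + 1)
    idx = np.digitize(hue, bins) - 1
    return np.array([resp[idx == b].mean() for b in range(n_bins)])

tc_art = hue_tuning_curve(hue_artificial, resp_artificial)
tc_nat = hue_tuning_curve(hue_natural, resp_natural)

# Correlation between the two tuning curves quantifies how well
# artificial-stimulus tuning generalizes to the natural-scene context.
generalization = np.corrcoef(tc_art, tc_nat)[0, 1]
```

A `generalization` value near 1 would indicate that artificial-stimulus tuning transfers to the natural-scene context; the simulated context shift here drives it toward zero.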

2003 ◽  
Vol 20 (1) ◽  
pp. 77-84 ◽  
Author(s):  
AN CAO ◽  
PETER H. SCHILLER

Relative motion information, especially relative speed between different input patterns, is required for solving many complex tasks of the visual system, such as depth perception by motion parallax and motion-induced figure/ground segmentation. However, little is known about the neural substrate for processing relative speed information. To explore the neural mechanisms for relative speed, we recorded single-unit responses to relative motion in the primary visual cortex (area V1) of rhesus monkeys while presenting sets of random-dot arrays moving at different speeds. We found that most V1 neurons were sensitive to the existence of a discontinuity in speed; that is, they showed higher responses when relative motion was presented than to homogeneous field motion. Seventy percent of the neurons in our sample responded predominantly to relative rather than to absolute speed. Relative speed tuning curves were similar at different center–surround velocity combinations. These relative motion-sensitive neurons in macaque area V1 probably contribute to figure/ground segmentation and motion discontinuity detection.
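One way to quantify the reported preference for relative over homogeneous motion is a simple contrast index. The firing rates below are invented, and the index is a generic modulation index rather than the paper's exact measure:

```python
import numpy as np

# Hypothetical spike rates (Hz) for one V1 neuron across repeats:
# a random-dot field with a centre/surround speed discontinuity
# versus the same field moving homogeneously.
resp_relative_motion = np.array([42.0, 38.5, 45.1, 40.2])
resp_homogeneous = np.array([21.3, 24.8, 19.9, 22.5])

r_rel = resp_relative_motion.mean()
r_hom = resp_homogeneous.mean()

# Positive when the neuron responds more strongly to relative motion
# than to homogeneous field motion; ranges from -1 to 1.
discontinuity_index = (r_rel - r_hom) / (r_rel + r_hom)
```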


2017 ◽  
Author(s):  
Santiago A. Cadena ◽  
George H. Denfield ◽  
Edgar Y. Walker ◽  
Leon A. Gatys ◽  
Andreas S. Tolias ◽  
...  

Abstract
Despite great efforts over several decades, our best models of primary visual cortex (V1) still predict spiking activity quite poorly when probed with natural stimuli, highlighting our limited understanding of the nonlinear computations in V1. Recently, two approaches based on deep learning have been successfully applied to neural data: on the one hand, transfer learning from networks trained on object recognition has worked remarkably well for predicting neural responses in higher areas of the primate ventral stream, but has not yet been used to model spiking activity in early stages such as V1. On the other hand, data-driven models have been used to predict neural responses in the early visual system (retina and V1) of mice, but not of primates. Here, we test the ability of both approaches to predict spiking activity in response to natural images in V1 of awake monkeys. Even though V1 is an early-to-intermediate stage of the visual system, we found that the transfer learning approach performed similarly well to the data-driven approach, and both outperformed classical linear-nonlinear and wavelet-based feature representations that build on existing theories of V1. Notably, transfer learning using a pre-trained feature space required substantially less experimental time to achieve the same performance. In conclusion, multi-layer convolutional neural networks (CNNs) set the new state of the art for predicting neural responses to natural images in primate V1, and deep features learned for object recognition are better explanations for V1 computation than all previous filter-bank theories. This finding strengthens the case for V1 models that are multiple nonlinearities away from the image domain and supports the idea of explaining early visual cortex in terms of high-level functional goals.
Author summary
Predicting the responses of sensory neurons to arbitrary natural stimuli is of major importance for understanding their function. Arguably the most studied cortical area is primary visual cortex (V1), where many models have been developed to explain its function. However, the most successful models, built on neurophysiologists’ intuitions, still fail to account for spiking responses to natural images. Here, we model spiking activity in V1 of monkeys using deep convolutional neural networks (CNNs), which have been successful in computer vision. We both trained CNNs directly to fit the data and used CNNs trained to solve a high-level task (object categorization). With these approaches, we outperform previous models and improve the state of the art in predicting the responses of early visual neurons to natural images. Our results have two important implications. First, since V1 is the result of several nonlinear stages, it should be modeled as such. Second, functional models of entire visual pathways, of which V1 is an early stage, account not only for higher areas of those pathways but also provide useful representations for V1 predictions.
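The transfer-learning approach amounts to freezing a feature space and fitting only a regularized linear readout per neuron. The sketch below substitutes random features for pre-trained CNN activations and uses simulated spike counts; the shapes, the ridge penalty, and the evaluation on the training set are all illustrative simplifications of the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "CNN features": fixed (not trained), one row per image.
# In the actual approach these would be activations of a network
# pre-trained on object recognition.
n_images, n_features, n_neurons = 500, 64, 10
features = rng.normal(size=(n_images, n_features))

# Simulated ground truth: log-linear rates and Poisson spike counts.
true_w = rng.normal(size=(n_features, n_neurons)) * 0.3
rates = np.exp(features @ true_w * 0.2)
spikes = rng.poisson(rates)

# Ridge-regression readout from the frozen feature space to spike counts.
lam = 1.0
w_hat = np.linalg.solve(
    features.T @ features + lam * np.eye(n_features),
    features.T @ spikes,
)
pred = features @ w_hat

# Per-neuron predictive performance (correlation on the training set;
# the paper evaluates on held-out images).
perf = np.array([
    np.corrcoef(pred[:, i], spikes[:, i])[0, 1] for i in range(n_neurons)
])
```

Because the feature space is fixed, only `w_hat` must be estimated from neural data, which is why the pre-trained approach needs substantially less experimental time than fitting a full network.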


2007 ◽  
Vol 24 (1) ◽  
pp. 65-77 ◽  
Author(s):  
YUNING SONG ◽  
CURTIS L. BAKER

Natural scenes contain a variety of visual cues that facilitate boundary perception (e.g., luminance, contrast, and texture). Here we explore whether single neurons in early visual cortex can process both contrast and texture cues. We recorded neural responses in cat A18 to both illusory contours formed by abutting gratings (ICs, texture-defined) and contrast-modulated gratings (CMs, contrast-defined). We found that if a neuron responded to one of the two stimuli, it also responded to the other. These neurons signaled similar contour orientation, spatial frequency, and movement direction of the two stimuli. A given neuron also exhibited similar selectivity for spatial frequency of the fine, stationary grating components (carriers) of the stimuli. These results suggest that the cue-invariance of early cortical neurons extends to different kinds of texture or contrast cues, and might arise from a common nonlinear mechanism.


2018 ◽  
Author(s):  
Takashi Yoshida ◽  
Kenichi Ohki

Abstract
Natural scenes sparsely activate neurons in the primary visual cortex (V1). However, how sparsely active neurons robustly represent natural images, and how that information is optimally decoded from the representation, have not been revealed. We reconstructed natural images from V1 activity in anaesthetized and awake mice. A single natural image was linearly decodable from a surprisingly small number of highly responsive neurons, and adding the remaining neurons even degraded the decoding. This representation was achieved by the diverse receptive fields (RFs) of the small number of highly responsive neurons. Furthermore, these neurons reliably represented the image across trials, regardless of trial-to-trial response variability. The reliable representation was supported by multiple neurons with overlapping RFs. Based on our results, the diverse, partially overlapping RFs ensure sparse and reliable representation. We propose a new representation scheme in which information is reliably represented while the representing neuronal patterns change across trials, and in which collecting only the activity of highly responsive neurons is an optimal decoding strategy for downstream neurons.
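The core decoding idea — reconstructing an image linearly from only the most responsive neurons — can be sketched with a toy linear population. The RFs, image size, cutoff `k`, and ridge penalty below are invented for illustration and do not reflect the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy population: flattened 8x8 "images"; each neuron has a random
# linear receptive field (RF).
n_pix, n_neurons = 64, 300
rfs = rng.normal(size=(n_neurons, n_pix))

# Training images used to fit the linear decoder.
train_imgs = rng.normal(size=(200, n_pix))
train_resp = train_imgs @ rfs.T

# Test image and its noisy single-trial population response.
img = rng.normal(size=n_pix)
resp = rfs @ img + rng.normal(0, 0.5, n_neurons)

# Keep only the k most responsive neurons, as in the proposed decoding
# strategy, and fit a ridge decoder from their responses to pixels.
k = 30
top = np.argsort(np.abs(resp))[-k:]
lam = 10.0
W = np.linalg.solve(
    train_resp[:, top].T @ train_resp[:, top] + lam * np.eye(k),
    train_resp[:, top].T @ train_imgs,
)
recon = resp[top] @ W

# Pixel-wise correlation between reconstruction and the true image.
quality = np.corrcoef(recon, img)[0, 1]
```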


2018 ◽  
Author(s):  
João Barbosa ◽  
Albert Compte

Abstract
Serial dependence, the bias of current estimates toward recent experiences, has been described experimentally during delayed estimation of many different visual features, with subjects tending to make estimates biased toward previous ones. It has been proposed that these attractive biases help stabilize perception in the face of correlated natural scene statistics as an adaptive mechanism, although this remains mostly theoretical. Color, which is strongly correlated in natural scenes, has never been studied with regard to its serial dependencies. Here, we found significant serial dependence in 6 out of 7 datasets with behavioral data from humans (total n=111) performing delayed estimation of color with uncorrelated sequential stimuli. Consistent with a drifting memory model, serial dependence was stronger when referenced to the previous report rather than to the previous stimulus. In addition, it built up through the experimental session, suggesting metaplastic mechanisms operating at a slower time scale than previously proposed (e.g., short-term synaptic facilitation). Because the stimuli, in contrast with natural scenes, were temporally uncorrelated, this build-up casts doubt on serial dependence being an ongoing adaptation to the stable statistics of the environment.
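A common way to measure serial dependence is to ask whether the current trial's error is attracted toward the previous report or the previous stimulus. The sketch below simulates delayed-estimation data with an attractive bias toward the previous report; the bias strength, noise level, and folded-error statistic are illustrative assumptions, not values from the datasets:

```python
import numpy as np

rng = np.random.default_rng(3)

def circdist(a, b):
    """Signed circular distance a - b, wrapped to (-pi, pi]."""
    return np.angle(np.exp(1j * (np.asarray(a) - np.asarray(b))))

# Synthetic delayed-estimation trials: each report equals the stimulus
# plus noise plus an attractive bias toward the previous report,
# mimicking a drifting-memory account.
n_trials = 5000
stim = rng.uniform(-np.pi, np.pi, n_trials)
reports = np.empty(n_trials)
reports[0] = stim[0]
for t in range(1, n_trials):
    attraction = 0.15 * np.sin(circdist(reports[t - 1], stim[t]))
    reports[t] = stim[t] + attraction + rng.normal(0, 0.5)

err = circdist(reports[1:], stim[1:])

def serial_dependence(reference):
    """Mean error folded by the sign of the circular distance from the
    current stimulus to the reference; positive values = attraction."""
    d = circdist(reference, stim[1:])
    return float(np.mean(err * np.sign(d)))

# Reference the bias either to the previous report or previous stimulus.
sd_to_prev_report = serial_dependence(reports[:-1])
sd_to_prev_stim = serial_dependence(stim[:-1])
```

Both statistics come out positive here because the simulated bias is attractive; the paper's comparison of the two reference frames is what distinguishes a drifting-memory account from adaptation to the previous stimulus.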


2020 ◽  
Author(s):  
Nina Kowalewski ◽  
Janne Kauttonen ◽  
Patricia L. Stan ◽  
Brian B. Jeon ◽  
Thomas Fuchs ◽  
...  

Summary
The development of the visual system is known to be shaped by early-life experience. To identify response properties that contribute to enhanced natural scene representation, we performed calcium imaging of excitatory neurons in the primary visual cortex (V1) of awake mice raised in three different conditions (standard-reared, dark-reared, and delayed visual experience) and compared neuronal responses to natural scene features with responses to simpler grating stimuli that varied in orientation and spatial frequency. We assessed population selectivity in V1 using decoding methods and found that natural scene discriminability increased by 75% between 4 and 6 weeks of age. Both natural scene and grating discriminability were higher in standard-reared animals than in those raised in the dark. This increase in discriminability was accompanied by a reduction in the number of neurons that responded to low spatial frequency gratings and an increase in neuronal preference for natural scenes. Light exposure restricted to a 2–4 week window during adulthood did not improve discriminability of either natural scenes or grating stimuli. Our results demonstrate that experience reduces the number of neurons required to effectively encode grating stimuli and that early visual experience enhances natural scene discriminability by directly increasing responsiveness to natural scene features.
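Discriminability of the kind assessed here is typically estimated by decoding stimulus identity from single-trial population activity. The sketch below uses a leave-one-trial-out nearest-centroid decoder on simulated responses; the population size, trial counts, and noise level are invented, and the paper's decoding method may differ:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated responses: n_scenes natural scenes, n_trials repeats each,
# n_neurons imaged neurons, with additive trial-to-trial noise.
n_scenes, n_trials, n_neurons = 8, 20, 50
signal = rng.normal(size=(n_scenes, n_neurons))  # mean response per scene
activity = signal[:, None, :] + rng.normal(0, 1.0, (n_scenes, n_trials, n_neurons))

# Leave-one-trial-out nearest-centroid decoding of scene identity.
correct = 0
for s in range(n_scenes):
    for t in range(n_trials):
        keep = np.arange(n_trials) != t
        centroids = np.array([
            (activity[c, keep] if c == s else activity[c]).mean(axis=0)
            for c in range(n_scenes)
        ])
        pred = int(np.argmin(((activity[s, t] - centroids) ** 2).sum(axis=1)))
        correct += pred == s

accuracy = correct / (n_scenes * n_trials)
```

Comparing such accuracies across rearing conditions or ages is one concrete way to operationalize "natural scene discriminability".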


2016 ◽  
Vol 116 (3) ◽  
pp. 1328-1343 ◽  
Author(s):  
Pavan Ramkumar ◽  
Patrick N. Lawlor ◽  
Joshua I. Glaser ◽  
Daniel K. Wood ◽  
Adam N. Phillips ◽  
...  

When we search for visual objects, the features of those objects bias our attention across the visual landscape (feature-based attention). The brain uses these top-down cues to select eye movement targets (spatial selection). The frontal eye field (FEF) is a prefrontal brain region implicated in selecting eye movements and is thought to reflect feature-based attention and spatial selection. Here, we study how FEF facilitates attention and selection in complex natural scenes. We ask whether FEF neurons facilitate feature-based attention by representing search-relevant visual features or whether they are primarily involved in selecting eye movement targets in space. We show that search-relevant visual features are weakly predictive of gaze in natural scenes and additionally have no significant influence on FEF activity. Instead, FEF activity appears to primarily correlate with the direction of the upcoming eye movement. Our result demonstrates a concrete need for better models of natural scene search and suggests that FEF activity during natural scene search is explained primarily by spatial selection.
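The model comparison at the heart of this result — whether FEF activity follows search-relevant features or the direction of the upcoming saccade — can be sketched by fitting both predictor sets to simulated spike counts and comparing variance explained. The simulated cell, predictors, and ordinary-least-squares fit are illustrative stand-ins for the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(5)

# One fixation per row: direction of the upcoming saccade and a
# hypothetical scalar "search relevance" of the fixated location.
n_fix = 1000
saccade_dir = rng.uniform(0, 2 * np.pi, n_fix)
feature_relevance = rng.uniform(0, 1, n_fix)

# Simulate a purely direction-tuned FEF cell (Poisson spike counts).
rate = 10 + 5 * np.cos(saccade_dir - 0.8)
spikes = rng.poisson(rate)

def r_squared(X, y):
    """Variance explained by an OLS fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

# Direction predictors (cosine tuning basis) vs. feature predictors.
r2_direction = r_squared(
    np.column_stack([np.cos(saccade_dir), np.sin(saccade_dir)]), spikes
)
r2_features = r_squared(feature_relevance[:, None], spikes)
```

For this simulated cell the saccade-direction model explains far more variance than the feature model, mirroring the pattern the study reports for FEF.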

