Faculty Opinions recommendation of Cortical sensitivity to visual features in natural scenes.

Author(s): Peter König

2017
Author(s): Ghislain St-Yves, Thomas Naselaris

Abstract
We introduce the feature-weighted receptive field (fwRF), an encoding model designed to balance expressiveness, interpretability and scalability. The fwRF is organized around the notion of a feature map—a transformation of visual stimuli into visual features that preserves the topology of visual space (but not necessarily the native resolution of the stimulus). The key assumption of the fwRF model is that activity in each voxel encodes variation in a spatially localized region across multiple feature maps. This region is fixed for all feature maps; however, the contribution of each feature map to voxel activity is weighted. Thus, the model has two separable sets of parameters: “where” parameters that characterize the location and extent of pooling over visual features, and “what” parameters that characterize tuning to visual features. The “where” parameters are analogous to classical receptive fields, while “what” parameters are analogous to classical tuning functions. By treating these as separable parameters, the fwRF model complexity is independent of the resolution of the underlying feature maps. This makes it possible to estimate models with thousands of high-resolution feature maps from relatively small amounts of data. Once a fwRF model has been estimated from data, spatial pooling and feature tuning can be read off directly with no (or very little) additional post-processing or in-silico experimentation.
We describe an optimization algorithm for estimating fwRF models from data acquired during standard visual neuroimaging experiments. We then demonstrate the model’s application to two distinct sets of features: Gabor wavelets and features supplied by a deep convolutional neural network. We show that when Gabor feature maps are used, the fwRF model recovers receptive fields and spatial frequency tuning functions consistent with known organizational principles of the visual cortex.
We also show that a fwRF model can be used to regress entire deep convolutional networks against brain activity. The ability to use whole networks in a single encoding model yields state-of-the-art prediction accuracy. Our results suggest a wide variety of uses for the feature-weighted receptive field model, from retinotopic mapping with natural scenes, to regressing the activities of whole deep neural networks onto measured brain activity.
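The core fwRF structure described above — one Gaussian "where" pooling field shared across all feature maps, plus a per-map "what" weight — can be sketched in a few lines. This is a minimal illustrative reconstruction with made-up shapes and function names (`gaussian_pool`, `fwrf_predict`), not the authors' implementation:

```python
import numpy as np

def gaussian_pool(feature_map, x, y, sigma):
    """Pool one feature map with a normalized 2-D Gaussian centered at (x, y)."""
    h, w = feature_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
    g /= g.sum()
    return float((feature_map * g).sum())

def fwrf_predict(feature_maps, x, y, sigma, weights):
    """Predicted voxel response: 'what' weights applied to 'where'-pooled maps.

    The same (x, y, sigma) receptive field is shared by every feature map,
    so model size grows with the number of maps, not their resolution.
    """
    pooled = np.array([gaussian_pool(fm, x, y, sigma) for fm in feature_maps])
    return float(weights @ pooled)

# Toy example: three 8x8 feature maps, one shared receptive field.
rng = np.random.default_rng(0)
maps = [rng.random((8, 8)) for _ in range(3)]
resp = fwrf_predict(maps, x=4.0, y=4.0, sigma=2.0,
                    weights=np.array([1.0, -0.5, 0.2]))
```

Because the Gaussian is normalized, pooling a constant map returns that constant, which is a convenient sanity check when fitting the "where" parameters.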


2017, Vol 114 (18), pp. 4793-4798
Author(s): Michael F. Bonner, Russell A. Epstein

A central component of spatial navigation is determining where one can and cannot go in the immediate environment. We used fMRI to test the hypothesis that the human visual system solves this problem by automatically identifying the navigational affordances of the local scene. Multivoxel pattern analyses showed that a scene-selective region of dorsal occipitoparietal cortex, known as the occipital place area, represents pathways for movement in scenes in a manner that is tolerant to variability in other visual features. These effects were found in two experiments: One using tightly controlled artificial environments as stimuli, the other using a diverse set of complex, natural scenes. A reconstruction analysis demonstrated that the population codes of the occipital place area could be used to predict the affordances of novel scenes. Taken together, these results reveal a previously unknown mechanism for perceiving the affordance structure of navigable space.


PLoS Biology, 2005, Vol 3 (10), pp. e342
Author(s): Gidon Felsen, Jon Touryan, Feng Han, Yang Dan

2018
Author(s): João Barbosa, Albert Compte

Abstract
Serial dependence, the bias of current estimates toward recent experience, has been described experimentally during delayed estimation of many different visual features, with subjects tending to make estimates biased toward previous ones. It has been proposed that these attractive biases are an adaptive mechanism that helps stabilize perception in the face of correlated natural scene statistics, although this remains mostly theoretical. Color, which is strongly correlated in natural scenes, has never been studied with regard to its serial dependencies. Here, we found significant serial dependence in 6 out of 7 datasets of behavioral data from humans (total n = 111) performing delayed estimation of color with uncorrelated sequential stimuli. Consistent with a drifting memory model, serial dependence was stronger when referenced to the previous report rather than to the previous stimulus. In addition, it built up through the experimental session, suggesting metaplastic mechanisms operating on a slower time scale than previously proposed (e.g., short-term synaptic facilitation). Because, in contrast with natural scenes, stimuli were temporally uncorrelated, this build-up casts doubt on serial dependence being an ongoing adaptation to the stable statistics of the environment.
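The attraction-toward-the-previous-report measure discussed above can be illustrated with a toy analysis on synthetic circular data. Everything here (`circ_dist`, `serial_bias`, the simulated observer) is a hedged sketch, not the paper's analysis code:

```python
import numpy as np

def circ_dist(a, b):
    """Signed circular distance in radians, wrapped to (-pi, pi]."""
    return np.angle(np.exp(1j * (a - b)))

def serial_bias(stimuli, reports):
    """Mean current-trial error projected onto the sign of the
    previous-report offset; positive values indicate attraction."""
    err = circ_dist(reports[1:], stimuli[1:])    # error on current trial
    prev = circ_dist(reports[:-1], stimuli[1:])  # previous report rel. current stimulus
    return float(np.mean(err * np.sign(prev)))

# Simulated observer whose report is pulled 0.1 rad toward the previous report.
rng = np.random.default_rng(1)
stim = rng.uniform(-np.pi, np.pi, 500)
rep = stim.copy()
for i in range(1, len(stim)):
    rep[i] = stim[i] + 0.1 * np.sign(circ_dist(rep[i - 1], stim[i]))
bias = serial_bias(stim, rep)  # positive -> attraction toward previous report
```

Referencing the bias to the previous report versus the previous stimulus, as the abstract describes, amounts to swapping `reports[:-1]` for `stimuli[:-1]` in the `prev` term.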


2021
Author(s): Olivier Pennacchio, Christina Halpin, Innes Cuthill, P. Lovell, Matthew Wheelwright, ...

Abstract
Animal warning signals show remarkable diversity, yet subjectively appear to share visual features that make defended prey stand out and look different from more cryptic palatable species. Here we develop and apply a computational model that emulates avian visual processing of pattern and colour to Lepidopteran wing patterns, showing that warning signals have specific neural signatures that set them apart not only from the patterns of undefended species but also from natural scenes. For the first time, we offer an objective and quantitative neural-level definition of warning signals based on how the pattern generates neural activity in the brain of the receiver. This opens new perspectives for understanding and testing how warning signals function and evolve, and, more generally, how sensory systems constrain general principles for signal design.


eLife, 2019, Vol 8
Author(s): Daniel Kaiser, Jacopo Turini, Radoslaw M Cichy

With every glimpse of our eyes, we sample only a small and incomplete fragment of the visual world, which needs to be contextualized and integrated into a coherent scene representation. Here we show that the visual system achieves this contextualization by exploiting spatial schemata, that is, our knowledge about the composition of natural scenes. We measured fMRI and EEG responses to incomplete scene fragments and used representational similarity analysis to reconstruct their cortical representations in space and time. We observed a sorting of representations according to the fragments' place within the scene schema, which occurred during perceptual analysis in the occipital place area and within the first 200 ms of vision. This schema-based coding operates flexibly across visual features (as measured by a deep neural network model) and different types of environments (indoor and outdoor scenes). This flexibility highlights the mechanism's ability to efficiently organize incoming information under dynamic real-world conditions.
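Representational similarity analysis, as used above, compares the pairwise-dissimilarity structure of two sets of condition patterns rather than the patterns themselves. A minimal sketch with toy data — all names and shapes are illustrative, not the paper's pipeline:

```python
import numpy as np

def rdm(patterns):
    """(n_conditions, n_features) -> representational dissimilarity matrix,
    computed as 1 - Pearson correlation between condition patterns."""
    return 1.0 - np.corrcoef(patterns)

def upper(m):
    """Vectorize the upper triangle (the RDM is symmetric, diagonal is 0)."""
    return m[np.triu_indices_from(m, k=1)]

def spearman(a, b):
    """Spearman correlation via rank transform (continuous data, no ties)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

# Toy data: 6 scene fragments x 50 voxels, plus a "model" that tracks
# the brain patterns up to a small amount of noise.
rng = np.random.default_rng(2)
brain = rng.random((6, 50))
model = brain + 0.1 * rng.random((6, 50))
similarity = spearman(upper(rdm(brain)), upper(rdm(model)))
```

Because only the dissimilarity structure is compared, the two RDMs can come from entirely different measurement spaces — fMRI voxels, EEG time points, or deep network activations — which is what makes the space-and-time reconstruction in the abstract possible.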


2019
Author(s): Ari S. Benjamin, Pavan Ramkumar, Hugo Fernandes, Matthew Smith, Konrad P. Kording

Summary
To understand activity in the higher visual cortex, researchers typically investigate how parametric changes in stimuli affect neural activity. These experiments reveal neurons’ general response properties only when the effect of a parameter in synthetic stimuli is representative of its effect in other visual contexts. However, in higher visual cortex it is rarely verified how well tuning to parameters of simplified experimental stimuli represents tuning to those parameters in complex or naturalistic stimuli. To evaluate precisely how much tuning curves can change with context, we developed a methodology to estimate tuning from neural responses to natural scenes. For neurons in macaque V4, we then estimated hue tuning curves from both natural scene responses and responses to artificial stimuli of varying hue. We found that neurons’ hue tuning on artificial stimuli was not representative of their hue tuning on natural images, even when the neurons were strongly modulated by hue. These neurons thus respond strongly to interactions between hue and other visual features. We argue that such feature interactions are generally to be expected if the cortex follows an optimal coding strategy. This finding illustrates that tuning curves in higher visual cortex may be accurate only for stimuli similar to those shown in the lab, and do not generalize for all neurons to naturalistic and behaviorally relevant stimuli.
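One generic way to estimate a tuning curve from responses to a continuously varying feature — offered here as an illustrative stand-in, not the authors' method — is to regress responses onto a circular basis over the feature and read the fitted weights back out as a curve. A sketch on synthetic hue data:

```python
import numpy as np

def hue_basis(hues, n_basis=8):
    """Clipped-cosine-squared circular basis: (n_samples, n_basis) design matrix."""
    centers = np.linspace(0, 2 * np.pi, n_basis, endpoint=False)
    return np.cos(hues[:, None] - centers[None, :]).clip(min=0.0) ** 2

# Synthetic responses driven by a von Mises-like tuning curve peaked at pi/2.
rng = np.random.default_rng(3)
hues = rng.uniform(0, 2 * np.pi, 400)
y = np.exp(np.cos(hues - np.pi / 2)) + 0.1 * rng.standard_normal(400)

# Least-squares fit of basis weights, then evaluate the curve on a hue grid.
X = hue_basis(hues)
w, *_ = np.linalg.lstsq(X, y, rcond=None)
grid = np.linspace(0, 2 * np.pi, 100)
curve = hue_basis(grid) @ w
peak_hue = grid[np.argmax(curve)]  # estimated preferred hue
```

The abstract's central point is precisely that such a curve, fit on natural-scene responses, need not match one fit on isolated hue patches when hue interacts with other scene features.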


2016, Vol 116 (3), pp. 1328-1343
Author(s): Pavan Ramkumar, Patrick N. Lawlor, Joshua I. Glaser, Daniel K. Wood, Adam N. Phillips, ...

When we search for visual objects, the features of those objects bias our attention across the visual landscape (feature-based attention). The brain uses these top-down cues to select eye movement targets (spatial selection). The frontal eye field (FEF) is a prefrontal brain region implicated in selecting eye movements and is thought to reflect feature-based attention and spatial selection. Here, we study how FEF facilitates attention and selection in complex natural scenes. We ask whether FEF neurons facilitate feature-based attention by representing search-relevant visual features or whether they are primarily involved in selecting eye movement targets in space. We show that search-relevant visual features are weakly predictive of gaze in natural scenes and additionally have no significant influence on FEF activity. Instead, FEF activity appears to primarily correlate with the direction of the upcoming eye movement. Our result demonstrates a concrete need for better models of natural scene search and suggests that FEF activity during natural scene search is explained primarily by spatial selection.

