A Dependency Capturing Code for Robust Object Representation

2019 ◽  
Author(s):  
Rishabh Raj ◽  
Dar Dahlen ◽  
Kyle Duyck ◽  
C. Ron Yu

Abstract
The brain has a remarkable ability to recognize objects from noisy or corrupted sensory inputs. How this cognitive robustness is achieved computationally remains unknown. We present a coding paradigm that encodes structural dependence among features of the input and transforms various forms of the same input into the same representation. Through dimensionally expanded representation and a sparsity constraint, the paradigm allows redundant feature coding to enhance robustness and is efficient in representing objects. We demonstrate consistent representations of visual and olfactory objects under conditions of occlusion, high noise, or corrupted coding units. Robust face recognition is achievable without deep layers or large training sets. The paradigm produces both complex and simple receptive fields depending on learning experience, thereby offering a unifying framework of sensory processing.

One-line abstract
We present a framework of efficient coding of objects as a combination of structurally dependent feature groups that is robust against noise and corruption.
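The combination of dimensional expansion and a sparsity constraint described in this abstract can be illustrated with a generic sparse-coding sketch. This is not the paper's actual algorithm; the ISTA solver, the random overcomplete dictionary, and all parameter values below are illustrative assumptions.

```python
import numpy as np

def sparse_code_ista(D, x, lam=0.05, n_iter=200):
    """Infer a sparse overcomplete code a with x ≈ D @ a, via ISTA on
    the lasso objective 0.5*||x - D@a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth term
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a - D.T @ (D @ a - x) / L      # gradient step on reconstruction error
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold (sparsity)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 64))          # 4x dimensionally expanded dictionary
D /= np.linalg.norm(D, axis=0)             # unit-norm feature vectors
a_true = np.zeros(64)
a_true[[3, 20, 41]] = [1.5, -2.0, 1.0]     # a few active features
x = D @ a_true                             # the "object" to encode
a = sparse_code_ista(D, x)
print(np.count_nonzero(np.abs(a) > 0.1))   # only a handful of units stay active
```

The expansion from 16 input dimensions to 64 coding units gives the redundancy, while the soft threshold keeps the representation sparse; the same input corrupted by moderate noise tends to map to a code with the same active support.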

1997 ◽  
Vol 9 (1) ◽  
pp. 117-132 ◽  
Author(s):  
Stephen Grossberg ◽  
Alexander Grunewald

How does the brain group together different parts of an object into a coherent visual object representation? Different parts of an object may be processed by the brain at different rates and may thus become desynchronized. Perceptual framing is a process that resynchronizes cortical activities corresponding to the same retinal object. A neural network model is presented that is able to rapidly resynchronize desynchronized neural activities. The model provides a link between perceptual and brain data. Model properties quantitatively simulate perceptual framing data, including psychophysical data about temporal order judgments and the reduction of threshold contrast as a function of stimulus length. Such a model has earlier been used to explain data about illusory contour formation, texture segregation, shape-from-shading, 3-D vision, and cortical receptive fields. The model hereby shows how many data may be understood as manifestations of a cortical grouping process that can rapidly resynchronize image parts that belong together in visual object representations. The model exhibits better synchronization in the presence of noise than without noise, a type of stochastic resonance, and synchronizes robustly when cells that represent different stimulus orientations compete. These properties arise when fast long-range cooperation and slow short-range competition interact via nonlinear feedback interactions with cells that obey shunting equations.
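The "cells that obey shunting equations" in this model have a standard form that is easy to sketch: excitation drives activity toward an upper bound and inhibition toward a lower bound, so responses saturate automatically. The Euler integration and all constants below are illustrative choices, not the model's actual parameters.

```python
def shunting_step(x, E, I, A=1.0, B=1.0, C=1.0, dt=0.01):
    """One Euler step of a shunting (membrane-style) equation:
        dx/dt = -A*x + (B - x)*E - (C + x)*I
    Excitation E pulls x toward B, inhibition I pulls x toward -C,
    so the activity stays bounded in [-C, B] regardless of input size."""
    return x + dt * (-A * x + (B - x) * E - (C + x) * I)

x = 0.0
for t in range(2000):
    E = 5.0 if t < 1000 else 0.0   # transient excitatory input
    I = 1.0                        # tonic inhibition
    x = shunting_step(x, E, I)
# after excitation ends, x relaxes to the inhibition-set equilibrium -0.5
```

The multiplicative (B - x) and (C + x) gain terms are what make the interactions nonlinear, which is the ingredient the abstract credits for robust synchronization.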


2018 ◽  
Author(s):  
Anirvan M. Sengupta ◽  
Mariano Tepper ◽  
Cengiz Pehlevan ◽  
Alexander Genkin ◽  
Dmitri B. Chklovskii

Abstract
Many neurons in the brain, such as place cells in the rodent hippocampus, have localized receptive fields, i.e., they respond to a small neighborhood of stimulus space. What is the functional significance of such representations and how can they arise? Here, we propose that localized receptive fields emerge in similarity-preserving networks of rectifying neurons that learn low-dimensional manifolds populated by sensory inputs. Numerical simulations of such networks on standard datasets yield manifold-tiling localized receptive fields. More generally, we show analytically that, for data lying on symmetric manifolds, optimal solutions of objectives, from which similarity-preserving networks are derived, have localized receptive fields. Therefore, nonnegative similarity-preserving mapping (NSM) implemented by neural networks can model representations of continuous manifolds in the brain.
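The similarity-preserving objective behind NSM can be sketched in its offline form: find nonnegative outputs whose pairwise similarities match those of the inputs. The projected-gradient solver, the ring-shaped toy manifold, and the parameter values below are illustrative assumptions, not the paper's derived network dynamics.

```python
import numpy as np

def nsm_offline(X, k, n_iter=3000, lr=0.005, seed=0):
    """Offline nonnegative similarity matching:
    minimize ||X.T @ X - Y.T @ Y||_F^2 over Y >= 0 (k x T),
    by projected gradient descent. Columns of Y are the nonnegative
    output representations of the corresponding input columns of X."""
    rng = np.random.default_rng(seed)
    Y = np.abs(rng.standard_normal((k, X.shape[1]))) * 0.1
    G = X.T @ X                            # input similarity (Gram) matrix
    for _ in range(n_iter):
        grad = Y @ (Y.T @ Y - G)           # ∝ gradient of the objective
        Y = np.maximum(Y - lr * grad, 0.0) # gradient step + rectification
    return Y

# inputs sampled from a 1-D ring, a toy continuous manifold
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
X = np.vstack([np.cos(theta), np.sin(theta)])
Y = nsm_offline(X, k=10)
# each output unit tends to respond only on a contiguous arc of the ring,
# i.e., a localized, manifold-tiling receptive field
```

The rectification (the `np.maximum(..., 0.0)`) is what the abstract's "rectifying neurons" contribute: without the nonnegativity constraint, the optimum would be a dense PCA-like projection rather than a localized tiling.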


2017 ◽  
Author(s):  
Yosef Singer ◽  
Yayoi Teramoto ◽  
Ben D. B. Willmore ◽  
Andrew J. King ◽  
Jan W. H. Schnupp ◽  
...  

Neurons in sensory cortex are tuned to diverse features in natural scenes. But what determines which features neurons become selective to? Here we explore the idea that neuronal selectivity is optimised to represent features in the recent past of sensory input that best predict immediate future inputs. We tested this hypothesis using simple feedforward neural networks, which were trained to predict the next few video or audio frames in clips of natural scenes. The networks developed receptive fields that closely matched those of real cortical neurons, including the oriented spatial tuning of primary visual cortex, the frequency selectivity of primary auditory cortex and, most notably, their temporal tuning properties. Furthermore, the better a network predicted future inputs, the more closely its receptive fields tended to resemble those in the brain. This suggests that sensory processing is optimised to extract those features with the most capacity to predict future input.

Impact statement
Prediction of future input explains diverse neural tuning properties in sensory cortex.


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 35-35 ◽  
Author(s):  
M T Wallace

Multisensory integration in the superior colliculus (SC) of the cat requires a protracted postnatal developmental time course. Kittens 3 – 135 days postnatal (dpn) were examined and the first neuron capable of responding to two different sensory inputs (auditory and somatosensory) was not seen until 12 dpn. Visually responsive multisensory neurons were not encountered until 20 dpn. These early multisensory neurons responded weakly to sensory stimuli, had long response latencies, large receptive fields, and poorly developed response selectivities. Most striking, however, was their inability to integrate cross-modality cues in order to produce the significant response enhancement or depression characteristic of these neurons in adults. The incidence of multisensory neurons increased gradually over the next 10 – 12 weeks. During this period, sensory responses became more robust, latencies shortened, receptive fields decreased in size, and unimodal selectivities matured. The first neurons capable of cross-modality integration were seen at 28 dpn. For the following two months, the incidence of such integrative neurons rose gradually until adult-like values were achieved. Surprisingly, however, as soon as a multisensory neuron exhibited this capacity, most of its integrative features were indistinguishable from those in adults. Given what is known about the requirements for multisensory integration in adult animals, this observation suggests that the appearance of multisensory integration reflects the onset of functional corticotectal inputs.


eLife ◽  
2018 ◽  
Vol 7 ◽  
Author(s):  
Yosef Singer ◽  
Yayoi Teramoto ◽  
Ben DB Willmore ◽  
Jan WH Schnupp ◽  
Andrew J King ◽  
...  

Neurons in sensory cortex are tuned to diverse features in natural scenes. But what determines which features neurons become selective to? Here we explore the idea that neuronal selectivity is optimized to represent features in the recent sensory past that best predict immediate future inputs. We tested this hypothesis using simple feedforward neural networks, which were trained to predict the next few moments of video or audio in clips of natural scenes. The networks developed receptive fields that closely matched those of real cortical neurons in different mammalian species, including the oriented spatial tuning of primary visual cortex, the frequency selectivity of primary auditory cortex and, most notably, their temporal tuning properties. Furthermore, the better a network predicted future inputs the more closely its receptive fields resembled those in the brain. This suggests that sensory processing is optimized to extract those features with the most capacity to predict future input.
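The temporal-prediction setup described in this abstract can be sketched with a toy stand-in: predict the next frame of a "video" from the preceding few frames, and read spatiotemporal receptive fields off the learned weights. The drifting-bump stimulus and the linear least-squares predictor below are illustrative assumptions; the paper trains nonlinear feedforward networks on clips of real natural scenes.

```python
import numpy as np

rng = np.random.default_rng(1)
WIDTH, PAST = 16, 3

def drifting_bump_clip(n_frames=60):
    """A 1-D Gaussian bump drifting at a random velocity (toy 'video clip')."""
    pos = rng.uniform(0, WIDTH)
    vel = rng.choice([-1.0, 1.0]) * rng.uniform(0.3, 1.0)
    xs = np.arange(WIDTH)
    frames = []
    for t in range(n_frames):
        d = (xs - (pos + vel * t) + WIDTH / 2) % WIDTH - WIDTH / 2  # circular distance
        frames.append(np.exp(-0.5 * (d / 1.5) ** 2))
    return np.array(frames)

# build (past PAST frames -> next frame) training pairs from many clips
X, Y = [], []
for _ in range(200):
    clip = drifting_bump_clip()
    for t in range(PAST, len(clip)):
        X.append(clip[t - PAST:t].ravel())
        Y.append(clip[t])
X, Y = np.array(X), np.array(Y)

# minimal stand-in for a trained network: the optimal linear predictor
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# each column of W is a spatiotemporal receptive field (PAST x WIDTH)
rf = W[:, WIDTH // 2].reshape(PAST, WIDTH)
pred_err = np.mean((X @ W - Y) ** 2)
persist_err = np.mean((X[:, -WIDTH:] - Y) ** 2)  # "repeat last frame" baseline
print(pred_err < persist_err)  # the predictor beats simply copying the last frame
```

Because the stimuli move, a good predictor must weight the recent past with direction-selective, temporally asymmetric filters; this is the sense in which prediction of future input shapes both spatial and temporal tuning.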


2020 ◽  
Author(s):  
Fraser Aitken ◽  
Georgios Menelaou ◽  
Oliver Warrington ◽  
Renée S. Koolschijn ◽  
Nadège Corbin ◽  
...  

Abstract
The way we perceive the world is strongly influenced by our expectations. In line with this, much recent research has revealed that prior expectations strongly modulate sensory processing. However, the neural circuitry through which the brain integrates external sensory inputs with internal expectation signals remains unknown. In order to understand the computational architecture of the cortex, we need to investigate the way these signals flow through the cortical layers. This is crucial because the different cortical layers have distinct intra- and interregional connectivity patterns, and therefore determining which layers are involved in a cortical computation can inform us on the sources and targets of these signals. Here, we used ultra-high field (7T) functional magnetic resonance imaging (fMRI) to reveal that prior expectations evoke stimulus templates selectively in the deep layers of the primary visual cortex. These results shed light on the neural circuit underlying perceptual inference.


2013 ◽  
Vol 25 (11) ◽  
pp. 2904-2933
Author(s):  
Matthew Chalk ◽  
Iain Murray ◽  
Peggy Seriès

Attention causes diverse changes to visual neuron responses, including alterations in receptive field structure and firing rates. A common theoretical approach to investigate why sensory neurons behave as they do is based on the efficient coding hypothesis: that sensory processing is optimized toward the statistics of the received input. We extend this approach to account for the influence of task demands, hypothesizing that the brain learns a probabilistic model of both the sensory input and the reward received for performing different actions. Attention-dependent changes to neural responses reflect optimization of this internal model to deal with changes in the sensory environment (stimulus statistics) and behavioral demands (reward statistics). We use this framework to construct a simple model of visual processing that is able to replicate a number of attention-dependent changes to the responses of neurons in the midlevel visual cortices. The model is consistent with and provides a normative explanation for recent divisive normalization models of attention (Reynolds & Heeger, 2009).
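The divisive normalization model of attention cited here (Reynolds & Heeger, 2009) has a compact core: attention multiplies the stimulus drive before it enters a divisive normalization pool. The following is a minimal sketch of that core, with a toy four-neuron population and made-up gain values; the full model also pools over space and feature dimensions with specific tuning widths.

```python
import numpy as np

def attention_normalization(stim, attn, sigma=0.1):
    """Core of a normalization model of attention: the attended
    (excitatory) drive is divided by the pooled (suppressive) drive
    plus a semi-saturation constant sigma."""
    excitatory = attn * stim               # attention scales the stimulus drive
    suppressive = excitatory.mean()        # broad pooling over the population
    return excitatory / (suppressive + sigma)

stim = np.array([1.0, 1.0, 0.2, 0.2])      # two strong and two weak stimuli
neutral = attention_normalization(stim, np.ones(4))
attended = attention_normalization(stim, np.array([2.0, 1.0, 1.0, 1.0]))
# attending to neuron 0 boosts its response and, via the shared
# normalization pool, suppresses the responses of the others
```

Because the attentional gain also enters the denominator through the pool, the same mechanism can produce either response gain or contrast gain depending on the relative sizes of the stimulus and the attention field, which is what makes the model a unifying account of diverse attention effects.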

