Visual motion computation in recurrent neural networks

2017 · Author(s): Marius Pachitariu, Maneesh Sahani

Abstract

Populations of neurons in primary visual cortex (V1) transform direct thalamic inputs into a cortical representation that acquires new spatio-temporal properties. One of these properties, motion selectivity, has not been strongly tied to putative neural mechanisms, and its origins remain poorly understood. Here we propose that motion selectivity is acquired through the recurrent mechanisms of a network of strongly connected neurons. We first show that a bank of V1 spatiotemporal receptive fields can be generated accurately by a network that receives only instantaneous inputs from the retina. The temporal structure of the receptive fields is generated by the long-timescale dynamics associated with the high-magnitude eigenvalues of the recurrent connectivity matrix. When these eigenvalues have complex parts, they generate receptive fields that are inseparable in time and space, such as those tuned to motion direction. We also show that the recurrent connectivity patterns can be learnt directly from the statistics of natural movies using a temporally asymmetric Hebbian learning rule. Probed with drifting grating stimuli and moving bars, neurons in the model show patterns of responses analogous to those of direction-selective simple cells in primary visual cortex. These computations are enabled by a specific pattern of recurrent connections that can be tested by combining connectome reconstructions with functional recordings.

Author summary

Dynamic visual scenes provide our eyes with enormous quantities of visual information, particularly when the scene changes rapidly. Even at modest speeds, small moving objects quickly change their location, causing single points in the scene to change their luminance just as fast. Furthermore, our own movements through the world add to the velocities of objects relative to our retinas, further increasing the speed at which visual inputs change. How can a biological system efficiently process such vast amounts of information while keeping track of objects in the scene? Here we formulate and analyze a solution that is enabled by the temporal dynamics of networks of neurons.
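The core mechanism described above lends itself to a compact illustration. The sketch below is a minimal toy example rather than the authors' implementation: it shows how a linear recurrent network driven by a single instantaneous input develops an extended, oscillatory impulse response when its weight matrix has high-magnitude complex eigenvalues, and the phase offsets across units are what make the resulting space-time receptive fields inseparable. All parameter values are illustrative assumptions.

```python
import numpy as np

# Two recurrently coupled units: the smallest motif whose weight matrix has a
# complex-conjugate eigenvalue pair, r * exp(+/- i*theta).
theta, r = 0.2, 0.97          # rotation per time step and eigenvalue magnitude (assumed values)
W = r * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

# Impulse response: deliver one instantaneous "retinal" input, then let the
# purely recurrent dynamics x_{t+1} = W @ x_t unfold.
x = np.array([1.0, 0.0])
responses = []
for t in range(200):
    responses.append(x.copy())
    x = W @ x

responses = np.array(responses)   # shape (time, unit): a slowly decaying oscillation whose
                                  # phase differs between units, i.e. a response profile that
                                  # cannot be factored into a spatial part times a temporal part
```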

2018 · Author(s): Adam P. Morris, Bart Krekelberg

Summary

Humans and other primates rely on eye movements to explore visual scenes and to track moving objects. As a result, the image that is projected onto the retina – and propagated throughout the visual cortical hierarchy – is almost constantly changing and makes little sense without taking into account the momentary direction of gaze. How is this achieved in the visual system? Here we show that in primary visual cortex (V1), the earliest stage of cortical vision, neural representations carry an embedded “eye tracker” that signals the direction of gaze associated with each image. Using chronically implanted multi-electrode arrays, we recorded the activity of neurons in V1 during tasks requiring fast (exploratory) and slow (pursuit) eye movements. Neurons were stimulated with flickering, full-field luminance noise at all times. As in previous studies 1–4, we observed neurons that were sensitive to gaze direction during fixation, despite comparable stimulation of their receptive fields. We trained a decoder to translate neural activity into metric estimates of (stationary) gaze direction. This decoded signal tracked the eye accurately not only during fixation but also during fast and slow eye movements, even though the decoder had not been exposed to data from these behavioural states. Moreover, this signal lagged the real eye by approximately the time it took for new visual information to travel from the retina to cortex. Using simulations, we show that this V1 eye position signal could be used to take into account the sensory consequences of eye movements and map the fleeting positions of objects on the retina onto their stable positions in the world.
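As a rough illustration of the decoding approach described in the summary, the sketch below fits a simple linear read-out from population activity to gaze direction using fixation data only and then applies the same weights to activity from another behavioural state. The ridge regression, array sizes, and placeholder data are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_fix, n_test, n_neurons = 2000, 500, 96          # assumed sizes (a Utah-array-scale population)

# Placeholder data: rows are time bins, columns are neurons / gaze coordinates (x, y) in degrees.
R_fix = rng.poisson(5.0, size=(n_fix, n_neurons)).astype(float)
gaze_fix = rng.uniform(-10, 10, size=(n_fix, 2))

# Train the read-out on fixation epochs only.
decoder = Ridge(alpha=1.0).fit(R_fix, gaze_fix)

# Apply the fixation-trained decoder to held-out activity (e.g. recorded during pursuit);
# in the study the decoded trace tracked the eye with roughly a retina-to-cortex lag.
R_pursuit = rng.poisson(5.0, size=(n_test, n_neurons)).astype(float)
gaze_estimate = decoder.predict(R_pursuit)        # shape (n_test, 2)
```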


2012 · Vol 24 (10) · pp. 2700-2725 · Author(s): Takuma Tanaka, Toshio Aoyagi, Takeshi Kaneko

We propose a new principle for replicating the receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network that maintains a low firing rate for the output neurons (temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (population sparseness). The learning rule also drives the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The rule is simple enough to be written in spatially and temporally local forms. After learning on input image patches from natural scenes, the output neurons of the model network exhibit simple-cell-like receptive field properties. When the outputs of these simple-cell-like neurons are fed into a second model layer trained with the same learning rule, the second-layer output neurons become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of connections from first-layer neurons with similar orientation selectivity onto second-layer neurons. We examine how the receptive field properties of the model neurons depend on the parameters and discuss the biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of both simple and complex cells.
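For orientation, here is a minimal sketch of the general setting, not the paper's specific rule: a single feedforward layer trained on natural image patches with a normalized Hebbian update and a hard winner-take-most constraint that enforces population sparseness. The paper's rule additionally keeps overall firing rates low and pushes responses toward near-minimum or near-maximum values; the sizes and learning rate below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
patch_dim, n_out, k_active = 16 * 16, 64, 4      # assumed sizes; k_active = units allowed to fire per patch
W = rng.normal(scale=0.1, size=(n_out, patch_dim))
W /= np.linalg.norm(W, axis=1, keepdims=True)

def train_step(patch, lr=0.05):
    """One Hebbian update with hard population sparseness (illustrative only)."""
    y = W @ patch
    winners = np.argsort(y)[-k_active:]          # only a few units may fire (population sparseness)
    r = np.maximum(y[winners], 0.0)              # rectified responses of the winning units
    W[winners] += lr * np.outer(r, patch)        # Hebbian: post-synaptic rate times pre-synaptic input
    W[winners] /= np.linalg.norm(W[winners], axis=1, keepdims=True)  # keep weight norms bounded

# Usage: call train_step on many whitened natural-image patches; the rows of W then
# tend toward localized, oriented (simple-cell-like) filters.
```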


2002 · Vol 144 (4) · pp. 430-444 · Author(s): Katrin Suder, Klaus Funke, Yongqiang Zhao, Nicolas Kerscher, Thomas Wennekers, ...

2018 · Author(s): Michele A. Cox, Kacie Dougherty, Jacob A. Westerberg, Michelle S. Schall, Alexander Maier

Abstract

Research over the past decades has revealed that neurons in primate primary visual cortex (V1) rapidly integrate the two eyes’ separate signals into a combined binocular response. The exact mechanisms underlying this binocular integration remain elusive. One open question is whether binocular integration occurs at a single stage of sensory processing or in a sequence of computational steps. To address this question, we examined the temporal dynamics of binocular integration across V1’s laminar microcircuit in awake, behaving monkeys. We find that V1 processes binocular stimuli in a dynamic sequence comprising at least two distinct phases: a transient phase, lasting 50–150 ms from stimulus onset, in which neuronal population responses are significantly enhanced for binocular compared to monocular stimulation, followed by a sustained phase characterized by widespread suppression in which feature-specific computations emerge. In the sustained phase, incongruent binocular stimulation resulted in response reduction relative to monocular stimulation across the V1 population. By contrast, sustained responses to congruent binocular stimulation were either reduced or enhanced relative to monocular responses, depending on the neurons’ selectivity for one or both eyes (i.e., ocularity). These results suggest that binocular integration in V1 occurs in at least two sequential steps, with an initial additive combination of the two eyes’ signals followed by the establishment of interocular concordance and discordance.

Significance Statement

Our two eyes provide two separate streams of visual information that are merged in the primary visual cortex (V1). Previous work showed that stimulating both eyes rather than one eye may either increase or decrease activity in V1, depending on the nature of the stimuli. Here we show that V1 binocular responses change over time, with an early phase of general excitation followed by stimulus-dependent response suppression. These results provide important new insights into the neural machinery that combines the two eyes’ perspectives into a single coherent view.
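The two-phase description above is, in essence, a comparison of binocular and monocular population responses within an early and a late time window. The sketch below illustrates one way such a comparison could be quantified with a simple modulation index; apart from the 50–150 ms transient window stated in the abstract, the window boundaries, data shapes, and placeholder data are assumptions, not the authors' analysis.

```python
import numpy as np

def modulation_index(binoc, monoc, t, window):
    """Mean (binocular - monocular) / (binocular + monocular) within a time window.
    binoc, monoc: arrays of shape (trials, time) of population firing rates; t in ms."""
    mask = (t >= window[0]) & (t < window[1])
    b = binoc[:, mask].mean()
    m = monoc[:, mask].mean()
    return (b - m) / (b + m)

# Example with placeholder data (1 ms bins from stimulus onset):
rng = np.random.default_rng(2)
t = np.arange(0, 500)                                        # ms
binoc = rng.poisson(12, size=(100, t.size)).astype(float)
monoc = rng.poisson(10, size=(100, t.size)).astype(float)

transient = modulation_index(binoc, monoc, t, (50, 150))     # expected > 0 (binocular facilitation)
sustained = modulation_index(binoc, monoc, t, (150, 500))    # expected <= 0 for incongruent stimuli
```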


1998 · Vol 78 (2) · pp. 467-485 · Author(s): Charles D. Gilbert

Gilbert, Charles D. Adult Cortical Dynamics. Physiol. Rev. 78: 467–485, 1998. — There are many influences on our perception of local features. What we see is not strictly a reflection of the physical characteristics of a scene but instead is highly dependent on the processes by which our brain attempts to interpret the scene. As a result, our percepts are shaped by the context within which local features are presented, by our previous visual experiences, operating over a wide range of time scales, and by our expectation of what is before us. The substrate for these influences is likely to be found in the lateral interactions operating within individual areas of the cerebral cortex and in the feedback from higher to lower order cortical areas. Even at early stages in the visual pathway, cells are far more flexible in their functional properties than previously thought. It had long been assumed that cells in primary visual cortex had fixed properties, passing along the product of a stereotyped operation to the next stage in the visual pathway. Any plasticity dependent on visual experience was thought to be restricted to a period early in the life of the animal, the critical period. Furthermore, the assembly of contours and surfaces into unified percepts was assumed to take place at high levels in the visual pathway, whereas the receptive fields of cells in primary visual cortex represented very small windows on the visual scene. These concepts of spatial integration and plasticity have been radically modified in the past few years. The emerging view is that even at the earliest stages in the cortical processing of visual information, cells are highly mutable in their functional properties and are capable of integrating information over a much larger part of visual space than originally believed.


2020 · Author(s): Nicolò Meneghetti, Chiara Cerri, Elena Tantillo, Eleonora Vannini, Matteo Caleo, ...

Abstract

The gamma band is known to be involved in the encoding of visual features in the primary visual cortex (V1). Recent results in rodent V1 highlighted the presence, within a broad gamma band (BB) that increases with contrast, of a narrow gamma band (NB) peaking at ∼60 Hz that is suppressed by contrast and enhanced by luminance. However, the processing of visual information by the two channels still lacks a proper characterization. Here, by combining experimental analysis and modeling, we prove that the two bands are sensitive to specific thalamic inputs associated with complementary contrast ranges. We recorded local field potentials from V1 of awake mice during the presentation of gratings and observed that NB power progressively decreased from low to intermediate levels of contrast. Conversely, BB power was insensitive to low levels of contrast but progressively increased from intermediate to high levels of contrast. Moreover, the BB response was stronger immediately after contrast reversal, while the opposite held for the NB. All of these dynamics were accurately reproduced by a recurrent excitatory-inhibitory leaky integrate-and-fire network, mimicking layer IV of mouse V1, provided that the sustained and periodic components of the thalamic input were modulated over complementary contrast ranges. These results shed new light on the origin and function of the two V1 gamma bands. In addition, we propose a simple and effective model of the response to visual contrast that might help in reconstructing the network dysfunction underlying pathological alterations of visual information processing.

Significance Statement

The gamma band is a ubiquitous hallmark of cortical processing of sensory stimuli. Experimental evidence shows that in the mouse visual cortex two types of gamma activity are differentially modulated by contrast: a narrow band (NB), which seems to be rodent specific, and a standard broad band (BB), observed also in other animal models. We found that the narrow band correlates, and the broad band anticorrelates, with visual contrast in two complementary contrast ranges (low and high, respectively). Moreover, the BB displayed an earlier response than the NB. A thalamocortical spiking neuron network model reproduced these results, suggesting that they might be due to the presence of two complementary but distinct components of the thalamic input to the visual cortical circuitry.
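The modeling conclusion above hinges on how the thalamic drive is assumed to depend on contrast. The sketch below writes down one such input: a sustained component that grows mainly from intermediate to high contrast (feeding the broad band) plus a ∼60 Hz periodic component that is largest at low contrast (feeding the narrow band). The functional forms and constants are illustrative assumptions, not the fitted parameters of the published model.

```python
import numpy as np

def thalamic_rate(t, contrast, f_nb=60.0, r0=5.0):
    """Instantaneous thalamic firing rate (spikes/s) at times t (s) for a contrast in [0, 1]."""
    # Sustained drive: saturating growth concentrated at intermediate-to-high contrast.
    sustained = r0 + 20.0 * contrast**2 / (contrast**2 + 0.3**2)
    # Periodic (~60 Hz) drive: amplitude largest at low contrast, shrinking as contrast rises.
    periodic_amp = 10.0 * (1.0 - contrast / (contrast + 0.2))
    return sustained + periodic_amp * np.sin(2 * np.pi * f_nb * t)

# Feeding this rate as the external drive to a recurrent excitatory-inhibitory LIF network
# is the kind of input modulation that, per the abstract, reproduces a contrast-suppressed
# narrow band and a contrast-enhanced broad band in the simulated LFP.
t = np.linspace(0, 1, 10000)
drive_low, drive_high = thalamic_rate(t, 0.1), thalamic_rate(t, 0.9)
```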


2000 · Vol 84 (4) · pp. 2048-2062 · Author(s): Mitesh K. Kapadia, Gerald Westheimer, Charles D. Gilbert

To examine the role of primary visual cortex in visuospatial integration, we studied the spatial arrangement of contextual interactions in the response properties of neurons in primary visual cortex of alert monkeys and in human perception. We found a spatial segregation of opposing contextual interactions. At the level of cortical neurons, excitatory interactions were located along the ends of receptive fields, while inhibitory interactions were strongest along the orthogonal axis. Parallel psychophysical studies in human observers showed opposing contextual interactions surrounding a target line with a similar spatial distribution. The results suggest that V1 neurons can participate in multiple perceptual processes via spatially segregated and functionally distinct components of their receptive fields.

