Neural population control via deep image synthesis

Science ◽  
2019 ◽  
Vol 364 (6439) ◽  
pp. eaav9436 ◽  
Author(s):  
Pouya Bashivan ◽  
Kohitij Kar ◽  
James J. DiCarlo

Particular deep artificial neural networks (ANNs) are today’s most accurate models of the primate brain’s ventral visual stream. Using an ANN-driven image synthesis method, we found that luminous power patterns (i.e., images) can be applied to primate retinae to predictably push the spiking activity of targeted V4 neural sites beyond naturally occurring levels. This method, although not yet perfect, achieves unprecedented independent control of the activity state of entire populations of V4 neural sites, even those with overlapping receptive fields. These results show how the knowledge embedded in today’s ANN models might be used to noninvasively set desired internal brain states at neuron-level resolution, and suggest that more accurate ANN models would produce even more accurate control.
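The core of ANN-driven image synthesis is gradient ascent on pixels to drive a targeted model unit's response upward. The sketch below is a toy stand-in, not the authors' method: it uses a hypothetical linear model unit whose response is a dot product, so the pixel gradient is analytic; the deep-ANN case replaces that analytic gradient with backpropagation through the trained model.

```python
import numpy as np

def synthesize(w, steps=200, lr=0.1):
    """Gradient-ascent image synthesis for a linear model unit.

    For a unit with response r(x) = w . x, the gradient with respect
    to the 'image' x is simply w, so each step pushes the image toward
    the unit's preferred pattern while clipping keeps pixel values in
    a displayable range.
    """
    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 0.01, size=w.shape)  # start from near-gray noise
    for _ in range(steps):
        x = x + lr * w                  # analytic gradient of r(x) = w . x
        x = np.clip(x, -1.0, 1.0)       # keep pixels in range
    return x

w = np.array([0.5, -0.2, 0.8, 0.1])     # toy "neuron" weights (assumption)
img = synthesize(w)
```

After enough steps each pixel saturates toward the sign of the corresponding weight, i.e. the unit's maximally driving pattern under the clipping constraint.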

2018 ◽  
Author(s):  
Pouya Bashivan ◽  
Kohitij Kar ◽  
James J DiCarlo



NeuroImage ◽  
2019 ◽  
Vol 188 ◽  
pp. 59-69 ◽  
Author(s):  
Jesse Gomez ◽  
Alexis Drain ◽  
Brianna Jeska ◽  
Vaidehi S. Natu ◽  
Michael Barnett ◽  
...  

eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Thomas SA Wallis ◽  
Christina M Funke ◽  
Alexander S Ecker ◽  
Leon A Gatys ◽  
Felix A Wichmann ◽  
...  

We subjectively perceive our visual field with high fidelity, yet peripheral distortions can go unnoticed and peripheral objects can be difficult to identify (crowding). Prior work showed that humans could not discriminate images synthesised to match the responses of a mid-level ventral visual stream model when information was averaged in receptive fields with a scaling of about half their retinal eccentricity. This result implicated ventral visual area V2, approximated ‘Bouma’s Law’ of crowding, and has subsequently been interpreted as a link between crowding zones, receptive field scaling, and our perceptual experience. However, this experiment never assessed natural images. We find that humans can easily discriminate real and model-generated images at V2 scaling, requiring scales at least as small as V1 receptive fields to generate metamers. We speculate that explaining why scenes look as they do may require incorporating segmentation and global organisational constraints in addition to local pooling.
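The pooling logic tested in this abstract can be sketched numerically. This is a hedged one-dimensional toy (the actual models pool texture-like feature statistics over 2-D receptive fields): values are averaged in windows whose width scales with retinal eccentricity, and a smaller scale factor (V1-like) destroys less peripheral structure than a larger one (V2-like).

```python
import numpy as np

def pool_by_eccentricity(signal, scale):
    """Average each sample over a window whose width grows in
    proportion to eccentricity (distance from the central 'fovea'),
    mimicking Bouma-style pooling-zone scaling."""
    n = len(signal)
    center = n // 2
    out = np.empty(n)
    for i in range(n):
        ecc = abs(i - center)
        half = max(1, int(round(scale * ecc / 2)))  # window half-width
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out[i] = signal[lo:hi].mean()
    return out

rng = np.random.default_rng(1)
x = rng.normal(size=1001)
coarse = pool_by_eccentricity(x, scale=0.5)   # V2-like scaling (assumption)
fine = pool_by_eccentricity(x, scale=0.25)    # V1-like scaling (assumption)
```

In the periphery the coarser scaling smooths away far more local variation, which is the sense in which only sufficiently small (V1-sized) pooling windows leave the signal perceptually matched to the original.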


2014 ◽  
Vol 63 ◽  
pp. 54-61 ◽  
Author(s):  
Young-Choon Kim ◽  
Tae-Wuk Bae ◽  
Hyuk-Ju Kwon ◽  
Byoung-Ik Kim ◽  
Sang-Ho Ahn

Author(s):  
Wenbin He ◽  
Junpeng Wang ◽  
Hanqi Guo ◽  
Ko-Chih Wang ◽  
Han-Wei Shen ◽  
...  

Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 59-59 ◽  
Author(s):  
J M Zanker ◽  
M P Davey

Visual information processing in primate cortex is based on a highly ordered representation of the surrounding world. In addition to the retinotopic mapping of the visual field, systematic variations in the orientation tuning of neurons have been described electrophysiologically for the first stages of the visual stream. To understand how position and orientation are jointly represented, and to give an adequate account of cortical architecture, an essential step is to define the minimum spatial requirements for detecting orientation. We addressed this basic question by comparing computer simulations of simple orientation filters with psychophysical experiments in which the orientation of small lines had to be detected at various positions in the visual field. At sufficiently high contrast levels, the minimum physical length of a line whose orientation can just be resolved is not constant across eccentricities but covaries inversely with the cortical magnification factor. A line needs to span less than 0.2 mm on the cortical surface to be recognised as oriented, independently of the eccentricity at which the stimulus is presented. This indicates that human performance on this task approaches the physical limits, requiring hardly more than approximately three input elements to be activated to detect the orientation of a highly visible line segment. Combined with estimates of the receptive field sizes of orientation-selective filters derived from computer simulations, this experimental result may nourish speculation about how the rather local elementary process underlying orientation detection in the human visual system could be assembled into the much larger receptive fields of the orientation-sensitive neurons known to exist in the primate visual system.
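The inverse covariation with cortical magnification can be made concrete with a short calculation. The sketch below assumes a common parametric estimate of human V1 magnification, M(e) = M0/(e + e2) with M0 ≈ 17.3 mm and e2 ≈ 0.75 deg (Horton-and-Hoyt-style values; these constants are illustrative assumptions, not taken from this study): a fixed 0.2 mm cortical span then corresponds to a visual angle that grows linearly with eccentricity.

```python
def magnification(ecc_deg, m0=17.3, e2=0.75):
    """Cortical magnification in mm/deg at a given eccentricity.

    Constants are illustrative Horton-and-Hoyt-style values, not
    parameters from this study."""
    return m0 / (ecc_deg + e2)

def min_line_length_deg(ecc_deg, cortical_span_mm=0.2):
    """Visual angle (deg) that maps onto a fixed cortical span:
    the required line length grows as magnification falls."""
    return cortical_span_mm / magnification(ecc_deg)

for ecc in (0.0, 2.0, 10.0):
    print(f"ecc {ecc:>4.1f} deg -> minimum length "
          f"{min_line_length_deg(ecc):.4f} deg")
```

By construction, the product of the threshold line length and the magnification factor is constant (0.2 mm) at every eccentricity, which is exactly the pattern the psychophysical thresholds followed.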

