Modelling the visual cortex using artificial neural networks for visual image reconstruction

Author(s): G. Qiu
2021
Author(s): Kyle Aitken, Marina Garrett, Shawn Olsen, Stefan Mihalas

Neurons in sensory areas encode/represent stimuli. Surprisingly, recent studies have suggested that, even while task performance remains consistent, these representations are not stable and change over the course of days and weeks. We examine stimulus representations from fluorescence recordings across hundreds of neurons in the visual cortex using in vivo two-photon calcium imaging, corroborating previous studies that found such representations change as experimental trials are repeated across days. This phenomenon has been termed "representational drift". In this study we geometrically characterize the properties of representational drift in the primary visual cortex of mice in two open datasets from the Allen Institute and propose a potential mechanism behind such drift. We observe representational drift both for passively presented stimuli and for stimuli that are behaviorally relevant. Across experiments, the drift most often occurs along directions that have the most variance, leading to a significant turnover in the neurons used for a given representation. Interestingly, despite this significant change due to drift, linear classifiers trained to distinguish neuronal representations show little to no degradation in performance across days. The features we observe in the neural data are similar to properties of artificial neural networks whose representations are updated by continual learning in the presence of dropout, i.e. a random masking of nodes/weights, but not other types of noise. We therefore conclude that a potential driver of representational drift in biological networks is an underlying dropout-like noise present while the network continuously learns, and that such a mechanism may be computationally advantageous for the brain in the same way it is for artificial neural networks, e.g. by preventing overfitting.
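As a toy illustration of the drift-tolerant readout described above (all population sizes, dropout rates, and noise levels here are hypothetical choices, not values from the study), a few lines of NumPy can show that a linear classifier fit under one random dropout-like mask of the units still separates two stimulus classes under fresh masks on later "days", even though the set of active neurons turns over:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials, n_days = 100, 200, 5
p_drop = 0.2  # hypothetical dropout-like masking rate

# Fixed "true" tuning of the population to two stimulus classes.
mu_a = rng.normal(0, 1, n_neurons)
mu_b = rng.normal(0, 1, n_neurons)

def day_responses(mu, mask):
    # Each day, a random subset of units is masked out;
    # trial-to-trial variability is additive Gaussian noise.
    return mask * mu + rng.normal(0, 0.5, (n_trials, n_neurons))

# Train a least-squares linear readout on day 0's mask.
mask0 = rng.random(n_neurons) > p_drop
X0 = np.vstack([day_responses(mu_a, mask0), day_responses(mu_b, mask0)])
y = np.r_[np.ones(n_trials), -np.ones(n_trials)]
w, *_ = np.linalg.lstsq(X0, y, rcond=None)

# Test the frozen readout on later days with fresh random masks.
accs = []
for day in range(1, n_days):
    mask = rng.random(n_neurons) > p_drop  # new mask -> neuron turnover
    X = np.vstack([day_responses(mu_a, mask), day_responses(mu_b, mask)])
    accs.append(float(np.mean(np.sign(X @ w) == y)))
    print(f"day {day}: readout accuracy = {accs[-1]:.2f}")
```

Because the class information is distributed across many units, the readout degrades little even though roughly a third of the unit pairings differ between any two masks, loosely mirroring the stability-despite-turnover result reported in the abstract.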


1992, Vol. 03 (supp01), pp. 91-103
Author(s): J.A. Hertz, T.W. Kjær, E.N. Eskandar, B.J. Richmond

We show how to use artificial neural networks as a quantitative tool in studying real neuronal processing in the monkey visual system. Training a network to classify neuronal signals according to the stimulus that elicited them permits us to calculate the information transmitted by these signals. We illustrate this for neurons in the primary visual cortex with measurements of the information transmitted about visual stimuli, and for cells in inferior temporal cortex with measurements of information about behavioral context. For the latter neurons we also illustrate how artificial neural networks can be used to model the computation they perform.
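The information calculation described above is commonly derived from the classifier's confusion matrix: the joint counts of (true stimulus, decoded stimulus) define a joint distribution whose mutual information bounds the information the signals carry. A minimal sketch of that computation (the function name and toy counts are illustrative, not from the paper):

```python
import numpy as np

def mutual_information_bits(confusion):
    """Mutual information (bits) between the stimulus (rows) and the
    classifier's decoded label (columns), from a joint-count matrix."""
    joint = confusion / confusion.sum()          # joint p(s, r)
    ps = joint.sum(axis=1, keepdims=True)        # stimulus marginal p(s)
    pr = joint.sum(axis=0, keepdims=True)        # decoded marginal p(r)
    nz = joint > 0                               # skip zero cells in the log
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])))

# Perfect decoding of 4 equiprobable stimuli carries log2(4) = 2 bits.
print(mutual_information_bits(np.eye(4) * 25))       # 2.0
# Chance-level decoding carries 0 bits.
print(mutual_information_bits(np.ones((4, 4)) * 25))  # 0.0
```

In practice the estimate is biased upward for small trial counts, so studies of this kind typically apply a finite-sampling correction before reporting bits per response.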

