Human and monkey detection performance in natural images compared with V1 population responses

2015 ◽  
Vol 15 (12) ◽  
pp. 577
Author(s):  
Yoon Bai ◽  
Yuzhi Chen ◽  
Wilson Geisler ◽  
Eyal Seidemann
2021 ◽  
Author(s):  
Mohammad Bashiri ◽  
Edgar Y. Walker ◽  
Konstantin-Klemens Lurz ◽  
Akshay Kumar Jagadish ◽  
Taliah Muhammad ◽  
...  

Abstract
We present a joint deep neural system identification model for two major sources of neural variability: stimulus-driven and stimulus-conditioned fluctuations. To this end, we combine (1) state-of-the-art deep networks for stimulus-driven activity and (2) a flexible, normalizing flow-based generative model to capture the stimulus-conditioned variability including noise correlations. This allows us to train the model end-to-end without the need for sophisticated probabilistic approximations associated with many latent state models for stimulus-conditioned fluctuations. We train the model on the responses of thousands of neurons from multiple areas of the mouse visual cortex to natural images. We show that our model outperforms previous state-of-the-art models in predicting the distribution of neural population responses to novel stimuli, including shared stimulus-conditioned variability. Furthermore, it successfully learns known latent factors of the population responses that are related to behavioral variables such as pupil dilation, and other factors that vary systematically with brain area or retinotopic location. Overall, our model accurately accounts for two critical sources of neural variability while avoiding several complexities associated with many existing latent state models. It thus provides a useful tool for uncovering the interplay between different factors that contribute to variability in neural activity.
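The flow-based likelihood idea can be illustrated with a toy sketch (not the authors' architecture; all names and parameters below are illustrative). A single affine flow layer, x = mu + L z with z standard normal, is the simplest normalizing flow and already captures noise correlations exactly in the Gaussian case; its exact change-of-variables log-density is the kind of quantity a deeper flow optimizes end-to-end.

```python
import numpy as np

# Toy affine normalizing flow for stimulus-conditioned variability.
# Illustrative sketch only: in the paper, the mean and flow parameters
# are produced by a deep network conditioned on the stimulus. One
# affine layer x = mu + L z, z ~ N(0, I), models noise correlations
# exactly when residuals are Gaussian; the change-of-variables
# log-density below is what a deeper flow would compute layer by layer.

rng = np.random.default_rng(1)
n_trials, n_neurons = 5000, 4

# simulate repeated responses to one stimulus with shared variability
mu_true = np.array([2.0, 1.0, 3.0, 0.5])
L_true = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.8, 0.6, 0.0, 0.0],
                   [0.5, 0.2, 0.9, 0.0],
                   [0.1, 0.4, 0.3, 0.7]])
x = mu_true + rng.normal(size=(n_trials, n_neurons)) @ L_true.T

# "fit" the flow: empirical mean and a Cholesky factor of the
# trial-to-trial covariance (maximum likelihood for this toy model)
mu = x.mean(axis=0)
L = np.linalg.cholesky(np.cov(x, rowvar=False))

def flow_log_prob(x, mu, L):
    # inverse flow z = L^{-1}(x - mu); standard-normal base density;
    # log|det dz/dx| = -sum(log diag(L)) because L is triangular
    z = np.linalg.solve(L, (x - mu).T).T
    d = x.shape[1]
    base = -0.5 * (z ** 2).sum(axis=1) - 0.5 * d * np.log(2 * np.pi)
    return base - np.log(np.diag(L)).sum()

lp = flow_log_prob(x, mu, L)
print(f"mean log-likelihood per trial: {lp.mean():.3f}")
```

Replacing the single affine layer with a stack of invertible layers (e.g. coupling layers) turns this into a genuine deep flow while keeping the same exact-likelihood training objective.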


Author(s):  
Yuki HAYAMI ◽  
Daiki TAKASU ◽  
Hisakazu AOYANAGI ◽  
Hiroaki TAKAMATSU ◽  
Yoshifumi SHIMODAIRA ◽  
...  

2020 ◽  
Vol 2020 (10) ◽  
pp. 310-1-310-7
Author(s):  
Khalid Omer ◽  
Luca Caucci ◽  
Meredith Kupinski

This work reports on convolutional neural network (CNN) performance on an image texture classification task as a function of linear image processing and the number of training images. Detection performance of single- and multi-layer CNNs (sCNN/mCNN) is compared to optimal observers. Performance is quantified by the area under the receiver operating characteristic (ROC) curve, also known as the AUC: AUC = 1.0 corresponds to perfect detection and AUC = 0.5 to guessing. The Ideal Observer (IO) maximizes AUC but is prohibitive in practice because it depends on high-dimensional image likelihoods. IO performance is invariant to any full-rank, invertible linear image processing. This work demonstrates the existence of full-rank, invertible linear transforms that can degrade both sCNN and mCNN performance even in the limit of large quantities of training data. A subsequent invertible linear transform changes the images' correlation structure again and can improve the AUC. Stationary textures sampled from zero-mean, unequal-covariance Gaussian distributions allow closed-form analytic expressions for the IO and for optimal linear compression. Linear compression is a mitigation technique for high-dimension, low-sample-size (HDLSS) applications. By definition, compression strictly decreases or maintains IO detection performance. For small quantities of training data, linear image compression prior to the sCNN architecture can increase AUC from 0.56 to 0.93. Results indicate an optimal compression ratio for CNNs that depends on task difficulty, compression method, and the number of training images.
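The IO invariance claim can be checked numerically with a small sketch (illustrative dimensions and covariances, not those of the paper): for two zero-mean Gaussian classes, the IO's log-likelihood-ratio statistic is a quadratic form that is unchanged when a full-rank linear transform is applied to the images, so the IO AUC is identical before and after the transform.

```python
import numpy as np

def llr_stat(x, K0_inv, K1_inv):
    # quadratic log-likelihood-ratio statistic (up to an additive
    # constant) for zero-mean Gaussians with covariances K0, K1
    return 0.5 * np.einsum('ni,ij,nj->n', x, K0_inv - K1_inv, x)

def auc(scores0, scores1):
    # Mann-Whitney (rank) estimate of the area under the ROC curve
    s0, s1 = np.asarray(scores0), np.asarray(scores1)
    wins = (s1[:, None] > s0[None, :]).mean()
    ties = (s1[:, None] == s0[None, :]).mean()
    return wins + 0.5 * ties

rng = np.random.default_rng(0)
d, n = 8, 2000

# two zero-mean Gaussian "texture" classes with unequal covariance
A0 = rng.normal(size=(d, d)); K0 = A0 @ A0.T + np.eye(d)
A1 = rng.normal(size=(d, d)); K1 = A1 @ A1.T + np.eye(d)

x0 = rng.multivariate_normal(np.zeros(d), K0, size=n)
x1 = rng.multivariate_normal(np.zeros(d), K1, size=n)

auc_raw = auc(llr_stat(x0, np.linalg.inv(K0), np.linalg.inv(K1)),
              llr_stat(x1, np.linalg.inv(K0), np.linalg.inv(K1)))

# apply a full-rank, invertible linear transform T to the images;
# the class covariances become T K T^T, and the IO statistic
# x^T T^T (T K T^T)^{-1} T x = x^T K^{-1} x is unchanged
T = rng.normal(size=(d, d)) + 2 * np.eye(d)  # almost surely invertible
y0, y1 = x0 @ T.T, x1 @ T.T
K0t, K1t = T @ K0 @ T.T, T @ K1 @ T.T
auc_t = auc(llr_stat(y0, np.linalg.inv(K0t), np.linalg.inv(K1t)),
            llr_stat(y1, np.linalg.inv(K0t), np.linalg.inv(K1t)))

print(auc_raw, auc_t)  # the two AUCs agree up to numerical error
```

A CNN, by contrast, is not invariant to such transforms, which is exactly the gap the abstract exploits; the same algebra shows why linear compression (a non-invertible map) can only maintain or reduce IO performance.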


2009 ◽  
Vol 2128 (1) ◽  
pp. 161-172 ◽  
Author(s):  
Dan Middleton ◽  
Ryan Longmire ◽  
Darcy M. Bullock ◽  
James R. Sturdevant
