A computational observer model of spatial contrast sensitivity: Effects of photocurrent encoding, fixational eye movements and inference engine

2019 ◽  
Author(s):  
Nicolas P. Cottaris ◽  
Brian A. Wandell ◽  
Fred Rieke ◽  
David H. Brainard

Abstract: We have recently shown that, using the information carried by the mosaic of cone excitations of a stationary retina, the relative spatial contrast sensitivity function (CSF) of a computational observer has the same shape as that of a typical human subject. Absolute human sensitivity, however, is lower than that of the computational observer by a factor of 5 to 10. Here we model how additional known features of early vision affect spatial contrast sensitivity: fixational eye movements and the conversion of cone photopigment excitations to cone photocurrent responses. For a computational observer that uses a linear classifier applied to the responses of a stimulus-matched linear filter, fixational eye movements substantially change the shape of the spatial CSF, primarily by reducing sensitivity at spatial frequencies above 10 c/deg. For a computational observer that uses a translation-invariant calculation, in which decisions are based on the squared responses of a quadrature pair of linear filters, the CSF shape is little changed by eye movements, but there is a two-fold reduction in sensitivity. The noise and response dynamics of the conversion of cone excitations into photocurrent introduce an additional two-fold sensitivity decrease. Hence, the combined effects of fixational eye movements and phototransduction bring the absolute sensitivity of the translation-invariant computational observer CSF to within a factor of 1 to 2 of the human CSF. We note that the human CSF depends on processing of the initial representation by many thalamic and cortical neurons, which are individually quite noisy. Our computational modeling suggests that the net effect of this noise on contrast-detection performance, when considered at the neural-population and behavioral levels, is quite small: the inference mechanisms that determine the CSF, presumably in cortex, make efficient use of the information available from the cone photocurrents of the fixating eye.
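The translation-invariant computation described in the abstract can be illustrated with a toy sketch: a quadrature pair of linear filters (cosine and sine phase) whose summed squared responses are unchanged when the stimulus shifts, whereas a single stimulus-matched linear filter loses response. This is a minimal illustration, not the paper's implementation; the 1-D "mosaic", filter shapes, and shift size are all assumptions.

```python
import numpy as np

# Toy 1-D "mosaic" response to a grating, before and after a small shift
# (standing in for a fixational eye movement). All sizes are illustrative.
n, freq = 64, 4
x = np.arange(n)

def quadrature_energy(response):
    """Summed squared responses of a cosine/sine quadrature pair of filters:
    invariant to spatial shifts of a matched-frequency grating."""
    cos_f = np.cos(2 * np.pi * freq * x / n)
    sin_f = np.sin(2 * np.pi * freq * x / n)
    return (response @ cos_f) ** 2 + (response @ sin_f) ** 2

grating = np.cos(2 * np.pi * freq * x / n)        # stimulus on a stationary retina
shifted = np.cos(2 * np.pi * freq * (x - 3) / n)  # same stimulus after a 3-sample shift

# A stimulus-matched linear filter loses response when the stimulus shifts...
lin_match = grating @ grating
lin_shift = shifted @ grating
# ...but the quadrature-pair energy is unchanged by the shift.
e_match = quadrature_energy(grating)
e_shift = quadrature_energy(shifted)
```

This is why the abstract's energy-based observer keeps the CSF shape under eye movements: the decision variable does not depend on where the grating lands on the mosaic.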

2018 ◽  
Vol 59 (13) ◽  
pp. 5408
Author(s):  
Jonathan Denniss ◽  
Chris Scholes ◽  
Paul V. McGraw ◽  
Se-Ho Nam ◽  
Neil W. Roach

2016 ◽  
Vol 36 (23) ◽  
pp. 6225-6241 ◽  
Author(s):  
James M. McFarland ◽  
Bruce G. Cumming ◽  
Daniel A. Butts

2019 ◽  
Vol 19 (4) ◽  
pp. 8 ◽  
Author(s):  
Nicolas P. Cottaris ◽  
Haomiao Jiang ◽  
Xiaomao Ding ◽  
Brian A. Wandell ◽  
David H. Brainard

2015 ◽  
Vol 282 (1817) ◽  
pp. 20151568 ◽  
Author(s):  
Chris Scholes ◽  
Paul V. McGraw ◽  
Marcus Nyström ◽  
Neil W. Roach

During steady fixation, observers make small fixational saccades at a rate of around 1–2 per second. Presentation of a visual stimulus triggers a biphasic modulation in fixational saccade rate—an initial inhibition followed by a period of elevated rate and a subsequent return to baseline. Here we show that, during passive viewing, this rate signature is highly sensitive to small changes in stimulus contrast. By training a linear support vector machine to classify trials in which a stimulus is either present or absent, we directly compared the contrast sensitivity of fixational eye movements with individuals' psychophysical judgements. Classification accuracy closely matched psychophysical performance, and predicted individuals' threshold estimates with less bias and overall error than those obtained using specific features of the signature. Performance of the classifier was robust to changes in the training set (novel subjects and/or contrasts) and good prediction accuracy was obtained with a practicable number of trials. Our results indicate a tight coupling between the sensitivity of visual perceptual judgements and fixational eye control mechanisms. This raises the possibility that fixational saccades could provide a novel and objective means of estimating visual contrast sensitivity without the need for observers to make any explicit judgement.
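The classification approach described above can be sketched in miniature: synthetic biphasic rate signatures (inhibition followed by a rebound) for stimulus-present trials, flat noisy signatures for absent trials, and a linear classifier trained to separate them. The signature shape, noise level, and training scheme (hinge-loss sub-gradient descent standing in for a packaged SVM solver) are all illustrative assumptions, not the study's data or pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_signature(present, t, contrast=1.0):
    """Toy biphasic saccade-rate modulation: inhibition then rebound.
    Shape and noise parameters are hypothetical, for illustration only."""
    base = np.ones_like(t)
    if present:
        base = base - 0.8 * contrast * np.exp(-((t - 0.15) / 0.05) ** 2)  # inhibition
        base = base + 0.6 * contrast * np.exp(-((t - 0.40) / 0.10) ** 2)  # rebound
    return base + 0.15 * rng.standard_normal(t.size)

t = np.linspace(0, 1, 50)                       # 1 s of binned saccade rate
X = np.array([rate_signature(p, t) for p in [True, False] * 100])
y = np.array([1, -1] * 100)                     # present vs. absent labels

# Linear max-margin classifier via sub-gradient descent on the hinge loss.
w, b, lam = np.zeros(t.size), 0.0, 1e-3
for epoch in range(200):
    for xi, yi in zip(X, y):
        margin = yi * (xi @ w + b)
        w -= 0.01 * (lam * w - (yi * xi if margin < 1 else 0.0))
        b -= 0.01 * (-yi if margin < 1 else 0.0)

acc = np.mean(np.sign(X @ w + b) == y)          # present/absent accuracy
```

In the study itself, classification accuracy as a function of contrast was compared against psychophysical thresholds; the sketch only shows the present/absent decision at one contrast.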


2021 ◽  
Author(s):  
Andres A Kiani ◽  
Geoffrey M Ghose ◽  
Theoden I Netoff

Neural-mass modeling of neural population data (EEG, ECoG, or LFPs) has shown promise both in elucidating the neural processes underlying cortical rhythms and changes in brain state, and in offering a framework for testing the interplay between these rhythms and information processing. Models of cortical alpha rhythms (8–12 Hz) and their impact on visual sensory processing have been at the forefront of this effort, with the Jansen-Rit model being one of the more popular in this domain. The Jansen-Rit model, however, fails to reproduce key physiological observations, including the level of inputs that cortical neurons receive and their responses to visual transients. To address these issues we generated a neural-mass model that better complies with synaptically mediated dynamics, exhibits intrinsic alpha behavior, and produces realistic responses. The model is robust to many changes in parameter values but depends critically on the ratio of excitation to inhibition, producing response transients whose features depend on this ratio and on alpha phase and power. The model is sufficiently flexible to easily replicate the range of low-frequency oscillations observed in different studies. Consistent with experimental observations, we find phase-dependent response dynamics to both visual and electrical stimulation using this model. The model suggests that stimulation facilitates alpha at particular phases and suppresses it at others because of a phase-dependent lag in inhibitory responses. Hence, the model generates insight into the physiological parameters responsible for intrinsic oscillations and yields testable hypotheses regarding the interactions between visual and electrical stimulation and those oscillations.
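The alpha resonance central to models of this family can be sketched with the linearized skeleton they share: a second-order system whose resonance frequency is set by the gain of the loop between excitatory and inhibitory populations, and whose damping tracks the excitation/inhibition balance. This is not the authors' model or Jansen-Rit itself; the frequency, damping, and integration settings below are illustrative assumptions.

```python
import numpy as np

# Linearized skeleton of an alpha-generating E-I loop (illustrative only):
# x'' + 2*zeta*omega*x' + omega^2*x = u(t), where omega is set by the
# E->I->E loop gain and zeta reflects the excitation/inhibition balance.
f_alpha = 10.0                  # target rhythm in Hz (assumed)
omega = 2 * np.pi * f_alpha
zeta = 0.1                      # light damping -> sustained ringing (assumed)

dt, T = 1e-4, 1.0
n = int(T / dt)
x, v = 1.0, 0.0                 # impulse-like perturbation, e.g. a stimulus transient
trace = np.empty(n)
for k in range(n):              # semi-implicit Euler integration
    v += (-2 * zeta * omega * v - omega**2 * x) * dt
    x += v * dt
    trace[k] = x

# Damped ringing near 10 Hz: roughly 2 * f_alpha zero crossings per second.
crossings = int(np.sum(np.diff(np.sign(trace)) != 0))
```

In this caricature, a perturbation arriving at different phases of the ongoing ringing adds to or cancels the oscillation, which is the intuition behind the phase-dependent stimulation effects described in the abstract.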


2018 ◽  
Author(s):  
Nicolas P. Cottaris ◽  
Haomiao Jiang ◽  
Xiaomao Ding ◽  
Brian A. Wandell ◽  
David H. Brainard

We present a computational observer model of the human spatial contrast sensitivity function (CSF) based on the Image Systems Engineering Tools for Biology (ISETBio) simulation framework. We demonstrate that ISETBio-derived CSFs agree well with CSFs derived using traditional ideal observer approaches, when the mosaic, optics, and inference engine are matched. Further simulations extend earlier work by considering more realistic cone mosaics, more recent measurements of human physiological optics, and the effect of varying the inference engine used to link visual representations to psychophysical performance. Relative to earlier calculations, our simulations show that the spatial structure of realistic cone mosaics reduces upper bounds on performance at low spatial frequencies, whereas realistic optics derived from modern wavefront measurements lead to increased upper bounds at high spatial frequencies. Finally, we demonstrate that the type of inference engine used has a substantial effect on the absolute level of predicted performance. Indeed, the performance gap between an ideal observer with exact knowledge of the relevant signals and human observers is greatly reduced when the inference engine has to learn aspects of the visual task. ISETBio-derived estimates of stimulus representations at different stages along the visual pathway provide a powerful tool for computing the limits of human performance.
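The flavor of the ideal-observer comparison can be conveyed with a highly simplified sketch: a signal-known-exactly Poisson observer detecting a contrast modulation across a mosaic of independent cones, with sensitivity read off as the reciprocal of the contrast at which d' reaches 1. Mosaic geometry, optics, and eye movements are omitted here, and all numeric values are assumptions, not ISETBio outputs.

```python
import numpy as np

# Signal-known-exactly Poisson observer on a flattened cone mosaic
# (a drastic simplification of the full simulation; numbers are illustrative).
n_cones = 32 * 32
mean_exc = 50.0                        # assumed mean excitations/cone/trial
x = np.linspace(0, 1, n_cones)
grating = np.sin(2 * np.pi * 8 * x)    # flattened spatial contrast modulation

def dprime(contrast):
    lam_b = np.full(n_cones, mean_exc)                 # background rates
    lam_s = mean_exc * (1 + contrast * grating)        # stimulus rates
    # SKE ideal observer for Poisson noise: d'^2 = sum (delta lambda)^2 / lambda
    return np.sqrt(np.sum((lam_s - lam_b) ** 2 / lam_b))

# Contrast threshold: smallest tested contrast with d' >= 1;
# contrast sensitivity is its reciprocal.
contrasts = np.logspace(-4, 0, 200)
thresh = contrasts[np.argmax([dprime(c) >= 1.0 for c in contrasts])]
sensitivity = 1.0 / thresh
```

Repeating this per spatial frequency traces out an upper-bound CSF; the abstract's point is that realistic mosaics, optics, and learned (rather than exact-knowledge) inference engines each move such bounds toward human performance.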

