Predictive remapping preserves elementary visual features across saccades

2012 ◽ Vol 12 (9) ◽ pp. 444-444
Author(s): W. Harrison ◽ J. Retell ◽ R. Remington ◽ J. Mattingley

2018 ◽ Vol 18 (13) ◽ pp. 20
Author(s): Tao He ◽ Matthias Fritsche ◽ Floris P. de Lange

Abstract: Visual stability is thought to be mediated by predictive remapping of relevant object information from its current, pre-saccadic location to its future, post-saccadic location on the retina. However, whether and which feature information is predictively remapped during the pre-saccadic interval remains heavily debated. Using an orientation adaptation paradigm, we investigated whether predictive remapping occurs for stimulus features and whether adaptation itself is remapped. We found strong evidence for predictive remapping of a stimulus presented shortly before saccade onset, but no remapping of adaptation. Furthermore, we establish that predictive remapping also occurs for stimuli that are not saccade targets, pointing toward a ‘forward remapping’ process operating across the whole visual field. Together, our findings suggest that predictive feature remapping of object information plays an important role in mediating visual stability.

2001
Author(s): Donald A. Varakin ◽ Sheena Rogers ◽ Jeffrey T. Andre ◽ Susan L. Davis

2019
Author(s): Sushrut Thorat

Abstract: A mediolateral gradation in neural responses to images spanning animals to artificial objects is observed in the ventral temporal cortex (VTC). Which information streams drive this organisation is an ongoing debate. Recently, in Proklova et al. (2016), the visual shape and category (“animacy”) dimensions in a set of stimuli were dissociated using a behavioural measure of visual feature information. fMRI responses revealed a neural cluster (extra-visual animacy cluster, xVAC) which encoded category information unexplained by visual feature information, suggesting extra-visual contributions to the organisation in the ventral visual stream. We reassess these findings using Convolutional Neural Networks (CNNs) as models of the ventral visual stream. The visual features developed in the CNN layers can categorise the shape-matched stimuli from Proklova et al. (2016), unlike the behavioural measures used in that study. The category organisations in xVAC and VTC are explained to a large degree by the CNN visual feature differences, casting doubt on the suggestion that visual feature differences cannot account for the animacy organisation. To inform the debate further, we designed a set of stimuli with animal images to dissociate the animacy organisation driven by the CNN visual features from the degree of familiarity and agency (thoughtfulness and feelings). Preliminary results from a new fMRI experiment designed to understand the contribution of these non-visual features are presented.
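The abstract's core analysis, decoding animacy category from CNN-layer visual features, can be illustrated with a minimal sketch. The feature vectors below are random stand-ins (not the actual CNN activations or stimuli from the study), with a small, feature-selective category difference injected, and the read-out is a simple leave-one-out nearest-centroid classifier rather than the authors' specific method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for CNN-layer activations: one feature vector per
# stimulus. The two categories differ only along a small subset of feature
# dimensions, mimicking a feature difference that could drive decoding.
n_per_class, n_features = 40, 128
animate = rng.normal(0.0, 1.0, (n_per_class, n_features))
inanimate = rng.normal(0.0, 1.0, (n_per_class, n_features))
inanimate[:, :16] += 1.5  # injected, feature-selective category difference

X = np.vstack([animate, inanimate])
y = np.array([0] * n_per_class + [1] * n_per_class)

def loo_nearest_centroid_accuracy(X, y):
    """Leave-one-out nearest-centroid decoding: a minimal linear read-out."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = 0 if np.linalg.norm(X[i] - c0) < np.linalg.norm(X[i] - c1) else 1
        correct += int(pred == y[i])
    return correct / len(y)

acc = loo_nearest_centroid_accuracy(X, y)
print(f"decoding accuracy: {acc:.2f}")  # well above the 0.5 chance level
```

If category can be decoded from the feature vectors alone, as here, visual feature differences suffice to explain a category organisation, which is the logic the abstract applies to the CNN features.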

