Temporal evolution from retinal image size to perceived size in human visual cortex

2018
Author(s):
Juan Chen
Irene Sperandio
Molly J. Henry
Melvyn A. Goodale

Abstract
Our visual system affords a distance-invariant percept of object size by integrating retinal image size with viewing distance (size constancy). Single-unit studies in animals have shown that real changes in distance can modulate the firing rate of neurons in primary visual cortex and even subcortical structures, which raises the intriguing possibility that the integration required for size constancy may occur during initial visual processing in V1 or even earlier. In humans, however, EEG and brain-imaging studies have typically manipulated the apparent (not real) distance of stimuli using pictorial illusions, in which the cues to distance are sparse and incongruent. Here, we physically moved the monitor to different distances from the observer, a more ecologically valid paradigm that emulates what happens in everyday life. Using this paradigm in combination with electroencephalography (EEG), we were able for the first time to examine how the computation of size constancy unfolds in real time under real-world viewing conditions. We showed that even when all distance cues were available and congruent, size constancy took about 150 ms to emerge in the activity of visual cortex. This 150-ms interval exceeds the time required for visual signals to reach V1, but is consistent with the time typically associated with later processing within V1 or recurrent processing from higher-level visual areas. This finding therefore provides unequivocal evidence that size constancy does not occur during the initial signal processing in V1 or earlier, but requires subsequent processing, just like other feature-binding mechanisms.
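The geometry behind this integration can be made concrete. As a minimal sketch (an illustration of the underlying optics, not part of the study): the visual angle an object subtends shrinks with distance, and multiplying back by viewing distance recovers a distance-invariant size estimate.

```python
import math

def retinal_angle(size_m, distance_m):
    """Visual angle (radians) subtended by an object of physical
    size `size_m` viewed at `distance_m` (both in metres)."""
    return 2 * math.atan(size_m / (2 * distance_m))

def size_estimate(angle_rad, distance_m):
    """Invert the projection: combine retinal angle with viewing
    distance to recover a distance-invariant size estimate."""
    return 2 * distance_m * math.tan(angle_rad / 2)

# A 10 cm object at 50 cm vs. 100 cm: the retinal angle roughly halves,
# but the recovered size stays constant once distance is factored in.
near_angle = retinal_angle(0.10, 0.50)
far_angle = retinal_angle(0.10, 1.00)
assert far_angle < near_angle
assert math.isclose(size_estimate(near_angle, 0.50),
                    size_estimate(far_angle, 1.00), rel_tol=1e-9)
```

On this account the brain has both ingredients available (retinal angle from the image, distance from depth cues); the EEG result above concerns when, not whether, the combination is computed.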

2016
Author(s):
Dylan R. Muir
Patricia Molina-Luna
Morgane M. Roth
Fritjof Helmchen
Björn M. Kampa

Abstract
Local excitatory connections in mouse primary visual cortex (V1) are stronger and more prevalent between neurons that share similar functional response features. However, the details of how functional rules for local connectivity shape neuronal responses in V1 remain unknown. We hypothesised that complex responses to visual stimuli may arise as a consequence of rules for selective excitatory connectivity within the local network in the superficial layers of mouse V1. In mouse V1 many neurons respond to overlapping grating stimuli (plaid stimuli) with highly selective and facilitatory responses, which are not simply predicted by responses to single gratings presented alone. This complexity is surprising, since excitatory neurons in V1 are considered to be mainly tuned to single preferred orientations. Here we examined the consequences for visual processing of two alternative connectivity schemes: in the first case, local connections are aligned with visual properties inherited from feedforward input (a ‘like-to-like’ scheme specifically connecting neurons that share similar preferred orientations); in the second case, local connections group neurons into excitatory subnetworks that combine and amplify multiple feedforward visual properties (a ‘feature binding’ scheme). By comparing predictions from large-scale computational models with in vivo recordings of visual representations in mouse V1, we found that responses to plaid stimuli were best explained by assuming ‘feature binding’ connectivity. Unlike the ‘like-to-like’ scheme, selective amplification within feature-binding excitatory subnetworks replicated experimentally observed facilitatory responses to plaid stimuli; explained selective plaid responses not predicted by grating selectivity; and was consistent with the broad anatomical selectivity observed in mouse V1.
Our results show that visual feature binding can occur through local recurrent mechanisms without requiring feedforward convergence, and that such a mechanism is consistent with visual responses and cortical anatomy in mouse V1.

Author summary
The brain is a highly complex structure, with abundant connectivity between nearby neurons in the neocortex, the outermost and evolutionarily most recent part of the brain. Although the network architecture of the neocortex can appear disordered, connections between neurons seem to follow certain rules. These rules most likely determine how information flows through the neural circuits of the brain, but the relationship between particular connectivity rules and the function of the cortical network is not known. We built models of visual cortex in the mouse, assuming distinct rules for connectivity, and examined how the various rules changed the way the models responded to visual stimuli. We also recorded responses to visual stimuli of populations of neurons in anaesthetised mice, and compared these responses with our model predictions. We found that connections in neocortex probably follow a connectivity rule that groups together neurons that differ in simple visual properties, to build more complex representations of visual stimuli. This finding is surprising because primary visual cortex is assumed to support mainly simple visual representations. We show that including specific rules for non-random connectivity in cortical models, and precisely measuring those rules in cortical tissue, is essential to understanding how information is processed by the brain.
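The two connectivity rules can be illustrated with a toy sketch (a deliberately simplified stand-in, not the authors' large-scale model): ‘like-to-like’ wiring connects only neurons that share a feedforward orientation preference, while ‘feature binding’ wiring groups neurons into subnetworks that cut across orientation preferences.

```python
import random

random.seed(0)
N = 60
# Feedforward orientation preference (degrees) for each model neuron.
pref = [random.choice([0, 45, 90, 135]) for _ in range(N)]
# Subnetwork membership used by the 'feature binding' rule.
subnet = [random.randrange(4) for _ in range(N)]

def like_to_like(i, j):
    """Connect only neurons sharing the same preferred orientation."""
    return i != j and pref[i] == pref[j]

def feature_binding(i, j):
    """Connect neurons within the same subnetwork regardless of orientation,
    so one subnetwork can pool and amplify several grating components."""
    return i != j and subnet[i] == subnet[j]

pairs = [(i, j) for i in range(N) for j in range(N) if i != j]
# Like-to-like never wires across orientations; feature binding often does,
# which is what lets a subnetwork respond selectively to plaid (combined) input.
cross_l2l = sum(like_to_like(i, j) and pref[i] != pref[j] for i, j in pairs)
cross_fb = sum(feature_binding(i, j) and pref[i] != pref[j] for i, j in pairs)
```

Counting cross-orientation connections under each rule makes the structural difference explicit: zero under like-to-like, many under feature binding.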


2019
Vol 29 (13)
pp. 2237-2243.e4
Author(s):
Juan Chen
Irene Sperandio
Molly J. Henry
Melvyn A. Goodale

2021
pp. 1-10
Author(s):
Georgina Powell
Olivier Penacchio
Hannah Derry-Sumner
Simon K. Rushton
Deepak Rajenderkumar
...

BACKGROUND: Images that deviate from natural scene statistics in terms of spatial frequency and orientation content can produce visual stress (also known as visual discomfort), especially for migraine sufferers. These images appear to over-activate the visual cortex. OBJECTIVE: To connect the literature on visual discomfort with a common chronic condition presenting in neuro-otology clinics known as persistent postural-perceptual dizziness (PPPD). Sufferers experience dizziness when walking through highly cluttered environments or when watching moving stimuli. This is thought to arise from maladaptive interaction between vestibular and visual signals for balance. METHODS: We measured visual discomfort to stationary images in patients with PPPD (N = 30) and symptoms of PPPD in a large general-population cohort (N = 1858) using the Visual Vertigo Analogue Scale (VVAS) and the Situational Characteristics Questionnaire (SCQ). RESULTS: We found that patients with PPPD, and individuals in the general population with more PPPD symptoms, report heightened visual discomfort to stationary images that deviate from natural spectra (patient comparison, F(1, 1865) = 29, p < 0.001; general population correlations, VVAS, rs(1387) = 0.46, p < 0.001; SCQ, rs(1387) = 0.39, p < 0.001). These findings were not explained by co-morbid migraine. Indeed, PPPD symptoms showed a significantly stronger relationship with visual discomfort than did migraine (VVAS, zH = 8.81, p < 0.001; SCQ, zH = 6.29, p < 0.001). CONCLUSIONS: We speculate that atypical visual processing (perhaps due to a visual cortex more prone to over-activation) may predispose individuals to PPPD, possibly helping to explain why some patients with vestibular conditions develop PPPD and some do not.
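The cohort analysis reported above is a rank correlation between image-induced discomfort and PPPD symptom scores. A minimal illustration of that kind of analysis on synthetic data (the variable names, cohort size, and effect size here are assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # synthetic cohort (illustration only; the study had N = 1858)

# Stand-ins for per-participant measures: discomfort ratings to images
# deviating from natural spectra, and VVAS visual-vertigo scores.
discomfort = rng.normal(size=n)
vvas = 0.5 * discomfort + rng.normal(size=n)  # built-in positive link

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks,
    capturing any monotonic (not just linear) association."""
    rank = lambda v: v.argsort().argsort()
    return np.corrcoef(rank(x), rank(y))[0, 1]

rho = spearman_rho(discomfort, vvas)
```

Rank correlation is the appropriate choice here because questionnaire scores are ordinal and the relationship need not be linear.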


Perception
2020
Vol 49 (7)
pp. 749-770
Author(s):
Leon Lou

In three experiments, a bias to inflate, in drawings, the proportion of a mirror occupied by an image on its surface is demonstrated in a sample (N = 146) of undergraduate students taking introductory psychology classes. The inflation is not confined to the image of one’s own head but is likely to occur in depictions of any object seen in a mirror when the mirror frame is included. Having to include in the drawing background objects visible in the mirror is found to reduce the inflation. The inflation also diminishes with a smaller mirror and at a longer viewing distance. An account of the inflation in terms of a mechanism of size constancy contingent on selective attention is offered. The size of the inflation suggests a conflation of the perceived mirror-image size with the size of the distal object it signals, rather than a complete takeover by the latter. The reduction of the size inflation when participants are asked to draw both a target and background objects is more likely a result of selective attention to proportional relationships in the mirror scene than a manifestation of an evenly scaled visual space under distributed visuospatial attention. The implications of the findings for improving proportional accuracy in observational drawing are discussed.


2021
pp. 1-14
Author(s):
Jie Huang
Paul Beach
Andrea Bozoki
David C. Zhu

Background: Postmortem studies of brains with Alzheimer’s disease (AD) not only find amyloid-beta (Aβ) and neurofibrillary tangles (NFT) in the visual cortex, but also reveal temporally sequential changes in AD pathology, progressing from higher-order association areas to lower-order areas and finally the primary visual area (V1) as the disease advances. Objective: This study investigated the effect of AD severity on visual functional networks. Methods: Eight severe AD (SAD) patients, 11 mild/moderate AD (MAD) patients, and 26 healthy senior (HS) controls underwent resting-state fMRI (rs-fMRI) and a task fMRI of viewing face photos. A resting-state visual functional connectivity (FC) network and a face-evoked visual-processing network were identified for each group. Results: For the HS group, the identified group-mean face-evoked visual-processing network in the ventral pathway started in V1 and ended within the fusiform gyrus. In contrast, the resting-state visual FC network was mainly confined to the visual cortex. AD disrupted both functional networks in a similar severity-dependent manner: the more severe the cognitive impairment, the greater the reduction in network connectivity. For the face-evoked visual-processing network, MAD disrupted and reduced activation mainly in the higher-order visual association areas, with SAD further disrupting and reducing activation in the lower-order areas. Conclusion: These findings provide a functional corollary to the canonical view of the temporally sequential advancement of AD pathology through visual cortical areas. The association of the disruption of these functional networks, especially the face-evoked visual-processing network, with AD severity suggests a potential predictor or biomarker of AD progression.
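Resting-state FC networks like the one above are typically built from pairwise correlations of regional BOLD time series. A minimal sketch on synthetic signals (the ROI names and the shared-signal model are illustrative assumptions, not the study's pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)
t = 200  # time points of a synthetic resting-state scan

# Two visual ROIs driven by a shared fluctuation, plus an unrelated ROI.
shared = rng.normal(size=t)
v1 = shared + 0.5 * rng.normal(size=t)
fusiform = shared + 0.5 * rng.normal(size=t)
control = rng.normal(size=t)

# Resting-state FC is commonly summarized as the matrix of pairwise
# Pearson correlations between regional time series.
fc = np.corrcoef(np.vstack([v1, fusiform, control]))
v1_fusiform = fc[0, 1]  # strong: driven by the shared fluctuation
v1_control = fc[0, 2]   # weak: no shared signal
```

In this framing, the severity-dependent "reduction in network connectivity" reported above corresponds to off-diagonal FC values shrinking as the disease progresses.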


2009
Vol 102 (6)
pp. 3469-3480
Author(s):
H. M. Van Ettinger-Veenstra
W. Huijbers
T. P. Gutteling
M. Vink
J. L. Kenemans
...

It is well known that parts of a visual scene are prioritized for visual processing, depending on the current situation. How the CNS moves this focus of attention across the visual image is largely unknown, although there is substantial evidence that preparation of an action is a key factor. Our results support the view that direct corticocortical feedback connections from frontal oculomotor areas to the visual cortex are responsible for the coupling between eye movements and shifts of visuospatial attention. Functional magnetic resonance imaging (fMRI)-guided transcranial magnetic stimulation (TMS) was applied to the frontal eye fields (FEFs) and the intraparietal sulcus (IPS). A single pulse was delivered 60, 30, or 0 ms before a discrimination target was presented at, or next to, the goal of a saccade being prepared. Results showed that the known enhancement of discrimination performance specific to locations to which eye movements are being prepared was further increased by early TMS over the FEF contralateral to eye movement direction, whereas TMS over the IPS produced a general performance increase. The current findings indicate that the FEF affects selective visual processing within the visual cortex itself through direct feedback projections.


NeuroImage
2012
Vol 63 (3)
pp. 1464-1477
Author(s):
Andreas A. Ioannides
Vahe Poghosyan
Lichan Liu
George A. Saridis
Marco Tamietto
...

2017
Vol 117 (1)
pp. 388-402
Author(s):
Michael A. Cohen
George A. Alvarez
Ken Nakayama
Talia Konkle

Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on the similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, a body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways, when considering both macroscale sectors and smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system.

NEW & NOTEWORTHY Here, we ask which neural regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures.
These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing.
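The analysis above is an instance of representational similarity analysis (RSA): build a neural representational dissimilarity matrix (RDM) over category pairs, then correlate its unique entries with behavior. A schematic sketch on synthetic patterns (the pattern dimensions and the simulated search times are assumptions; only the 8-category/28-pair structure mirrors the study):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "neural" response patterns: 8 object categories x 50 features
# (stand-ins for measured category responses, not real data).
patterns = rng.normal(size=(8, 50))

# Neural RDM: 1 - Pearson correlation between category response patterns.
rdm = 1 - np.corrcoef(patterns)

# The unique category pairs (upper triangle) give C(8,2) = 28 predictors,
# matching the 28 search conditions described above.
iu = np.triu_indices(8, k=1)
neural_dissim = rdm[iu]

# Simulated behavior: targets are found faster among dissimilar distractors.
search_time = -0.5 * neural_dissim + 0.05 * rng.normal(size=neural_dissim.size)

# Brain/behavior correlation across the 28 category pairs.
r = np.corrcoef(neural_dissim, search_time)[0, 1]
```

A strongly negative brain/behavior correlation in this setup corresponds to the finding that more neurally distinct category pairs support faster search.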


2018
Author(s):
Andreea Lazar
Chris Lewis
Pascal Fries
Wolf Singer
Danko Nikolić

Summary
Sensory exposure alters the response properties of individual neurons in primary sensory cortices. However, it remains unclear how these changes affect stimulus encoding by populations of sensory cells. Here, recording from populations of neurons in cat primary visual cortex, we demonstrate that visual exposure enhances stimulus encoding and discrimination. We find that repeated presentation of brief, high-contrast shapes results in a stereotyped, biphasic population response consisting of a short-latency transient, followed by a late and extended period of reverberatory activity. Visual exposure selectively improves the stimulus specificity of the reverberatory activity, by increasing the magnitude and decreasing the trial-to-trial variability of the neuronal response. Critically, this improved stimulus encoding is distributed across the population and depends on precise temporal coordination. Our findings provide evidence for the existence of an exposure-driven optimization process that enhances the encoding power of neuronal populations in early visual cortex, thus potentially benefiting simple readouts at higher stages of visual processing.

