Compressive Temporal Summation in Human Visual Cortex

2017
Author(s): Jingyang Zhou, Noah C. Benson, Kendrick Kay, Jonathan Winawer

Abstract
Combining sensory inputs over space and time is fundamental to vision. Population receptive field models have been successful in characterizing spatial encoding throughout the human visual pathways. A parallel question—how visual areas in the human brain process information distributed over time—has received less attention. One challenge is that the most widely used neuroimaging method—fMRI—has coarse temporal resolution compared to the time-scale of neural dynamics. Here, via carefully controlled temporally modulated stimuli, we show that information about temporal processing can be readily derived from fMRI signal amplitudes in male and female subjects. We find that all visual areas exhibit sub-additive summation, whereby responses to longer stimuli are less than the linear prediction from briefer stimuli. We also find fMRI evidence that the neural response to two stimuli is reduced for brief interstimulus intervals (indicating adaptation). These effects are more pronounced in visual areas anterior to V1-V3. Finally, we develop a general model that shows how these effects can be captured with two simple operations: temporal summation followed by a compressive nonlinearity. This model operates for arbitrary temporal stimulation patterns and provides a simple and interpretable set of computations that can be used to characterize neural response properties across the visual hierarchy. Importantly, compressive temporal summation directly parallels earlier findings of compressive spatial summation in visual cortex describing responses to stimuli distributed across space. This indicates that for space and time, cortex uses a similar processing strategy to achieve higher-level and increasingly invariant representations of the visual world.

Significance Statement
Combining sensory inputs over time is fundamental to seeing.
Two important temporal phenomena are summation, the accumulation of sensory inputs over time, and adaptation, a response reduction for repeated or sustained stimuli. We investigated these phenomena in the human visual system using fMRI. We built predictive models that operate on arbitrary temporal patterns of stimulation using two simple computations: temporal summation followed by a compressive nonlinearity. Our new temporal compressive summation model captures (1) subadditive temporal summation, and (2) adaptation. We show that the model accounts for systematic differences in these phenomena across visual areas. Finally, we show that for space and time, the visual system uses a similar strategy to achieve increasingly invariant representations of the visual world.
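As a rough illustration of the two-stage computation this abstract describes (temporal summation followed by a compressive nonlinearity), the sketch below convolves a stimulus time course with an exponentially decaying impulse response and applies a power-law compression. The time constant, exponent, and stimulus durations are illustrative assumptions of this sketch, not parameters from the paper.

```python
import numpy as np

def cts_response(stimulus, tau=0.1, epsilon=0.25, dt=0.001):
    """Compressive temporal summation: linear convolution with an
    exponentially decaying impulse response, then a static power-law
    nonlinearity (epsilon < 1 makes the response sub-additive)."""
    t = np.arange(0, 5 * tau, dt)
    irf = np.exp(-t / tau)
    irf /= irf.sum()                       # unit-area impulse response
    linear = np.convolve(stimulus, irf)[:len(stimulus)]
    return linear ** epsilon

# Sub-additivity: doubling stimulus duration less than doubles the
# summed response amplitude.
brief = np.zeros(1000); brief[:100] = 1.0    # 100 ms pulse at dt = 1 ms
longer = np.zeros(1000); longer[:200] = 1.0  # 200 ms pulse
r_brief = cts_response(brief).sum()
r_longer = cts_response(longer).sum()
print(r_longer / r_brief)                    # between 1 and 2: sub-additive
```

With epsilon = 1 the model reduces to linear summation and the ratio approaches 2; compression pulls it toward 1, which is the signature the paper measures across visual areas.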

2019
Author(s): Kevin A. Murgas, Ashley M. Wilson, Valerie Michael, Lindsey L. Glickfeld

Abstract
Neurons in the visual system integrate over a wide range of spatial scales. This diversity is thought to enable both local and global computations. To understand how spatial information is encoded across the mouse visual system, we use two-photon imaging to measure receptive fields in primary visual cortex (V1) and three downstream higher visual areas (HVAs): LM (lateromedial), AL (anterolateral) and PM (posteromedial). We find significantly larger receptive field sizes and less surround suppression in PM than in V1 or the other HVAs. Unlike other visual features studied in this system, specialization of spatial integration in PM cannot be explained by specific projections from V1 to the HVAs. Instead, our data suggest that distinct connectivity within PM may support the area’s unique ability to encode global features of the visual scene, whereas V1, LM and AL may be more specialized for processing local features.


eLife
2015
Vol 4
Author(s): Michael J Arcaro, Christopher J Honey, Ryan EB Mruczek, Sabine Kastner, Uri Hasson

The human visual system can be divided into over two-dozen distinct areas, each of which contains a topographic map of the visual field. A fundamental question in vision neuroscience is how the visual system integrates information from the environment across different areas. Using neuroimaging, we investigated the spatial pattern of correlated BOLD signal across eight visual areas on data collected during rest conditions and during naturalistic movie viewing. The correlation pattern between areas reflected the underlying receptive field organization with higher correlations between cortical sites containing overlapping representations of visual space. In addition, the correlation pattern reflected the underlying widespread eccentricity organization of visual cortex, in which the highest correlations were observed for cortical sites with iso-eccentricity representations including regions with non-overlapping representations of visual space. This eccentricity-based correlation pattern appears to be part of an intrinsic functional architecture that supports the integration of information across functionally specialized visual areas.


2020
Vol 124 (1)
pp. 245-258
Author(s): Miaomiao Jin, Lindsey L. Glickfeld

Rapid adaptation dynamically alters sensory signals to account for recent experience. To understand how adaptation affects sensory processing and perception, we must determine how it impacts the diverse set of cortical and subcortical areas along the hierarchy of the mouse visual system. We find that rapid adaptation strongly impacts neurons in primary visual cortex, the higher visual areas, and the colliculus, consistent with its profound effects on behavior.


2015
Vol 32
Author(s): M. J. Arcaro, S. Kastner

Abstract
Areas V3 and V4 are commonly thought of as individual entities in the primate visual system, based on definition criteria such as their representation of visual space, connectivity, functional response properties, and relative anatomical location in cortex. Yet, large-scale functional and anatomical organization patterns not only emphasize distinctions within each area, but also links across visual cortex. Specifically, the visuotopic organization of V3 and V4 appears to be part of a larger, supra-areal organization, clustering these areas with early visual areas V1 and V2. In addition, connectivity patterns across visual cortex appear to vary within these areas as a function of their supra-areal eccentricity organization. This complicates the traditional view of these regions as individual functional “areas.” Here, we will review the criteria for defining areas V3 and V4 and will discuss functional and anatomical studies in humans and monkeys that emphasize the integration of individual visual areas into broad, supra-areal clusters that work in concert for a common computational goal. Specifically, we propose that the visuotopic organization of V3 and V4, which provides the criteria for differentiating these areas, also unifies these areas into the supra-areal organization of early visual cortex. We propose that V3 and V4 play a critical role in this supra-areal organization by filtering information about the visual environment along parallel pathways across higher-order cortex.


2019
Author(s): Jiye G. Kim, Emma Gregory, Barbara Landau, Michael McCloskey, Nicholas B. Turk-Browne, et al.

Abstract
Repeated stimuli elicit attenuated responses in visual cortex relative to novel stimuli. This adaptation phenomenon can be considered a form of rapid learning and a signature of perceptual memory. Adaptation occurs not only when a stimulus is repeated immediately, but also when there is a lag in terms of time and other intervening stimuli before the repetition. But how does the visual system keep track of which stimuli are repeated, especially after long delays and many intervening stimuli? We hypothesized that the hippocampus supports long-lag adaptation, given that it learns from single experiences, maintains information over delays, and sends feedback to visual cortex. We tested this hypothesis with fMRI in an amnesic patient, LSJ, who has encephalitic damage to the medial temporal lobe resulting in complete bilateral hippocampal loss. We measured adaptation at varying time lags between repetitions in functionally localized visual areas that were intact in LSJ. We observed that these areas track information over a few minutes even when the hippocampus is unavailable. Indeed, LSJ and controls were identical when attention was directed away from the repeating stimuli: adaptation occurred for lags up to three minutes, but not six minutes. However, when attention was directed toward stimuli, controls now showed an adaptation effect at six minutes but LSJ did not. These findings suggest that visual cortex can support one-shot perceptual memories lasting for several minutes but that the hippocampus is necessary for adaptation in visual cortex after longer delays when stimuli are task-relevant.


Perception
1997
Vol 26 (1_suppl)
p. 61
Author(s): S Zeki

The most fundamental function of the visual brain is to acquire knowledge about the constant, essential properties of the visual world, in conditions in which the information reaching the brain is never constant from moment to moment. This requires the brain to undertake complex operations on the incoming visual signals, discounting all that is not essential for it to acquire knowledge about the world, selecting that which is important, and subjecting the latter to operations that make the brain independent of the continually changing and non-essential information reaching it. One strategy that the brain uses in undertaking this task is that of functional specialisation, through which different essential features, such as motion and colour, are extracted in specialised and geographically distinct visual areas lying outside the primary visual cortex area V1. Our recent psychophysical experiments show that, just as the processing systems for different attributes of vision are separate, so are the final perceptual systems, since different attributes of the visual scene such as colour, form, and motion are perceived at different times, with colour being ahead of motion by about 80 ms, thus leading to a perceptual asynchrony in terms of real time. The end-result of the operations in these individual areas is the acquisition of knowledge. But knowledge can only be acquired in the conscious state. A conscious awareness is therefore the corollary of activity in the specialised areas. Recent experiments using imaging and time resolution methods as well as patients blinded by lesions either in V1 or in more extensive parts of the visual cortex show that the activity in one or a small number of visual areas, without involvement of V1, can give rise to both conscious experience and a crude knowledge about the visual world. This leads us to the conclusion that consciousness itself may be modular.


2018
Vol 4 (1)
pp. 143-163
Author(s): Helen H. Yang, Thomas R. Clandinin

Motion in the visual world provides critical information to guide the behavior of sighted animals. Furthermore, as visual motion estimation requires comparisons of signals across inputs and over time, it represents a paradigmatic and generalizable neural computation. Focusing on the Drosophila visual system, where an explosion of technological advances has recently accelerated experimental progress, we review our understanding of how, algorithmically and mechanistically, motion signals are first computed.
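The comparison "across inputs and over time" highlighted above is classically formalized as a delay-and-correlate (Hassenstein-Reichardt-style) detector. The sketch below is a minimal illustration of that idea, not code from the review; the delay length and toy signals are assumptions.

```python
import numpy as np

def correlator_output(left, right, delay=1):
    """Delay-and-correlate motion detector: multiply each input by a
    delayed copy of its neighbor, and subtract the mirror-symmetric
    term so the sign of the output reports motion direction."""
    d_left, d_right = np.roll(left, delay), np.roll(right, delay)
    d_left[:delay] = 0.0                  # np.roll wraps around, so zero
    d_right[:delay] = 0.0                 # the wrapped samples (true delay)
    return float(np.mean(d_left * right - left * d_right))

# A pattern moving left-to-right hits the left input one step before the
# right input, so the delayed-left term aligns with the right signal.
left = np.array([0., 1., 0., 0., 1., 0., 0., 1., 0.])
right = np.roll(left, 1)                  # same pattern, one step later
print(correlator_output(left, right))     # positive: rightward motion
print(correlator_output(right, left))     # negative: leftward motion
```

The opponent subtraction is what makes the output direction-selective: a static or flickering pattern drives both arms equally and cancels.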


2021
Vol 15
Author(s): Edmund T. Rolls

First, neurophysiological evidence for the learning of invariant representations in the inferior temporal visual cortex is described. This includes object and face representations with invariance for position, size, lighting, view and morphological transforms in the temporal lobe visual cortex; global object motion in the cortex in the superior temporal sulcus; and spatial view representations in the hippocampus that are invariant with respect to eye position, head direction, and place. Second, computational mechanisms that enable the brain to learn these invariant representations are proposed. For the ventral visual system, one key adaptation is the use of information available in the statistics of the environment in slow unsupervised learning to learn transform-invariant representations of objects. This contrasts with deep supervised learning in artificial neural networks, which uses training with thousands of exemplars forced into different categories by neuronal teachers. Similar slow learning principles apply to the learning of global object motion in the dorsal visual system leading to the cortex in the superior temporal sulcus. The learning rule that has been explored in VisNet is an associative rule with a short-term memory trace. The feed-forward architecture has four stages, with convergence from stage to stage. This type of slow learning is implemented in the brain in hierarchically organized competitive neuronal networks with convergence from stage to stage, with only 4-5 stages in the hierarchy. Slow learning is also shown to help the learning of coordinate transforms using gain modulation in the dorsal visual system extending into the parietal cortex and retrosplenial cortex. Representations are learned that are in allocentric spatial view coordinates of locations in the world and that are independent of eye position, head direction, and the place where the individual is located. 
This enables hippocampal spatial view cells to use idiothetic, self-motion, signals for navigation when the view details are obscured for short periods.
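The trace rule explored in VisNet, described above as an associative rule with a short-term memory trace, can be sketched as follows. The learning rate, trace decay, toy "views," and the weight normalization are illustrative assumptions of this sketch, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def trace_learning(inputs, alpha=0.1, eta=0.8):
    """Associative learning with a short-term memory trace: the
    postsynaptic term is an exponentially decaying average of recent
    activity, so the weights bind together inputs that occur close in
    time (e.g. successive transforms of the same object)."""
    w = rng.normal(scale=0.01, size=inputs.shape[1])
    trace = 0.0
    for x in inputs:                  # one stimulus presentation per row
        y = max(0.0, float(w @ x))    # rectified output of a single unit
        trace = eta * trace + (1 - eta) * y
        w += alpha * trace * x        # trace rule: dw = alpha * ybar * x
        w /= np.linalg.norm(w)        # keep the weight vector bounded
    return w

# Two "views" of the same object shown in close temporal succession:
# the trace carries activity across the transform, so the unit comes
# to respond to both views, i.e. a transform-invariant representation.
view_a = np.array([1.0, 1.0, 0.0, 0.0])
view_b = np.array([0.0, 0.0, 1.0, 1.0])
w = trace_learning(np.array([view_a, view_b] * 50))
print(w @ view_a, w @ view_b)
```

Because the trace outlives any single presentation, a purely Hebbian term (set eta = 0) would instead learn each view independently; the temporal smearing is what exploits the statistics of the environment, as described above.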


2014
Vol 26 (3)
pp. 459-475
Author(s): Marcin Szwed, Emilie Qiao, Antoinette Jobert, Stanislas Dehaene, Laurent Cohen

How does reading expertise change the visual system? Here, we explored whether the visual system could develop dedicated perceptual mechanisms in early and intermediate visual cortex under the pressure for fast processing that is particularly strong in reading. We compared fMRI activations in Chinese participants with limited knowledge of French and in French participants with no knowledge of Chinese, exploiting these doubly dissociated reading skills as a tool to study the neural correlates of visual expertise. All participants viewed the same stimuli: words in both languages and matched visual controls, presented at a fast rate comparable with fluent reading. In the Visual Word Form Area, all participants showed enhanced responses to their known scripts. However, group differences were found in occipital cortex. In French readers reading French, activations were enhanced in left-hemisphere visual area V1, with the strongest differences between French words and their controls found at the central and horizontal meridian representations. Chinese participants, who were not expert French readers, did not show these early visual activations. In contrast, Chinese readers reading Chinese showed enhanced activations in intermediate visual areas V3v/hV4, absent in French participants. Together with our previous findings [Szwed, M., Dehaene, S., Kleinschmidt, A., Eger, E., Valabregue, R., Amadon, A., et al. Specialization for written words over objects in the visual cortex. Neuroimage, 56, 330–344, 2011], our results suggest that the effects of extensive practice can be found at the lowest levels of the visual system. They also reveal their cross-script variability: Alphabetic reading involves enhanced engagement of central and right meridian V1 representations that are particularly used in left-to-right reading, whereas Chinese characters put greater emphasis on intermediate visual areas.


2019
Author(s): Sonia Poltoratski, Frank Tong

Abstract
The detection and segmentation of meaningful figures from their background is a core function of vision. While work in non-human primates has implicated early visual mechanisms in this figure-ground modulation, neuroimaging in humans has instead largely ascribed the processing of figures and objects to higher stages of the visual hierarchy. Here, we used high-field fMRI at 7 Tesla to measure BOLD responses to task-irrelevant orientation-defined figures in human early visual cortex, and employed a novel population receptive field (pRF) mapping-based approach to resolve the spatial profiles of two constituent mechanisms of figure-ground modulation: a local boundary response, and a further enhancement spanning the full extent of the figure region that is driven by global differences in features. Reconstructing the distinct spatial profiles of these effects reveals that figure enhancement modulates responses in human early visual cortex in a manner consistent with a mechanism of automatic, contextually-driven feedback from higher visual areas.

Significance Statement
A core function of the visual system is to parse complex 2D input into meaningful figures. We do so constantly and seamlessly, both by processing information about visible edges and by analyzing large-scale differences between figures and background. While influential neurophysiology work has characterized an intriguing mechanism that enhances V1 responses to perceptual figures, we have a poor understanding of how the early visual system contributes to figure-ground processing in humans. Here, we use advanced computational analysis methods and high-field human fMRI data to resolve the distinct spatial profiles of local edge and global figure enhancement in the early visual system (V1 and LGN); the latter is distinct and consistent with a mechanism of automatic, stimulus-driven feedback from higher-level visual areas.

