Suppressed sensory response to predictable object stimuli throughout the ventral visual stream

2017 ◽  
Author(s):  
David Richter ◽  
Matthias Ekman ◽  
Floris P. de Lange

Abstract Prediction plays a crucial role in perception, as prominently suggested by predictive coding theories. However, the exact form and mechanism of predictive modulations of sensory processing remain unclear, with some studies reporting a downregulation of the sensory response for predictable input and others an enhanced response. In a similar vein, downregulation of the sensory response for predictable input has been linked to either sharpening or dampening of the sensory representation, which are opposite in nature. In the present study we set out to investigate the neural consequences of perceptual expectation of object stimuli throughout the visual hierarchy, using fMRI in human volunteers. Participants (n=24) were exposed to pairs of sequentially presented object images in a statistical learning paradigm, in which the first object predicted the identity of the second object. Image transitions were not task relevant; thus all learning of statistical regularities was incidental. We found strong suppression of neural responses to expected compared to unexpected stimuli throughout the ventral visual stream, including primary visual cortex (V1), lateral occipital complex (LOC), and anterior ventral visual areas. Expectation suppression in LOC, but not V1, scaled positively with image preference, lending support to the dampening account of expectation suppression in object perception.

Significance Statement Statistical regularities permeate our world and help us to perceive and understand our surroundings. It has been suggested that the brain fundamentally relies on predictions and constructs models of the world in order to make sense of sensory information. Previous research on the neural basis of prediction has documented expectation suppression, i.e. suppressed responses to expected compared to unexpected stimuli. In the present study we queried the presence and characteristics of expectation suppression throughout the ventral visual stream.
We demonstrate robust expectation suppression in the entire ventral visual pathway, and underlying this suppression a dampening of the sensory representation in object-selective visual cortex, but not in primary visual cortex. Taken together, our results provide novel evidence in support of theories conceptualizing perception as an active inference process, which selectively dampens cortical representations of predictable objects. This dampening may support our ability to automatically filter out irrelevant, predictable objects.
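The central quantity in this line of work reduces to a simple contrast: expectation suppression is the difference in mean response between unexpected and expected stimuli in a region of interest. The sketch below uses simulated data and illustrative variable names, not the study's actual parameter estimates.

```python
import numpy as np

# Hypothetical per-trial response amplitudes (e.g., GLM betas) for one ROI;
# the values are simulated for illustration and are not the study's data.
rng = np.random.default_rng(0)
expected = rng.normal(loc=1.0, scale=0.3, size=200)    # predictable trials
unexpected = rng.normal(loc=1.3, scale=0.3, size=200)  # surprising trials

# Expectation suppression: unexpected minus expected mean response.
# A positive value indicates a suppressed response to expected input.
suppression = unexpected.mean() - expected.mean()
print(f"expectation suppression: {suppression:.3f}")
```

In practice this contrast would be computed per region of interest and tested across participants; the single simulated sample here only illustrates the direction of the effect.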

2018 ◽  
Vol 120 (3) ◽  
pp. 926-941 ◽  
Author(s):  
Dzmitry A. Kaliukhovich ◽  
Hans Op de Beeck

Similar to primates, visual cortex in rodents appears to be organized into two distinct hierarchical streams. However, little is known about how visual information is processed along those streams in rodents. In this study, we examined how repetition suppression and the position and clutter tolerance of the neuronal representations evolve along the putative ventral visual stream in rats. To address this question, we recorded multiunit spiking activity in primary visual cortex (V1) and the more downstream visual laterointermediate (LI) area of head-restrained Long-Evans rats. We employed a paradigm reminiscent of the continuous carry-over design used in human neuroimaging. In both areas, stimulus repetition attenuated the early phase of the neuronal response to the repeated stimulus, with this response suppression being greater in area LI. Furthermore, stimulus preferences were more similar across positions (position tolerance) in area LI than in V1, even though the absolute responses in both areas were very sensitive to changes in position. In contrast, the neuronal representations in both areas were equally good at tolerating the presence of limited visual clutter, as modeled by the presentation of a single flank stimulus. When probing tolerance of the neuronal representations with stimulus-specific adaptation, we detected no position tolerance in either examined brain area, whereas we did reveal clutter tolerance in both areas. Overall, our data demonstrate similarities and discrepancies in the processing of visual information along the ventral visual streams of rodents and primates. Moreover, our results stress caution in using neuronal adaptation to probe tolerance of neuronal representations.

NEW & NOTEWORTHY Rodents are emerging as a popular animal model that complements primates for studying higher-level visual functions.
Similar to findings in primates, we demonstrate a greater repetition suppression and position tolerance of the neuronal representations in the downstream laterointermediate area of Long-Evans rats compared with primary visual cortex. However, we report no difference in the degree of clutter tolerance between the areas. These findings provide additional evidence for hierarchical processing of visual stimuli in rodents.


2021 ◽  
pp. 1-16
Author(s):  
Tao He ◽  
David Richter ◽  
Zhiguo Wang ◽  
Floris P. de Lange

Abstract Both spatial and temporal context play an important role in visual perception and behavior. Humans can extract statistical regularities from both forms of context to help process the present and to construct expectations about the future. Numerous studies have found reduced neural responses to expected stimuli compared with unexpected stimuli, for both spatial and temporal regularities. However, it is largely unclear whether and how these forms of context interact. In the current fMRI study, 33 human volunteers were exposed to pairs of object stimuli that could be expected or surprising in terms of their spatial and temporal context. We found reliable independent contributions of both spatial and temporal context in modulating the neural response. Specifically, neural responses to stimuli in expected compared with unexpected contexts were suppressed throughout the ventral visual stream. These results suggest that both spatial and temporal context may aid sensory processing in a similar fashion, providing evidence on how different types of context jointly modulate perceptual processing.


2009 ◽  
Vol 106 (37) ◽  
pp. 15996-16001 ◽  
Author(s):  
Christopher L. Striemer ◽  
Craig S. Chapman ◽  
Melvyn A. Goodale

When we reach toward objects, we easily avoid potential obstacles located in the workspace. Previous studies suggest that obstacle avoidance relies on mechanisms in the dorsal visual stream in the posterior parietal cortex. One fundamental question that remains unanswered is where the visual inputs to these dorsal-stream mechanisms come from. Here, we provide compelling evidence that these mechanisms can operate in “real time” without direct input from primary visual cortex (V1). In our first experiment, we used a reaching task to demonstrate that an individual with a dense left visual field hemianopia after damage to V1 remained strikingly sensitive to the position of unseen static obstacles placed in his blind field. Importantly, in a second experiment, we showed that his sensitivity to the same obstacles in his blind field was abolished when a short 2-s delay (without vision) was introduced before reach onset. These findings have far-reaching implications, not only for our understanding of the time constraints under which different visual pathways operate, but also for how these seemingly “primitive” subcortical visual pathways can control complex everyday behavior without recourse to conscious vision.


2017 ◽  
Vol 118 (6) ◽  
pp. 3282-3292 ◽  
Author(s):  
Jason M. Samonds ◽  
Berquin D. Feese ◽  
Tai Sing Lee ◽  
Sandra J. Kuhlman

Complex receptive field characteristics, distributed across a population of neurons, are thought to be critical for solving perceptual inference problems that arise during motion and image segmentation. For example, in a class of neurons referred to as “end-stopped,” extending a stimulus beyond the bar-responsive region into the surround suppresses responsiveness. It is unknown whether these properties exist for receptive field surrounds in the mouse. We examined surround modulation in layer 2/3 neurons of the primary visual cortex in mice using two-photon calcium imaging. We found that surround suppression was significantly asymmetric in 17% of the visually responsive neurons examined. Furthermore, the magnitude of asymmetry was correlated with orientation selectivity. Our results demonstrate that neurons in mouse primary visual cortex are differentially sensitive to the addition of elements in the surround and that individual neurons can be described as uniformly suppressed by the surround, end-stopped, or side-stopped.

NEW & NOTEWORTHY Perception of visual scenes requires active integration of both local and global features to successfully segment objects from the background. Although the underlying circuitry and development of perceptual inference are not well understood, converging evidence indicates that asymmetry and diversity in surround modulation are likely fundamental to these computations. We determined that these key features are present in the mouse. Our results support the mouse as a model to explore the neural basis and development of surround modulation as it relates to perceptual inference.


2021 ◽  
Author(s):  
Aran Nayebi ◽  
Nathan C. L. Kong ◽  
Chengxu Zhuang ◽  
Justin L. Gardner ◽  
Anthony M. Norcia ◽  
...  

Task-optimized deep convolutional neural networks are the most quantitatively accurate models of the primate ventral visual stream. However, such networks are implausible as a model of the mouse visual system, because mouse visual cortex has a known shallower hierarchy and the supervised objectives these networks are typically trained with are likely ethologically relevant in neither content nor quantity. Here we develop shallow network architectures that are more consistent with anatomical and physiological studies of mouse visual cortex than current models. We demonstrate that hierarchically shallow architectures trained using contrastive objective functions applied to visual-acuity-adapted images achieve neural prediction performance that exceeds that of the same architectures trained in a supervised manner, and yield the most quantitatively accurate models of the mouse visual system. Moreover, these models' neural predictivity significantly surpasses that of supervised, deep architectures that are known to correspond well to the primate ventral visual stream. Finally, we derive a novel measure of inter-animal consistency and show that the best models closely match this quantity across visual areas. Taken together, our results suggest that contrastive objectives operating on shallow architectures with ethologically motivated image transformations may be a biologically plausible computational theory of visual coding in mice.
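Contrastive objective functions of the kind referenced above are, in the self-supervised learning literature, typically variants of the NT-Xent (SimCLR-style) loss. The sketch below is a minimal NumPy rendering of that generic loss family, not the authors' implementation; all names, dimensions, and temperature values are illustrative.

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """Minimal NT-Xent (normalized temperature-scaled cross-entropy) loss.
    z1, z2: (n, d) embeddings of two augmented views of the same n images."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize embeddings
    sim = z @ z.T / temperature                       # pairwise cosine similarities
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # The positive pair for sample i is its other view at index (i + n) mod 2n.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()

# Illustrative usage: two slightly perturbed views of the same embeddings.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
z2 = z1 + 0.1 * rng.normal(size=(8, 16))
print(f"contrastive loss: {nt_xent(z1, z2):.2f}")
```

The loss pulls the two views of each image together in embedding space while pushing apart all other pairs, which is the sense in which no supervised labels are needed.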


2018 ◽  
Author(s):  
Anna Blumenthal ◽  
Bobby Stojanoski ◽  
Chris Martin ◽  
Rhodri Cusack ◽  
Stefan Köhler

ABSTRACT Identifying what an object is, and whether an object has been encountered before, is a crucial aspect of human behavior. Despite this importance, we do not have a complete understanding of the neural basis of these abilities. Investigations into the neural organization of human object representations have revealed category-specific organization in the ventral visual stream in perceptual tasks. Interestingly, these categories fall within broader domains of organization, with distinctions between animate, inanimate large, and inanimate small objects. While there is some evidence for category-specific effects in the medial temporal lobe (MTL), it is currently unclear whether domain-level organization is also present across these structures. To this end, we used fMRI with a continuous recognition memory task. Stimuli were images of objects from several different categories, which were either animate or inanimate, or large or small within the inanimate domain. We employed representational similarity analysis (RSA) to test the hypothesis that object-evoked responses in MTL structures during recognition-memory judgments also show evidence for domain-level organization along both dimensions. Our data support this hypothesis. Specifically, object representations were shaped by animacy, real-world size, or both in perirhinal and parahippocampal cortex, as well as the hippocampus. While sensitivity to these dimensions differed across structures when probed individually, hinting at interesting links to functional differentiation, similarities in organization across MTL structures were more prominent overall. These results argue for continuity in the organization of object representations in the ventral visual stream and the MTL.
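RSA as described compares a neural representational dissimilarity matrix (RDM) with model RDMs coding candidate dimensions such as animacy. A minimal sketch with simulated voxel patterns follows; the pattern sizes, noise level, and animacy coding are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
animacy = np.array([1, 1, 1, 0, 0, 0])  # 1 = animate, 0 = inanimate objects

# Simulated voxel patterns: objects in the same domain share a signal component.
signal = {1: rng.normal(size=50), 0: rng.normal(size=50)}
patterns = np.stack([signal[a] for a in animacy])
patterns = patterns + rng.normal(scale=0.5, size=patterns.shape)  # add noise

neural_rdm = pdist(patterns, metric="correlation")       # 1 - r for each pair
model_rdm = pdist(animacy[:, None], metric="euclidean")  # 0 = same domain
rho, _ = spearmanr(neural_rdm, model_rdm)
print(f"animacy model vs. neural RDM: Spearman rho = {rho:.2f}")
```

A reliably positive rank correlation would indicate that the region's pattern geometry carries domain information; in the study this logic is applied per MTL structure and tested across participants.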


2017 ◽  
Author(s):  
Jesse Gomez ◽  
Vaidehi Natu ◽  
Brianna Jeska ◽  
Michael Barnett ◽  
Kalanit Grill-Spector

ABSTRACT Receptive fields (RFs) processing information in restricted parts of the visual field are a key property of neurons in the visual system. However, how RFs develop in humans is unknown. Using fMRI and population receptive field (pRF) modeling in children and adults, we determined where and how pRFs develop across the ventral visual stream. We find that pRF properties in visual field maps V1 through VO1 are adult-like by age 5. However, pRF properties in face- and word-selective regions develop into adulthood, increasing the foveal representation and the visual field coverage for faces in the right hemisphere and words in the left hemisphere. Eye tracking indicates that pRF changes are related to changing fixation patterns on words and faces across development. These findings suggest a link between viewing behavior of faces and words and the differential development of pRFs across visual cortex, potentially due to competition for foveal coverage.
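In standard pRF modeling, each voxel's response is predicted from the overlap between the stimulus aperture and a model receptive field, commonly an isotropic 2D Gaussian. The forward-model sketch below uses illustrative parameter values, not the study's fitted estimates.

```python
import numpy as np

def gaussian_prf(x0, y0, sigma, grid):
    """Isotropic 2D Gaussian pRF centered at (x0, y0) with spread sigma,
    evaluated on a visual-field grid (degrees of visual angle)."""
    x, y = grid
    return np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

# Visual field sampled from -10 to +10 degrees in each dimension.
coords = np.linspace(-10, 10, 101)
grid = np.meshgrid(coords, coords)

prf = gaussian_prf(x0=2.0, y0=-1.0, sigma=1.5, grid=grid)

# Predicted response to a stimulus aperture: overlap of aperture and pRF,
# normalized so a stimulus covering the whole pRF yields a response of 1.
stimulus = np.zeros_like(prf)
stimulus[:, 50:] = 1.0  # right visual hemifield stimulated
response = (prf * stimulus).sum() / prf.sum()
print(f"predicted response: {response:.2f}")
```

Fitting inverts this forward model: x0, y0, and sigma are adjusted per voxel until the predicted time course best matches the measured BOLD response, which is how developmental changes in pRF position and size are quantified.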


2019 ◽  
Author(s):  
David Richter ◽  
Floris P. de Lange

Abstract Perception and behavior can be guided by predictions, which are often based on learned statistical regularities. Neural responses to expected stimuli are frequently found to be attenuated after statistical learning. However, whether this sensory attenuation following statistical learning occurs automatically or depends on attention remains unknown. In the present fMRI study, we exposed human volunteers to sequentially presented object stimuli, in which the first object predicted the identity of the second object. We observed a strong attenuation of neural activity for expected compared to unexpected stimuli in the ventral visual stream. Crucially, this sensory attenuation was only apparent when stimuli were attended, and vanished when attention was directed away from the predictable objects. These results put important constraints on neurocomputational theories that cast perception as a process of probabilistic integration of prior knowledge and sensory information.


eLife ◽  
2018 ◽  
Vol 7 ◽  
Author(s):  
Floris P de Lange ◽  
Matthias Ekman

The ongoing debate on the neural basis of orientation selectivity in the primary visual cortex continues.


2015 ◽  
Vol 113 (5) ◽  
pp. 1656-1669 ◽  
Author(s):  
Jedediah M. Singer ◽  
Joseph R. Madsen ◽  
William S. Anderson ◽  
Gabriel Kreiman

Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds.

