Neural dynamics of perceptual inference and its reversal during imagery

2019 ◽  
Author(s):  
Nadine Dijkstra ◽  
Luca Ambrogioni ◽  
Marcel A.J. van Gerven

After the presentation of a visual stimulus, cortical visual processing cascades from low-level sensory features in primary visual areas to increasingly abstract representations in higher-level areas. It is often hypothesized that the reverse process underpins the human ability to generate mental images. Under this hypothesis, visual information feeds back from high-level areas, and abstract representations are used to construct the sensory representation in primary visual cortices. Such reversals of information flow are also hypothesized to play a central role in later stages of perception. According to predictive processing theories, ambiguous sensory information is resolved using abstract representations coming from high-level areas, through oscillatory rebounds between different levels of the visual hierarchy. However, despite the elegance of these theoretical models, to this day there is no direct experimental evidence of the reversal of visual information flow during mental imagery and perception. In the first part of this paper, we provide direct evidence in humans for a reverse order of activation of the visual hierarchy during imagery. Specifically, we show that classification machine learning models trained on brain data at different time points during the early feedforward phase of perception are reactivated in reverse order during mental imagery. In the second part of the paper, we report an 11 Hz oscillatory pattern of feedforward and reversed visual processing phases during perception. Together, these results are in line with the idea that during perception, the high-level cause of sensory input is inferred through recurrent hypothesis updating, whereas during imagery, this learned forward mapping is reversed to generate sensory signals given abstract representations.
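The temporal generalization logic described above, training a classifier at each time point of perception and testing it at each time point of imagery, can be sketched with simulated data. All array shapes, signal amplitudes, and the two-class structure below are illustrative assumptions, not the authors' MEG pipeline; a reversed order of reactivation shows up as high accuracy along the anti-diagonal of the train-by-test matrix.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 60, 16, 5
labels = rng.integers(0, 2, n_trials)

# Simulate a stimulus signal that moves across sensor groups over time during
# perception, and traverses the same sensor groups in reverse during imagery.
def simulate(reverse=False):
    X = rng.normal(size=(n_trials, n_sensors, n_times))
    for t in range(n_times):
        src = n_times - 1 - t if reverse else t
        X[labels == 1, src * 3:(src * 3) + 3, t] += 2.0
    return X

perception, imagery = simulate(False), simulate(True)

# Train one classifier per perception time point, test on every imagery time point.
acc = np.zeros((n_times, n_times))
for t_train in range(n_times):
    clf = LogisticRegression().fit(perception[:, :, t_train], labels)
    for t_test in range(n_times):
        acc[t_train, t_test] = clf.score(imagery[:, :, t_test], labels)

# Reversed reactivation: classifiers from early perception peak at late imagery times.
print(acc.round(2))
```

In this toy setup the anti-diagonal of `acc` carries the high scores, which is the signature of a reversed activation order.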

2021 ◽  
Author(s):  
Ning Mei ◽  
Roberto Santana ◽  
David Soto

Despite advances in the neuroscience of visual consciousness over the last decades, we still lack a framework for understanding the scope of unconscious processing and how it relates to conscious experience. Previous research observed brain signatures of unconscious contents in visual cortex, but these have not been identified in a reliable manner, with low trial numbers and signal detection theoretic constraints not allowing conscious perception to be decisively ruled out. Critically, the extent to which unconscious content is represented in high-level processing stages along the ventral visual stream and linked prefrontal areas remains unknown. Using a within-subject, high-precision, highly-sampled fMRI approach, we show that unconscious contents, even those associated with null sensitivity, can be reliably decoded from multivoxel patterns that are highly distributed along the ventral visual pathway and also involve prefrontal substrates. Notably, the neural representation in these areas generalised across conscious and unconscious visual processing states, placing constraints on prior findings that fronto-parietal substrates support the representation of conscious contents and suggesting revisions to models of consciousness such as the neuronal global workspace. We then provide a computational model simulation of visual information processing/representation in the absence of perceptual sensitivity by using feedforward convolutional neural networks trained to perform a similar visual task to the human observers. The work provides a novel framework for pinpointing the neural representation of unconscious knowledge across different task domains.
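The cross-state generalisation test mentioned above can be illustrated with a minimal sketch: a classifier trained on multivoxel patterns from "conscious" trials is evaluated on "unconscious" trials. Above-chance transfer would suggest a representation shared across processing states. Voxel counts, effect sizes, and the shared-pattern assumption are invented for illustration and are not the authors' analysis.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 50
labels = rng.integers(0, 2, n_trials)

# Assume the content signal occupies the same voxels in both states,
# but is weaker on unconscious trials.
pattern = rng.normal(size=n_voxels)
def simulate(strength):
    X = rng.normal(size=(n_trials, n_voxels))
    X[labels == 1] += strength * pattern
    return X

conscious, unconscious = simulate(1.0), simulate(0.4)

clf = LinearSVC(dual=False).fit(conscious, labels)
within = clf.score(conscious, labels)    # accuracy in the trained state
across = clf.score(unconscious, labels)  # generalisation to the other state
print(f"within-state: {within:.2f}, cross-state: {across:.2f}")
```

Because the simulated signal direction is shared, the classifier transfers across states despite the weaker unconscious signal.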


F1000Research ◽  
2013 ◽  
Vol 2 ◽  
pp. 58 ◽  
Author(s):  
J Daniel McCarthy ◽  
Colin Kupitz ◽  
Gideon P Caplovitz

Our perception of an object’s size arises from the integration of multiple sources of visual information, including retinal size, perceived distance and its size relative to other objects in the visual field. This constructive process is revealed through a number of classic size illusions such as the Delboeuf Illusion, the Ebbinghaus Illusion and others illustrating size constancy. Here we present a novel variant of the Delboeuf and Ebbinghaus size illusions that we have named the Binding Ring Illusion. The illusion is such that the perceived size of a circular array of elements is underestimated when a circular contour – a binding ring – is superimposed on it, and overestimated when the binding ring slightly exceeds the overall size of the array. Here we characterize the stimulus conditions that lead to the illusion, and the perceptual principles that underlie it. Our findings indicate that the perceived size of an array is susceptible to the assimilation of an explicitly defined superimposed contour. Our results also indicate that the assimilation process takes place at a relatively high level in the visual processing stream, after different spatial frequencies have been integrated and global shape has been constructed. We hypothesize that the Binding Ring Illusion arises due to the fact that the size of an array of elements is not explicitly defined and therefore can be influenced (through a process of assimilation) by the presence of a superimposed object that does have an explicit size.


2019 ◽  
Author(s):  
Ali Pournaghdali ◽  
Bennett L Schwartz

Studies utilizing continuous flash suppression (CFS) provide valuable information regarding conscious and nonconscious perception. There are, however, crucial unanswered questions regarding the mechanisms of suppression and the level of visual processing in the absence of consciousness with CFS. Research suggests that the answers to these questions depend on the experimental configuration and how consciousness is assessed in these studies. The aim of this review is to evaluate the impact of different experimental configurations and of the assessment of consciousness on the results of previous CFS studies. We review studies that evaluated the influence of different experimental configurations on the depth of suppression with CFS and discuss how different assessments of consciousness may impact the results of CFS studies. Finally, we review behavioral and brain recording studies of CFS. In conclusion, previous studies provide evidence for the survival of low-level visual information and the complete impairment of high-level visual information under the influence of CFS. That is, studies suggest that nonconscious perception of lower-level visual information occurs with CFS, but there is no evidence for nonconscious high-level recognition with CFS.


2019 ◽  
Author(s):  
Koen V. Haak ◽  
Christian F. Beckmann

Whether and how the balance between plasticity and stability varies across the brain is an important open question. Within a processing hierarchy, it is thought that plasticity is increased at higher levels of cortical processing, but direct quantitative comparisons between low- and high-level plasticity have not been made so far. Here, we addressed this issue for the human cortical visual system. By quantifying plasticity as the complement of the heritability of functional connectivity, we demonstrate a non-monotonic relationship between plasticity and hierarchical level, such that plasticity decreases from early to mid-level cortex, and then increases further up the visual hierarchy. This non-monotonic relationship argues against the recent theory that the balance between plasticity and stability is governed by the costs of the “coding-catastrophe”, and can be explained by a concurrent decline of short-term adaptation and rise of long-term plasticity up the visual processing hierarchy.
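The quantity defined above, plasticity as the complement of heritability, can be made concrete with a small sketch. Here heritability is estimated with Falconer's twin-correlation formula, h² = 2(r_MZ − r_DZ); whether the authors used this particular estimator is an assumption, and the correlations below are hypothetical numbers chosen only to reproduce a non-monotonic profile.

```python
import numpy as np

def plasticity(r_mz, r_dz):
    """Plasticity as 1 - h^2, with h^2 from Falconer's formula (an assumption here)."""
    h2 = 2.0 * (r_mz - r_dz)
    return 1.0 - float(np.clip(h2, 0.0, 1.0))

# Hypothetical twin correlations of a connectivity measure at three levels of
# the visual hierarchy: plasticity dips at mid-level, then rises again.
levels = {"early": (0.6, 0.45), "mid": (0.7, 0.4), "high": (0.55, 0.45)}
for name, (r_mz, r_dz) in levels.items():
    print(name, round(plasticity(r_mz, r_dz), 2))
```

With these illustrative correlations the profile is 0.7, 0.4, 0.8 across early, mid, and high levels, matching the decrease-then-increase pattern described in the abstract.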


2020 ◽  
Author(s):  
Sanjeev Nara ◽  
Mikel Lizarazu ◽  
Craig G Richter ◽  
Diana C Dima ◽  
Mathieu Bourguignon ◽  
...  

Predictive processing has been proposed as a fundamental cognitive mechanism to account for how the brain interacts with the external environment via its sensory modalities. The brain processes external information about the content (i.e., “what”) and timing (i.e., “when”) of environmental stimuli to update an internal generative model of the world around it. However, the interaction between “what” and “when” has received very little attention when focusing on vision. In this magnetoencephalography (MEG) study we investigate how processing of feature-specific information (i.e., “what”) is affected by temporal predictability (i.e., “when”). In line with previous findings, we observed a suppression of evoked neural responses in the visual cortex for predictable stimuli. Interestingly, we observed that temporal uncertainty enhances this expectation suppression effect. This suggests that in temporally uncertain scenarios the neurocognitive system relies more on internal representations and invests fewer resources in integrating bottom-up information. Indeed, temporal decoding analysis indicated that visual features are encoded for a shorter time period by the neural system when temporal uncertainty is higher. This is consistent with visual information being maintained active for less time for a stimulus whose onset is unpredictable compared to when it is predictable. These findings highlight the greater reliance of the visual system on internal expectations when the temporal dynamics of the external environment are less predictable.
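The temporal decoding analysis referenced above can be sketched as follows: a classifier is trained and tested at each time point, and the number of time points with above-chance decoding indexes how long feature information is maintained. The shapes, amplitudes, and the exact threshold are invented for illustration; in this toy simulation the "unpredictable" condition simply carries the signal for fewer time points.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_sensors, n_times = 80, 12, 8
labels = rng.integers(0, 2, n_trials)

def simulate(active_until):
    # Feature signal is present for the first `active_until` time points only.
    X = rng.normal(size=(n_trials, n_sensors, n_times))
    X[labels == 1, :, :active_until] += 1.0
    return X

def decodable_span(X, chance=0.5):
    # Count time points where cross-validated accuracy clearly exceeds chance.
    span = 0
    for t in range(n_times):
        acc = cross_val_score(LogisticRegression(), X[:, :, t], labels, cv=5).mean()
        span += acc > chance + 0.15
    return span

predictable = decodable_span(simulate(active_until=6))
unpredictable = decodable_span(simulate(active_until=3))
print(predictable, unpredictable)
```

The shorter decodable span in the unpredictable condition mirrors the abstract's claim that features are encoded for less time under higher temporal uncertainty.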


2017 ◽  
Author(s):  
D. Pascucci ◽  
G. Mancuso ◽  
E. Santandrea ◽  
C. Della Libera ◽  
G. Plomp ◽  
...  

Every instant of perception depends on a cascade of brain processes calibrated to the history of sensory and decisional events. In the present work, we show that human visual perception is constantly shaped by two contrasting forces, exerted by sensory adaptation and past decisions. In a series of experiments, we used multilevel modelling and cross-validation approaches to investigate the impact of previous stimuli and responses on current errors in adjustment tasks. Our results revealed that each perceptual report is permeated by opposite biases from a hierarchy of serially dependent processes: low-level adaptation repels perception away from previous stimuli; high-level, decisional traces attract perceptual reports toward previous responses. Contrary to recent claims, we demonstrated that positive serial dependence does not result from continuity fields operating at the level of early visual processing, but arises from the inertia of decisional templates. This finding is consistent with a Two-process model of serial dependence in which the persistence of read-out weights in a decision unit compensates for sensory adaptation, leading to attractive biases in sequential responses. We propose the first unified account of serial dependence in which functionally distinct mechanisms, operating at different stages, promote the differentiation and integration of visual information over time.
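The two opposing biases described above can be simulated and then recovered by regression, in the spirit of (but much simpler than) the multilevel modelling the authors report. The generative coefficients and noise level below are arbitrary assumptions: responses are repelled from the previous stimulus (adaptation) and attracted toward the previous response (decisional inertia).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
stim = rng.uniform(-1, 1, n)   # trial-wise feature value (arbitrary units)
resp = np.zeros(n)
resp[0] = stim[0]
for t in range(1, n):
    repulsion = -0.1 * (stim[t - 1] - stim[t])   # adaptation: pushed away from last stimulus
    attraction = 0.2 * (resp[t - 1] - stim[t])   # decisional trace: pulled toward last response
    resp[t] = stim[t] + repulsion + attraction + rng.normal(scale=0.05)

# Regress current error on the two signed differences to recover both biases.
err = resp[1:] - stim[1:]
X = np.column_stack([stim[:-1] - stim[1:], resp[:-1] - stim[1:]])
coef, *_ = np.linalg.lstsq(X, err, rcond=None)
print(coef.round(2))  # negative (repulsive) and positive (attractive) weights
```

The recovered coefficients have opposite signs, separating the repulsive sensory bias from the attractive decisional one, which is the core logic of the two-process account.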


2019 ◽  
Author(s):  
Amarender R. Bogadhi ◽  
Leor N. Katz ◽  
Anil Bollimunta ◽  
David A. Leopold ◽  
Richard J. Krauzlis

The evolution of the primate brain is marked by a dramatic increase in the number of neocortical areas that process visual information [1]. This cortical expansion supports two hallmarks of high-level primate vision – the ability to selectively attend to particular visual features [2] and the ability to recognize a seemingly limitless number of complex visual objects [3]. Given their prominent roles in high-level vision for primates, it is commonly assumed that these cortical processes supersede the earlier versions of these functions accomplished by the evolutionarily older brain structures that lie beneath the cortex. Contrary to this view, here we show that the superior colliculus (SC), a midbrain structure conserved across all vertebrates [4], is necessary for the normal expression of attention-related modulation and object selectivity in a newly identified region of macaque temporal cortex. Using a combination of psychophysics, causal perturbations and fMRI, we identified a localized region in the temporal cortex that is functionally dependent on the SC. Targeted electrophysiological recordings in this cortical region revealed neurons with strong attention-related modulation that was markedly reduced during attention deficits caused by SC inactivation. Many of these neurons also exhibited selectivity for particular visual objects, and this selectivity was also reduced during SC inactivation. Thus, the SC exerts a causal influence on high-level visual processing in cortex at a surprisingly late stage where attention and object selectivity converge, perhaps determined by the elemental forms of perceptual processing the SC has supported since before there was a neocortex.


2018 ◽  
Author(s):  
Ruyuan Zhang ◽  
Duje Tadin

Visual perceptual learning (VPL) can lead to long-lasting perceptual improvements. While the efficacy of VPL is well established, there is still considerable debate about what mechanisms underlie the effects of VPL. Much of this debate concentrates on where along the visual processing hierarchy behaviorally relevant plasticity takes place. Here, we aimed to tackle this question in the context of motion processing, a domain where links between behavior and processing hierarchy are well established. Specifically, we took advantage of an established transition from component-dependent representations at the earliest level to pattern-dependent representations at the middle level of cortical motion processing. We trained two groups of participants on the same motion direction identification task using either grating or plaid stimuli. A set of pre- and post-training tests was used to determine the degree of learning specificity and generalizability. This approach allowed us to disentangle contributions from both low- and mid-level motion processing, as well as high-level cognitive changes. We observed a complete bi-directional transfer of learning between component and pattern stimuli as long as they shared the same apparent motion direction. This result indicates learning-induced plasticity at intermediate levels of motion processing. Moreover, we found that motion VPL is specific to the trained stimulus direction, speed, size, and contrast, highlighting the pivotal role of basic visual features in VPL and diminishing the possibility of non-sensory, decision-level enhancements. Taken together, our study psychophysically examined a variety of factors mediating motion VPL, and demonstrated that motion VPL most likely alters visual computation in the middle stage of motion processing.


i-Perception ◽  
2021 ◽  
Vol 12 (3) ◽  
pp. 204166952110164
Author(s):  
Stina Cornell Kärnekull* ◽  
Billy Gerdfeldter ◽  
Maria Larsson ◽  
Artin Arshamian

Olfactory perception is malleable and easily modulated by top-down processes such as those induced by visual and verbal information. A classic example of this is olfactory illusions, in which the perceived pleasantness of an odor is manipulated by the valence of a verbal label that is either visually or auditorily presented together with the odor. The mechanism behind this illusion is still unknown, and it is not clear if it is driven only by verbal information or if there is an interaction between language functions and visual mental imagery processes. One way to test this directly is to study early blind individuals who have little or no experience of visual information or visual mental imagery. Here, we did this by testing early blind, late blind, and sighted individuals in a classical paradigm where odors were presented with negative, neutral, and positive labels via speech. In contrast to our hypothesis—that the lack of visual imagery would render early blind individuals less susceptible to the olfactory illusion—early and late blind participants showed stronger illusions than sighted participants. These findings demonstrate that the general mechanism underlying verbally induced olfactory illusions is not caused by visual processing and visual mental imagery per se.


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Nadine Dijkstra ◽  
Luca Ambrogioni ◽  
Diego Vidaurre ◽  
Marcel van Gerven

After the presentation of a visual stimulus, neural processing cascades from low-level sensory areas to increasingly abstract representations in higher-level areas. It is often hypothesised that a reversal in neural processing underlies the generation of mental images as abstract representations are used to construct sensory representations in the absence of sensory input. According to predictive processing theories, such reversed processing also plays a central role in later stages of perception. Direct experimental evidence of reversals in neural information flow has been missing. Here, we used a combination of machine learning and magnetoencephalography to characterise neural dynamics in humans. We provide direct evidence for a reversal of the perceptual feed-forward cascade during imagery and show that, during perception, such reversals alternate with feed-forward processing in an 11 Hz oscillatory pattern. Together, these results show how common feedback processes support both veridical perception and mental imagery.

