Examining the representational content of perirhinal cortex and posterior ventral visual pathway regions when maintenance of visual information is interrupted

Cortex ◽  
2019 ◽  
Vol 121 ◽  
pp. 16-26
Author(s):  
Edward B. O'Neil ◽  
Andy C.H. Lee

2019 ◽  
Vol 31 (6) ◽  
pp. 821-836 ◽  
Author(s):  
Elliot Collins ◽  
Erez Freud ◽  
Jana M. Kainerstorfer ◽  
Jiaming Cao ◽  
Marlene Behrmann

Although shape perception is primarily considered a function of the ventral visual pathway, previous research has shown that both dorsal and ventral pathways represent shape information. Here, we examine whether the shape-selective electrophysiological signals observed in dorsal cortex are a product of the connectivity to ventral cortex or are independently computed. We conducted multiple EEG studies in which we manipulated the input parameters of the stimuli so as to bias processing to either the dorsal or ventral visual pathway. Participants viewed displays of common objects with shape information parametrically degraded across five levels. We measured shape sensitivity by regressing the amplitude of the evoked signal against the degree of stimulus scrambling. Experiment 1, which included grayscale versions of the stimuli, served as a benchmark establishing the temporal pattern of shape processing during typical object perception. These stimuli evoked broad and sustained patterns of shape sensitivity beginning as early as 50 msec after stimulus onset. In Experiments 2 and 3, we calibrated the stimuli such that visual information was delivered primarily through parvocellular inputs, which mainly project to the ventral pathway, or through koniocellular inputs, which mainly project to the dorsal pathway. In both experiments, shape sensitivity was observed, but in spatio-temporal configurations distinct from each other and from those elicited by the grayscale inputs. Of particular interest, in the koniocellular condition, shape selectivity emerged earlier than in the parvocellular condition. These findings support the conclusion that the dorsal pathway computes object shape independently of the ventral pathway.
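The shape-sensitivity measure described above, regressing evoked amplitude against the degree of stimulus scrambling, can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the array shapes, the linear coding of the five scrambling levels, and the toy data are all assumptions.

```python
import numpy as np

def shape_sensitivity(evoked, scramble_levels):
    """Regress evoked amplitude on scrambling level at each timepoint.

    evoked: (n_trials, n_times) array of evoked amplitudes.
    scramble_levels: (n_trials,) array coding the five scrambling levels.
    Returns the least-squares slope per timepoint; a slope reliably
    different from zero indicates shape sensitivity at that latency.
    """
    x = scramble_levels - scramble_levels.mean()
    # Centered-x least-squares slope, vectorized over timepoints.
    return (x @ evoked) / (x @ x)

# Toy data: amplitude decreases linearly with scrambling, plus noise.
rng = np.random.default_rng(0)
levels = np.repeat(np.arange(5.0), 20)          # 5 levels x 20 trials
n_times = 50
signal = -0.5 * levels[:, None] * np.ones((1, n_times))
evoked = signal + rng.normal(0, 0.1, size=(100, n_times))
slopes = shape_sensitivity(evoked, levels)       # ~-0.5 at every timepoint
```

In practice one slope (or regression statistic) per timepoint and sensor yields the spatio-temporal sensitivity maps the abstract refers to.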


2020 ◽  
Author(s):  
Haider Al-Tahan ◽  
Yalda Mohsenzadeh

Abstract
While vision evokes a dense network of feedforward and feedback neural processes in the brain, visual processes are primarily modeled with feedforward hierarchical neural networks, leaving the computational role of feedback processes poorly understood. Here, we developed a generative autoencoder neural network model and adversarially trained it on a categorically diverse data set of images. We hypothesized that the feedback processes in the ventral visual pathway can be represented by reconstruction of the visual information performed by the generative model. We compared representational similarity of the activity patterns in the proposed model with temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) visual brain responses. The proposed generative model identified two segregated neural dynamics in the visual brain: a temporal hierarchy of processes transforming low-level visual information into high-level semantics in the feedforward sweep, and a temporally later set of inverse processes reconstructing low-level visual information from a high-level latent representation in the feedback sweep. Our results add to previous studies on neural feedback processes by offering new insight into the algorithmic function of, and the information carried by, the feedback processes in the ventral visual pathway.
Author summary
It has been shown that the ventral visual cortex consists of a dense network of regions with feedforward and feedback connections. The feedforward path processes visual inputs along a hierarchy of cortical areas that starts in early visual cortex (an area tuned to low-level features, e.g., edges and corners) and ends in inferior temporal cortex (an area that responds to higher-level categorical content, e.g., faces and objects). Conversely, the feedback connections modulate neuronal responses in this hierarchy by broadcasting information from higher to lower areas.
In recent years, deep neural network models trained on object recognition tasks have achieved human-level performance and shown activation patterns similar to those of the visual brain. In this work, we developed a generative neural network model that consists of encoding and decoding sub-networks. By comparing this computational model with temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) human brain response patterns, we found that the encoder processes resemble the brain's feedforward processing dynamics and the decoder shares similarity with the brain's feedback processing dynamics. These results provide an algorithmic insight into the spatiotemporal dynamics of feedforward and feedback processes in biological vision.
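The model-to-brain comparison in this summary rests on representational similarity analysis: each system is reduced to a matrix of pairwise dissimilarities between condition-evoked patterns, and the two geometries are then correlated. A minimal sketch, with toy condition-by-unit matrices as assumptions and a Pearson comparison of the RDM vectors for simplicity (published analyses typically use the rank-based Spearman correlation):

```python
import numpy as np

def rdm(patterns):
    """Condensed representational dissimilarity matrix:
    1 - Pearson correlation for every pair of condition patterns."""
    c = np.corrcoef(patterns)
    iu = np.triu_indices_from(c, k=1)
    return 1.0 - c[iu]

def rsa_score(model_patterns, brain_patterns):
    """Similarity of the model and brain representational geometries,
    here the Pearson correlation of the two RDM vectors."""
    a, b = rdm(model_patterns), rdm(brain_patterns)
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(1)
model = rng.normal(size=(10, 40))                 # 10 conditions x 40 units
brain = model + 0.2 * rng.normal(size=(10, 40))   # noisy "brain" copy
score = rsa_score(model, brain)                   # high: geometries match
```

Computing this score at each MEG timepoint (or fMRI region) against each model layer is what yields the feedforward-versus-feedback dynamics the study reports.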


2018 ◽  
Vol 30 (11) ◽  
pp. 1590-1605 ◽  
Author(s):  
Alex Clarke ◽  
Barry J. Devereux ◽  
Lorraine K. Tyler

Object recognition requires dynamic transformations of low-level visual inputs to complex semantic representations. Although this process depends on the ventral visual pathway, we lack an incremental account from low-level inputs to semantic representations and the mechanistic details of these dynamics. Here we combine computational models of vision with semantics and test the output of the incremental model against patterns of neural oscillations recorded with magnetoencephalography in humans. Representational similarity analysis showed visual information was represented in low-frequency activity throughout the ventral visual pathway, and semantic information was represented in theta activity. Furthermore, directed connectivity showed visual information travels through feedforward connections, whereas visual information is transformed into semantic representations through feedforward and feedback activity, centered on the anterior temporal lobe. Our research highlights that the complex transformations between visual and semantic information are driven by feedforward and recurrent dynamics, resulting in object-specific semantics.
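Oscillatory effects like the theta-band semantic code reported here are typically quantified as band-limited power. A minimal FFT-based sketch, where the sampling rate, duration, and the 6 Hz test signal are illustrative assumptions rather than this study's parameters:

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean spectral power of `signal` within `band` (lo, hi) Hz,
    estimated from the FFT power spectrum."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(power[mask].mean())

fs = 250.0                         # sampling rate in Hz (illustrative)
t = np.arange(0, 2, 1 / fs)       # 2 s of data
sig = np.sin(2 * np.pi * 6 * t)   # a pure 6 Hz "theta" oscillation
theta = band_power(sig, fs, (4, 8))    # strong: signal lives here
alpha = band_power(sig, fs, (8, 13))   # near zero for this signal
```

Real MEG analyses use windowed, trial-averaged time-frequency estimates, but the principle of isolating a frequency band before relating it to stimulus information is the same.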


2018 ◽  
Author(s):  
Alex Clarke ◽  
Barry J. Devereux ◽  
Lorraine K. Tyler

Abstract
Object recognition requires dynamic transformations of low-level visual inputs to complex semantic representations. While this process depends on the ventral visual pathway (VVP), we lack an incremental account from low-level inputs to semantic representations, and the mechanistic details of these dynamics. Here we combine computational models of vision with semantics, and test the output of the incremental model against patterns of neural oscillations recorded with MEG in humans. Representational Similarity Analysis showed visual information was represented in alpha activity throughout the VVP, and semantic information was represented in theta activity. Furthermore, informational connectivity showed visual information travels through feedforward connections, while visual information is transformed into semantic representations through feedforward and feedback activity, centered on the anterior temporal lobe. Our research highlights that the complex transformations between visual and semantic information are driven by feedforward and recurrent dynamics, resulting in object-specific semantics.


2018 ◽  
Author(s):  
Noam Roth ◽  
Nicole C. Rust

Abstract
Searching for a specific visual object requires our brain to compare the items in view with a remembered representation of the sought target to determine whether a target match is present. This comparison is thought to be implemented, in part, via the combination of top-down modulations reflecting target identity with feed-forward visual representations. However, it remains unclear whether top-down signals are integrated at a single locus within the ventral visual pathway (e.g. V4) or at multiple stages (e.g. both V4 and inferotemporal cortex, IT). To investigate, we recorded neural responses in V4 and IT as rhesus monkeys performed a task that required them to identify when a target object appeared across variation in position, size and background context. We found non-visual, task-specific signals in both V4 and IT. To evaluate whether V4 was the only locus for the integration of top-down signals, we evaluated several feed-forward accounts of processing from V4 to IT, including a model in which IT preferentially sampled from the best V4 units and a model that allowed for nonlinear IT computation. IT task-specific modulation was not accounted for by any of these feed-forward descriptions, suggesting that during object search, top-down signals are integrated directly within IT.
NEW & NOTEWORTHY
To find specific objects, the brain must integrate top-down, target-specific signals with visual information about objects in view. However, the exact route of this integration in the ventral visual pathway is unclear. In the first study to systematically compare V4 and IT during an invariant object search task, we demonstrate that top-down signals found in IT cannot be described as being inherited from V4, but rather must be integrated directly within IT itself.


2021 ◽  
Vol 17 (3) ◽  
pp. e1008775
Author(s):  
Haider Al-Tahan ◽  
Yalda Mohsenzadeh

While vision evokes a dense network of feedforward and feedback neural processes in the brain, visual processes are primarily modeled with feedforward hierarchical neural networks, leaving the computational role of feedback processes poorly understood. Here, we developed a generative autoencoder neural network model and adversarially trained it on a categorically diverse data set of images. We hypothesized that the feedback processes in the ventral visual pathway can be represented by reconstruction of the visual information performed by the generative model. We compared representational similarity of the activity patterns in the proposed model with temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) visual brain responses. The proposed generative model identified two segregated neural dynamics in the visual brain: a temporal hierarchy of processes transforming low-level visual information into high-level semantics in the feedforward sweep, and a temporally later set of inverse processes reconstructing low-level visual information from a high-level latent representation in the feedback sweep. Our results add to previous studies on neural feedback processes by offering new insight into the algorithmic function of, and the information carried by, the feedback processes in the ventral visual pathway.


2021 ◽  
Author(s):  
Meike Dorothee Hettwer ◽  
Thomas M. Lancaster ◽  
Eva Raspor ◽  
Peter K. Hahn ◽  
Nina Roth Mota ◽  
...  

Recently, the first genetic variants conferring resilience to schizophrenia have been identified. However, the neurobiological mechanisms underlying their protective effect remain unknown. Current models implicate adaptive neuroplastic changes in the visual system and their pro-cognitive effects in schizophrenia resilience. Here, we test the hypothesis that comparable changes can emerge from schizophrenia resilience genes. To this end, we used structural magnetic resonance imaging to investigate the effects of a schizophrenia polygenic resilience score (PRSResilience) on cortical morphology (discovery sample: n=101; UK Biobank replication sample: n=33,224). We observed positive correlations between PRSResilience and cortical volume in the fusiform gyrus, a central hub within the ventral visual pathway. Our findings indicate that resilience to schizophrenia arises partly from genetically mediated enhancements of visual processing capacities for social and non-social object information. This implies an important role of visual information processing for mitigating schizophrenia risk, which might also be exploitable for early intervention studies.
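Associations between a polygenic score and regional cortical volume, as reported here for the fusiform gyrus, are usually tested after adjusting for covariates such as age, sex, and total intracranial volume. A minimal partial-correlation sketch in which the effect size, the single covariate (age), and the noise level are all invented for illustration:

```python
import numpy as np

def partial_corr(x, y, covars):
    """Correlate x and y after regressing out the covariate columns
    (plus an intercept) from both variables."""
    c = np.column_stack([np.ones(len(x)), covars])
    rx = x - c @ np.linalg.lstsq(c, x, rcond=None)[0]
    ry = y - c @ np.linalg.lstsq(c, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(3)
n = 500
age = rng.normal(size=n)        # hypothetical covariate
prs = rng.normal(size=n)        # hypothetical polygenic resilience score
# Regional volume: a genuine PRS effect, a larger age effect, and noise.
volume = 0.5 * prs - 0.8 * age + rng.normal(size=n)
r = partial_corr(prs, volume, age[:, None])   # positive PRS-volume link
```

Mass-univariate neuroimaging analyses run this kind of adjusted test at every vertex or region and then correct for multiple comparisons.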


2019 ◽  
Vol 122 (6) ◽  
pp. 2522-2540 ◽  
Author(s):  
Noam Roth ◽  
Nicole C. Rust

Searching for a specific visual object requires our brain to compare the items in view with a remembered representation of the sought target to determine whether a target match is present. This comparison is thought to be implemented, in part, via the combination of top-down modulations reflecting target identity with feed-forward visual representations. However, it remains unclear whether top-down signals are integrated at a single locus within the ventral visual pathway (e.g., V4) or at multiple stages [e.g., both V4 and inferotemporal cortex (IT)]. To investigate, we recorded neural responses in V4 and IT as rhesus monkeys performed a task that required them to identify when a target object appeared across variation in position, size, and background context. We found nonvisual, task-specific signals in both V4 and IT. To evaluate whether V4 was the only locus for the integration of top-down signals, we evaluated several feed-forward accounts of processing from V4 to IT, including a model in which IT preferentially sampled from the best V4 units and a model that allowed for nonlinear IT computation. IT task-specific modulation was not accounted for by any of these feed-forward descriptions, suggesting that during object search, top-down signals are integrated directly within IT. NEW & NOTEWORTHY To find specific objects, the brain must integrate top-down, target-specific signals with visual information about objects in view. However, the exact route of this integration in the ventral visual pathway is unclear. In the first study to systematically compare V4 and inferotemporal cortex (IT) during an invariant object search task, we demonstrate that top-down signals found in IT cannot be described as being inherited from V4 but rather must be integrated directly within IT itself.
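The feed-forward accounts tested above amount to asking how much of IT's task-specific modulation a mapping from V4 responses can explain. A toy linear version (the simulated population sizes and signal strengths are assumptions, and the paper also tests selective-sampling and nonlinear variants) shows how a top-down component absent from V4 depresses the fit:

```python
import numpy as np

def linear_feedforward_fit(v4, it):
    """Least-squares linear map from V4 population responses to one IT
    unit's responses; returns predictions and R^2 (variance explained)."""
    w, *_ = np.linalg.lstsq(v4, it, rcond=None)
    pred = v4 @ w
    ss_res = np.sum((it - pred) ** 2)
    ss_tot = np.sum((it - it.mean()) ** 2)
    return pred, 1.0 - ss_res / ss_tot

rng = np.random.default_rng(2)
v4 = rng.normal(size=(200, 20))      # 200 trials x 20 V4 units
w_true = rng.normal(size=20)
it_ff = v4 @ w_true                  # a purely feed-forward IT unit
top_down = rng.normal(size=200)      # task signal absent from V4
it_task = it_ff + 2.0 * top_down     # IT unit with top-down modulation

_, r2_ff = linear_feedforward_fit(v4, it_ff)      # ~1.0
_, r2_task = linear_feedforward_fit(v4, it_task)  # clearly below 1.0
```

Residual task modulation that no V4-based readout can capture is the signature the study uses to argue for integration of top-down signals directly within IT.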


Author(s):  
Shijia Fan ◽  
Xiaosha Wang ◽  
Xiaoying Wang ◽  
Tao Wei ◽  
Yanchao Bi
