Predictive coding of action intentions in dorsal and ventral visual stream is based on visual anticipations, memory-based information and motor preparation

2018 ◽  
Author(s):  
Simona Monaco ◽  
Giulia Malfatti ◽  
Alessandro Zendron ◽  
Elisa Pellencin ◽  
Luca Turella

Abstract
Predictions of upcoming movements are based on several types of neural signals that span the visual, somatosensory, motor and cognitive systems. Thus far, pre-movement signals have been investigated while participants viewed the object to be acted upon. Here, we studied the contribution of information other than vision to the classification of preparatory signals for action, even in the absence of online visual information. We used functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) to test whether the neural signals evoked by visual, memory-based and somato-motor information can be reliably used to predict upcoming actions in areas of the dorsal and ventral visual stream during the preparatory phase preceding the action, while participants were lying still. Nineteen human participants (nine women) performed one of two actions towards an object with their eyes open or closed. Despite the well-known role of ventral stream areas in visual recognition tasks and the specialization of dorsal stream areas in somato-motor processes, we decoded action intention in areas of both streams based on visual, memory-based and somato-motor signals. Interestingly, we could reliably decode action intention in the absence of visual information based on neural activity evoked when visual information was available, and vice versa. Our results show a similar visual, memory and somato-motor representation of action planning in dorsal and ventral visual stream areas that allows action intention to be predicted across domains, regardless of the availability of visual information.
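The cross-condition decoding described in this abstract, training a pattern classifier on trials where visual information was available and testing it on trials where it was not (and vice versa), can be illustrated with a minimal scikit-learn sketch. The simulated voxel patterns, condition offsets, and classifier choice below are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def make_trials(n_trials, n_voxels, offset):
    """Simulate voxel patterns for two actions; 'offset' shifts the whole
    condition while the same action-specific signal is shared across
    conditions, so intention is decodable across them."""
    labels = np.repeat([0, 1], n_trials // 2)
    signal = np.outer(2 * labels - 1, np.linspace(1, 0, n_voxels))
    return signal + offset + rng.normal(0, 0.5, (n_trials, n_voxels)), labels

# "Eyes open" (vision available) and "eyes closed" (memory-based) conditions
X_open, y_open = make_trials(40, 50, offset=0.2)
X_closed, y_closed = make_trials(40, 50, offset=-0.2)

# Cross-condition decoding: train on one condition, test on the other
clf = LinearSVC(dual=False).fit(X_open, y_open)
acc = clf.score(X_closed, y_closed)
print(f"cross-condition accuracy: {acc:.2f}")  # well above chance (0.5)
```

Above-chance transfer in this direction (and the reverse) is the logic behind the claim that the two conditions share a common representation of action intention.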

2019 ◽  
Vol 224 (9) ◽  
pp. 3291-3308

2021 ◽  
Author(s):  
Hayley E Pickering ◽  
Jessica L Peters ◽  
Sheila Crewther

Literature examining the role of visual memory in vocabulary development during childhood is limited, despite it being well known that preverbal infants rely on their visual abilities to form memories and learn new words. Hence, this systematic review and meta-analysis utilised a cognitive neuroscience perspective to examine the association between visual memory and vocabulary development, including moderators such as age and task selection, in neurotypical children aged 2 to 12 years. Visual memory tasks were classified as spatio-temporal span tasks, visuo-perceptual or spatial concurrent array tasks, and executive judgment tasks. Visuo-perceptual concurrent array tasks, expected to rely on ventral visual stream processing, showed a moderate association with vocabulary, while tasks measuring spatio-temporal spans, expected to be associated with dorsal visual stream processing, and executive judgments (central executive) showed only weak correlations with vocabulary. These findings have important implications for health professionals and researchers interested in language, as they can support the development of more targeted language learning interventions that require ventral visual stream processing.


2015 ◽  
Vol 113 (5) ◽  
pp. 1656-1669 ◽  
Author(s):  
Jedediah M. Singer ◽  
Joseph R. Madsen ◽  
William S. Anderson ◽  
Gabriel Kreiman

Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds.


2020 ◽  
Vol 10 (9) ◽  
pp. 602
Author(s):  
Yibo Cui ◽  
Chi Zhang ◽  
Kai Qiao ◽  
Linyuan Wang ◽  
Bin Yan ◽  
...  

Representation invariance plays a significant role in the performance of deep convolutional neural networks (CNNs) and in human visual information processing across a range of complex image-based tasks. However, considerable confusion remains about the representation invariance mechanisms of these two sophisticated systems. To investigate their relationship under common conditions, we proposed a representation invariance analysis approach based on data augmentation. First, the original image library was expanded by data augmentation. The representation invariance of CNNs and of the ventral visual stream was then studied by comparing, before and after data augmentation, the similarity of corresponding layer features in the CNNs and the prediction performance of visual encoding models based on functional magnetic resonance imaging (fMRI). Our experimental results suggest that the architecture of CNNs, the combination of convolutional and fully connected layers, gives rise to their representation invariance. Remarkably, we found that representation invariance is present at all successive stages of the ventral visual stream. These results reveal an internal correlation between CNNs and the human visual system with respect to representation invariance. Our study advances invariant representation in computer vision and deepens understanding of the representation invariance mechanisms of human visual information processing.
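The core logic of this comparison, that representations at later processing stages should change less than early ones when the input is augmented (here, translated), can be sketched with a toy example. The "early layer" (raw pixels) and "late layer" (spatial pooling) proxies below are illustrative assumptions, not the CNNs or fMRI encoding models used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def cosine(a, b):
    """Cosine similarity between two flattened feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A toy image and a translated ("augmented") copy of it
img = rng.random((16, 16))
shifted = np.roll(img, shift=3, axis=1)

def pool(x, k=8):
    """Late-layer proxy: average over k-by-k blocks, which discards
    precise position information and so is more translation-invariant."""
    h, w = x.shape
    return x.reshape(h // k, k, w // k, k).mean(axis=(1, 3)).ravel()

early_sim = cosine(img.ravel(), shifted.ravel())   # raw-pixel similarity
late_sim = cosine(pool(img), pool(shifted))        # pooled-feature similarity
print(early_sim, late_sim)  # pooled features are far more similar
```

In the study's terms, a layer whose features remain similar across the original and augmented images exhibits representation invariance; here the pooled stage shows it while the pixel stage does not.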


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
E. Cleeren ◽  
I. D. Popivanov ◽  
W. Van Paesschen ◽  
Peter Janssen

Abstract
Visual information reaches the amygdala through the various stages of the ventral visual stream. There is, however, evidence that a fast subcortical pathway for the processing of emotional visual input exists. To explore the presence of this pathway in primates, we recorded local field potentials in the amygdala of four rhesus monkeys during a passive fixation task showing images of ten object categories. Additionally, in one of the monkeys we also obtained multi-unit spiking activity during the same task. We observed remarkably fast medium and high gamma responses in the amygdala of the four monkeys. These responses were selective for the different stimulus categories, showed within-category selectivity, and peaked as early as 60 ms after stimulus onset. Multi-unit responses in the amygdala lagged the gamma responses by about 40 ms. Thus, these observations add further evidence that selective visual information reaches the amygdala of nonhuman primates through a very fast route.


2018 ◽  
Vol 120 (3) ◽  
pp. 926-941 ◽  
Author(s):  
Dzmitry A. Kaliukhovich ◽  
Hans Op de Beeck

Similar to primates, visual cortex in rodents appears to be organized in two distinct hierarchical streams. However, little is known about how visual information is processed along those streams in rodents. In this study, we examined how repetition suppression and position and clutter tolerance of the neuronal representations evolve along the putative ventral visual stream in rats. To address this question, we recorded multiunit spiking activity in primary visual cortex (V1) and the more downstream visual laterointermediate (LI) area of head-restrained Long-Evans rats. We employed a paradigm reminiscent of the continuous carry-over design used in human neuroimaging. In both areas, stimulus repetition attenuated the early phase of the neuronal response to the repeated stimulus, with this response suppression being greater in area LI. Furthermore, stimulus preferences were more similar across positions (position tolerance) in area LI than in V1, even though the absolute responses in both areas were very sensitive to changes in position. In contrast, the neuronal representations in both areas were equally good at tolerating the presence of limited visual clutter, as modeled by the presentation of a single flank stimulus. When probing tolerance of the neuronal representations with stimulus-specific adaptation, we detected no position tolerance in either examined brain area, whereas, on the contrary, we revealed clutter tolerance in both areas. Overall, our data demonstrate similarities and discrepancies in processing of visual information along the ventral visual stream of rodents and primates. Moreover, our results stress caution in using neuronal adaptation to probe tolerance of the neuronal representations.

NEW & NOTEWORTHY Rodents are emerging as a popular animal model that complements primates for studying higher level visual functions. Similar to findings in primates, we demonstrate a greater repetition suppression and position tolerance of the neuronal representations in the downstream laterointermediate area of Long-Evans rats compared with primary visual cortex. However, we report no difference in the degree of clutter tolerance between the areas. These findings provide additional evidence for hierarchical processing of visual stimuli in rodents.


2018 ◽  
Author(s):  
Simona Monaco ◽  
Ying Chen ◽  
Nicholas Menghi ◽  
J Douglas Crawford

Abstract
Sensorimotor integration involves feedforward and reentrant processing of sensory input. Grasp-related motor activity precedes and is thought to influence visual object processing. Yet, while the importance of reentrant feedback is well established in perception, the top-down modulations for action and the neural circuits involved in this process have received less attention. Do action-specific intentions influence the processing of visual information in the human cortex? Using a cue-separation fMRI paradigm, we found that action-specific instruction (manual alignment vs. grasp) influences the cortical processing of object orientation several seconds after the object had been viewed. This influence occurred as early as in the primary visual cortex and extended to ventral and dorsal visual stream areas. Importantly, this modulation was unrelated to non-specific action planning. Further, the primary visual cortex showed stronger functional connectivity with frontal-parietal areas and the inferior temporal cortex during the delay following orientation processing for align than for grasp movements, strengthening the idea of reentrant feedback from dorsal visual stream areas involved in action. To our knowledge, this is the first demonstration that intended manual actions have such an early, pervasive, and differential influence on the cortical processing of vision.


2010 ◽  
Vol 22 (11) ◽  
pp. 2460-2479 ◽  
Author(s):  
Rosemary A. Cowell ◽  
Timothy J. Bussey ◽  
Lisa M. Saksida

We examined the organization and function of the ventral object processing pathway. The prevailing theoretical approach in this field holds that the ventral object processing stream has a modular organization, in which visual perception is carried out in posterior regions and visual memory is carried out, independently, in the anterior temporal lobe. In contrast, recent work has argued against this modular framework, favoring instead a continuous, hierarchical account of cognitive processing in these regions. We join the latter group and illustrate our view with simulations from a computational model that extends the perceptual-mnemonic feature-conjunction model of visual discrimination proposed by Bussey and Saksida [Bussey, T. J., & Saksida, L. M. The organization of visual object representations: A connectionist model of effects of lesions in perirhinal cortex. European Journal of Neuroscience, 15, 355–364, 2002]. We use the extended model to revisit early data from Iwai and Mishkin [Iwai, E., & Mishkin, M. Two visual foci in the temporal lobe of monkeys. In N. Yoshii & N. Buchwald (Eds.), Neurophysiological basis of learning and behavior (pp. 1–11). Japan: Osaka University Press, 1968]; this seminal study was interpreted as evidence for the modularity of visual perception and visual memory. The model accounts for a double dissociation in monkeys' visual discrimination performance following lesions to different regions of the ventral visual stream. This double dissociation is frequently cited as evidence for separate systems for perception and memory. However, the model provides a parsimonious, mechanistic, single-system account of the double dissociation data. We propose that the effects of lesions in ventral visual stream on visual discrimination are due to compromised representations within a hierarchical representational continuum rather than impairment in a specific type of learning, memory, or perception. 
We argue that consideration of the nature of stimulus representations and their processing in cortex is a more fruitful approach than attempting to map cognition onto functional modules.


2014 ◽  
Vol 111 (1) ◽  
pp. 91-102 ◽  
Author(s):  
Leyla Isik ◽  
Ethan M. Meyers ◽  
Joel Z. Leibo ◽  
Tomaso Poggio

The human visual system can rapidly recognize objects despite transformations that alter their appearance. The precise timing of when the brain computes neural representations that are invariant to particular transformations, however, has not been mapped in humans. Here we employ magnetoencephalography decoding analysis to measure the dynamics of size- and position-invariant visual information development in the ventral visual stream. With this method we can read out the identity of objects beginning as early as 60 ms. Size- and position-invariant visual information appear around 125 ms and 150 ms, respectively, and both develop in stages, with invariance to smaller transformations arising before invariance to larger transformations. Additionally, the magnetoencephalography sensor activity localizes to neural sources that are in the most posterior occipital regions at the early decoding times and then move temporally as invariant information develops. These results provide previously unknown latencies for key stages of invariant object recognition in humans, as well as new and compelling evidence for a feed-forward hierarchical model of invariant object recognition where invariance increases at each successive visual area along the ventral stream.
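The time-resolved decoding approach described here, training a separate classifier at each time bin to track when discriminative information emerges, can be sketched as follows. The simulated sensor data, signal onset, and classifier are illustrative assumptions, not the authors' MEG pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

n_trials, n_sensors, n_times = 60, 30, 20  # e.g. 20 bins of sensor data
labels = np.repeat([0, 1], n_trials // 2)  # two object identities

# Simulated sensor data: a class-specific spatial pattern appears at bin 8,
# standing in for the latency at which identity information emerges
X = rng.normal(0, 1, (n_trials, n_sensors, n_times))
onset = 8
pattern = rng.normal(0, 1, n_sensors)
X[:, :, onset:] += np.outer(2 * labels - 1, pattern)[:, :, None]

# Decode object identity separately at each time bin (cross-validated)
accs = [cross_val_score(LogisticRegression(max_iter=1000),
                        X[:, :, t], labels, cv=5).mean()
        for t in range(n_times)]
# Accuracy hovers at chance before 'onset' and rises above it afterwards;
# the first above-chance bin estimates the decoding latency
```

The latencies reported in the abstract (60 ms for identity, 125 ms and 150 ms for size and position invariance) correspond to the first time bins at which such classifiers exceed chance for the respective read-outs.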


2019 ◽  
Author(s):  
Sushrut Thorat

A mediolateral gradation in neural responses for images spanning animals to artificial objects is observed in the ventral temporal cortex (VTC). Which information streams drive this organisation is an ongoing debate. Recently, in Proklova et al. (2016), the visual shape and category ("animacy") dimensions in a set of stimuli were dissociated using a behavioural measure of visual feature information. fMRI responses revealed a neural cluster (extra-visual animacy cluster - xVAC) which encoded category information unexplained by visual feature information, suggesting extra-visual contributions to the organisation in the ventral visual stream. We reassess these findings using Convolutional Neural Networks (CNNs) as models for the ventral visual stream. The visual features developed in the CNN layers can categorise the shape-matched stimuli from Proklova et al. (2016), in contrast to the behavioural measures used in the study. The category organisations in xVAC and VTC are explained to a large degree by the CNN visual feature differences, casting doubt on the suggestion that visual feature differences cannot account for the animacy organisation. To inform the debate further, we designed a set of stimuli with animal images to dissociate the animacy organisation driven by the CNN visual features from the degree of familiarity and agency (thoughtfulness and feelings). Preliminary results from a new fMRI experiment designed to understand the contribution of these non-visual features are presented.

