Action intention modulates the representation of object features in early visual cortex

2018
Author(s):  
Jena Velji-Ibrahim ◽  
J. Douglas Crawford ◽  
Luigi Cattaneo ◽  
Simona Monaco

Abstract The role of the early visual cortex (EVC) has been extensively studied for visual recognition, but far less is known about how action planning influences perceptual representations of objects. We used functional MRI and pattern classification methods to determine whether, during action planning, object features (orientation and location) could be decoded in an action-dependent way and, if so, whether this was due to functional connectivity between visual and higher-level cortical areas. Sixteen participants used their right dominant hand to perform movements (Align or Open Hand) towards one of two oriented objects that were simultaneously presented and placed on either side of a fixation cross. While both movements required aiming toward the target location, only Align movements required participants to precisely adjust hand orientation. Therefore, we hypothesized that if the representation of object features in the EVC is modulated by the upcoming action, we could use the pre-movement activity pattern to dissociate between object locations in both tasks, and between orientations in the Align task only. We found above-chance decoding accuracy between the two objects for both tasks in the calcarine sulcus, corresponding to the peripheral location of the objects in the visual cortex, suggesting a task-independent (i.e. location) modulation. In contrast, we found significant decoding accuracy between the two objects for Align but not Open Hand movements in the occipital pole, corresponding to central vision, and in dorsal stream areas, suggesting a task-dependent (i.e. orientation) modulation. Psychophysiological interaction analysis indicated stronger functional connectivity during the planning phase of Align than Open Hand movements between the EVC and sensory-motor areas in the dorsal and ventral visual streams, as well as areas that lie at the interface between the two streams.
These results demonstrate that task-specific preparatory signals modulate activity not only in areas typically known to be involved in perception for action, but also in the EVC. Further, our findings suggest that object features relevant for successful action performance are represented in the part of the visual cortex best suited to process visual features in great detail, namely the foveal cortex, even when the objects are viewed in the periphery.

2019
Vol 29 (11)
pp. 4662-4678
Author(s):  
Jason P Gallivan ◽  
Craig S Chapman ◽  
Daniel J Gale ◽  
J Randall Flanagan ◽  
Jody C Culham

Abstract The primate visual system contains myriad feedback projections from higher- to lower-order cortical areas, an architecture that has been implicated in the top-down modulation of early visual areas during working memory and attention. Here we tested the hypothesis that these feedback projections also modulate early visual cortical activity during the planning of visually guided actions. We show, across three separate human functional magnetic resonance imaging (fMRI) studies involving object-directed movements, that information related to the motor effector to be used (i.e., limb, eye) and the action goal to be performed (i.e., grasp, reach) can be selectively decoded—prior to movement—from the retinotopic representation of the target object(s) in early visual cortex. We also find that during the planning of sequential actions involving objects in two different spatial locations, motor-related information can be decoded from both locations in retinotopic cortex. Together, these findings indicate that movement planning selectively modulates early visual cortical activity patterns in an effector-specific, target-centric, and task-dependent manner. They offer a neural account of how motor-relevant target features are enhanced during action planning and suggest a possible role for early visual cortex in instituting a sensorimotor estimate of the visual consequences of movement.


2020
Author(s):  
Ke Bo ◽  
Siyang Yin ◽  
Yuelu Liu ◽  
Zhenhong Hu ◽  
Sreenivasan Meyyapan ◽  
...  

Abstract The perception of opportunities and threats in complex scenes is one of the main functions of the human visual system. In the laboratory, its neurophysiological basis is often studied by having observers view pictures varying in affective content. This body of work has consistently shown that viewing emotionally engaging, compared to neutral, pictures (1) heightens blood flow in limbic structures and frontoparietal cortex, as well as in anterior ventral and dorsal visual cortex, and (2) prompts an increase in the late positive event-related potential (LPP), a scalp-recorded and time-sensitive index of engagement within the network of the aforementioned neural structures. The role of retinotopic visual cortex in this process has, however, been contentious, with competing theoretical notions predicting the presence versus absence of emotion-specific signals in retinotopic visual areas. The present study used multimodal neuroimaging and machine learning to address this question by examining the large-scale neural representations of affective pictures. Recording EEG and fMRI simultaneously while observers viewed pleasant, unpleasant, and neutral affective pictures, and applying multivariate pattern analysis to single-trial BOLD activity in retinotopic visual cortex, we identified three robust findings. First, both unpleasant-versus-neutral and pleasant-versus-neutral decoding accuracies were well above chance level in all retinotopic visual areas, including primary visual cortex. Second, decoding accuracy in ventral visual cortex, but not in early or dorsal visual cortex, was significantly correlated with LPP amplitude. Third, effective connectivity from the amygdala to ventral visual cortex predicted unpleasant-versus-neutral decoding accuracy, and effective connectivity from ventral frontal cortex to ventral visual cortex predicted pleasant-versus-neutral decoding accuracy.
These results suggest that affective pictures evoked valence-specific multivoxel neural representations in retinotopic visual cortex and that these representations were influenced by reentrant signals from limbic and frontal brain regions.


2015
Vol 15 (11)
pp. 7
Author(s):  
Pankhuri Malik ◽  
Joost C. Dessing ◽  
J. Douglas Crawford

2020
Vol 124 (5)
pp. 1343-1363
Author(s):  
DoHyun Kim ◽  
Tomer Livne ◽  
Nicholas V. Metcalf ◽  
Maurizio Corbetta ◽  
Gordon L. Shulman

Spontaneous brain activity was once thought to reflect only noise, but evidence of strong spatiotemporal regularities has motivated a search for functional explanations. Here we show that the spatial pattern of spontaneous activity in human high-level and early visual cortex is related to the spatial patterns evoked by stimuli. Moreover, these patterns partly govern spontaneous spatiotemporal interactions between regions, so-called functional connectivity. These results support the hypothesis that spontaneous activity serves a representational function.


2015
Vol 113 (5)
pp. 1453-1458
Author(s):  
Edmund Chong ◽  
Ariana M. Familiar ◽  
Won Mok Shim

As raw sensory data are partial, our visual system extensively fills in missing details, creating enriched percepts based on incomplete bottom-up information. Despite evidence for internally generated representations at early stages of cortical processing, it is not known whether these representations include missing information of dynamically transforming objects. Long-range apparent motion (AM) provides a unique test case because objects in AM can undergo changes both in position and in features. Using fMRI and encoding methods, we found that the “intermediate” orientation of an apparently rotating grating, never presented in the retinal input but interpolated during AM, is reconstructed in population-level, feature-selective tuning responses in the region of early visual cortex (V1) that corresponds to the retinotopic location of the AM path. This neural representation is absent when AM inducers are presented simultaneously and when AM is visually imagined. Our results demonstrate dynamic filling-in in V1 for object features that are interpolated during kinetic transformations.


NeuroImage
2020
Vol 218
pp. 116981
Author(s):  
Simona Monaco ◽  
Giulia Malfatti ◽  
Jody C. Culham ◽  
Luigi Cattaneo ◽  
Luca Turella

2016
Vol 16 (12)
pp. 133
Author(s):  
Leor Roseman ◽  
Martin Sereno ◽  
Robert Leech ◽  
Mendel Kaelen ◽  
Csaba Orban ◽  
...  

2017
Vol 7 (1)
Author(s):  
Norman Sabbah ◽  
Nicolae Sanda ◽  
Colas N. Authié ◽  
Saddek Mohand-Saïd ◽  
José-Alain Sahel ◽  
...  

2009
Vol 47 (12)
pp. 2480-2487
Author(s):  
Eswar Damaraju ◽  
Yang-Ming Huang ◽  
Lisa Feldman Barrett ◽  
Luiz Pessoa

2016
Vol 37 (8)
pp. 3031-3040
Author(s):  
Leor Roseman ◽  
Martin I. Sereno ◽  
Robert Leech ◽  
Mendel Kaelen ◽  
Csaba Orban ◽  
...  
