Cortical regions involved in the observation of bimanual actions

2012 ◽  
Vol 108 (9) ◽  
pp. 2594-2611 ◽  
Author(s):  
Marcus H. Heitger ◽  
Marc J.-M. Macé ◽  
Jan Jastorff ◽  
Stephan P. Swinnen ◽  
Guy A. Orban

Although we are beginning to understand how observed actions performed by conspecifics with a single hand are processed and how bimanual actions are controlled by the motor system, we know very little about the processing of observed bimanual actions. We used fMRI to compare the observation of bimanual manipulative actions with their unimanual components, relative to visual control conditions equalized for visual motion. Bimanual action observation did not activate any region specialized for processing visual signals related to this more elaborated action. On the contrary, observation of bimanual and unimanual actions activated similar occipito-temporal, parietal and premotor networks. However, whole-brain as well as region of interest (ROI) analyses revealed that this network functions differently under bimanual and unimanual conditions. Indeed, in bimanual conditions, activity in the network was overall more bilateral, especially in parietal cortex. In addition, ROI analyses indicated bilateral parietal activation patterns across hand conditions distinctly different from those at other levels of the action-observation network. These activation patterns suggest that while occipito-temporal and premotor levels are involved with processing the kinematics of the observed actions, the parietal cortex is more involved in the processing of static, postural aspects of the observed action. This study adds bimanual cooperation to the growing list of distinctions between parietal and premotor cortex regarding factors affecting visual processing of observed actions.

Author(s):  
Davide Albertini ◽  
Marco Lanzilotto ◽  
Monica Maranesi ◽  
Luca Bonini

The neural processing of others' observed actions recruits a large network of brain regions (the action observation network, AON), in which frontal motor areas are thought to play a crucial role. Since the discovery of mirror neurons (MNs) in the ventral premotor cortex, it has been assumed that their activation was conditional upon the presentation of biological rather than nonbiological motion stimuli, supporting a form of direct visuomotor matching. Nonetheless, nonbiological observed movements have rarely been used as control stimuli to evaluate visual specificity, thereby leaving the issue of similarity among neural codes for executed actions and biological or nonbiological observed movements unresolved. Here, we addressed this issue by recording from two nodes of the AON that are attracting increasing interest, namely the ventro-rostral part of the dorsal premotor area F2 and the mesial pre-supplementary motor area F6, of macaques while they 1) executed a reaching-grasping task, 2) observed an experimenter performing the task, and 3) observed a nonbiological effector moving in the same context. Our findings revealed stronger neuronal responses to the observation of biological than nonbiological movement, but biological and nonbiological visual stimuli produced highly similar neural dynamics and relied on largely shared neural codes, which in turn differed markedly from those associated with executed actions. These results indicate that, in highly familiar contexts, visuomotor remapping processes in premotor areas hosting MNs are more complex and flexible than predicted by a direct visuomotor matching hypothesis.


2016 ◽  
Vol 116 (4) ◽  
pp. 1885-1899 ◽  
Author(s):  
Tobias Heed ◽  
Frank T. M. Leone ◽  
Ivan Toni ◽  
W. Pieter Medendorp

It has been proposed that the posterior parietal cortex (PPC) is characterized by an effector-specific organization. However, strikingly similar functional MRI (fMRI) activation patterns have been found in the PPC for hand and foot movements. Because the fMRI signal is related to average neuronal activity, similar activation levels may result either from effector-unspecific neurons or from intermingled subsets of effector-specific neurons within a voxel. We distinguished between these possibilities using fMRI repetition suppression (RS). Participants made delayed, goal-directed eye, hand, and foot movements to visual targets. In each trial, the instructed effector was either identical to or different from that of the previous trial. RS effects indicated an attenuation of the fMRI signal in repeat trials. The caudal PPC was active during the delay but did not show RS, suggesting that its planning activity was effector independent. Hand- and foot-specific RS effects were evident in the anterior superior parietal lobule (SPL), extending to the premotor cortex, with limb overlap in the anterior SPL. Connectivity analysis suggested information flow from the caudal PPC to limb-specific anterior SPL regions and from the limb-unspecific anterior SPL toward limb-specific motor regions. These results underline that both function and effector specificity should be integrated into a concept of PPC action representation, not only at a regional but also at a fine-grained, subvoxel level.
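
The logic of the repetition-suppression contrast can be sketched with toy numbers (a minimal illustration with made-up beta values, not the authors' actual GLM pipeline): RS appears as an attenuation of the response when the effector repeats across trials.

```python
import numpy as np

def repetition_suppression_index(novel_betas, repeat_betas):
    """RS contrast: mean response on novel-effector trials minus mean
    response on repeat trials; positive values indicate suppression."""
    return np.mean(novel_betas) - np.mean(repeat_betas)

# Hypothetical single-voxel betas (arbitrary units)
novel = [1.2, 1.1, 1.3, 1.0]    # effector differs from the previous trial
repeat = [0.8, 0.7, 0.9, 0.8]   # effector repeats
rs = repetition_suppression_index(novel, repeat)   # 1.15 - 0.80 = 0.35
```

A region whose planning activity is effector independent (like the caudal PPC here) would show an index near zero despite being active in both trial types.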


2019 ◽  
Author(s):  
Burcu A. Urgen ◽  
Ayse P. Saygin

Abstract
Visual perception of actions is supported by a network of brain regions in the occipito-temporal, parietal, and premotor cortex in the primate brain, known as the Action Observation Network (AON). Although there is a growing body of research that characterizes the functional properties of each node of this network, the communication and direction of information flow between the nodes is unclear. According to the predictive coding account of action perception, this network is not a purely feedforward system but has feedback connections through which prediction error signals are communicated between the regions of the AON. In the present study, we investigated the effective connectivity of the AON in an experimental setting where the human subjects' predictions about the observed agent were violated, using fMRI and Dynamic Causal Modeling (DCM). We specifically examined the influence of the lowest and highest nodes in the AON hierarchy, pSTS and ventral premotor cortex, respectively, on the middle node, the inferior parietal cortex, during prediction violation. Our DCM results suggest that the influence on the inferior parietal node is exerted through a feedback connection from ventral premotor cortex during perception of actions that violate people's predictions.
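
At the core of DCM is a bilinear neural state equation in which experimental inputs can modulate specific connections. The sketch below Euler-integrates only the neural states for a three-node network labeled after the regions above; it omits the hemodynamic forward model and the Bayesian estimation and model comparison that DCM actually performs, and all parameter values are illustrative.

```python
import numpy as np

def simulate_dcm(A, B, C, u, dt=0.1, n_steps=100):
    """Euler-integrate the bilinear DCM neural state equation
    dx/dt = (A + u(t) * B) x + C u(t) for one modulatory input."""
    x = np.zeros(A.shape[0])
    traj = []
    for t in range(n_steps):
        x = x + dt * ((A + u[t] * B) @ x + C * u[t])
        traj.append(x.copy())
    return np.array(traj)

# Three nodes: 0 = pSTS, 1 = inferior parietal cortex, 2 = ventral premotor
A = np.array([[-1.0, 0.0, 0.0],    # intrinsic connections: self-decay plus
              [ 0.5, -1.0, 0.0],   # feedforward pSTS -> parietal -> premotor
              [ 0.0,  0.5, -1.0]])
B = np.zeros((3, 3))
B[1, 2] = 0.8                      # candidate modulation: prediction violation
                                   # strengthens premotor -> parietal feedback
C = np.array([1.0, 0.0, 0.0])      # driving input enters at pSTS
u = np.ones(100)                   # boxcar input

traj_fb = simulate_dcm(A, B, C, u)                    # with feedback modulation
traj_no_fb = simulate_dcm(A, np.zeros((3, 3)), C, u)  # without it
```

In this toy model the feedback connection raises steady-state parietal activity relative to the feedforward-only variant; DCM proper would score such candidate models against measured BOLD data by model evidence.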


1998 ◽  
Vol 80 (5) ◽  
pp. 2657-2670 ◽  
Author(s):  
Jody C. Culham ◽  
Stephan A. Brandt ◽  
Patrick Cavanagh ◽  
Nancy G. Kanwisher ◽  
Anders M. Dale ◽  
...  

Culham, Jody C., Stephan A. Brandt, Patrick Cavanagh, Nancy G. Kanwisher, Anders M. Dale, and Roger B. H. Tootell. Cortical fMRI activation produced by attentive tracking of moving targets. J. Neurophysiol. 80: 2657–2670, 1998. Attention can be used to keep track of moving items, particularly when there are multiple targets of interest that cannot all be followed with eye movements. Functional magnetic resonance imaging (fMRI) was used to investigate cortical regions involved in attentive tracking. Cortical flattening techniques facilitated within-subject comparisons of activation produced by attentive tracking, visual motion, discrete attention shifts, and eye movements. In the main task, subjects viewed a display of nine green “bouncing balls” and used attention to mentally track a subset of them while fixating. At the start of each attentive-tracking condition, several target balls (e.g., 3/9) turned red for 2 s and then reverted to green. Subjects then used attention to keep track of the previously indicated targets, which were otherwise indistinguishable from the nontargets. Attentive-tracking conditions alternated with passive viewing of the same display when no targets had been indicated. Subjects were pretested with an eye-movement monitor to ensure they could perform the task accurately while fixating. For seven subjects, functional activation was superimposed on each individual's cortically unfolded surface. Comparisons between attentive tracking and passive viewing revealed bilateral activation in parietal cortex (intraparietal sulcus, postcentral sulcus, superior parietal lobule, and precuneus), frontal cortex (frontal eye fields and precentral sulcus), and the MT complex (including motion-selective areas MT and MST). Attentional enhancement was absent in early visual areas and weak in the MT complex. However, in parietal and frontal areas, the signal change produced by the moving stimuli was more than doubled when items were tracked attentively. 
Comparisons between attentive tracking and attention shifting revealed essentially identical activation patterns that differed only in the magnitude of activation. This suggests that parietal cortex is involved not only in discrete shifts of attention between objects at different spatial locations but also in continuous “attentional pursuit” of moving objects. Attentive-tracking activation patterns were also similar, though not identical, to those produced by eye movements. Taken together, these results suggest that attentive tracking is mediated by a network of areas that includes parietal and frontal regions responsible for attention shifts and eye movements and the MT complex, thought to be responsible for motion perception. These results are consistent with theoretical models of attentive tracking as an attentional process that assigns spatial tags to targets and registers changes in their position, generating a high-level percept of apparent motion.


2021 ◽  
Author(s):  
Burcu A. Urgen ◽  
Guy A. Orban

Abstract
Action observation is supported by a network of regions in occipito-temporal, parietal, and premotor cortex in primates. Recent research suggests that the parietal node has regions dedicated to different action classes, including manipulation, interpersonal, skin-displacing, locomotion, and climbing actions. The goals of the current study were: 1) to extend this work to new classes of actions that are communicative and specific to humans, and 2) to investigate how parietal cortex differs from occipito-temporal and premotor cortex in representing action classes. Human subjects underwent fMRI scanning while observing three action classes: indirect communication, direct communication, and manipulation, plus two types of control stimuli: static controls, which were static frames from the video clips, and dynamic controls, consisting of temporally scrambled optic flow information. Using univariate analysis, MVPA, and representational similarity analysis, our study presents several novel findings. First, we provide further evidence for the anatomical segregation of different action classes in parietal cortex: we found a new region specific for representing human-specific indirect communicative actions in cytoarchitectonic parietal area PFt. Second, we found that the discriminability between action classes was higher in parietal cortex than at the other two levels, suggesting the coding of action identity information at this level. Finally, our results advocate the use of control stimuli not just in univariate analysis of complex action videos but also in multivariate analyses.
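
A discriminability comparison of this kind is often run as representational similarity analysis: the neural representational dissimilarity matrix (RDM) over conditions is correlated with a model RDM encoding class membership. A minimal sketch on simulated voxel patterns (toy data, not the study's stimuli or ROIs):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Toy voxel patterns: 3 action classes x 2 exemplars each, 50 voxels
base = rng.normal(size=(3, 50))
patterns = np.repeat(base, 2, axis=0) + 0.1 * rng.normal(size=(6, 50))

# Neural RDM: 1 - Pearson correlation between condition patterns (condensed)
neural_rdm = pdist(patterns, metric="correlation")

# Model RDM: 0 for same-class pairs, 1 for different-class pairs
labels = np.repeat(np.arange(3), 2)
model_rdm = squareform((labels[:, None] != labels[None, :]).astype(float))

# Discriminability as model fit: rank correlation between the two RDMs
rho, _ = spearmanr(neural_rdm, model_rdm)
```

A region whose patterns separate the classes well yields a high rank correlation; comparing this statistic across occipito-temporal, parietal, and premotor ROIs is one way to localize where class identity is coded.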


2017 ◽  
Vol 29 (3) ◽  
pp. 448-466 ◽  
Author(s):  
Avril Treille ◽  
Coriandre Vilain ◽  
Thomas Hueber ◽  
Laurent Lamalle ◽  
Marc Sato

Action recognition has been found to rely not only on sensory brain areas but also partly on the observer's motor system. However, whether distinct auditory and visual experiences of an action modulate sensorimotor activity remains largely unknown. In the present sparse-sampling fMRI study, we determined to what extent sensory and motor representations interact during the perception of tongue and lip speech actions. Tongue and lip speech actions were selected because tongue movements of our interlocutor are accessible via their impact on speech acoustics but are not visible because of the tongue's position inside the vocal tract, whereas lip movements are both “audible” and visible. Participants were presented with auditory, visual, and audiovisual speech actions, with the visual inputs related to either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, previously recorded by an ultrasound imaging system and a video camera. Although the neural networks involved in visuolingual and visuofacial perception largely overlapped, stronger motor and somatosensory activations were observed during visuolingual perception. In contrast, stronger activity was found in auditory and visual cortices during visuofacial perception. Complementing these findings, activity in the left premotor cortex and in visual brain areas was found to correlate with visual recognition scores observed for visuolingual and visuofacial speech stimuli, respectively, whereas visual activity correlated with reaction times for both stimuli. These results suggest that unimodal and multimodal processing of lip and tongue speech actions rely on common sensorimotor brain areas. They also suggest that visual processing of audible but not visible movements induces motor and visual mental simulation of the perceived actions to facilitate recognition and/or to learn the association between auditory and visual signals.


2000 ◽  
Vol 12 (1) ◽  
pp. 56-77 ◽  
Author(s):  
Deborah L. Harrington ◽  
Stephen M. Rao ◽  
Kathleen Y. Haaland ◽  
Julie A. Bobholz ◽  
Andrew R. Mayer ◽  
...  

The ease by which movements are combined into skilled actions depends on many factors, including the complexity of movement sequences. Complexity can be defined by the surface structure of a sequence, including motoric properties such as the types of effectors, and by the abstract or sequence-specific structure, which is apparent in the relations amongst movements, such as repetitions. It is not known whether different neural systems support the cognitive and the sensorimotor processes underlying different structural properties of sequential actions. We investigated this question using whole-brain functional magnetic resonance imaging (fMRI) in healthy adults as they performed sequences of five key presses involving up to three fingers. The structure of sequences was defined by two factors that independently lengthen the time to plan sequences before movement: the number of different fingers (1-3; surface structure) and the number of finger transitions (0-4; sequence-specific structure). The results showed that systems involved in visual processing (extrastriate cortex) and the preparation of sensory aspects of movement (rostral inferior parietal and ventral premotor cortex (PMv)) correlated with both properties of sequence structure. The number of different fingers positively correlated with activation intensity in the cerebellum and superior parietal cortex (anterior), systems associated with sensorimotor and kinematic representations of movement, respectively. The number of finger transitions correlated with activation in systems previously associated with sequence-specific processing, including the inferior parietal and the dorsal premotor cortex (PMd), and in interconnecting superior temporal-middle frontal gyrus networks. 
Different patterns of activation in the left and right inferior parietal cortex were associated with different sequences, consistent with the speculation that sequences are encoded using different mnemonics, depending on the sequence-specific structure. In contrast, PMd activation correlated positively with increases in the number of transitions, consistent with the role of this area in the retrieval or preparation of abstract action plans. These findings suggest that the surface and the sequence-specific structure of sequential movements can be distinguished by distinct distributed systems that support their underlying mental operations.
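
The two structural factors can be computed directly from a sequence of finger labels; the sketch below simply restates the definitions above on hypothetical five-press sequences.

```python
def sequence_complexity(fingers):
    """Surface structure: how many distinct fingers a sequence uses.
    Sequence-specific structure: how many adjacent presses switch fingers."""
    n_fingers = len(set(fingers))
    n_transitions = sum(a != b for a, b in zip(fingers, fingers[1:]))
    return n_fingers, n_transitions

# Hypothetical five-press sequences with fingers labeled 1-3
sequence_complexity([1, 1, 1, 1, 1])   # 1 finger, 0 transitions
sequence_complexity([1, 2, 3, 1, 2])   # 3 fingers, 4 transitions
```

Because the two counts can be varied independently across sequences, their neural correlates can be dissociated in the fMRI regression, as the study does.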


2021 ◽  
Author(s):  
Minye Zhan ◽  
Rainer Wilhelm Goebel ◽  
Beatrice de Gelder

How we subjectively generate an understanding of other people's bodily actions and emotions is not well understood. In this 7T fMRI study, we examined the representational geometry of bodily action and emotion understanding by mapping individual subjective reports with word embeddings, in addition to conventional univariate/multivariate analyses with predefined categories. Dimensionality reduction revealed that the representations of perceived action and emotion were high-dimensional; each correlated with, but was not reducible to, the predefined action and emotion categories. With searchlight representational similarity analysis, we found that the left middle superior temporal sulcus and left dorsal premotor cortex corresponded to the subjective action and emotion representations. Furthermore, using task-residual functional connectivity and hierarchical clustering, we found that areas in the action observation network and the semantic/default-mode network were functionally connected to these two seed regions and showed similar representations. Our study provides direct evidence that both networks were concurrently involved in subjective action and emotion understanding.
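
The connectivity-plus-clustering step can be sketched as: correlate region time courses, then hierarchically cluster regions on their connectivity profiles. The time courses and the two-network grouping below are simulated, not the study's parcellation.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Simulated task-residual time courses: two networks x three regions, 200 TRs
net_a, net_b = rng.normal(size=200), rng.normal(size=200)
regions = np.stack([net_a + 0.2 * rng.normal(size=200) for _ in range(3)]
                   + [net_b + 0.2 * rng.normal(size=200) for _ in range(3)])

# Functional connectivity: pairwise correlation of region time courses
fc = np.corrcoef(regions)

# Hierarchical clustering of regions on their connectivity profiles
Z = linkage(fc, method="average")        # rows of fc as feature vectors
clusters = fcluster(Z, t=2, criterion="maxclust")
```

Regions sharing a latent signal end up in the same cluster, which is the sense in which areas "functionally connected to the seed regions" group into networks.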


2021 ◽  
Author(s):  
Daniel A Stehr ◽  
Xiaojue Zhou ◽  
Mariel Tisby ◽  
Patrick T Hwu ◽  
John A Pyles ◽  
...  

Abstract
The posterior superior temporal sulcus (pSTS) is a brain region characterized by perceptual representations of human body actions that promote the understanding of observed behavior. Increasingly, action observation is recognized as being strongly shaped by the expectations of the observer (Kilner 2011; Koster-Hale and Saxe 2013; Patel et al. 2019). Therefore, to characterize top-down influences on action observation, we evaluated the statistical structure of multivariate activation patterns from the action observation network (AON) while observers attended to the different dimensions of action vignettes (the action kinematics, goal, or identity of avatars jumping or crouching). Decoding accuracy varied as a function of attention instruction in the right pSTS and left inferior frontal cortex (IFC), with the right pSTS classifying actions most accurately when observers attended to the action kinematics and the left IFC classifying most accurately when observers attended to the actor’s goal. Functional connectivity also increased between the right pSTS and right IFC when observers attended to the actions portrayed in the vignettes. Our findings provide evidence that the attentive state of the viewer modulates sensory representations in the pSTS, consistent with proposals that the pSTS occupies an interstitial zone mediating top-down context and bottom-up perceptual cues during action observation.
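
Decoding accuracy here means cross-validated classification of condition labels from multivoxel patterns. The sketch below uses a simple nearest-centroid classifier on synthetic patterns; the study's actual classifier and data dimensions are not specified in this abstract.

```python
import numpy as np

def nearest_centroid_cv(X, y, n_folds=5):
    """Cross-validated nearest-centroid decoding accuracy."""
    idx = np.arange(len(y))
    correct = 0
    for fold in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, fold)
        centroids = {c: X[train][y[train] == c].mean(axis=0)
                     for c in np.unique(y[train])}
        for i in fold:   # assign each held-out trial to the nearest centroid
            pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
            correct += int(pred == y[i])
    return correct / len(y)

rng = np.random.default_rng(0)
n, v = 20, 30                        # trials per class, voxels
signal = np.zeros(v); signal[:5] = 1.0
# Two observed actions (e.g., jump vs. crouch), separable in five "voxels"
X = np.vstack([rng.normal(size=(n, v)) + signal,
               rng.normal(size=(n, v)) - signal])
y = np.array([0] * n + [1] * n)
acc = nearest_centroid_cv(X, y)
```

Running this per ROI and per attention instruction, and comparing the resulting accuracies, is the shape of the analysis that revealed the attention-dependent decoding in pSTS and IFC.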


2011 ◽  
Vol 29 (supplement) ◽  
pp. 352-377 ◽  
Author(s):  
Seon Hee Jang ◽  
Frank E Pollick

The study of dance has been helpful to advance our understanding of how human brain networks of action observation are influenced by experience. However, previous studies have not examined the effect of extensive visual experience alone: for example, an art critic or dance fan who has a rich experience of watching dance but negligible experience performing dance. To explore the effect of pure visual experience we performed a single experiment using functional Magnetic Resonance Imaging (fMRI) to compare the neural processing of dance actions in 3 groups: a) 14 ballet dancers, b) 10 experienced viewers, c) 12 novices without any extensive dance or viewing experience. Each of the 36 participants viewed short 2-second displays of ballet derived from motion capture of a professional ballerina. These displays represented the ballerina as only points of light at the major joints. We wished to study the action observation network broadly and thus included two different types of display and two different tasks for participants to perform. The two different displays were: a) brief movies of a ballet action and b) frames from the ballet movies with the points of light connected by lines to show a ballet posture. The two different tasks were: a) passively observe the display and b) imagine performing the action depicted in the display. The two levels of display and task were combined factorially to produce four experimental conditions (observe movie, observe posture, motor imagery of movie, motor imagery of posture). The set of stimuli used in the experiment is available for download after this paper. A random effects ANOVA was performed on brain activity and an effect of experience was obtained in seven different brain areas including: right Temporoparietal Junction (TPJ), left Retrosplenial Cortex (RSC), right Primary Somatosensory Cortex (S1), bilateral Primary Motor Cortex (M1), right Orbitofrontal Cortex (OFC), and right Temporal Pole (TP). 
The patterns of activation were plotted in each of these areas (TPJ, RSC, S1, M1, OFC, TP) to investigate more closely how the effect of experience changed across these areas. For this analysis, novices were treated as baseline and the relative effect of experience examined in the dancer and experienced viewer groups. Interpretation of these results suggests that visual and motor experience appear equivalent in producing more extensive early processing of dance actions in early stages of representation (TPJ and RSC), and we hypothesise that this could be due to the involvement of autobiographical memory processes. The pattern of results found for dancers in S1 and M1 suggests that their perception of dance actions is enhanced by embodied processes. For example, the S1 results are consistent with claims that this brain area shows mirror properties. The pattern of results found for the experienced viewers in OFC and TP suggests that their perception of dance actions is enhanced by cognitive processes, for example those involving aspects of social cognition and hedonic processing: the experienced viewers find the motor imagery task more pleasant and have richer connections of dance to social memory. While aspects of our interpretation are speculative, the core results clearly show common and distinct aspects of how viewing experience and physical experience shape brain responses to watching dance.

