Evidence for the representation of movement kinematics in the discharge of F5 mirror neurons during the observation of transitive and intransitive actions

2017, Vol 118 (6), pp. 3215-3229
Author(s): Vassilis Papadourakis, Vassilis Raos

Mirror neurons (MirNs) are sensorimotor neurons that fire both when an animal performs a goal-directed action and when it observes another agent performing the same or a similar transitive action. It has been claimed that the observation of intransitive actions does not activate MirNs in the monkey brain. Prompted by recent evidence that the discharge of MirNs is also modulated by non-object-directed actions, we thoroughly investigated the efficacy of intransitive actions in triggering MirN discharge. Using representational similarity analysis, we also examined whether the elements constituting the visual scene presented to the monkey during the observation of actions (both transitive and intransitive) are represented in the discharge of MirNs. For this purpose, the moving hand was modeled by its kinematics and the object by features of its geometry. We found that MirNs respond to the observation of both transitive and intransitive actions and that the discharge differences evoked by the observation of object- and non-object-directed actions correlate more strongly with the kinematic differences between these actions than with the differences between the objects' features. These findings support the view that observed action kinematics contribute to action mirroring.

NEW & NOTEWORTHY Mirror neurons in the monkey brain are thought to respond exclusively to the observation of object-directed actions. Here, we show that mirror neurons also respond to the observation of intransitive actions and that the kinematics of the observed movements are represented in their discharge. This finding supports the view that mirror neurons also provide a kinematics-based representation of actions.
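
To make the representational similarity analysis (RSA) logic described above concrete, here is a minimal Python sketch: pairwise dissimilarity matrices are built for the neural responses, the hand kinematics, and the object geometry, and the neural matrix is then rank-correlated with each model matrix. All counts, shapes, and the random placeholder data are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal RSA sketch, assuming illustrative shapes and placeholder data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_conditions = 8      # observed actions (transitive and intransitive); assumed
n_neurons = 50        # recorded mirror neurons; assumed count
n_kin_features = 12   # hand-kinematics descriptors per action; assumed
n_obj_features = 6    # object-geometry descriptors per action; assumed

# Placeholder data standing in for real recordings and measurements.
firing_rates = rng.normal(size=(n_conditions, n_neurons))
kinematics = rng.normal(size=(n_conditions, n_kin_features))
object_geometry = rng.normal(size=(n_conditions, n_obj_features))

# Representational dissimilarity matrices (condensed form): pairwise
# distances between conditions in each representational space.
neural_rdm = pdist(firing_rates, metric="correlation")
kinematic_rdm = pdist(kinematics, metric="euclidean")
object_rdm = pdist(object_geometry, metric="euclidean")

# Compare the neural RDM with each model RDM; rank correlation is the
# common choice in RSA because RDMs need not be linearly related.
rho_kin, _ = spearmanr(neural_rdm, kinematic_rdm)
rho_obj, _ = spearmanr(neural_rdm, object_rdm)
print(f"neural ~ kinematics: rho = {rho_kin:.2f}")
print(f"neural ~ object geometry: rho = {rho_obj:.2f}")
```

Under this scheme, the paper's finding corresponds to the neural RDM correlating more strongly with the kinematic model than with the object-geometry model.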

2017, Vol 17 (10), pp. 571
Author(s): Ming Bo Cai, Nicolas Schuck, Michael Anderson, Jonathan Pillow, Yael Niv

2019
Author(s): Lin Wang, Edward Wlotko, Edward Alexander, Lotte Schoot, Minjae Kim, ...

Abstract: It has been proposed that people can generate probabilistic predictions at multiple levels of representation during language comprehension. We used magnetoencephalography (MEG) and electroencephalography (EEG), in combination with representational similarity analysis (RSA), to seek neural evidence for the prediction of animacy features. In two studies, MEG and EEG activity was measured as human participants (both sexes) read three-sentence scenarios. Verbs in the final sentences constrained for either animate or inanimate semantic features of upcoming nouns, and the broader discourse context constrained for either a specific noun or for multiple nouns belonging to the same animacy category. We quantified the similarity between spatial patterns of brain activity following the verbs, up until just before the presentation of the nouns. The MEG and EEG datasets revealed converging evidence that the similarity between spatial patterns of neural activity was greater following animate-constraining verbs than following inanimate-constraining verbs. This effect could not be explained by lexical-semantic processing of the verbs themselves. We therefore suggest that it reflected the inherent difference in the semantic similarity structure of the predicted animate and inanimate nouns. Moreover, the effect was present regardless of whether a specific word could be predicted, providing strong evidence for the prediction of coarse-grained semantic features that goes beyond the prediction of individual words.

Significance statement: Language inputs unfold very quickly during real-time communication. By predicting ahead, we can give our brains a "head start," so that language comprehension is faster and more efficient. While most contexts do not constrain strongly for a specific word, they do allow us to predict some upcoming information. For example, following the context "they cautioned the...", we can predict that the next word will be animate rather than inanimate (we can caution a person, but not an object). Here we used EEG and MEG to show that the brain is able to use these contextual constraints to predict the animacy of upcoming words during sentence comprehension, and that these predictions are associated with specific spatial patterns of neural activity.
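
A minimal sketch of the spatial-pattern similarity measure described in the abstract: at each time point, sensor-space patterns are correlated across trials within a condition, and the average pairwise similarity is then compared between the animate- and inanimate-constraining conditions. Trial counts, sensor counts, and the random placeholder data are assumptions for illustration, not the authors' pipeline.

```python
# Hedged sketch of within-condition spatial-pattern similarity over time,
# assuming illustrative epoch shapes and placeholder data.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_sensors, n_times = 40, 100, 200  # assumed dimensions

def mean_pattern_similarity(epochs):
    """Average pairwise Pearson correlation between trials' spatial
    patterns at each time point. epochs: (trials, sensors, times)."""
    n = epochs.shape[0]
    sims = np.empty(epochs.shape[2])
    for t in range(epochs.shape[2]):
        # Correlation matrix across trials' sensor patterns at time t.
        r = np.corrcoef(epochs[:, :, t])
        # Mean of the upper triangle, excluding self-correlations.
        sims[t] = r[np.triu_indices(n, k=1)].mean()
    return sims

# Placeholder MEG/EEG epochs for the two verb conditions.
animate = rng.normal(size=(n_trials, n_sensors, n_times))
inanimate = rng.normal(size=(n_trials, n_sensors, n_times))

sim_animate = mean_pattern_similarity(animate)
sim_inanimate = mean_pattern_similarity(inanimate)

# The reported effect corresponds to sim_animate exceeding sim_inanimate
# in the interval between verb onset and noun presentation.
print((sim_animate - sim_inanimate).mean())
```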


PLoS ONE, 2015, Vol 10 (8), pp. e0135697
Author(s): Blair Kaneshiro, Marcos Perreau Guimaraes, Hyung-Suk Kim, Anthony M. Norcia, Patrick Suppes
