Decoding and encoding models reveal the role of mental simulation in the brain representation of meaning

2019 ◽  
Author(s):  
David Soto ◽  
Usman Ayub Sheikh ◽  
Ning Mei ◽  
Roberto Santana

Abstract
How the brain representation of conceptual knowledge varies as a function of processing goals, strategies and task factors remains a key unresolved question in cognitive neuroscience. Here we asked how the brain representation of semantic categories is shaped by the depth of processing during mental simulation. Participants were presented with visual words during functional magnetic resonance imaging (fMRI). During shallow processing, participants had to read the items. During deep processing, they had to mentally simulate the features associated with the words. Multivariate classification, informational connectivity and encoding models were used to reveal how the depth of processing determines the brain representation of word meaning. Decoding accuracy in putative substrates of the semantic network was enhanced when the depth of processing was high, and the brain representations were more generalizable in semantic space relative to shallow processing contexts. This pattern was observed even in association areas in inferior frontal and parietal cortex. Deep information processing during mental simulation also increased the informational connectivity within key substrates of the semantic network. To further examine the properties of the words encoded in brain activity, we compared computer vision models (associated with the image referents of the words) and word embedding models. Computer vision models explained more variance of the brain responses across multiple areas of the semantic network. These results indicate that the brain representation of word meaning is highly malleable by the depth of processing imposed by the task, relies on access to visual representations and is highly distributed, including prefrontal areas previously implicated in semantic control.
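The multivariate classification analysis described above can be illustrated with a minimal sketch: a regularized linear classifier decodes semantic category from voxel patterns under cross-validation. All data, shapes, and parameters below are synthetic assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200
labels = np.repeat([0, 1], n_trials // 2)       # two semantic categories
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :20] += 0.5               # weak category signal in a voxel subset

# standardize voxels, then fit a regularized linear decoder per CV fold
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, patterns, labels, cv=cv, scoring="roc_auc")
print(f"mean ROC-AUC: {scores.mean():.2f}")     # well above the 0.5 chance level
```

In the real analysis the folds would respect scanner runs and the labels would come from the word categories; the point of the sketch is only the train/test logic behind "decoding accuracy".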

2020 ◽  
Vol 7 (5) ◽  
pp. 192043 ◽  
Author(s):  
David Soto ◽  
Usman Ayub Sheikh ◽  
Ning Mei ◽  
Roberto Santana

How the brain representation of conceptual knowledge varies as a function of processing goals, strategies and task factors remains a key unresolved question in cognitive neuroscience. In the present study, participants were presented with visual words during functional magnetic resonance imaging (fMRI). During shallow processing, participants had to read the items. During deep processing, they had to mentally simulate the features associated with the words. Multivariate classification, informational connectivity and encoding models were used to reveal how the depth of processing determines the brain representation of word meaning. Decoding accuracy in putative substrates of the semantic network was enhanced when the depth of processing was high, and the brain representations were more generalizable in semantic space relative to shallow processing contexts. This pattern was observed even in association areas in inferior frontal and parietal cortex. Deep information processing during mental simulation also increased the informational connectivity within key substrates of the semantic network. To further examine the properties of the words encoded in brain activity, we compared computer vision models—associated with the image referents of the words—and word embedding models. Computer vision models explained more variance of the brain responses across multiple areas of the semantic network. These results indicate that the brain representation of word meaning is highly malleable by the depth of processing imposed by the task, relies on access to visual representations and is highly distributed, including prefrontal areas previously implicated in semantic control.


2019 ◽  
Author(s):  
Usman Ayub Sheikh ◽  
Manuel Carreiras ◽  
David Soto

The neurocognitive mechanisms that support the generalization of semantic representations across different languages remain to be determined. Current psycholinguistic models propose that semantic representations are likely to overlap across languages, although there is also evidence to the contrary. Neuroimaging studies have observed that brain activity patterns associated with the meaning of words may be similar across languages. However, the factors that mediate cross-language generalization of semantic representations are not known. Here we identify a key factor: the depth of processing. Human participants were asked to process visual words as they underwent functional MRI. We found that, during shallow processing, multivariate pattern classifiers could decode the word semantic category within each language in putative substrates of the semantic network, but there was no evidence of cross-language generalization in the shallow processing context. By contrast, when the depth of processing was higher, significant cross-language generalization was observed in several regions, including inferior parietal, ventromedial, lateral temporal, and inferior frontal cortex. These results support the distributed-only view of semantic processing and favour models based on multiple semantic hubs. The results also have ramifications for psycholinguistic models of word processing such as the BIA+, which by default assumes non-selective access to both native and second languages.
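The cross-language generalization test described above amounts to training a classifier on word-evoked patterns from one language and testing it on the other. A minimal sketch, using synthetic data in which the two languages share a common "semantic axis" (an assumption for illustration only):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_per_lang, n_voxels = 80, 150
semantic_axis = rng.normal(size=n_voxels)    # category code shared across languages

def simulate():
    """Simulate one language's trials: noise plus a shared semantic signal."""
    labels = np.repeat([0, 1], n_per_lang // 2)
    X = rng.normal(size=(n_per_lang, n_voxels))
    X += np.outer(labels - 0.5, semantic_axis) * 0.4
    return X, labels

X_l1, y_l1 = simulate()                      # e.g. native-language trials
X_l2, y_l2 = simulate()                      # e.g. second-language trials

# train within one language, test on the other: above-chance accuracy here
# indicates a language-general semantic code
clf = LogisticRegression(max_iter=1000).fit(X_l1, y_l1)
cross_acc = clf.score(X_l2, y_l2)
print(f"cross-language decoding accuracy: {cross_acc:.2f}")
```

If the semantic signal were language-specific rather than shared, the cross-language score would fall to chance even when within-language decoding succeeds, which is the dissociation the study reports between shallow and deep processing.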


2013 ◽  
Vol 109 (2) ◽  
pp. 405-414 ◽  
Author(s):  
Luís Aureliano Imbiriba ◽  
Maitê Mello Russo ◽  
Laura Alice Santos de Oliveira ◽  
Ana Paula Fontana ◽  
Erika de Carvalho Rodrigues ◽  
...  

It is well established that the mental simulation of actions involves visual and/or somatomotor representations of those imagined actions. To investigate whether the total absence of vision affects the brain activity associated with the retrieval of motor representations, we recorded the readiness potential (RP), a marker of motor preparation, preceding both the execution and the motor imagery of a right middle-finger extension in the first-person (1P; imagining oneself performing the movement) and in the third-person (3P; imagining the experimenter performing the movement) modes in 19 sighted and 10 congenitally blind subjects. Our main result was found for the single RP slope values at the Cz channel (likely corresponding to the supplementary motor area). No difference in RP slope was found between 1P and 3P in the sighted group, suggesting that similar motor preparation networks are recruited to simulate our own and other people's actions in spite of explicit instructions to perform the task in 1P or 3P. Conversely, reduced RP slopes in 3P compared with 1P in the blind group indicated that they might have used an alternative, nonmotor strategy to perform the task in 3P. Moreover, movement imagery ability, assessed both by means of mental chronometry and a modified version of the Movement Imagery Questionnaire-Revised, indicated that blind and sighted individuals had similar motor imagery performance. Taken together, these results suggest that complete visual loss early in life modifies the brain networks associated with representations of others' actions.
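The RP slope measure underlying this comparison is simply the linear trend of the trial-averaged pre-movement EEG at a channel. A minimal sketch on synthetic Cz epochs (the window length, sampling rate, and drift magnitude are assumptions for illustration):

```python
import numpy as np

fs = 250
t = np.arange(-2 * fs, 0) / fs            # 2 s epoch preceding movement onset
rng = np.random.default_rng(7)

# synthetic Cz trials: a slow negative drift toward movement onset, plus noise
trials = -4e-6 * (t + 2) + rng.normal(scale=5e-6, size=(60, t.size))

erp = trials.mean(axis=0)                  # trial average -> the readiness potential
slope = np.polyfit(t, erp, 1)[0]           # linear trend in V/s; negative for a typical RP
print(f"RP slope at Cz: {slope * 1e6:.1f} µV/s")
```

Comparing such slopes between 1P and 3P conditions (and between groups) is the statistical contrast the abstract refers to.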


2021 ◽  
Author(s):  
Rohan Saha ◽  
Jennifer Campbell ◽  
Janet F. Werker ◽  
Alona Fyshe

Infants start developing rudimentary language skills and can understand simple words well before their first birthday. This development has been shown primarily through Event-Related Potential (ERP) studies that find evidence of word comprehension in the infant brain. While these works validate the presence of semantic representations of words (word meaning) in infants, they do not tell us about the mental processes involved in the manifestation of these semantic representations or the content of the representations. To this end, we use a decoding approach in which we employ machine learning techniques on Electroencephalography (EEG) data to predict the semantic representations of words found in the brain activity of infants. We perform multiple analyses to explore word semantic representations in two groups of infants (9-month-olds and 12-month-olds). Our analyses show significantly above-chance decodability of overall word semantics, word animacy, and word phonetics. As we analyze brain activity, we observe that participants in both age groups show signs of word comprehension immediately after word onset, marked by our model's significantly above-chance word prediction accuracy. We also observed strong neural representations of word phonetics in the brain data for both age groups, some likely correlated with word decoding accuracy and others not. Lastly, we discover that the neural representations of word semantics are similar in both infant age groups. Our results on word semantics, phonetics, and animacy decodability give us insights into the evolution of the neural representation of word meaning in infants.
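One common way to realize the "word prediction" decoding described here is to regress from EEG features to word semantic vectors and score a held-out trial by whether the predicted vector lands closest to the correct word's vector. The sketch below is a hedged illustration on synthetic data; the dimensions, the embedding, and the train/test split are all assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_words, n_trials_per_word, n_feats, n_dims = 8, 10, 60, 5
word_vecs = rng.normal(size=(n_words, n_dims))      # stand-in word embeddings
W = rng.normal(size=(n_dims, n_feats)) * 0.5        # latent semantics -> EEG map

word_ids = np.repeat(np.arange(n_words), n_trials_per_word)
eeg = word_vecs[word_ids] @ W + rng.normal(size=(len(word_ids), n_feats))

# hold out one trial per word, train on the rest
train = np.arange(len(word_ids)) % n_trials_per_word != 0
model = Ridge(alpha=1.0).fit(eeg[train], word_vecs[word_ids[train]])
pred = model.predict(eeg[~train])

# nearest-neighbour word prediction among the candidate word vectors
dists = ((pred[:, None, :] - word_vecs[None, :, :]) ** 2).sum(-1)
acc = (dists.argmin(1) == word_ids[~train]).mean()
print(f"word prediction accuracy: {acc:.2f} (chance = {1 / n_words:.2f})")
```

Comparing such accuracies against their chance level, per time window, is what supports claims like "signs of word comprehension immediately after word onset".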


2019 ◽  
Vol 31 (1) ◽  
pp. 95-108
Author(s):  
Valentina Borghesani ◽  
Marco Buiatti ◽  
Evelyn Eger ◽  
Manuela Piazza

A single word (the noun “elephant”) encapsulates a complex multidimensional meaning, including both perceptual (“big”, “gray”, “trumpeting”) and conceptual (“mammal”, “can be found in India”) features. Opposing theories make different predictions as to whether different features (also conceivable as dimensions of the semantic space) are stored in similar neural regions and recovered with similar temporal dynamics during word reading. In this magnetoencephalography study, we tracked the brain activity of healthy human participants while reading single words varying orthogonally across three semantic dimensions: two perceptual ones (i.e., the average implied real-world size and the average strength of association with a prototypical sound) and a conceptual one (i.e., the semantic category). The results indicate that perceptual and conceptual representations are supported by partially segregated neural networks: Whereas visual and auditory dimensions are encoded in the phase coherence of low-frequency oscillations of occipital and superior temporal regions, respectively, semantic features are encoded in the power of low-frequency oscillations of anterior temporal and inferior parietal areas. However, despite the differences, these representations appear to emerge at the same latency: around 200 msec after stimulus onset. Taken together, these findings suggest that perceptual and conceptual dimensions of the semantic space are recovered automatically, rapidly, and in parallel during word reading.
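The phase-coherence measure mentioned above is typically computed as intertrial phase coherence: band-pass each trial, extract instantaneous phase via the Hilbert transform, and take the length of the mean unit phase vector across trials. A minimal sketch on synthetic trials (the sampling rate, band, and trial counts are assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(3)
fs, n_trials, n_samples = 250, 40, 500
t = np.arange(n_samples) / fs

# trials share a phase-locked 5 Hz component buried in noise
trials = np.sin(2 * np.pi * 5 * t) + rng.normal(size=(n_trials, n_samples))

# band-pass around the low-frequency band, then get instantaneous phase
b, a = butter(4, [3, 8], btype="bandpass", fs=fs)
phases = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))

# intertrial coherence: length of the mean unit phase vector (0 = random, 1 = locked)
itc = np.abs(np.exp(1j * phases).mean(axis=0))
print(f"peak intertrial coherence: {itc.max():.2f}")
```

Power-based encoding, the other measure in the study, would instead average the squared band-passed amplitude across trials, discarding phase.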


2021 ◽  
Author(s):  
Fatma Deniz ◽  
Christine Tseng ◽  
Leila Wehbe ◽  
Jack L Gallant

The meaning of words in natural language depends crucially on context. However, most neuroimaging studies of word meaning use isolated words and isolated sentences with little context. Because the brain may process natural language differently from how it processes simplified stimuli, there is a pressing need to determine whether prior results on word meaning generalize to natural language. We investigated this issue by directly comparing the brain representation of semantic information across four conditions that vary in context. fMRI was used to record human brain activity while four subjects (two female) read words presented in four different conditions: narratives (Narratives), isolated sentences (Sentences), blocks of semantically similar words (Semantic Blocks), and isolated words (Single Words). Using a voxelwise encoding model approach, we find two clear and consistent effects of increasing context. First, stimuli with more context (Narratives, Sentences) evoke brain responses with substantially higher SNR across bilateral visual, temporal, parietal, and prefrontal cortices compared to stimuli with little context (Semantic Blocks, Single Words). Second, increasing context increases the representation of semantic information across bilateral temporal, parietal, and prefrontal cortices at the group level. However, in individual subjects, only natural language stimuli (Narratives) consistently evoke widespread representation of semantic information across the cortical surface. These results show that context has large effects on both the quality of neuroimaging data and on the representation of meaning in the brain, and they imply that the results of neuroimaging studies that use stimuli with little context may not generalize well to the natural regime.
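The voxelwise encoding model approach described above can be sketched as a ridge regression from stimulus features to each voxel's time course, evaluated by held-out prediction accuracy per voxel. The data, feature space, and regularization below are synthetic assumptions for illustration, not the authors' model.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
n_tr, n_feats, n_voxels = 400, 50, 30
features = rng.normal(size=(n_tr, n_feats))          # stimulus feature matrix (one row per TR)
true_w = rng.normal(size=(n_feats, n_voxels))
bold = features @ true_w + rng.normal(scale=4.0, size=(n_tr, n_voxels))

# fit one linear model per voxel on the training portion
train, test = slice(0, 300), slice(300, None)
model = Ridge(alpha=10.0).fit(features[train], bold[train])
pred = model.predict(features[test])

def col_corr(a, b):
    """Pearson correlation between matching columns of a and b."""
    a = a - a.mean(0)
    b = b - b.mean(0)
    return (a * b).sum(0) / np.sqrt((a ** 2).sum(0) * (b ** 2).sum(0))

r = col_corr(pred, bold[test])                       # held-out accuracy, one value per voxel
print(f"median held-out voxel correlation: {np.median(r):.2f}")
```

The paper's SNR and semantic-representation effects correspond, in this framing, to how large such held-out correlations are and where on the cortex they appear under each stimulus condition.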


2010 ◽  
Vol 24 (2) ◽  
pp. 131-135 ◽  
Author(s):  
Włodzimierz Klonowski ◽  
Pawel Stepien ◽  
Robert Stepien

Over 20 years ago, Watt and Hameroff (1987) suggested that consciousness may be described as a manifestation of deterministic chaos in the brain/mind. To analyze EEG-signal complexity, we used Higuchi’s fractal dimension in the time domain and symbolic analysis methods. Our results of analysis of EEG signals under anesthesia, during physiological sleep, and during epileptic seizures lead to a conclusion similar to that of Watt and Hameroff: brain activity, measured by the complexity of the EEG signal, diminishes (becomes less chaotic) when consciousness is being “switched off”. So, consciousness may be described as a manifestation of deterministic chaos in the brain/mind.
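Higuchi's fractal dimension, the complexity measure used above, estimates how the length of a curve built from subsampled versions of the signal scales with the subsampling step k. A minimal time-domain implementation (the choice of k_max is an assumption; published analyses tune it to the signal):

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Higuchi's fractal dimension of a 1-D time series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_k, log_l = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):                      # one subsampled curve per offset m
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # curve length at scale k with Higuchi's normalization factor
            lengths.append(np.abs(np.diff(x[idx])).sum()
                           * (n - 1) / ((len(idx) - 1) * k * k))
        log_k.append(np.log(1.0 / k))
        log_l.append(np.log(np.mean(lengths)))
    # the fractal dimension is the slope of log L(k) against log(1/k)
    return np.polyfit(log_k, log_l, 1)[0]

rng = np.random.default_rng(5)
fd_noise = higuchi_fd(rng.normal(size=1000))                      # ~2 for white noise
fd_sine = higuchi_fd(np.sin(2 * np.pi * 2 * np.linspace(0, 1, 1000)))  # ~1 for a smooth curve
print(fd_noise, fd_sine)
```

The paper's finding maps directly onto this scale: EEG under anesthesia or deep sleep yields lower fractal dimensions (closer to a regular curve) than awake EEG.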


1999 ◽  
Vol 13 (2) ◽  
pp. 117-125 ◽  
Author(s):  
Laurence Casini ◽  
Françoise Macar ◽  
Marie-Hélène Giard

Abstract The experiment reported here was aimed at determining whether the level of brain activity can be related to performance in trained subjects. Two tasks were compared: a temporal and a linguistic task. An array of four letters appeared on a screen. In the temporal task, subjects had to decide whether the letters remained on the screen for a short or a long duration, as learned in a practice phase. In the linguistic task, they had to determine whether the four letters could form a word or not (anagram task). These tasks allowed us to compare the level of brain activity obtained in correct and incorrect responses. The current density measures recorded over prefrontal areas showed a relationship between performance and the level of activity in the temporal task only. The level of activity obtained with correct responses was lower than that obtained with incorrect responses. This suggests that good temporal performance could be the result of an efficacious, but economic, information-processing mechanism in the brain. In addition, the absence of this relation in the anagram task raises the question of whether this relation is specific to the processing of sensory information.


Author(s):  
V. A. Maksimenko ◽  
A. A. Harchenko ◽  
A. Lüttjohann

Introduction: Current interest in studying brain activity through the detection of oscillatory patterns in recorded electrical neuronal activity (electroencephalograms) is driven by the possibility of developing brain-computer interfaces. Brain-computer interfaces are based on the real-time detection of characteristic patterns in electroencephalograms and their transformation into commands for controlling external devices. One important area of application for brain-computer interfaces is the control of pathological brain activity, which is in demand for epilepsy patients who do not respond to drug treatment.
Purpose: To develop a technique for detecting the characteristic patterns of neural activity that precede the occurrence of epileptic seizures.
Results: Using multichannel electroencephalograms, we consider the dynamics of the thalamo-cortical brain network preceding the occurrence of an epileptic seizure. We have developed a technique that makes it possible to predict the occurrence of an epileptic seizure. The technique has been implemented in a brain-computer interface, which has been tested in vivo on an animal model of absence epilepsy.
Practical relevance: The results of our study demonstrate the possibility of predicting epileptic seizures from multichannel electroencephalograms. The obtained results can be used in the development of neurointerfaces for the prediction and prevention of seizures in various types of epilepsy in humans.
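The real-time detection step such an interface performs can be sketched as a sliding-window feature compared against a threshold calibrated on baseline activity. The feature, frequency band, and threshold below are assumptions for illustration; the authors' method is based on the dynamics of the thalamo-cortical network, not this simple detector.

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(6)
fs = 200
t = np.arange(20 * fs) / fs
eeg = rng.normal(size=t.size)
eeg[15 * fs:] += 3 * np.sin(2 * np.pi * 7 * t[15 * fs:])   # precursor-like 7 Hz burst

# band-power feature: band-pass around the burst frequency, square, smooth
b, a = butter(4, [5, 9], btype="bandpass", fs=fs)
power = filtfilt(b, a, eeg) ** 2
win = fs                                                   # 1 s sliding window
smoothed = np.convolve(power, np.ones(win) / win, mode="same")

# threshold calibrated on the first 10 s of seizure-free baseline
threshold = 5 * np.median(smoothed[: 10 * fs])
alarm_idx = np.argmax(smoothed > threshold)                # first threshold crossing
print(f"alarm at t = {alarm_idx / fs:.1f} s")              # near the 15 s onset
```

In an online system the same logic runs causally on streaming samples, and the alarm triggers a control command (e.g. stimulation) instead of a print statement.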

