Change in an implicit probabilistic representation captures meaning processing in the brain

2017 ◽  
Author(s):  
Milena Rabovsky ◽  
Steven S. Hansen ◽  
James L. McClelland

The N400 component of the event-related brain potential has aroused much interest because it is thought to provide an online measure of meaning processing in the brain. Yet, the underlying process remains incompletely understood and actively debated. Here, we present a computationally explicit account of this process and the emerging representation of sentence meaning. We simulate N400 amplitudes as the change induced by an incoming stimulus in an implicit and probabilistic representation of meaning captured by the hidden unit activation pattern in a neural network model of sentence comprehension, and we propose that the process underlying the N400 also drives implicit learning in the network. The model provides a unified account of 16 distinct findings from the N400 literature and connects human language processing with successful deep learning approaches to language processing.
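The core quantity in this account, N400 amplitude as the change an incoming word induces in an implicit probabilistic representation, can be sketched numerically. The Python snippet below is illustrative only: the activation patterns are random toy vectors, not outputs of the Sentence Gestalt model, and the update measure is a simple summed absolute change.

```python
import numpy as np

rng = np.random.default_rng(0)

def semantic_update(prev_state, new_state):
    # N400 proxy: summed magnitude of change in the implicit
    # probabilistic representation after an incoming word.
    return float(np.abs(new_state - prev_state).sum())

# Toy hidden-unit activation patterns before and after a word.
state_before = rng.random(50)
state_expected = state_before + rng.normal(0, 0.05, 50)   # small revision
state_unexpected = state_before + rng.normal(0, 0.5, 50)  # large revision

su_expected = semantic_update(state_before, state_expected)
su_unexpected = semantic_update(state_before, state_unexpected)
```

Under this toy setup, a word that forces a large revision of the representation yields a larger simulated N400 than one that barely changes it.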

2015 ◽  
Vol 27 (8) ◽  
pp. 1528-1541 ◽  
Author(s):  
Vicky Tzuyin Lai ◽  
Roel M. Willems ◽  
Peter Hagoort

This study investigated the brain regions for the comprehension of implied emotion in sentences. Participants read negative sentences without negative words, for example, “The boy fell asleep and never woke up again,” and their neutral counterparts “The boy stood up and grabbed his bag.” This kind of negative sentence allows us to examine implied emotion derived at the sentence level, without associative emotion coming from word retrieval. We found that implied emotion in sentences, relative to neutral sentences, led to activation in some emotion-related areas, including the medial prefrontal cortex, the amygdala, and the insula, as well as certain language-related areas, including the inferior frontal gyrus, which has been implicated in combinatorial processing. These results suggest that the emotional network involved in implied emotion is intricately related to the network for combinatorial processing in language, supporting the view that sentence meaning is more than simply concatenating the meanings of its lexical building blocks.


2021 ◽  
Author(s):  
Alice Hodapp ◽  
Milena Rabovsky

The functional significance of the N400 ERP component is still actively debated. Based on neural network modeling, it was recently proposed that the N400 component can be interpreted as the change in a probabilistic representation, corresponding to an internal temporal-difference prediction error at the level of meaning that drives adaptation in language processing. These computational modeling results imply that increased N400 amplitudes should correspond to greater adaptation. To investigate this model-derived hypothesis, the current study manipulated expectancy in a sentence reading task, which influenced N400 amplitudes and, critically, also later implicit memory for the manipulated word: reaction times in a perceptual identification task were significantly faster for previously unexpected words. Additionally, this adaptation was shown to depend specifically on the process underlying N400 amplitudes, as participants with larger N400 differences also exhibited a larger implicit memory benefit. These findings support the interpretation of the N400 as an implicit learning signal in language processing.
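The individual-differences result, larger N400 differences going with larger implicit memory benefits, amounts to an across-participant correlation. A minimal sketch with synthetic per-participant values (all numbers hypothetical, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical per-participant measures: the size of the N400 expectancy
# effect and the implicit memory benefit (RT speed-up in the perceptual
# identification task for previously unexpected words, in ms).
n400_effect = rng.random(30)
memory_benefit = 40 * n400_effect + rng.normal(0, 3, 30)

# Under the learning account, participants with larger N400 differences
# should show a larger implicit memory benefit.
r = np.corrcoef(n400_effect, memory_benefit)[0, 1]
```

A positive across-participant correlation of this kind is what links the ERP measure to the behavioral adaptation effect.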


2021 ◽  
Author(s):  
Alessandro Lopopolo ◽  
Milena Rabovsky

The N400 component of the event-related brain potential is widely used to investigate language and meaning processing. However, despite much research, the component's functional basis remains actively debated. Recent work showed that the update of the predictive representation of sentence meaning (semantic update, or SU) generated by the Sentence Gestalt model (McClelland et al., 1989) consistently displayed a similar pattern to the N400 amplitude in a series of conditions known to modulate this event-related potential. These results led Rabovsky et al. (2018) to suggest that the N400 might reflect change in a probabilistic representation of meaning corresponding to an implicit semantic prediction error. However, a limitation of this work is that the model was trained on a small artificial training corpus and thus could not be presented with the same naturalistic stimuli presented in empirical experiments. In the present study, we overcome this limitation and directly model the amplitude of the N400 elicited during naturalistic sentence processing, using the SU generated by a Sentence Gestalt model trained on a large text corpus as the predictor. The results reported in this paper corroborate the hypothesis that the N400 component reflects the change in a probabilistic representation of meaning after every word presentation. Further analyses demonstrate that the SU of the Sentence Gestalt model and the amplitude of the N400 are influenced similarly by the stochastic and positional properties of the linguistic input.
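Using the model's semantic update as a predictor of single-trial N400 amplitudes is, at its simplest, a regression problem. The sketch below uses simulated SU and N400 values, so the coupling is built in by construction; real analyses work on trial-level EEG data and typically use mixed-effects rather than ordinary least-squares models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-word data: model-derived semantic update (SU) and
# measured single-trial N400 amplitude (arbitrary units).
su = rng.random(200)
n400 = 2.0 * su + rng.normal(0, 0.3, 200)  # simulated positive coupling

# Ordinary least squares: does SU predict N400 amplitude?
X = np.column_stack([np.ones_like(su), su])
beta, *_ = np.linalg.lstsq(X, n400, rcond=None)
intercept, slope = beta
```

A reliably positive slope is the regression-level analogue of the claim that larger semantic updates go with larger N400 amplitudes.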


2020 ◽  
Vol 8 ◽  
pp. 231-246
Author(s):  
Vesna G. Djokic ◽  
Jean Maillard ◽  
Luana Bulat ◽  
Ekaterina Shutova

Recent years have seen a growing interest within the natural language processing (NLP) community in evaluating the ability of semantic models to capture human meaning representation in the brain. Existing research has mainly focused on applying semantic models to decode brain activity patterns associated with the meaning of individual words, and, more recently, this approach has been extended to sentences and larger text fragments. Our work is the first to investigate metaphor processing in the brain in this context. We evaluate a range of semantic models (word embeddings, compositional, and visual models) in their ability to decode brain activity associated with reading of both literal and metaphoric sentences. Our results suggest that compositional models and word embeddings are able to capture differences in the processing of literal and metaphoric sentences, providing support for the idea that the literal meaning is not fully accessible during familiar metaphor comprehension.
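Decoding brain activity with a semantic model typically means learning a linear map from voxel patterns to embedding vectors and checking whether decoded vectors match their own targets better than mismatched ones. A self-contained sketch with synthetic data follows; the closed-form ridge solution and the match/mismatch comparison are generic choices, not the authors' exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic setup: voxel patterns generated from semantic-model vectors.
n_trials, n_voxels, n_dims = 60, 100, 10
true_embeddings = rng.normal(size=(n_trials, n_dims))
W = rng.normal(size=(n_dims, n_voxels))
brain = true_embeddings @ W + rng.normal(0, 0.1, size=(n_trials, n_voxels))

# Ridge regression (closed form) from brain activity to embeddings.
lam = 1.0
A = brain.T @ brain + lam * np.eye(n_voxels)
coefs = np.linalg.solve(A, brain.T @ true_embeddings)  # (n_voxels, n_dims)
decoded = brain @ coefs

# Decoded vectors should correlate with their own targets (diagonal)
# more strongly than with mismatched targets (off-diagonal), on average.
d = decoded - decoded.mean(axis=1, keepdims=True)
t = true_embeddings - true_embeddings.mean(axis=1, keepdims=True)
d /= np.linalg.norm(d, axis=1, keepdims=True)
t /= np.linalg.norm(t, axis=1, keepdims=True)
corr = d @ t.T
match_score = float(np.mean(np.diag(corr)))
mismatch_score = float((corr.sum() - np.trace(corr)) / (corr.size - len(corr)))
```

Evaluations in this literature often report a pairwise (2-vs-2) accuracy built on exactly this kind of match/mismatch comparison.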


2019 ◽  
Vol 375 (1791) ◽  
pp. 20190313 ◽  
Author(s):  
Milena Rabovsky ◽  
James L. McClelland

We argue that natural language can be usefully described as quasi-compositional and we suggest that deep learning-based neural language models bear long-term promise to capture how language conveys meaning. We also note that a successful account of human language processing should explain both the outcome of the comprehension process and the continuous internal processes underlying this performance. These points motivate our discussion of a neural network model of sentence comprehension, the Sentence Gestalt model, which we have used to account for the N400 component of the event-related brain potential (ERP), which tracks meaning processing as it happens in real time. The model, which shares features with recent deep learning-based language models, simulates N400 amplitude as the automatic update of a probabilistic representation of the situation or event described by the sentence, corresponding to a temporal difference learning signal at the level of meaning. We suggest that this process happens relatively automatically, and that sometimes a more-controlled attention-dependent process is necessary for successful comprehension, which may be reflected in the subsequent P600 ERP component. We relate this account to current deep learning models as well as classic linguistic theory, and use it to illustrate a domain general perspective on some specific linguistic operations postulated based on compositional analyses of natural language. This article is part of the theme issue ‘Towards mechanistic models of meaning composition’.


2017 ◽  
Author(s):  
Sabrina Jaeger ◽  
Simone Fulle ◽  
Samo Turk

Inspired by natural language processing techniques, we here introduce Mol2vec, an unsupervised machine learning approach to learn vector representations of molecular substructures. Similarly to Word2vec models, where vectors of closely related words lie in close proximity in the vector space, Mol2vec learns vector representations of molecular substructures that point in similar directions for chemically related substructures. Compounds can finally be encoded as vectors by summing the vectors of their individual substructures and can, for instance, be fed into supervised machine learning approaches to predict compound properties. The underlying substructure vector embeddings are obtained by training an unsupervised machine learning approach on a so-called corpus of compounds that consists of all available chemical matter. The resulting Mol2vec model is pre-trained once, yields dense vector representations, and overcomes drawbacks of common compound feature representations such as sparseness and bit collisions. The prediction capabilities are demonstrated on several compound property and bioactivity data sets and compared with results obtained for Morgan fingerprints as the reference compound representation. Mol2vec can be easily combined with ProtVec, which employs the same Word2vec concept on protein sequences, resulting in a proteochemometric approach that is alignment independent and can thus also be easily applied to proteins with low sequence similarity.
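The compound-encoding step, summing the vectors of a molecule's substructures, is straightforward to sketch. The substructure IDs and embedding values below are made up for illustration; in Mol2vec they would come from Morgan substructure identifiers and the trained embedding table.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical substructure vocabulary with learned Mol2vec-style
# embeddings; the values here are random stand-ins, not trained vectors.
substructure_vectors = {sid: rng.normal(size=8) for sid in range(5)}

def encode_compound(substructure_ids, table, dim=8):
    # A compound vector is the sum of its substructure vectors;
    # substructures missing from the vocabulary contribute zeros.
    vec = np.zeros(dim)
    for sid in substructure_ids:
        vec += table.get(sid, np.zeros(dim))
    return vec

# A toy compound containing substructure 1 twice (counts matter under a sum).
compound = encode_compound([0, 1, 1, 4], substructure_vectors)
```

The resulting dense compound vector is what would then be passed to a supervised property-prediction model.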


Author(s):  
Riitta Salmelin ◽  
Jan Kujala ◽  
Mia Liljeström

When seeking to uncover the brain correlates of language processing, timing and location are of the essence. Magnetoencephalography (MEG) offers them both, with the highest sensitivity to cortical activity. MEG has shown its worth in revealing cortical dynamics of reading, speech perception, and speech production in adults and children, in unimpaired language processing as well as developmental and acquired language disorders. The MEG signals, once recorded, provide an extensive selection of measures for examination of neural processing. Like all other neuroimaging tools, MEG has its own strengths and limitations of which the user should be aware in order to make the best possible use of this powerful method and to generate meaningful and reliable scientific data. This chapter reviews MEG methodology and how MEG has been used to study the cortical dynamics of language.


Electronics ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1372
Author(s):  
Sanjanasri JP ◽  
Vijay Krishna Menon ◽  
Soman KP ◽  
Rajendran S ◽  
Agnieszka Wolk

Linguists have long focused on qualitative comparison of the semantics of different languages. Evaluating semantic interpretation across disparate language pairs like English and Tamil is an even more formidable task than for Slavic languages. The concept of word embedding in Natural Language Processing (NLP) has enabled a felicitous opportunity to quantify linguistic semantics. Multi-lingual tasks can be performed by projecting the word embeddings of one language onto the semantic space of the other. This research presents a suite of data-efficient deep learning approaches to deduce the transfer function from the embedding space of English to that of Tamil, deploying three popular embedding algorithms: Word2Vec, GloVe and FastText. A novel evaluation paradigm was devised for the generation of embeddings to assess their effectiveness, using the original embeddings as ground truths. The transferability of the proposed model across other target languages was assessed via pre-trained Word2Vec embeddings from Hindi and Chinese. We empirically show that, with a bilingual dictionary of a thousand words and a corresponding small monolingual target (Tamil) corpus, useful embeddings can be generated by transfer learning from a well-trained source (English) embedding. Furthermore, we demonstrate the usability of the generated target embeddings in a few NLP use-case tasks, such as text summarization, part-of-speech (POS) tagging, and bilingual dictionary induction (BDI), bearing in mind that those are not the only possible applications.
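A common way to realize such a transfer function is a linear map learned by least squares from the bilingual dictionary pairs (the Mikolov-style translation matrix). The sketch below recovers a known map from synthetic "English" and "Tamil" vectors; the dimensions, noise level, and dictionary size are arbitrary choices for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical aligned embeddings for a small bilingual dictionary:
# rows of `src` are source-language vectors, rows of `tgt` the
# corresponding target-language translations.
n_pairs, dim = 1000, 50
src = rng.normal(size=(n_pairs, dim))
T_true = rng.normal(size=(dim, dim))          # ground-truth map (synthetic)
tgt = src @ T_true + rng.normal(0, 0.01, size=(n_pairs, dim))

# Learn the transfer function T minimizing ||src @ T - tgt||^2.
T, *_ = np.linalg.lstsq(src, tgt, rcond=None)
```

Once learned, any new source-language vector can be projected into the target space as `vector @ T`, which is the basis for tasks like bilingual dictionary induction.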


2020 ◽  
pp. 174702182098462
Author(s):  
Masataka Yano ◽  
Shugo Suwazono ◽  
Hiroshi Arao ◽  
Daichi Yasunaga ◽  
Hiroaki Oishi

The present study conducted two event-related potential experiments to investigate whether readers adapt their expectations to morphosyntactically (Experiment 1) or semantically (Experiment 2) anomalous sentences when they are repeatedly exposed to them. To address this issue, we manipulated the probability of occurrence of morphosyntactically/semantically grammatical and anomalous sentences across the experiments. In the low probability block, anomalous sentences were presented less frequently than grammatical sentences (with a ratio of 1 to 4), while in the equal probability block they were presented as frequently as grammatical sentences. Experiment 1 revealed a smaller P600 effect for morphosyntactic violations in the equal probability block than in the low probability block. Linear mixed-effects models were used to examine how the size of the P600 effect changed over the course of the experiment. The results showed that the smaller P600 effect in the equal probability block resulted from a decline in amplitude for morphosyntactically violated sentences over the course of the experiment, suggesting an adaptation to morphosyntactic violations. In Experiment 2, semantically anomalous sentences elicited a larger N400 effect than their semantically natural counterparts regardless of the probability manipulation. No evidence was found in favor of adaptation to semantic violations, in that the processing cost of semantic violations did not decrease over the course of the experiment. The present study thus demonstrated a dynamic aspect of the language-processing system. We discuss why the system shows a selective adaptation to morphosyntactic violations.
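The trial-by-trial adaptation analysis boils down to estimating how the violation effect changes with trial number. A minimal sketch with simulated per-trial effect sizes follows; the decline in the equal-probability block is built in by construction, and the actual study used linear mixed-effects models rather than the simple per-block regression shown here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical single-trial P600 effect sizes (violation minus grammatical,
# in microvolts): declining over trials in the equal-probability block,
# roughly flat in the low-probability block.
trials = np.arange(100)
equal_block = 5.0 - 0.03 * trials + rng.normal(0, 0.5, 100)
low_block = 5.0 + rng.normal(0, 0.5, 100)

def trend(y, x):
    # OLS slope: change in effect size per trial.
    X = np.column_stack([np.ones_like(x, dtype=float), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(beta[1])

slope_equal = trend(equal_block, trials)
slope_low = trend(low_block, trials)
```

A negative slope in one block but not the other is the signature of selective adaptation over the course of the experiment.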


2021 ◽  
pp. 002073142110174
Author(s):  
Md Mijanur Rahman ◽  
Fatema Khatun ◽  
Ashik Uzzaman ◽  
Sadia Islam Sami ◽  
Md Al-Amin Bhuiyan ◽  
...  

The novel coronavirus disease (COVID-19) has spread to 219 countries as a pandemic, creating alarming impacts on health care, socioeconomic environments, and international relationships. The principal objective of the study is to provide the current technological aspects of artificial intelligence (AI) and other relevant technologies and their implications for confronting COVID-19 and preventing the pandemic's dreadful effects. This article presents AI approaches that have made significant contributions in the field of health care, then highlights and categorizes their applications in confronting COVID-19, such as detection and diagnosis, data analysis and treatment procedures, research and drug development, social control and services, and the prediction of outbreaks. The study addresses the link between the technologies and the epidemics, as well as the potential impacts of technology in health care, with the introduction of machine learning and natural language processing tools. It is expected that this comprehensive study will support researchers in modeling health care systems and drive further studies in advanced technologies. Finally, we propose future directions in research and conclude that persuasive AI strategies, probabilistic models, and supervised learning are required to tackle future pandemic challenges.

