Term Relationships and Their Contribution to Text Semantics and Information Literacy Through Lexical Cohesion

Author(s):  
Jane Morris ◽  
Clare Beghtol ◽  
Graeme Hirst

An analysis of linguistic approaches to determining the lexical cohesion in text reveals differences in the types of lexical semantic relations (term relationships) that contribute to the continuity of lexical meaning in the text. Differences were also found in how these lexical relations join words together, sometimes with grammatical relations, to form larger groups of related words…

Author(s):  
Jane Morris

Preliminary results from an experimental study of readers’ perceptions of lexical cohesion and lexical semantic relations in text are presented. Readers agree on a common “core” of groups of related words but also exhibit individual differences. The majority of relations reported are “non-classical” (not hyponymy, meronymy, synonymy, or antonymy). A group of commonly used relations is presented. These preliminary results indicate potential for improving both the relations in existing lexical resources and methods that depend on lexical cohesion analysis.


2007 ◽  
Vol 19 (8) ◽  
pp. 1259-1274 ◽  
Author(s):  
Dietmar Roehm ◽  
Ina Bornkessel-Schlesewsky ◽  
Frank Rösler ◽  
Matthias Schlesewsky

We report a series of event-related potential experiments designed to dissociate the functionally distinct processes involved in the comprehension of highly restricted lexical-semantic relations (antonyms). We sought to differentiate between influences of semantic relatedness (which are independent of the experimental setting) and processes related to predictability (which differ as a function of the experimental environment). To this end, we conducted three ERP studies contrasting the processing of antonym relations (black-white) with that of related (black-yellow) and unrelated (black-nice) word pairs. Whereas the lexical-semantic manipulation was kept constant across experiments, the experimental environment and the task demands varied: Experiment 1 presented the word pairs in a sentence context of the form The opposite of X is Y and used a sensicality judgment. Experiment 2 used a word pair presentation mode and a lexical decision task. Experiment 3 also examined word pairs, but with an antonymy judgment task. All three experiments revealed a graded N400 response (unrelated > related > antonyms), thus supporting the assumption that semantic associations are processed automatically. In addition, the experiments revealed that, in highly constrained task environments, the N400 gradation occurs simultaneously with a P300 effect for the antonym condition, thus leading to the superficial impression of an extremely “reduced” N400 for antonym pairs. Comparisons across experiments and participant groups revealed that the P300 effect is not only a function of stimulus constraints (i.e., sentence context) and experimental task, but that it is also crucially influenced by individual processing strategies used to achieve successful task performance.


2020 ◽  
Vol 8 ◽  
pp. 311-329
Author(s):  
Kushal Arora ◽  
Aishik Chakraborty ◽  
Jackie C. K. Cheung

In this paper, we propose LexSub, a novel approach towards unifying lexical and distributional semantics. We inject knowledge about lexical-semantic relations into distributional word embeddings by defining subspaces of the distributional vector space in which a lexical relation should hold. Our framework can handle symmetric attract and repel relations (e.g., synonymy and antonymy, respectively), as well as asymmetric relations (e.g., hypernymy and meronymy). In a suite of intrinsic benchmarks, we show that our model outperforms previous approaches on relatedness tasks and on hypernymy classification and detection, while being competitive on word similarity tasks. It also outperforms previous systems on extrinsic classification tasks that benefit from exploiting lexical relational cues. We perform a series of analyses to understand the behaviors of our model. Code is available at https://github.com/aishikchakraborty/LexSub .
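The core idea of relation-specific subspaces with attract and repel constraints can be illustrated in a minimal sketch. This is not the authors' implementation (their code is at the repository above); the embeddings and the projection matrix here are random stand-ins, and the margin values are illustrative assumptions — in LexSub the projections are learned from lexical resources.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, sub = 8, 3  # full embedding dim, relation-subspace dim

# Toy distributional embeddings (stand-ins for pretrained vectors).
emb = {w: rng.normal(size=dim) for w in ["hot", "warm", "cold"]}

# Projection into a relation-specific subspace; random here for
# illustration -- in practice it would be learned so that the
# relation holds inside the projected space.
P = rng.normal(size=(sub, dim))

def subspace_cos(u, v):
    """Cosine similarity after projecting into the relation subspace."""
    pu, pv = P @ u, P @ v
    return float(pu @ pv / (np.linalg.norm(pu) * np.linalg.norm(pv)))

def attract_loss(u, v, margin=0.6):
    """Symmetric attract relation (e.g. synonymy):
    penalize pairs whose subspace similarity falls below the margin."""
    return max(0.0, margin - subspace_cos(u, v))

def repel_loss(u, v, margin=0.0):
    """Symmetric repel relation (e.g. antonymy):
    penalize pairs whose subspace similarity exceeds the margin."""
    return max(0.0, subspace_cos(u, v) - margin)
```

Minimizing these hinge losses over pairs drawn from a lexical resource would nudge synonyms together and antonyms apart inside the subspace, while the full-dimensional vectors retain their distributional geometry.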


Author(s):  
Cyril Belica ◽  
Holger Keibel ◽  
Marc Kupietz ◽  
Rainer Perkuhn

Author(s):  
Martin Maiden

The historical morphology of the verb ‘snow’ in Francoprovençal presents a conundrum, in that it is clearly analogically influenced by the verb ‘rain’, for obvious reasons of lexical semantic similarity, but the locus of that influence is not the ‘root’ (the ostensible bearer of lexical meaning) but desinential inflexion-class members, which are in principle independent of any lexical meaning. Similar morphological changes are also identified for other Gallo-Romance verbs. It seems, in effect, that speakers can identify exponents of the lexical meaning of word-forms in linear sequences larger than the apparent ‘morphemic’ composition of those word-forms, even when such a composition may seem prima facie transparent and obvious. It is argued that these facts are inherently incompatible with ‘constructivist’, morpheme-based, models of morphology, and strongly compatible with what have been called ‘abstractivist’ (‘word-and-paradigm’) approaches, which generally take entire word-forms as the primary units of morphological analysis.


2013 ◽  
Vol 19 (3) ◽  
pp. 385-407 ◽  
Author(s):  
SU NAM KIM ◽  
TIMOTHY BALDWIN

This paper presents a study on the interpretation and bracketing of noun compounds (‘NCs’) based on lexical semantics. Our primary goal is to develop a method to automatically interpret NCs through the use of semantic relations. Our NC interpretation method computes lexical similarity with tagged NCs, using similarity measures derived from WordNet. We apply the interpretation method to both two- and three-term NC interpretation based on semantic roles. Finally, we demonstrate that our NC interpretation method can boost the coverage and accuracy of NC bracketing.
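The similarity-based interpretation strategy described above can be sketched as a nearest-neighbour classifier over tagged compounds. This is a simplified illustration, not the paper's system: the tiny hand-coded hypernym taxonomy stands in for WordNet, the path-based similarity is one of several WordNet measures, and the compounds and the MATERIAL label are invented examples.

```python
# Toy taxonomy as parent pointers (a stand-in for WordNet hypernym chains).
parent = {
    "apple": "fruit", "orange": "fruit", "fruit": "food",
    "juice": "beverage", "beverage": "food", "food": "entity",
    "steel": "metal", "metal": "material", "material": "entity",
    "knife": "tool", "tool": "artifact", "artifact": "entity",
    "entity": None,
}

def path_to_root(w):
    """Chain of hypernyms from a word up to the taxonomy root."""
    path = [w]
    while parent[w] is not None:
        w = parent[w]
        path.append(w)
    return path

def path_sim(a, b):
    """WordNet-style path similarity: 1 / (shortest path length + 1)."""
    pa = {w: i for i, w in enumerate(path_to_root(a))}
    best = None
    for j, w in enumerate(path_to_root(b)):
        if w in pa:
            d = pa[w] + j  # path length through the shared ancestor
            best = d if best is None or d < best else best
    return 1.0 / (best + 1) if best is not None else 0.0

# Tagged training compounds: (modifier, head) -> semantic relation.
tagged = {("apple", "juice"): "MATERIAL", ("steel", "knife"): "MATERIAL"}

def interpret(mod, head):
    """Label a new compound with the relation of its most similar tagged NC,
    scoring modifier-to-modifier plus head-to-head similarity."""
    best_nc = max(tagged, key=lambda nc: path_sim(mod, nc[0]) + path_sim(head, nc[1]))
    return tagged[best_nc]
```

A new compound such as *orange juice* would be matched against *apple juice* (high modifier and head similarity) and inherit its relation; the same component-wise scoring extends to three-term compounds once a bracketing is chosen.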

