OntoNotes: A Unified Relational Semantic Representation

Author(s):  
Sameer S. Pradhan ◽  
Eduard Hovy ◽  
Mitch Marcus ◽  
Martha Palmer ◽  
Lance Ramshaw ◽  
...  

2007 ◽  
Vol 01 (04) ◽  
pp. 405-419 ◽  

The OntoNotes project is creating a corpus of large-scale, accurate, and integrated annotation of multiple levels of the shallow semantic structure in text. Such rich, integrated annotation covering many levels will allow for richer, cross-level models enabling significantly better automatic semantic analysis. At the same time, it demands a robust, efficient, scalable mechanism for storing and accessing these complex inter-dependent annotations. We describe a relational database representation that captures both the inter- and intra-layer dependencies and provide details of an object-oriented API for efficient, multi-tiered access to this data.
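To make the idea of relational storage for inter-dependent annotation layers concrete, here is a minimal sketch using an in-memory SQLite database; the schema, table names, and sample data are hypothetical illustrations and do not reflect the actual OntoNotes database or its API:

```python
import sqlite3

# Hypothetical multi-layer annotation schema: tokens are the base layer,
# and higher layers (word senses, coreference) reference them by key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE token (id INTEGER PRIMARY KEY, sentence_id INTEGER,
                    idx INTEGER, form TEXT);
CREATE TABLE sense (token_id INTEGER REFERENCES token(id),
                    sense_label TEXT);
CREATE TABLE coref (mention_start INTEGER REFERENCES token(id),
                    mention_end   INTEGER REFERENCES token(id),
                    chain_id INTEGER);
""")
conn.executemany("INSERT INTO token VALUES (?, ?, ?, ?)",
                 [(1, 1, 0, "Pierre"), (2, 1, 1, "Vinken")])
conn.execute("INSERT INTO sense VALUES (?, ?)", (2, "person.n.01"))

# Cross-layer access: join word forms with their sense annotations.
rows = conn.execute("""
    SELECT t.form, s.sense_label
    FROM token t JOIN sense s ON s.token_id = t.id
""").fetchall()
print(rows)  # [('Vinken', 'person.n.01')]
```

The foreign-key references are what let an API navigate from one annotation layer to another without duplicating the underlying token data.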


Author(s):  
Ryan Cotterell ◽  
Hinrich Schütze

Much like sentences are composed of words, words themselves are composed of smaller units. For example, the English word questionably can be analyzed as question+able+ly. However, this structural decomposition of the word does not directly give us a semantic representation of the word’s meaning. Since morphology obeys the principle of compositionality, the semantics of the word can be systematically derived from the meaning of its parts. In this work, we propose a novel probabilistic model of word formation that captures both the analysis of a word w into its constituent segments and the synthesis of the meaning of w from the meanings of those segments. Our model jointly learns to segment words into morphemes and compose distributional semantic vectors of those morphemes. We experiment with the model on English CELEX data and German DErivBase (Zeller et al., 2013) data. We show that jointly modeling semantics increases both segmentation accuracy and morpheme F1 by between 3% and 5%. Additionally, we investigate different models of vector composition, showing that recurrent neural networks yield an improvement over simple additive models. Finally, we study the degree to which the representations correspond to a linguist’s notion of morphological productivity.
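As a rough illustration of the simple additive composition baseline the abstract compares against: the vector for a word is the sum of the vectors of its morphemes. The morpheme vectors and their values below are invented for illustration; in the paper they are learned jointly with segmentation.

```python
import numpy as np

# Made-up 4-dimensional morpheme vectors (illustrative only).
morpheme_vecs = {
    "question": np.array([0.9, 0.1, 0.0, 0.2]),
    "able":     np.array([0.0, 0.7, 0.1, 0.0]),
    "ly":       np.array([0.0, 0.0, 0.8, 0.1]),
}

def compose_additive(segments):
    """Additive baseline: word vector = sum of its morpheme vectors."""
    return np.sum([morpheme_vecs[s] for s in segments], axis=0)

# questionably = question + able + ly
vec = compose_additive(["question", "able", "ly"])
```

An RNN composition, as the paper reports, can outperform this baseline because addition is order-insensitive and cannot model interactions between morphemes.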


Heritage ◽  
2021 ◽  
Vol 4 (2) ◽  
pp. 612-640
Author(s):  
Nikolaos Partarakis ◽  
Danai Kaplanidi ◽  
Paraskevi Doulgeraki ◽  
Effie Karuzaki ◽  
Argyro Petraki ◽  
...  

This paper presents a knowledge representation framework and tools for representing and presenting the tangible and intangible dimensions of culinary tradition as cultural heritage, including the socio-historic context of its evolution. The representation framework adheres to and extends knowledge representation standards for the Cultural Heritage (CH) domain, while providing a widely accessible web-based authoring environment to facilitate representation activities. In close collaboration with the social sciences and humanities, this work enables the exploitation of ethnographic research outcomes by providing a systematic approach to representing culinary tradition in the form of recipes, both in an abstract form for their preservation and as a semantic representation of their execution captured on-site during ethnographic research.


2021 ◽  
Vol 13 (4) ◽  
pp. 742
Author(s):  
Jian Peng ◽  
Xiaoming Mei ◽  
Wenbo Li ◽  
Liang Hong ◽  
Bingyu Sun ◽  
...  

Scene understanding of remote sensing images is of great significance in various applications. Its fundamental problem is how to construct representative features. Various convolutional neural network architectures have been proposed for automatically learning features from images. However, is the current practice of configuring the same architecture to learn all the data, while ignoring the differences between images, the right one? It seems contrary to our intuition: some images are clearly easier to recognize, and some are harder. This problem is the gap between the characteristics of the images and the features learned by specific network structures. Unfortunately, the literature so far lacks an analysis of the two. In this paper, we explore this problem from three aspects: first, we build a visual-based evaluation pipeline of scene complexity to characterize the intrinsic differences between images; second, we analyze the relationship between semantic concepts and feature representations, i.e., the scalability and hierarchy of features, which are the essential elements of CNNs with different architectures, for remote sensing scenes of different complexity; third, we introduce class activation mapping (CAM), a visualization method that explains feature learning within neural networks, to analyze the relationship between scenes of different complexity and semantic feature representations. The experimental results show that a complex scene needs deeper, multi-scale features, whereas a simpler scene needs lower-level, single-scale features. Moreover, the concept of a complex scene depends more strongly on the joint semantic representation of multiple objects. Finally, we propose a framework for predicting the scene complexity of an image and use it to design a depth- and scale-adaptive model, which achieves higher performance with fewer parameters than the original model, demonstrating the potential significance of scene complexity.
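For readers unfamiliar with CAM, a minimal sketch of the computation: the activation map for a class is the class-weight-weighted sum over the final convolutional feature maps, normalized for visualization. The shapes and random values below are illustrative, not taken from the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
feature_maps = rng.random((512, 7, 7))   # C x H x W, last conv layer
class_weights = rng.random(512)          # FC weights for one target class

# CAM: weight each channel's feature map by the class weight, then sum
# over channels, yielding an H x W spatial importance map.
cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))

# Normalize to [0, 1] for overlaying on the input image.
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

Upsampled to the input resolution, such a map highlights which image regions drove the prediction, which is what lets the paper relate scene complexity to the learned semantic representations.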


1985 ◽  
Vol 13 (4) ◽  
pp. 371-376 ◽  
Author(s):  
Barbara Von Eckardt ◽  
Mary C. Potter
