Generating Natural Language Descriptions from OWL Ontologies: the NaturalOWL System

2013 · Vol 48 · pp. 671-715
Author(s): I. Androutsopoulos, G. Lampouras, D. Galanis

We present NaturalOWL, a natural language generation system that produces texts describing individuals or classes of OWL ontologies. Unlike simpler OWL verbalizers, which typically express a single axiom at a time in controlled, often not entirely fluent natural language primarily for the benefit of domain experts, we aim to generate fluent and coherent multi-sentence texts for end-users. With a system like NaturalOWL, one can publish information in OWL on the Web, along with automatically produced corresponding texts in multiple languages, making the information accessible not only to computer programs and domain experts, but also to end-users. We discuss the processing stages of NaturalOWL, the optional domain-dependent linguistic resources that the system can use at each stage, and why they are useful. We also present trials showing that when the domain-dependent linguistic resources are available, NaturalOWL produces significantly better texts compared to a simpler verbalizer, and that the resources can be created with relatively light effort.
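
To give a concrete feel for the kind of one-axiom-at-a-time verbalization that the paper contrasts NaturalOWL against, here is a minimal sketch using rdflib. The ontology file name and the English template wording are illustrative assumptions, not part of NaturalOWL itself.

```python
# Minimal sketch of a one-axiom-at-a-time OWL verbalizer, the kind of
# baseline NaturalOWL improves on. The ontology file "wines.owl" and
# the sentence templates are assumptions for illustration.
from rdflib import Graph, RDFS, URIRef

g = Graph()
g.parse("wines.owl", format="xml")  # hypothetical RDF/XML ontology file

def label(node):
    """Prefer an rdfs:label; fall back to the local part of the URI."""
    lbl = g.value(node, RDFS.label)
    return str(lbl) if lbl else str(node).rsplit("/", 1)[-1].rsplit("#", 1)[-1]

# Express each rdfs:subClassOf axiom as one (not very fluent) sentence.
for sub, _, sup in g.triples((None, RDFS.subClassOf, None)):
    if isinstance(sub, URIRef) and isinstance(sup, URIRef):
        print(f"Every {label(sub)} is a {label(sup)}.")
```

Each axiom becomes an isolated sentence; NaturalOWL's contribution is aggregating such facts into coherent multi-sentence text.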

2020 · Vol 34 (05) · pp. 7375-7382
Author(s): Prithviraj Ammanabrolu, Ethan Tien, Wesley Cheung, Zhaochen Luo, William Ma, ...

Neural network based approaches to automated story plot generation attempt to learn how to generate novel plots from a corpus of natural language plot summaries. Prior work has shown that a semantic abstraction of sentences called events improves neural plot generation and allows one to decompose the problem into: (1) the generation of a sequence of events (event-to-event) and (2) the transformation of these events into natural language sentences (event-to-sentence). However, typical neural language generation approaches to event-to-sentence can ignore the event details and produce grammatically correct but semantically unrelated sentences. We present an ensemble-based model that generates natural language guided by events. We provide results—including a human subjects study—for a full end-to-end automated story generation system showing that our method generates more coherent and plausible stories than baseline approaches.
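
The event abstraction in this line of work reduces each sentence to a small tuple such as (subject, verb, object, modifier). The sketch below illustrates that reduction with an ordinary spaCy dependency parse; the published systems use their own, more elaborate event extraction, so this is only an approximation of the idea.

```python
# Sketch of the "event" abstraction: each sentence is reduced to a
# (subject, verb, object, modifier) tuple. Extraction here uses a plain
# spaCy dependency parse purely as an illustration.
import spacy

nlp = spacy.load("en_core_web_sm")

EMPTY = "<NE>"  # placeholder for a missing slot

def sentence_to_event(sentence: str) -> tuple:
    doc = nlp(sentence)
    subj = verb = obj = mod = EMPTY
    for tok in doc:
        if tok.dep_ == "ROOT" and tok.pos_ == "VERB":
            verb = tok.lemma_
        elif tok.dep_ in ("nsubj", "nsubjpass"):
            subj = tok.lemma_
        elif tok.dep_ in ("dobj", "obj"):
            obj = tok.lemma_
        elif tok.dep_ == "prep":
            mod = tok.lemma_
    return (subj, verb, obj, mod)

print(sentence_to_event("The knight rode into the dark forest."))
# -> ('knight', 'ride', '<NE>', 'into')  (roughly; parses vary by model)
```

The event-to-sentence stage then has to invert this lossy mapping, which is exactly where the "semantically unrelated output" failure mode the abstract mentions arises.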


2006 · Vol 32 (2) · pp. 223-262
Author(s): Diana Inkpen, Graeme Hirst

Choosing the wrong word in a machine translation or natural language generation system can convey unwanted connotations, implications, or attitudes. The choice between near-synonyms such as error, mistake, slip, and blunder—words that share the same core meaning, but differ in their nuances—can be made only if knowledge about their differences is available. We present a method to automatically acquire a new type of lexical resource: a knowledge base of near-synonym differences. We develop an unsupervised decision-list algorithm that learns extraction patterns from a special dictionary of synonym differences. The patterns are then used to extract knowledge from the text of the dictionary. The initial knowledge base is later enriched with information from other machine-readable dictionaries. Information about the collocational behavior of the near-synonyms is acquired from free text. The knowledge base is used by Xenon, a natural language generation system that shows how the new lexical resource can be used to choose the best near-synonym in specific situations.
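
One ingredient of this approach, choosing among near-synonyms by their collocational behavior in free text, can be illustrated with a toy scorer. The corpus counts below are invented for illustration and the function names are hypothetical; the actual system combines such evidence with the learned knowledge base of nuance differences.

```python
# Toy sketch of collocation-driven near-synonym choice, one kind of
# evidence Xenon uses. The co-occurrence counts are invented.
from collections import defaultdict

# cooccurrence[(near_synonym, context_word)] -> corpus count (assumed)
cooccurrence = defaultdict(int, {
    ("error", "fatal"): 120, ("mistake", "fatal"): 35,
    ("slip", "fatal"): 2,    ("blunder", "fatal"): 8,
    ("error", "honest"): 10, ("mistake", "honest"): 90,
    ("slip", "honest"): 5,   ("blunder", "honest"): 3,
})

def choose_near_synonym(candidates, context_words):
    """Pick the candidate with the strongest collocational support."""
    def score(word):
        return sum(cooccurrence[(word, c)] for c in context_words)
    return max(candidates, key=score)

print(choose_near_synonym(["error", "mistake", "slip", "blunder"],
                          ["fatal"]))   # -> 'error'
print(choose_near_synonym(["error", "mistake", "slip", "blunder"],
                          ["honest"]))  # -> 'mistake'
```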


2020 · Vol 34 (05) · pp. 7570-7577
Author(s): Zewen Chi, Li Dong, Furu Wei, Wenhui Wang, Xian-Ling Mao, ...

In this work we focus on transferring supervision signals of natural language generation (NLG) tasks between multiple languages. We propose to pre-train the encoder and the decoder of a sequence-to-sequence model under both monolingual and cross-lingual settings. The pre-training objective encourages the model to represent different languages in a shared space, so that we can conduct zero-shot cross-lingual transfer. After the pre-training procedure, we use monolingual data to fine-tune the pre-trained model on downstream NLG tasks. Then the sequence-to-sequence model trained in a single language can be directly evaluated beyond that language (i.e., accepting multi-lingual input and producing multi-lingual output). Experimental results on question generation and abstractive summarization show that our model outperforms the machine-translation-based pipeline methods for zero-shot cross-lingual generation. Moreover, cross-lingual transfer improves NLG performance of low-resource languages by leveraging rich-resource language data. Our implementation and data are available at https://github.com/CZWin32768/xnlg.
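
The paper's own model and code are at the GitHub link above. As a separate illustration of the general idea, a multilingually pre-trained sequence-to-sequence model generating in a language other than the input's, the sketch below uses mBART-50 from Hugging Face transformers. This is not XNLG, and this particular checkpoint is a translation model; it only shows the mechanics of forcing multilingual output from a shared encoder-decoder.

```python
# Illustration (NOT the paper's XNLG model): a multilingual seq2seq
# model decoding into a different language than its input, using the
# off-the-shelf mBART-50 checkpoint from Hugging Face transformers.
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_name = "facebook/mbart-large-50-many-to-many-mmt"
tokenizer = MBart50TokenizerFast.from_pretrained(model_name, src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_name)

text = "The committee approved the new budget after a long debate."
batch = tokenizer(text, return_tensors="pt")

# Force the decoder to start in Chinese: English input, Chinese output.
generated = model.generate(
    **batch,
    forced_bos_token_id=tokenizer.lang_code_to_id["zh_CN"],
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```

The paper's claim is stronger than this: after fine-tuning on, say, English summarization data only, the shared representation space lets the same model summarize in other languages zero-shot.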


2006 · Vol 13 (3) · pp. 191-233
Author(s): I. Androutsopoulos, J. Oberlander, V. Karkaletsis

We present the source authoring facilities of a natural language generation system that produces personalised descriptions of objects in multiple natural languages starting from language-independent symbolic information in ontologies and databases as well as pieces of canned text. The system has been tested in applications ranging from museum exhibitions to presentations of computer equipment for sale. We discuss the architecture of the overall system, the resources that the authors manipulate, the functionality of the authoring facilities, the system's personalisation mechanisms, and how they relate to source authoring. A usability evaluation of the authoring facilities is also presented, followed by more recent work on reusing information extracted from existing databases and documents, and supporting the OWL ontology specification language.
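
The core pattern described here, language-independent symbolic facts rendered through per-language templates and lexicon entries, can be sketched compactly. The facts, templates, and lexicon below are invented examples, not the system's actual resources.

```python
# Sketch of generation from language-independent facts plus per-language
# templates, the pattern the authoring facilities described above
# support. All data below is invented for illustration.
facts = [
    ("exhibit42", "made-of", "marble"),
    ("exhibit42", "created-during", "classical-period"),
]

templates = {
    "en": {
        "made-of": "This exhibit is made of {value}.",
        "created-during": "It was created during the {value}.",
    },
    "el": {  # Greek
        "made-of": "Αυτό το έκθεμα είναι φτιαγμένο από {value}.",
        "created-during": "Δημιουργήθηκε κατά την {value}.",
    },
}

lexicon = {
    "en": {"marble": "marble", "classical-period": "classical period"},
    "el": {"marble": "μάρμαρο", "classical-period": "κλασική περίοδο"},
}

def describe(entity, lang):
    sentences = [
        templates[lang][prop].format(value=lexicon[lang][value])
        for subj, prop, value in facts if subj == entity
    ]
    return " ".join(sentences)

print(describe("exhibit42", "en"))
print(describe("exhibit42", "el"))
```

Authoring, in this framing, means editing the facts, templates, and lexicon rather than writing parallel texts by hand, which is what the usability evaluation in the paper measures.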


2000 · Vol 26 (2) · pp. 107-138
Author(s): Robert Rubinoff

Natural language generation is usually divided into separate text planning and linguistic components. This division, though, assumes that the two components can operate independently, which is not always true. The IGEN generator eliminates the need for this assumption; it handles interactions between the components without sacrificing the advantages of modularity. IGEN accomplishes this by means of annotations that its linguistic component places on the structures it builds; these annotations provide an abstract description of the effects of particular linguistic choices, allowing the planner to evaluate these choices without needing any linguistic knowledge. This approach allows IGEN to vary the work done by each component independently, even in cases where the final output depends on interactions between them. In addition, since IGEN explicitly models the effects of linguistic choices, it can gracefully handle situations where the available time or linguistic resources are limited.
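
The annotation mechanism can be sketched as follows: each candidate realization carries an abstract description of its effects, and the planner chooses among candidates by inspecting only those annotations, never the text itself. Class and field names here are illustrative, not IGEN's.

```python
# Sketch of IGEN's annotation idea: the linguistic component attaches
# abstract effect descriptions to each realization option, and the
# planner chooses using only those annotations (no linguistic
# knowledge). Names and fields are hypothetical.
from dataclasses import dataclass

@dataclass
class AnnotatedChoice:
    text: str            # candidate realization (opaque to the planner)
    conveys: frozenset   # which communicative goals this wording satisfies
    length: int          # abstract cost: longer takes more time

def plan(goals, choices, time_budget):
    """Pick the choice covering the most goals within the time budget.

    The planner inspects only the annotations, never the text itself."""
    affordable = [c for c in choices if c.length <= time_budget]
    return max(affordable, key=lambda c: len(goals & c.conveys), default=None)

choices = [
    AnnotatedChoice("The valve is open.", frozenset({"state"}), 4),
    AnnotatedChoice("The intake valve on line 2 is currently open.",
                    frozenset({"state", "which-valve"}), 9),
]

goals = frozenset({"state", "which-valve"})
print(plan(goals, choices, time_budget=10).text)  # fuller wording fits
print(plan(goals, choices, time_budget=5).text)   # falls back gracefully
```

Shrinking the time budget demonstrates the graceful degradation the abstract describes: the planner drops to a cheaper wording without needing to know why it is cheaper linguistically.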

