Storing an OWL 2 Ontology in a Relational Database Structure

Author(s):  
Henrihs Gorskis ◽  
Arkady Borisov

This paper examines the possibility of storing OWL 2 based ontology information in a classical relational database and reviews some existing methods for ontology databases. In most cases, a database is a fitting solution for storing and sharing information among systems, clients or agents. Similarly, domain ontology information can be made more accessible to such systems by storing and providing it in database form. To date, there is no consensus on a specific ontology database structure. The main focus of this paper is on OWL 2 as the basis for describing ontology-centric information in a database. The Web Ontology Language OWL 2 is a language for describing ontology information for the Semantic Web; it consists of a list of reserved words and grammatical rules for defining the many parts of ontology knowledge. Based on this language specification, the paper examines how a relational database can be structured to describe domain ontology information. By creating a database structure based on OWL 2 and its descriptive abilities, it is feasible to store domain ontology information in a usable way. Multiple approaches to storing ontology information and OWL in databases already exist; most of them are based on storing RDF data or provide persistence for specific OWL software libraries. The examination of existing approaches in this paper shows how they differ from the goal of a general, more easily usable and less library-specific database for domain ontology-centric information. The paper describes a simple relational database capable of holding and providing ontology knowledge on demand, which can be implemented on a database management system of choice.
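The general idea of the abstract above, storing OWL 2 entities and axioms directly in relational tables, can be sketched as follows. This is a minimal illustration, not the paper's actual schema; all table and column names (`entity`, `axiom`, `iri`, `kind`) are hypothetical.

```python
import sqlite3

# In-memory database; a hypothetical relational structure for OWL 2
# entities and binary axioms (illustrative only).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One table for OWL 2 entities, one for axioms linking them.
cur.executescript("""
CREATE TABLE entity (
    id   INTEGER PRIMARY KEY,
    iri  TEXT NOT NULL UNIQUE,
    kind TEXT NOT NULL CHECK (kind IN
        ('Class', 'ObjectProperty', 'DataProperty', 'NamedIndividual'))
);
CREATE TABLE axiom (
    id      INTEGER PRIMARY KEY,
    type    TEXT NOT NULL,            -- e.g. 'SubClassOf', 'ClassAssertion'
    subject INTEGER NOT NULL REFERENCES entity(id),
    object  INTEGER NOT NULL REFERENCES entity(id)
);
""")

# Store a tiny fragment: ex:Dog SubClassOf ex:Animal.
cur.execute("INSERT INTO entity (iri, kind) VALUES (?, ?)", ("ex:Dog", "Class"))
cur.execute("INSERT INTO entity (iri, kind) VALUES (?, ?)", ("ex:Animal", "Class"))
cur.execute("INSERT INTO axiom (type, subject, object) VALUES ('SubClassOf', 1, 2)")

# Query: all asserted superclasses of ex:Dog.
rows = cur.execute("""
    SELECT sup.iri FROM axiom a
    JOIN entity sub ON a.subject = sub.id
    JOIN entity sup ON a.object  = sup.id
    WHERE a.type = 'SubClassOf' AND sub.iri = 'ex:Dog'
""").fetchall()
print(rows)  # [('ex:Animal',)]
```

Because the schema is plain SQL, such a structure could be created on any relational database management system, which is the portability point the abstract makes.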

2006 ◽  
Vol 35 (3) ◽  
Author(s):  
Ernestas Vysniauskas ◽  
Lina Nemuraite

The current work has arisen from the growing importance of ontology modelling in Information Systems development. Due to the emerging technologies of the Semantic Web, it is desirable to use the Web Ontology Language OWL for this purpose. On the other hand, relational database technology has long provided the best facilities for storing, updating and manipulating problem domain information. Algorithms for transforming a domain ontology, described in OWL, into a relational database are proposed, and the methodology is illustrated with an example.
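The abstract's transformation algorithms are not reproduced here, but their general direction, mapping OWL classes to tables and datatype properties to columns, can be sketched as below. The function name, type map, and example class are illustrative assumptions; the actual algorithms also cover object properties, class hierarchies, and restrictions.

```python
def class_to_ddl(cls_name, data_props):
    """Map an OWL class and its datatype properties to a CREATE TABLE
    statement: the class becomes a table, each property a column.
    Illustrative sketch only, not the paper's full transformation."""
    type_map = {"xsd:string": "TEXT", "xsd:integer": "INTEGER",
                "xsd:decimal": "REAL"}
    cols = ["id INTEGER PRIMARY KEY"]
    for prop, xsd_type in data_props:
        cols.append(f"{prop} {type_map.get(xsd_type, 'TEXT')}")
    return f"CREATE TABLE {cls_name} ({', '.join(cols)});"

# A class Person with two datatype properties becomes one table:
ddl = class_to_ddl("Person", [("name", "xsd:string"), ("age", "xsd:integer")])
print(ddl)
# CREATE TABLE Person (id INTEGER PRIMARY KEY, name TEXT, age INTEGER);
```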


Author(s):  
Hussein Ali Ahmed Ghanim ◽  
László Kovács

E-learning is an important support mechanism for educational systems, increasing the efficiency of the education process for both students and teachers. Current e-learning systems typically lack metacognitive awareness, adaptive tutoring, and time management support, and have not always met learners' expectations. In this study, we introduce a novel ontological model of the learning process in the e-learning domain. Within this framework, we have built a domain ontology that represents learning knowledge; the resulting domain ontology covers the whole learning process. We focus on a learning process ontology model that conceptualizes knowledge constructions such as learning courses, and we present the created course and learning process ontology in detail. The model comprises three layers: the top layer defines a general framework of the learning process, the conceptual model layer defines the framework of the actual learning process, and the course ontology model layer contains the knowledge units of the learning process. The prototype ontology is constructed in Protégé and managed through the Java Web Ontology Language application programming interface (OWL API). As a result, our model can address the problems of current e-tutor systems and can be used for different domains in e-tutor systems, offering standardization, reusability, flexibility, and open knowledge. By applying this model, isolated databases can be avoided. The constructed ontology can be used in the future to control adaptive intelligent e-tutor frameworks.
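The three-layer structure described in the abstract can be illustrated with a small sketch. The actual model lives in OWL and is managed via the Java OWL API; the plain-Python data structure and concept names below (`Course`, `KnowledgeUnit`) are hypothetical stand-ins used only to show how the layers refine one another.

```python
# Hypothetical three-layer model: each layer refines the one above it.
ontology = {
    "top_layer":              {"concept": "LearningProcess"},
    "conceptual_model_layer": {"concept": "Course",
                               "refines": "LearningProcess"},
    "course_ontology_layer":  {"concept": "KnowledgeUnit",
                               "refines": "Course"},
}

def layer_chain(onto, leaf):
    """Walk 'refines' links from a leaf concept up to the top layer."""
    by_concept = {v["concept"]: v for v in onto.values()}
    chain = [leaf]
    while "refines" in by_concept[chain[-1]]:
        chain.append(by_concept[chain[-1]]["refines"])
    return chain

print(layer_chain(ontology, "KnowledgeUnit"))
# ['KnowledgeUnit', 'Course', 'LearningProcess']
```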


1990 ◽  
Vol 80 (6B) ◽  
pp. 1833-1851 ◽  
Author(s):  
Thomas C. Bache ◽  
Steven R. Bratt ◽  
James Wang ◽  
Robert M. Fung ◽  
Cris Kobryn ◽  
...  

Abstract The Intelligent Monitoring System (IMS) is a computer system for processing data from seismic arrays and simpler stations to detect, locate, and identify seismic events. The first operational version processes data from two high-frequency arrays (NORESS and ARCESS) in Norway. The IMS computers and functions are distributed between the NORSAR Data Analysis Center (NDAC) near Oslo and the Center for Seismic Studies (Center) in Arlington, Virginia. The IMS modules at NDAC automatically retrieve data from a disk buffer, detect signals, compute signal attributes (amplitude, slowness, azimuth, polarization, etc.), and store them in a commercial relational database management system (DBMS). IMS makes scheduled (e.g., hourly) transfers of the data to a separate DBMS at the Center. Arrival of new data automatically initiates a “knowledge-based system (KBS)” that interprets these data to locate and identify (earthquake, mine blast, etc.) seismic events. This KBS uses general and area-specific seismological knowledge represented in rules and procedures. For each event, unprocessed data segments (e.g., 7 min for regional events) are retrieved from NDAC for subsequent display and analyst review. The interactive analysis modules include integrated waveform and map display/manipulation tools for efficient analyst validation or correction of the solutions produced by the automated system. Another KBS compares the analyst and automatic solutions to mark overruled elements of the knowledge base. Performance analysis statistics guide subsequent changes to the knowledge base so it improves with experience. The IMS is implemented on networked Sun workstations, with a 56 kbps satellite link bridging the NDAC and Center computer networks. The software architecture is modular and distributed, with processes communicating by messages and sharing data via the DBMS. 
The IMS processing requirements are easily met with major processes (i.e., signal processing, KBS, and DBMS) on separate Sun 4/2xx workstations. This architecture facilitates expansion in functionality and number of stations. The first version was operated continuously for 8 weeks in late 1989. The Center functions were then transferred to NDAC for subsequent operation. Later versions will be distributed among NDAC, Scripps/IGPP (San Diego), and the Center to process data from many stations and arrays. The IMS design is ambitious in its integration of many new computer technologies, but the operational performance of the first version demonstrates its validity. Thus, IMS provides a new generation of automated seismic event monitoring capability.
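The architectural pattern the abstract describes, modular processes that communicate by messages while sharing data through the DBMS, can be sketched in miniature as follows. The table, field, and function names are illustrative assumptions, not those of the actual IMS software; a queue and an in-memory database stand in for real inter-process messaging and the commercial DBMS.

```python
import queue
import sqlite3

# Shared relational store and message channel between modules.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE arrival (
    id INTEGER PRIMARY KEY,
    station TEXT, amplitude REAL, azimuth REAL)""")
messages = queue.Queue()  # stands in for inter-process messaging

def detector(station, amplitude, azimuth):
    """Signal-processing module: store signal attributes in the DBMS,
    then announce the new arrival by message."""
    cur = db.execute(
        "INSERT INTO arrival (station, amplitude, azimuth) VALUES (?, ?, ?)",
        (station, amplitude, azimuth))
    messages.put(cur.lastrowid)

def kbs():
    """Knowledge-based module: triggered by a message, it fetches the
    announced data from the shared DBMS for interpretation."""
    arrival_id = messages.get()
    return db.execute(
        "SELECT station, amplitude FROM arrival WHERE id = ?",
        (arrival_id,)).fetchone()

detector("NORESS", 12.5, 42.0)
print(kbs())  # ('NORESS', 12.5)
```

Keeping the data in the DBMS and passing only small messages is what lets each module run on its own workstation, as the abstract notes.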

