MMoOn Core – the Multilingual Morpheme Ontology

Semantic Web ◽  
2020 ◽  
pp. 1-29
Author(s):  
Bettina Klimek ◽  
Markus Ackermann ◽  
Martin Brümmer ◽  
Sebastian Hellmann

In recent years, lexical resources have emerged rapidly in the Semantic Web. While most of this linguistic information is already machine-readable, we found that morphological information is mostly absent or only contained in semi-structured strings. An integration of morphemic data has not yet been undertaken due to the lack of domain-specific ontologies and explicit morphemic data. In this paper, we present the Multilingual Morpheme Ontology, MMoOn Core, which can be regarded as the first comprehensive ontology for the linguistic domain of morphological language data. It will be described how crucial concepts such as morphs, morphemes, word forms and meanings are represented and interrelated, and how language-specific morpheme inventories can be created as a new kind of morphological dataset. The aim of the MMoOn Core ontology is to serve as a shared semantic model for linguists and NLP researchers alike, enabling the creation, conversion, exchange, reuse and enrichment of morphological language data across different data-dependent language sciences. Various use cases are therefore illustrated to draw attention to the cross-disciplinary potential that can be realized with the MMoOn Core ontology in the context of the existing Linguistic Linked Data research landscape.
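The abstract's central modelling idea, distinguishing concrete morphs from the abstract morphemes they realize within a word form, can be sketched as plain subject-predicate-object triples. The class and property names below are assumptions for illustration only, not the published MMoOn vocabulary:

```python
# Illustrative morpheme-inventory entry for English "unhappiness": the word
# form consists of morphs, and each morph realizes an abstract morpheme.
# All "mmoon:"/"en:" names here are invented for illustration.
triples = [
    ("en:unhappiness", "rdf:type", "mmoon:Wordform"),
    ("en:unhappiness", "mmoon:consistsOfMorph", "en:morph_un"),
    ("en:unhappiness", "mmoon:consistsOfMorph", "en:morph_happi"),
    ("en:unhappiness", "mmoon:consistsOfMorph", "en:morph_ness"),
    ("en:morph_un", "mmoon:realizes", "en:morpheme_NEG"),
    ("en:morph_ness", "mmoon:realizes", "en:morpheme_NMLZ"),
    ("en:morpheme_NEG", "mmoon:hasMeaning", '"negation"'),
]

def objects(subject, predicate):
    """Return all objects of triples with the given subject and predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Decompose the word form into its morphs.
morphs = objects("en:unhappiness", "mmoon:consistsOfMorph")
```

Keeping morphs and morphemes as separate, interlinked resources is what makes such an inventory queryable in both directions, from surface segmentation to meaning and back.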

Author(s):  
Jose María Alvarez Rodríguez ◽  
José Emilio Labra Gayo ◽  
Patricia Ordoñez de Pablos

The aim of this chapter is to present a proposal and a case study for describing information about organizations in a standard way using the Linked Data approach. Several models and ontologies have been provided in order to formalize the data, structure and behaviour of organizations. Nevertheless, these attempts have not been fully accepted due to several factors: (1) missing pieces to define the status of the organization; (2) tangled parts to specify the structure (concepts and relations) between the elements of the organization; (3) lack of text properties, and other factors. These divergences imply a set of incomplete approaches to formalizing data and information about organizations. Taking into account the current trends of applying semantic web technologies and Linked Data to formalize, aggregate, and share domain-specific information, a new model for organizations that takes advantage of these initiatives is required in order to overcome existing barriers and exploit corporate information in a standard way. This work is especially relevant in order to: (1) unify existing models to provide a common specification; (2) apply semantic web technologies and the Linked Data approach; (3) provide access to the information via standard protocols, and (4) offer new services that can exploit this information to trace the evolution and behaviour of the organization over time. Finally, this work is interesting for improving the clarity and transparency of scenarios in which organizations play a key role, such as e-procurement, e-health, or financial transactions.
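One established vocabulary in this space is the W3C Organization Ontology; a minimal JSON-LD sketch of an organization and one of its units might look like the following (the instance URIs and names are invented for illustration, and this is only a fragment of what a full organizational model would cover):

```python
import json

# Minimal JSON-LD description of an organization using terms from the
# W3C Organization Ontology ("org:") and FOAF. Instance data is invented.
org_doc = {
    "@context": {
        "org": "http://www.w3.org/ns/org#",
        "foaf": "http://xmlns.com/foaf/0.1/",
    },
    "@id": "http://example.org/org/acme",
    "@type": "org:Organization",
    "foaf:name": "ACME Corp.",
    "org:hasUnit": {
        "@id": "http://example.org/org/acme/procurement",
        "@type": "org:OrganizationalUnit",
        "foaf:name": "Procurement Department",
    },
}

# Serialize so the description can be published and fetched via HTTP.
serialized = json.dumps(org_doc, indent=2)
```

Publishing such descriptions at dereferenceable URIs is what lets third parties aggregate and trace organizational information over time, as the chapter proposes.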


2011 ◽  
Vol 6 (1) ◽  
pp. 165-182 ◽  
Author(s):  
David Tarrant ◽  
Steve Hitchcock ◽  
Leslie Carr

The Web is increasingly becoming a platform for linked data. This means making connections and adding value to data on the Web. As more data becomes openly available and more people are able to use the data, it becomes more powerful. An example is file format registries and the evaluation of format risks. Here the requirement for information is now greater than the effort that any single institution can put into gathering and collating this information. Recognising that more is better, the creators of PRONOM, JHOVE, GDFR and others are joining to lead a new initiative: the Unified Digital Format Registry. Ahead of this effort, a new RDF-based framework for structuring and facilitating file format data from multiple sources, including PRONOM, has demonstrated that it can produce more links, and thus provide more answers to digital preservation questions - about format risks, applications, viewers and transformations - than the native data alone. This paper will describe this registry, P2, and its services, show how it can be used, and provide examples where it delivers more answers than the contributing resources. The P2 Registry is a reference platform to allow and encourage publication of preservation data, and also an exemplar of what can be achieved if more data is published openly online as simple machine-readable documents. This approach calls for the active participation of the digital preservation community to contribute data by simply publishing it openly on the Web as linked data.


2017 ◽  
Vol 1 (2) ◽  
pp. 456-476 ◽  
Author(s):  
Piotr Kuroczynski

Since the 1990s the application of digital 3D reconstruction and computer-based visualisation of cultural heritage has increased. Virtual reconstruction and 3D visualisation revealed a new "glittering" research space for object-oriented disciplines such as archaeology, art history and architecture. Nevertheless, the academics engaging with the emerging technology recognised early the lack of documentation standards in 3D projects, leading to the loss of the information, findings and fusion of knowledge behind the digital 3D representation. Based on the methodological fundamentals of digital 3D reconstruction, the potentials and challenges in the light of emerging Semantic Web and Web3D technologies will be introduced. The presentation describes a scientific methodology and a collaborative web-based research environment, followed by crucial features for this kind of project. As the groundwork, a human- and machine-readable "language of objects" and the implementation of these semantic patterns for spatial research on destroyed and/or never realised tangible cultural heritage will be discussed. Using examples from practice, the presentation explains the requirements of the Semantic Web (Linked Data), the role of controlled vocabularies, the architecture of the VRE and the impact of a customised integration of interactive 3D models with WebGL technology. The presentation intends to showcase the state of the art on the way to a digital research infrastructure. The focus lies on the introduction of scholarly approved and sustainable digital 3D reconstruction, compliant with recognised documentation standards and following the Linked Data requirements.


2021 ◽  
Author(s):  
Gillian Byrne ◽  
Lisa Goddard

Since 1999 the W3C has been working on a set of Semantic Web standards that have the potential to revolutionize web search. Also known as Linked Data, the Machine‐Readable Web, the Web of Data, or Web 3.0, the Semantic Web relies on highly structured metadata that allows computers to understand the relationships between objects. Semantic web standards are complex and difficult to conceptualize, but they offer solutions to many of the issues that plague libraries, including precise web search, authority control, classification, data portability, and disambiguation. This article will outline some of the benefits that linked data could have for libraries, discuss some of the non‐technical obstacles that we face in moving forward, and finally offer suggestions for practical ways in which libraries can participate in the development of the semantic web.
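The authority-control and disambiguation benefit the article names comes down to shared identifiers: two catalogues may spell a heading differently, yet both can point at the same authority URI. A minimal sketch, using an illustrative VIAF-style URI:

```python
# Two catalogue records with divergent headings for the same person.
# The authority URI is illustrative of the VIAF pattern, not guaranteed data.
catalogue_a = {
    "heading": "Twain, Mark, 1835-1910",
    "sameAs": "http://viaf.org/viaf/example-id",
}
catalogue_b = {
    "heading": "Clemens, Samuel Langhorne",
    "sameAs": "http://viaf.org/viaf/example-id",
}

# A machine needs no string matching: identical URIs mean one real-world person.
same_person = catalogue_a["sameAs"] == catalogue_b["sameAs"]
```

String-based headings require local maintenance in every catalogue; a shared URI moves that maintenance to one authority file that every library can link to.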


Author(s):  
Yusuke Tagawa ◽  
Arata Tanaka ◽  
Yuya Minami ◽  
Daichi Namikawa ◽  
Michio Simomura ◽  
...  

2020 ◽  
pp. 45-59
Author(s):  
Philipp Cimiano ◽  
Christian Chiarcos ◽  
John P. McCrae ◽  
Jorge Gracia

Author(s):  
Georg Neubauer

The main subject of the work is the visualization of typed links in Linked Data. The academic subjects relevant to the paper in general are the Semantic Web, the Web of Data and information visualization. The Semantic Web, proposed by Tim Berners-Lee in 2001, was announced as an extension of the World Wide Web. The actual area of investigation concerns the connectivity of information on the World Wide Web. To be able to explore such interconnections, visualizations are a critical requirement as well as a means of processing data in their own right. In the context of the Semantic Web, representation of information interrelations can be achieved using graphs. The aim of the article is primarily to describe the arrangement of Linked Data visualization concepts by establishing their principles in a theoretical approach. Putting design restrictions into context leads to practical guidelines. By describing the creation of two alternative visualizations of a commonly used web application representing Linked Data as a network visualization, their compatibility was tested. The application-oriented part treats the design phase, its results, and the future requirements of the project that can be derived from this test.
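What makes Linked Data links "typed" is that the predicate itself carries meaning, so a network visualization has to preserve it as an edge label rather than drawing anonymous lines. A minimal sketch of that mapping, with illustrative DBpedia-style triples:

```python
# Turn triples into the node and edge lists a network visualization consumes.
# The predicate becomes the edge's type label, so link typing survives the
# transformation into a drawing. Triple values are illustrative only.
triples = [
    ("dbpedia:Berlin", "dbo:country", "dbpedia:Germany"),
    ("dbpedia:Berlin", "dbo:locatedInArea", "dbpedia:Brandenburg"),
]

# Every subject and object becomes a node; duplicates collapse via the set.
nodes = sorted({n for s, _, o in triples for n in (s, o)})

# Each triple becomes one typed, directed edge.
edges = [{"source": s, "target": o, "type": p} for s, p, o in triples]
```

Untyped graph drawings discard exactly the information that distinguishes the Web of Data from a plain hyperlink graph, which is why the article treats edge typing as a design principle rather than decoration.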


Author(s):  
Andrew Iliadis ◽  
Wesley Stevens ◽  
Jean-Christophe Plantin ◽  
Amelia Acker ◽  
Huw Davies ◽  
...  

This panel focuses on the way that platforms have become key players in the representation of knowledge. Recently, there have been calls to combine infrastructure and platform-based frameworks to understand the nature of information exchange on the web through digital tools for knowledge sharing. The present panel builds and extends work on platform and infrastructure studies in what has been referred to as “knowledge as programmable object” (Plantin, et al., 2018), specifically focusing on how metadata and semantic information are shaped and exchanged in specific web contexts. As Bucher (2012; 2013) and Helmond (2015) show, data portability in the context of web platforms requires a certain level of semantic annotation. Semantic interoperability is the defining feature of so-called "Web 3.0"—traditionally referred to as the semantic web (Antoniou et al, 2012; Szeredi et al, 2014). Since its inception, the semantic web has privileged the status of metadata for providing the fine-grained levels of contextual expressivity needed for machine-readable web data, and can be found in products as diverse as Google's Knowledge Graph, online research repositories like Figshare, and other sources that engage in platformizing knowledge. The first paper in this panel examines the international Schema.org collaboration. The second paper investigates the epistemological implications when platforms organize data sharing. The third paper argues for the use of patents to inform research methodologies for understanding knowledge graphs. The fourth paper discusses private platforms’ extraction and collection of user metadata and the enclosure of data access.
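The semantic annotation the panel discusses is concretely visible in Schema.org markup embedded in pages as JSON-LD, which is what lets products like Google's Knowledge Graph ingest fine-grained context. A minimal sketch with invented values:

```python
import json

# The kind of Schema.org JSON-LD a page might embed so platforms can exchange
# machine-readable metadata about a resource. All values are invented.
page_markup = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example survey results",
    "creator": {"@type": "Person", "name": "A. Researcher"},
    "license": "https://creativecommons.org/licenses/by/4.0/",
}

# Serialized form, as it would appear in a <script type="application/ld+json"> tag.
jsonld = json.dumps(page_markup)
```

Because the vocabulary is shared across platforms, the same annotation is legible to search engines, research repositories, and any other consumer, which is precisely the interoperability (and the enclosure risk) the panel papers examine.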


2011 ◽  
Vol 6 (2) ◽  
pp. 209-221 ◽  
Author(s):  
Huda Khan ◽  
Brian Caruso ◽  
Jon Corson-Rikert ◽  
Dianne Dietrich ◽  
Brian Lowe ◽  
...  

In disciplines as varied as medicine, social sciences, and economics, data and their analyses are essential parts of researchers’ contributions to their respective fields. While sharing research data for review and analysis presents new opportunities for furthering research, capturing these data in digital forms and providing the digital infrastructure for sharing data and metadata pose several challenges. This paper reviews the motivations behind and design of the Data Staging Repository (DataStaR) platform, which targets specific portions of the research data curation lifecycle: data and metadata capture and sharing prior to publication, and publication to permanent archival repositories. The goal of DataStaR is to support both the sharing and publishing of data while at the same time enabling metadata creation without imposing additional overheads on researchers and librarians. Furthermore, DataStaR is intended to provide cross-disciplinary support by being able to integrate different domain-specific metadata schemas according to researchers’ needs. DataStaR’s strategy of a usable interface coupled with metadata flexibility allows for a more scalable solution for data sharing, publication, and metadata reuse.
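The pluggable-schema idea, one deposit workflow checking a record against whichever domain schema the researcher selects, can be sketched as follows; the schema and field names are invented for illustration, not DataStaR's actual schemas:

```python
# Hypothetical domain-specific metadata schemas, each listing required fields.
schemas = {
    "ecology": {"required": ["title", "creator", "species", "site"]},
    "economics": {"required": ["title", "creator", "time_period", "region"]},
}

def missing_fields(record, domain):
    """Return required fields of the chosen domain schema absent from the record."""
    return [f for f in schemas[domain]["required"] if f not in record]

# A partially filled deposit, validated against the researcher's chosen domain.
record = {
    "title": "Stream survey 2009",
    "creator": "H. Khan",
    "species": "S. trutta",
}
gaps = missing_fields(record, "ecology")
```

Because the validation logic is generic and the schemas are data, a new discipline is supported by registering another schema rather than rewriting the repository, which is the flexibility the paper argues keeps curation overhead low.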

