Viewing the Web as a Distributed Knowledge Base

Author(s):  
Serge Abiteboul
Émilien Antoine
Julia Stoyanovich

2016, Vol 28 (2), pp. 241-251
Author(s):  
Luciane Lena Pessanha Monteiro
Mark Douglas de Azevedo Jacyntho

The study addresses the use of Semantic Web and Linked Data principles, proposed by the World Wide Web Consortium, for the development of a Web application for the semantic management of scanned documents. The main goal is to record scanned documents and describe them in a way that machines can understand and process, filtering content and assisting in the search for such documents during decision-making. To this end, machine-understandable metadata, created with reference Linked Data ontologies, are associated with the documents, forming a knowledge base. To further enrich the process, a (semi)automatic mashup of these metadata with data from the new Web of Linked Data is carried out, considerably increasing the scope of the knowledge base and enabling new Web data related to the content of the stored documents to be extracted and combined, without the user making any effort or perceiving the complexity of the whole process.
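The mashup step described above can be illustrated with a minimal sketch. All resource names (`doc:42`, `dbpedia:ACME_Corp`, and the sample statements) are hypothetical; triples are represented as plain Python tuples rather than through an RDF library, purely for illustration.

```python
# Hypothetical sketch: a scanned document described with Linked Data
# style metadata, represented as (subject, predicate, object) triples
# using Dublin Core terms.

DC = "http://purl.org/dc/terms/"

# Metadata recorded for the scanned document by the application.
local_metadata = [
    ("doc:42", DC + "title", "Contract 2015-07"),
    ("doc:42", DC + "creator", "dbpedia:ACME_Corp"),
    ("doc:42", DC + "subject", "dbpedia:Procurement"),
]

# (Semi)automatic mashup: statements fetched from the Web of Linked Data
# about resources the document's metadata already mentions.
external_data = [
    ("dbpedia:ACME_Corp", "rdfs:label", "ACME Corporation"),
    ("dbpedia:ACME_Corp", "dbo:industry", "dbpedia:Manufacturing"),
]

# The merged knowledge base covers both local and Web-derived facts.
knowledge_base = local_metadata + external_data

def describe(resource, kb):
    """Return all (predicate, object) statements about a resource."""
    return [(p, o) for s, p, o in kb if s == resource]

# After the mashup, the user can query facts that were never entered
# locally, e.g. everything known about the document's creator.
print(describe("dbpedia:ACME_Corp", knowledge_base))
```

The point of the sketch is that queries over the merged base transparently reach the Web-derived statements, which is what lets the user benefit from the enrichment without perceiving it.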


Author(s):  
Heiko Paulheim
Christian Bizer

Linked Data on the Web is created from structured data sources (such as relational databases), from semi-structured sources (such as Wikipedia), or from unstructured sources (such as text). In the latter two cases, the generated Linked Data will likely be noisy and incomplete. In this paper, we present two algorithms that exploit statistical distributions of properties and types to enhance the quality of incomplete and noisy Linked Data sets: SDType adds missing type statements, and SDValidate identifies faulty statements. Neither algorithm uses external knowledge, i.e., they operate only on the data itself. We evaluate the algorithms on the DBpedia and NELL knowledge bases, showing that they are both accurate and scalable. Both algorithms have been used in building the DBpedia 3.9 release: SDType added 3.4 million missing type statements, and SDValidate removed 13,000 erroneous RDF statements from the knowledge base.
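The core idea behind SDType can be sketched as a weighted vote: each property carries a statistical distribution over the types of the resources it is used with, and an untyped resource accumulates type scores from the properties observed on it. This is an illustrative reconstruction of the idea, not the authors' implementation; the property names and probability values below are invented for the example.

```python
from collections import defaultdict

# Hypothetical distributions P(type | property), which SDType would
# learn from the statistics of the data set itself (no external
# knowledge). Values here are made up for illustration.
type_dist = {
    "dbo:birthPlace": {"dbo:Person": 0.90, "dbo:Place": 0.05},
    "dbo:almaMater":  {"dbo:Person": 0.85, "dbo:Book": 0.05},
}

def sd_type(properties):
    """Score candidate types for an untyped resource by averaging the
    type distributions of the properties observed on it."""
    scores = defaultdict(float)
    for p in properties:
        for t, prob in type_dist.get(p, {}).items():
            scores[t] += prob / len(properties)
    return dict(scores)

# A resource used with these two properties is almost certainly a person.
scores = sd_type(["dbo:birthPlace", "dbo:almaMater"])
best = max(scores, key=scores.get)
```

In the same spirit, SDValidate flags a statement as suspicious when the types of its subject and object are statistically unlikely for the property connecting them.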


Author(s):  
Antonio F. L. Jacob
Eulália C. da Mata
Ádamo L. Santana
Carlos R. L. Francês
João C. W. A. Costa
...  

The Web is giving users greater freedom to create and obtain information in a more dynamic and appropriate way. One means of obtaining information on this platform, which complements or replaces other forms, is the use of conversation robots, or Chatterbots. Several factors must be taken into account for the effective use of this technology. The first is the need for a team of professionals from various fields to build the system's knowledge base and provide it with a wide range of responses, i.e., interactions. Ensuring that such a system can be targeted at children is a multidisciplinary task. In this context, this chapter carries out a study of Chatterbot technology and shows some of the changes that have been implemented to make it effective for children. It also highlights the need for a shift away from traditional methods of interaction so that an affective computing model can be implemented.


Author(s):  
Martha Garcia-Murillo
Paula Maxwell
Simon Boyce
Raymond St. Denis
William Bistline

This case focuses on the challenges of managing a help desk that supports computer users. The Information Center (IC) uses two main technologies to provide this service: the call-distribution system and the knowledge base, which is also available on the Web. The choice of technologies affected the service provided by the help desk staff. Specifically, the call-distribution system was unable to provide enough information about the number of calls answered, dropped, and allocated among the different staff members. The hospital knowledge base, on the other hand, is built from people's documentation of problems and their selection of keywords, which has led to inconsistencies in data entry. One of the management challenges for the Information Center is to foster self-help and minimize the number of requests to the IC staff. This case presents the difficulties and some of the initiatives that the IC has considered to solve these problems.


Author(s):  
Christopher Walton

In the introductory chapter of this book, we discussed the means by which knowledge can be made available on the Web, that is, the representation of knowledge in a form that can be automatically processed by a computer. To recap, we identified two essential steps deemed necessary to achieve this task:

1. We discussed the need to agree on a suitable structure for the knowledge that we wish to represent. This is achieved through the construction of a semantic network, which defines the main concepts of the knowledge and the relationships between these concepts. We presented an example network containing the main concepts needed to differentiate between kinds of cameras. Our network is a conceptualization, or an abstract view of a small part of the world. A conceptualization is defined formally in an ontology, which is in essence a vocabulary for knowledge representation.

2. We discussed the construction of a knowledge base, which is a store of knowledge about a domain in machine-processable form; essentially a database of knowledge. A knowledge base is constructed through the classification of a body of information according to an ontology, and the result is a store of facts and rules that describe the domain. Our example described the classification of different camera features to form a knowledge base. The knowledge base is expressed formally in the language of the ontology over which it is defined.

In this chapter we elaborate on these two steps to show how ontologies and knowledge bases can be defined specifically for the Web. This will enable us to construct Semantic Web applications that make use of this knowledge. The chapter is devoted to a detailed explanation of the syntax and pragmatics of the RDF, RDFS, and OWL Semantic Web standards. The Resource Description Framework (RDF) is an established standard for knowledge representation on the Web. Taken together with the associated RDF Schema (RDFS) standard, it gives us a language for representing simple ontologies and knowledge bases on the Web.
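The two steps above can be sketched in miniature. This is not the book's own example code: triples are modelled as plain Python tuples, and the names (`ex:Camera`, `ex:DigitalCamera`, `ex:myCamera`) are illustrative stand-ins for the camera domain the chapter discusses. The sketch shows the one piece of machinery RDFS adds here, inference along `rdfs:subClassOf`.

```python
# Step 1: the ontology -- a conceptualization of the camera domain,
# expressed as RDFS-style subclass statements.
ontology = [
    ("ex:DigitalCamera", "rdfs:subClassOf", "ex:Camera"),
    ("ex:SLRCamera", "rdfs:subClassOf", "ex:Camera"),
]

# Step 2: the knowledge base -- facts classified under the ontology.
facts = [
    ("ex:myCamera", "rdf:type", "ex:DigitalCamera"),
]

def types_of(resource, facts, ontology):
    """Return all classes of a resource, including those reached by
    following rdfs:subClassOf links in the ontology."""
    inferred = {o for s, p, o in facts
                if s == resource and p == "rdf:type"}
    changed = True
    while changed:  # close the set under subclass reasoning
        changed = False
        for s, p, o in ontology:
            if p == "rdfs:subClassOf" and s in inferred and o not in inferred:
                inferred.add(o)
                changed = True
    return inferred

# The stated fact says only "digital camera"; the ontology lets a
# machine conclude it is also a camera.
print(types_of("ex:myCamera", facts, ontology))
```

This separation, a reusable vocabulary of classes on one side and a body of classified facts on the other, is exactly the division of labour between RDFS and RDF that the rest of the chapter develops.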


2007, pp. 329-360
Author(s):  
Hebe Vessuri
María Victoria Canino
Isabelle Sánchez-Rose
