ARALD: Arabic Annotation Using Linked Data

2021 ◽  
Vol 26 (2) ◽  
pp. 143-149
Author(s):  
Abdelghani Bouziane ◽  
Djelloul Bouchiha ◽  
Redha Rebhi ◽  
Giulio Lorenzini ◽  
Noureddine Doumi ◽  
...  

The evolution of the traditional Web into the Semantic Web makes the machine a first-class citizen on the Web and increases the discoverability and accessibility of unstructured Web-based data. This development makes it possible to use Linked Data technology as the background knowledge base for unstructured data, especially the texts now available on the Web in massive quantities. Given any text, the main challenge is determining the most relevant DBpedia information with minimal effort and time. However, DBpedia annotation tools, such as DBpedia Spotlight, have mainly targeted the English and Latin-script DBpedia versions. The situation of the Arabic language is less bright: Arabic Web content does not reflect the importance of this language. Thus, we have developed an approach to annotate Arabic texts with Linked Open Data, particularly DBpedia. This approach uses natural language processing and machine learning techniques to interlink Arabic text with Linked Open Data. Despite the high complexity of a domain-independent knowledge base and the limited resources for Arabic natural language processing, the evaluation results of our approach were encouraging.
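The abstract does not detail ARALD's pipeline, but its core step, linking surface mentions in a text to knowledge-base resources, can be sketched with a toy gazetteer. The mention dictionary below is an invented placeholder, not ARALD's actual data or a call to the real DBpedia service:

```python
# Minimal sketch of dictionary-based entity annotation against a tiny
# mock Linked Data gazetteer (illustrative entries, no network access).
MENTIONS = {
    "cairo": "http://dbpedia.org/resource/Cairo",
    "nile": "http://dbpedia.org/resource/Nile",
}

def annotate(text: str) -> list:
    """Return (mention, URI) pairs for known mentions found in the text."""
    tokens = text.lower().split()
    return [(t, MENTIONS[t]) for t in tokens if t in MENTIONS]

links = annotate("The Nile flows through Cairo")
```

A real annotator would add tokenization and disambiguation for Arabic script; this sketch only shows the lookup-and-link shape of the task.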

2020 ◽  
Vol 26 (3) ◽  
pp. 103-107
Author(s):  
Ilie Cristian Dorobăţ ◽  
Vlad Posea

The continuous expansion of the Semantic Web and of the Linked Open Data cloud has meant that more semantic data are available for querying from endpoints all over the Web. We propose extending a standard SPARQL interface with UI and natural language processing features to allow easier and more intelligent querying. The paper describes some usage scenarios for easy querying and opens a discussion on the advantages of such an implementation.
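As a rough illustration of the assistance such an interface might provide (the parameter-to-pattern mapping below is a hypothetical sketch, not the authors' implementation), a UI layer can assemble a SPARQL query string from a few user-supplied terms:

```python
def build_sparql(class_uri: str, label_contains: str, limit: int = 10) -> str:
    """Assemble a simple SPARQL SELECT query from UI-supplied parameters."""
    return (
        "SELECT ?s ?label WHERE {\n"
        f"  ?s a <{class_uri}> ;\n"
        "     rdfs:label ?label .\n"
        f'  FILTER(CONTAINS(LCASE(STR(?label)), "{label_contains.lower()}"))\n'
        "}\n"
        f"LIMIT {limit}"
    )

query = build_sparql("http://dbpedia.org/ontology/City", "Paris")
```

The point of such a wrapper is that the user never writes SPARQL syntax; the interface fills in the triple patterns and filters.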


Designs ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 42
Author(s):  
Eric Lazarski ◽  
Mahmood Al-Khassaweneh ◽  
Cynthia Howard

In recent years, disinformation and “fake news” have been spreading throughout the Internet at rates never seen before. This has caused fact-checking organizations, groups that seek out claims and comment on their veracity, to spawn worldwide to stem the tide of misinformation. However, even with the many human-powered fact-checking organizations currently in operation, disinformation continues to run rampant throughout the Web, and the existing organizations are unable to keep up. This paper discusses in detail recent advances in using natural language processing to automate fact checking. It follows the entire process of automated fact checking, from detecting claims to verifying them to outputting results. In summary, automated fact checking works well in some cases, though generalized fact checking still needs improvement prior to widespread use.
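The first stage of the pipeline the paper follows, claim detection, is often approximated by surface heuristics. The rule below (check-worthy sentences tend to contain numbers or comparative cues) is a simplified illustration, not any specific system surveyed in the paper:

```python
import re

# Cue words that often signal a checkable, statistical claim (illustrative list).
CUE_WORDS = {"more", "less", "most", "increase", "decrease", "percent"}

def looks_like_claim(sentence: str) -> bool:
    """Heuristic: a sentence is check-worthy if it contains a digit
    or a comparative/statistical cue word."""
    words = set(re.findall(r"[a-z]+", sentence.lower()))
    return bool(re.search(r"\d", sentence)) or bool(words & CUE_WORDS)

sentences = [
    "Unemployment fell by 3 percent last year.",
    "I love this city.",
]
claims = [s for s in sentences if looks_like_claim(s)]
```

Production claim detectors replace this rule with trained classifiers, but the input/output contract, sentence in, check-worthiness out, is the same.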


Author(s):  
Tim Berners-Lee ◽  
Kieron O’Hara

This paper discusses issues that will affect the future development of the Web, either increasing its power and utility, or alternatively suppressing its development. It argues for the importance of the continued development of the Linked Data Web, and describes the use of linked open data as an important component of that. Second, the paper defends the Web as a read–write medium, and goes on to consider how the read–write Linked Data Web could be achieved.


Author(s):  
César González-Mora ◽  
Cristina Barros ◽  
Irene Garrigós ◽  
Jose Zubcoff ◽  
Elena Lloret ◽  
...  

Author(s):  
Jose María Alvarez Rodríguez ◽  
Jules Clement ◽  
José Emilio Labra Gayo ◽  
Hania Farhan ◽  
Patricia Ordoñez de Pablos

This chapter introduces the promotion of statistical data to the Linked Open Data initiative in the context of the Web Index project. A framework for the publication of raw statistics and a method to convert them to Linked Data are also presented, following the W3C standards RDF, SKOS, and OWL. The case study focuses on the Web Index project; launched by the Web Foundation, the Index is the first multi-dimensional measure of the growth, utility, and impact of the Web on people and nations. Finally, an evaluation of the advantages of using Linked Data to publish statistics is presented, together with a discussion and future steps.
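The conversion of one raw statistical observation into RDF can be sketched as below. The `ex:` namespace and property names are invented placeholders for illustration, not the Web Index project's actual vocabulary:

```python
def observation_to_turtle(country: str, indicator: str, year: int, value: float) -> str:
    """Serialize one statistical observation as Turtle triples
    (hypothetical ex: vocabulary)."""
    subject = f"ex:obs-{country}-{indicator}-{year}"
    return (
        f"{subject} a ex:Observation ;\n"
        f'    ex:country "{country}" ;\n'
        f'    ex:indicator "{indicator}" ;\n'
        f"    ex:year {year} ;\n"
        f"    ex:value {value} .\n"
    )

turtle = observation_to_turtle("SE", "web-access", 2012, 91.0)
```

A real publication framework would model this with the RDF Data Cube pattern (datasets, dimensions, measures) rather than flat properties; the sketch only shows the row-to-triples shape of the conversion.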


2020 ◽  
pp. 034-040
Author(s):  
O.P. Zhezherun ◽  
M.S. Ryepkin

The article describes a classification system based on natural language processing. Many such systems use neural networks, but these need massive amounts of training data, which are not always available. The authors propose using ontologies in such systems instead. As an example of this approach, they present a classification system that helps form a list of the best candidates during a recruitment process. An overview of methods for constructing ontologies and of language analyzers suitable for classification systems is presented. The system is built in the form of a knowledge base and supports the Ukrainian and English languages. Possible directions for extending the system are also considered.
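An ontology-driven candidate matcher of this kind can be sketched minimally as follows. The skill taxonomy and scoring rule are invented for illustration; the article's actual knowledge base is far richer:

```python
# Toy ontology: each concept maps to its broader concept (None at the root).
ONTOLOGY = {
    "python": "programming",
    "java": "programming",
    "programming": "it-skill",
    "it-skill": None,
}

def generalize(skill):
    """Collect a skill and all its broader concepts from the ontology."""
    concepts = set()
    while skill is not None:
        concepts.add(skill)
        skill = ONTOLOGY.get(skill)
    return concepts

def score(candidate_skills, required):
    """Count required concepts the candidate covers, directly or via generalization."""
    covered = set().union(*(generalize(s) for s in candidate_skills))
    return len(required & covered)

s = score(["python"], {"programming", "java"})  # "python" generalizes to "programming"
```

Unlike a neural classifier, this matcher needs no training data: the ontology itself encodes the knowledge that, for example, a Python programmer satisfies a generic programming requirement.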


Author(s):  
Kiran Raj R

Today, everyone has a personal device to access the Web, and every user tries to access the knowledge they require through the Internet. Most of that knowledge is stored in the form of a database, and a user with limited knowledge of databases will have difficulty accessing their data. Hence, there is a need for a system that permits such users to access the knowledge in the database. The proposed method is to develop a system that takes natural language as input and produces an SQL query, which is then used to access the database and retrieve the information with ease. Tokenization, part-of-speech tagging, lemmatization, parsing, and mapping are the steps involved in the process. The proposed project gives a view of using natural language processing (NLP) and regular-expression-based mapping to translate English-language queries into SQL.
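The regex-mapping step the abstract describes can be sketched as a single pattern that captures the column, table, and condition of one English question shape. The pattern and schema names below are illustrative, not the author's actual grammar:

```python
import re

# One illustrative question shape: "show <column> of <table> where <field> is <value>"
PATTERN = re.compile(
    r"show (?P<col>\w+) of (?P<table>\w+) where (?P<field>\w+) is (?P<value>\w+)",
    re.IGNORECASE,
)

def to_sql(question):
    """Map one English question shape to an SQL SELECT statement, or None."""
    m = PATTERN.match(question.strip())
    if not m:
        return None
    return (f"SELECT {m['col']} FROM {m['table']} "
            f"WHERE {m['field']} = '{m['value']}';")

sql = to_sql("show salary of employees where name is Alice")
```

A full system would first tokenize, tag, and lemmatize the question so that many surface variants normalize to shapes like this one; here only the final mapping step is shown.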


2019 ◽  
Vol 20 (K9) ◽  
pp. 23-30
Author(s):  
Le Thi Thuy ◽  
Phan Thi Tuoi ◽  
Quan Thanh Tho

Entity co-reference resolution and sentiment analysis are independent problems and popular research topics in the natural language processing community. However, the combination of these two problems has not received much attention. Thus, this paper suggests applying a knowledge base to resolve co-reference between objects and aspects carrying sentiment. In addition, the paper proposes a model of ontology-based co-reference resolution in sentiment analysis for English text. Finally, we discuss the evaluation methods applied to our model and the results obtained.
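The combination can be illustrated with a toy ontology that maps aspect mentions back to the objects they belong to, so that sentiment attaches to the right entity. All names and the word-list sentiment rule below are invented for illustration, not the paper's model:

```python
# Toy ontology: aspect term -> object it belongs to.
ASPECT_OF = {"battery": "phone", "screen": "phone", "lens": "camera"}
POSITIVE = {"great", "excellent"}
NEGATIVE = {"poor", "weak"}

def aspect_sentiment(sentence):
    """Attach sentence-level sentiment to the object owning each aspect mention."""
    words = sentence.lower().replace(",", " ").split()
    polarity = ("positive" if set(words) & POSITIVE
                else "negative" if set(words) & NEGATIVE
                else "neutral")
    return {ASPECT_OF[w]: polarity for w in words if w in ASPECT_OF}

result = aspect_sentiment("The battery is great")  # praise for "battery" resolves to "phone"
```

The ontology is what lets the opinion about an aspect ("battery") be credited to the co-referent object ("phone"), which is exactly the resolution step the paper argues the two tasks should share.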

