2019, Vol. 66, pp. 503-554
Author(s): Andreas Niskanen, Johannes Wallner, Matti Järvisalo

Argumentation is today a topical area of artificial intelligence (AI) research. Abstract argumentation, with argumentation frameworks (AFs) as the underlying knowledge representation formalism, is a central viewpoint on argumentation in AI. Indeed, from the perspective of AI and computer science, understanding the computational and representational aspects of AFs is key to the study of argumentation. Realizability of AFs has recently been proposed as a central notion for analyzing the expressive power of AFs under different semantics. In this work, we propose and study the AF synthesis problem as a natural extension of realizability, addressing some of the shortcomings arising from its relatively stringent definition. In particular, realizability gives means of establishing exact conditions on when a given collection of subsets of arguments has an AF with exactly that collection as its set of extensions under a specific argumentation semantics. However, in various settings within the study of the dynamics of argumentation, including revision and aggregation of AFs, non-realizability can naturally occur. To accommodate such settings, our notion of AF synthesis seeks to construct, or synthesize, AFs that are semantically closest to the knowledge at hand even when no AF exactly representing that knowledge exists. Going beyond defining the AF synthesis problem, we study both theoretical and practical aspects of the problem. In particular, we (i) prove NP-completeness of AF synthesis under several semantics, (ii) study basic properties of the problem in relation to realizability, (iii) develop algorithmic solutions to NP-hard AF synthesis using the constraint optimization paradigms of maximum satisfiability and answer set programming, (iv) empirically evaluate our algorithms on different forms of AF synthesis instances, and (v) discuss variants and generalizations of AF synthesis.
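The notion of extensions under a semantics can be made concrete with a small brute-force check. The sketch below enumerates the stable extensions of a made-up three-argument AF; the framework and its attack relation are invented for illustration, and this is not the MaxSAT/ASP encoding the authors develop:

```python
from itertools import combinations

# A hypothetical AF: arguments and the attack relation are illustrative only.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("b", "c")}

def is_stable(ext):
    """A set E is stable iff it is conflict-free and attacks every argument outside E."""
    ext = set(ext)
    if any((x, y) in attacks for x in ext for y in ext):
        return False  # not conflict-free
    return all(any((x, y) in attacks for x in ext) for y in args - ext)

def stable_extensions():
    """Enumerate all subsets of arguments and keep the stable ones."""
    return [frozenset(s)
            for r in range(len(args) + 1)
            for s in combinations(sorted(args), r)
            if is_stable(s)]

print(sorted(sorted(e) for e in stable_extensions()))  # [['a', 'c'], ['b']]
```

A target collection of argument sets that is not the extension set of any AF under the chosen semantics is exactly the non-realizable case that AF synthesis is designed to handle approximately.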


Author(s): Tiago Oliveira, José Neves, Paulo Novais

The prevalence of medical error and defensive medicine in healthcare institutions is a great concern of the medical community. Clinical Practice Guidelines are regarded by most researchers as a way to mitigate these occurrences; however, there is a need to make them interactive and easier to update and deploy. This paper provides a model for Computer-Interpretable Guidelines based on the generic tasks of the clinical process, devised to be included in the framework of a Clinical Decision Support System and aiming to represent medical recommendations in a simple and intuitive way. Hence, this work proposes a knowledge representation formalism that uses an Extension to Logic Programming to handle incomplete information. The model is used to represent different cases of missing, conflicting, and inexact information, with the aid of a method to quantify its quality. The integration of the guideline model with the knowledge representation formalism yields a clinical decision model that relies on the development of multiple information scenarios and the exploration of different clinical hypotheses.
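As a rough illustration of how reasoning can proceed under incomplete information, the sketch below uses a Kleene-style three-valued evaluation in which an unknown clinical finding propagates through a rule. The findings, the rule, and the predicate names are all invented; this is not the authors' Extension to Logic Programming:

```python
# Three-valued clinical data: each finding is True, False, or None (unknown).
UNKNOWN = None

patient = {
    "fever": True,
    "cough": UNKNOWN,            # missing information
    "allergy_penicillin": False,
}

def holds(findings, atom):
    """Three-valued lookup: True, False, or UNKNOWN for unrecorded atoms."""
    return findings.get(atom, UNKNOWN)

def conj(*values):
    """Kleene conjunction: False dominates, then UNKNOWN, then True."""
    if False in values:
        return False
    if UNKNOWN in values:
        return UNKNOWN
    return True

# A toy recommendation rule: recommend only if fever AND cough are confirmed.
recommend = conj(holds(patient, "fever"), holds(patient, "cough"))
print(recommend)  # prints None: the unknown cough value propagates
```

The point of the sketch is only that a rule over incomplete data yields "unknown" rather than a spurious yes/no answer, which is the kind of distinction a quality-of-information measure can then exploit.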


Author(s): Farhad Ameri, Boonserm Kulvatunyou, Nenad Ivezic, Khosrow Kaikhah

Ontological conceptualization refers to the process of creating an abstract view of the domain of interest through a set of interconnected concepts. In this paper, a thesaurus-based methodology is proposed for systematic ontological conceptualization in the manufacturing domain. The methodology has three main phases, namely thesaurus development, thesaurus evaluation, and thesaurus conversion, and it uses the Simple Knowledge Organization System (SKOS) as the thesaurus representation formalism. The concept-based nature of a SKOS thesaurus makes it suitable for identifying important concepts in the domain. To that end, novel thesaurus evaluation and thesaurus conversion metrics that exploit this characteristic are presented. The ontology conceptualization methodology is demonstrated through the development of a manufacturing thesaurus, referred to as ManuTerms. The concepts in ManuTerms can be converted into ontological classes, and the whole conceptualization process is a stepping stone toward developing more axiomatic ontologies. Although the proposed methodology was developed in the context of manufacturing ontology development, the underlying methods, tools, and metrics can be applied to the development of any domain ontology. The developed thesaurus can serve as a standalone lightweight ontology, and its concepts can be reused by other ontologies or thesauri.
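The thesaurus-to-ontology step can be pictured with a minimal sketch: SKOS organizes concepts with preferred labels and broader/narrower links, and each concept can become a candidate class whose superclass is its broader concept. The concepts below are invented for illustration and are not taken from ManuTerms:

```python
# A tiny SKOS-style thesaurus fragment: concept id -> prefLabel and broader link.
thesaurus = {
    "milling":   {"prefLabel": "Milling",   "broader": "machining"},
    "turning":   {"prefLabel": "Turning",   "broader": "machining"},
    "machining": {"prefLabel": "Machining", "broader": None},
}

def to_class_hierarchy(skos):
    """Map each SKOS concept to a candidate ontology class and its superclass.

    Top concepts (no broader link) are attached to a root class "Thing".
    """
    classes = {}
    for concept in skos.values():
        parent = concept["broader"]
        superclass = skos[parent]["prefLabel"] if parent else "Thing"
        classes[concept["prefLabel"]] = superclass
    return classes

print(to_class_hierarchy(thesaurus))
# {'Milling': 'Machining', 'Turning': 'Machining', 'Machining': 'Thing'}
```

A real conversion would also have to decide which thesaurus relations become subclass axioms and which become mere associations; that is exactly where the evaluation and conversion metrics mentioned above come in.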


Author(s): Kaspar Riesen, Horst Bunke

Graphs provide a powerful and flexible representation formalism for pattern classification. Many classification algorithms have been proposed in the literature; however, the vast majority of them rely on vectorial data descriptions and cannot directly be applied to graphs. Recently, a growing interest in graph kernel methods can be observed. Graph kernels aim at bridging the gap between the high representational power and flexibility of graphs and the large number of algorithms available for object representations in terms of feature vectors. In the present paper, we propose an approach that transforms graphs into n-dimensional real vectors by means of prototype selection and graph edit distance computation. This approach allows one to build graph kernels in a straightforward way. It is applicable not only to graphs but also to other kinds of symbolic data in conjunction with any kind of dissimilarity measure, and is thus characterized by a high degree of flexibility. With several experimental results, we demonstrate the robustness and flexibility of our new method and show that it outperforms other graph classification methods on several graph data sets of diverse nature.
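The core embedding idea generalizes beyond graphs: pick n prototypes and map each object to the vector of its dissimilarities to them. The sketch below uses strings with Levenshtein edit distance as a stand-in for graphs with graph edit distance (computing graph edit distance itself is NP-hard and omitted here); the data is invented:

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance between two strings."""
    dp = list(range(len(b) + 1))  # row for the empty prefix of a
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            # min of deletion, insertion, and substitution/match (diagonal).
            prev, dp[j] = dp[j], min(dp[j] + 1,
                                     dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

def embed(obj, prototypes):
    """n prototypes -> an n-dimensional real vector of dissimilarities."""
    return [edit_distance(obj, p) for p in prototypes]

prototypes = ["cat", "house"]   # a hypothetical prototype selection
print(embed("cart", prototypes))
```

Once every object lives in this dissimilarity space, any vector-based classifier or standard kernel can be applied, which is what makes the construction a straightforward route to graph kernels.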


2013, Vol. 420, pp. 325-332
Author(s): Zhi Ping Zhang, Lin Na Li, Li Jun Wang, Hai Yan Yu

Data mining discovers knowledge and useful information from large amounts of data stored in databases. With the increasing popularity of object-oriented database systems in advanced database applications, it is important to study data mining methods for object-oriented databases. This paper argues that higher-order logic programming languages and techniques are well suited to object-oriented data mining, and presents a framework for object-oriented data mining based on higher-order logic programming. The framework is a form of inductive logic programming that adopts the higher-order logic programming language Escher as its knowledge representation formalism. In addition, since Escher generalizes the attribute-value representation, many higher-order logic learners under this framework can be obtained directly by upgrading corresponding propositional learners.


Terminology, 2002, Vol. 8 (1), pp. 91-111
Author(s): Caroline Barrière

This research looks at the complexity inherent in the causal relation and the implications for its representation in a Terminological Knowledge Base (TKB). Supported by a more general study of semantic relation hierarchies, a hierarchical refinement of the causal relation is proposed. The refinement results from a manual corpus search, which shows that it efficiently captures and formalizes variations expressed in text. The feasibility of determining such a categorization during automatic extraction from corpora is also explored. Conceptual graphs are used as the representation formalism, to which we have added certainty information to capture the degree of certainty surrounding the interaction between two terms involved in a causal relation.
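A certainty-annotated causal link can be sketched minimally as a typed, weighted edge between two terms. The relation subtype and the certainty value below are invented for illustration and do not reproduce the paper's conceptual-graph notation:

```python
from dataclasses import dataclass

@dataclass
class CausalLink:
    """A causal relation between two terms, refined by subtype and certainty."""
    cause: str
    effect: str
    subtype: str      # e.g. a hypothetical refinement such as "favor" or "prevent"
    certainty: float  # 0.0 (speculative) .. 1.0 (asserted in the corpus)

link = CausalLink("smoking", "lung cancer", "favor", 0.9)
print(f"{link.cause} -[{link.subtype}, c={link.certainty}]-> {link.effect}")
```

The value of such an annotation in a TKB is that two extracted statements about the same term pair can be kept apart when one is asserted and the other merely hedged in the source text.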


2014, Vol. 49, pp. 171-206
Author(s): S. Schiffel, M. Thielscher

A general game player is a system that can play previously unknown games just by being given their rules. For this purpose, the Game Description Language (GDL) has been developed as a high-level knowledge representation formalism to communicate game rules to players. In this paper, we address a fundamental limitation of state-of-the-art methods and systems for General Game Playing, namely, their being confined to deterministic games with complete information about the game state. We develop a simple yet expressive extension of standard GDL that allows for formalising the rules of arbitrary finite, n-player games with randomness and incomplete state knowledge. In the second part of the paper, we address the intricate reasoning challenge for general game-playing systems that comes with the new description language. We develop a full embedding of extended GDL into the Situation Calculus augmented by Scherl and Levesque's knowledge fluent. We formally prove that this provides a sound and complete reasoning method for players' knowledge about game states as well as about the knowledge of the other players.


Author(s): Jori Bomanson, Tomi Janhunen, Antonius Weinzierl

Answer-Set Programming (ASP) is an expressive rule-based knowledge-representation formalism. Lazy grounding is a solving technique that avoids the well-known grounding bottleneck of traditional ASP evaluation, but it is restricted to normal rules, which severely limits its expressive power. In this work, we introduce a framework that handles aggregates by normalizing them on demand during lazy grounding, hence relieving the restrictions of lazy grounding significantly. We term our approach lazy normalization and demonstrate its feasibility for different types of aggregates. Asymptotic behavior is analyzed and correctness of the presented lazy normalizations is shown. Benchmark results indicate that lazy normalization can bring up to exponential gains in space and time, as well as enable ASP to be used in new application areas.
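Normalization replaces an aggregate by an equivalent set of normal rules. One common scheme, a sequential counter, is sketched below for a cardinality aggregate such as "2 <= #count{a1; a2; a3}"; the generator emits the counter rules as text. This illustrates normalization in general, not the on-demand construction during lazy grounding that the paper develops:

```python
def sequential_counter(atoms, bound):
    """Emit normal rules defining ctr(i,j): at least j of the first i atoms hold."""
    rules = []
    for i, atom in enumerate(atoms, 1):
        for j in range(1, bound + 1):
            # Propagation: j atoms were already reached without atom i.
            if i > 1:
                rules.append(f"ctr({i},{j}) :- ctr({i-1},{j}).")
            # Increment: atom i contributes the j-th true atom.
            if j == 1:
                rules.append(f"ctr({i},{j}) :- {atom}.")
            elif i > 1:
                rules.append(f"ctr({i},{j}) :- ctr({i-1},{j-1}), {atom}.")
    rules.append(f"bound_met :- ctr({len(atoms)},{bound}).")
    return rules

for rule in sequential_counter(["a1", "a2", "a3"], 2):
    print(rule)
```

The rule count grows with (number of atoms) x (bound), which hints at why the choice and timing of normalization matter: constructing these rules lazily avoids paying this cost for aggregates whose ground instances are never needed.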

