WOLED: A Tool for Online Learning Weighted Answer Set Rules for Temporal Reasoning Under Uncertainty

Author(s):  
Nikos Katzouris ◽  
Alexander Artikis

Complex Event Recognition (CER) systems detect event occurrences in streaming time-stamped input using predefined event patterns. Logic-based approaches are of special interest in CER since, via Statistical Relational AI, they combine uncertainty-resilient reasoning about time and change with machine learning, thus alleviating the cost of manual event pattern authoring. We present WOLED, a system based on Answer Set Programming (ASP), capable of probabilistic reasoning with complex event patterns in the form of weighted rules in the Event Calculus, whose structure and weights are learnt online. We compare our ASP-based implementation with a Markov Logic-based one and with a crisp version of the algorithm that learns unweighted rules, on CER datasets for activity recognition, maritime surveillance and fleet management. Our results demonstrate the superiority of our novel implementation, both in terms of efficiency and predictive performance.

Author(s):  
Nikos Katzouris ◽  
Georgios Paliouras ◽  
Alexander Artikis

Abstract Complex Event Recognition (CER) systems detect event occurrences in streaming time-stamped input using predefined event patterns. Logic-based approaches are of special interest in CER since, via Statistical Relational AI, they combine uncertainty-resilient reasoning about time and change with machine learning, thus alleviating the cost of manual event pattern authoring. We present a system based on Answer Set Programming (ASP), capable of probabilistic reasoning with complex event patterns in the form of weighted rules in the Event Calculus, whose structure and weights are learnt online. We compare our ASP-based implementation with a Markov Logic-based one and with a number of state-of-the-art batch learning algorithms on CER data sets for activity recognition, maritime surveillance and fleet management. Our results demonstrate the superiority of our novel approach, both in terms of efficiency and predictive performance. This paper is under consideration for publication in Theory and Practice of Logic Programming (TPLP).
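The CER systems above build on the Event Calculus, whose core law of inertia states that a fluent holds at a time point if it was initiated earlier and not terminated in between. A minimal sketch of that semantics in Python (the function name and the example fluent are hypothetical, chosen only to illustrate the inertia axiom, not the authors' weighted-rule machinery):

```python
def holds_at(fluent, t, initiated, terminated):
    """Event Calculus law of inertia: a fluent holds at time t if it was
    initiated at some earlier time point and not terminated since."""
    last_init = max((s for s in initiated.get(fluent, []) if s < t), default=None)
    if last_init is None:
        return False  # never initiated before t
    # Terminated strictly between the last initiation and t?
    return not any(last_init < s < t for s in terminated.get(fluent, []))

# Hypothetical recognition trace: "moving" is initiated at 3, terminated at 9.
initiated = {"moving(p1,p2)": [3]}
terminated = {"moving(p1,p2)": [9]}
print(holds_at("moving(p1,p2)", 5, initiated, terminated))   # True: inertia
print(holds_at("moving(p1,p2)", 12, initiated, terminated))  # False: terminated
```

Weighted versions of the initiation/termination rules, as in WOLED, attach probabilities to such inferences instead of the crisp booleans here.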


Author(s):  
Omar Adjali ◽  
Amar Ramdane-Cherif

This article describes a semantic framework that demonstrates an approach for modeling and reasoning based on the Environment Knowledge Representation Language (EKRL) to enhance interaction between robots and their environment. Unlike EKRL, standard binary approaches such as the OWL language fail to represent knowledge in an expressive way. The authors show in this work how to model the environment and interaction expressively with first-order and second-order EKRL data structures, and how to reason for decision-making through inference capabilities based on a complex unification algorithm. Since robot environments are inherently subject to noise and partial observability, the authors extended the EKRL framework with probabilistic reasoning based on Markov logic networks to manage uncertainty.


Author(s):  
Roberto Porto ◽  
Jose M. Molina ◽  
Antonio Berlanga ◽  
Miguel A. Patricio

Learning systems have long focused on creating models capable of obtaining the best results in error metrics. Recently, the focus has shifted to making their results interpretable and explainable. The need for interpretation is greater when these models are used to support decision making; in some areas, such as medicine, it becomes an indispensable requirement. This paper focuses on the prediction of cardiovascular disease by analyzing the well-known Statlog (Heart) data set from the UCI Machine Learning Repository. The study analyzes the cost, in accuracy, of making predictions easier to interpret by reducing the number of features that explain the classification of health status. The analysis covers a large set of classification techniques and performance metrics, demonstrating that it is possible to build explainable and reliable models that retain good predictive performance.


2019 ◽  
Vol 22 (1) ◽  
pp. 44-48 ◽  
Author(s):  
Colin Aitken ◽  
Dimitris Mavridis

Introduction: It is difficult to reason correctly when the available information is uncertain. Reasoning under uncertainty is also known as probabilistic reasoning. Methods: We discuss probabilistic reasoning in the context of a medical diagnosis or prognosis. The available information consists of symptoms for the diagnosis, or the diagnosis for the prognosis. We show how probabilities of events are updated in the light of new evidence (conditional probabilities/Bayes' theorem). A resolution is explained in which the support of the information for the diagnosis or prognosis is measured by comparing two probabilities, a statistic known as the likelihood ratio. Results: The likelihood ratio is a continuous measure of support that is not subject to the discrete nature of statistical significance, where a result is classified as either 'significant' or 'not significant'. It updates prior beliefs about diagnoses or prognoses in a coherent manner and enables proper consideration of successive pieces of information. Discussion: Probabilistic reasoning is not innate and relies on good education. Common mistakes include the 'prosecutor's fallacy' and the interpretation of relative measures without consideration of the actual risks of the outcome, for example, interpreting a likelihood ratio without taking the prior odds into account.
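The odds form of Bayes' theorem discussed above (posterior odds = likelihood ratio × prior odds) can be made concrete with a short sketch. The sensitivity, false-positive rate, and prevalence below are hypothetical illustrative values, not figures from the article:

```python
def posterior_probability(prior, sensitivity, false_positive_rate):
    """Update a prior disease probability given a positive test result.

    The likelihood ratio (LR) compares the probability of the evidence
    under the diagnosis with its probability under the alternative:
        LR = P(test+ | disease) / P(test+ | no disease)
    Bayes' theorem in odds form: posterior odds = LR * prior odds.
    """
    lr = sensitivity / false_positive_rate
    prior_odds = prior / (1 - prior)
    posterior_odds = lr * prior_odds
    return posterior_odds / (1 + posterior_odds)

# A test with 90% sensitivity and a 5% false-positive rate (LR = 18)
# applied at 2% prevalence gives a posterior of about 26.9%, not 90%;
# ignoring the prior odds here is exactly the 'prosecutor's fallacy'.
p = posterior_probability(prior=0.02, sensitivity=0.90, false_positive_rate=0.05)
print(round(p, 3))  # 0.269
```

Feeding the posterior back in as the new prior lets successive pieces of evidence be combined coherently, as the abstract describes.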


2018 ◽  
Vol 18 (3-4) ◽  
pp. 607-622 ◽  
Author(s):  
Joohyung Lee ◽  
Yi Wang

Abstract We present a probabilistic extension of the action language ${\cal BC}+$. Just like ${\cal BC}+$ is defined as a high-level notation of answer set programs for describing transition systems, the proposed language, which we call p${\cal BC}+$, is defined as a high-level notation of LPMLN programs, a probabilistic extension of answer set programs. We show how probabilistic reasoning about transition systems, such as prediction, postdiction, and planning problems, as well as probabilistic diagnosis for dynamic domains, can be modeled in p${\cal BC}+$ and computed using an implementation of LPMLN.


2021 ◽  
Vol 8 (4) ◽  
pp. 229-236
Author(s):  
Changkyum Kim ◽  
Insik Chun ◽  
Byungcheol Oh

An Artificial Intelligence (AI) study was conducted to calculate overtopping discharges for various coastal structures. The Deep Neural Network (DNN), one of the artificial intelligence methods, was employed in the study. The neural network was trained, validated and tested using the EurOtop database, which contains experimental data collected from all over the world. To improve the accuracy of the deep neural network results, all data were non-dimensionalized and min-max normalized as a preprocessing step. L2 regularization was also introduced in the cost function to secure the convergence of iterative learning, and the cost function was optimized using the RMSProp and Adam techniques. To assess the performance of the DNN, additional calculations based on a multiple linear regression model and EurOtop's overtopping formulas were also made, using data sets that were not included in the network training. The results showed that the predictive performance of the AI technique was superior to the two other methods.
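Two ingredients named above, min-max normalization and an L2-penalized cost function, can be sketched in a few lines of numpy. This is a minimal illustration of the general techniques, not the study's actual pipeline; the regularization strength `lam` is a hypothetical value:

```python
import numpy as np

def min_max_normalize(X):
    """Scale each feature column to [0, 1], as in the described preprocessing."""
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return (X - x_min) / (x_max - x_min)

def l2_cost(y_true, y_pred, weights, lam=1e-3):
    """Mean squared error plus an L2 penalty on the network weights.

    The penalty discourages large weights, which helps iterative
    optimizers such as RMSProp or Adam converge stably.
    """
    mse = np.mean((y_true - y_pred) ** 2)
    penalty = lam * sum(np.sum(w ** 2) for w in weights)
    return mse + penalty

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 40.0]])
Xn = min_max_normalize(X)  # each column now spans exactly [0, 1]
```

Normalizing before training keeps features on comparable scales, so no single dimensional quantity dominates the gradient updates.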


Author(s):  
Zeynep G. Saribatur ◽  
Thomas Eiter

Abstract Abstraction is a well-known approach to simplify a complex problem by over-approximating it with a deliberate loss of information. It had so far not been considered in Answer Set Programming (ASP), a convenient tool for problem solving. We introduce a method to automatically abstract ASP programs that preserves their structure by reducing the vocabulary while ensuring an over-approximation (i.e., each original answer set maps to some abstract answer set). This allows for generating partial answer set candidates that can help with approximation of reasoning. Computing the abstract answer sets is intuitively easier due to a smaller search space, at the cost of encountering spurious answer sets. Faithful (non-spurious) abstractions may be used to represent projected answer sets and to guide solvers in answer set construction. To deal with spurious answer sets, we employ an ASP debugging approach for abstraction refinement: atoms identified as badly omitted are added back into the abstraction. As a showcase, we apply abstraction to explain the unsatisfiability of ASP programs in terms of blocker sets, i.e., sets of atoms such that abstraction to them preserves unsatisfiability. Their usefulness is demonstrated by experimental results.


2021 ◽  
Vol 11 (3) ◽  
pp. 1285
Author(s):  
Roberto Porto ◽  
José M. Molina ◽  
Antonio Berlanga ◽  
Miguel A. Patricio

Learning systems have been focused on creating models capable of obtaining the best results in error metrics. Recently, the focus has shifted to improving the interpretation and explanation of the results. The need for interpretation is greater when these models are used to support decision making. In some areas, such as medicine, this becomes an indispensable requirement. The goal of this study was to define a simple process for constructing a system that can be easily interpreted, based on two principles: (1) reducing attributes without degrading the performance of the prediction systems and (2) selecting a technique to interpret the final prediction system. To describe this process, we selected a problem, predicting cardiovascular disease, by analyzing the well-known Statlog (Heart) data set from the UCI Machine Learning Repository. We analyzed the cost, in accuracy, of making predictions easier to interpret by reducing the number of features that explain the classification of health status. We performed an analysis on a large set of classification techniques and performance metrics, demonstrating that it is possible to construct explainable and reliable models that provide high-quality predictive performance.
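Principle (1), reducing attributes while preserving predictive performance, can be illustrated with a deliberately simple filter-style sketch. The correlation ranking and the nearest-centroid classifier below are stand-ins chosen for brevity, not techniques from the study:

```python
import numpy as np

def rank_features(X, y):
    """Rank features by absolute Pearson correlation with the class label.

    A simple filter method: keep only the top-ranked features and check
    how much accuracy is lost relative to using all of them.
    """
    corr = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return np.argsort(corr)[::-1]  # most informative feature first

def nearest_centroid_accuracy(X, y):
    """Training accuracy of a minimal nearest-centroid classifier."""
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)
    return (pred == y).mean()

# Toy data: feature 0 separates the classes, feature 1 is uninformative noise.
X = np.array([[0.0, 5.0], [0.1, 1.0], [0.9, 4.0], [1.0, 2.0]])
y = np.array([0, 0, 1, 1])
top = rank_features(X, y)[:1]                    # keep only the best feature
acc_reduced = nearest_centroid_accuracy(X[:, top], y)
```

A model built on one well-chosen feature is trivially easier to explain; the study's question is precisely how much accuracy such reductions cost on real data.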


2020 ◽  
Vol 312 ◽  
pp. 02003
Author(s):  
John-Paris Pantouvakis ◽  
Alexander Maravas

During construction operations, fleet management aims at maximizing the uptime and efficiency of construction machinery while minimizing the cost of ownership through lifecycle planning and management. In the deterministic approach, the theory suggests that one type of machinery is considered critical. However, taking into account the real circumstances under which projects are performed, with issues such as machine reliability, worker performance, and errors in estimating the scope of work, it is evident that the existing approach has significant limitations. To address this issue, uncertainty in fleet productivity is modelled with fuzzy set theory. In this context, the notion of composite criticality is introduced, under which the productivity of a fleet depends on more than one type of machinery because of fluctuations in the individual productivities. A simple case study is presented to illustrate the concept. It is concluded that this approach leads to a better understanding of activity duration and cost estimation, which in turn means better project scheduling and financial planning.
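The fuzzy modelling idea can be sketched with triangular fuzzy numbers. The component-wise minimum used here is a common first-order approximation of the fuzzy minimum, and the productivity figures are hypothetical, not values from the case study:

```python
def tfn_min(a, b):
    """Component-wise minimum of two triangular fuzzy numbers (low, mode, high).

    In a loading-hauling operation the system productivity is limited by
    the slower fleet; under uncertainty either fleet may be the limiting
    one, which is the essence of composite criticality.
    """
    return tuple(min(x, y) for x, y in zip(a, b))

def tfn_duration(work, productivity):
    """Fuzzy activity duration: work volume divided by fuzzy productivity.

    Dividing by a fuzzy number inverts the bounds: the optimistic (shortest)
    duration comes from the highest productivity, and vice versa.
    """
    low, mode, high = productivity
    return (work / high, work / mode, work / low)

# Hypothetical fleet productivities in m3/h:
excavators = (80.0, 100.0, 130.0)
trucks     = (70.0, 110.0, 120.0)
system = tfn_min(excavators, trucks)   # (70.0, 100.0, 120.0)
hours = tfn_duration(3600.0, system)   # (30.0, 36.0, ~51.4)
```

Note that at the modal values the excavators are critical (100 < 110), while at the optimistic bound the trucks are (120 < 130): no single machinery type is critical across the whole range, which is exactly the composite criticality the abstract introduces.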


2021 ◽  
Vol 8 (1) ◽  
pp. 205395172110207
Author(s):  
Simon Aagaard Enni ◽  
Maja Bak Herrie

Machine learning (ML) systems have shown great potential for performing or supporting inferential reasoning through analyzing large data sets, thereby potentially facilitating more informed decision-making. However, a hindrance to such use of ML systems is that the predictive models created through ML are often complex, opaque, and poorly understood, even if the programs “learning” the models are simple, transparent, and well understood. ML models become difficult to trust, since lay-people, specialists, and even researchers have difficulties gauging the reasonableness, correctness, and reliability of the inferences performed. In this article, we argue that bridging this gap in the understanding of ML models and their reasonableness requires a focus on developing an improved methodology for their creation. This process has been likened to “alchemy” and criticized for involving a large degree of “black art,” owing to its reliance on poorly understood “best practices”. We soften this critique and argue that the seeming arbitrariness often is the result of a lack of explicit hypothesizing stemming from an empiricist and myopic focus on optimizing for predictive performance rather than from an occult or mystical process. We present some of the problems resulting from the excessive focus on optimizing generalization performance at the cost of hypothesizing about the selection of data and biases. We suggest embedding ML in a general logic of scientific discovery similar to the one presented by Charles Sanders Peirce, and present a recontextualized version of Peirce’s scientific hypothesis adjusted to ML.

