On Tractable XAI Queries based on Compiled Representations

Author(s):  
Gilles Audemard ◽  
Frédéric Koriche ◽  
Pierre Marquis

One of the key purposes of eXplainable AI (XAI) is to develop techniques for understanding the predictions made by Machine Learning (ML) models and for assessing how reliable they are. Several encoding schemas have recently been proposed, showing how ML classifiers of various types can be mapped to Boolean circuits exhibiting the same input-output behaviours. Thanks to such mappings, XAI queries about classifiers can be delegated to the corresponding circuits. In this paper, we define new explanation and/or verification queries about classifiers. We show how they can be addressed by combining queries and transformations on the associated Boolean circuits. Taking advantage of previous results from the knowledge compilation map, this allows us to identify a number of XAI queries that are tractable provided that the circuit has first been turned into a compiled representation.
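To make the kind of query involved concrete, here is a minimal, self-contained sketch, not the authors' pipeline: it hard-codes a toy Boolean classifier and decides whether a partial assignment is a sufficient reason (abductive explanation) for a prediction. For brevity it uses exhaustive enumeration instead of compiling the circuit into a tractable form such as d-DNNF, which is what makes the query scale; all names are illustrative.

```python
# Toy XAI query: is a partial assignment a sufficient reason for a prediction?
from itertools import product

def classifier(x1, x2, x3):
    # Stands in for a compiled Boolean circuit: f = (x1 AND x2) OR (NOT x1 AND x3)
    return (x1 and x2) or ((not x1) and x3)

def is_sufficient_reason(fixed, prediction):
    """fixed: dict mapping feature names to bools; checks every completion."""
    free = [v for v in ("x1", "x2", "x3") if v not in fixed]
    for bits in product([False, True], repeat=len(free)):
        assignment = dict(fixed, **dict(zip(free, bits)))
        if classifier(**assignment) != prediction:
            return False   # some completion flips the prediction
    return True

# Instance (x1=True, x2=True, x3=False) is classified True.
print(is_sufficient_reason({"x1": True, "x2": True}, True))  # True
print(is_sufficient_reason({"x1": True}, True))              # False
```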

2020 ◽  
Author(s):  
Markus Jaeger ◽  
Stephan Krügel ◽  
Dimitri Marinelli ◽  
Jochen Papenbrock ◽  
Peter Schwendner

2021 ◽  
Author(s):  
Jean-Jacques Ohana ◽  
Steve Ohana ◽  
Eric Benhamou ◽  
David Saltiel ◽  
Beatrice Guez

Author(s):  
Gaël Aglin ◽  
Siegfried Nijssen ◽  
Pierre Schaus

Decision Trees (DTs) are widely used Machine Learning (ML) models with a broad range of applications. Interest in these models has increased even further in the context of Explainable AI (XAI), as decision trees of limited depth are highly interpretable models. However, traditional algorithms for learning DTs are heuristic in nature; they may produce trees of suboptimal quality under depth constraints. We introduce PyDL8.5, a Python library for inferring depth-constrained Optimal Decision Trees (ODTs). PyDL8.5 provides an interface to DL8.5, an efficient algorithm for inferring depth-constrained ODTs. The library offers an easy-to-use, scikit-learn-compatible interface. It can be used not only for classification tasks but also for regression, clustering, and other tasks. We introduce an interface that allows users to easily implement these other learning tasks, and we provide a number of examples of how to use the library.
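As a usage illustration, here is a hedged sketch assuming the PyPI package pydl8.5 with import name pydl85 (earlier releases used dl85); class and parameter names may differ across versions, and DL8.5 expects binary features.

```python
# Sketch of depth-constrained optimal decision tree learning with DL8.5.
# Assumes the import path `pydl85` (older releases used `dl85`).
import numpy as np
from pydl85 import DL85Classifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 6))    # DL8.5 works on binary features
y = (X[:, 0] & X[:, 2]).astype(int)      # target: a simple conjunction

clf = DL85Classifier(max_depth=3)        # depth constraint on the ODT
clf.fit(X, y)                            # familiar scikit-learn style API
print(clf.predict(X[:5]))
```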


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Katharina Weitz

Human-Centered AI is a widely requested goal for AI applications. To reach it, explainable AI (XAI) promises to help humans understand the inner workings and decisions of AI systems. While different XAI techniques have been developed to shed light on AI systems, it is still unclear how end-users with no experience in machine learning perceive them. Psychological concepts like trust, mental models, and self-efficacy can serve as instruments to evaluate XAI approaches in empirical studies with end-users. First results in applications for education, healthcare, and industry suggest that one XAI approach does not fit all. Instead, the design of XAI has to consider user needs, personal background, and the specific task of the AI system.


2018 ◽  
Vol 14 (5) ◽  
pp. 20170660 ◽  
Author(s):  
Ruth E. Baker ◽  
Jose-Maria Peña ◽  
Jayaratnam Jayamohan ◽  
Antoine Jérusalem

Ninety per cent of the world's data have been generated in the last 5 years (Machine learning: the power and promise of computers that learn by example. Report no. DES4702. Issued April 2017. Royal Society). A small fraction of these data is collected with the aim of validating specific hypotheses. These studies are driven by the development of mechanistic models focused on the causality of input–output relationships. However, the vast majority is aimed at supporting statistical or correlation studies that bypass the need for causality and focus exclusively on prediction. Along these lines, there has been a vast increase in the use of machine learning models, in particular in the biomedical and clinical sciences, to try to keep pace with the rate of data generation. Recent successes now raise the question of whether mechanistic models are still relevant in this area. Put differently, why should we try to understand the mechanisms of disease progression when we can use machine learning tools to directly predict disease outcome?


2008 ◽  
Vol 20 (5) ◽  
pp. 750-756
Author(s):  
Shingo Nakamura ◽  
Shuji Hashimoto

We describe the adaptive modeling of a physical system using the affine transform and its application to machine learning. We previously proposed a method for implementing machine learning on physical hardware, in which we built a simulator based on actual hardware input/output and used it to optimize a controller. The method reduces stress on the hardware because the controller is optimized in software via the simulator. Moreover, it requires no specific physical information about the hardware, and we did not need to formulate the hardware kinematics. When the hardware changes, however, optimization must be redone to rebuild the simulator, a clearly inefficient procedure. We therefore considered reusing previous optimization results when reoptimizing for new hardware. In a physical system, the shape of the phase space does not vary much as long as the system structure remains the same. We applied an affine transform to the phase space of the physical system to remodel the simulator for the new hardware characteristics resulting from parameter changes, and we used the remodeled simulator in machine learning to reoptimize the controller. In experiments, we evaluated our proposal on the swing-up pendulum problem, comparing it with the original method and finding that it accelerates reoptimization.
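A minimal sketch of the remodeling idea, under the assumption (ours, not necessarily the paper's) that the affine map is fitted by least squares between paired phase-space samples from the old and new hardware; all variable names are illustrative.

```python
# Fit an affine map x_new ≈ A @ x_old + b between paired state samples,
# then push old simulator states through it instead of rebuilding.
import numpy as np

def fit_affine(X_old, X_new):
    """Least-squares fit of A, b so that X_new ≈ X_old @ A.T + b."""
    ones = np.ones((X_old.shape[0], 1))
    M, *_ = np.linalg.lstsq(np.hstack([X_old, ones]), X_new, rcond=None)
    return M[:-1].T, M[-1]   # A, b

# Paired phase-space samples (e.g. pendulum angle, angular velocity)
rng = np.random.default_rng(0)
X_old = rng.standard_normal((100, 2))
A_true = np.array([[1.1, 0.0], [0.2, 0.9]])   # synthetic "new hardware"
b_true = np.array([0.05, -0.1])
X_new = X_old @ A_true.T + b_true

A, b = fit_affine(X_old, X_new)
X_remodeled = X_old @ A.T + b                 # remodeled simulator states
print(np.allclose(X_remodeled, X_new, atol=1e-8))   # True
```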


Significance: It required arguably the single largest computational effort for a machine learning model to date, and it is capable of producing text that is at times indistinguishable from the work of a human author. This has generated considerable excitement about potentially transformative business applications, as well as concerns about the system's weaknesses and possible misuse. Impacts: Stereotypes and biases in machine learning models will become increasingly problematic as these models are adopted by businesses and governments. The use of flawed AI tools that result in embarrassing failures risks cuts to public funding for AI research. Academia and industry face pressure to advance research into explainable AI, but progress is slow.


2021 ◽  
pp. 323-335
Author(s):  
James Hinns ◽  
Xiuyi Fan ◽  
Siyuan Liu ◽  
Veera Raghava Reddy Kovvuri ◽  
Mehmet Orcun Yalcin ◽  
...  
