Explainable Data Fusion
2021
Author(s): Bryce J. Murray

The recent resurgence of Artificial Intelligence (AI), specifically in applications like healthcare, security and defense, IoT, and other areas with a large impact on human life, has led to a demand for eXplainable AI (XAI). The production of explanations is argued to be a key aspect of achieving goals like trustworthiness and transparent, rather than opaque, AI. XAI is also of fundamental academic interest, as it helps us identify weaknesses in the pursuit of making better AI. Herein, I focus on one piece of the AI puzzle: information fusion. In this work, I propose XAI fusion indices, linguistic summaries (i.e., textual explanations) of these indices, and local explanations for the fuzzy integral. However, a limitation of these indices is that they are tailored to highly educated fusion experts, and it is not clear what to do with the resulting explanations. Herein, I extend the introduced indices to actionable explanations, which are demonstrated in the context of two case studies: multi-source fusion and deep learning for remote sensing. This work ultimately shows what XAI for fusion is and how to create actionable insights.
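The fuzzy integral named above is a family of aggregation operators; a commonly used member is the discrete Choquet integral, which fuses per-source confidences with respect to a fuzzy measure defined over subsets of sources. As a hedged illustration of the fusion step the indices describe (the source names, confidences, and the additive measure below are invented for this example, not taken from the paper):

```python
from itertools import combinations

def choquet_integral(values, measure):
    """Discrete Choquet integral of per-source confidences.

    values:  dict source -> confidence in [0, 1]
    measure: dict frozenset(sources) -> fuzzy measure g(A), monotone,
             with g(all sources) = 1
    """
    sources = sorted(values, key=values.get)  # ascending by confidence
    result, prev = 0.0, 0.0
    for i, s in enumerate(sources):
        tail = frozenset(sources[i:])         # sources at or above this level
        result += (values[s] - prev) * measure[tail]
        prev = values[s]
    return result

# Illustrative additive measure built from per-source weights
# (names and numbers are hypothetical).
weights = {"radar": 0.2, "optical": 0.5, "lidar": 0.3}
measure = {frozenset(S): sum(weights[s] for s in S)
           for r in range(1, len(weights) + 1)
           for S in combinations(weights, r)}
fused = choquet_integral({"radar": 0.2, "optical": 0.9, "lidar": 0.5}, measure)
print(round(fused, 2))  # additive measure -> weighted mean, here 0.64
```

For an additive measure the Choquet integral reduces to a weighted mean, a useful sanity check; with a non-additive learned measure, asking which sources and interactions drive the fused value is the kind of question the proposed fusion indices address.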

2021
Vol 13 (15)
pp. 2883
Author(s): Gwanggil Jeon

Remote sensing is a fundamental tool for comprehending the earth and supporting human–earth communications [...]


2021
Vol 14 (13)
Author(s): Ratna Kumari Vemuri, Pundru Chandra Shaker Reddy, B S Puneeth Kumar, Jayavadivel Ravi, Sudhir Sharma, ...

Smart Cities
2020
Vol 3 (4)
pp. 1353-1382
Author(s): Dhavalkumar Thakker, Bhupesh Kumar Mishra, Amr Abdullatif, Suvodeep Mazumdar, Sydney Simpson

Traditional Artificial Intelligence (AI) technologies used in developing smart-city solutions, Machine Learning (ML) and, more recently, Deep Learning (DL), rely more on representative training datasets and feature engineering and less on the available domain expertise. We argue that such an approach to solution development makes the outcome less explainable, i.e., it is often not possible to explain the results of the model. There is growing concern among city policymakers about this lack of explainability in AI solutions, which is considered a major hindrance to their wider acceptance and trust. In this work, we survey the concept of 'explainable deep learning' as a subset of the 'explainable AI' problem and propose a new solution using Semantic Web technologies, demonstrated with a smart-city flood monitoring application in the context of a European Commission-funded project. Monitoring of gullies and drainage in crucial geographical areas susceptible to flooding is an important aspect of any flood monitoring solution. Typical solutions involve cameras that capture real-time images of the affected areas containing objects such as leaves, plastic bottles, etc., and a DL-based classifier that detects such objects and classifies blockages based on their presence and coverage in the images. In this work, we uniquely propose an Explainable AI solution that combines DL and Semantic Web technologies into a hybrid classifier. In this hybrid classifier, the DL component detects object presence and coverage level, and semantic rules, designed in close consultation with experts, carry out the classification. By using expert knowledge of the flooding context, our hybrid classifier gains the flexibility to categorise an image in terms of the objects it contains and their coverage relationships.
The experimental results, demonstrated with a real-world use case, showed that this hybrid approach to image classification yields on average an 11% improvement in F-Measure over a DL-only classifier. It also has the distinct advantage of integrating experts' knowledge in defining the decision-making rules to represent complex circumstances and using that knowledge to explain the results.
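The hybrid pattern described above, a DL detector producing per-object coverage estimates followed by expert-designed rules, can be sketched as follows. The object labels, thresholds, and class names are invented for illustration; the paper's actual rules were elicited from domain experts.

```python
def classify_blockage(coverage):
    """Rule-based step of a hypothetical hybrid classifier.

    coverage: dict mapping DL-detected object labels to the fraction
              of the image they cover (0.0 to 1.0).
    Returns a blockage class; thresholds are illustrative only.
    """
    debris = coverage.get("leaves", 0.0) + coverage.get("plastic_bottles", 0.0)
    if debris > 0.5:            # gully largely covered by debris
        return "blocked"
    if debris > 0.2:            # noticeable but partial coverage
        return "partially_blocked"
    return "clear"

print(classify_blockage({"leaves": 0.4, "plastic_bottles": 0.2}))  # blocked
print(classify_blockage({"leaves": 0.1}))                          # clear
```

Because the decision lives in explicit rules rather than in network weights, each classification can be explained by stating which rule fired and which detected objects triggered it.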


Subject: Prospects for artificial intelligence applications.
Significance: Artificial intelligence (AI) technologies, particularly those using 'deep learning', have in the past five years helped to automate many tasks previously outside the capabilities of computers. There are signs that the feverish pace of progress seen recently is slowing.
Impacts: Western legislation will make companies responsible for preventing decisions based on biased AI. Advances in 'explainable AI' will be rapid. China will be a major research player in AI technologies, alongside the United States, Japan and Europe.


2021
Vol 11 (11)
pp. 1213
Author(s): Morteza Esmaeili, Riyas Vettukattil, Hasan Banitalebi, Nina R. Krogh, Jonn Terje Geitung

Primary malignancies in adult brains are globally fatal. Computer vision, especially recent developments in artificial intelligence (AI), has created opportunities to automatically characterize and diagnose tumor lesions in the brain. AI approaches have achieved unprecedented accuracy in different image analysis tasks, including differentiating tumor-containing brains from healthy brains. AI models, however, perform as black boxes, concealing the rationale behind their predictions, the interpretation of which is an essential step towards translating AI imaging tools into clinical routine. Explainable AI approaches aim to visualize the high-level features of trained models or to integrate interpretability into the training process. This study evaluates the performance of selected deep-learning algorithms in localizing tumor lesions and distinguishing the lesion from healthy regions in magnetic resonance imaging contrasts. Despite a significant correlation between classification and lesion localization accuracy (R = 0.46, p = 0.005), the known AI algorithms examined in this study classify some tumor brains based on non-relevant features. The results suggest that explainable AI approaches can develop an intuition for model interpretability and may play an important role in the performance evaluation of deep learning models. Developing explainable AI approaches will be an essential tool for improving human–machine interactions and assisting in the selection of optimal training methods.
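One common way to quantify the lesion-localization behaviour discussed above is to threshold a model's saliency map and compare it against the ground-truth lesion mask with an intersection-over-union (IoU) score. The sketch below assumes the saliency map has already been computed (e.g., by a method such as Grad-CAM); the quantile threshold is an illustrative choice, not the study's protocol.

```python
import numpy as np

def localization_iou(saliency, lesion_mask, q=0.9):
    """IoU between a thresholded saliency map and a lesion mask.

    saliency:    2-D array of saliency scores from a trained model
    lesion_mask: 2-D boolean array marking the ground-truth lesion
    q:           quantile used to binarize the saliency map
    """
    hot = saliency >= np.quantile(saliency, q)   # most salient pixels
    inter = np.logical_and(hot, lesion_mask).sum()
    union = np.logical_or(hot, lesion_mask).sum()
    return inter / union if union else 0.0

# Toy check: saliency concentrated exactly on the "lesion" gives IoU 1.0.
sal = np.arange(100, dtype=float).reshape(10, 10) / 100.0
mask = sal >= 0.90
print(localization_iou(sal, mask, q=0.9))  # 1.0
```

A model can score well on classification while its high-saliency region misses the lesion entirely, which is exactly the failure mode the study reports for some tumor brains.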


Author(s): M. Schmitt, L. H. Hughes, X. X. Zhu

Abstract. While deep learning techniques have an increasing impact on many technical fields, gathering sufficient amounts of training data is a challenging problem in remote sensing. In particular, this holds for applications involving data from multiple sensors with heterogeneous characteristics. One example is the fusion of synthetic aperture radar (SAR) data and optical imagery. With this paper, we publish the SEN1-2 dataset to foster deep learning research in SAR-optical data fusion. SEN1-2 comprises 282,384 pairs of corresponding image patches, collected from across the globe and throughout all meteorological seasons. Besides a detailed description of the dataset, we show exemplary results for several possible applications, such as SAR image colorization, SAR-optical image matching, and creation of artificial optical images from SAR input data. Since SEN1-2 is the first large open dataset of this kind, we believe it will support further developments in the field of deep learning for remote sensing as well as multi-sensor data fusion.


2020
pp. 1-38
Author(s): Amandeep Kaur, Anjum Mohammad Aslam

In this chapter, we discuss the core concepts of Artificial Intelligence. We define the term Artificial Intelligence and its interconnected terms, such as machine learning, deep learning, and neural networks. We describe the concept from the perspective of its usage in business. We further analyze various applications and case studies that can be realized using Artificial Intelligence and its subfields. Numerous Artificial Intelligence applications are already being utilized in business, and even more are expected in the future, with machines augmenting human abilities in areas such as artificial intelligence, natural language processing, and machine learning across various domains.


2020
Author(s): Tie Jun Cui, Che Liu, Qian Ma, Zhangjie Luo, Qiaoru Hong, ...

Abstract: Artificial intelligence is facilitating human life in many aspects. Previous artificial intelligence has mainly focused on computer algorithms (e.g., deep learning and extreme learning) and integrated circuits. Recently, all-optical diffractive deep neural networks (D2NN) were realized using passive structures, which can perform complicated functions designed by computer-based neural networks at the speed of light. However, once a passive D2NN architecture is fabricated, its function is fixed. Here, we propose a programmable artificial intelligence machine (PAIM) that can execute various intellectual tasks by realizing hierarchical connections of brain neurons via a multi-layer digital-coding metasurface array. With two amplifier chips integrated in each meta-atom, the transmission coefficient covers a dynamic range of 35 dB (from -40 dB to -5 dB), which is the basis for constructing the reprogrammable physical layers of the D2NN, in which the digital meta-atoms bring the artificial neurons to life. We experimentally show that PAIM can handle various deep-learning tasks for wave sensing, including image classification, mobile-communication encoding and decoding, and real-time multi-beam focusing. In particular, we propose a reinforcement learning algorithm for on-site learning and a discrete optimization algorithm for digital coding, giving PAIM autonomous intelligence and the ability to perform self-learning tasks without the support of an external computer.
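The layered architecture described above, free-space diffraction between metasurface layers whose per-element transmission coefficients are programmable, can be sketched numerically. The propagation step below uses the standard angular-spectrum method; the wavelength, layer spacing, and mask values are placeholders, not the PAIM hardware parameters.

```python
import numpy as np

def d2nn_forward(field, masks, wavelength=1.0, dz=50.0, pitch=1.0):
    """Propagate a complex field through a stack of programmable layers.

    field: 2-D complex input field (one sample per meta-atom)
    masks: list of 2-D complex transmission masks, one per layer
    Free-space steps use the angular-spectrum method; evanescent
    components are discarded.
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    # Transfer function for propagating components only.
    h = np.exp(2j * np.pi / wavelength * dz * np.sqrt(np.maximum(arg, 0.0)))
    h = h * (arg > 0)
    for mask in masks:
        field = np.fft.ifft2(np.fft.fft2(field) * h)  # diffract to next layer
        field = field * mask                          # programmable transmission
    return field

# A uniform plane wave through an all-pass layer keeps unit amplitude.
out = d2nn_forward(np.ones((8, 8), dtype=complex), [np.ones((8, 8))])
print(np.allclose(np.abs(out), 1.0))  # True
```

Training such a network amounts to optimizing the masks; the programmability emphasized in the abstract means these masks can be rewritten at run time instead of being fixed at fabrication.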

