Primed Prediction: A Critical Examination of the Consequences of Exclusion of the Ontological Now in AI Protocol

2021 ◽  
pp. 183-201
Author(s):  
Carrie O’Connell ◽  
Chad Van de Wiele

Revisiting Norbert Wiener’s cybernetic prediction as the theoretical foundation of AI, this chapter makes a plea for uncovering the black box of what lies behind prediction and simulation. It explores the shortcomings of cybernetic prediction through the lens of Jean Baudrillard’s simulacra and simulation. Specifically, what prediction excludes – namely, an accounting for the ontological now – is what Baudrillard warned against in his analysis of the role technological innovations play in untethering reality from the material plane, leading to a crisis of the simulacrum of experience. From this perspective, any deep-learning system rooted in Wiener’s view of cybernetic feedback loops risks creating behaviour as much as predicting it. As this chapter will argue, such prediction is a narrow, self-referential system of feedback that ultimately becomes a self-fulfilling prophecy girded by the psycho-social effects of the very chaos it seeks to rationalise.
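
A toy illustration of that self-fulfilling dynamic (an editor’s construction, not the chapter’s): a system that acts on its own predictions and thereby nudges the behaviour it claims merely to predict. All parameters are arbitrary.

```python
import random

# Toy feedback loop: the system recommends what it predicts, and exposure
# shifts the user's actual preference toward the prediction, so the
# "prediction" partly creates the behaviour it measures. Illustrative only.
random.seed(1)
preference = 0.5             # user's true probability of choosing item A
prediction = 0.9             # system's initial belief that the user wants A

for _ in range(50):
    if random.random() < prediction:               # system acts on its belief
        preference = 0.95 * preference + 0.05      # exposure nudges behaviour
    chose_a = random.random() < preference
    prediction = 0.9 * prediction + 0.1 * chose_a  # feedback closes the loop

print(round(preference, 2), round(prediction, 2))  # both drift upward together
```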

Endoscopy ◽  
2020 ◽  
Author(s):  
Alanna Ebigbo ◽  
Robert Mendel ◽  
Tobias Rückert ◽  
Laurin Schuster ◽  
Andreas Probst ◽  
...  

Background and aims: The accurate differentiation between T1a and T1b Barrett’s cancer has both therapeutic and prognostic implications but is challenging even for experienced physicians. We trained an Artificial Intelligence (AI) system based on deep artificial neural networks (deep learning) to differentiate between T1a and T1b Barrett’s cancer on white-light images. Methods: Endoscopic images from three tertiary care centres in Germany were collected retrospectively. A deep learning system was trained and tested using the principles of cross-validation. A total of 230 white-light endoscopic images (108 T1a and 122 T1b) were evaluated with the AI system. For comparison, the images were also classified by experts specialized in the endoscopic diagnosis and treatment of Barrett’s cancer. Results: The sensitivity, specificity, F1 score and accuracy of the AI system in differentiating between T1a and T1b cancer lesions were 0.77, 0.64, 0.73 and 0.71, respectively. There was no statistically significant difference between the performance of the AI system and that of the human experts, whose sensitivity, specificity, F1 score and accuracy were 0.63, 0.78, 0.67 and 0.70, respectively. Conclusion: This pilot study demonstrates the first multicenter application of an AI-based system to the prediction of submucosal invasion in endoscopic images of Barrett’s cancer. The AI system scored on par with international experts in the field, but more work is necessary to improve the system and to apply it to video sequences and real-life settings. Nevertheless, the correct prediction of submucosal invasion in Barrett’s cancer remains challenging for both experts and AI.
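
For readers less familiar with these metrics, the sketch below shows the standard definitions computed from a binary confusion matrix; the counts are invented for illustration and are not the study’s data.

```python
# Standard binary-classification metrics from a confusion matrix.
# The counts below are hypothetical, chosen only to be of similar
# magnitude to the reported values; they are not the study's data.

def binary_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)               # recall on the positive class
    specificity = tn / (tn + fp)               # recall on the negative class
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, f1, accuracy

sens, spec, f1, acc = binary_metrics(tp=77, fp=36, tn=64, fn=23)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} f1={f1:.2f} accuracy={acc:.2f}")
# sensitivity=0.77 specificity=0.64 f1=0.72 accuracy=0.70
```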


Author(s):  
Abraham Rudnick

Artificial intelligence (AI) and its correlates, such as machine and deep learning, are changing health care, where complex matters such as comorbidity call for dynamic decision-making. Yet some people argue for extreme caution, referring to AI and its correlates as a black box. This brief article uses philosophy and science to address the black box argument as a myth about knowledge, concluding that the argument is misleading because it ignores a fundamental tenet of science, i.e., that no empirical knowledge is certain, and that scientific facts – as well as methods – often change. Instead, control of the technology of AI and its correlates has to be addressed in order to mitigate its unexpected negative consequences.


2020 ◽  
Vol 73 (4) ◽  
pp. 275-284
Author(s):  
Dukyong Yoon ◽  
Jong-Hwan Jang ◽  
Byung Jin Choi ◽  
Tae Young Kim ◽  
Chang Ho Han

Biosignals such as the electrocardiogram or photoplethysmogram are widely used for determining and monitoring the medical condition of patients. It was recently discovered that more information can be gathered from biosignals by applying artificial intelligence (AI). At present, one of the most impactful advancements in AI is deep learning. Deep learning-based models can extract important features from raw data without human feature engineering, provided the amount of data is sufficient. This capability presents opportunities to obtain latent information that may be used as a digital biomarker for detecting or predicting a clinical outcome or event without further invasive evaluation. However, the black-box nature of deep learning models is difficult to understand for clinicians familiar with conventional methods of biosignal analysis. A basic knowledge of AI and machine learning is required for clinicians to properly interpret the extracted information and to adopt it in clinical practice. This review covers the basics of AI and machine learning and the feasibility of their application by clinicians to real-life situations in the near future.
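
As a concrete sketch of that feature-extraction idea, a small one-dimensional convolutional network can map a raw biosignal segment directly to a clinical label. The architecture, sampling rate and label below are an editor’s assumptions, not taken from the review.

```python
import torch
import torch.nn as nn

# Minimal sketch: a 1-D CNN that learns features from a raw biosignal
# (e.g. a single-lead ECG) with no hand-crafted feature engineering.
# All dimensions are illustrative assumptions.
class BiosignalNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),        # one learned summary per channel
        )
        self.classifier = nn.Linear(32, 1)  # logit for a clinical outcome/event

    def forward(self, x):                   # x: (batch, 1, samples) raw signal
        z = self.features(x).squeeze(-1)    # (batch, 32): a learned "biomarker"
        return self.classifier(z)

model = BiosignalNet()
logits = model(torch.randn(8, 1, 2500))     # e.g. 10 s of ECG at 250 Hz (assumed)
print(logits.shape)                         # torch.Size([8, 1])
```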


Author(s):  
Mehreen Sirshar ◽  
Syeda Hafsa Ali ◽  
Haleema Sadia Baig

Over the last few decades there has been exponential growth in IT, motivating IT professionals and scientists to explore new dimensions and advancing artificial intelligence and its subfields, such as computer vision, deep learning and augmented reality. AR is a comparatively new area that was initially explored for gaming, but recently much work has been done on its use in education, most of it focused on improving students’ understanding and motivation. Like any other project, the performance of an AR-based project is determined by customer satisfaction, which is usually governed by the triple constraints of cost, time and scope. Many studies have shown that projects stall in development because they are unable to overcome these constraints and meet project objectives. We were unable to find any notable work on project management for augmented reality systems and applications. Therefore, in this paper, we propose a system for the management of AR applications that focuses mainly on handling the triple constraints to meet the desired objectives. Each constraint is further divided into subprocesses, and by following these processes successful completion of the project can be achieved.
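
A minimal sketch of the triple constraints as a checkable structure; the field names and figures are invented for illustration, and the paper’s actual subprocesses are richer than this.

```python
from dataclasses import dataclass

# Hypothetical illustration of tracking a project's triple constraints;
# the fields and the example figures are invented, not from the paper.
@dataclass
class ProjectStatus:
    cost_spent: float
    cost_budget: float
    days_elapsed: int
    days_planned: int
    scope_done: int      # requirements completed
    scope_total: int     # requirements agreed with the customer

    def within_constraints(self) -> bool:
        return (self.cost_spent <= self.cost_budget
                and self.days_elapsed <= self.days_planned
                and self.scope_done >= self.scope_total)

status = ProjectStatus(9_000, 10_000, 80, 90, 42, 45)
print(status.within_constraints())   # False: on budget and on time, scope unmet
```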


2019 ◽  
Vol 87 (2) ◽  
pp. 27-29
Author(s):  
Meagan Wiederman

Artificial intelligence (AI) is the ability of any device to take an input, such as that of its environment, and work to achieve a desired output. Some advancements in AI have focused on replicating the human brain in machinery. This is being made possible by the Human Connectome Project: an initiative to map all the connections between neurons within the brain. A full replication of the thinking brain would inherently create something that could be argued to be a thinking machine. However, it is more interesting to question whether a non-biologically faithful AI could be considered a thinking machine. Under Turing’s definition of ‘thinking’, a machine that can be mistaken for a human when responding in writing from a “black box,” where it cannot be viewed, can be said to pass for thinking. Backpropagation, an error-minimizing algorithm used to train AI for feature detection, has no biological counterpart yet is prevalent in AI. The recent success of backpropagation demonstrates that biological faithfulness is not required for deep learning or ‘thought’ in a machine. Backpropagation has been used in medical image compression algorithms and in pharmacological modelling.
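
To make the algorithm concrete, here is a minimal NumPy sketch of backpropagation (an editor’s illustration, not from the article): output error is propagated backwards through the chain rule to adjust the weights, with no biological counterpart to the backward pass. Network size, seed and learning rate are arbitrary.

```python
import numpy as np

# Minimal backpropagation sketch: a two-layer network learns XOR by
# propagating output error backwards through the chain rule.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)                 # hidden activations
    out = sigmoid(h @ W2 + b2)               # network prediction
    # Backward pass: chain rule from squared-error loss to each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates (the error-minimizing step)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2))   # should approach [[0], [1], [1], [0]]
```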


Author(s):  
Evren Dağlarli

Explainable artificial intelligence (xAI) is one of the interesting issues that have emerged recently. Many researchers are approaching the subject from different dimensions, and interesting results have come out, yet we are still at the beginning of the road to understanding these types of models. The forthcoming years are expected to be ones in which the openness of deep learning models is widely discussed. In classical artificial intelligence approaches, we frequently encounter the deep learning methods available today. These deep learning methods can yield highly effective results depending on the dataset size, dataset quality, the methods used in feature extraction, the hyperparameter set used in the deep learning models, the activation functions, and the optimization algorithms. However, current deep learning models have important shortcomings. These artificial neural network-based models are black-box models that generalize from the data transmitted to them and learn from the data. Therefore, the relational link between input and output is not observable. This is an important open point in artificial neural networks and deep learning models. For these reasons, serious effort is needed on the explainability and interpretability of black-box models.
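
One simple way to probe that unobservable input-output link is gradient-based saliency, sketched below. The model is an untrained stand-in, and this is only one xAI technique among many, not the article’s proposal.

```python
import torch
import torch.nn as nn

# Sketch of one basic xAI technique: input-gradient saliency.
# The model here is an untrained stand-in; in practice, load a trained one.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
model.eval()

x = torch.randn(1, 10, requires_grad=True)    # one input example
score = model(x).sum()                        # black-box output
score.backward()                              # backpropagate to the input itself

# |d output / d input_i| ranks features by their local influence on the
# prediction, exposing a rough, observable input-output link.
saliency = x.grad.abs().squeeze()
print(saliency.argsort(descending=True))      # most influential features first
```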


2018 ◽  
Vol 16 (4) ◽  
pp. 306-327 ◽  
Author(s):  
Imdat As ◽  
Siddharth Pal ◽  
Prithwish Basu

Artificial intelligence, and in particular machine learning, is a fast-emerging field. Research on artificial intelligence focuses mainly on image-, text- and voice-based applications, leading to breakthrough developments in self-driving cars, voice recognition algorithms and recommendation systems. In this article, we present research on an alternative graph-based machine learning system that deals with three-dimensional space, which is more structured and combinatorial than images, text or voice. Specifically, we present a function-driven deep learning approach to generating conceptual designs. We trained and used deep neural networks to evaluate existing designs encoded as graphs, extract significant building blocks as subgraphs and merge them into new compositions. Finally, we explored the application of generative adversarial networks to generate entirely new and unique designs.
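
A rough sketch of the data representation behind this approach; the node names and the scoring function are hypothetical stand-ins for the trained deep-network evaluator described in the article.

```python
import networkx as nx

# A design encoded as a graph: nodes are spaces, edges are adjacencies.
# The scoring function is a stand-in for the learned evaluator.
design = nx.Graph()
design.add_edges_from([
    ("entry", "living"), ("living", "kitchen"),
    ("living", "bedroom"), ("bedroom", "bath"),
])

def score(g):                     # placeholder for the trained network
    return g.number_of_edges() / g.number_of_nodes()

# Extract candidate building blocks: the neighbourhood subgraph of each node.
blocks = [nx.ego_graph(design, n, radius=1) for n in design]
best = max(blocks, key=score)
print(sorted(best.nodes()))       # the highest-scoring building block

# Merge two blocks into a new composition (graph union on shared nodes).
merged = nx.compose(blocks[0], blocks[1])
print(sorted(merged.nodes()))
```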


2021 ◽  
Author(s):  
Kevin Robert McKee ◽  
Xuechunzi Bai ◽  
Susan Fiske

Artificial intelligence increasingly suffuses everyday life. However, people are frequently reluctant to interact with A.I. systems. This challenges both the deployment of beneficial A.I. technology and the development of deep learning systems that depend on humans for oversight, direction, and training. Previously neglected but fundamental, social-cognitive processes guide human interactions with A.I. systems. In five behavioral studies (N = 3,099), warmth and competence feature prominently in participants’ impressions of artificially intelligent systems. Judgments of warmth and competence systematically depend on human-A.I. interdependence. In particular, participants perceive systems that optimize interests aligned with human interests as warmer and systems that operate independently from human direction as more competent. Finally, a prisoner’s dilemma game shows that warmth and competence judgments predict participants’ willingness to cooperate with a deep learning system. These results demonstrate the generality of intent detection to interactions with technological actors. Researchers and developers should carefully consider the degree and alignment of interdependence between humans and new artificial intelligence systems.
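
For readers unfamiliar with the paradigm, a prisoner’s dilemma pits mutual cooperation against individual temptation. The payoff values below follow the generic textbook ordering (T > R > P > S) and are not the stakes used in the study.

```python
# Generic prisoner's dilemma payoffs (row player, column player).
# Textbook values, not the study's actual incentives.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),   # mutual cooperation (R)
    ("cooperate", "defect"):    (0, 5),   # sucker vs. temptation (S, T)
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),   # mutual defection (P)
}

def play(human_move, ai_move):
    return PAYOFFS[(human_move, ai_move)]

print(play("cooperate", "defect"))   # (0, 5): cooperating is a bet on the AI
```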


2020 ◽  
pp. 30-37
Author(s):  
Anandakumar Haldorai ◽  
Shrinand Anandakumar

The ideology of explainability in Artificial Intelligence (AI) is a prevailing issue that requires attention in the healthcare sector. The issue of explainability is as ancient as AI itself: early AI signified an understandable, retraceable technique, but its demerit lay in handling the uncertainties of the actual world. With the advent of probabilistic learning, applications have become successful yet considerably opaque. Explainable AI handles the implementation of traceability and transparency for statistical black-box techniques of Machine Learning (ML), particularly Deep Learning (DL). Based on the approach of this paper, it can be argued that researchers need to go beyond explainable AI. To accomplish the dimension of explainability in the healthcare sector, causability aspects have to be incorporated. In the same manner that usability incorporates measurements for the quality of use, causability incorporates the evaluation of explainable quality. In this research, we provide a number of fundamental definitions to effectively discriminate between causability and explainability, including an application case of DL and human comprehensibility in the field of histopathology. The fundamental contribution of this paper is the ideology of causability as differentiated from the notion of explainability, whereby causability is a property of a person, whereas explainability is a property of a system.

