Ethical Responsibility vs. Ethical Responsiveness in Conscious and Unconscious Communication Agents

Proceedings ◽  
2020 ◽  
Vol 47 (1) ◽  
pp. 68
Author(s):  
Gianfranco Basti

In this contribution, I start from Levy’s valuable suggestion in neuroethics that we distinguish between the “slow-conscious responsibility” of us as persons and the “fast-unconscious responsiveness” of the sub-personal brain mechanisms studied in the cognitive neurosciences. Both, however, are accountable for how they respond to environmental (physical, social, and ethical) constraints. I propose to extend Levy’s suggestion to a fundamental distinction between the “moral responsibility of conscious communication agents” and the “ethical responsiveness of unconscious communication agents”, such as our brains but also AI decision supports. Both can indeed be included in the category of the “sub-personal modules” of our moral agency as persons. I show the relevance of this distinction, also from the logical and computational standpoints, in both the neurosciences and computer science, for the current debate about an ethically accountable AI. Machine learning algorithms, when applied to automated supports for decision-making processes in several social, political, and economic spheres, are by no means “value-free” or “amoral”. They must satisfy a requirement of ethical responsiveness to avoid what has been called the unintended, but real, “algorithmic injustice”.


2019 ◽  
Vol 46 (3) ◽  
pp. 205-211 ◽  
Author(s):  
Thomas Grote ◽  
Philipp Berens

In recent years, a plethora of high-profile scientific publications has reported machine learning algorithms outperforming clinicians in medical diagnosis or treatment recommendations. This has sparked interest in deploying such algorithms with the aim of enhancing decision-making in healthcare. In this paper, we argue that, instead of straightforwardly enhancing the decision-making capabilities of clinicians and healthcare institutions, deploying machine learning algorithms entails trade-offs at the epistemic and the normative level. Whereas involving machine learning might improve the accuracy of medical diagnosis, it comes at the expense of opacity when trying to assess the reliability of a given diagnosis. Drawing on literature in social epistemology and moral responsibility, we argue that the uncertainty in question potentially undermines the epistemic authority of clinicians. Furthermore, we elucidate potential pitfalls of involving machine learning in healthcare with respect to paternalism, moral responsibility, and fairness. Finally, we discuss how the deployment of machine learning algorithms might shift the evidentiary norms of medical diagnosis. In this regard, we hope to lay the grounds for further ethical reflection on the opportunities and pitfalls of machine learning for enhancing decision-making in healthcare.
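
The accuracy/opacity trade-off the authors describe can be made concrete with a minimal sketch: an interpretable model whose per-feature weights a clinician can inspect versus a more accurate but opaque one. The dataset and model choices below are illustrative assumptions, not the paper's own experiment.

```python
# A minimal sketch of the accuracy/opacity trade-off, using scikit-learn's
# breast-cancer dataset as a stand-in for clinical data (an assumption).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable model: one inspectable coefficient per feature.
interpretable = LogisticRegression(max_iter=5000).fit(X_train, y_train)
# Opaque model: often higher accuracy, but no comparable per-feature story.
opaque = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", interpretable.score(X_test, y_test))
print("random forest accuracy:      ", opaque.score(X_test, y_test))
print("inspectable coefficients:", interpretable.coef_.shape)
# The forest exposes feature_importances_, but no explicit decision rule.
```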


2021 ◽  
Vol 11 (8) ◽  
pp. 3296
Author(s):  
Musarrat Hussain ◽  
Jamil Hussain ◽  
Taqdir Ali ◽  
Syed Imran Ali ◽  
Hafiz Syed Muhammad Bilal ◽  
...  

Clinical Practice Guidelines (CPGs) aim to optimize patient care by assisting physicians during the decision-making process. However, guideline adherence is strongly affected by CPGs' unstructured format and by the aggregation of background information with disease-specific information. The objective of our study is to extract disease-specific information from CPGs to enhance their adherence rate. In this research, we propose a semi-automatic mechanism for extracting disease-specific information from CPGs using pattern-matching techniques. We apply supervised and unsupervised machine-learning algorithms to a CPG to extract a list of salient terms that help distinguish recommendation sentences (RS) from non-recommendation sentences (NRS). Simultaneously, a group of experts analyzes the same CPG and extracts initial "heuristic patterns" using a group decision-making method, the nominal group technique (NGT). We then provide the list of salient terms to the experts and ask them to refine their extracted patterns in light of these terms. The extracted heuristic patterns depend on specific terms and suffer from a specialization problem due to synonymy and polysemy. Therefore, we generalize the heuristic patterns to part-of-speech (POS) patterns and Unified Medical Language System (UMLS) patterns, which makes the proposed method applicable to all types of CPGs. We evaluated the initially extracted patterns on asthma, rhinosinusitis, and hypertension guidelines, with accuracies of 76.92%, 84.63%, and 89.16%, respectively; accuracy increased to 78.89%, 85.32%, and 92.07%, respectively, with the refined, machine-learning-assisted patterns. Our system assists physicians by locating disease-specific information in CPGs, which enhances physicians' performance and reduces CPG processing time. It is also beneficial for CPG content annotation.
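
The "salient term" step can be sketched as a supervised text classifier whose largest weights rank the terms that separate RS from NRS. The tiny corpus and labels below are invented for illustration; the paper's full pipeline (NGT, POS/UMLS generalization) is not reproduced here.

```python
# A hedged sketch: rank candidate salient terms with a linear classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np

sentences = [
    "Clinicians should prescribe inhaled corticosteroids for persistent asthma.",
    "We recommend antibiotic therapy for acute bacterial rhinosinusitis.",
    "Asthma is a chronic inflammatory disease of the airways.",
    "Hypertension prevalence has increased over the last decade.",
]
labels = [1, 1, 0, 0]  # 1 = recommendation sentence (RS), 0 = background (NRS)

vec = TfidfVectorizer(lowercase=True)
X = vec.fit_transform(sentences)
clf = LogisticRegression().fit(X, labels)

# Terms with the largest positive weights are candidate "salient terms"
# that experts could use to refine their heuristic patterns.
terms = np.array(vec.get_feature_names_out())
order = np.argsort(clf.coef_[0])[::-1]
print("top salient terms:", terms[order[:5]].tolist())
```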


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Alan Brnabic ◽  
Lisa M. Hess

Abstract Background Machine learning is a broad term encompassing a number of methods that allow the investigator to learn from the data. These methods may permit large real-world databases to be more rapidly translated into applications that inform patient-provider decision making. Methods This systematic literature review was conducted to identify published observational research that employed machine learning to inform decision making at the patient-provider level. The search strategy was implemented, and studies meeting the eligibility criteria were evaluated by two independent reviewers. Relevant data related to study design, statistical methods, and strengths and limitations were identified; study quality was assessed using a modified version of the Luo checklist. Results A total of 34 publications from January 2014 to September 2020 were identified and evaluated for this review. Diverse methods, statistical packages, and approaches were used across the identified studies. The most common methods included decision tree and random forest approaches. Most studies applied internal validation, but only two conducted external validation. Most studies utilized a single algorithm; only eight applied multiple machine learning algorithms to the data. Seven items on the Luo checklist were not met by more than 50% of the published studies. Conclusions A wide variety of approaches, algorithms, statistical software, and validation strategies were employed in the application of machine learning methods to inform patient-provider decision making. Multiple machine learning approaches should be used, the model selection strategy should be clearly defined, and both internal and external validation are needed to ensure that decisions for patient care are made with the highest-quality evidence. Future work should routinely employ ensemble methods incorporating multiple machine learning algorithms.
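
The review's closing recommendation, combining several algorithms in an ensemble and validating internally, can be sketched as follows. Data and model choices are illustrative assumptions; external validation would additionally require an independent cohort.

```python
# A minimal sketch: a soft-voting ensemble of several algorithms,
# internally validated via cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
        ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("logit", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",  # average predicted probabilities across algorithms
)

# Internal validation only; an external cohort would be needed as well.
scores = cross_val_score(ensemble, X, y, cv=5)
print("5-fold CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```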


2021 ◽  
Vol 9 (5) ◽  
pp. 538
Author(s):  
Jinwan Park ◽  
Jung-Sik Jeong

According to statistics on maritime collision accidents over the last five years (2016–2020), 95% of all maritime collision accidents were caused by human factors. Machine learning algorithms are an emerging approach to judging the risk of collision among vessels and to supporting reliable decision-making prior to collision-avoidance maneuvers; as a result, they can be a good way to reduce errors caused by navigators' carelessness. This article proposes an enhanced machine learning method to estimate ship collision risk and to support more reliable decision-making about it. To estimate ship collision risk, the conventional support vector machine (SVM) was first applied. Despite the SVM's ability to resolve the uncertainty problem using the collected ships' parameters, it has inherent weaknesses. In this study, the relevance vector machine (RVM), which can produce reliable probabilistic results based on Bayesian theory, was therefore applied to estimate the collision risk, and the results were compared with those of the SVM. The comparison showed that the estimation model using the RVM is more accurate and efficient than the model using the SVM. We expect to support the navigator's reasonable decision-making through more accurate risk estimation, allowing early evasive action.
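
The contrast the paper draws, hard SVM decision values versus Bayesian probabilistic risk estimates, can be sketched as below. scikit-learn ships no RVM, so a GaussianProcessClassifier stands in here as the Bayesian, probability-producing model; the synthetic DCPA/TCPA-style features and risk labels are invented for illustration.

```python
# A hedged sketch contrasting an SVM's decision value with a Bayesian
# classifier's calibrated collision-risk probability.
import numpy as np
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier

rng = np.random.default_rng(0)
# Columns: distance at closest point of approach (nm), time to CPA (min).
X = rng.uniform([0.0, 0.0], [5.0, 30.0], size=(200, 2))
y = ((X[:, 0] < 2.0) & (X[:, 1] < 15.0)).astype(int)  # 1 = collision risk

svm = SVC().fit(X, y)                          # signed decision values only
bayes = GaussianProcessClassifier().fit(X, y)  # probabilistic outputs

encounter = np.array([[0.8, 8.0]])  # a close, imminent encounter
print("SVM decision value:", svm.decision_function(encounter)[0])
print("Bayesian P(risk):  ", bayes.predict_proba(encounter)[0, 1])
```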


Author(s):  
Viktor Elliot ◽  
Mari Paananen ◽  
Miroslaw Staron

We propose an exercise with the purpose of providing a basic understanding of key concepts within AI and extending the understanding of AI beyond the mathematics. The exercise allows participants to carry out analyses of accounting data using visualization tools, as well as to develop their own machine learning algorithms that can mimic their decisions. Finally, we problematize the use of AI in decision-making with respect to such aspects as biases in the data and ethical concerns.
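
The core step of such an exercise, fitting a model that mimics a participant's own decisions on accounting data, might look like the sketch below. The two ratio features and the hand-made labels are illustrative assumptions, not the authors' materials.

```python
# A minimal sketch: a shallow decision tree that mimics a participant's
# accept/reject decisions, kept readable so biases in the labels can be
# discussed.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [current_ratio, debt_to_equity]; label: participant's decision.
X = [
    [2.1, 0.4], [1.8, 0.6], [0.9, 2.5], [1.2, 1.9],
    [2.5, 0.3], [0.7, 3.0], [1.5, 1.0], [0.8, 2.2],
]
y = [1, 1, 0, 0, 1, 0, 1, 0]  # 1 = approve credit, 0 = decline

mimic = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Printing the learned rule exposes the mimicked decision logic.
print(export_text(mimic, feature_names=["current_ratio", "debt_to_equity"]))
```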


Author(s):  
Pragya Paudyal ◽  
B.L. William Wong

In this paper, we introduce the problem of algorithmic opacity and the challenges it presents to ethical decision-making in criminal intelligence analysis. Machine learning algorithms have played important roles in the decision-making process over the past decades. Intelligence analysts are increasingly presented with smart black-box automation that uses machine learning algorithms to find patterns or interesting and unusual occurrences in big data sets. Algorithmic opacity is the lack of visibility into computational processes such that humans are not able to inspect a system's inner workings to ascertain for themselves how its results and conclusions were computed. This problem leads to several ethical issues. In the VALCRI project, we developed an abstraction hierarchy and an abstraction decomposition space to identify important functional relationships and system invariants in relation to ethical goals. Such explanatory relationships can be valuable for making algorithmic processes transparent during criminal intelligence analysis.
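
One model-agnostic way to push back against the opacity described here is to probe a black-box model and expose which inputs drive its output. The sketch below uses permutation importance as that probe; the dataset and model are illustrative assumptions, not the VALCRI system itself.

```python
# A hedged sketch: shuffle one feature at a time and measure the accuracy
# drop, a crude but model-agnostic view into an opaque computation.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

result = permutation_importance(black_box, X_te, y_te, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {imp:.3f}")
```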

