Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World

2020 ◽  
Author(s):  
Forrest Morgan ◽  
Benjamin Boudreaux ◽  
Andrew Lohn ◽  
Mark Ashby ◽  
Christian Curriden ◽  
...  

2021 ◽  
pp. medethics-2020-106820 ◽  
Author(s):  
Juan Manuel Durán ◽  
Karin Rolanda Jongsma

The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Black box algorithms give rise to concerns about potential bias, accountability and responsibility, patient autonomy, and compromised trust. These worries connect epistemic concerns with normative issues. In this paper, we argue that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By showing that more transparency in algorithms is not always necessary, and that computational processes are indeed methodologically opaque to humans, we argue that the reliability of algorithms provides reasons for trusting the outcomes of medical artificial intelligence (AI). To this end, we explain how computational reliabilism, which does not require transparency and supports the reliability of algorithms, justifies the belief that the results of medical AI are to be trusted. We also argue that several ethical concerns remain with black box algorithms even when their results are trustworthy. Having justified knowledge from reliable indicators is therefore necessary but not sufficient for normatively justifying physicians to act. This means that deliberation about the results of reliable algorithms is required to determine what action is desirable. Thus understood, such challenges should not lead to dismissing black box algorithms altogether, but should inform the way in which these algorithms are designed and implemented. When physicians are trained to acquire the necessary skills and expertise, and collaborate with medical informaticians and data scientists, black box algorithms can contribute to improving medical care.


Author(s):  
Sam Hepenstal ◽  
Leishi Zhang ◽  
Neesha Kodogoda ◽  
B.L. William Wong

Criminal investigations are guided by repetitive and time-consuming information retrieval tasks, often with high risk and high consequence. If artificial intelligence (AI) systems can automate lines of inquiry, they could reduce the burden on analysts and allow them to focus their efforts on analysis. However, there is a critical need for algorithmic transparency to address ethical concerns. In this paper, we use data gathered from Cognitive Task Analysis (CTA) interviews with criminal intelligence analysts and apply a novel analysis method to elicit question networks. We show how these networks form an event tree in which events are consolidated by capturing analyst intentions. The event tree is simplified with a Dynamic Chain Event Graph (DCEG) that provides a foundation for transparent autonomous investigations.
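The question-network-to-event-tree step lends itself to a small illustration. The sketch below is not the authors' implementation: the question labels, the prefix-merging construction, and the `intention_map` consolidation are hypothetical stand-ins for the CTA-derived question networks and the intention-based consolidation described in the abstract.

```python
def build_event_tree(question_sequences):
    """Build a simple event tree from sequences of analyst questions.

    Each sequence lists question labels in the order asked; sequences
    sharing a common prefix are merged, so repeated lines of inquiry
    collapse into shared branches (a minimal, hypothetical encoding).
    """
    tree = {}
    for seq in question_sequences:
        node = tree
        for question in seq:
            node = node.setdefault(question, {})
    return tree


def consolidate_by_intention(tree, intention_map):
    """Relabel questions with the analyst intention behind them, merging
    sibling branches that serve the same intention (a crude stand-in for
    the consolidation step that precedes the DCEG simplification)."""
    merged = {}
    for question, subtree in tree.items():
        label = intention_map.get(question, question)
        merged.setdefault(label, {}).update(
            consolidate_by_intention(subtree, intention_map)
        )
    return merged


# Two lines of inquiry that share one underlying intention.
sequences = [
    ["who owns the phone?", "where was the phone?"],
    ["who owns the car?", "where was the car?"],
]
intention_map = {
    "who owns the phone?": "identify owner",
    "who owns the car?": "identify owner",
}
tree = build_event_tree(sequences)
consolidated = consolidate_by_intention(tree, intention_map)
```

A full DCEG would additionally attach conditional transition probabilities to the consolidated branches; this sketch shows only the structural merging.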


2020 ◽  
pp. 277-288
Author(s):  
Abílio Azevedo ◽  
Patricia Anjos Azevedo

The use and possibilities of artificial intelligence (AI) have assumed great importance in recent years. This has drawn greater attention to the topic in various fields, especially health and law, both in its everyday application potential and in learning methods. The aim of this article was to present a brief perspective on the challenges and effects of AI use in teaching and application in the health and law domains. To that end, a qualitative bibliographic review was performed. Applications of artificial intelligence have great potential in clinical and legal use, facilitating the tasks of those involved by helping to reduce workload, avoid errors, and support decision-making. However, despite these benefits and new opportunities, obstacles remain regarding regulation and ethical concerns, as well as some reluctance from professionals in their adoption and formal application. In addition, there is also the need to properly implement these technologies in learning in order to keep up with the changes and new challenges currently posed, so there is a path that still needs to be followed.


Author(s):  
Viktor Elliot ◽  
Mari Paananen ◽  
Miroslaw Staron

We propose an exercise with the purpose of providing a basic understanding of key concepts within AI and extending the understanding of AI beyond mathematics. The exercise allows participants to carry out analysis based on accounting data using visualization tools, as well as to develop their own machine learning algorithms that can mimic their decisions. Finally, we also problematize the use of AI in decision-making, addressing aspects such as biases in data and ethical concerns.
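The decision-mimicking part of such an exercise can be illustrated with a deliberately minimal model. The accounting feature (`current_ratio`), the accept/reject labels, and the one-feature threshold rule below are hypothetical simplifications for illustration, not the exercise's actual materials; a real session would likely use a richer learner.

```python
def fit_decision_stump(records, feature, label_key="decision"):
    """Fit a one-feature threshold rule that mimics a participant's past
    accept/reject decisions (hypothetical exercise data).

    Tries every observed value as a threshold and keeps the one whose
    rule "accept if feature >= threshold" matches the most decisions.
    """
    best_threshold, best_correct = None, -1
    for t in {r[feature] for r in records}:
        correct = sum(
            (r[feature] >= t) == (r[label_key] == "accept") for r in records
        )
        if correct > best_correct:
            best_threshold, best_correct = t, correct
    return lambda r: "accept" if r[feature] >= best_threshold else "reject"


# Hypothetical past decisions on loan applications, keyed by current ratio.
records = [
    {"current_ratio": 2.1, "decision": "accept"},
    {"current_ratio": 1.8, "decision": "accept"},
    {"current_ratio": 0.9, "decision": "reject"},
    {"current_ratio": 0.7, "decision": "reject"},
]
rule = fit_decision_stump(records, "current_ratio")
```

A stump this simple also makes the bias discussion concrete: the learned rule inherits whatever systematic patterns, justified or not, are present in the participant's past decisions.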


Author(s):  
Libi Shen ◽  
Anchi Su

Artificial intelligence (AI) is ubiquitous in our lives and has progressed at an accelerating rate over the past 60 years. AI applications are diverse, and AI technology continues to grow. It enables a machine to think like a human being and has opened a new horizon for industries, businesses, transportation, hospitals, and schools. How is AI applied in educational settings? How will the emergence of AI technology assist teachers' teaching and improve students' learning? Will the implementation of AI technology in education replace schoolteachers? What are the ethical concerns of AI technology? What role do teachers play with AI in education? The purpose of this chapter is to explore the roles that teachers play in the innovation and evolution of AI and to identify approaches teachers should take in coping with AI technology. Issues and problems of teaching with AI will be discussed, and solutions will be recommended.


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 131614-131625 ◽  
Author(s):  
Wei Wang ◽  
Hui Liu ◽  
Wangqun Lin ◽  
Ying Chen ◽  
Jun-An Yang

Orbis ◽  
2020 ◽  
Vol 64 (4) ◽  
pp. 528-543
Author(s):  
Michael C. Horowitz ◽  
Lauren Kahn ◽  
Casey Mahoney

2017 ◽  
Vol 18 (2) ◽  
pp. 174-190 ◽  
Author(s):  
Amitai Etzioni ◽  
Oren Etzioni

As artificial intelligence technology seems poised for a major take-off, and changing societal dynamics are creating high demand for caregivers for elders, children, and the infirm, robotic caregivers may well be used much more often. This article examines the ethical concerns raised by the use of AI caregivers and concludes that many of these concerns are avoided when AI caregivers operate as partners rather than substitutes. Furthermore, most of the remaining concerns are minor and are faced by human caregivers as well. Nonetheless, because AI caregiver systems are learning systems, an AI caregiver could stray from its initial guidelines. Therefore, subjecting AI caregivers to an AI-based oversight system is proposed to ensure that their actions remain both legal and ethical.

