Artificial Moral Agents
Recently Published Documents


TOTAL DOCUMENTS: 38 (FIVE YEARS: 6)
H-INDEX: 6 (FIVE YEARS: 0)

AI & Society, 2021
Author(s): Alejo José G. Sison, Dulce M. Redín

Abstract: We examine Van Wynsberghe and Robbins’ (Sci Eng Ethics 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (AI Soc 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins’ (Sci Eng Ethics 25:719-735, 2019) essay nor Formosa and Ryan’s (AI Soc 10.1007/s00146-020-01089-6, 2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. 721) for AMAs, and the latter opts for “argumentative breadth over depth”, meaning to provide “the essential groundwork for making an all things considered judgment regarding the moral case for building AMAs” (Formosa and Ryan 2019, pp. 1–2). Although this strategy may benefit their acceptability, it may also detract from their ethical rootedness, coherence, and persuasiveness, characteristics often associated with consolidated ethical traditions. Neo-Aristotelian ethics, backed by a distinctive philosophical anthropology and worldview, is summoned to fill this gap as a standard to test these two opposing claims. It provides a substantive account of moral agency through the theory of voluntary action; it explains how voluntary action is tied to intelligent and autonomous human life; and it distinguishes machine operations from voluntary actions through the categories of poiesis and praxis, respectively. This standpoint reveals that while Van Wynsberghe and Robbins may be right in rejecting the need for AMAs, there are deeper, more fundamental reasons for doing so. In addition, despite disagreeing with Formosa and Ryan’s defense of AMAs, their call for a more nuanced and context-dependent approach, similar to neo-Aristotelian practical wisdom, becomes expedient.


2021, Vol 30 (3), pp. 435-447
Author(s): Daniel W. Tigard

Abstract: Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial moral agents’ (AMAs) is inevitable. Still, this notion may seem to push back the problem, leaving those who have an interest in developing autonomous technology with a dilemma: we may need to scale back our efforts at deploying AMAs (or at least maintain human oversight); otherwise, we must rapidly and drastically update our moral and legal norms in a way that ensures responsibility for potentially avoidable harms. This paper invokes contemporary accounts of responsibility in order to show how artificially intelligent systems might be held responsible. Although many theorists are concerned enough to develop artificial conceptions of agency or to exploit our present inability to regulate valuable innovations, the proposal here highlights the importance of—and outlines a plausible foundation for—a workable notion of artificial moral responsibility.


2021, Vol 27 (1)
Author(s): Christian Herzog

Abstract: In the present article, I will advocate caution against developing artificial moral agents (AMAs), based on the notion that the utilization of preliminary forms of AMAs will potentially feed back negatively on the human social system and on human moral thought itself and its value—e.g., by reinforcing social inequalities, diminishing the breadth of employed ethical arguments and the value of character. While scientific investigations into AMAs pose no direct significant threat, I will argue against their premature utilization for practical and economic use. I will base my arguments on two thought experiments. The first thought experiment deals with the potential to generate a replica of an individual’s moral stances with the purpose of increasing what I term ‘moral efficiency’. Hence, as a first risk, an unregulated utilization of premature AMAs in a neoliberal capitalist system is likely to disadvantage those who cannot afford ‘moral replicas’ and further reinforce social inequalities. The second thought experiment deals with the idea of a ‘moral calculator’. As a second risk, I will argue that, even as a device equally accessible to all and aimed at augmenting human moral deliberation, ‘moral calculators’ as preliminary forms of AMAs are likely to diminish the breadth and depth of concepts employed in moral arguments. Again, I base this claim on the idea that the current most dominant economic system rewards increases in productivity. However, increases in efficiency will mostly stem from relying on the outputs of ‘moral calculators’ without further scrutiny. Premature AMAs will cover only a limited scope of moral argumentation and, hence, over-reliance on them will narrow human moral thought. In addition, and as a third risk, I will argue that an increased disregard of the interior of a moral agent may ensue—a trend that can already be observed in the literature.


AI & Society, 2021
Author(s): Jeffrey White

Abstract: Ryan Tonkens (2009) has issued a seemingly impossible challenge: to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian-inspired recipe—"rational" and "free"—while also satisfying perceived prerogatives of machine ethicists to facilitate the creation of AMAs that are perfectly, and not merely reliably, ethical. This series of papers meets this challenge by landscaping traditional moral theory in resolution of a comprehensive account of moral agency. The first paper established the challenge and set out autonomy in Aristotelian terms. The present paper interprets Kantian moral theory on the basis of the preceding introduction, argues contra Tonkens that an engineer does not violate the categorical imperative in creating Kantian AMAs, and proposes that a Kantian AMA is not only a possible goal for machine ethics research, but a necessary one.


2020, Vol 64, pp. 117-125
Author(s): Salvador Cervantes, Sonia López, José-Antonio Cervantes

Author(s): Artem Vladimirovich Makulin

One of the features of modern socio-philosophical knowledge is its involvement in solving ethical problems under new conditions determined by the consequences of the “information explosion”, digitalization, and the massive introduction of digital technologies into the humanitarian spheres. One of the key problems is understanding the role of so-called “machine ethics”, i.e., a set of theoretical approaches to hypothetical problems of the moral behavior of machines within the framework of artificial intelligence. The paper expounds the point of view according to which ethics, over the centuries in which various philosophical systems have formed, has developed many mechanisms for its own algorithmicization, which opens up wide opportunities for the formation of “computational morality”, up to the appearance of artificial moral agents (AMAs). The paper briefly examines the history of the formalization of ethical problems and solutions. The key attempts to algorithmicize ethical issues in the history of philosophy are identified, and the socio-philosophical component of the phenomenon of the “ethical calculator” is characterized.


2020, pp. 349-359
Author(s): Deborah G. Johnson, Keith W. Miller
