Sexbots

2018 ◽  
Vol 9 (1) ◽  
pp. 1-17 ◽  
Author(s):  
Robin Mackenzie

This article describes how sexbots (sentient, self-aware, feeling artificial moral agents, soon to be created as customised potential sexual/intimate partners) provoke crucial questions for technoethics. Coeckelbergh's model of human/robot relations as co-evolving to their mutual benefit through mutual vulnerability is applied to sexbots. As sexbots have a sustainable claim to moral standing, the benefits and vulnerabilities inherent in human/sexbot relations must be identified and addressed for both parties. Humans' and sexbots' vulnerabilities are explored, drawing on the philosophy and social science of dehumanisation and inclusion/exclusion. This article argues that humans, as creators, owe a duty of care to the sentient beings they create. Responsible innovation practices are essential, involving stakeholders in debating the ethicolegal conundrums pertaining to human duties to sexbots and to sexbots' putative interests, rights and responsibilities. Such practices support the legal recognition of sexbots, the protection of their interests through regulatory oversight, and the ethical limitations on customisation that must be put in place.

2020 ◽  
pp. 1307-1325
Author(s):  
Robin Mackenzie


2018 ◽  
Vol 9 (1) ◽  
pp. 44-61
Author(s):  
André Schmiljun

With the development of autonomous robots, one day perhaps capable of speaking, thinking, learning, self-reflecting and sharing emotions — in other words, with the rise of robots as artificial moral agents (AMAs) — robot scientists such as Abney, Veruggio and Petersen are already optimistic that sooner or later we will need to call those robots "people", or rather "Artificial People" (AP). This paper rejects that forecast, because its argument rests on three conflicting metaphysical assumptions. The first is the idea that it is possible to define persons precisely and to apply that definition to robots, or to use it to differentiate human beings from robots. Further, the argument for APs presupposes non-reductive physicalism (the second assumption) and materialism (the third), together producing strange convictions about future robotics. I therefore suggest following Christine Korsgaard's defence of animals as ends in themselves with moral standing. I show that her argument can be transferred to robots as well, at least to robots capable of pursuing their own good (even if they are not rational). Korsgaard's interpretation of Kant offers an option that allows us to leave complicated metaphysical notions such as "person" or "subject" out of the debate without denying robots' status as agents.


2020 ◽  
pp. 349-359
Author(s):  
Deborah G. Johnson ◽  
Keith W. Miller

Author(s):  
Alan E. Singer

An aspect of the relationship between philosophy and computer engineering is considered, with particular emphasis upon the design of artificial moral agents. Top-down vs. bottom-up approaches to ethical behavior are discussed, followed by an overview of some of the ways in which traditional ethics has informed robotics. Two macro-trends are then identified, one involving the evolution of moral consciousness in man and machine, the other involving the fading away of the boundary between the real and the virtual.
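The top-down versus bottom-up distinction mentioned above can be made concrete with a small sketch. This is purely illustrative and does not come from any of the listed papers: the class names, rule dictionary, and feedback scheme are all hypothetical stand-ins. A top-down agent applies a fixed, designer-given rule set; a bottom-up agent infers what is permitted from accumulated feedback.

```python
# Hypothetical illustration of top-down vs bottom-up moral-agent design.
# None of these names or rules come from the cited articles.

class TopDownAgent:
    """Applies a preset rule set fixed by its designers (top-down)."""
    def __init__(self, rules):
        self.rules = rules  # e.g. {"assist": True, "deceive": False}

    def permitted(self, action):
        # Actions not covered by the rules are conservatively forbidden.
        return self.rules.get(action, False)


class BottomUpAgent:
    """Infers permissions from observed user feedback (bottom-up)."""
    def __init__(self):
        self.scores = {}  # action -> cumulative approval score

    def feedback(self, action, approved):
        # Each approval adds 1, each disapproval subtracts 1.
        self.scores[action] = self.scores.get(action, 0) + (1 if approved else -1)

    def permitted(self, action):
        # Permitted only once net feedback is positive.
        return self.scores.get(action, 0) > 0


td = TopDownAgent({"assist": True, "deceive": False})
bu = BottomUpAgent()
bu.feedback("assist", approved=True)

print(td.permitted("assist"))   # True: fixed by the preset rules
print(bu.permitted("assist"))   # True: learned from positive feedback
print(bu.permitted("deceive"))  # False: no approval ever observed
```

The sketch also shows why the trends discussed in these abstracts matter: the top-down agent's morality is entirely its designers', while the bottom-up agent's morality drifts with whoever trains it.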


2019 ◽  
Vol 26 (2) ◽  
pp. 501-532 ◽  
Author(s):  
José-Antonio Cervantes ◽  
Sonia López ◽  
Luis-Felipe Rodríguez ◽  
Salvador Cervantes ◽  
Francisco Cervantes ◽  
...  

2020 ◽  
Vol 64 ◽  
pp. 117-125
Author(s):  
Salvador Cervantes ◽  
Sonia López ◽  
José-Antonio Cervantes

2007 ◽  
Vol 7 ◽  
pp. 129-134
Author(s):  
Michael Nagenborg

In this paper I argue that artificial moral agents (AMAs) are a fitting subject for intercultural information ethics because of the impact they may have on the relationship between information-rich and information-poor countries. I first give a limiting definition of AMAs, then discuss two different types of AMAs with different implications from an intercultural perspective. While AMAs following preset rules might raise concerns about digital imperialism, AMAs able to adjust to their users' behaviour lead us to the question of what makes an AMA "moral". I argue that this question presents a good starting point for an intercultural dialogue, one which might be helpful in overcoming the notion of Africa as a mere victim.

