Social Norms for Artificial Systems

Author(s):  
Anna Strasser

This paper investigates reasons to argue for social norms regulating our behavior towards artificial agents. By problematizing the assertion that moral agency is, in principle, a necessary prerequisite for any form of moral patiency, it examines reasons for morally appropriate behavior towards artificial systems that are independent of attributing moral agency to those agents. Suggesting a consequentialist strategy, it analyzes the potential negative impacts of human-machine interactions, focusing on factors that support a transfer of behavioral patterns from human-machine interactions to human-human interactions.

Author(s):  
D.A. Tomiltseva
A.S. Zheleznov

Artificial agents, i.e., man-made technical devices and software capable of taking meaningful actions and making independent decisions, permeate almost all spheres of human life today. As new political actants, they transform the nature of human interactions, which raises the problem of the ethical and political regulation of their activities. The appearance of such agents therefore triggers a global philosophical reflection that goes beyond technical or practical issues and returns researchers to the fundamental problems of ethics. The article identifies three main aspects of the existence of artificial agents that call for philosophical understanding. First, artificial agents reveal the true contradiction between declared moral and political values and real social practices: learning from data on assessments and decisions that have already been made, artificial agents make decisions that correspond to prevailing behavioral patterns rather than to the moral principles of their creators or consumers. Second, the specificity of the creation and functioning of artificial agents brings the problem of responsibility for their actions to the forefront, which, in turn, requires a new approach to the political regulation of the activities not only of developers, customers, and users, but also of the agents themselves. Third, the current forms of activity of artificial agents shift the traditional boundaries of the human and raise the question of redefining the humanitarian. Having carefully analyzed the selected aspects, the authors reveal their logic and outline the field for further discussion.


Author(s):  
Hanna Meretoja

Chapter 4 tests hermeneutic narrative ethics as a lens for analyzing the (ab)uses of narrative for life in Julia Franck’s Die Mittagsfrau (2007, The Blind Side of the Heart), exploring how narrative practices expand and diminish the space of possibilities in which moral agents act and suffer. It demonstrates how narrative “in-betweens” bind people together, through dialogic narrative imagination, and can promote exclusion that amounts to annihilation. It addresses the necessity of storytelling for survival, and a transgenerational culture of silence that leads to the repetition of harmful emotional-behavioral patterns. It explores the continuum from being able to tell one’s own stories to violently imposed narrative identities and suggests that moral agency requires a minimum narrative sense of oneself as a being worthy and capable of goodness. The chapter argues that the ethical evaluation of narrative practices must be contextual—sensitive to how they function in particular sociohistorical worlds.


Author(s):  
John P. Sullins

This chapter will argue that artificial agents created or synthesized by technologies such as artificial life (ALife), artificial intelligence (AI), and robotics present unique challenges to the traditional notion of moral agency, and that any successful technoethics must seriously consider that these artificial agents may indeed be artificial moral agents (AMAs), worthy of moral concern. This purpose will be realized by briefly describing a taxonomy of the artificial agents that these technologies are capable of producing. I will then describe how these artificial entities conflict with our standard notions of moral agency. I argue that traditional notions of moral agency are too strict even in the case of recognizably human agents, and then expand the notion of moral agency such that it can sensibly include artificial agents.


Author(s):  
Christian Schemmel

This chapter develops the specific demands of liberal non-domination. It argues that they cover protection against dominatory groups as well as against power relations which are not mediated by authority, or otherwise public; demonstrates how different choices, such as those falling under the basic liberties, connected to intimate personal relationships, or at stake in resource-intensive programs and policies to enhance life options, call for different thresholds of intensity of protection; and how protection must itself be appropriately respectful of people’s moral agency. It goes on to show how the resulting requirements give a wide policy mandate to combat domination not only through formal institutions, but also by fostering a societal ethos and informal social norms, and argues that liberals should not be worried by this wide mandate. It concludes by analysing in which ways demands of protection against domination can, and cannot, be understood as distributive demands.


2018
Vol 15 (5)
pp. 693-721
Author(s):  
Ling Zhou
Shaojie Zhang

Abstract Back in the early 1990s, Gu Yueguo formulated the Politeness Principle and its maxims to explain Chinese politeness phenomena, as a counter-reaction to Brown and Levinson’s politeness theory, which claims that politeness is a universal phenomenon in language usage. Although Gu’s illustration of politeness phenomena in Chinese has attracted considerable attention from pragmaticians, this paper points out at least three major issues in his study: failure to provide a generalized high-level definition of politeness, improper construction of some of the maxims, and inadequate adoption of Leech’s theoretical framework for the analysis of Chinese data. With the increasing call for culture-specific research on im/politeness, it is necessary to rethink these issues regarding Chinese politeness phenomena. This paper therefore attempts to reconstruct the Politeness Principle in Chinese by reexamining and clarifying Gu’s approach. It argues that politeness can be defined as appropriate behavior in social interaction. Based on this definition, the Politeness Principle and four maxims of Modesty, Respectfulness, Friendliness, and Refinement are coherently reconstructed in a clarified and refined form. The reconstruction demonstrates that polite behavior in Chinese is exhibited in an appropriate or acceptable way in accordance with these maxims as social norms or regularities.


2015
Vol 2015
pp. 1-11
Author(s):  
Luciano R. Coutinho
Victor M. Galvão
Antônio de Abreu Batista
Bruno Roberto S. Moraes
Márcio Regis M. Fraga

Looking at the ways in which players interact with computer games (the gameplays), we perceive a predominance of character-centered and/or microcontrolled modes of interaction. Despite being well established, these gameplays tend to structure games in terms of challenges to be fulfilled on an individual basis, or by thinking collectively but having to microcontrol several characters at the same time. From this observation, the paper presents a complementary gameplay in which the player is urged to face collective challenges by designing character organizations. The basic idea is to make the player structure and control a group of characters by defining organizational specifications (i.e., definitions of roles, collective strategies, and social norms). During the game, commanded by the player, artificial agents are then instantiated to play the roles and to follow the strategies and norms defined in the organizational specification. To turn the idea into practice, the paper proposes an abstract architecture comprising three components or layers. This architecture is materialized in a proof-of-concept prototype that combines the Minecraft game server, the JADE agent platform, and the MOISE+ organizational model. Variations and possibilities are discussed, and the proposal is compared to related work in the literature.
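The organizational-specification idea (roles, collective strategies, social norms) can be sketched as a minimal data structure. All role, goal, and norm names below are hypothetical illustrations; a real MOISE+ specification is considerably richer than this toy model.

```python
# Minimal sketch of an organizational specification for game agents.
# Role/goal/norm names are invented; a real MOISE+ spec is richer.
from dataclasses import dataclass, field

@dataclass
class Role:
    name: str
    goals: list[str] = field(default_factory=list)

@dataclass
class Norm:
    role: str          # role the norm applies to
    modality: str      # "obligation", "permission", or "prohibition"
    behavior: str      # the regulated behavior

@dataclass
class OrgSpec:
    roles: list[Role]
    strategy: dict[str, list[str]]   # collective strategy: goal -> ordered subgoals
    norms: list[Norm]

    def roles_for(self, goal: str) -> list[str]:
        """Return the names of roles whose goals include the given goal."""
        return [r.name for r in self.roles if goal in r.goals]

spec = OrgSpec(
    roles=[Role("miner", ["gather_ore"]), Role("guard", ["defend_base"])],
    strategy={"build_fort": ["gather_ore", "defend_base"]},
    norms=[Norm("guard", "prohibition", "leave_post_at_night")],
)
print(spec.roles_for("gather_ore"))  # -> ['miner']
```

On this view, agents instantiated at runtime would look up their role's goals and check applicable norms before acting, leaving the player to edit the specification rather than microcontrol each character.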


2016
Vol 6 (2)
pp. 1-22
Author(s):  
Seng-Beng Ho

A principled framework for general adaptive intelligent systems is described and applied to the domain of social robotics. Under this framework, the author develops computational methods to address an important aspect of a social robot: the ability to rapidly adapt to changes in the environment, such as the introduction of novel objects and installations that serve novel purposes. Methods are also developed to address another important aspect: the ability to understand the needs of the humans the robot interacts with through a deep model of those needs, which enables the robot to assist humans in various tasks in a socially realistic manner. The author describes methods of causal learning and script learning through computational visual observation that allow a robot to acquire the scripts and plans that enable it to understand the intentions of humans and to solve problems in order to provide assistance. The robot thus adapts rapidly to changing environmental factors, as new observations provide new knowledge to guide its behavior. The assistance provided to humans is formulated as a script interaction problem, and the optimal points at which assistance is provided are computed using a motivational strength model derived from psychological research and formulated computationally for robotic purposes. A method is also proposed to handle the competition of needs that arises frequently in the course of robot-human interactions, in order to generate socially realistic and appropriate behavior on the part of the robot. The paper primarily uses a home environment to demonstrate the methodology, but a robot that incorporates the described methodology could rapidly adapt to other environments, such as offices and factories.
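The competition-of-needs idea can be illustrated with a toy sketch: each pending need carries a motivational strength, and the robot acts on the strongest one. The linear priority-times-deprivation formula and the need names below are hypothetical illustrations, not the paper's actual motivational strength model.

```python
# Hypothetical sketch of resolving competing needs: each need has a
# motivational strength that grows with deprivation, weighted by priority;
# the robot acts on whichever need is currently strongest.

def motivational_strength(priority: float, deprivation: float) -> float:
    """Toy model: strength grows with time since the need was last met."""
    return priority * deprivation

def select_need(needs: dict) -> str:
    """needs: name -> (priority, deprivation). Pick the strongest need."""
    return max(needs, key=lambda n: motivational_strength(*needs[n]))

needs = {
    "fetch_water": (0.8, 5.0),   # high priority, requested a while ago
    "tidy_room":   (0.3, 10.0),  # low priority, long overdue
}
print(select_need(needs))  # -> 'fetch_water'
```

A real model would presumably update deprivation continuously from observation and re-evaluate whenever a new human request arrives.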


2021
Vol 8
Author(s):  
Jaime Banks

Moral status can be understood along two dimensions: moral agency [capacities to be and do good (or bad)] and moral patiency (extents to which entities are objects of moral concern), where the latter especially has implications for how humans accept or reject machine agents into human social spheres. As there is currently limited understanding of how people innately understand and imagine the moral patiency of social robots, this study inductively explores key themes in how robots may be subject to humans’ (im)moral action across 12 valenced foundations in the moral matrix: care/harm, fairness/unfairness, loyalty/betrayal, authority/subversion, purity/degradation, and liberty/oppression. Findings indicate that people can imagine clear dynamics by which anthropomorphic, zoomorphic, and mechanomorphic robots may benefit and suffer at the hands of humans (e.g., affirmations of personhood, compromising bodily integrity, veneration as gods, corruption by physical or information interventions). Patterns across the matrix are interpreted to suggest that moral patiency may be a function of whether people diminish or uphold the ontological boundary between humans and machines, though even moral upholdings bear notes of utilitarianism.


Robotics
2021
Vol 10 (2)
pp. 66
Author(s):  
Oliver Roesler
Elahe Bagheri

Robots that incorporate social norms in their behaviors are seen as more supportive, friendly, and understanding. Since it is impossible to manually specify the most appropriate behavior for all possible situations, robots need to be able to learn it through trial and error, by observing interactions between humans, or by utilizing theoretical knowledge available in natural language. In contrast to the former two approaches, the latter has not received much attention, because understanding natural language is non-trivial and requires proper grounding mechanisms to link words to corresponding perceptual information. Previous grounding studies have mostly focused on grounding concepts relevant to object manipulation, while the grounding of more abstract concepts relevant to the learning of social norms has so far not been investigated. Therefore, this paper presents an online grounding framework, based on unsupervised cross-situational learning, to ground emotion types, emotion intensities, and genders. The proposed framework is evaluated through a simulated human–agent interaction scenario and compared to an existing unsupervised Bayesian grounding framework. The obtained results show that the proposed framework is able to ground words, including synonyms, through their corresponding perceptual features in an unsupervised and open-ended manner, while outperforming the baseline in terms of grounding accuracy, transparency, and deployability.
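The core intuition of cross-situational learning is that a word's meaning can be recovered by counting which perceptual features consistently co-occur with it across many ambiguous situations. The sketch below is a generic illustration of that idea under that assumption, not the paper's specific framework; the word and feature names are invented.

```python
# Minimal sketch of unsupervised cross-situational grounding:
# count word-feature co-occurrences across situations, then ground each
# word to its most consistently co-occurring perceptual feature.
from collections import defaultdict

cooc = defaultdict(lambda: defaultdict(int))

def observe(words, features):
    """One situation: an utterance paired with perceived features."""
    for w in words:
        for f in features:
            cooc[w][f] += 1

def grounding(word):
    """Return the feature that most often co-occurred with the word."""
    feats = cooc[word]
    return max(feats, key=feats.get) if feats else None

# Each single situation is ambiguous (which word names which feature?),
# but the ambiguity resolves as evidence accumulates across situations.
observe(["happy", "woman"], ["smile", "female"])
observe(["happy", "man"], ["smile", "male"])
observe(["sad", "woman"], ["frown", "female"])

print(grounding("happy"))  # -> 'smile'
print(grounding("woman"))  # -> 'female'
```

An online framework of this kind never needs labeled pairs: each new interaction simply adds counts, so new words and synonyms can be grounded in an open-ended manner.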


Author(s):  
Anna A. Sher
Bruce M. Kahn

Considering humans as components of ecosystems is not new; geographers have been doing it in human ecology departments for decades (see Field and Burch 1988). There have also been many volumes dedicated to the subject (McDonnell and Pickett 1993, Schnaiberg and Gould 1994, Catton 1982, Wilson 1988). Recently there has been a development in the field of ecology to consider humans as a part of ecosystems, rather than simply agents of destruction, taking into account the full complexity of human interactions (social, cultural, and economic) with the environment (chapter 17 this volume, Folke et al. 1996, Turner and Carpenter 1999, Pickett et al. 1999, Haeuber and Ringold 1998). The goal of this chapter is to provide a framework for the types of interactions between humans and biodiversity. We use biodiversity as an umbrella term encompassing genetic, species, and landscape diversities (chapter 1 this volume). In particular, we emphasize human–biodiversity interactions in the context of arid and semiarid ecosystems. In part I, we analyze the various types of human–biodiversity interactions. In part II we suggest a framework for the study of these interactions. Not only do humans have the power to affect biodiversity, but biodiversity impacts humans as well. The nature of this reciprocal relationship can be positive and/or negative. Our reference to positive and negative impacts on biodiversity will usually be in the mathematical sense, that is, an increase or decrease in species and landscape diversity. However, we must be careful not to put a value judgment on such numbers. Increases in species or habitat diversity are not necessarily desirable for all ecosystems or management goals. All the elements of biodiversity are not equal in terms of ecological and economical value. For example, restoration efforts for a few endemic species may be detrimental to other, nonendemic species, resulting in less species diversity.
This may especially be true when the diversity of weedy species that have taken over a disturbed area is threatened by restoring historical conditions. In this case, a lower level of biodiversity that includes native species may be more desirable than a higher level of nonendemic species diversity.

