Cooperative Behavior Rule Acquisition for Multi-Agent Systems by Machine Learning

Author(s):
Mengchun Xie

Author(s):
Kun Zhang
Yoichiro Maeda
Yasutake Takahashi

Research on multi-agent systems in which autonomous agents learn cooperative behavior has been the subject of rising expectations in recent years. We have aimed at generating group behavior among agents with a high level of autonomous learning ability, like that of human beings, which acquire cooperative behavior through social interaction. Sharing environmental states can improve cooperative ability, and converting the shared environmental states into target-related individual states improves that ability further. On this basis, we use reward redistribution among agents to reinforce group behavior, and we propose a method of constructing a multi-agent system with an autonomous group-creation ability, which strengthens the cooperative behavior of the group as social agents.
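The state-sharing idea above can be made concrete with a small sketch. Below is a minimal, hypothetical tabular Q-learning agent whose state is enlarged by the states broadcast by its teammates; the class and parameter names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed names, not the authors' code): a tabular Q-learner
# that conditions its policy on its own state plus states shared by teammates.
import random
from collections import defaultdict

class StateSharingAgent:
    def __init__(self, n_actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)            # Q[(joint_state, action)]
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def joint_state(self, own_state, shared_states):
        # Sharing enlarges the observable state: the agent conditions on what
        # teammates report, not only on what it sees itself.
        return (own_state, tuple(shared_states))

    def act(self, joint_state):
        if random.random() < self.epsilon:     # epsilon-greedy exploration
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.q[(joint_state, a)])

    def update(self, s, a, reward, s_next):
        # Standard one-step Q-learning backup over the enlarged state space.
        best_next = max(self.q[(s_next, a2)] for a2 in range(self.n_actions))
        self.q[(s, a)] += self.alpha * (reward + self.gamma * best_next
                                        - self.q[(s, a)])
```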


Author(s):  
José A. R. P. Sardinha
Alessandro Garcia
Carlos J. P. Lucena
Ruy L. Milidiú

2019
Vol 3 (2)
pp. 21
Author(s):  
David Manheim

An important challenge for safety in machine learning and artificial intelligence systems is a set of related failures involving specification gaming, reward hacking, fragility to distributional shifts, and Goodhart's or Campbell's law. This paper presents additional, closely related failure modes that arise from interactions within multi-agent systems. These multi-agent failure modes are more complex, more problematic, and less well understood than the single-agent case, and they are also already occurring, largely unnoticed. After motivating the discussion with examples from poker-playing artificial intelligence (AI), the paper explains why these failure modes are in some senses unavoidable. Following this, the paper categorizes failure modes, provides definitions, and cites examples for each of the modes: accidental steering, coordination failures, adversarial misalignment, input spoofing and filtering, and goal co-option or direct hacking. The paper then discusses how the extant literature on multi-agent AI fails to address these failure modes, and identifies work that may be useful for their mitigation.
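The single-agent root of these failures, Goodhart's law, admits a very small numerical illustration. The sketch below is not from the paper: it selects hard on a noisy proxy for an underlying value and shows the proxy overstating the true value of whatever is selected, which is the sense in which a measure ceases to be a good target once it is optimized.

```python
# Toy illustration (not from the paper) of Goodhart's law under selection.
import random

random.seed(0)
N = 100_000
true_value = [random.gauss(0, 1) for _ in range(N)]
proxy = [v + random.gauss(0, 1) for v in true_value]    # proxy = value + noise

# Apply strong optimization pressure: keep only the top 0.1% by proxy score.
top = sorted(range(N), key=lambda i: proxy[i], reverse=True)[:N // 1000]
avg_proxy = sum(proxy[i] for i in top) / len(top)
avg_true = sum(true_value[i] for i in top) / len(top)
print(f"selected proxy score: {avg_proxy:.2f}, actual value: {avg_true:.2f}")
# The selected items' true value falls well short of their proxy score, and
# the gap widens as the selection pressure increases.
```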


2020
Vol 34 (05)
pp. 7047-7054
Author(s):
Nicolas Anastassacos
Stephen Hailes
Mirco Musolesi

Social dilemmas have been widely studied to explain how humans are able to cooperate in society. Considerable effort has been invested in designing artificial agents for social dilemmas whose explicit motivations are chosen to favor coordinated or cooperative responses. The prevalence of this general approach points to the importance of understanding both an agent's internal design and the external environment dynamics that facilitate cooperative behavior. In this paper, we investigate how partner selection can promote cooperative behavior between agents that are trained to maximize a purely selfish objective function. Our experiments reveal that agents trained with this dynamic learn a strategy that retaliates against defectors while promoting cooperation with other agents, resulting in a prosocial society.
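A toy simulation conveys the mechanism. In the sketch below, a simplification rather than the paper's setup, fixed-policy agents (standing in for the paper's reinforcement learners) play an iterated prisoner's dilemma but may refuse partners with a poor observed cooperation record; all names and parameters are illustrative.

```python
# Toy partner-selection sketch (assumed setup, not the authors' experiments).
import random

random.seed(1)
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

class Agent:
    def __init__(self, p_coop):
        self.p_coop = p_coop           # fixed probability of cooperating
        self.seen = {}                 # partner id -> (coop count, meetings)
        self.score = 0

    def move(self):
        return "C" if random.random() < self.p_coop else "D"

    def accepts(self, other_id):
        coop, total = self.seen.get(other_id, (1, 1))
        return coop / total >= 0.5     # shun habitual defectors

agents = [Agent(0.9) for _ in range(8)] + [Agent(0.1) for _ in range(2)]

for _ in range(5000):
    i, j = random.sample(range(len(agents)), 2)
    a, b = agents[i], agents[j]
    if not (a.accepts(j) and b.accepts(i)):
        continue                       # refused partnerships earn nothing
    ma, mb = a.move(), b.move()
    ra, rb = PAYOFF[(ma, mb)]
    a.score += ra
    b.score += rb
    ca, na = a.seen.get(j, (0, 0))
    a.seen[j] = (ca + (mb == "C"), na + 1)
    cb, nb = b.seen.get(i, (0, 0))
    b.seen[i] = (cb + (ma == "C"), nb + 1)

print("avg cooperator payoff:", sum(ag.score for ag in agents[:8]) / 8)
print("avg defector payoff:  ", sum(ag.score for ag in agents[8:]) / 2)
```

Because exclusion is itself the retaliation, defection becomes individually unprofitable even though every agent optimizes only its own payoff.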


Author(s):  
Nicolas Verstaevel
Jérémy Boes
Julien Nigon
Dorian d'Amico
Marie-Pierre Gleizes

Author(s):  
Daniel Kudenko
Dimitar Kazakov
Eduardo Alonso

In order to be truly autonomous, agents need the ability to learn from and adapt to the environment and other agents. This chapter introduces key concepts of machine learning and how they apply to agent and multi-agent systems. Rather than present a comprehensive survey, we discuss a number of issues that we believe are important in the design of learning agents and multi-agent systems. Specifically, we focus on the challenges involved in adapting (originally disembodied) machine learning techniques to situated agents, the relationship between learning and communication, learning to collaborate and compete, learning of roles, evolution and natural selection, and distributed learning. In the second part of the chapter, we focus on some practicalities and present two case studies.


Author(s):  
Valentina Plekhanova

Traditionally, multi-agent learning is considered the intersection of two subfields of artificial intelligence: multi-agent systems and machine learning. Conventional machine learning involves a single agent that tries to maximize some utility function without any awareness of the existence of other agents in the environment (Mitchell, 1997). Multi-agent systems, by contrast, consider mechanisms for the interaction of autonomous agents. A learning system is defined as one in which an agent learns to interact with other agents (e.g., Clouse, 1996; Crites & Barto, 1998; Parsons, Wooldridge & Amgoud, 2003). Agents must overcome two problems in order to interact with each other and reach their individual or shared goals: since agents may appear or disappear at any time, they must be able to find each other, and they must be able to interact (Jennings, Sycara & Wooldridge, 1998).
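The two requirements named at the end of the passage, discovery and interaction, can be sketched with a toy directory service; the class and method names below are illustrative and not drawn from any cited framework.

```python
# Minimal discovery-and-interaction sketch (assumed names, no real framework).
class Directory:
    """Agents register capabilities here so that others can find them."""
    def __init__(self):
        self.registry = {}                   # capability -> set of agents

    def register(self, agent, capability):
        self.registry.setdefault(capability, set()).add(agent)

    def deregister(self, agent, capability):
        # Agents may disappear at any time, so removal must be tolerated.
        self.registry.get(capability, set()).discard(agent)

    def find(self, capability):
        return list(self.registry.get(capability, ()))

class Agent:
    def __init__(self, name):
        self.name = name

    def handle(self, message, sender):       # the interaction channel
        return f"{self.name} acknowledges '{message}' from {sender.name}"

directory = Directory()
worker = Agent("worker-1")
directory.register(worker, "translate")

client = Agent("client-7")
for provider in directory.find("translate"):                 # discovery...
    print(provider.handle("translate this, please", client)) # ...interaction
```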


Author(s):  
Kun Zhang
Yoichiro Maeda
Yasutake Takahashi

In multi-agent systems, autonomous agents must interact with each other to achieve excellent cooperative performance. We have therefore studied social interaction between agents to see how they acquire cooperative behavior. We have found that sharing environmental states can improve agent cooperation through reinforcement learning, and that converting shared environmental states into target-related individual states improves cooperation further. To improve cooperation still more, we propose reward redistribution based on reward exchanges among agents. By receiving rewards from both the environment and other agents, agents learn how to adjust themselves to the environment and how to explore and strengthen cooperation in tasks that a single agent could not accomplish alone. Agents thus cooperate best through the interaction of state conversion and reward exchange.
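The reward-exchange step lends itself to a one-function sketch. The exchange rate below is an assumed illustrative parameter, not a value taken from the paper; the point is only that an agent's learning signal blends its own environment reward with rewards passed over by teammates.

```python
# Minimal sketch of reward exchange (assumed form, not the authors' code).
def exchange_rewards(env_rewards, rate=0.3):
    """Each agent keeps (1 - rate) of its own environment reward and receives
    an equal split of the pool contributed by the whole group."""
    pool = rate * sum(env_rewards)
    share = pool / len(env_rewards)
    return [(1 - rate) * r + share for r in env_rewards]

# The agent that earned 10 subsidizes teammates that earned nothing, so
# behavior that enables a teammate's success is reinforced group-wide.
print(exchange_rewards([10.0, 0.0, 0.0]))    # -> [8.0, 1.0, 1.0]
```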

