A multiagent reinforcement learning method based on the model inference of the other agents

2002, Vol. 33 (12), pp. 67–76
Author(s): Yoichiro Matsuno, Tatsuya Yamazaki, Jun Matsuda, Shin Ishii
2019, Vol. 33 (4), pp. 403–429
Author(s): Chengwei Zhang, Xiaohong Li, Jianye Hao, Siqi Chen, Karl Tuyls, ...

Author(s): Tonghao Wang, Xingguang Peng, Demin Xu

Abstract
Knowledge transfer is widely adopted to accelerate multiagent reinforcement learning (MARL). To speed up learning for agents that learn from scratch, in this paper we propose a Stationary and Scalable knowledge transfer approach based on Experience Sharing (S$$^{2}$$ES). Our approach is structured around three components: what kind of experience to share, how to learn from it, and when to transfer it. Specifically, we first design an augmented form of experience. By sharing (i.e., transmitting) experience from one agent to its peers, the learning speed can be effectively enhanced with guaranteed scalability. A synchronized learning pattern is then adopted, which reduces the nonstationarity introduced by experience replay while retaining data efficiency. Moreover, to avoid redundant transfer once the agents' policies have converged, we further design two trigger conditions, one based on a modified Q value and the other on normalized Shannon entropy, to determine when to conduct experience sharing. Empirical studies indicate that the proposed approach outperforms other knowledge transfer methods in efficacy, efficiency, and scalability. We also provide ablation experiments demonstrating the necessity of each key ingredient.
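To make the entropy-based trigger concrete, the sketch below shows one plausible reading of a normalized Shannon entropy condition: the entropy of an agent's action distribution is divided by log of the number of actions so it lies in [0, 1], and sharing stops once the policy has become (near-)deterministic. The function names and the threshold value are illustrative assumptions, not the paper's actual algorithm or constants.

```python
import math

def normalized_entropy(action_probs):
    """Shannon entropy of an action distribution, normalized by
    log(|A|) so the result lies in [0, 1] (1 = uniform/uncertain,
    0 = deterministic)."""
    n = len(action_probs)
    if n <= 1:
        return 0.0
    h = -sum(p * math.log(p) for p in action_probs if p > 0)
    return h / math.log(n)

def should_share(action_probs, threshold=0.2):
    """Hypothetical trigger condition: share experience only while
    the policy is still uncertain (normalized entropy above the
    threshold); once it has essentially converged, stop sharing to
    avoid redundant transfer."""
    return normalized_entropy(action_probs) > threshold
```

For example, a uniform four-action policy has normalized entropy 1.0 and would keep triggering sharing, while a nearly deterministic policy such as (0.97, 0.01, 0.01, 0.01) falls below the threshold and stops.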


2009, Vol. 129 (7), pp. 1253–1263
Author(s): Toru Eguchi, Takaaki Sekiai, Akihiro Yamada, Satoru Shimizu, Masayuki Fukai

Author(s): Gokhan Demirkiran, Ozcan Erdener, Onay Akpinar, Pelin Demirtas, M. Yagiz Arik, ...
