An Artificial Economy Based on Reinforcement Learning and Agent Based Modeling

2007 ◽  
Author(s):  
Fernando Lozano ◽  
Jaime Lozano ◽  
Mario García Molina

2019 ◽  
Vol 20 (S18) ◽  
Author(s):  
Hanxu Hou ◽  
Tian Gan ◽  
Yaodong Yang ◽  
Xianglei Zhu ◽  
Sen Liu ◽  
...  

Abstract Background Collective cell migration is a significant and complex phenomenon that underlies many basic biological processes, and the coordination between leader and follower cells affects its rate. However, few papers have examined how the stimulus signals released by the leader affect the followers. Tracking cell movement in 3D time-lapse microscopy images provides an unprecedented opportunity to study collective cell migration systematically. Results Deep reinforcement learning algorithms have recently become very popular, and we use this method to learn control signals across varying numbers of cells. Experiments with a single follower cell and with multiple follower cells show that the number of stimulus signals is proportional to the rate of collective cell movement. Such research offers a more diverse set of approaches to studying biological problems. Conclusion Traditional research methods rely on real-life experiments, but as the number of cells grows the process becomes prohibitively time consuming. Agent-based modeling is a robust framework that approximates cells as isotropic, elastic, and sticky objects. In this paper, an agent-based modeling framework is used to build a simulation platform for collective cell migration. The goal of the platform is to provide a biomimetic environment that demonstrates the importance of the stimuli exchanged between leader and follower cells.
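The leader–follower dynamic described above can be illustrated with a minimal agent-based sketch. This is a hypothetical toy model, not the paper's code: `Cell`, `migrate`, the `gain` parameter, and the linear signal-to-speed coupling are all assumptions made purely to mirror the reported proportionality between the number of stimulus signals and the collective migration rate.

```python
# Minimal agent-based sketch (hypothetical, not the paper's model): followers
# move toward a leader each step, and the step size grows with the number of
# stimulus signals the leader emits.
class Cell:
    def __init__(self, x):
        self.x = x  # 1D position; real models would be 2D/3D

def migrate(leader, followers, n_signals, steps=100, gain=0.01):
    """Advance followers toward the leader; speed scales with n_signals."""
    for _ in range(steps):
        for f in followers:
            # Relaxation toward the leader, rate proportional to signal count.
            f.x += gain * n_signals * (leader.x - f.x)
    return followers

few = migrate(Cell(10.0), [Cell(0.0)], n_signals=1)
many = migrate(Cell(10.0), [Cell(0.0)], n_signals=5)
# More signals -> followers end up closer to the leader after the same steps.
assert many[0].x > few[0].x
```

The coupling here is deliberately linear so that doubling the signal count doubles the per-step attraction; a faithful reproduction would replace this with the isotropic, elastic, sticky-object interactions the abstract describes.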


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Egemen Sert ◽  
Yaneer Bar-Yam ◽  
Alfredo J. Morales

2021 ◽  
Vol 11 (18) ◽  
pp. 8368
Author(s):  
Saeed Harati ◽  
Liliana Perez ◽  
Roberto Molowny-Horas

One of the complexities of social systems is the emergence of behavior norms that are costly for individuals. Study of such complexities is of interest in diverse fields ranging from marketing to sustainability. In this study we built a conceptual Agent-Based Model to simulate interactions between a group of agents and a governing agent, where the governing agent encourages other agents to perform, in exchange for recognition, an action that is beneficial for the governing agent but costly for the individual agents. We equipped the governing agent with six Temporal Difference Reinforcement Learning algorithms to find sequences of decisions that successfully encourage the group of agents to perform the desired action. Our results show that if the individual agents’ perceived cost of the action is low, then the desired action can become a trend in the society without the use of learning algorithms by the governing agent. If the perceived cost to individual agents is high, then the desired output may become rare in the space of all possible outcomes but can be found by appropriate algorithms. We found that Double Learning algorithms perform better than other algorithms we used. Through comparison with a baseline, we showed that our algorithms made a substantial difference in the rewards that can be obtained in the simulations.
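Since the study reports that Double Learning algorithms outperformed the other Temporal Difference methods, a tabular Double Q-learning update is worth sketching. This is a generic textbook-style sketch, not the authors' implementation: the states, actions, and rewards below are toy stand-ins, and the function names are my own.

```python
import random

def double_q_update(qa, qb, s, a, r, s2, alpha=0.1, gamma=0.9):
    """One Double Q-learning step: one table selects the greedy next action,
    the other evaluates it, which reduces the maximization bias of plain
    Q-learning."""
    if random.random() < 0.5:
        best = max(qa[s2], key=qa[s2].get)                          # qa selects
        qa[s][a] += alpha * (r + gamma * qb[s2][best] - qa[s][a])   # qb evaluates
    else:
        best = max(qb[s2], key=qb[s2].get)                          # qb selects
        qb[s][a] += alpha * (r + gamma * qa[s2][best] - qb[s][a])   # qa evaluates

# Toy usage: two states, two actions (e.g. offer recognition or not).
qa = {s: {a: 0.0 for a in (0, 1)} for s in (0, 1)}
qb = {s: {a: 0.0 for a in (0, 1)} for s in (0, 1)}
for _ in range(500):
    double_q_update(qa, qb, s=0, a=1, r=1.0, s2=1)
```

Maintaining two independent value tables and decoupling action selection from action evaluation is what distinguishes Double Learning from the other TD variants the governing agent was equipped with, and plausibly explains its advantage when the desired outcome is rare.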


2017 ◽  
Vol 133 ◽  
pp. 235-248 ◽  
Author(s):  
Ammar Jalalimanesh ◽  
Hamidreza Shahabi Haghighi ◽  
Abbas Ahmadi ◽  
Madjid Soltani
