Mean Field Games for Large-Population Multiagent Systems with Markov Jump Parameters

2012 ◽ Vol 50 (4) ◽ pp. 2308-2334
Author(s): Bing-Chang Wang, Ji-Feng Zhang

2017 ◽ Vol 27 (01) ◽ pp. 75-113
Author(s): Yves Achdou, Martino Bardi, Marco Cirant

This paper introduces and analyzes mean field game (MFG) models describing interactions between two populations, motivated by Thomas Schelling's studies on urban settlements and residential choice. For static games, a large-population limit is proved. For differential games with noise, the existence of solutions is established for the systems of partial differential equations of MFG theory, in both the stationary and the evolutive case. Numerical methods are proposed, together with several simulations. In the examples and in the numerical results, particular emphasis is placed on the phenomenon of segregation between the two populations.
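In the stationary case, two-population MFG systems of the kind referred to above are usually written as a pair of coupled Hamilton-Jacobi-Bellman and Kolmogorov-Fokker-Planck equations; the sketch below shows the generic form (the paper's precise couplings $V_k$ and Hamiltonians $H_k$ may differ):

```latex
\begin{aligned}
-\nu \,\Delta u_k + H_k(x, \nabla u_k) + \lambda_k
  &= V_k[m_1, m_2](x), \\
-\nu \,\Delta m_k - \operatorname{div}\!\big(m_k \, D_p H_k(x, \nabla u_k)\big)
  &= 0, \qquad \int m_k \, dx = 1, \qquad k = 1, 2,
\end{aligned}
```

where $u_k$ is the value function and $m_k$ the density of population $k$, $\lambda_k$ is the ergodic cost, and the coupling $V_k[m_1, m_2]$ encodes how population $k$ reacts to both densities (in segregation models, an aversion to being locally outnumbered by the other population).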


Energies ◽ 2021 ◽ Vol 14 (24) ◽ pp. 8517
Author(s): Samuel M. Muhindo, Roland P. Malhamé, Geza Joos

We develop a strategy, using concepts from Mean Field Games (MFG), to coordinate the charging of a large population of battery electric vehicles (BEVs) in a parking lot powered by solar energy and managed by an aggregator. A yearly parking fee is charged for each BEV irrespective of the amount of energy extracted. The goal is to share the available energy so as to minimize the standard deviation (STD) of the state of charge (SOC) of the batteries when the BEVs leave the parking lot, while maintaining fairness and decentralization criteria. The MFG charging laws correspond to the Nash equilibrium induced by quadratic cost functions, based on an inverse Nash equilibrium concept and designed to favor the batteries with lower SOCs upon arrival. While the MFG charging laws are strictly decentralized, they guarantee that the mean of the instantaneous charging powers delivered to the BEVs follows a trajectory based on the solar energy forecast for the day. That day-ahead forecast is broadcast to the BEVs, which then gauge the necessary SOC before leaving home. We illustrate the advantages of the MFG strategy, for a typical sunny day and a typical cloudy day, when compared to two more straightforward strategies: first-come-first-served and equal sharing. The behavior of the charging strategies is also contrasted under conditions of random arrivals and random departures of the BEVs in the parking lot.
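The decentralized flavor of such charging laws can be illustrated with a minimal sketch: each BEV follows the broadcast mean power trajectory, corrected locally to favor batteries below the population-mean SOC, which shrinks the SOC spread over time. The function name, gain `alpha`, and all numbers below are illustrative assumptions, not the paper's quadratic-cost laws.

```python
import numpy as np

def mfg_charging_step(soc, p_mean, soc_mean, alpha=0.5, dt=0.25, capacity=60.0):
    """One decentralized charging update (hypothetical sketch).

    Each vehicle only needs the broadcast mean power trajectory p_mean (kW)
    and the population-mean SOC; batteries below the mean receive
    proportionally more power, so the SOC spread (STD) decreases.
    soc is in [0, 1], dt in hours, capacity in kWh.
    """
    # Local correction: lower SOC -> more power; never draw negative power.
    p = np.clip(p_mean * (1.0 + alpha * (soc_mean - soc)), 0.0, None)
    return soc + p * dt / capacity
```

A single step on three vehicles with SOCs 0.2, 0.5, and 0.8 already reduces the STD of the SOCs, since the lowest battery receives the most power while the population as a whole tracks the broadcast mean.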


2020 ◽ Vol 34 (05) ◽ pp. 7143-7150
Author(s): Romuald Elie, Julien Pérolat, Mathieu Laurière, Matthieu Geist, Olivier Pietquin

Learning by experience in Multi-Agent Systems (MAS) is a difficult and exciting task, due to the non-stationarity of the environment, whose dynamics evolve as the population learns. In order to design scalable algorithms for systems with a large population of interacting agents (e.g., swarms), this paper focuses on Mean Field MAS, where the number of agents is asymptotically infinite. A recent and very active line of research studies how diverse reinforcement learning algorithms behave when agents with no prior information play a stationary Mean Field Game (MFG) and learn their policies through repeated experience. We adopt a high-level perspective on this problem and analyze in full generality the convergence of a fictitious-play iterative scheme using any single-agent learning algorithm at each step. We quantify the quality of the computed approximate Nash equilibrium in terms of the errors accumulated at each learning iteration. Notably, we show for the first time convergence of model-free learning algorithms towards non-stationary MFG equilibria, relying only on classical assumptions on the MFG dynamics. We illustrate our theoretical results with a numerical experiment in a continuous action-space environment, where the approximate best response of the iterative fictitious-play scheme is computed with a deep RL algorithm.
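The fictitious-play scheme analyzed above can be sketched in a toy discrete setting: at each iteration an exact best response to the current averaged mean field is computed (standing in for the single-agent learning step), and the running average of best responses approximates the Nash mean field. The crowd-aversion game, function names, and costs below are illustrative assumptions, not the paper's environment.

```python
import numpy as np

def best_response(mu, base_cost, crowd_weight=1.0):
    """Best response to mean field mu in a one-shot crowd-aversion game:
    all mass goes to the state with the lowest congested cost."""
    cost = base_cost + crowd_weight * mu  # cost rises with crowd density
    br = np.zeros_like(mu)
    br[np.argmin(cost)] = 1.0
    return br

def fictitious_play(base_cost, iters=500):
    """Fictitious play: average successive best responses to the
    averaged mean field; the average approximates a Nash mean field."""
    n = len(base_cost)
    mu_bar = np.full(n, 1.0 / n)          # start from the uniform distribution
    for k in range(1, iters + 1):
        br = best_response(mu_bar, base_cost)
        mu_bar += (br - mu_bar) / (k + 1)  # running average of best responses
    return mu_bar
```

With base costs (0, 0.5, 1) and unit crowd weight, the Nash mean field equalizes congested costs on its support, giving approximately (0.75, 0.25, 0); the averaged iterates approach it at the usual slow fictitious-play rate, which is why the paper replaces the exact best response with a learned (deep RL) one in richer environments.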


2019 ◽ Vol 29 (17) ◽ pp. 6081-6104
Author(s): Bing‐Chang Wang, Yuan‐Hua Ni, Huanshui Zhang
