Discounted Stochastic Games With No Stationary Nash Equilibrium: Two Examples

Econometrica ◽  
2013 ◽  
Vol 81 (5) ◽  
pp. 1973-2007


2021 ◽  
Vol 14 ◽  
pp. 290-301
Author(s):  
Dmitrii Lozovanu ◽  
Stefan Pickl ◽  

In this paper we consider the existence and computation of stationary Nash equilibria for switching controller stochastic games with discounted and average payoffs. The sets of states and actions in the considered games are assumed to be finite. For a switching controller stochastic game with discounted payoffs we show that all stationary equilibria can be found by using an auxiliary continuous noncooperative static game in normal form in which the payoffs are quasi-monotonic (quasi-convex and quasi-concave) with respect to the corresponding strategies of the players. Based on this, we propose an approach for determining the optimal stationary strategies of the players. In the case of average payoffs for a switching controller stochastic game, we also formulate an auxiliary noncooperative static game in normal form with quasi-monotonic payoffs and show that such a game possesses a Nash equilibrium if the corresponding switching controller stochastic game has a stationary Nash equilibrium.


2013 ◽  
Vol 15 (04) ◽  
pp. 1340025
Author(s):  
VIKAS VIKRAM SINGH ◽  
N. HEMACHANDRA ◽  
K. S. MALLIKARJUNA RAO

Blackwell optimality in a finite state-action discounted Markov decision process (MDP) gives an optimal strategy which is optimal for every discount factor close enough to one. In this article we explore this property, which we call a Blackwell–Nash equilibrium, in two player finite state-action discounted stochastic games. A strategy pair is said to be a Blackwell–Nash equilibrium if it is a Nash equilibrium for every discount factor close enough to one. A stationary Blackwell–Nash equilibrium in a stochastic game may not always exist, as can be seen from the "Big Match" example, where a stationary Nash equilibrium does not exist in the undiscounted case. For a Single Controller Additive Reward (SC-AR) stochastic game, we show that there exists a stationary deterministic Blackwell–Nash equilibrium which is also a Nash equilibrium for the undiscounted case. For general stochastic games, we give some conditions which together are sufficient for any stationary Nash equilibrium of a discounted stochastic game to be a Blackwell–Nash equilibrium and also a Nash equilibrium of the undiscounted stochastic game. We illustrate our results on general stochastic games through a variant of the pollution tax model.
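The Blackwell-optimality idea in this abstract can be illustrated with a minimal single-player toy (a hypothetical example, not from the paper): one strategy ("stay") earns reward 1 every period, another ("grab") earns 2 once and 0 thereafter. The "stay" strategy is Blackwell optimal because it is optimal for every discount factor close enough to one, even though "grab" wins for small discount factors.

```python
def v_stay(beta):
    """Discounted value of receiving reward 1 every period: 1/(1-beta)."""
    return 1.0 / (1.0 - beta)

def v_grab(beta):
    """Discounted value of receiving reward 2 once, then 0 forever."""
    return 2.0

# "stay" dominates exactly when 1/(1-beta) > 2, i.e. beta > 1/2,
# so it is optimal for all discount factors close enough to one.
for beta in (0.4, 0.6, 0.9, 0.99):
    better = "stay" if v_stay(beta) > v_grab(beta) else "grab"
    print(beta, better)
```

The crossover at beta = 1/2 shows why Blackwell optimality is a statement about a whole neighborhood of discount factors near one, not about any single one.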


2015 ◽  
Vol 17 (02) ◽  
pp. 1540018
Author(s):  
Vikas Vikram Singh ◽  
N. Hemachandra

We consider a two player finite state-action general sum single controller constrained stochastic game with both discounted and average cost criteria. We consider the situation where player 1 has subscription-based constraints and player 2, who controls the transition probabilities, has realization-based constraints which can also depend on the strategies of player 1. It is known that a stationary Nash equilibrium for the discounted case exists under a strong Slater condition, while, for the average case, a stationary Nash equilibrium exists if, additionally, the Markov chain is unichain. For each case we show that the set of stationary Nash equilibria of this game has a one-to-one correspondence with the set of global minimizers of a certain nonconvex mathematical program. If the constraints of player 2 do not depend on the strategies of player 1, then the mathematical program reduces to a quadratic program. The known linear programs for zero sum games of this class can be obtained as a special case of the above quadratic programs.
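As a loose illustration of the zero-sum special case mentioned in the abstract (a hypothetical toy, not the paper's actual program): for a 2×2 zero-sum matrix game with no pure saddle point, the equilibrium that the linear program would return has a well-known closed form, shown here on matching pennies.

```python
def solve_2x2_zero_sum(a, b, c, d):
    """Mixed equilibrium of the 2x2 zero-sum game [[a, b], [c, d]]
    (row player maximizes), assuming there is no pure saddle point."""
    denom = a - b - c + d
    x1 = (d - c) / denom          # probability the row player plays row 1
    y1 = (d - b) / denom          # probability the column player plays column 1
    v = (a * d - b * c) / denom   # value of the game
    return x1, y1, v

# Matching pennies: payoffs [[1, -1], [-1, 1]]; equilibrium is (1/2, 1/2), value 0.
x1, y1, v = solve_2x2_zero_sum(1.0, -1.0, -1.0, 1.0)
```

In the general constrained setting of the paper this closed form is unavailable, which is exactly why the authors work with a quadratic (or nonconvex) program instead.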


Author(s):  
Yue Guan ◽  
Qifan Zhang ◽  
Panagiotis Tsiotras

We explore the use of policy approximations to reduce the computational cost of learning Nash equilibria in zero-sum stochastic games. We propose a new Q-learning type algorithm that uses a sequence of entropy-regularized soft policies to approximate the Nash policy during the Q-function updates. We prove that under certain conditions, by updating the entropy regularization, the algorithm converges to a Nash equilibrium. We also demonstrate the proposed algorithm's ability to transfer previous training experiences, enabling the agents to adapt quickly to new environments. We provide a dynamic hyper-parameter scheduling scheme to further expedite convergence. Empirical results on a number of stochastic games verify that the proposed algorithm converges to the Nash equilibrium, while exhibiting a major speed-up over existing algorithms.
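The entropy-regularized "soft policy" mentioned in this abstract is, in its simplest form, a softmax over Q-values with a temperature parameter. A minimal sketch (assuming a generic Q-vector, not the paper's full algorithm) shows how lowering the temperature moves the soft policy toward the greedy one, which is the mechanism behind annealing the regularization during training:

```python
import numpy as np

def soft_policy(q, tau):
    """Entropy-regularized policy: pi(a) proportional to exp(q[a]/tau)."""
    z = (q - q.max()) / tau       # shift by the max for numerical stability
    p = np.exp(z)
    return p / p.sum()

q = np.array([1.0, 0.5, 0.0])
for tau in (1.0, 0.1, 0.01):
    # As tau shrinks, probability mass concentrates on the best action.
    print(tau, soft_policy(q, tau))
```

In the paper's setting the temperature schedule is what controls the trade-off between smoothness of the updates and closeness to the Nash policy; this sketch only isolates the softmax ingredient.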

