Recursive Lexicographical Search: Finding All Markov Perfect Equilibria of Finite State Directional Dynamic Games

2014
Author(s): Fedor Iskhakov, John Rust, Bertel Schjerning

2019 · Vol 14 (2) · pp. 597-646
Author(s): Ulrich Doraszelski, Juan F. Escobar

We characterize a class of dynamic stochastic games that we call separable dynamic games with noisy transitions and establish that these widely used models are protocol invariant provided that periods are sufficiently short. Protocol invariance means that the set of Markov perfect equilibria is nearly the same irrespective of the order in which players are assumed to move within a period. Protocol invariance can facilitate applied work, and renders the implications and predictions of a model more robust. Our class of dynamic stochastic games includes investment games, research and development races, models of industry dynamics, dynamic public contribution games, asynchronously repeated games, and many other models from the extant literature.


2014 · Vol 419 (2) · pp. 1322-1332
Author(s): Anna Jaśkiewicz, Andrzej S. Nowak

Author(s): João P. Hespanha

This chapter focuses on computing saddle-point equilibria of zero-sum, discrete-time dynamic games under state-feedback policies. It begins by considering solution methods for two-player zero-sum dynamic games in discrete time, assuming a finite-horizon, stage-additive cost that Player 1 wants to minimize and Player 2 wants to maximize, and taking into account a state-feedback information structure. The discussion then turns to discrete-time dynamic programming, the use of MATLAB to solve zero-sum games with finite state and action spaces, and discrete-time linear quadratic dynamic games. The chapter concludes with a practice exercise that requires computing the cost-to-go for each state of the tic-tac-toe game, and the corresponding solution.
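The backward-induction procedure the chapter describes can be sketched as follows. This is an illustrative Python sketch, not the chapter's own MATLAB code; the function and argument names are hypothetical. It assumes simultaneous moves at each stage and reports the minimizer's security (min-max) cost-to-go; a pure-strategy saddle point exists only at states where the upper and lower values coincide, otherwise mixed strategies are needed.

```python
def solve_zero_sum_dp(states, actions1, actions2, cost, dynamics,
                      horizon, terminal_cost):
    """Backward induction for a finite-horizon, zero-sum, stage-additive
    dynamic game with state feedback.

    cost(t, s, a, b)      -> stage cost (P1 minimizes, P2 maximizes)
    dynamics(t, s, a, b)  -> next state
    terminal_cost(s)      -> cost at the final time

    Returns the time-0 cost-to-go (minimizer's security level) and a
    state-feedback policy for Player 1 at each stage.
    """
    V = {s: terminal_cost(s) for s in states}  # cost-to-go at final time
    policy = []
    for t in reversed(range(horizon)):
        V_new, pol_t = {}, {}
        for s in states:
            # Stage cost plus continuation cost for every action pair.
            Q = {(a, b): cost(t, s, a, b) + V[dynamics(t, s, a, b)]
                 for a in actions1 for b in actions2}
            upper = min(max(Q[(a, b)] for b in actions2) for a in actions1)
            lower = max(min(Q[(a, b)] for a in actions1) for b in actions2)
            # Store the minimizer's security level; upper == lower means a
            # pure-strategy saddle point exists at this state and stage.
            V_new[s] = upper
            pol_t[s] = min(actions1,
                           key=lambda a: max(Q[(a, b)] for b in actions2))
        V = V_new
        policy.append(pol_t)
    return V, list(reversed(policy))
```

For the tic-tac-toe exercise the chapter mentions, `states` would enumerate board positions and `dynamics` would apply the players' marks; here any finite state and action sets work.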


Author(s): Anna Jaśkiewicz, Andrzej S. Nowak

We study Markov decision processes with Borel state spaces under quasi-hyperbolic discounting. This type of discounting nicely models human behaviour, which is time-inconsistent in the long run. The decision maker has preferences changing in time. Therefore, the standard approach based on the Bellman optimality principle fails. Within a dynamic game-theoretic framework, we prove the existence of randomised stationary Markov perfect equilibria for a large class of Markov decision processes with transitions having a density function. We also show that randomisation can be restricted to two actions in every state of the process. Moreover, we prove that under some conditions, this equilibrium can be replaced by a deterministic one. For models with countable state spaces, we establish the existence of deterministic Markov perfect equilibria. Many examples are given to illustrate our results, including a portfolio selection model with quasi-hyperbolic discounting.
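For reference, the standard beta-delta formulation of quasi-hyperbolic discounting (not spelled out in the abstract itself): the self at time $t$ evaluates a stream of utilities as

```latex
U_t = u(s_t, a_t) + \beta \sum_{k=1}^{\infty} \delta^{k}\, u(s_{t+k}, a_{t+k}),
\qquad 0 < \beta \le 1,\quad 0 < \delta < 1.
```

With $\beta < 1$ the near future is discounted more heavily than the distant future, and the self at time $t+1$ re-applies $\beta$ to its own future, so successive selves disagree. This is why the Bellman optimality principle fails and the problem is instead treated as a game among the decision maker's selves, whose Markov perfect equilibria replace optimal plans.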


2014 · Vol 165 (1) · pp. 295-315
Author(s): Łukasz Balbus, Anna Jaśkiewicz, Andrzej S. Nowak
