How good is Howard's policy improvement algorithm?

1985 ◽  
Vol 29 (7) ◽  
pp. 315-316 ◽  
Author(s):  
N. Schmitz


1989 ◽  
Vol 3 (3) ◽  
pp. 397-403 ◽  
Author(s):  
P. Whittle

A condition expressed in Eq. (7) is given which, with one simplifying regularity condition, ensures that the policy-improvement algorithm is equivalent to application of the Newton–Raphson algorithm to an optimality condition. It is shown that this condition covers the two known cases of such equivalence, and another example is noted. The condition is believed to be necessary to within transformations of the problem, but this has not been proved.
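
The equivalence asserted here can be checked concretely in a standard special case, the discounted finite-state MDP. The sketch below (a toy random model, not taken from the article; all names and the discount factor are illustrative) verifies that one policy-iteration step equals one Newton–Raphson step applied to the Bellman optimality residual.

```python
# Minimal sketch for a discounted finite-state MDP (a standard setting in which the
# policy-improvement/Newton-Raphson equivalence is known to hold; toy data, not from
# the article).  One policy-iteration step coincides with a Newton-Raphson step on the
# Bellman optimality residual
#     F(V) = max_a (r_a + gamma * P_a V) - V,
# whose derivative at V is gamma * P_pi' - I for the greedy policy pi'.
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 4, 3, 0.9
P = rng.dirichlet(np.ones(nS), size=(nA, nS))   # P[a, s, s'] transition probabilities
r = rng.standard_normal((nA, nS))               # r[a, s] one-step rewards

def greedy(V):
    """Greedy policy with respect to the current value estimate V."""
    Q = r + gamma * np.einsum("asx,x->as", P, V)
    return Q.argmax(axis=0)

def policy_iteration_step(V):
    """Improve the policy, then evaluate the improved policy exactly."""
    pi = greedy(V)
    P_pi, r_pi = P[pi, np.arange(nS), :], r[pi, np.arange(nS)]
    return np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)

def newton_step(V):
    """Newton-Raphson step on F(V) = max_a (r_a + gamma * P_a V) - V."""
    pi = greedy(V)
    P_pi = P[pi, np.arange(nS), :]
    F = r[pi, np.arange(nS)] + gamma * P_pi @ V - V
    return V - np.linalg.solve(gamma * P_pi - np.eye(nS), F)

V0 = np.zeros(nS)
print(np.allclose(policy_iteration_step(V0), newton_step(V0)))  # True
```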


Author(s):  
Ari Arapostathis ◽  
Anup Biswas ◽  
Somnath Pradhan

In this article we consider the ergodic risk-sensitive control problem for a large class of multidimensional controlled diffusions on the whole space. We study the minimization and maximization problems under either a blanket stability hypothesis or a near-monotone assumption on the running cost. We establish the convergence of the policy improvement algorithm for these models. We also present a more general result concerning the region of attraction of the equilibrium of the algorithm.
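
The article treats controlled diffusions on the whole space; the mechanics of the policy improvement iteration are easier to display in a finite-state analogue. In the hedged sketch below (illustrative only, not the setting or notation of the paper), the risk-sensitive ergodic cost of a fixed policy is the logarithm of the Perron–Frobenius eigenvalue of the cost-twisted kernel, and the improvement step minimizes the twisted one-step operator applied to the principal eigenfunction.

```python
# Finite-state sketch of risk-sensitive policy improvement (an illustrative analogue of
# the article's diffusion setting, not its actual model).  For a fixed policy pi, the
# ergodic risk-sensitive cost is log of the Perron-Frobenius eigenvalue of the twisted
# kernel M_pi[x, y] = exp(c(x, pi(x))) * P[pi(x), x, y]; improvement minimizes
# exp(c(x, a)) * sum_y P[a, x, y] * v(y) over actions a.
import numpy as np

rng = np.random.default_rng(1)
nX, nA = 5, 3
P = rng.dirichlet(np.ones(nX), size=(nA, nX))   # P[a, x, y] controlled transition kernel
c = rng.uniform(0.0, 1.0, size=(nA, nX))        # running cost, indexed [a, x]

def evaluate(pi):
    """Perron eigenvalue and positive eigenfunction of the twisted kernel under pi."""
    idx = np.arange(nX)
    M = np.exp(c[pi, idx])[:, None] * P[pi, idx, :]
    eigvals, eigvecs = np.linalg.eig(M)
    k = np.argmax(eigvals.real)                  # Perron root of a positive matrix
    v = np.abs(eigvecs[:, k].real)               # principal eigenfunction (positive)
    return np.log(eigvals[k].real), v / v.sum()

def improve(v):
    """Greedy step on the multiplicative (risk-sensitive) Bellman operator (minimization)."""
    scores = np.exp(c) * np.einsum("axy,y->ax", P, v)
    return scores.argmin(axis=0)

pi = np.zeros(nX, dtype=int)
for _ in range(50):
    _, v = evaluate(pi)
    new_pi = improve(v)
    if np.array_equal(new_pi, pi):
        break
    pi = new_pi
rho, _ = evaluate(pi)
print("risk-sensitive ergodic cost of the stable policy:", rho)
```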


2020 ◽  
Vol 22 (02) ◽  
pp. 2040008
Author(s):  
P. Mondal ◽  
S. K. Neogy ◽  
A. Gupta ◽  
D. Ghorui

Zero-sum two-person discounted semi-Markov games with finite state and action spaces are studied in which a collection of states having the Perfect Information (PI) property is mixed with another collection of states having the Additive Reward–Additive Transition and Action-Independent Transition Time (AR-AT-AITT) property. For such a PI/AR-AT-AITT mixture class of games, we prove the existence of an optimal pure stationary strategy for each player. We develop a policy improvement algorithm for solving discounted semi-Markov decision processes (the one-player version of semi-Markov games) and, using it, obtain a policy-improvement-type algorithm for computing an optimal strategy pair of a PI/AR-AT-AITT mixture semi-Markov game. Finally, we extend our results to the case in which the states having the PI property are replaced by a subclass of Switching Control (SC) states.
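
As a rough illustration of the one-player building block mentioned in the abstract, here is a hedged sketch of policy improvement for a finite discounted semi-Markov decision process; the data, variable names, and the particular discounting convention (an expected discount factor per state–action pair) are assumptions made for the example, not taken from the paper.

```python
# Toy sketch of policy improvement for a finite discounted semi-Markov decision process
# (illustrative; not the model or notation of the paper).  Each (state, action) pair
# carries an expected discount factor beta[x, a] = E[exp(-alpha * tau)] over its random
# sojourn time tau, so evaluating a pure stationary strategy pi solves a linear system.
import numpy as np

rng = np.random.default_rng(2)
nX, nA = 4, 2
P = rng.dirichlet(np.ones(nX), size=(nX, nA))   # P[x, a, y] transition probabilities
r = rng.standard_normal((nX, nA))               # expected discounted reward over one sojourn
beta = rng.uniform(0.6, 0.9, size=(nX, nA))     # expected discount factor per (x, a)

def evaluate(pi):
    """Exact value of the pure stationary strategy pi: solve V = r_pi + (beta * P)_pi V."""
    idx = np.arange(nX)
    BP = beta[idx, pi][:, None] * P[idx, pi, :]
    return np.linalg.solve(np.eye(nX) - BP, r[idx, pi])

def improve(V):
    """One policy-improvement step: greedy with respect to the semi-Markov Q-values."""
    Q = r + beta * np.einsum("xay,y->xa", P, V)
    return Q.argmax(axis=1)

pi = np.zeros(nX, dtype=int)
while True:
    V = evaluate(pi)
    new_pi = improve(V)
    if np.array_equal(new_pi, pi):
        break
    pi = new_pi
print("optimal pure stationary strategy:", pi, "with values:", V)
```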


Stochastics ◽  
2016 ◽  
Vol 89 (1) ◽  
pp. 348-359 ◽  
Author(s):  
Saul D. Jacka ◽  
Aleksandar Mijatović


Author(s):  
Elad Sarafian ◽  
Aviv Tamar ◽  
Sarit Kraus

We propose a policy improvement algorithm for Reinforcement Learning (RL) termed Rerouted Behavior Improvement (RBI). RBI is designed to take into account the evaluation errors of the Q-function. Such errors are common in RL when learning the Q-value from finite experience data. Greedy policies or even constrained policy optimization algorithms that ignore these errors may suffer from an improvement penalty (i.e., a policy impairment). To reduce the penalty, the idea of RBI is to attenuate rapid policy changes to actions that were rarely sampled. This approach is shown to avoid catastrophic performance degradation and reduce regret when learning from a batch of transition samples. Through a two-armed bandit example, we show that it also increases data efficiency when the optimal action has a high variance. We evaluate RBI in two tasks in the Atari Learning Environment: (1) learning from observations of multiple behavior policies and (2) iterative RL. Our results demonstrate the advantage of RBI over greedy policies and other constrained policy optimization algorithms both in learning from observations and in RL tasks.
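
As a rough, hedged illustration of the attenuation idea (this is not the exact RBI update rule from the paper): cap how much probability mass an improvement step may move toward each action in proportion to how often the behavior policy actually sampled it, then take the best feasible policy under the estimated Q-values.

```python
# Illustrative sketch of attenuating policy changes toward rarely sampled actions
# (a simplified stand-in, not the RBI update from the paper).
import numpy as np

def attenuated_improvement(q_hat, behavior, counts, shift=0.5):
    """Greedy improvement under per-action caps that grow with sample counts.

    q_hat    : estimated Q-values for one state (noisy for rarely sampled actions)
    behavior : empirical behavior-policy probabilities at that state
    counts   : number of times each action was sampled at that state
    shift    : total extra probability mass allowed to move (illustrative knob)
    """
    counts = np.asarray(counts, dtype=float)
    caps = behavior + shift * counts / max(counts.sum(), 1.0)  # rare actions get little headroom
    new_pi, remaining = np.zeros_like(behavior), 1.0
    for a in np.argsort(-q_hat):          # fill the best-looking actions first, up to their caps
        new_pi[a] = min(caps[a], remaining)
        remaining -= new_pi[a]
        if remaining <= 0.0:
            break
    return new_pi

# Action 2 has the highest estimated Q-value but was sampled only once,
# so the improvement toward it is strongly attenuated.
q_hat = np.array([0.1, 0.3, 0.9])
behavior = np.array([0.5, 0.4, 0.1])
counts = np.array([50, 40, 1])
print(attenuated_improvement(q_hat, behavior, counts))
```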


OR Spectrum ◽  
1986 ◽  
Vol 8 (1) ◽  
pp. 37-40 ◽  
Author(s):  
U. Meister ◽  
U. Holzbaur
