Weak Dynamic Programming Principle for Viscosity Solutions

2011 ◽  
Vol 49 (3) ◽  
pp. 948-962 ◽  
Author(s):  
Bruno Bouchard ◽  
Nizar Touzi
2018 ◽  
Vol 24 (1) ◽  
pp. 437-461 ◽  
Author(s):  
Huyên Pham ◽  
Xiaoli Wei

We consider the stochastic optimal control problem for McKean-Vlasov stochastic differential equations whose coefficients may depend on the joint law of the state and control. Using feedback controls, we reformulate the problem as a deterministic control problem in which the marginal distribution of the process is the only controlled state variable, and prove that the dynamic programming principle holds in its general form. Then, relying on the notion of differentiability with respect to probability measures recently introduced by [P.L. Lions, Cours au Collège de France: Théorie des jeux à champ moyens, audio conference 2006-2012] and an Itô formula for flows of probability measures, we derive the (dynamic programming) Bellman equation for the mean-field stochastic control problem and prove a verification theorem in our McKean-Vlasov framework. We give explicit solutions of the Bellman equation for the linear-quadratic mean-field control problem, with applications to mean-variance portfolio selection and a systemic risk model. We also consider a notion of lifted viscosity solutions for the Bellman equation, and show the viscosity property and uniqueness of the value function of the McKean-Vlasov control problem. Finally, we consider the McKean-Vlasov control problem with open-loop controls and discuss the associated dynamic programming equation, which we compare with the closed-loop case.
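For the linear-quadratic mean-field case, the value function is quadratic in the state and in its mean, so the Bellman equation reduces to scalar Riccati ODEs. The sketch below is an illustrative scalar example, not the paper's notation: the model, parameter names, and weights (`a`, `abar`, `b`, `q`, `qbar`, `r`, `g`) are all assumptions chosen for the demonstration.

```python
import numpy as np

# Illustrative scalar mean-field LQ control (all parameters assumed):
#   dX_t = (a X_t + abar E[X_t] + b u_t) dt + sigma dW_t
#   J(u) = E ∫_0^T ( q X_t^2 + qbar (E[X_t])^2 + r u_t^2 ) dt + E[g X_T^2]
# The value function splits into a "variance" coefficient Lam(t) and a
# "mean" coefficient Gam(t), each solving a backward Riccati ODE.

a, abar, b, sigma = 0.5, 0.2, 1.0, 0.3
q, qbar, r, g = 1.0, 0.5, 1.0, 1.0
T, n = 1.0, 1000
dt = T / n

Lam = np.empty(n + 1)
Gam = np.empty(n + 1)
Lam[n] = g
Gam[n] = g
for k in range(n, 0, -1):  # integrate the Riccati ODEs backward from T
    dLam = (b**2 / r) * Lam[k]**2 - 2 * a * Lam[k] - q
    dGam = (b**2 / r) * Gam[k]**2 - 2 * (a + abar) * Gam[k] - (q + qbar)
    Lam[k - 1] = Lam[k] - dt * dLam
    Gam[k - 1] = Gam[k] - dt * dGam

# Candidate optimal feedback: u*(t, x, mean) acts separately on the
# fluctuation (x - mean) and on the mean itself.
def u_star(k, x, mean):
    return -(b / r) * (Lam[k] * (x - mean) + Gam[k] * mean)

# Monte Carlo check: simulate a particle population under u*; the empirical
# mean plays the role of E[X_t] (Euler-Maruyama discretization).
rng = np.random.default_rng(0)
m = 5000
X = np.full(m, 1.0)
for k in range(n):
    mean = X.mean()
    u = u_star(k, X, mean)
    X += (a * X + abar * mean + b * u) * dt \
         + sigma * np.sqrt(dt) * rng.standard_normal(m)

print(Lam[0], Gam[0], X.mean())
```

Under the feedback above, the population mean is driven toward zero, which is the qualitative behavior one expects from the mean-variance type objectives discussed in the abstract.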


2019 ◽  
Vol 19 (03) ◽  
pp. 1950019 ◽  
Author(s):  
R. C. Hu ◽  
X. F. Wang ◽  
X. D. Gu ◽  
R. H. Huan

In this paper, nonlinear stochastic optimal control of multi-degree-of-freedom (MDOF) partially observable linear systems subjected to combined harmonic and wide-band random excitations is investigated. Based on the separation principle, the control problem of a partially observable system is converted into a completely observable one. The dynamic programming equation for the completely observable control problem is then set up using the stochastic averaging method and the stochastic dynamic programming principle, from which the nonlinear optimal control law is derived. To illustrate the feasibility and efficiency of the proposed control strategy, the responses of the uncontrolled and optimally controlled systems are obtained by solving the associated Fokker-Planck-Kolmogorov (FPK) equation. Numerical results show that the proposed control strategy can dramatically reduce the response of stochastic systems subjected to combined harmonic and wide-band random excitations.
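The separation principle mentioned above can be illustrated on a toy problem: a filter reconstructs the state from noisy observations, and the controller acts on the estimate as if it were the true state. The sketch below is a minimal scalar example with assumed dynamics and parameters (it uses a Kalman-Bucy filter and a stationary LQ gain rather than the paper's stochastic-averaging construction), with a harmonic force plus white noise as the excitation.

```python
import numpy as np

# Minimal separation-principle sketch (all names and parameters assumed):
# scalar system  dx = (a x + b u + F0 cos(omega t)) dt + sw dW,
# observed via   dy = x dt + sv dV.
# A Kalman-Bucy filter gives the estimate xhat; LQ feedback acts on xhat.

rng = np.random.default_rng(1)
dt, n = 0.01, 5000
a, b = -0.2, 1.0          # drift and control-input gains
sw, sv = 0.5, 0.2         # process / observation noise intensities
omega, F0 = 2.0, 0.5      # harmonic excitation F0*cos(omega t)
q, r = 1.0, 0.1           # LQ state and control weights

# Positive root of the algebraic Riccati equation 2aP - (b^2/r)P^2 + q = 0
P = (2 * a * r + np.sqrt((2 * a * r) ** 2 + 4 * b**2 * r * q)) / (2 * b**2)
K = b * P / r             # feedback gain: u = -K * xhat

x, xhat, S = 0.0, 0.0, 1.0  # true state, estimate, estimate variance
xs = []
for k in range(n):
    t = k * dt
    u = -K * xhat
    f = F0 * np.cos(omega * t)
    # true dynamics (Euler-Maruyama)
    x += (a * x + b * u + f) * dt + sw * np.sqrt(dt) * rng.standard_normal()
    # noisy observation increment
    dy = x * dt + sv * np.sqrt(dt) * rng.standard_normal()
    # Kalman-Bucy filter: propagate estimate and error variance
    Kf = S / sv**2
    xhat += (a * xhat + b * u + f) * dt + Kf * (dy - xhat * dt)
    S += (2 * a * S + sw**2 - S**2 / sv**2) * dt
    xs.append(x)

print(np.var(xs), S)
```

The controller never sees `x` directly, only `xhat`; this is exactly the conversion of a partially observable problem into a completely observable one that the abstract describes, here in its simplest linear-Gaussian form.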

