Optimal Control of Diffusions with Hard Terminal State Restrictions

2012 ◽  
Vol 2012 ◽  
pp. 1-38
Author(s):  
Atle Seierstad

A maximum principle is proved for certain problems of optimal control of diffusions in which hard end constraints occur. The results apply to multidimensional problems in which some of the state equations involve Brownian motions, but not the equations corresponding to the states that are hard-restricted at the terminal time.

1996 ◽  
Vol 2 (1) ◽  
pp. 3-15 ◽  
Author(s):  
I.S. Sadek ◽  
J.M. Sloss ◽  
S. Adali ◽  
J.C. Bruch

A maximum principle is developed for a class of problems involving the optimal control of a damped distributed parameter system governed by a not-necessarily separable linear hyperbolic equation in two space dimensions. An index of performance is formulated, which consists of functions of the state variable, its first- and second-order space derivatives and first-order time derivative, and a penalty function involving the open-loop control force. The solution of the optimal control problem is shown to be unique using convexity arguments. The maximum principle involves a Hamiltonian, which contains an adjoint variable as well as an admissible control function. The state and adjoint variables are linked by terminal conditions, leading to a boundary/initial/terminal value problem. The maximum principle can be used to compute the optimal control function and is particularly suitable for problems involving the active control of two-dimensional structural elements for vibration suppression.
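The Hamiltonian-based conditions described above follow the familiar Pontryagin pattern. As orientation only, here is the generic finite-dimensional template; the symbols below are illustrative and not the paper's distributed-parameter notation:

```latex
% Generic Pontryagin-type conditions (illustrative notation):
% Hamiltonian built from dynamics f, running cost L, adjoint p
H(x, u, p, t) = \langle p, f(x, u, t) \rangle + L(x, u, t)
% Adjoint equation
\dot p(t) = -\nabla_x H\bigl(x^*(t), u^*(t), p(t), t\bigr)
% Terminal (transversality) condition, \phi the terminal cost
p(T) = \nabla \phi\bigl(x^*(T)\bigr)
% Maximum condition over the admissible set U
H\bigl(x^*(t), u^*(t), p(t), t\bigr) = \max_{u \in U} H\bigl(x^*(t), u, p(t), t\bigr)
```

In the paper's setting the state equation is a hyperbolic PDE, so the adjoint is itself a PDE with terminal-time data, which is what produces the boundary/initial/terminal value problem mentioned above.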


2012 ◽  
Vol 2012 ◽  
pp. 1-29 ◽  
Author(s):  
Shaolin Ji ◽  
Qingmeng Wei ◽  
Xiumin Zhang

We study the optimal control problem for a controlled time-symmetric forward-backward doubly stochastic differential equation with initial-terminal state constraints. Applying the terminal perturbation method and Ekeland's variational principle, a necessary condition for optimality, that is, a stochastic maximum principle, is derived. Applications to backward doubly stochastic linear-quadratic control models are investigated.


2020 ◽  
Vol 2020 ◽  
pp. 1-5
Author(s):  
Hongyong Deng ◽  
Wei Zhang ◽  
Changchun Shen

Motivated by the needs of numerical calculation and mathematical modelling, this paper focuses on the stability of optimal trajectories for optimal control problems. The basic ideas and techniques rest on the compactness of the optimal trajectory set and a set-valued mapping theorem. Although optimal controls themselves may lack stability, a generic stability result for optimal trajectories is obtained under perturbations of the right-hand-side functions of the state equations: in the sense of Baire category, most right-hand-side functions yield optimal trajectories that are stable, and any right-hand-side function can be approximated by such functions.


2001 ◽  
Vol 42 (4) ◽  
pp. 532-551 ◽  
Author(s):  
Liping Pan ◽  
Jiongmin Yong

We study an optimal control problem for a quasilinear parabolic equation which has delays in its highest-order spatial derivative terms. The cost functional is of Lagrange type, and some terminal state constraints are imposed. A Pontryagin-type maximum principle is derived.


2000 ◽  
Vol 123 (3) ◽  
pp. 518-527 ◽  
Author(s):  
Yongcai Xu ◽  
Masami Iwase ◽  
Katsuhisa Furuta

Swing-up of a rotating-type pendulum from the pendant to the inverted state is known to be one of the most difficult control problems, since the system is nonlinear, underactuated, and has uncontrollable states. This paper studies time-optimal swing-up control of the pendulum using bounded input. Time-optimal control of a nonlinear system can be formulated by Pontryagin's Maximum Principle, which is, however, hard to compute in practice. In this paper, a new computational approach is presented to attain a numerical solution of the time-optimal swing-up problem. The time-optimal control problem is described as minimization of the achievable time to attain the terminal state under a bounded input amplitude, although algorithms to solve this problem directly are known to be complicated. Therefore, it is shown how the time-optimal swing-up control can be formulated as an auxiliary problem in which the minimal input amplitude is sought such that the terminal state satisfies a specification at a given time. Through the proposed approach, time-optimal control can be solved by nonlinear optimization. The approach is evaluated by numerical simulations of a simplified pendulum model, checked against the necessary conditions of the Maximum Principle, and verified experimentally on the rotating-type pendulum.
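The auxiliary-problem idea, searching for the minimal input amplitude that reaches the terminal state at a fixed time, can be sketched on a toy system. The double integrator, the mid-horizon bang-bang switch, and the bisection search below are illustrative assumptions, not the paper's pendulum model or algorithm:

```python
def simulate(u_max, T=2.0, n=2000):
    """Bang-bang double integrator: accelerate at +u_max for T/2, then
    decelerate at -u_max; return the final position (starting from rest)."""
    dt = T / n
    x, v = 0.0, 0.0
    for k in range(n):
        t = k * dt
        u = u_max if t < T / 2 else -u_max  # fixed bang-bang switching
        v += u * dt                          # velocity update
        x += v * dt                          # position update
    return x

def minimal_amplitude(target=1.0, T=2.0, lo=0.0, hi=10.0, tol=1e-6):
    """Bisect on the input amplitude until the terminal state reaches the
    target at the fixed time T -- the auxiliary problem described above."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if simulate(mid, T) >= target:  # amplitude suffices: shrink from above
            hi = mid
        else:                           # amplitude too small: raise lower bound
            lo = mid
    return 0.5 * (lo + hi)
```

For this toy system the exact minimal amplitude reaching x(T)=1 from rest with T=2 is 4·target/T² = 1.0, which the bisection recovers; the time-optimal horizon is then found by repeating this search over candidate values of T.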


2020 ◽  
Vol 26 ◽  
pp. 47 ◽  
Author(s):  
N.P. Osmolovskii ◽  
V.M. Veliov

The paper investigates the property of Strong Metric sub-Regularity (SMsR) of the mapping representing the first-order optimality system for a Lagrange-type optimal control problem which is affine with respect to the control. The terminal time is fixed, the terminal state is free, and the control values are restricted to a convex compact set U. The SMsR property is associated with a reference solution of the optimality system and ensures that small additive perturbations of the system result in solutions whose distance to the reference one is at most proportional to the size of the perturbations. A general sufficient condition for SMsR is obtained for appropriate space settings and is then specialized to the case of a polyhedral set U and a purely bang-bang reference control. Sufficient second-order optimality conditions are obtained as a by-product of the analysis. Finally, the obtained results are utilized for error analysis of the Euler discretization scheme applied to affine problems.
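For a control-affine Lagrange problem of the kind considered, the first-order optimality system takes a generic Pontryagin form; the sketch below uses illustrative notation, not the paper's own:

```latex
% Lagrange problem affine in the control, fixed T, free terminal state:
%   minimize  \int_0^T \bigl( l_0(x(t)) + \langle l_1(x(t)), u(t) \rangle \bigr)\,dt
%   subject to  \dot x = f(x) + g(x)\,u,  \quad x(0) = x_0,  \quad u(t) \in U.
%
% First-order optimality system (the mapping whose SMsR is analyzed):
\begin{aligned}
\dot x(t) &= f(x(t)) + g(x(t))\,u(t), & x(0) &= x_0,\\
\dot p(t) &= -\nabla_x H\bigl(x(t), p(t), u(t)\bigr), & p(T) &= 0,\\
0 &\in \nabla_u H\bigl(x(t), p(t), u(t)\bigr) + N_U\bigl(u(t)\bigr),
\end{aligned}
% where H = l_0 + \langle l_1, u \rangle + \langle p, f + g u \rangle and
% N_U(u) is the normal cone to U at u.
```

Because H is affine in u, the inclusion selects extreme points of U almost everywhere, which is why a polyhedral U leads naturally to the purely bang-bang reference controls studied in the paper.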


SIMULATION ◽  
1966 ◽  
Vol 7 (5) ◽  
pp. 238-245 ◽  
Author(s):  
Richard L. Maybach

This paper presents a method for finding the solutions to minimum-time optimal control problems. The procedure is to implement Pontryagin's Maximum Principle on an iterative hybrid computer. The state and adjoint equations as well as the control law are simulated using conventional analog components. The troublesome two-point boundary-value problem, which is always associated with the Maximum Principle, is solved by iteration, using a digital parameter optimizer. Thus, a manual trial-and-error search for the proper initial values of the adjoint variables is unnecessary. We show that, for a large class of systems, it is not necessary to generate the Hamiltonian, because the necessary condition that it normally must satisfy is redundant. This allows many problems to be greatly simplified. We also present an optimizing routine that solves the boundary-value problem. This permits the proposed method to be used on any hybrid computer that incorporates a general-purpose digital computer. The solutions to two problems show that the proposed method is feasible. Average convergence times range from less than one second to about 70 seconds; these vary with the initial conditions on the state variables. The examples were solved using ASTRAC II, a small (40-amplifier), high-speed (up to 1000 solutions per second) iterative hybrid computer with only modest component accuracy (0.25 per cent). Although the discussion and examples are limited to a minimum-time performance index, the method is easily extended to cover other criteria.
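The replacement of the manual trial-and-error search can be sketched in software: guess the unknown initial adjoint value, integrate the canonical equations forward, and adjust the guess until the terminal condition is met. The linear-quadratic toy problem and the bisection below are illustrative assumptions, not the paper's hybrid-computer setup:

```python
def shoot(p0, T=1.0, n=1000):
    """Integrate state and adjoint forward from a guessed initial adjoint.
    Toy problem: x' = u with cost (1/2)u^2, so the H-minimizing control
    is u = -p and the adjoint equation is p' = 0 (p stays constant)."""
    dt = T / n
    x, p = 0.0, p0
    for _ in range(n):
        u = -p       # control from the stationarity condition on H
        x += u * dt  # state equation
        # p is constant here, so no adjoint update is needed
    return x

def solve_tpbvp(target=1.0, T=1.0, tol=1e-8):
    """Bisect on p(0) until x(T) matches the target terminal state --
    automating the adjoint search done by the digital parameter optimizer."""
    p_lo, p_hi = -10.0, 10.0
    while p_hi - p_lo > tol:
        mid = 0.5 * (p_lo + p_hi)
        if shoot(mid, T) > target:  # overshoot: need a larger p(0)
            p_lo = mid
        else:                       # undershoot: need a smaller p(0)
            p_hi = mid
    return 0.5 * (p_lo + p_hi)
```

For this toy problem x(T) = -p(0)·T, so the search converges to p(0) = -1; the paper's hybrid computer performs the same outer iteration at analog solution rates.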

