On the Optimal Control Problem With Unrestricted Final Time

1969 ◽  
Vol 91 (2) ◽  
pp. 155-160 ◽  
Author(s):  
C. T. Leondes ◽  
R. A. Niemann

In problems of optimal control, the final time T may be fixed or unrestricted. In the unrestricted case, an additional necessary condition, that the Hamiltonian be zero, is added to the optimality conditions used in the fixed-time case. In this paper, it is shown that this necessary condition may correspond to a local maximum of the performance criterion with respect to the final time as well as a local minimum. The paper first develops a computational algorithm using only the H = 0 condition, and then derives a sufficient condition for a local minimum with respect to the final time, together with a computational algorithm employing it. Numerical examples are given to illustrate all results.
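The idea can be illustrated on a toy free-final-time problem (a hypothetical example, not one from the paper): minimize J over trajectories of a scalar system and then over the free final time T, using H(T) = 0 as a root-finding condition and a curvature check to distinguish a minimum from a maximum. A minimal Python sketch:

```python
# Toy free-final-time problem (a hypothetical example, not one from the
# paper): minimize J = integral_0^T (u^2 + 1) dt subject to x' = u,
# x(0) = 0, x(T) = 1, with T free.  For each fixed T the optimal control is
# the constant u = 1/T, which gives J(T) = T + 1/T and a Hamiltonian value
# H(T) = 1 - 1/T^2 along the optimal trajectory.

def J(T):
    return T + 1.0 / T

def H(T):
    return 1.0 - 1.0 / T ** 2

def solve_H_zero(lo=0.5, hi=3.0, tol=1e-12):
    """Bisection on the extra necessary condition H(T) = 0 for free T."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if H(lo) * H(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

T_star = solve_H_zero()

# Second-difference check that T_star is a local *minimum* of J with
# respect to the final time -- the distinction the paper's sufficient
# condition addresses, since H = 0 alone can also pick out a maximum.
h = 1e-4
curvature = (J(T_star + h) - 2.0 * J(T_star) + J(T_star - h)) / h ** 2

print(round(T_star, 6), curvature > 0)   # T* = 1 is a local minimum
```

Here H = 0 happens to have a single root where J is locally convex; in general, as the paper shows, a root of H = 0 may equally be a local maximum, which is why a sufficient condition is needed.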

1971 ◽  
Vol 93 (4) ◽  
pp. 218-220
Author(s):  
R. A. Niemann

The class of optimal control problems is considered in which the terminal point is constrained to lie on some surface in the state space. A computational algorithm is developed which solves the problem for a series of terminal points on the given surface and iterates until the transversality condition is satisfied. An example is considered in which there are several solutions which satisfy the transversality condition, some producing a local minimum of the performance criterion with respect to the variable terminal point and some a local maximum. A sufficient condition for a local minimum is derived.
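The phenomenon described, several terminal points satisfying the transversality condition with some giving minima and some maxima, can be reproduced with a small made-up example (not the paper's):

```python
import math

# Made-up illustration (not the paper's algorithm or example): the terminal
# point is constrained to the unit circle g(x) = x1^2 + x2^2 - 1 = 0.  For
# the system x' = u with cost integral |u|^2 dt on [0, 1], the minimal cost
# of steering from a to a terminal point b is |b - a|^2, so the problem
# reduces to choosing b on the surface; stationarity of the cost in the
# surface parameter theta plays the role of the transversality condition.

a = (2.0, 0.0)

def J(theta):
    b = (math.cos(theta), math.sin(theta))
    return (b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2

def dJ(theta, h=1e-6):
    return (J(theta + h) - J(theta - h)) / (2.0 * h)

# Scan the surface parameter for sign changes of dJ/dtheta, refine each by
# bisection, and classify it: as in the paper, some solutions of the
# transversality condition give a local minimum and some a local maximum.
results = []
n = 997
lo_end, span = -0.5 * math.pi, 2.0 * math.pi
for i in range(n):
    t0 = lo_end + span * i / n
    t1 = lo_end + span * (i + 1) / n
    if dJ(t0) * dJ(t1) < 0.0:
        lo, hi = t0, t1
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if dJ(lo) * dJ(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        t = 0.5 * (lo + hi)
        curv = (J(t + 1e-4) - 2.0 * J(t) + J(t - 1e-4)) / 1e-8
        results.append((round(t, 6), "min" if curv > 0 else "max"))

print(results)   # a minimum near theta = 0, a maximum near theta = pi
```

The nearest point of the circle to a gives the minimum and the farthest point the maximum; both satisfy the stationarity condition, which is why a second-order check (the sufficient condition of the paper) is required.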


Author(s):  
John M. Blatt

We consider an optimal control problem with joint, possibly time-dependent, constraints on the state and control variables. Using only elementary methods, we derive a sufficient condition for optimality. Although phrased in terms reminiscent of Pontryagin's necessary condition, the sufficient condition is logically independent of it, as can be shown by a simple example.


Author(s):  
K. L. Teo ◽  
G. Jepps ◽  
E. J. Moore ◽  
S. Hayes

A class of non-standard optimal control problems is considered. Their non-standard feature is that they have neither a fixed final time nor a fixed final state. A method of solution is devised which employs a computational algorithm based on control parametrization techniques. The method is applied to the problem of maximizing the range of an aircraft-like gliding projectile with angle-of-attack control.
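Control parametrization can be sketched on a much simpler assumed toy problem (not the gliding-projectile model): the control is replaced by finitely many piecewise-constant values, the free final time is appended to the decision vector, and the optimal control problem becomes a finite-dimensional nonlinear program.

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch of control parametrization applied to an assumed toy
# problem: minimize integral_0^T (u^2 + 1) dt subject to x' = u, x(0) = 0,
# with the terminal condition x(T) = 1 handled by a quadratic penalty and
# the final time T free.  The analytic optimum is u = 1, T = 1, cost 2.

N = 10  # number of piecewise-constant control intervals

def cost(z, penalty=1e4):
    u, T = z[:N], z[N]
    if T <= 0.0:
        return 1e9                      # keep the free final time positive
    dt = T / N
    x, J = 0.0, 0.0
    for k in range(N):                  # forward Euler over each interval
        J += (u[k] ** 2 + 1.0) * dt
        x += u[k] * dt
    return J + penalty * (x - 1.0) ** 2

z0 = np.concatenate([np.full(N, 0.5), [2.0]])     # initial guess (u, T)
res = minimize(cost, z0)                          # BFGS, numerical gradients
u_opt, T_opt = res.x[:N], res.x[N]
print(round(T_opt, 2), round(res.fun, 2))         # should be close to 1, 2
```

The same device, a finite control parametrization plus the free final time (or final state) as extra decision variables, is what makes the non-standard problems of the abstract tractable by standard NLP solvers.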


Author(s):  
Yeşim Saraç

We obtain symbolic and numerical solutions by developing a MAPLE® program that uses the initial velocity of the state variable of a wave equation as the control function. Solving this problem amounts to minimizing, at the final time, the distance, measured in a suitable norm, between the solution of the equation and a given target. An iterative algorithm is constructed to compute the required optimal control as the limit of a suitable subsequence of controls. The results are tested on numerical examples.
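A hedged toy version of such an iteration (an assumed finite-difference discretization, not the MAPLE® program or the convergence framework of the paper):

```python
import numpy as np

# Assumed toy setup: the 1-D wave equation u_tt = u_xx on (0, 1) with u = 0
# at the boundary is solved by a leapfrog scheme; the initial velocity
# v = u_t(., 0) is the control, and descent with a backtracking step
# shrinks the squared distance between u(., T) and a target profile.

nx, nt = 9, 40                     # interior grid points, time steps
dx = 1.0 / (nx + 1)
T = 0.5
dt = T / nt                        # dt/dx = 0.125, CFL-stable
xs = np.linspace(dx, 1.0 - dx, nx)

u_init = np.zeros(nx)              # initial displacement u(., 0)
target = np.sin(np.pi * xs)        # desired profile at time T
lam = (dt / dx) ** 2

def lap(w):
    p = np.concatenate(([0.0], w, [0.0]))      # homogeneous Dirichlet BCs
    return p[:-2] - 2.0 * w + p[2:]

def final_state(v):
    u_prev = u_init
    u_curr = u_init + dt * v + 0.5 * lam * lap(u_init)   # Taylor first step
    for _ in range(nt - 1):
        u_prev, u_curr = u_curr, 2.0 * u_curr - u_prev + lam * lap(u_curr)
    return u_curr

def J(v):
    return float(np.sum((final_state(v) - target) ** 2) * dx)

def grad(v, h=1e-6):               # numerical gradient (adjoint-free sketch)
    g = np.zeros_like(v)
    for i in range(nx):
        e = np.zeros(nx)
        e[i] = h
        g[i] = (J(v + e) - J(v - e)) / (2.0 * h)
    return g

v = np.zeros(nx)
history = [J(v)]
step = 1.0
for _ in range(100):
    g = grad(v)
    while step > 1e-12 and J(v - step * g) > history[-1]:
        step *= 0.5                # backtrack until the cost decreases
    cand = v - step * g
    if J(cand) <= history[-1]:     # accept only non-increasing steps
        v = cand
    history.append(J(v))
    step *= 2.0                    # allow the step to grow again

print(history[0], history[-1])     # the final-time misfit drops sharply
```

The sequence of controls generated this way is monotone in the cost; the paper's result concerns the limit of a suitable subsequence of such controls.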


2020 ◽  
Vol 21 (4) ◽  
pp. 200-207
Author(s):  
T. G. Rzaev

We analyze the known problems of optimal control for speed (time-optimal control, OCS) and methods for their solution. It is shown that using a single criterion in these problems (the speed criterion) does not sufficiently reflect real situations: solving the OCS problem in practice leads to deviations of a number of other indicators from their nominal or optimal values. Proceeding from this, a generalization of the OCS problem is considered that takes other indicators into account in the optimal control criterion. Three generalized statements of the OCS problem are analyzed: in the first, the OCS problem is extended with additional constraints on the other indicators; in the second, other indicators are used as criteria alongside the speed criterion; and in the third, the formulation is extended further by introducing constraints on the criteria themselves, formed from the other measured indicators. The most general, third, multicriteria problem is taken as the subject of research, and a necessary condition for the optimality of its solution is obtained in the form of a maximum principle. Based on this condition, an iterative scheme for solving the generalized OCS problem is presented, in which the criteria, unlike traditional ones, also depend on the degree of preference.
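The tension between the speed criterion and other indicators can be shown on a hedged toy scalarization (an illustrative example, not a formulation from the article):

```python
# Illustrative example: steer x' = u from x(0) = 1 to x = 0 with |u| <= 1.
# The pure speed criterion gives the bang control u = -1 and T = 1.
# Treating the control energy, integral of u^2 dt, as a second indicator
# with weight w > 0 gives, for a constant control u = -c with 0 < c <= 1,
#     J(c) = T + w * (energy) = 1/c + w * c,
# so the generalized problem trades speed against the secondary indicator.

def J(c, w):
    return 1.0 / c + w * c

def best_c(w, n=20000):
    cs = [(k + 1) / n for k in range(n)]         # grid over (0, 1]
    return min(cs, key=lambda c: J(c, w))

for w in (0.5, 1.0, 4.0):
    c = best_c(w)
    print(w, round(c, 3), round(J(c, w), 3))
# For w = 4 the combined optimum uses half throttle (c = 0.5, T = 2):
# slower than the time-optimal solution but better on the joint criterion.
```

For small weights the time-optimal bang solution survives; once the secondary indicator matters enough, the generalized problem deliberately deviates from the pure speed optimum, which is exactly the effect the generalized OCS formulations are meant to capture.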


1975 ◽  
Vol 7 (1) ◽  
pp. 154-178 ◽  
Author(s):  
N. U. Ahmed ◽  
K. L. Teo

In this paper, the authors consider a class of stochastic systems described by Ito differential equations for which both controls and parameters are to be chosen optimally with respect to a certain performance index over a fixed time interval. The controls to be optimized depend only on partially observed current states, as in the work of Fleming. Fleming, however, considered a problem of optimal control of systems governed by stochastic Ito differential equations with a Markov terminal time. Fixed-time problems usually give rise to Cauchy problems (unbounded domain), whereas Markov-time problems give rise to first boundary-value problems (bounded domain). This fact makes the former problems relatively more involved than the latter. For the latter problems, Fleming has reported a necessary condition for optimality and an existence theorem for optimal controls. In this paper, a necessary condition for optimality of controls and parameters combined is presented for the former problems.
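The flavor of a fixed-time-interval stochastic problem can be conveyed by a hedged Monte Carlo sketch (an assumed toy model, fully observed and far simpler than the partially observed setting of the paper):

```python
import math, random

# Assumed toy model: dX = u dt + sigma dW on [0, T], with cost
#     E[ integral_0^T u^2 dt + X(T)^2 ].
# For a constant control the cost is u^2*T + (x0 + u*T)^2 + sigma^2*T,
# minimized at u = -x0 / (1 + T); below this is recovered by simulation.

def mc_cost(u, x0=1.0, sigma=0.5, T=1.0, n_steps=20, n_paths=1000, seed=0):
    rng = random.Random(seed)   # common random numbers across candidate u's
    dt = T / n_steps
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):          # Euler-Maruyama step
            x += u * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        total += u * u * T + x * x
    return total / n_paths

us = [-1.0 + 0.05 * k for k in range(41)]   # candidate constant controls
u_best = min(us, key=mc_cost)
print(round(u_best, 2))   # analytic optimum is -0.5
```

Because the horizon is fixed, the expected cost is over an unbounded state domain (the Cauchy setting of the abstract), in contrast with Markov terminal times, where the expectation runs until a first exit from a bounded domain.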




2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Lifan Kang ◽  
Yue Wang ◽  
Ting Hou

This note focuses on the finite-horizon H2/H∞ control of stochastic nonlinear jump systems with partially unknown transition probabilities. We first derive the nonlinear stochastic bounded real lemma and the nonlinear optimal regulator result for the considered system. A sufficient condition and a necessary condition for the solvability of the H2/H∞ control problem are then given, respectively, in terms of four cross-coupled Hamilton–Jacobi equations (HJEs). Numerical examples show the effectiveness of the obtained results.


1995 ◽  
Vol 05 (02) ◽  
pp. 225-237 ◽  
Author(s):  
SUZANNE LENHART

We consider optimal control of a parabolic differential equation, modeling one-dimensional fluid flow through a soil-packed tube in which a contaminant is initially distributed. A fluid is pumped through the tube to remove the contaminant. The convective velocity due to the fluid pumping is the nonlinear control action. The goal is to minimize a performance criterion which is a combination of the total contaminant at the final time and the cost of the control. The optimal control is characterized by an optimality system.
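The optimality system couples the state and adjoint equations through the control; as a much cruder hedged sketch (an assumed explicit finite-difference discretization with made-up parameter values, not the characterization derived in the paper), one can discretize the contaminant equation directly and compare constant pumping velocities against the combined criterion from the abstract:

```python
import numpy as np

# Assumed toy discretization: the contaminant concentration obeys
#     c_t = D c_xx - v c_x   on (0, 1),
# with zero contaminant entering on the left, outflow on the right, and the
# pumping velocity v >= 0 as the control.  The criterion, as in the
# abstract, combines the total contaminant at the final time with the
# cost of the control.

D = 0.01                      # diffusion coefficient (assumed value)
beta = 0.02                   # weight on the pumping cost (assumed value)
nx, nt, T = 50, 400, 1.0
dx, dt = 1.0 / nx, T / nt     # CFL-stable for the v range scanned below

def remaining_mass(v):
    """Upwind advection + central diffusion; c(x, 0) = 1."""
    c = np.ones(nx)
    for _ in range(nt):
        p = np.concatenate(([0.0], c, [c[-1]]))      # ghost cells
        diff = D * (p[:-2] - 2.0 * c + p[2:]) / dx ** 2
        adv = -v * (c - p[:-2]) / dx                 # upwind, valid for v >= 0
        c = c + dt * (diff + adv)
    return float(np.sum(c) * dx)

def J(v):
    return remaining_mass(v) + beta * v ** 2

vs = [0.25 * k for k in range(21)]                   # pumping velocities 0..5
costs = [J(v) for v in vs]
i_best = min(range(len(vs)), key=costs.__getitem__)
print(vs[i_best], round(costs[i_best], 4))
# Pumping too little leaves contaminant behind; pumping too hard is
# penalized by the control cost, so the best v is interior.
```

This brute-force scan only treats constant controls; the paper's optimality system characterizes the genuinely time-varying optimal convective velocity.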

