Optimal Control for Partially Observed Nonlinear Interval Systems

2019 ◽  
Vol 141 (9) ◽  
Author(s):  
T. E. Dabbous

In this paper, we consider the optimal control problem for a class of systems governed by nonlinear time-varying partially observed interval differential equations. The control process is assumed to be governed by a linear time-varying interval differential equation driven by the observed process. Using the fact that the state, observation, and control processes possess lower and upper bounds, we develop sets of (ordinary) differential equations that describe the behavior of the bounds of these processes. With these differential equations, the interval control problem can be transformed into an equivalent ordinary control problem in which neither interval arithmetic nor Moore's extension principle is required. Using variational arguments, we derive the necessary conditions of optimality for the equivalent (ordinary) control problem. Finally, we present some numerical simulations to illustrate the effectiveness of the proposed control scheme.
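The bound-propagation idea described in this abstract can be sketched on a scalar example. The following is a minimal illustration, not the authors' formulation: for x'(t) = a x(t) with an interval coefficient a ∈ [a_lo, a_hi] and a nonnegative state, each bound satisfies its own ordinary ODE, integrated here with forward Euler. All names and the monotonicity assumption (x ≥ 0) are illustrative.

```python
# Minimal sketch: propagating lower/upper bounds of a scalar interval ODE
#   x'(t) = a x(t),  a in [a_lo, a_hi],  x(0) in [x0_lo, x0_hi],
# as two ordinary ODEs integrated with forward Euler.
# Assumes x >= 0 so the extremal rates are attained at the interval endpoints.

def propagate_bounds(a_lo, a_hi, x0_lo, x0_hi, t_end=1.0, dt=1e-3):
    x_lo, x_hi = x0_lo, x0_hi
    for _ in range(int(round(t_end / dt))):
        # For x >= 0, the smallest/largest growth rates come from the
        # endpoint values of the interval coefficient a.
        x_lo += dt * a_lo * x_lo
        x_hi += dt * a_hi * x_hi
    return x_lo, x_hi

# Any trajectory with a in [0.5, 1.0] and x(0) in [1.0, 2.0] stays in [lo, hi].
lo, hi = propagate_bounds(0.5, 1.0, 1.0, 2.0)
```

Once the bounds evolve as ordinary states, standard variational optimality conditions apply to them directly, which is the substitution the abstract describes.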

2012 ◽  
Vol 2012 ◽  
pp. 1-22 ◽  
Author(s):  
Li Chen ◽  
Zhen Wu ◽  
Zhiyong Yu

We discuss a quadratic-criterion optimal control problem for a stochastic linear system with delay in both the state and control variables. This problem leads to a kind of generalized forward-backward stochastic differential equation (FBSDE), with Itô stochastic delay equations as the forward equations and anticipated backward stochastic differential equations as the backward equations. In particular, we present the optimal feedback regulator for the time-delay system via a new type of Riccati equation and also apply it to a population optimal control problem.
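As a rough illustration of Riccati-based feedback, and only for the delay-free special case (the paper's generalized Riccati equations for delayed systems are more involved), the scalar finite-horizon LQ gain can be obtained by integrating the Riccati ODE backward from its terminal condition. Everything below is a standard textbook sketch, not the authors' construction.

```python
# Illustrative, delay-free special case: the finite-horizon LQ feedback
# u = -(b P / r) x, with P solved from the scalar Riccati ODE
#   -P'(t) = 2 a P - (b P)^2 / r + q,   P(T) = 0,
# integrated backward in time by forward Euler.

def riccati_gain(a, b, q, r, T=1.0, n=1000):
    dt = T / n
    P = 0.0                               # terminal condition P(T) = 0
    gains = []
    for _ in range(n):
        gains.append(b * P / r)           # feedback gain at the current (backward) time
        P += dt * (2 * a * P - (b * P) ** 2 / r + q)
    gains.reverse()                       # gains[k] ~ gain at t = k * dt
    return gains

gains = riccati_gain(a=1.0, b=1.0, q=1.0, r=1.0)
```

The gain vanishes at the terminal time (where P(T) = 0) and grows toward t = 0, approaching the algebraic-Riccati value on long horizons.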


2014 ◽  
Vol 2014 ◽  
pp. 1-9
Author(s):  
Youjun Xu ◽  
Shu Zhou

We establish the necessary condition of optimality for an optimal control problem governed by pseudoparabolic differential equations involving monotone graphs. An approximating control process and examples are given.


2014 ◽  
Vol 24 (1) ◽  
pp. 5-25 ◽  
Author(s):  
Asatur Zh. Khurshudyan

Abstract A method is proposed for investigating optimal control problems for linear partial integro-differential equations of convolution type, in which control is exercised through boundary functions and the right-hand side of the equation. Using the real generalized Fourier integral transform, the solution of the control problem is reduced to a minimization procedure for a chosen optimality criterion under equality-type constraints on the desired control function. Optimal control impacts are obtained for two criteria, evaluating their linear momentum and total energy. Necessary and sufficient conditions for the solvability of the control problem are obtained for both criteria. Numerical calculations are carried out, and control functions are plotted for both realizations of the control process.


2008 ◽  
Vol 08 (01) ◽  
pp. 23-33 ◽  
Author(s):  
Laurent Mazliak ◽  
Ivan Nourdin

In this note, we consider an optimal control problem associated with a differential equation driven by a Hölder continuous function g of index β > 1/2. We split our study into two cases. If the coefficient of dgt does not depend on the control process, we prove an existence theorem for a slightly generalized control problem; that is, we obtain a literal extension of the corresponding situation for ordinary differential equations. If the coefficient of dgt depends on the control process, we also prove an existence theorem, but here we are obliged to restrict the set of controls to sufficiently regular functions.


Author(s):  
Tayel Dabbous

In this paper, we consider the adaptive control problem for a class of systems governed by linear time-varying interval differential equations with unknown (interval) parameters. Using the fact that the system output possesses lower and upper bounds, we convert the interval differential equation into two sets of ordinary differential equations that describe the behavior of the lower and upper bounds of the system output. With this approach, interval analysis can be replaced by real analysis, and hence adaptive control of interval systems can be treated as an ordinary adaptive control problem. Using variational arguments, we derive the necessary conditions of optimality for the equivalent adaptive control problem. Finally, we present a numerical example to illustrate the effectiveness of the proposed (interval) control scheme.


Games ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 23
Author(s):  
Alexander Arguchintsev ◽  
Vasilisa Poplevko

This paper deals with an optimal control problem for a linear system of first-order hyperbolic equations whose right-hand side is determined by controlled bilinear ordinary differential equations. These ordinary differential equations are linear with respect to the state functions and have controlled coefficients. Such problems arise in the simulation of some processes of chemical technology and population dynamics. Normally, general optimal control methods are used for these problems because of the bilinear ordinary differential equations. In this paper, the problem is reduced to an optimal control problem for a system of ordinary differential equations. The reduction is based on nonclassical exact increment formulas for the cost functional. This treatment allows a number of efficient optimal control methods to be applied to the problem. An example illustrates the approach.
