Dynamic Programming and a Distributed Parameter Maximum Principle

1968 ◽  
Vol 90 (2) ◽  
pp. 152-156 ◽  
Author(s):  
W. L. Brogan

A proof of a distributed parameter maximum principle is given by using dynamic programming. An example problem involving a nonhomogeneous boundary condition is also treated by using the dynamic programming technique and by extending the definition of the differential operator. It is thus demonstrated that for linear systems the dynamic programming approach is just as powerful as the variational approach originally used to derive the maximum principle.
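For the lumped-parameter case, the dynamic programming route to the maximum principle can be outlined as follows (a formal sketch under standard smoothness assumptions, using generic symbols V, L, f, H rather than the paper's distributed-parameter notation):

```latex
\[
  -V_t(x,t) = \min_{u \in U}\bigl[\, L(x,u) + V_x(x,t)^{\top} f(x,u) \,\bigr]
  \qquad \text{(HJB equation for } V(x,t) = \min \textstyle\int_t^T L\,ds\text{)}
\]
\[
  p(t) := -V_x\bigl(x^*(t),t\bigr), \qquad
  H(x,p,u) := p^{\top} f(x,u) - L(x,u),
\]
\[
  u^*(t) \in \arg\max_{u \in U} H\bigl(x^*(t), p(t), u\bigr), \qquad
  \dot{p} = -H_x\bigl(x^*, p, u^*\bigr).
\]
```

Defining the costate as the negative gradient of the value function turns the HJB minimisation into Hamiltonian maximisation, and differentiating the HJB equation in x along the optimal trajectory yields the costate equation, which together are the statements of the maximum principle.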

Robotica ◽  
1995 ◽  
Vol 13 (2) ◽  
pp. 209-213
Author(s):  
Guy Jumarie

Summary: In the tracking control of manipulators via the sliding scheme, various inaccuracies can cause the definition of the actual sliding surface to involve error terms, which may be either deterministic or stochastic. This paper considers the stochastic case and shows how one can estimate the performance of the system so disturbed. A stochastic Hamilton's principle is applied, combining the Lagrange parameter technique with results from the dynamic programming approach.
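A sliding surface corrupted by stochastic error terms can be illustrated with a toy simulation (a hypothetical sketch; the dynamics, gains, and noise model are assumptions, not the paper's analysis):

```python
import math
import random

# Hypothetical sketch (assumed model, not the paper's): a double-integrator
# tracking error  e'' = u + w  driven by a sliding-mode law on the surface
#   s = e' + lam * e,
# where w is a zero-mean white disturbance that effectively perturbs the
# sliding-surface definition.

def final_error(noise_std, lam=2.0, eta=5.0, dt=1e-3, T=3.0, seed=0):
    rng = random.Random(seed)
    e, de = 1.0, 0.0                        # initial tracking error and rate
    for _ in range(int(T / dt)):
        s = de + lam * e                    # sliding variable
        u = -lam * de - eta * math.tanh(s / 0.05)   # smoothed switching law
        w = noise_std * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        de += u * dt + w                    # Euler-Maruyama step
        e += de * dt
    return abs(e)

clean = final_error(noise_std=0.0)          # nominal sliding behaviour
noisy = sum(final_error(2.0, seed=k) for k in range(20)) / 20  # Monte Carlo
```

With no disturbance the error converges into the boundary layer of the surface; the Monte Carlo average over seeds estimates how the stochastic surface error degrades the nominal performance.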


1968 ◽  
Vol 5 (3) ◽  
pp. 679-692 ◽  
Author(s):  
Richard Morton

Suppose that the state variables x = (x1,…,xn)′ obey ẋ = f(x, u, t), where the dot refers to derivatives with respect to time t, and u ∊ U is a vector of controls. The object is to transfer x to x1 by choosing the controls so that the functional takes on its minimum value J(x), called the Bellman function (although we shall define it in a different way). The Dynamic Programming Principle leads to the maximisation with respect to u of the corresponding Hamiltonian expression, with equality obtained upon maximisation.
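A discrete-time toy version of the Bellman function may help fix ideas (an illustrative sketch with assumed dynamics and stage cost, not Morton's continuous-time construction):

```python
# Illustrative discrete-time sketch (assumed dynamics and cost, not the
# paper's formulation): steer an integer state x toward the target x1 = 0
# using controls u in {-1, 0, +1}, paying |x| + |u| per step.  The Bellman
# function J solves  J(x) = min_u [ |x| + |u| + J(x + u) ],  J(0) = 0,
# computed here by value iteration.

STATES = range(-5, 6)
CONTROLS = (-1, 0, 1)

def step(x, u):
    return max(-5, min(5, x + u))          # dynamics, clipped to the grid

def value_iteration(n_iter=200):
    J = {x: 0.0 for x in STATES}
    for _ in range(n_iter):
        J = {x: 0.0 if x == 0 else
                min(abs(x) + abs(u) + J[step(x, u)] for u in CONTROLS)
             for x in STATES}
    return J

J = value_iteration()
```

The minimising control at each x (here: always move toward 0) attains equality in the dynamic programming inequality, mirroring the "equality is obtained upon maximisation" statement of the abstract.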


Author(s):  
Sameer Kulkarni ◽  
Rajesh Ganesan ◽  
Lance Sherry

To cope with weather and high traffic, the Next Generation Air Transportation System envisions an airspace that is adaptable, flexible, controller friendly, and dynamic. Sector geometries, developed for average traffic patterns, have remained structurally static, with only occasional changes due to limited re-forming of sectors. Dynamic airspace configuration aims to migrate from a rigid to a more flexible airspace structure. Efficient management of airspace capacity is important to ensure safe and systematic operation of the U.S. National Airspace System and maximum benefit to stakeholders. The primary initiative is to strike a balance between airspace capacity and air traffic demand. Imbalances between capacity and demand are resolved by initiatives such as the ground delay program and rerouting, often resulting in systemwide delays. As a proof of concept for the dynamic programming approach to dynamic airspace configuration, this paper addresses static forming of sectors by partitioning airspace according to controller workload. The paper applies the dynamic programming technique to generate sectors in the Fort Worth, Texas, Air Route Traffic Control Center; compares them with current sectors; and lays a foundation for future work. Initial results of the dynamic programming methodology are promising, yielding sector shapes and numbers of sectors comparable to current operations.
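The contiguous-partitioning idea behind workload-based sectorization can be sketched in one dimension (a hypothetical simplification: the paper partitions 2-D center airspace, and the cell workloads below are made-up numbers):

```python
# Hypothetical 1-D sketch of workload-balanced sector partitioning (the paper
# partitions 2-D center airspace; this only illustrates the dynamic
# programming recursion).  Split a row of airspace cells, each carrying a
# controller-workload score, into k contiguous sectors so that the busiest
# sector is as lightly loaded as possible.

def partition(workload, k):
    n = len(workload)
    prefix = [0]
    for w in workload:                      # prefix sums for O(1) range loads
        prefix.append(prefix[-1] + w)
    INF = float("inf")
    # best[j][i]: minimal achievable busiest-sector load covering the first
    # i cells with exactly j sectors.
    best = [[INF] * (n + 1) for _ in range(k + 1)]
    best[0][0] = 0
    for j in range(1, k + 1):
        for i in range(j, n + 1):
            for split in range(j - 1, i):   # last sector = cells split..i-1
                load = prefix[i] - prefix[split]
                best[j][i] = min(best[j][i], max(best[j - 1][split], load))
    return best[k][n]
```

For example, five cells with workloads [1, 2, 3, 4, 5] split into 2 sectors give the optimal partition [1, 2, 3] | [4, 5], with a busiest-sector load of 9.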

