Direct Optimal Control of Nonlinear Systems Via Hamilton’s Law of Varying Action
Based on the direct application of Hamilton’s Law of Varying Action in conjunction with an assumed-time-modes approach for both the generalized coordinates and the input functions, a direct optimal control methodology is developed for the control of nonlinear, time-varying, spatially discrete mechanical systems. The expansion coefficients of admissible time modes for the dependent variables of the dynamic system and those for the inputs constitute the states and controls, respectively. This representation permits explicit a priori integration in time of the energy expressions in Hamilton’s law and leads to algebraic equations of motion for the system, which replace the conventional differential state equations; the customary extremum principles of the calculus of variations involving differential-form constraints are therefore also bypassed. Similarly, the standard integral form of the quadratic-regulator performance measure employed in formulating the optimality problem is transformed into an algebraic performance measure via the assumed-time-modes expansion of the generalized coordinates and the control inputs. The proposed methodology results in an algebraic optimality problem from which a closed-form explicit solution for the nonlinear feedback control law is obtained directly. Simulations of two nonlinear nonconservative systems are included.
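The core reduction described above can be illustrated with a minimal sketch. This is not the paper's formulation: the polynomial time modes t^k, the interval [0, T], the scalar coordinate q(t), and all variable names are illustrative assumptions. The sketch shows how, once q(t) is expanded in assumed time modes with coefficients a_k, an integral quadratic performance measure reduces a priori to an algebraic quadratic form in those coefficients.

```python
import numpy as np

# Illustrative assumptions: scalar coordinate q(t) = sum_k a_k * t^k on [0, T],
# with polynomial time modes phi_k(t) = t^k. The integral performance measure
# J = \int_0^T q(t)^2 dt then reduces to the algebraic form a^T G a, where
# G_ij = \int_0^T t^i t^j dt = T^(i+j+1)/(i+j+1) is computed a priori.

T = 2.0
n_modes = 4
a = np.array([1.0, -0.5, 0.3, 0.1])  # expansion coefficients (the "states")

# Gram matrix of the assumed time modes, integrated explicitly in closed form
i, j = np.meshgrid(np.arange(n_modes), np.arange(n_modes), indexing="ij")
G = T ** (i + j + 1) / (i + j + 1)

# Algebraic performance measure: no time integration needed at solve time
J_algebraic = a @ G @ a

# Check against direct trapezoidal quadrature of the original integral cost
t = np.linspace(0.0, T, 20001)
q = np.polyval(a[::-1], t)          # q(t) = sum_k a_k t^k
dt = t[1] - t[0]
J_numeric = dt * np.sum((q[:-1] ** 2 + q[1:] ** 2) / 2.0)

print(abs(J_algebraic - J_numeric) < 1e-6)  # True: the two costs agree
```

In the full method, the same a priori integration is applied to the energy expressions in Hamilton’s law and to the control-input expansion, so both the constraints and the cost become algebraic in the mode coefficients.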