The viscosity approximation to the Hamilton-Jacobi-Bellman equation in optimal feedback control: Upper bounds for extended domains

2010 ◽ Vol 6 (1) ◽ pp. 161-175
Author(s): Steven Richardson, Song Wang

2019 ◽ Vol 11 (3) ◽ pp. 168781401983320
Author(s): Yan Li, Yuanchun Li

A novel framework for rapid exponential stability and optimal feedback control is investigated and analyzed for a class of nonlinear systems through a variant of continuous Lyapunov functions and the Hamilton–Jacobi–Bellman equation. Rapid exponential stability means that the trajectories of a nonlinear system converge to the equilibrium state in accelerated time. Sufficient conditions for rapid exponential stability are developed using continuous Lyapunov functions for nonlinear systems. Furthermore, based on a variant of continuous Lyapunov functions, rapid exponential stability is guaranteed for controlled nonlinear systems that satisfy certain canonical conditions together with the Hamilton–Jacobi–Bellman equation. It can be seen that the solution of the Hamilton–Jacobi–Bellman equation is a continuous Lyapunov function, and, therefore, rapid exponential stability and optimality are guaranteed simultaneously for nonlinear systems. Finally, the main result of this article is demonstrated in simulation on a nonlinear model of a spacecraft with one axis of symmetry to verify rapid exponential stability. Moreover, for disturbances of the initial point, the rapid exponentially stable controller can reject large-scale disturbances in controlled nonlinear systems. In addition, the proposed optimal feedback controller is applied to trajectory tracking of a 2-degree-of-freedom manipulator, and the numerical results illustrate high efficiency and robustness in real time. The simulation results demonstrate the use of the rapid exponential stability and optimal feedback approach for real-time nonlinear systems.
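The link between the Hamilton–Jacobi–Bellman equation and a Lyapunov function described above can be sketched in a standard schematic form. The notation below ($V$, $f$, $g$, $q$, $R$) is an assumption for illustration, not the authors' exact formulation:

```latex
Consider $\dot{x} = f(x) + g(x)u$ with cost
$J = \int_{0}^{\infty} \big( q(x) + u^{\top} R u \big)\,dt$, $q(x) > 0$, $R \succ 0$.
The HJB equation for the value function $V(x)$ is
\[
0 = \min_{u}\Big[\, q(x) + u^{\top} R u
      + \nabla V(x)^{\top}\big(f(x) + g(x)u\big) \Big],
\]
with minimizing feedback
\[
u^{*}(x) = -\tfrac{1}{2}\, R^{-1} g(x)^{\top} \nabla V(x).
\]
Along closed-loop trajectories,
\[
\dot{V} = \nabla V^{\top}\big(f + g u^{*}\big)
        = -\,q(x) - (u^{*})^{\top} R\, u^{*} \;<\; 0 \quad (x \neq 0),
\]
so the HJB solution $V$ decreases along trajectories and serves as a
Lyapunov function for the controlled system.
```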


Author(s):  
Ross P. Anderson ◽  
Dejan Milutinović

Motivated by applications in which a nonholonomic robotic vehicle should sequentially hit a series of waypoints in the presence of stochastic drift, we formulate a new version of the Dubins vehicle traveling salesperson problem. In our approach, we first compute the minimum expected time feedback control to hit one waypoint based on the Hamilton-Jacobi-Bellman equation. Next, minimum expected times associated with the control are used to construct a traveling salesperson problem based on a waypoint hitting angle discretization. We provide numerical results illustrating our solution and analyze how the stochastic drift affects the solution.
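The two-stage approach above can be sketched in Python. The expected-time function here is a hypothetical stand-in for the HJB-derived minimum expected hitting times (Euclidean distance plus a turn penalty), not the paper's computed solution, and the waypoints and hitting-angle grid are illustrative assumptions:

```python
import itertools
import math

# Hypothetical data: planar waypoints and a coarse hitting-angle grid.
waypoints = [(0.0, 0.0), (4.0, 1.0), (1.0, 5.0), (5.0, 5.0)]
angles = [k * math.pi / 2 for k in range(4)]  # 4 discretized hitting angles


def expected_time(p, a_in, q, a_out):
    """Stand-in for the HJB-derived minimum expected time to travel from
    waypoint p (arriving at heading a_in) to waypoint q (hit at heading
    a_out): distance plus a turn penalty, purely for illustration."""
    dist = math.dist(p, q)
    # Smallest signed angle between headings, wrapped to [-pi, pi].
    turn = abs(math.atan2(math.sin(a_out - a_in), math.cos(a_out - a_in)))
    return dist + 0.5 * turn


def best_tour(start=0):
    """Brute-force the traveling salesperson problem over both the
    waypoint order and the hitting-angle assignment at each waypoint."""
    rest = [i for i in range(len(waypoints)) if i != start]
    best_cost, best_order = math.inf, None
    for order in itertools.permutations(rest):
        tour = (start,) + order
        for assign in itertools.product(angles, repeat=len(tour)):
            cost = sum(
                expected_time(waypoints[tour[i]], assign[i],
                              waypoints[tour[i + 1]], assign[i + 1])
                for i in range(len(tour) - 1)
            )
            if cost < best_cost:
                best_cost, best_order = cost, tour
    return best_cost, best_order


cost, tour = best_tour()
print(tour, round(cost, 2))
```

In the paper's setting the pairwise times would come from solving the stochastic minimum-expected-time HJB problem; only the lookup-then-TSP structure is reflected here, with exhaustive search standing in for a proper TSP solver.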


Author(s):  
Jongeun Choi ◽  
Dejan Milutinović

This tutorial paper presents the expositions of stochastic optimal feedback control theory and Bayesian spatiotemporal models in the context of robotics applications. The presented material is self-contained so that readers can grasp the most important concepts and acquire knowledge needed to jump-start their research. To facilitate this, we provide a series of educational examples from robotics and mobile sensor networks.

