Operator-Based Nonlinear Control with Unknown Disturbance Rejection

Author(s): Mengyang LI, Mingcong DENG
2005, Vol 38 (1), pp. 84-89
Author(s): Rafael Becerril-Arreola, Amir G. Aghdam
1995, Vol 34 (7), pp. 2383-2392
Author(s): Maria C. Colantonio, Alfredo C. Desages, Jose A. Romagnoli, Ahmet Palazoglu
2020, Vol 10 (16), pp. 5564
Author(s): Dada Hu, Zhongcai Pei, Zhiyong Tang

In this paper, methods are presented for designing a quadrotor attitude control system with disturbance rejection capability, in which only one parameter needs to be tuned for each axis. The core differences between quadrotor platforms are extracted as critical gain parameters (CGPs). Reinforcement learning (RL) is introduced to automatically optimize the control law for quadrotors with different CGPs, and the CGPs are used to extend the RL state list. A deterministic policy gradient (DPG) algorithm based on a model-free actor-critic structure is used as the learning algorithm. Mirror sampling and reward shaping methods are designed to eliminate the steady-state errors of the RL controller and accelerate the training process. Active disturbance rejection control (ADRC) is applied to reject unknown external disturbances. A set of extended state observers (ESOs) is designed to estimate the total disturbance on the roll and pitch axes. The covariance matrix adaptation evolution strategy (CMA-ES) is used to automatically tune the ESO parameters and improve the final performance. The complete controller is tested on an F550 quadrotor in both simulation and real flight environments. The quadrotor hovers and maneuvers stably and accurately, even under severe disturbance.
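The ESO idea in the abstract can be illustrated with a minimal sketch. This assumes a double-integrator model for one axis, theta_ddot = b0*u + d, where d lumps together the unknown "total disturbance"; the bandwidth parameterization of the gains is a standard ADRC convention, and the values of omega_o, b0, dt, and the toy disturbance below are illustrative, not the paper's CMA-ES-tuned parameters.

```python
# Minimal linear extended state observer (ESO) sketch for one axis of a
# quadrotor, assuming the model theta_ddot = b0*u + d. The third observer
# state z3 estimates the unknown total disturbance d. Gains use the common
# bandwidth parameterization, placing all ESO poles at -omega_o.

def make_eso(omega_o, b0, dt):
    b1, b2, b3 = 3 * omega_o, 3 * omega_o**2, omega_o**3
    z1 = z2 = z3 = 0.0  # estimates of angle, angular rate, total disturbance

    def step(y, u):
        nonlocal z1, z2, z3
        e = y - z1                       # output estimation error
        z1 += dt * (z2 + b1 * e)
        z2 += dt * (z3 + b2 * e + b0 * u)
        z3 += dt * (b3 * e)              # z3 tracks the total disturbance
        return z1, z2, z3

    return step

# Toy simulation: plant with a constant unknown disturbance d = 2.0 and
# zero control input, integrated with forward Euler for 2 s.
dt, b0, d = 0.001, 1.0, 2.0
eso = make_eso(omega_o=20.0, b0=b0, dt=dt)
x, v = 0.0, 0.0
for _ in range(2000):
    z1, z2, z3 = eso(x, 0.0)             # u = 0 for clarity
    v += dt * (b0 * 0.0 + d)             # plant: theta_ddot = b0*u + d
    x += dt * v
```

Because the error dynamics have all poles at -omega_o, z3 converges to the constant disturbance with no steady-state error; in a full ADRC loop this estimate would be subtracted from the control signal to cancel the disturbance.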


2020, Vol 67 (8), pp. 6894-6903
Author(s): Kai Zhao, Jinhui Zhang, Dailiang Ma, Yuanqing Xia
