IDA-PBC for a class of underactuated mechanical systems with application to a rotary inverted pendulum

Author(s): Mutaz Ryalat, Dina Shona Laila

Author(s): Dong Eui Chang, Soo Jeon

Conservation of momentum is often used in controlling underactuated mechanical systems with symmetry. If a symmetry-breaking force is applied to the system, then in general the momentum is no longer conserved. However, there exist forces linear in velocity, such as damping forces, that break the symmetry but induce a new conserved quantity in place of the original momentum map. This paper formalizes such a conserved quantity, constructed by combining the time integral of a general damping force with the original momentum map associated with the symmetry. From the perspective of stability theory, the new conserved quantity implies that the corresponding variable exhibits the self-recovery phenomenon, i.e., it is globally attracted back to its initial condition. We discover that what is fundamental to damping-induced self-recovery is not the positivity of the damping coefficient but certain properties of the time integral of the damping force. The self-recovery effect and the theoretical findings are demonstrated in simulations of a two-link planar manipulator and a torque-controlled inverted pendulum on a passive cart. The results in this paper will be useful in designing and controlling underactuated mechanical systems.
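As a minimal sketch of the construction described in the abstract (the notation here is assumed for illustration, not taken from the paper): let q^1 be a cyclic coordinate of a Lagrangian L(q, \dot{q}), so that the momentum p_1 = \partial L / \partial \dot{q}^1 is conserved in the unforced system, and let a damping force with constant coefficient k act along q^1. Then

    \dot{p}_1 = -k\,\dot{q}^1
    \quad\Longrightarrow\quad
    \frac{d}{dt}\bigl( p_1 + k\,q^1 \bigr) = 0,

so C = p_1 + k q^1 is a damping-induced conserved quantity: whenever p_1 returns to its initial value (for instance, when the system comes back to rest), q^1 is driven back to its initial condition, which is the self-recovery phenomenon. The paper's general statement replaces the term k q^1 with the time integral of a general damping force.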


Author(s): Afef Hfaiedh, Ahmed Chemori, Afef Abdelkrim

In this paper, the control problem of class-I underactuated mechanical systems (UMSs) is addressed. The considered class includes nonlinear UMSs with two degrees of freedom and a single control input. First, we propose the design of a robust integral of the sign of the error (RISE) control law adapted to this class. Based on a change of coordinates, the dynamics are transformed into strict-feedback (SF) form. A Lyapunov-based technique is then employed to prove the asymptotic stability of the resulting closed-loop system. Numerical simulation results show the robustness and performance of the original RISE controller with respect to parametric uncertainties and disturbance rejection. A comparative study with a conventional sliding mode controller reveals a significant robustness improvement with the proposed RISE controller. In real-time experiments, however, amplification of measurement noise is a major problem: it affects the behaviour of the motor and degrades the performance of the system. To deal with this issue, we propose estimating the velocity with the robust Levant differentiator instead of a numerical derivative. Real-time experiments were performed on an inertia wheel inverted pendulum testbed to demonstrate the relevance of the proposed observer-based RISE control scheme. The obtained real-time experimental results and evaluation indices clearly show better performance of the proposed observer-based RISE approach compared with the sliding mode and original RISE controllers.
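For reference, below is a minimal sketch of a first-order Levant (robust exact) differentiator of the kind used for velocity estimation; the Euler discretization, function name, and gains are illustrative assumptions, not the paper's implementation:

    import numpy as np

    def levant_differentiator(f, dt, lam=40.0, alpha=50.0):
        # First-order Levant robust exact differentiator, Euler-discretized.
        # f: 1-D array of noisy position samples; dt: sampling period.
        # lam and alpha are illustrative gains tuned for the example below;
        # they must dominate the Lipschitz constant of the true derivative.
        x = float(f[0])   # internal tracker of the measured signal
        u1 = 0.0          # integral of the discontinuous term
        v = np.zeros(len(f))
        for i, fi in enumerate(f):
            e = x - fi                                   # tracking error
            u = u1 - lam * np.sqrt(abs(e)) * np.sign(e)  # derivative estimate
            v[i] = u
            x += dt * u                                  # propagate tracker
            u1 += dt * (-alpha * np.sign(e))             # integrate sign term
        return v

    # Example: recover the velocity of a noisy 1 kHz-sampled sine wave.
    t = np.arange(0.0, 2.0, 1e-3)
    pos = np.sin(2 * np.pi * t) + 1e-3 * np.random.randn(t.size)
    vel = levant_differentiator(pos, dt=1e-3)  # ~ 2*pi*cos(2*pi*t)

Unlike a plain finite difference, the sign-based structure keeps the noise amplification bounded, which is why such a differentiator suits the noisy encoder measurements described above.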


2021, Vol. 54 (3-4), pp. 417-428
Author(s): Yanyan Dai, KiDong Lee, SukGyu Lee

Rotary inverted pendulum systems are a standard benchmark for nonlinear control. Without a deep understanding of control theory, it is difficult to control a rotary inverted pendulum platform using classic control engineering models, as shown in section 2.1. Therefore, this paper controls the platform without classic control theory, by training and testing a reinforcement learning algorithm. Reinforcement learning (RL) has produced many recent achievements, but there is a lack of research on quickly testing high-frequency RL algorithms in a real hardware environment. In this paper, we propose a real-time hardware-in-the-loop (HIL) control system to train and test a deep reinforcement learning algorithm, from simulation to real hardware implementation. The agent is implemented with a Double Deep Q-Network (DDQN) with prioritized experience replay, which requires no deep understanding of classical control engineering. For the real experiment, we define 21 actions to swing up the rotary inverted pendulum and balance it with smooth pendulum motion. Compared with the Deep Q-Network (DQN), the DDQN with prioritized experience replay reduces the overestimation of Q-values and decreases the training time. Finally, the paper presents experimental results comparing classic control theory with different reinforcement learning algorithms.
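As a minimal sketch of the two ingredients named above (the Double DQN target and prioritized sampling), with all function and variable names assumed for illustration rather than taken from the paper's implementation:

    import numpy as np

    def ddqn_targets(q_online, q_target, s_next, r, done, gamma=0.99):
        # Double DQN: the online network selects the greedy action and the
        # target network evaluates it, which reduces the Q-value
        # overestimation of plain DQN.
        a_star = np.argmax(q_online(s_next), axis=1)           # action selection
        q_eval = q_target(s_next)[np.arange(len(r)), a_star]   # action evaluation
        return r + gamma * (1.0 - done) * q_eval

    def per_sample(td_errors, batch_size, alpha=0.6, beta=0.4, eps=1e-6):
        # Prioritized experience replay: draw transitions with probability
        # proportional to |TD error|^alpha and return normalized
        # importance-sampling weights to correct the induced bias.
        p = (np.abs(td_errors) + eps) ** alpha
        probs = p / p.sum()
        idx = np.random.choice(len(p), size=batch_size, p=probs)
        w = (len(p) * probs[idx]) ** (-beta)
        return idx, w / w.max()

Here q_online and q_target stand for the two Q-networks evaluated on a batch of next states, and done is 1 for terminal transitions and 0 otherwise.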

