Adaptive PID Control of a Nonlinear Servomechanism Using Recurrent Neural Networks

Author(s):  
Reza Jafari ◽  
Rached Dhaouadi

Author(s):  
Jua-Liang Chen ◽  
Zong-Lin Chen

The objective of this paper is to design a suitable controller for a piezoelectric micro-precision positioner. The positioner measures 150 mm × 150 mm × 10 mm and provides three degrees of freedom (X, Y, and θZ). Piezoelectric actuators drive the positioner, which is built from flexure structures; simple levers are added to the structures to extend the short stroke of the PZT actuators. In this research, a diagonal recurrent neural network (DRNN) controller is added to the system to reduce the effects caused by hysteresis, an inaccurate system model, and phase lag, and to save the time otherwise spent tuning the PID control gains. Experimental results show that the positioning errors for the X-, Y-, and θ-axes in the continuous stepping test are less than 20 nm and 0.15 μrad. In the ramp tracking test, the tracking errors are less than 30 nm and 0.3 μrad, and in the circular tracking test the tracking error is less than 55 nm for both the X- and Y-axes.
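The diagonal recurrent network named in the abstract can be sketched as below. This is a minimal illustration only, not the authors' implementation: the layer sizes, weight ranges, and the idea of adding the DRNN output to a PID command as a compensation term are assumptions for the sketch. The defining feature is that the recurrent weight matrix is diagonal, so each hidden unit feeds back only to itself.

```python
import math
import random

class DiagonalRNN:
    """Diagonal recurrent neural network: each hidden unit has a single
    self-recurrent weight, so the recurrent weight matrix is diagonal."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = random.Random(seed)
        self.w_in = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)]
                     for _ in range(n_hidden)]
        self.w_rec = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]  # diagonal terms
        self.w_out = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
        self.h = [0.0] * n_hidden  # hidden state, carried across time steps

    def step(self, x):
        # h_j(k) = tanh(w_rec_j * h_j(k-1) + sum_i w_in_ji * x_i(k))
        self.h = [math.tanh(self.w_rec[j] * self.h[j] +
                            sum(w * xi for w, xi in zip(self.w_in[j], x)))
                  for j in range(len(self.h))]
        return sum(w * hj for w, hj in zip(self.w_out, self.h))

# Hypothetical use: the DRNN output acts as a correction term added to a
# PID command, compensating hysteresis and phase lag.
net = DiagonalRNN(n_in=2, n_hidden=4)
u_comp = net.step([0.1, -0.05])
```

Because each hidden activation passes through tanh, the compensation term is bounded by the sum of the output-weight magnitudes, which keeps the correction small relative to the main PID command.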


2018 ◽  
Vol 33 (1) ◽  
pp. 74-91 ◽  
Author(s):  
Claudio Rosales ◽  
Carlos Miguel Soria ◽  
Francisco G. Rossomando

2012 ◽  
Vol 150 ◽  
pp. 174-177 ◽  
Author(s):  
Yan Hong Zhang ◽  
De An Zhao ◽  
Jian Sheng Zhang

As a branch of intelligent control, neural networks are applied in control ever more widely. This paper studies the single-neuron adaptive PID control algorithm. The algorithm is programmed in MATLAB, a common control object is simulated, and the effect of the single-neuron adaptive PID parameters on control performance is analyzed. Experimental results show that single-neuron adaptive PID control has clear advantages over conventional PID control.
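The single-neuron adaptive PID scheme studied above can be sketched as follows. The plant model, gains, and learning rate here are illustrative assumptions, not values from the paper: the neuron's three inputs are the standard incremental-PID terms, its normalized weights play the roles of Kp, Ki, and Kd, and the weights adapt online with a supervised Hebbian rule.

```python
# Single-neuron adaptive PID: the neuron's inputs are the incremental-PID
# terms e(k)-e(k-1), e(k), and e(k)-2e(k-1)+e(k-2); the normalized weights
# act as Kp, Ki, Kd and are adapted online with a supervised Hebbian rule.
def single_neuron_pid(setpoint=1.0, steps=500, K=0.3, eta=0.01):
    w = [0.3, 0.3, 0.3]            # initial weights (assumed)
    y, u = 0.0, 0.0                # plant output and control input
    e_prev, e_prev2 = 0.0, 0.0
    for _ in range(steps):
        e = setpoint - y
        x = [e - e_prev, e, e - 2.0 * e_prev + e_prev2]
        s = sum(abs(wi) for wi in w) or 1.0
        wn = [wi / s for wi in w]  # weight normalization keeps gains bounded
        u += K * sum(wi * xi for wi, xi in zip(wn, x))
        # supervised Hebbian update: dw_i = eta * e(k) * u(k) * x_i(k)
        w = [wi + eta * e * u * xi for wi, xi in zip(w, x)]
        e_prev2, e_prev = e_prev, e
        # illustrative first-order plant: y(k+1) = 0.9 y(k) + 0.1 u(k)
        y = 0.9 * y + 0.1 * u
    return y

y_final = single_neuron_pid()
```

The incremental form gives the controller built-in integral action, so on this stable first-order plant the output settles at the setpoint while the weight updates fade as the error vanishes.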


2016 ◽  
Vol 84 ◽  
pp. 80-90 ◽  
Author(s):  
Miroslav B. Milovanović ◽  
Dragan S. Antić ◽  
Marko T. Milojković ◽  
Saša S. Nikolić ◽  
Staniša Lj. Perić ◽  
...  

2020 ◽  
Author(s):  
Dean Sumner ◽  
Jiazhen He ◽  
Amol Thakkar ◽  
Ola Engkvist ◽  
Esben Jannik Bjerrum

<p>SMILES randomization, a form of data augmentation, has previously been shown to increase the performance of deep learning models compared to non-augmented baselines. Here, we propose a novel data augmentation method we call “Levenshtein augmentation”, which considers local SMILES sub-sequence similarity between reactants and their respective products when creating training pairs. The performance of Levenshtein augmentation was tested using two state-of-the-art models: a transformer and a sequence-to-sequence recurrent neural network with attention. Levenshtein augmentation increased performance over both non-augmented data and data augmented with conventional SMILES randomization when used for training the baseline models. Furthermore, Levenshtein augmentation seemingly results in what we define as <i>attentional gain </i>– an enhancement in the pattern recognition capabilities of the underlying network with respect to molecular motifs.</p>
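The edit-distance measure the augmentation is named after can be computed with the standard dynamic-programming recurrence, sketched below; how the distance is used to select reactant–product training pairs is specific to the paper and not reproduced here.

```python
def levenshtein(a: str, b: str) -> int:
    """Standard Levenshtein (edit) distance between two strings,
    e.g. two SMILES sequences, via dynamic programming."""
    prev = list(range(len(b) + 1))          # distances from empty prefix of a
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]

# Ethanol ("CCO") and ethylamine ("CCN") SMILES differ by one substitution.
d = levenshtein("CCO", "CCN")  # -> 1
```

The row-by-row formulation keeps memory at O(len(b)) rather than the full O(len(a)·len(b)) table, which matters when scanning many reactant–product pairs.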

