Single-Neuron Adaptive Hysteresis Compensation of Piezoelectric Actuator Based on Hebb Learning Rules

Micromachines ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 84 ◽  
Author(s):  
Yanding Qin ◽  
Heng Duan

This paper presents an adaptive hysteresis compensation approach for a piezoelectric actuator (PEA) based on single-neuron adaptive control. For a given desired trajectory, the control input to the PEA is dynamically adjusted by the error between the actual and desired trajectories using Hebb learning rules. A single neuron with self-learning and self-adaptive capabilities is a non-linear processing unit well suited to time-variant systems. Under single-neuron control, compensation of the PEA’s hysteresis can be regarded as a process of transmitting biological neuron information: the error between the actual and desired trajectories drives the adjustment of the control input through the neuron’s weight-update law. In addition, the paper combines Hebb learning with supervised learning, using the tracking error as a teacher signal so that the controller responds quickly to control signals. The weights of the single-neuron controller are continuously adjusted online to improve the control performance of the system. Experimental results show that the proposed single-neuron adaptive hysteresis compensation method tracks both continuous and discontinuous trajectories well, and that the single-neuron adaptive controller exhibits better adaptation and self-learning against the rate dependence of the PEA’s hysteresis.
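
As a sketch of the scheme the abstract describes, the toy simulation below uses a supervised Hebb rule (weight change proportional to error, control input, and neuron input) with online weight normalization. The first-order plant standing in for the PEA, the gain K, and the learning rates are illustrative assumptions, not the paper's identified dynamics:

```python
import numpy as np

# Toy first-order plant standing in for the PEA: y(k+1) = a*y(k) + b*u(k).
# Plant parameters, gain K, and learning rates eta are assumptions.
def simulate(ref, a=0.8, b=0.2, K=0.3, eta=np.array([0.6, 0.3, 0.1])):
    w = np.ones(3) / 3.0                    # single-neuron weights
    y, u, e_prev, e_prev2 = 0.0, 0.0, 0.0, 0.0
    out = []
    for r in ref:
        e = r - y
        # neuron inputs: error and its first/second differences
        x = np.array([e, e - e_prev, e - 2.0 * e_prev + e_prev2])
        # supervised Hebb rule: dw_i = eta_i * e * u * x_i (error as teacher)
        w = w + eta * e * u * x
        wn = w / (np.abs(w).sum() + 1e-12)  # normalized weights for control
        u = u + K * wn.dot(x)               # incremental control law
        y = a * y + b * u                   # plant response
        e_prev2, e_prev = e_prev, e
        out.append(y)
    return np.array(out)

y = simulate(np.ones(200))                  # step trajectory
```

Because the neuron's three inputs are the error and its first and second differences, the normalized weights effectively blend P, I, and D action online.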

2021 ◽  
Author(s):  
Carlos Wert-Carvajal ◽  
Melissa Reneaux ◽  
Tatjana Tchumatchenko ◽  
Claudia Clopath

Dopamine and serotonin are important modulators of synaptic plasticity, and their action has been linked to our ability to learn positive or negative outcomes, i.e., valence learning. In the hippocampus, both neuromodulators affect long-term synaptic plasticity but play different roles in the encoding of uncertainty or predicted reward. Here, we examine the differential roles of these modulators on learning speed and cognitive flexibility in a navigational model. We compare two reward-modulated spike-timing-dependent plasticity (R-STDP) learning rules to describe the action of these neuromodulators. Our results show that the interplay of dopamine (DA) and serotonin (5-HT) improves overall learning performance and can explain experimentally reported differences in spatial task performance. Furthermore, this system allows us to make predictions regarding spatial reversal learning.
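
A minimal sketch of a generic R-STDP update, assuming an exponentially decaying eligibility trace gated by a scalar reward. The spike statistics, time constants, and constant reward below are illustrative assumptions, not the paper's DA/5-HT rules:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
dt, tau_e, tau_plus = 1.0, 200.0, 20.0   # time step and constants (assumed)
w, elig, pre_trace = 0.5, 0.0, 0.0       # weight, eligibility, pre-spike trace
lr = 0.01
ws = []
for t in range(T):
    pre = rng.random() < 0.05            # Poisson-like presynaptic spikes
    post = rng.random() < 0.05           # Poisson-like postsynaptic spikes
    pre_trace += -pre_trace * dt / tau_plus + (1.0 if pre else 0.0)
    # a pre-before-post pairing adds to the eligibility trace (LTP side only)
    elig += -elig * dt / tau_e + (pre_trace if post else 0.0)
    reward = 1.0                         # constant positive reward (assumed)
    # the reward signal converts eligibility into an actual weight change
    w = float(np.clip(w + lr * reward * elig * dt, 0.0, 1.0))
    ws.append(w)
```

The key R-STDP idea is visible in the last update: the STDP pairing alone only builds eligibility, and the weight moves only when a reward signal arrives to gate it.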


Processes ◽  
2020 ◽  
Vol 8 (1) ◽  
pp. 50
Author(s):  
Song Xu ◽  
Seiji Hashimoto ◽  
YuQi Jiang ◽  
Katsutoshi Izaki ◽  
Takeshi Kihara ◽  
...  

Artificial neural networks (ANNs), which have excellent self-learning performance, have been applied in a wide range of areas, such as target detection and industrial control. In this paper, a reference-model-based ANN controller with integral-proportional-derivative (I-PD) compensation is proposed for temperature control systems. To improve the ANN's self-learning efficiency, a reference model is introduced to provide the teaching signal for the ANN. System simulations were carried out in the MATLAB/Simulink environment, and experiments were carried out on a digital-signal-processor (DSP)-based experimental platform. The simulation and experimental results were compared with those of a conventional I-PD control system, verifying the effectiveness of the proposed method.
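
The structure described, I-PD feedback plus a learning feedforward term taught by a reference model, can be sketched as follows. The slow first-order "thermal" plant, the reference model, the single adaptive weight standing in for the ANN, and all gains are illustrative assumptions:

```python
import numpy as np

def step_plant(y, u, a=0.95, b=0.05):   # slow first-order thermal plant (assumed)
    return a * y + b * u

def step_refmodel(ym, r, am=0.9):       # desired closed-loop response (assumed)
    return am * ym + (1.0 - am) * r

Ki, Kp, Kd = 0.5, 2.0, 0.5              # I-PD gains (assumed)
w, eta = 0.0, 0.05                      # adaptive feedforward weight and rate
y = ym = ui = y_prev = 0.0
r = 1.0                                 # temperature setpoint
ys = []
for k in range(400):
    ym = step_refmodel(ym, r)
    e = r - y
    ui += Ki * e                        # the I term acts on the error
    u = ui - Kp * y - Kd * (y - y_prev)  # P and D act on the output (I-PD)
    u += w * r                          # learned feedforward compensation
    # teaching signal: deviation from the reference-model output
    w += eta * (ym - y) * r
    y_prev = y
    y = step_plant(y, u)
    ys.append(y)
```

The I-PD arrangement avoids derivative and proportional kick on setpoint changes, while the reference-model error, rather than the raw tracking error, trains the feedforward term.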


2004 ◽  
Vol 14 (01) ◽  
pp. 1-8 ◽  
Author(s):  
RALF MÖLLER

The paper reviews single-neuron learning rules for minor component analysis and suggests a novel minor component learning rule. In this rule, the weight vector length is self-stabilizing, i.e., it moves towards unit length in each learning step. In simulations with low- and medium-dimensional data, the performance of the novel learning rule is compared with that of previously suggested rules.
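
For context, a standard baseline for minor component extraction is an anti-Hebbian update with explicit renormalization, sketched below; this is not the paper's self-stabilizing rule, whose weight length converges without an explicit normalization step. The data dimensions and step size are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
# data with a clear minor direction: small variance on the last axis
X = rng.normal(size=(5000, 3)) * np.array([3.0, 2.0, 0.3])
w = rng.normal(size=3)
w /= np.linalg.norm(w)
gamma = 0.001
for x in X:
    y = w.dot(x)
    w -= gamma * y * x            # anti-Hebbian step: shrink high-variance directions
    w /= np.linalg.norm(w)        # explicit renormalization to unit length
# w should align with the smallest-variance axis (index 2), up to sign
```

In expectation each step multiplies w by (I - gamma*C) and renormalizes, which is power iteration toward the eigenvector of the smallest eigenvalue of the covariance C, i.e., the minor component.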


1996 ◽  
Vol 07 (06) ◽  
pp. 671-687 ◽  
Author(s):  
AAPO HYVÄRINEN ◽  
ERKKI OJA

Recently, several neural algorithms have been introduced for Independent Component Analysis. Here we approach the problem from the point of view of a single neuron. First, simple Hebbian-like learning rules are introduced for estimating one of the independent components from sphered data. Some of the learning rules can be used to estimate an independent component which has a negative kurtosis, and the others estimate a component of positive kurtosis. Next, a two-unit system is introduced to estimate an independent component of any kurtosis. The results are then generalized to estimate independent components from non-sphered (raw) mixtures. To separate several independent components, a system of several neurons with linear negative feedback is used. The convergence of the learning rules is rigorously proven without any unnecessary hypotheses on the distributions of the independent components.
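
One of the simple Hebbian-like rules for sphered data can be sketched as stochastic ascent of the fourth moment under a unit-norm constraint, which converges to a positive-kurtosis component; the mixing matrix, sources, and step size below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
s1 = rng.laplace(size=n)                 # positive-kurtosis source
s2 = rng.uniform(-1.0, 1.0, size=n)      # negative-kurtosis source
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # mixing matrix (assumed)
X = A @ np.vstack([s1, s2])
# sphere (whiten) the mixtures
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X
w = np.ones(2) / np.sqrt(2.0)            # initial unit-length weight
eta = 1e-4
for z in Z.T:
    y = w.dot(z)
    w += eta * z * y ** 3                # Hebbian-like rule: ascends kurtosis
    w /= np.linalg.norm(w)               # keep the weight on the unit sphere
y_rec = w @ Z
c = abs(np.corrcoef(y_rec, s1)[0, 1])    # agreement with the heavy-tailed source
```

Running the same rule with a negated learning rate would instead descend in kurtosis and pick out the negative-kurtosis (uniform) component.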


2021 ◽  
Author(s):  
Dang Xuan Ba ◽  
Joonbum Bae

Humanoid robots are complicated systems in both hardware and software design. Furthermore, such robots normally work in unstructured environments in which unpredictable disturbances can degrade the control performance of the whole system. As a result, simple yet effective controllers are favored in the low-level layers. Gain-learning algorithms applied to conventional control frameworks, such as proportional-integral-derivative (PID), sliding-mode, and backstepping controllers, can be a reasonable solution: the integrated adaptation mechanism automatically tunes the control gains toward an optimal control criterion in both the transient and steady-state phases. The learning rules can be realized using analytical nonlinear functions. Their effectiveness and feasibility are carefully discussed through theoretical proofs and experimental validation.
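
A minimal sketch of gain learning on a PID controller, using tanh as the analytical nonlinear function that bounds each gain update. The damped double-integrator plant, the specific adaptation laws, and all constants are illustrative assumptions, not the controllers derived in the paper:

```python
import numpy as np

dt = 0.01
Kp, Ki, Kd = 1.0, 0.0, 0.0               # initial gains
gp, gi, gd = 2.0, 0.5, 0.1               # gain-learning rates (assumed)
x = v = ei = e_prev = 0.0
r = 1.0                                  # setpoint
xs = []
for k in range(5000):
    e = r - x
    ei += e * dt
    ed = (e - e_prev) / dt
    u = Kp * e + Ki * ei + Kd * ed       # PID law with the current gains
    # bounded gain-learning rules via an analytical nonlinear function (tanh);
    # these specific update laws are an illustrative assumption
    Kp += gp * np.tanh(e * e) * dt
    Ki += gi * np.tanh(e * ei) * dt
    Kd = max(0.0, Kd + gd * np.tanh(e * ed) * dt)
    v += (u - 0.5 * v) * dt              # damped double-integrator plant
    x += v * dt
    e_prev = e
    xs.append(x)
```

The tanh saturation keeps each gain increment bounded regardless of how large the instantaneous error signals become, which is what makes such learning rules practical in a low-level control layer.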

