Stochastic Memristive Quaternion-Valued Neural Networks with Time Delays: An Analysis on Mean Square Exponential Input-to-State Stability

Mathematics ◽  
2020 ◽  
Vol 8 (5) ◽  
pp. 815 ◽  
Author(s):  
Usa Humphries ◽  
Grienggrai Rajchakit ◽  
Pramet Kaewmesri ◽  
Pharunyou Chanthorn ◽  
Ramalingam Sriraman ◽  
...  

In this paper, we study the mean-square exponential input-to-state stability (exp-ISS) problem for a new class of neural network (NN) models, namely continuous-time stochastic memristive quaternion-valued neural networks (SMQVNNs) with time delays. Firstly, in order to overcome the difficulties posed by non-commutative quaternion multiplication, we decompose the original SMQVNNs into four real-valued models. Secondly, by constructing a suitable Lyapunov functional and applying Itô's formula, Dynkin's formula, and inequality techniques, we prove that the considered system model is mean-square exp-ISS. In comparison with conventional research on stability, we derive a new mean-square exp-ISS criterion for SMQVNNs. The results obtained in this paper generalize previously known results in the complex and real fields. Finally, a numerical example is provided to show the effectiveness of the obtained theoretical results.

2019 ◽  
Vol 2019 ◽  
pp. 1-15
Author(s):  
Tianqing Yang ◽  
Zuoliang Xiong ◽  
Cuiping Yang

This paper is concerned with the mean-square exponential input-to-state stability problem for a class of stochastic Cohen-Grossberg neural networks. Different from prior works, neutral terms and mixed delays are discussed in our system. By employing the Lyapunov-Krasovskii functional method, Itô formula, Dynkin formula, and stochastic analysis theory, we obtain some novel sufficient conditions to ensure that the addressed system is mean-square exponentially input-to-state stable. Moreover, two numerical examples and their simulations are given to illustrate the correctness of the theoretical results.


2020 ◽  
Vol 25 (1) ◽  
Author(s):  
Ruoyu Wei ◽  
Jinde Cao

This paper extends memristive neural networks (MNNs) to the quaternion field, establishing a new class of neural networks named quaternion-valued memristive neural networks (QVMNNs), and investigates the problem of drive-response global synchronization for this type of network. Two cases are taken into consideration: one with the conventional differential inclusion assumption, the other without. Criteria for the global synchronization of these two cases are obtained respectively by appropriately choosing the Lyapunov functional and applying some inequality techniques. Finally, corresponding simulation examples are presented to demonstrate the correctness of the derived results.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jingwei Liu ◽  
Peixuan Li ◽  
Xuehan Tang ◽  
Jiaxin Li ◽  
Jiaming Chen

Abstract: Artificial neural networks (ANNs), which include deep learning neural networks (DNNs), have problems such as the local minimum problem of the backpropagation neural network (BPNN), the instability problem of the radial basis function neural network (RBFNN), and the limited maximum precision problem of the convolutional neural network (CNN). The performance (training speed, precision, etc.) of BPNN, RBFNN, and CNN is expected to be improved. The main works are as follows. Firstly, based on the existing BPNN and RBFNN, a wavelet neural network (WNN) is implemented in order to obtain better performance for further improving CNN. The WNN adopts the network structure of BPNN in order to obtain faster training speed, and adopts a wavelet function as the activation function, whose form is similar to the radial basis function of RBFNN, in order to solve the local minimum problem. Secondly, a WNN-based convolutional wavelet neural network (CWNN) method is proposed, in which the fully connected layers (FCL) of CNN are replaced by a WNN. Thirdly, comparative simulations of the discussed methods (BPNN, RBFNN, CNN, and CWNN) on the MNIST and CIFAR-10 datasets are implemented and analyzed. Fourthly, a wavelet-based convolutional neural network (WCNN) is proposed, in which the wavelet transformation is adopted as the activation function in the convolutional pooling neural network (CPNN) of CNN. Fifthly, simulations based on CWNN are implemented and analyzed on the MNIST dataset. The effects are as follows. Firstly, WNN can solve the problems of BPNN and RBFNN and achieves better performance. Secondly, the proposed CWNN reduces the mean square error and the error rate of CNN, which means CWNN has better maximum precision than CNN. Thirdly, the proposed WCNN reduces the mean square error and the error rate of CWNN, which means WCNN has better maximum precision than CWNN.
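The core idea of a WNN layer is an affine map followed by a wavelet-shaped activation instead of a sigmoid or radial basis function. A hypothetical sketch using a Morlet-type wavelet (the abstract does not specify which wavelet the authors adopt; the Morlet form, dilation parameter `a`, and layer shapes below are illustrative assumptions):

```python
# Illustrative sketch of a single WNN-style layer. The Morlet-type
# wavelet and the dilation parameter `a` are assumptions for the
# example, not details taken from the paper.
import numpy as np

def morlet(x):
    """Morlet-type wavelet: a cosine modulated by a Gaussian envelope."""
    return np.cos(1.75 * x) * np.exp(-0.5 * x ** 2)

def wavelet_layer(x, w, b, a):
    """One layer: affine map, then a dilated wavelet activation."""
    return morlet((x @ w + b) / a)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))   # batch of 4 inputs, 3 features each
w = rng.standard_normal((3, 5))   # weights to 5 hidden units
out = wavelet_layer(x, w, b=0.0, a=1.0)
print(out.shape)  # (4, 5)
```

Because the wavelet is localized (it decays like a Gaussian away from zero), it behaves similarly to an RBF unit while keeping the feed-forward BPNN structure, which is the combination the abstract describes.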


1977 ◽  
Vol 44 (3) ◽  
pp. 487-491 ◽  
Author(s):  
S. F. Masri ◽  
F. Udwadia

The transient mean-square displacement, slope, and relative motion of a viscously damped shear beam subjected to correlated random boundary excitation are presented. The effects of various system parameters, including the spectral characteristics of the excitation, the delay time between the beam support motions, and the beam damping, have been investigated. Marked amplifications in the mean-square response are shown to occur for certain dimensionless time delays.


2019 ◽  
Vol 2019 ◽  
pp. 1-5
Author(s):  
Long Shi

In this work, a generalization of the continuous time random walk is considered, in which the waiting times among the subsequent jumps are power-law correlated with kernel function M(t) = t^ρ (ρ > -1). In a continuum limit, the correlated continuous time random walk converges in distribution to a subordinated process. The mean square displacement of the proposed process is computed and has the form ⟨x²(t)⟩ ∝ t^H with H = 1/(1 + ρ + 1/α). The anomalous diffusion exponent H varies from α to α/(1 + α) when -1 < ρ < 0 and from α/(1 + α) to 0 when ρ > 0. The generalized diffusion equation of the process is also derived, which has a unified form for the above two cases.
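The exponent formula can be checked numerically against the two regimes stated in the abstract. A small sketch (α = 0.5 is an arbitrary illustrative choice):

```python
# Evaluating the MSD exponent H = 1/(1 + rho + 1/alpha) from the
# abstract and checking its limiting values.

def H(rho, alpha):
    return 1.0 / (1.0 + rho + 1.0 / alpha)

alpha = 0.5
print(H(-1.0, alpha))  # boundary value: H -> alpha as rho -> -1
print(H(0.0, alpha))   # H = alpha / (1 + alpha) at rho = 0
print(H(10.0, alpha))  # H -> 0 as rho grows large
```

The printed values confirm the stated monotone decrease of H in ρ: from α at the ρ → -1 end, through α/(1 + α) at ρ = 0, toward 0 for large ρ.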


2016 ◽  
Vol 2016 ◽  
pp. 1-19 ◽  
Author(s):  
Chuangxia Huang ◽  
Jie Cao ◽  
Peng Wang

We address the problem of the stochastic attractor and boundedness for a class of switched Cohen-Grossberg neural networks (CGNNs) with discrete and infinitely distributed delays. With the help of stochastic analysis techniques, the Lyapunov-Krasovskii functional method, the linear matrix inequality (LMI) technique, and the average dwell time (ADT) approach, some novel sufficient conditions are established regarding mean-square uniform ultimate boundedness, the existence of a stochastic attractor, and mean-square exponential stability for the switched Cohen-Grossberg neural networks. Finally, illustrative examples and their simulations are provided to demonstrate the effectiveness of the proposed results.
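The ADT approach restricts how often the system may switch: the number of switches on an interval is bounded by a chatter bound plus the interval length divided by the average dwell time. A hypothetical sketch of that standard condition (the names `N0` and `tau_a` and the sample switching sequence are illustrative, not taken from the paper):

```python
# Illustrative check of the standard average dwell time condition
#   N(t0, t) <= N0 + (t - t0) / tau_a,
# where N counts switching instants in (t0, t]. N0 (chatter bound)
# and tau_a (average dwell time) are assumed names for this sketch.

def satisfies_adt(switch_times, t0, t, N0, tau_a):
    """True if the switching sequence respects the ADT bound on (t0, t]."""
    N = sum(1 for s in switch_times if t0 < s <= t)
    return N <= N0 + (t - t0) / tau_a

switches = [1.0, 2.5, 4.0, 5.5]
print(satisfies_adt(switches, 0.0, 6.0, N0=1, tau_a=2.0))  # 4 <= 1 + 3: True
```

Stability results of the kind in the abstract typically hold for every switching signal whose ADT exceeds a threshold derived from the Lyapunov analysis.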


2017 ◽  
Vol 2017 ◽  
pp. 1-7 ◽  
Author(s):  
Long Shi ◽  
Aiguo Xiao

We consider a particular type of continuous time random walk where the jump lengths between subsequent waiting times are correlated. In a continuum limit, the process can be defined by an integrated Brownian motion subordinated by an inverse α-stable subordinator. We compute the mean square displacement of the proposed process and show that the process exhibits subdiffusion when 0<α<1/3, normal diffusion when α=1/3, and superdiffusion when 1/3<α<1. The time-averaged mean square displacement is also employed to show weak ergodicity breaking occurring in the proposed process. An extension to the fractional case is also considered.
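The three regimes reported in the abstract partition α at the single threshold 1/3 (consistent with a mean square displacement growing like a power of t whose exponent crosses 1 at α = 1/3). A minimal sketch encoding that classification:

```python
# Classifying the diffusion regime from the abstract's thresholds on
# alpha for the subordinated integrated Brownian motion.

def regime(alpha):
    if not 0.0 < alpha < 1.0:
        raise ValueError("alpha must lie in (0, 1)")
    if alpha < 1.0 / 3.0:
        return "subdiffusion"
    if alpha == 1.0 / 3.0:
        return "normal diffusion"
    return "superdiffusion"

print(regime(0.2))  # subdiffusion
print(regime(0.8))  # superdiffusion
```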

