Neutrosophic Compound Orthogonal Neural Network and Its Applications in Neutrosophic Function Approximation

Symmetry ◽  
2019 ◽  
Vol 11 (2) ◽  
pp. 147 ◽  
Author(s):  
Jun Ye ◽  
Wenhua Cui

Neural networks are powerful universal approximation tools. They have been utilized for function/data approximation, classification, pattern recognition, and various related applications. Uncertain or interval values result from the incompleteness of measurements, human observation, and estimation in the real world. A neutrosophic number (NsN) can represent both certain and uncertain information in an indeterminate setting and implies a changeable interval depending on its indeterminate ranges. Existing interval neural networks, however, cannot deal with uncertain problems involving NsNs. Therefore, this study proposes, for the first time, a neutrosophic compound orthogonal neural network (NCONN) containing NsN weight values, NsN inputs and outputs, and hidden-layer neutrosophic neuron functions to approximate neutrosophic functions/NsN data. In the proposed NCONN model, the single input and single output neurons are the transmission nodes of NsN data, and the hidden-layer neutrosophic neurons are constructed from compound functions of the Chebyshev neutrosophic orthogonal polynomial and the neutrosophic sigmoid function. Illustrative and actual examples are provided to verify the effectiveness and learning performance of the proposed NCONN model in approximating neutrosophic nonlinear functions and NsN data. The contribution of this study is that the proposed NCONN can handle the approximation problems of neutrosophic nonlinear functions and NsN data; its main advantages are a simple learning algorithm, faster learning convergence, and higher learning accuracy in indeterminate/NsN environments.
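The abstract does not give the NCONN equations, but the compound hidden neuron it describes, a Chebyshev orthogonal polynomial applied to a sigmoid output with interval-valued (NsN) inputs and weights, can be sketched roughly as follows. All names and the endpoint-based interval propagation are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def chebyshev(n, s):
    # First-kind Chebyshev polynomials via T_0 = 1, T_1 = s,
    # T_k = 2 s T_{k-1} - T_{k-2}
    T = [np.ones_like(s), s]
    while len(T) < n:
        T.append(2.0 * s * T[-1] - T[-2])
    return np.stack(T[:n])

def imul(a, b):
    # interval product [a]*[b]
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

def nconn_forward(x, weights):
    """x = (lo, hi) NsN input; weights = list of (lo, hi) NsN output weights.
    Hidden neuron j computes T_j(sigmoid(x)); sigmoid is monotone so its
    endpoint image is exact, but the Chebyshev part is propagated by
    endpoints only -- a simplification, since T_j is not monotone."""
    s = sigmoid(np.array(x))                 # (2,) interval endpoints
    H = chebyshev(len(weights), s)           # (n_hidden, 2)
    y = (0.0, 0.0)
    for w, h in zip(weights, H):
        lo, hi = imul(w, (float(min(h)), float(max(h))))
        y = (y[0] + lo, y[1] + hi)
    return y

# toy usage: 3 hidden compound neurons, NsN weights as intervals
print(nconn_forward((0.2, 0.5), [(0.9, 1.1), (-0.3, -0.1), (0.05, 0.15)]))
```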

2014 ◽  
Vol 556-562 ◽  
pp. 6081-6084
Author(s):  
Qian Huang ◽  
Wen Long Li ◽  
Jian Kang ◽  
Jun Yang

In this paper, based on an analysis of a variety of neural networks, a new type of pulse neural network is implemented on an FPGA [1]. The network adopts the sigmoid function as the nonlinear excitation function of its hidden layer; at the same time, to reduce ROM table storage space and improve look-up-table efficiency [2], it adopts STAM-algorithm-based nonlinear storage. Altera's EDA tool Quartus II was chosen as the compilation and simulation platform, and the pulse neural network was finally realized on a Cyclone II series EP2C20F484C6 device. Finally, the XOR problem is used as an example for hardware simulation, and the simulation results are consistent with the theoretical values. The pulse neural network provides a new way to improve the reliability and security of complex, nonlinear, time-varying, and uncertain systems.
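The STAM-based storage scheme itself is not detailed in the abstract; the sketch below only illustrates the general LUT-compression idea it alludes to, exploiting the sigmoid's symmetry sigmoid(-x) = 1 - sigmoid(x) to halve the table and spacing breakpoints non-uniformly so that more entries fall where curvature is high. Breakpoint counts and ranges are illustrative choices:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Half-range table: only x >= 0 is stored; dense breakpoints near 0
# (high curvature), sparse ones toward saturation.
breakpoints = np.concatenate([np.linspace(0, 2, 24, endpoint=False),
                              np.linspace(2, 8, 8)])
table = sigmoid(breakpoints)

def sigmoid_lut(x):
    """Piecewise-linear LUT evaluation with symmetry folding."""
    y = np.interp(abs(x), breakpoints, table)   # saturates beyond 8
    return y if x >= 0 else 1.0 - y

print(sigmoid_lut(1.3), sigmoid(1.3))   # LUT value vs. exact
```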


Author(s):  
Untari Novia Wisesty

Eye state detection is one of various tasks toward a Brain-Computer Interface system, since the eye state can be read from brain signals. This paper uses the EEG Eye State dataset (Rosler, 2013) from the UCI Machine Learning Repository. The dataset consists of 14 continuous EEG measurements over 117 seconds, with the eye state marked as "1" (eye closed) or "0" (eye open). The proposed scheme uses a multilayer neural network with the Levenberg-Marquardt optimization learning algorithm as the classification method. The Levenberg-Marquardt method is used to optimize the learning of the neural network because the standard algorithm has a weak convergence rate and needs many iterations to reach a minimum error. Based on the analysis of the experiments on the EEG dataset, it can be concluded that the proposed scheme can be implemented to detect the eye state. The best accuracy was obtained from the combination of the sigmoid function, data normalization, and 31 neurons (95.71%) for one hidden layer, and 98.912% for two hidden layers with 39 and 47 neurons and a linear function.
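As a rough illustration of Levenberg-Marquardt training for such a network, the sketch below applies the standard LM update, solving (JᵀJ + μI) Δw = Jᵀe, to a small one-hidden-layer net. The numerical Jacobian, network shape, and hyperparameters are illustrative choices, not those of the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(w, X, n_hidden):
    """Unpack a flat parameter vector into a 1-hidden-layer net."""
    d = X.shape[1]
    W1 = w[:d * n_hidden].reshape(d, n_hidden)
    b1 = w[d * n_hidden:d * n_hidden + n_hidden]
    W2 = w[d * n_hidden + n_hidden:-1]
    b2 = w[-1]
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)

def lm_train(X, y, n_hidden=8, mu=1e-2, iters=50):
    rng = np.random.default_rng(0)
    n_par = X.shape[1] * n_hidden + 2 * n_hidden + 1
    w = rng.normal(scale=0.5, size=n_par)
    for _ in range(iters):
        e = forward(w, X, n_hidden) - y
        # numerical Jacobian of the residuals (fine for a sketch)
        J = np.empty((len(y), n_par))
        for j in range(n_par):
            dw = np.zeros(n_par); dw[j] = 1e-6
            J[:, j] = (forward(w + dw, X, n_hidden) - y - e) / 1e-6
        step = np.linalg.solve(J.T @ J + mu * np.eye(n_par), J.T @ e)
        if np.sum((forward(w - step, X, n_hidden) - y) ** 2) < np.sum(e ** 2):
            w, mu = w - step, mu * 0.7   # accept: lean toward Gauss-Newton
        else:
            mu *= 2.0                    # reject: lean toward gradient descent
    return w

# toy usage: 14 channels, like the EEG set, with synthetic labels
X = np.random.default_rng(1).normal(size=(64, 14))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w = lm_train(X, y)
print(np.mean((forward(w, X, 8) > 0.5) == (y > 0.5)))
```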


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Yumin Dong ◽  
Xiang Li ◽  
Wei Liao ◽  
Dong Hou

In this paper, a quantum neural network with a multilayer activation function is proposed, constructed by superposing multiple sigmoid functions and using a learning algorithm to adjust the quantum intervals. On this basis, the quasi-uniform stability of fractional quantum neural networks with mixed delays is studied. For two different cases of the fractional order, conditions for the quasi-uniform stability of the networks are given using linear-matrix-inequality analysis techniques, and the sufficiency of the conditions is proved. Finally, the feasibility of the conclusions is verified by experiments.
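The "multilayer sigmoid superposition" named in the abstract is commonly realized as a staircase-like activation built from shifted sigmoids. The sketch below shows that construction under assumed names and a fixed set of quantum intervals (the paper learns them):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def multilevel_sigmoid(x, thetas, slope=10.0):
    """Superposition of shifted sigmoids: a staircase-like activation.
    thetas play the role of the 'quantum intervals' (jump positions),
    adjusted during learning in the paper; fixed here for illustration."""
    return np.mean([sigmoid(slope * (x - t)) for t in thetas], axis=0)

x = np.linspace(-2, 2, 9)
print(multilevel_sigmoid(x, thetas=[-1.0, 0.0, 1.0]))
```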


Author(s):  
Qingsong Xu

Extreme learning machine (ELM) is a learning algorithm for single-hidden-layer feedforward neural networks. In theory, this algorithm is able to provide good generalization capability at extremely fast learning speed. Comparative studies on benchmark function approximation problems revealed that ELM can learn thousands of times faster than conventional neural networks (NN) and can produce good generalization performance in most cases. Unfortunately, research on damage localization using ELM is limited in the literature. In this chapter, ELM is extended to the domain of damage localization of plate structures. Its effectiveness in comparison with typical neural networks such as the back-propagation neural network (BPNN) and the least squares support vector machine (LSSVM) is illustrated through experimental studies. Comparative investigations in terms of learning time and localization accuracy are carried out in detail. It is shown that ELM paves a new way in the domain of plate structural health monitoring. Both advantages and disadvantages of using ELM are discussed.
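ELM itself is well defined: hidden weights are drawn at random and only the output weights are solved in closed form via the Moore-Penrose pseudoinverse, which is the source of its speed. A minimal regression sketch follows; the activation, sizes, and toy data are illustrative choices, not those of the chapter:

```python
import numpy as np

def elm_train(X, y, n_hidden=100, seed=0):
    """Random hidden layer, closed-form output weights (least squares)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y          # Moore-Penrose solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy regression: learn sin on [0, 2*pi]
X = np.linspace(0, 2 * np.pi, 200)[:, None]
y = np.sin(X[:, 0])
W, b, beta = elm_train(X, y)
print(np.max(np.abs(elm_predict(X, W, b, beta) - y)))  # small fit error
```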


Author(s):  
Serkan Kiranyaz ◽  
Junaid Malik ◽  
Habib Ben Abdallah ◽  
Turker Ince ◽  
Alexandros Iosifidis ◽  
...  

The recently proposed network model, Operational Neural Networks (ONNs), can generalize the conventional Convolutional Neural Networks (CNNs), which are homogeneous and use only a linear neuron model. As a heterogeneous network model, ONNs are based on a generalized neuron model that can encapsulate any set of non-linear operators to boost diversity and to learn highly complex and multi-modal functions or spaces with minimal network complexity and training data. However, the default search method to find optimal operators in ONNs, the so-called Greedy Iterative Search (GIS) method, usually takes several training sessions to find a single operator set per layer. This is not only computationally demanding, but the network heterogeneity is also limited, since the same set of operators is then used for all neurons in each layer. To address this deficiency and exploit a superior level of heterogeneity, this study focuses on searching for the best-possible operator set(s) for the hidden neurons of the network based on the "Synaptic Plasticity" paradigm, which poses the essential learning theory in biological neurons. During training, each operator set in the library can be evaluated by its synaptic plasticity level, ranked from the worst to the best, and an "elite" ONN can then be configured using the top-ranked operator sets found at each hidden layer. Experimental results over highly challenging problems demonstrate that elite ONNs, even with few neurons and layers, achieve superior learning performance compared to GIS-based ONNs, and as a result the performance gap over CNNs further widens.
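The paper's synaptic plasticity measure is not specified in the abstract; the sketch below only shows the scaffold it implies, a library of (nodal, pool, activation) operator triples scored and ranked per set, with a hypothetical sensitivity-based score standing in for the actual plasticity level:

```python
import numpy as np

# A candidate operator set is a (nodal, pool, activation) triple, mirroring
# ONNs' generalized neuron. The first entry reduces to a conventional neuron.
operator_library = {
    "mul-sum-tanh":   (np.multiply, np.sum, np.tanh),
    "sin-sum-tanh":   (lambda w, x: np.sin(w * x), np.sum, np.tanh),
    "exp-median-lin": (lambda w, x: np.exp(w * x) - 1, np.median, lambda s: s),
}

def neuron(w, x, ops):
    nodal, pool, act = ops
    return act(pool(nodal(w, x)))

def plasticity_score(ops, trials=200, seed=0):
    """Hypothetical proxy (NOT the paper's measure): how strongly the
    neuron's output responds to small weight perturbations, averaged
    over random inputs."""
    rng = np.random.default_rng(seed)
    s = 0.0
    for _ in range(trials):
        w, x = rng.normal(size=8), rng.normal(size=8)
        dw = 1e-3 * rng.normal(size=8)
        s += abs(neuron(w + dw, x, ops) - neuron(w, x, ops))
    return s / trials

# rank operator sets from best to worst and pick the "elite" one
ranked = sorted(operator_library,
                key=lambda k: plasticity_score(operator_library[k]),
                reverse=True)
print("elite operator set per this proxy:", ranked[0])
```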


2020 ◽  
Vol 8 (4) ◽  
pp. 469
Author(s):  
I Gusti Ngurah Alit Indrawan ◽  
I Made Widiartha

Artificial Neural Networks, commonly abbreviated as ANN, are a branch of artificial intelligence that is often used to solve various problems involving grouping and pattern recognition. This research aims to classify the Letter Recognition dataset using an Artificial Neural Network whose weights are optimized with the Artificial Bee Colony algorithm. The best classification accuracy achieved in this study was 92.85%, using a combination of 4 hidden layers with 10 neurons in each hidden layer.
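A minimal sketch of the idea, Artificial Bee Colony search over a flattened weight vector with the network's training error as the fitness, follows. The loss, colony size, and limits are illustrative assumptions, not the study's settings:

```python
import numpy as np

def mlp_loss(w, X, y, n_hidden):
    """Squared error of a tiny 1-hidden-layer net (fitness for ABC)."""
    d = X.shape[1]
    W1 = w[:d * n_hidden].reshape(d, n_hidden)
    W2 = w[d * n_hidden:].reshape(n_hidden)
    out = np.tanh(np.tanh(X @ W1) @ W2)
    return np.mean((out - y) ** 2)

def abc_optimize(loss, dim, n_food=20, limit=30, iters=100, seed=0):
    """Minimal Artificial Bee Colony: employed + onlooker + scout phases."""
    rng = np.random.default_rng(seed)
    food = rng.uniform(-1, 1, size=(n_food, dim))
    fit = np.array([loss(f) for f in food])
    trials = np.zeros(n_food)
    for _ in range(iters):
        prob = 1 / (1 + fit); prob /= prob.sum()
        for phase in range(2):   # employed, then onlooker bees
            for i in range(n_food):
                if phase == 1 and rng.random() > prob[i]:
                    continue     # onlookers prefer good food sources
                k, j = rng.integers(n_food), rng.integers(dim)
                cand = food[i].copy()
                cand[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
                c = loss(cand)
                if c < fit[i]:
                    food[i], fit[i], trials[i] = cand, c, 0
                else:
                    trials[i] += 1
        worn = trials > limit    # scout phase: abandon exhausted sources
        food[worn] = rng.uniform(-1, 1, size=(int(worn.sum()), dim))
        fit[worn] = [loss(f) for f in food[worn]]
        trials[worn] = 0
    return food[np.argmin(fit)]

# toy usage: fit a tiny net to a synthetic two-class problem
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 4)); y = np.sign(X[:, 0] * X[:, 1])
best = abc_optimize(lambda w: mlp_loss(w, X, y, 6), dim=4 * 6 + 6)
print("final loss:", mlp_loss(best, X, y, 6))
```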


2005 ◽  
Vol 128 (4) ◽  
pp. 773-782 ◽  
Author(s):  
H. S. Tan

The conventional approach to neural network-based aircraft engine fault diagnostics has mainly been via multilayer feed-forward systems with sigmoidal hidden neurons trained by back propagation, as well as radial basis function networks. In this paper, we explore two novel approaches to the fault-classification problem using (i) Fourier neural networks, which synthesize the approximation capability of multidimensional Fourier transforms with gradient-descent learning, and (ii) a class of generalized single-hidden-layer networks (GSLN), which self-structure via Gram-Schmidt orthonormalization. Using a simulation program for the F404 engine, we generate steady-state engine parameters corresponding to a set of combined two-module deficiencies and require various neural networks to classify the multiple faults. We show that, compared to the conventional network architectures, the Fourier neural network exhibits stronger noise robustness and the GSLNs converge at a much superior speed.
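The Gram-Schmidt self-structuring attributed to GSLNs can be sketched as follows: candidate hidden-neuron outputs are orthonormalized over the training set, near-redundant candidates are dropped, and output weights fall out as projections of the target onto the surviving basis. Names, tolerances, and the sigmoid candidates are illustrative assumptions:

```python
import numpy as np

def gram_schmidt_select(H, y, tol=1e-3):
    """H: (N, m) candidate hidden-neuron outputs over the training set.
    Orthonormalize columns one by one, keep those contributing enough
    new 'direction' (norm above tol after deflation), and read output
    weights off as projections of y onto the kept basis."""
    Q, kept = [], []
    for j in range(H.shape[1]):
        v = H[:, j].astype(float)
        for q in Q:
            v -= (q @ v) * q          # deflate against accepted basis
        n = np.linalg.norm(v)
        if n > tol:
            Q.append(v / n)
            kept.append(j)
    Q = np.stack(Q, axis=1)           # (N, r) orthonormal basis
    return kept, Q, Q.T @ y           # output weights = projections

# toy: candidates are sigmoids with random dilations of a scalar input
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 100)
H = 1 / (1 + np.exp(-np.outer(x, rng.normal(scale=3, size=12))))
kept, Q, c = gram_schmidt_select(H, np.sin(3 * x))
print(len(kept), "of 12 candidates kept; fit error:",
      np.linalg.norm(Q @ c - np.sin(3 * x)))
```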


2008 ◽  
Vol 20 (11) ◽  
pp. 2757-2791 ◽  
Author(s):  
Yoshifusa Ito

We have constructed one-hidden-layer neural networks capable of approximating polynomials and their derivatives simultaneously. Generally, optimizing neural network parameters trained at later steps of BP training is more difficult than optimizing those trained at the first step. Taking this fact into account, we suppressed the number of parameters of the former type. We measure the degree of approximation in both the uniform norm on compact sets and the Lp-norm on the whole space with respect to probability measures.
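In the notation such results usually use (our restatement, with multi-indices α up to an assumed derivative order m, target f, and network output g), the two approximation criteria read:

```latex
% Uniform norm on a compact set K, and L^p norm against a probability
% measure \mu on \mathbb{R}^d; simultaneous approximation means both f
% and its derivatives are matched within \varepsilon.
\max_{|\alpha| \le m} \sup_{x \in K}
  \bigl| \partial^{\alpha} f(x) - \partial^{\alpha} g(x) \bigr| < \varepsilon,
\qquad
\max_{|\alpha| \le m}
  \Bigl( \int_{\mathbb{R}^d}
    \bigl| \partial^{\alpha} f - \partial^{\alpha} g \bigr|^{p} \, d\mu
  \Bigr)^{1/p} < \varepsilon .
```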


2016 ◽  
Vol 36 (2) ◽  
pp. 172-178 ◽  
Author(s):  
Liang Chen ◽  
Leitao Cui ◽  
Rong Huang ◽  
Zhengyun Ren

Purpose: This paper aims to present a bio-inspired neural network that improves the information-processing capability of existing artificial neural networks.

Design/methodology/approach: In the network, the authors introduce a property often found in biological neural systems, hysteresis, as the neuron activation function, and a bionic algorithm, extreme learning machine (ELM), as the learning scheme. The authors give a gradient-descent procedure to optimize the parameters of the hysteretic function and develop an algorithm to select ELM parameters online, including the number of hidden-layer nodes and the hidden-layer parameters. The algorithm combines the idea of cross validation with the random assignment of the original ELM. Finally, the authors demonstrate the advantages of the hysteretic ELM neural network by applying it to automatic license plate recognition.

Findings: Experiments on automatic license plate recognition show that the bio-inspired learning system has better classification accuracy and generalization capability while taking efficiency into account.

Originality/value: Compared with the conventional sigmoid function, hysteresis as the activation function has two advantages: the neuron's output depends not only on its input but also on derivative information, which provides the neuron with memory; and the hysteretic function can switch between its two segments, thus preventing the neuron from falling into local minima and giving a quicker learning rate. The improved ELM algorithm to some extent makes up for the declining performance caused by the original ELM's complete randomness, at the cost of being a little slower than before.
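A minimal sketch of a hysteretic activation in the spirit described above: two shifted sigmoid branches selected by the sign of the input's change, so the same input can yield two different outputs depending on history. The branch offset and functional form are illustrative assumptions, not the authors' exact definition:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hysteretic_activation(x, dx, a=0.5):
    """Two shifted sigmoid branches; which one fires depends on the sign
    of the input's change dx, giving the neuron a simple memory. The
    offset a is the hysteresis width (trained by gradient descent in the
    paper; fixed here for illustration)."""
    return np.where(dx >= 0, sigmoid(x - a), sigmoid(x + a))

x = np.array([0.0, 0.0])
print(hysteretic_activation(x, dx=np.array([+1.0, -1.0])))  # same x, two outputs
```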

