An Enhanced Fusion Strategy for Reliable Attitude Measurement Utilizing Vision and Inertial Sensors

2019 ◽  
Vol 9 (13) ◽  
pp. 2656
Author(s):  
Zhang ◽  
Shen ◽  
Chen ◽  
Cao ◽  
Zhao ◽  
...  

In this paper, we present an enhanced fusion strategy, based on a radial basis function (RBF) network and a cubature Kalman filter (CKF), for vision and inertial integrated attitude measurement, addressing the sampling-frequency discrepancy between the sensors and the resulting attitude divergence. First, the multi-frequency problem of the integrated system and the cause of attitude divergence are analyzed. Second, the filter equation and the attitude differential equation are constructed to calculate attitudes separately in the time series, depending on whether both visual and inertial data are available or only inertial data. Third, the attitude errors between the inertial and vision solutions are fed to the input layer of the RBF network for training; the errors then pass through the activation function of the hidden layer to the output layer, where they are weighted and summed, establishing the training model. To overcome the divergence inherent in a multi-frequency system, the well-trained RBF network, which outputs the attitude errors, is used to compensate the attitudes calculated from pure inertial data. Finally, semi-physical simulation experiments under different scenarios validate the effectiveness and superiority of the proposed scheme in terms of accurate attitude measurement and enhanced anti-divergence capability.
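The compensation idea above can be sketched in a few lines: fit an RBF network to the attitude error between the vision-aided and pure-inertial solutions, then subtract its prediction from the inertial-only attitude between vision updates. The toy drift model, center count, and kernel width below are illustrative assumptions, not the authors' design.

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian radial-basis features phi_j(x) = exp(-(x - c_j)^2 / (2*width^2))."""
    d2 = (x[:, None] - centers[None, :]) ** 2
    return np.exp(-d2 / (2.0 * width ** 2))

t = np.linspace(0.0, 10.0, 200)            # time stamps of inertial epochs
true_err = 0.05 * t + 0.02 * np.sin(t)     # toy drift of pure-inertial attitude (rad)

centers = np.linspace(0.0, 10.0, 15)
Phi = rbf_features(t, centers, width=1.0)
w, *_ = np.linalg.lstsq(Phi, true_err, rcond=None)  # train output-layer weights

predicted_err = Phi @ w
corrected = true_err - predicted_err       # attitude error after compensation
print(np.max(np.abs(corrected)))           # worst-case residual
```

The output-layer solve stands in for the paper's training loop; in the full system the targets would come from CKF-filtered vision/inertial differences rather than a known drift curve.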

Micromachines ◽  
2021 ◽  
Vol 12 (4) ◽  
pp. 402
Author(s):  
Ning Liu ◽  
Tianqi Tian ◽  
Zhong Su ◽  
Wenhao Qi

This paper studies the measurement of the motion parameters of a parachute scanning platform. Such a platform rotates quickly and has a complex attitude, so traditional measurement methods cannot measure the motion parameters accurately and fail to satisfy the measurement requirements. To solve these problems, a method for measuring the motion parameters of a parachute scanning platform based on a combination of magnetic and inertial sensors is proposed in this paper. First, the scanning motion characteristics of a parachute-terminal-sensitive projectile are analyzed. Next, a high-precision attitude measurement device for the parachute scanning platform is designed to acquire the magnetic and inertial sensor data. Then an extended Kalman filter is used to filter the data and estimate the errors, yielding the scanning angle, the scanning angular velocity, the falling velocity, and the 2D scanning attitude. Finally, the accuracy and feasibility of the algorithm are analyzed and validated by MATLAB simulation, semi-physical simulation, and airdrop experiments. The presented results can serve as a helpful reference for the design and analysis of parachute scanning platforms, reducing development time and cost.
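A minimal Kalman-filter sketch in the spirit of the magnetic/inertial fusion described above: the state is [scanning angle, scanning angular rate] and a magnetometer-like sensor observes the angle. The motion model here is linear, so the filter reduces to the standard linear case; the noise values and 1 rev/s scanning rate are illustrative assumptions, not the authors' design.

```python
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-angular-rate motion model
H = np.array([[1.0, 0.0]])                 # magnetic sensor observes the angle
Q = np.diag([1e-6, 1e-4])                  # process noise covariance
R = np.array([[1e-2]])                     # measurement noise covariance

x = np.array([0.0, 0.0])                   # initial [angle, rate] estimate
P = np.eye(2)
rng = np.random.default_rng(1)

true_angle, true_rate = 0.0, 2.0 * np.pi   # 1 rev/s scanning motion
for _ in range(500):
    true_angle += true_rate * dt
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with a noisy angle measurement
    z = true_angle + rng.normal(0.0, 0.1)
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print(x)  # estimated [angle, rate] should approach the true values
```

The scanning angular rate is never measured directly; the filter recovers it from successive angle observations, which is the essence of observing errors through the EKF in the paper.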


2019 ◽  
Vol 12 (3) ◽  
pp. 156-161 ◽  
Author(s):  
Aman Dureja ◽  
Payal Pahwa

Background: Activation functions play an important role in building deep neural networks, and their choice affects both optimization and the quality of the results. Several activation functions have been introduced in machine learning for practical applications, but which one should be used in the hidden layers of deep neural networks has not been clearly identified. Objective: The primary objective of this analysis was to determine which activation function should be used in the hidden layers of deep neural networks to solve complex non-linear problems. Methods: The comparative model was configured using a two-class dataset (Cat/Dog). The network used three convolutional layers, with a pooling layer introduced after each convolutional layer. The dataset was divided into two parts: the first 8000 images were used for training the network and the remaining 2000 images for testing it. Results: The experimental comparison was performed by analyzing the network with different activation functions (ReLU, Tanh, SELU, PReLU, ELU) in the hidden layers of the CNN and examining the validation error and accuracy on the Cat/Dog dataset. Overall, ReLU gave the best performance, with a validation loss of 0.3912 and a validation accuracy of 0.8320 at the 25th epoch. Conclusion: A CNN model with ReLU in its hidden layers (three hidden layers here) gives the best results and improves overall performance in terms of both accuracy and speed. These advantages of ReLU in the hidden layers of a CNN support effective and fast retrieval of images from databases.
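The five activation functions compared in the study, written out in NumPy. The SELU constants are the standard values from Klambauer et al.; PReLU's slope is learnable in practice and is fixed at 0.25 here purely for illustration. This shows only the functions, not the full CNN.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def tanh(x):
    return np.tanh(x)

def selu(x, alpha=1.6732632423543772, scale=1.0507009873554805):
    return scale * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def prelu(x, a=0.25):
    # slope `a` is a learned parameter in a real PReLU layer
    return np.where(x > 0, x, a * x)

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for name, f in [("relu", relu), ("tanh", tanh), ("selu", selu),
                ("prelu", prelu), ("elu", elu)]:
    print(name, np.round(f(x), 4))
```

All five agree on the positive half-axis up to SELU's scale factor; they differ only in how they treat negative inputs, which is precisely what drives the training-dynamics differences the study measures.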


Author(s):  
Volodymyr Shymkovych ◽  
Sergii Telenyk ◽  
Petro Kravets

This article introduces a method for realizing the Gaussian activation function of radial-basis-function (RBF) neural networks in hardware on field-programmable gate arrays (FPGAs). Results of modeling the Gaussian function on FPGA chips of different families are presented, and RBF neural networks of various topologies have been synthesized and investigated. The hardware component implemented by this algorithm is an RBF neural network with four hidden-layer neurons and one output neuron with a sigmoid activation function, realized on an FPGA using 16-bit fixed-point numbers and occupying 1193 lookup tables (LUTs). Each hidden-layer neuron of the RBF network is implemented on the FPGA as a separate computing unit. The total delay of the combinational circuit of the RBF network block was 101.579 ns. The implementation of the Gaussian activation functions of the hidden layer occupies 106 LUTs, with a delay of 29.33 ns and an absolute error of ±0.005. The Spartan-3 family of chips was used to obtain these results; modeling on chips of other series is also presented in the article. Hardware implementation of RBF neural networks at such speeds allows them to be used in real-time control systems for high-speed objects.
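A software sketch of the hardware idea: approximate the Gaussian activation with a lookup table in 16-bit fixed point, as one might synthesize on an FPGA. The table size and the 12 fractional bits are illustrative assumptions (the paper does not state them); the check below shows such a table can stay within the ±0.005 absolute error the paper reports.

```python
import numpy as np

FRAC_BITS = 12                      # 12 fractional bits within a 16-bit word
TABLE_SIZE = 1024
X_MAX = 4.0                         # exp(-x^2) is effectively 0 beyond |x| = 4

xs = np.linspace(0.0, X_MAX, TABLE_SIZE)
lut = np.round(np.exp(-xs ** 2) * (1 << FRAC_BITS)).astype(np.int32)

def gaussian_fixed(x):
    """Gaussian exp(-x^2) via the LUT, exploiting symmetry about zero."""
    idx = min(int(abs(x) / X_MAX * (TABLE_SIZE - 1)), TABLE_SIZE - 1)
    return lut[idx] / (1 << FRAC_BITS)

test_x = np.linspace(-4.0, 4.0, 2001)
err = max(abs(gaussian_fixed(v) - np.exp(-v ** 2)) for v in test_x)
print(err)   # worst-case absolute error of the LUT approximation
```

On the FPGA the division by the table step becomes a shift and the LUT a block RAM or distributed LUT fabric, which is where the 106-LUT figure comes from.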


2021 ◽  
pp. 1063293X2110251
Author(s):  
K Vijayakumar ◽  
Vinod J Kadam ◽  
Sudhir Kumar Sharma

A Deep Neural Network (DNN) is a multilayered Neural Network (NN) capable of progressively learning more abstract and composite representations of the raw input features, with no need for any feature engineering. DNNs are advanced NNs with repeated hidden layers between the input and the final layer. The working principle of such a standard deep classifier is a hierarchy formed by composing linear functions with a chosen nonlinear Activation Function (AF). It remains unclear exactly why DNN classifiers work so well, but many studies show that within a DNN the choice of AF has a notable impact on training dynamics and task success. Different AFs have been formulated in the past few years, and the choice of AF is still an area of active study. Hence, in this study, a novel deep feed-forward NN model with four AFs is proposed for breast cancer classification: Swish in hidden layer 1, LeakyReLU in hidden layer 2, ReLU in hidden layer 3, and Sigmoid in the final output layer. The purpose of the study is twofold. First, it is a step toward a more profound understanding of DNNs with layer-wise different AFs. Second, it explores better DNN-based systems for building predictive models of breast cancer data with improved accuracy. The benchmark UCI WDBC dataset was therefore used to validate the framework, evaluated with a ten-fold CV method and various performance indicators. Multiple simulations and experimental outcomes show that the proposed solution performs better than DNNs using Sigmoid, ReLU, LeakyReLU, or Swish throughout, in terms of several parameters. This analysis contributes an expert and precise clinical classification method for breast cancer data. Furthermore, the model also achieved improved performance compared to many established state-of-the-art algorithms/models.
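The layer-wise activation scheme (Swish → LeakyReLU → ReLU → Sigmoid) can be expressed as a simple forward pass. The hidden-layer sizes and random weights below are placeholders, not the paper's trained model; the 30 input features match WDBC.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def swish(x):
    return x * sigmoid(x)

def leaky_relu(x, a=0.01):
    return np.where(x > 0, x, a * x)

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(42)
sizes = [30, 16, 8, 4, 1]           # input -> 3 hidden layers -> output
weights = [rng.normal(0, 0.3, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
acts = [swish, leaky_relu, relu, sigmoid]   # one AF per layer, as in the paper

def forward(x):
    for W, f in zip(weights, acts):
        x = f(x @ W)
    return x

x = rng.normal(size=(5, 30))        # a batch of 5 standardized samples
probs = forward(x)
print(probs.ravel())                # predicted class probabilities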


2021 ◽  
Vol 11 (14) ◽  
pp. 6348
Author(s):  
Zijun Yang ◽  
Bowen Wang ◽  
Xia Sheng ◽  
Yupeng Wang ◽  
Qiang Ren ◽  
...  

The dead-ended anode (DEA) and anode recirculation operations are commonly used to improve the hydrogen utilization of automotive proton exchange membrane (PEM) fuel cells. The cell performance declines over time due to nitrogen crossover and liquid water accumulation in the anode, so highly efficient prediction of the short-term degradation behavior of the PEM fuel cell has great significance. In this paper, we propose a data-driven degradation prediction method based on multivariate polynomial regression (MPR) and an artificial neural network (ANN). The method first predicts the initial value of the cell performance, and then the variation of cell performance over time is predicted to describe the degradation behavior of the PEM fuel cell. Two sets of degradation data, from a PEM fuel cell in the DEA and anode recirculation modes, are employed to train the model and demonstrate the validity of the proposed method. The results show that the mean relative errors predicted by the proposed method are much smaller than those obtained using the ANN or MPR alone. The predictive performance of the two-hidden-layer ANN is significantly better than that of the one-hidden-layer ANN, and the performance curves predicted with the sigmoid activation function are smoother and more realistic than those obtained with the rectified linear unit (ReLU) activation function.
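The two-stage idea can be sketched with toy data: a multivariate polynomial regression predicts the initial cell voltage from operating conditions, and a second model predicts the decline over time. For compactness both stages below use least squares; in the paper the second stage is an ANN. The coefficient values, operating ranges, and degradation curve are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (MPR): initial voltage as a 2nd-order polynomial of current
# density i (A/cm^2) and temperature T (K). Synthetic training data:
i = rng.uniform(0.2, 1.2, 100)
T = rng.uniform(330.0, 360.0, 100)
v0 = (0.95 - 0.25 * i + 0.001 * (T - 345.0) - 0.05 * i ** 2
      + rng.normal(0, 1e-3, 100))

A = np.column_stack([np.ones_like(i), i, T - 345.0, i ** 2,
                     i * (T - 345.0), (T - 345.0) ** 2])
coef, *_ = np.linalg.lstsq(A, v0, rcond=None)

# Stage 2: voltage drop over time within an operating cycle
# (toy degradation curve fitted by a quadratic; the paper uses an ANN).
t = np.linspace(0.0, 60.0, 50)
drop = 0.002 * t + 0.0001 * t ** 2
drop_fit = np.polyval(np.polyfit(t, drop, 2), t)

v_pred = (A @ coef)[0] - drop_fit   # predicted voltage trace for sample 0
print(v_pred[0], v_pred[-1])
```

Splitting the problem this way means each model only has to capture one effect: the operating-point dependence or the time dependence, which is why the combination beats either model alone in the paper's results.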


2019 ◽  
Vol 2 (1) ◽  
pp. 1
Author(s):  
Hijratul Aini ◽  
Haviluddin Haviluddin

Crude palm oil (CPO) production data at PT. Perkebunan Nusantara (PTPN) XIII from January 2015 to January 2018 are analyzed. This paper aims to predict CPO production using an intelligent algorithm called the Backpropagation Neural Network (BPNN). The accuracy of the prediction has been measured by the mean square error (MSE). The experiments showed that the best hidden layer architecture (HLA) is 5-10-11-12-13-1 with the trainlm learning function (LF), logsig and purelin activation functions (AF), and a learning rate (LR) of 0.5. This architecture has good accuracy, with an MSE of 0.0643. The results show that this model can predict CPO production in 2019.
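The best architecture reported above (5-10-11-12-13-1, logsig hidden layers, purelin output) corresponds to the following forward pass. The weights here are random placeholders; in the paper they are trained with Levenberg-Marquardt (trainlm) on the CPO series.

```python
import numpy as np

def logsig(x):
    """MATLAB's logsig: the logistic sigmoid."""
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(7)
sizes = [5, 10, 11, 12, 13, 1]      # the 5-10-11-12-13-1 architecture
layers = [rng.normal(0, 0.5, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x):
    for W in layers[:-1]:
        x = logsig(x @ W)           # logsig on every hidden layer
    return x @ layers[-1]           # purelin (identity) output layer

x = rng.uniform(size=(3, 5))        # e.g. 5 lagged monthly production values
out = forward(x)
print(out.ravel())
```

With a purelin output the network can produce unbounded production forecasts while the logsig hidden layers provide the nonlinearity.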


2016 ◽  
Vol 36 (2) ◽  
pp. 172-178 ◽  
Author(s):  
Liang Chen ◽  
Leitao Cui ◽  
Rong Huang ◽  
Zhengyun Ren

Purpose This paper aims to present a bio-inspired neural network that improves the information-processing capability of existing artificial neural networks. Design/methodology/approach In the network, the authors introduce a property often found in biological neural systems, hysteresis, as the neuron activation function, and a bionic algorithm, the extreme learning machine (ELM), as the learning scheme. The authors give a gradient-descent procedure to optimize the parameters of the hysteretic function and develop an algorithm to select the ELM parameters online, including the number of hidden-layer nodes and the hidden-layer parameters. The algorithm combines the idea of cross validation with the random assignment of the original ELM. Finally, the authors demonstrate the advantages of the hysteretic ELM neural network by applying it to automatic license plate recognition. Findings Experiments on automatic license plate recognition show that the bio-inspired learning system has better classification accuracy and generalization capability with good efficiency. Originality/value Compared with the conventional sigmoid function, hysteresis as the activation function has two advantages: the neuron's output depends not only on its input but also on derivative information, which provides the neuron with memory; and the hysteretic function can switch between its two segments, thus preventing the neuron from falling into local minima and giving a quicker learning rate. The improved ELM algorithm to some extent makes up for the declining performance caused by the original ELM's complete randomness, at the cost of being a little slower than before.
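The memory effect of a hysteretic activation can be shown minimally: the neuron switches between two shifted sigmoid branches depending on whether its input is rising or falling, so the output depends on the input's history. The branch form and shift value below are illustrative assumptions, not the paper's exact function.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hysteretic(inputs, shift=0.5):
    """Apply sigmoid(x - shift) while the input is rising and
    sigmoid(x + shift) while it is falling (the two hysteresis branches)."""
    out, prev = [], None
    for x in inputs:
        if prev is None or x >= prev:
            out.append(sigmoid(x - shift))   # rising branch
        else:
            out.append(sigmoid(x + shift))   # falling branch
        prev = x
    return np.array(out)

up = hysteretic([-1.0, 0.0, 1.0])    # input sweeping upward through 0
down = hysteretic([1.0, 0.0, -1.0])  # input sweeping downward through 0
print(up, down)  # different outputs at x = 0: the memory effect
```

The same input value 0 maps to two different outputs depending on direction, which is exactly the derivative-dependent memory the abstract describes.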


2022 ◽  
pp. 202-226
Author(s):  
Leema N. ◽  
Khanna H. Nehemiah ◽  
Elgin Christo V. R. ◽  
Kannan A.

Artificial neural networks (ANN) are widely used for classification, and the training algorithm commonly used is the backpropagation (BP) algorithm. The major bottleneck in backpropagation neural network training is fixing appropriate values for the network parameters: the initial weights, biases, activation function, number of hidden layers and number of neurons per hidden layer, number of training epochs, learning rate, minimum error, and momentum term for the classification task. The objective of this work is to investigate the performance of 12 different BP algorithms under variations in these network parameter values during neural network training. The algorithms were evaluated with different training and testing samples taken from three benchmark clinical datasets, namely, the Pima Indian Diabetes (PID), Hepatitis, and Wisconsin Breast Cancer (WBC) datasets obtained from the University of California Irvine (UCI) machine learning repository.


Agriculture ◽  
2020 ◽  
Vol 10 (11) ◽  
pp. 567
Author(s):  
Jolanta Wawrzyniak

Artificial neural networks (ANNs) constitute a promising modeling approach that may be used in control systems for postharvest preservation and storage processes. The study investigated the ability of multilayer perceptron and radial-basis function ANNs to predict fungal population levels in bulk stored rapeseeds with various temperatures (T = 12–30 °C) and water activity in seeds (aw = 0.75–0.90). The neural network model input included aw, temperature, and time, whilst the fungal population level was the model output. During the model construction, networks with a different number of hidden layer neurons and different configurations of activation functions in neurons of the hidden and output layers were examined. The best architecture was the multilayer perceptron ANN, in which the hyperbolic tangent function acted as an activation function in the hidden layer neurons, while the linear function was the activation function in the output layer neuron. The developed structure exhibits high prediction accuracy and high generalization capability. The model provided in the research may be readily incorporated into control systems for postharvest rapeseed preservation and storage as a support tool, which based on easily measurable on-line parameters can estimate the risk of fungal development and thus mycotoxin accumulation.
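The selected architecture above (multilayer perceptron, tanh hidden layer, linear output) corresponds to this forward pass. The hidden size and weights are illustrative placeholders; the three inputs match the study's predictors (water activity, temperature, time), which would normally be standardized before training.

```python
import numpy as np

rng = np.random.default_rng(3)
W1 = rng.normal(0, 0.5, (3, 8))   # 3 inputs -> 8 hidden neurons (size assumed)
b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1))   # hidden -> 1 output (fungal population level)
b2 = np.zeros(1)

def predict(X):
    h = np.tanh(X @ W1 + b1)      # hyperbolic tangent hidden layer
    return h @ W2 + b2            # linear output layer

X = np.array([[0.85, 25.0, 10.0],   # aw, temperature (deg C), time
              [0.75, 12.0, 30.0]])
pred = predict(X)
print(pred.ravel())
```

The tanh/linear pairing is a common regression setup: the bounded hidden layer captures the nonlinear response to temperature and water activity, while the linear output leaves the predicted population level unconstrained.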


2003 ◽  
Vol 15 (9) ◽  
pp. 2199-2226
Author(s):  
Yoshifusa Ito

Let g be a slowly increasing function of locally bounded variation defined on Rc, 1 ≤c≤d. We investigate when g can be an activation function of the hidden-layer units of three-layer neural networks that approximate continuous functions on compact sets. If the support of the Fourier transform of g includes a converging sequence of points with distinct distances from the origin, it can be an activation function without scaling. If and only if the support of its Fourier transform includes a point other than the origin, it can be an activation function with scaling. We also look for a condition on which an activation function can be used for approximation without rotation. Any nonpolynomial functions can be activation functions with scaling, and many familiar functions, such as sigmoid functions and radial basis functions, can be activation functions without scaling. With or without scaling, some of them defined on Rd can be used without rotation even if they are not spherically symmetric.
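In notation commonly used for such results (not taken from the paper itself), the three-layer approximants in question have the form

```latex
\hat f(x) \;=\; \sum_{i=1}^{n} a_i \, g\!\left(\delta_i \, P_c A_i x + b_i\right),
\qquad x \in \mathbb{R}^d,
```

where $g$ is the activation function on $\mathbb{R}^c$, $P_c$ projects onto the first $c$ coordinates, the $A_i$ are rotations, the $\delta_i$ scalings, and the $b_i \in \mathbb{R}^c$ shifts. "Without scaling" means all $\delta_i = 1$, and "without rotation" means all $A_i$ are the identity, which is the sense in which the abstract's conditions on the Fourier support of $g$ are stated.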

