An Electronic Nose Using Neural Networks with Effective Training Data Selection

2003 ◽  
Vol 15 (4) ◽  
pp. 369-376 ◽  
Author(s):  
Bancha Charumporn ◽  
Michifumi Yoshioka ◽  
Toru Fujinaka ◽  
Sigeru Omatu

An electronic nose developed from metal oxide gas sensors is applied to test the smoke of three common household burning materials under different environments. Training data for a layered neural network with error back-propagation (BP) are generally selected at random, but randomly selected training data always contain redundant samples that lengthen training time without improving classification performance. This paper proposes an effective method for selecting training data based on a similarity index (SI). The SI ensures that only the most valuable samples are included in the training set, and the proposed method removes redundant data from the training set before it is fed to the BP-trained layered neural network. Results verify that high classification performance is achieved with only a small number of training samples selected by the proposed method.
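The abstract does not give the exact form of the similarity index, so as an illustrative sketch the selection step can be approximated with a cosine-similarity threshold: a candidate sample is dropped as redundant if it is too similar to an already-kept sample of the same class. The function name, threshold value, and the choice of cosine similarity are assumptions, not the paper's definition.

```python
import numpy as np

def select_by_similarity(X, y, threshold=0.98):
    """Greedily keep a sample only if its cosine similarity to every
    already-kept sample of the SAME class stays below the threshold."""
    keep = []
    for i, (x, label) in enumerate(zip(X, y)):
        redundant = False
        for j in keep:
            if y[j] != label:
                continue  # similarity to other classes is not redundancy
            sim = np.dot(x, X[j]) / (np.linalg.norm(x) * np.linalg.norm(X[j]) + 1e-12)
            if sim >= threshold:
                redundant = True
                break
        if not redundant:
            keep.append(i)
    return keep

# Two near-duplicate samples of class 0 and one distinct sample of class 1:
X = np.array([[1.0, 0.0], [0.999, 0.001], [0.0, 1.0]])
y = np.array([0, 0, 1])
kept = select_by_similarity(X, y)   # the near-duplicate is filtered out
```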

2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
R. Manjula Devi ◽  
S. Kuppuswami ◽  
R. C. Suganthe

Artificial neural networks (ANNs) are widely used training models for solving pattern recognition tasks. However, training a complex neural network on a very large training data set requires excessively long training time. In this correspondence, a new fast Linear Adaptive Skipping Training (LAST) algorithm for training an ANN is introduced. The core idea is to improve the training speed of the ANN by presenting only those input samples that were not classified correctly in the previous epoch, dynamically reducing the number of input samples presented to the network at each epoch without affecting the network's accuracy. Decreasing the size of the effective training set in this way reduces the training time and thereby improves the training speed. The LAST algorithm also determines how many epochs a particular input sample should skip, depending on the successful classification of that sample. LAST can be incorporated into any supervised training algorithm. Experimental results show that the training speed attained by the LAST algorithm is considerably higher than that of conventional training algorithms.
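The skipping schedule described above can be sketched minimally, assuming a linear rule in which a sample classified correctly for the k-th consecutive time skips the next k epochs; `predict` and `update` are placeholders standing in for the host supervised training algorithm.

```python
import numpy as np

def last_train(X, y, predict, update, epochs=10):
    """Linear Adaptive Skipping Training (sketch): a sample classified
    correctly k consecutive times skips the next k epochs."""
    n = len(X)
    skip = np.zeros(n, dtype=int)      # epochs left to skip per sample
    streak = np.zeros(n, dtype=int)    # consecutive correct classifications
    presented_per_epoch = []
    for _ in range(epochs):
        presented = 0
        for i in range(n):
            if skip[i] > 0:
                skip[i] -= 1           # this sample sits the epoch out
                continue
            presented += 1
            if predict(X[i]) == y[i]:
                streak[i] += 1
                skip[i] = streak[i]    # linearly growing skip interval
            else:
                streak[i] = 0
                update(X[i], y[i])     # ordinary supervised weight update
        presented_per_epoch.append(presented)
    return presented_per_epoch

# With an always-correct predictor, presentations thin out over epochs:
X = list(range(5)); y = list(range(5))
counts = last_train(X, y, predict=lambda x: x, update=lambda x, t: None, epochs=6)
```

With six epochs the per-epoch presentation counts drop from all five samples to sparse bursts, which is exactly the source of the speed-up.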


1999 ◽  
Vol 39 (1) ◽  
pp. 451 ◽  
Author(s):  
H. Crocker ◽  
C.C. Fung ◽  
K.W. Wong

The producing M. australis Sandstone of the Stag Oil Field is a bioturbated glauconitic sandstone that is difficult to evaluate using conventional methods. Well log and core data are available for the Stag Field and for the nearby Centaur–1 well. Eight wells have log data; six also have core data. In the past few years artificial intelligence has been applied to formation evaluation. In particular, artificial neural networks (ANNs) have been studied as a means of matching log and core data, and the ANN approach has been used to analyse the producing Stag Field sands. In this paper, new ways of applying the ANN are reported. Results from the simple ANN approach are unsatisfactory, whereas an integrated ANN approach comprising the unsupervised Self-Organising Map (SOM) and the supervised Back-Propagation Neural Network (BPNN) appears to give a more reasonable analysis. In this case study the mineralogical and petrophysical characteristics of a cored well are predicted from the 'training' data set of the other cored wells in the field. The prediction from the ANN model is then compared with the known core data; in this manner, the accuracy of the prediction is determined and a prediction qualifier computed. This new approach to formation evaluation should provide a match between log and core data that may be used to predict the characteristics of a similar uncored interval. Although the results for the Stag Field are satisfactory, further study applying the method to other fields is required.


2013 ◽  
Vol 373-375 ◽  
pp. 1212-1219
Author(s):  
Afrias Sarotama ◽  
Benyamin Kusumoputro

A good model is necessary in order to design a controller for a system off-line. This is especially beneficial when implementing new advanced control schemes in an Unmanned Aerial Vehicle (UAV). Considering the safety and benefit of off-line tuning of UAV controllers, this paper identifies a dynamic nonlinear MIMO UAV system based on input–output data collected from test flights (36,250 data samples). These input–output flight samples are grouped into two data sets. The first, a chirp-signal excitation, is used to train the neural network and determine its parameters (weights). The network is validated on the second data set, which is not used for training and represents a circular UAV flight manoeuvre. An artificial neural network is trained on the training data set and then excited with the inputs of the second set. The outputs predicted by our proposed neural network model closely match the desired outputs (roll, pitch, and yaw) produced by the real UAV system.


2003 ◽  
Vol 56 (2) ◽  
pp. 291-304 ◽  
Author(s):  
Dah-Jing Jwo ◽  
Chien-Cheng Lai

A neural network (NN)-based geometry classification for selecting good or acceptable navigation satellite subsets is presented. The approach classifies the values of the satellite Geometry Dilution of Precision (GDOP) using classification-type NNs. Unlike NNs that approximate a function, such as the back-propagation neural network (BPNN), the NNs here are employed as classifiers. Although a BPNN can also be employed as a classifier, it requires a long training time. Two other methods featuring fast learning are therefore implemented: the Optimal Interpolative (OI) net and the Probabilistic Neural Network (PNN). Simulation results from all three neural networks are presented, and the classification performance and computational expense of NN-based GDOP classification are explored.
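The GDOP values that the classifiers label follow from the standard geometry matrix H, whose rows hold the negative unit line-of-sight vectors plus a clock column of ones, with GDOP = sqrt(trace((HᵀH)⁻¹)). A minimal sketch, with arbitrary illustrative satellite coordinates:

```python
import numpy as np

def gdop(sat_positions, receiver=np.zeros(3)):
    """GDOP from satellite/receiver geometry: sqrt(trace((H^T H)^-1)),
    where each row of H is [-unit line-of-sight vector, 1]."""
    rows = []
    for s in sat_positions:
        d = s - receiver
        rows.append(np.append(-d / np.linalg.norm(d), 1.0))
    H = np.array(rows)
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

# Five satellites in an illustrative (non-degenerate) constellation:
sats = np.array([[1, 0, 1], [-1, 0, 1], [0, 1, 1], [0, -1, 1], [0, 0, 1]],
                dtype=float)
val = gdop(sats)
```

A classifier as in the paper would then map such geometries to "good"/"acceptable" labels by thresholding these GDOP values rather than regressing them.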


2016 ◽  
Vol 818 ◽  
pp. 96-100 ◽  
Author(s):  
Novizon ◽  
Zulkurnain Abdul-Malek

Neural networks are frequently used as classifiers in many classification tasks, but they have disadvantages in terms of the amount of training data required and the length of training time. This paper develops an intelligent diagnosis system for zinc oxide (ZnO) surge arrester fault classification. First, features were extracted from 600 ZnO surge arrester thermal images and leakage currents. These features were then presented to several neural network architectures to determine the most suitable model for classifying the ZnO surge arrester fault condition. Three classification models were used, namely feed-forward back-propagation (FFBP), radial basis function (RBF), and the learning vector quantization (LVQ) algorithm. The performance of the networks was compared on the basis of misclassification and correct-classification rates, evaluated on 24 testing data sets. The comparison showed that LVQ was the best training algorithm for ZnO surge arrester fault classification; LVQ is also faster than FFBP and RBF.
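The abstract does not specify the LVQ variant; a minimal LVQ1 sketch (the nearest prototype is moved toward a same-class sample and away from a different-class one) illustrates why training is fast, since each sample updates only one prototype. The toy data, prototype initialisation, and hyperparameters are assumptions.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    """LVQ1: pull the best-matching prototype toward same-class samples,
    push it away from different-class samples."""
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            j = np.argmin(np.linalg.norm(P - x, axis=1))  # best matching unit
            step = lr * (x - P[j])
            P[j] += step if proto_labels[j] == label else -step
    return P

def lvq1_predict(x, P, proto_labels):
    return proto_labels[np.argmin(np.linalg.norm(P - x, axis=1))]

# Two small clusters, one prototype per class:
X = np.array([[0.0, 0.0], [0.2, 0.0], [1.0, 1.0], [0.8, 1.0]])
y = np.array([0, 0, 1, 1])
labels = np.array([0, 1])
P = lvq1_train(X, y, np.array([[0.4, 0.2], [0.6, 0.8]]), labels)
```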


2020 ◽  
Vol 5 (2) ◽  
pp. 1-6
Author(s):  
Zeni Permatasari ◽  
Agus Sifaunajah ◽  
Nur Khafidhoh

Electrical energy contributes substantially to the operational costs that must be incurred. The selection of electrical equipment is one alternative that might be implemented to reduce these costs, but in practice users often do not know which electrical equipment consumes high or low electrical power. Therefore, a system was built to classify data on electric power usage into four classes: very efficient, efficient, quite efficient, and wasteful. Classification is performed with a back-propagation neural network algorithm, using a training set of 190 samples and a test set of 30 samples. Based on the training performed, the optimal parameters are a learning rate of 0.5, a target error of 0.001, a maximum of 10,000 epochs, and 25 hidden neurons. Tests show that the system recognizes the data with an accuracy of 96.67% and an MSE of 0.03333. Of the 30 test samples, 29 matched the target; these 29 samples fall into the four classes as 9 very efficient, 6 efficient, 5 quite efficient, and 9 wasteful. It can be concluded that the back-propagation neural network algorithm can be implemented to classify electrical power usage data.
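A compact numpy sketch of a one-hidden-layer back-propagation classifier using the hyperparameters reported above (learning rate 0.5, target MSE 0.001, up to 10,000 epochs, 25 hidden neurons); the toy four-class data set and the weight initialisation are assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bp(X, Y, hidden=25, lr=0.5, target_mse=0.001, max_epoch=10000):
    """One-hidden-layer network trained by error back-propagation until
    the target MSE or the epoch limit is reached."""
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, n_out)); b2 = np.zeros(n_out)
    for _ in range(max_epoch):
        H = sigmoid(X @ W1 + b1)           # hidden activations
        O = sigmoid(H @ W2 + b2)           # network outputs
        err = Y - O
        mse = np.mean(err ** 2)
        if mse <= target_mse:
            break
        dO = err * O * (1 - O)             # output-layer delta
        dH = (dO @ W2.T) * H * (1 - H)     # hidden-layer delta
        W2 += lr * H.T @ dO; b2 += lr * dO.sum(axis=0)
        W1 += lr * X.T @ dH; b1 += lr * dH.sum(axis=0)
    return W1, b1, W2, b2, mse

# Toy four-class problem with one-hot targets:
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
Y = np.eye(4)
W1, b1, W2, b2, mse = train_bp(X, Y)
pred = np.argmax(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), axis=1)
```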


Sensors ◽  
2019 ◽  
Vol 19 (10) ◽  
pp. 2324 ◽  
Author(s):  
Haiqi Zhang ◽  
Jiahe Cui ◽  
Lihui Feng ◽  
Aiying Yang ◽  
Huichao Lv ◽  
...  

In this letter, we propose an indoor visible light positioning technique using a Modified Momentum Back-Propagation (MMBP) algorithm based on received signal strength (RSS) with a sparse training data set. Unlike other neural network algorithms that require a large number of training data points to locate accurately, we achieve high-precision positioning for 100 test points with only 20 training points in a 1.8 m × 1.8 m × 2.1 m localization area. To verify the adaptability of the MMBP algorithm, we experimentally demonstrate two different training data acquisition methods adopting either even or arbitrary training sets, and we also measure the positioning accuracy of the traditional RSS algorithm. Experimental results show that the average localization error optimized by our proposed algorithm is only 1.88 cm for the arbitrary set and 1.99 cm for the even set, while the average positioning error of the traditional RSS algorithm reaches 14.34 cm, indicating that the positioning accuracy of our proposed algorithm is 7.6 times higher. The performance of our system also exceeds some previous reports based on RSS and RSS fingerprint databases using complex machine learning algorithms trained with a large number of training points.
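The exact modification in MMBP is not given in the abstract; for reference, the classical momentum back-propagation update it builds on can be sketched as follows (the learning rate, momentum coefficient, and quadratic demo objective are illustrative).

```python
def momentum_step(w, grad, velocity, lr=0.01, beta=0.9):
    """Classical momentum update underlying momentum back-propagation:
    v <- beta * v - lr * grad;  w <- w + v.
    (The paper's MMBP modifies this scheme; the exact modification is
    not stated in the abstract.)"""
    velocity = beta * velocity - lr * grad
    return w + velocity, velocity

# Minimise f(w) = w**2 (gradient 2w) starting from w = 1:
w, v = 1.0, 0.0
for _ in range(200):
    w, v = momentum_step(w, 2 * w, v)
```

The velocity term smooths successive gradients, which is what lets momentum variants converge on sparse or noisy RSS training data where plain back-propagation oscillates.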


2019 ◽  
Vol 2019 ◽  
pp. 1-10 ◽  
Author(s):  
Oded Medina ◽  
Roi Yozevitch ◽  
Nir Shvalb

It is often hard to relate a sensor's electrical output to the physical scenario when a multidimensional measurement is of interest. An artificial neural network may be a solution, but if the training data set must be extracted from a real experimental setup, acquiring it can become prohibitive in terms of time. The same issue arises when the physical measurement is expected to extend across a wide range of values. This paper presents a novel method for overcoming the long training time in a physical experimental setup by bootstrapping a relatively small data set to generate a synthetic data set, which can then be used to train an artificial neural network. The method can be applied to various measurement systems whose sensor output combines simultaneous occurrences or wide-range values of the physical phenomena of interest. We discuss to which systems our method may be applied and exemplify the results on three study cases: a seismic sensor array, a linear array of strain gauges, and an optical sensor array. We present the experimental process, its results, and the resulting accuracies.
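The paper's generator is specific to each sensor model, so as a generic placeholder the bootstrapping idea can be sketched as resampling the small measured set with replacement and perturbing it with small Gaussian noise; the noise scale and function name are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_synthetic(X, y, n_synthetic, noise=0.05):
    """Draw samples with replacement from a small measured data set and
    jitter them with Gaussian noise to build a larger synthetic training
    set (generic sketch; a real generator would embed the sensor model)."""
    idx = rng.integers(0, len(X), size=n_synthetic)
    Xs = X[idx] + rng.normal(0.0, noise, size=(n_synthetic, X.shape[1]))
    return Xs, y[idx]

# Inflate 3 measured samples into a 100-sample synthetic training set:
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
y = np.array([0, 1, 2])
Xs, ys = bootstrap_synthetic(X, y, n_synthetic=100)
```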


2021 ◽  
Vol 16 ◽  
pp. 155892502110379
Author(s):  
Hao Jiang ◽  
Jiuxiang Song ◽  
Baowei Zhang ◽  
Suna Zhao ◽  
Yonghua Wang

With the continuous development of deep learning, and given the complexity of deep neural network structures and the limitations on training time, some scholars have proposed broad learning, the Broad Learning System (BLS). However, BLS has so far been validated mainly on certain benchmark training data sets and does not necessarily perform well on real-world data sets. In response, this paper evaluates BLS in predicting yarn quality unevenness on a yarn data set and proposes a BLS-based multilayer neural network (MNN), called the Broad Multilayer Neural Network (BMNN), to address the observed problems.
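For context, the standard BLS construction that BMNN builds on (random mapped-feature nodes, random enhancement nodes, and output weights solved in closed form by ridge regression) can be sketched as follows; the node counts, activation function, regularisation strength, and regression demo are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def bls_fit(X, Y, n_feature=20, n_enhance=40, reg=1e-3):
    """Broad Learning System (sketch): only the output weights are
    learned, by ridge regression over random feature/enhancement nodes."""
    Wf = rng.normal(size=(X.shape[1], n_feature)); bf = rng.normal(size=n_feature)
    Z = np.tanh(X @ Wf + bf)                    # mapped feature nodes
    We = rng.normal(size=(n_feature, n_enhance)); be = rng.normal(size=n_enhance)
    H = np.tanh(Z @ We + be)                    # enhancement nodes
    A = np.hstack([Z, H])
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return (Wf, bf, We, be, W)

def bls_predict(X, model):
    Wf, bf, We, be, W = model
    Z = np.tanh(X @ Wf + bf)
    return np.hstack([Z, np.tanh(Z @ We + be)]) @ W

# Fit a smooth 1-D regression target with the random broad features:
X = np.linspace(-3, 3, 100).reshape(-1, 1)
Y = np.sin(X)
model = bls_fit(X, Y)
mse = np.mean((bls_predict(X, model) - Y) ** 2)
```

Because training reduces to one linear solve, BLS avoids the long iterative training of deep networks, which is the trade-off the abstract questions on real data sets.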


2020 ◽  
Vol 39 (5) ◽  
pp. 6419-6430
Author(s):  
Dusan Marcek

To forecast time series data, two methodological frameworks are considered: statistical modelling and computational-intelligence modelling. The statistical approach is based on the theory of invertible ARIMA (Auto-Regressive Integrated Moving Average) models estimated by Maximum Likelihood (ML). As a competitive tool to the statistical forecasting models, we use the popular classic neural network (NN) of perceptron type. To train the NN, the Back-Propagation (BP) algorithm and heuristics such as the genetic and micro-genetic algorithms (GA and MGA) are implemented on a large data set. A comparative analysis of the selected learning methods is performed and evaluated. From the experiments we find that the optimal population size is likely 20, giving the lowest training time among all NNs trained by the evolutionary algorithms, while the prediction accuracy is somewhat lower but still acceptable to managers.
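As a sketch of the evolutionary training route, a minimal real-coded GA with the population size of 20 noted above, optimising an arbitrary loss over a weight vector; the tournament selection, uniform crossover, mutation scale, and sphere-function demo are illustrative assumptions (a real run would wrap the NN's forecasting error as the loss).

```python
import numpy as np

rng = np.random.default_rng(3)

def ga_train(loss, dim, pop_size=20, generations=100, mut_sigma=0.1):
    """Minimal real-coded GA for weight optimisation: elitism, binary
    tournament selection, uniform crossover, Gaussian mutation."""
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(generations):
        fit = np.array([loss(ind) for ind in pop])
        new = [pop[np.argmin(fit)]]                      # keep the elite
        while len(new) < pop_size:
            a, b = rng.integers(0, pop_size, 2)
            p1 = pop[a] if fit[a] < fit[b] else pop[b]   # tournament 1
            c, d = rng.integers(0, pop_size, 2)
            p2 = pop[c] if fit[c] < fit[d] else pop[d]   # tournament 2
            mask = rng.random(dim) < 0.5                 # uniform crossover
            child = np.where(mask, p1, p2) + rng.normal(0, mut_sigma, dim)
            new.append(child)
        pop = np.array(new)
    fit = np.array([loss(ind) for ind in pop])
    return pop[np.argmin(fit)]

# Demo: minimise the sphere function over a 3-dimensional "weight" vector:
best = ga_train(lambda w: float(np.sum(w ** 2)), dim=3)
```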

