Determining the Architecture of a Backpropagation Artificial Neural Network (Initial Weights and Initial Biases) Using a Genetic Algorithm

Author(s):  
Christian Dwi Suhendra ◽  
Retantyo Wardoyo

Abstract: The weaknesses of the backpropagation neural network are that it is very slow to converge and that it suffers from local minima, which often leave artificial neural networks (ANNs) trapped in a local minimum. A good combination of architecture, initial weights, and initial biases is essential to overcome these weaknesses. This study developed a method to determine that combination of parameters. So far, trial and error has commonly been used to select the combination of hidden layers, initial weights, and initial biases. The initial weights and biases are used as parameters when evaluating the fitness value. The sum of squared errors (SSE) determines the best individual: the individual with the smallest SSE is the best. The best combination of architecture, initial weights, and biases is then used as the parameters for backpropagation neural network learning. The result of this study is an alternative solution to the problem of determining the learning parameters in backpropagation. The results show that the genetic algorithm can provide a solution for backpropagation learning, improve accuracy, and reduce training time compared with manually determined parameters. Keywords: Artificial neural network, genetic algorithm, backpropagation, SSE, local minima.
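The selection loop the abstract describes can be sketched as a small genetic algorithm whose individuals are candidate initial weight/bias vectors and whose fitness is the network's SSE. This is a minimal illustration, not the authors' implementation: the fixed 2-2-1 architecture, the XOR toy data, and the GA operators (elitism, one-point crossover, Gaussian mutation) are all illustrative assumptions.

```python
import math, random

random.seed(0)

# Toy dataset (XOR) and a fixed 2-2-1 architecture; an individual is the
# flat vector of all weights and biases (4 hidden weights + 2 hidden biases
# + 2 output weights + 1 output bias = 9 genes).
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
N_GENES = 9

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, x):
    h = [sigmoid(w[0]*x[0] + w[1]*x[1] + w[4]),
         sigmoid(w[2]*x[0] + w[3]*x[1] + w[5])]
    return sigmoid(w[6]*h[0] + w[7]*h[1] + w[8])

def sse(w):
    # Fitness: sum of squared errors over the dataset (smaller is better).
    return sum((forward(w, x) - t) ** 2 for x, t in DATA)

def evolve(pop_size=30, gens=40, mut_rate=0.2):
    pop = [[random.uniform(-1, 1) for _ in range(N_GENES)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=sse)                       # smallest SSE = fittest
        next_pop = [pop[0][:], pop[1][:]]       # elitism: keep the two best
        while len(next_pop) < pop_size:
            a, b = random.sample(pop[:10], 2)   # select parents from the best 10
            cut = random.randrange(1, N_GENES)
            child = a[:cut] + b[cut:]           # one-point crossover
            child = [g + random.gauss(0, 0.5) if random.random() < mut_rate else g
                     for g in child]            # Gaussian mutation
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=sse)

# The evolved vector would then seed backpropagation instead of random init.
best = evolve()
```

In the paper's setting the same loop would also search over hidden-layer sizes, with each individual encoding an architecture along with its weights.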

Author(s):  
Ade chandra Saputra

One of the weaknesses of the backpropagation artificial neural network (ANN) is getting stuck in local minima. The learning rate is an important parameter because it determines how fast the ANN learns. This research develops a method for finding the learning rate with a genetic algorithm when network learning stalls and the error has not yet reached the stopping criterion or convergence. The genetic algorithm determines the learning rate by computing a fitness function from the ANN weights, error gradient, and biases. The fitness function produces an error value for each candidate learning rate, each representing an individual in the genetic algorithm. Each individual is scored by its sum of squared errors (SSE); the one with the smallest SSE is the best individual. The chosen learning rate is then used to continue learning, lowering the error or speeding convergence. The final result of this study is a new solution to the problem of determining the learning parameters in backpropagation. These results indicate that the genetic algorithm can decrease the SSE when ANN learning has stalled at a large error or is stuck in a local minimum.
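The idea of scoring candidate learning rates by the SSE they would produce can be illustrated on a toy problem. This is a sketch under stated assumptions: the objective is a simple one-parameter least-squares model (not an ANN), and the GA here is plain truncation selection with Gaussian mutation; fitness is the SSE after one gradient step with the candidate rate, as the abstract describes.

```python
import random

random.seed(1)

# Toy objective: SSE of a linear model y = w*x on fixed data (true w = 2).
XS = [1.0, 2.0, 3.0]
YS = [2.0, 4.0, 6.0]

def sse(w):
    return sum((w * x - y) ** 2 for x, y in zip(XS, YS))

def grad(w):
    return sum(2 * (w * x - y) * x for x, y in zip(XS, YS))

def fitness(lr, w):
    # SSE after taking one gradient step with this candidate learning rate.
    return sse(w - lr * grad(w))

def pick_learning_rate(w, pop_size=20, gens=15):
    pop = [random.uniform(0.0, 0.2) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda lr: fitness(lr, w))   # smallest post-step SSE wins
        survivors = pop[:pop_size // 2]           # truncation selection
        children = [max(0.0, lr + random.gauss(0, 0.01)) for lr in survivors]
        pop = survivors + children                # mutated copies refill the pool
    return pop[0]

w_stuck = 0.5                  # weight at which plain backprop has stalled
lr = pick_learning_rate(w_stuck)
```

Restarting gradient descent from the stalled weights with the selected rate should lower the SSE faster than continuing with the old rate.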


2016 ◽  
Vol 5 (4) ◽  
pp. 126 ◽  
Author(s):  
I MADE DWI UDAYANA PUTRA ◽  
G. K. GANDHIADI ◽  
LUH PUTU IDA HARINI

Weather information plays an important role in many areas of human life, such as agriculture, shipping, and aviation, and accurate forecasts are needed to improve performance in these fields. This study uses an artificial neural network with the backpropagation learning algorithm to build a weather forecasting model for the South Bali area. The aim is to determine the effect of the number of neurons in the hidden layer and to measure the accuracy of the method. The forecast model takes as input factors that influence the weather: air temperature, dew point, wind speed, visibility, and barometric pressure. Testing the network with different numbers of hidden neurons shows that accuracy is not directly proportional to hidden-layer size; adding neurons does not necessarily raise or lower forecast accuracy. The best accuracy obtained is 51.6129%, with three neurons in the hidden layer.
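The experiment pattern here (train the same network with different hidden-layer sizes, then compare accuracy) can be sketched as follows. The data, architecture, and training settings are illustrative assumptions: a two-feature, two-class toy problem stands in for the five weather inputs, and a hand-rolled one-hidden-layer backpropagation trainer stands in for the study's model.

```python
import math, random

random.seed(2)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))

# Toy 2-class stand-in for the weather data: two noisy clusters.
data = ([([random.gauss(0, 0.4), random.gauss(0, 0.4)], 0) for _ in range(20)] +
        [([random.gauss(2, 0.4), random.gauss(2, 0.4)], 1) for _ in range(20)])

def train_and_score(n_hidden, epochs=200, lr=0.5):
    # One hidden layer of n_hidden sigmoid units, one sigmoid output, SSE loss.
    W = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(n_hidden)]
    b = [0.0] * n_hidden
    V = [random.uniform(-1, 1) for _ in range(n_hidden)]
    c = 0.0
    for _ in range(epochs):
        for x, t in data:
            h = [sigmoid(W[j][0]*x[0] + W[j][1]*x[1] + b[j]) for j in range(n_hidden)]
            out = sigmoid(sum(V[j]*h[j] for j in range(n_hidden)) + c)
            d_out = (out - t) * out * (1 - out)          # output delta
            for j in range(n_hidden):
                d_h = d_out * V[j] * h[j] * (1 - h[j])   # hidden delta
                V[j] -= lr * d_out * h[j]
                W[j][0] -= lr * d_h * x[0]
                W[j][1] -= lr * d_h * x[1]
                b[j] -= lr * d_h
            c -= lr * d_out
    correct = 0
    for x, t in data:
        h = [sigmoid(W[j][0]*x[0] + W[j][1]*x[1] + b[j]) for j in range(n_hidden)]
        out = sigmoid(sum(V[j]*h[j] for j in range(n_hidden)) + c)
        correct += int((out > 0.5) == bool(t))
    return correct / len(data)

# Sweep hidden-layer sizes, as the study does, and compare accuracies.
scores = {h: train_and_score(h) for h in (1, 2, 3, 5, 8)}
```

As the abstract observes, the per-size accuracies need not increase monotonically with the number of hidden neurons.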


2017 ◽  
Vol 31 (4) ◽  
pp. 436-456 ◽  
Author(s):  
Abbas Javed ◽  
Hadi Larijani ◽  
Ali Ahmadinia ◽  
Rohinton Emmanuel

The random neural network (RNN) is a probabilistic, queueing-theory-based model for artificial neural networks, and it requires optimization algorithms for training. Commonly used gradient descent learning algorithms may settle in local minima; evolutionary algorithms can be used to avoid them. Other techniques such as artificial bee colony (ABC), particle swarm optimization (PSO), and differential evolution also perform well at finding the global minimum, but they converge slowly. The sequential quadratic programming (SQP) optimization algorithm can find the optimum neural network weights, but it can also get stuck in local minima. We propose to overcome the shortcomings of these approaches by hybridizing ABC/PSO with SQP. The resulting algorithm is shown to compare favorably with other known techniques for training the RNN. The results show that hybrid ABC learning with SQP outperforms the other training algorithms in terms of mean squared error and normalized root mean squared error.
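The hybrid idea (a population-based global search followed by a local optimizer) can be illustrated in miniature. This sketch is not the paper's ABC/PSO+SQP method: a multimodal test function stands in for the RNN's error surface, standard PSO plays the global stage, and a crude coordinate-descent refinement stands in for the SQP stage.

```python
import math, random

random.seed(3)

def rastrigin(p):
    # Multimodal stand-in for a network's error surface (global minimum 0 at origin).
    return sum(x*x - 10*math.cos(2*math.pi*x) + 10 for x in p)

def pso(f, dim=2, n=30, iters=100):
    # Standard PSO: inertia 0.7, cognitive and social coefficients 1.5.
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0]*dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7*vel[i][d] + 1.5*r1*(pbest[i][d] - pos[i][d])
                             + 1.5*r2*(gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

def local_refine(f, p, step=0.1, iters=200):
    # Crude axis-wise descent standing in for the SQP local stage.
    p = p[:]
    for _ in range(iters):
        improved = False
        for d in range(len(p)):
            for s in (step, -step):
                q = p[:]; q[d] += s
                if f(q) < f(p):
                    p, improved = q, True
        if not improved:
            step *= 0.5
        if step < 1e-9:
            break
    return p

# Global search escapes poor basins; local refinement then converges quickly.
best = local_refine(rastrigin, pso(rastrigin))
```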


2014 ◽  
Vol 28 (1) ◽  
pp. 73-83 ◽  
Author(s):  
Abozar Nasirahmadi ◽  
Mohammad H. Abbaspour-Fard ◽  
Bagher Emadi ◽  
Nasser Behroozi Khazaei

Abstract The present investigation analyzes the compressive strength properties of two varieties (Tarom and Fajr) of parboiled paddy and milled rice: ultimate stress, modulus of elasticity, rupture force, and rupture energy. A combined artificial neural network and genetic algorithm was also applied to model these properties. The parboiled samples were prepared at three soaking temperatures (25, 50, and 75°C) and three steaming times (10, 15, and 20 min), then dried to final moisture contents of 8, 10, and 12% (w.b.). In general, the Tarom variety had higher compressive strength properties for paddy and milled rice than the Fajr variety. As steaming time increased from 10 to 20 min, all of these properties increased significantly, whereas they decreased as moisture content increased from 8 to 12% (w.b.). A coupled artificial neural network and genetic algorithm model with one hidden layer and three inputs (soaking temperature, steaming time, and moisture content) was developed to predict the compressive strength properties as model outputs. Results indicated that this model could predict these properties with high correlation and low mean squared error.
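The model's input side can be sketched as follows: min-max scaling of the three inputs (a common preprocessing step for such networks, assumed here rather than stated in the abstract) feeding a one-hidden-layer forward pass. All weights below are hypothetical placeholders; in the study they would be tuned by the genetic algorithm.

```python
import math

# Three inputs per sample: soaking temperature (°C), steaming time (min),
# moisture content (% w.b.) — levels taken from the study's design.
samples = [
    (25, 10, 8), (50, 15, 10), (75, 20, 12),
    (25, 20, 10), (75, 10, 12), (50, 10, 8),
]

# Min-max scale each column to [0, 1] so no input dominates training.
lo = [min(s[i] for s in samples) for i in range(3)]
hi = [max(s[i] for s in samples) for i in range(3)]
scaled = [[(s[i] - lo[i]) / (hi[i] - lo[i]) for i in range(3)] for s in samples]

# One-hidden-layer forward pass with illustrative, untrained weights.
W = [[0.4, -0.2, 0.1], [-0.3, 0.5, 0.2]]   # hypothetical hidden weights
b = [0.1, -0.1]                             # hypothetical hidden biases
V = [0.6, -0.4]                             # hypothetical output weights

def predict(x):
    h = [math.tanh(sum(w*v for w, v in zip(W[j], x)) + b[j]) for j in range(2)]
    return V[0]*h[0] + V[1]*h[1]            # e.g. a scaled rupture-energy estimate

preds = [predict(x) for x in scaled]
```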


Author(s):  
Wahyudin S

Inflation is a very important macroeconomic indicator, and many methods for forecasting Indonesian inflation have been published, yet the search for a more accurate method remains an active topic. This paper proposes a new inflation-forecasting method using an ARIMA model combined with an artificial neural network (ANN). The data are monthly year-on-year inflation figures from 2010 to 2018 published by Badan Pusat Statistik (BPS, Statistics Indonesia). First, two ARIMA models were built: one without and one with an annual cycle. Standard procedures and diagnostic tests were performed, including summary statistics, analysis of variance (ANOVA), significance tests of the coefficients, residual normality, heteroscedasticity, and stability. Comparing performance by root mean squared error (RMSE) showed that the ARIMA model with the annual cycle, ARIMA(2,1,0)(2,0,0)[12], was the better of the two. Then, to improve forecasting performance, an ANN was built on top of this ARIMA model, using one hidden layer with two neurons. Testing showed that the ANN model yields a smaller RMSE than the ARIMA(2,1,0)(2,0,0)[12] model, likely because of its ability to capture nonlinear relationships between the target and the explanatory variables.
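The two-stage hybrid can be illustrated in miniature: fit a linear autoregression first, then train a small network on the lagged residuals so the nonlinear leftovers are modeled too. This is a sketch under stated assumptions: a synthetic series and a plain AR(2) least-squares fit stand in for the paper's seasonal ARIMA stage, and a two-neuron tanh network stands in for its ANN.

```python
import math, random

random.seed(4)

# Synthetic monthly "inflation" series: AR structure plus a mild nonlinearity.
y = [3.0, 3.2]
for t in range(2, 120):
    y.append(1.5 + 0.6*y[t-1] - 0.2*y[t-2] + 0.3*math.sin(y[t-1])
             + random.gauss(0, 0.05))

# --- Stage 1: linear AR(2) fit (stand-in for the ARIMA stage) ---
rows = [(1.0, y[t-1], y[t-2]) for t in range(2, len(y))]
targets = y[2:]
# Solve the 3x3 normal equations by Gaussian elimination.
A = [[sum(r[i]*r[j] for r in rows) for j in range(3)] for i in range(3)]
bvec = [sum(r[i]*t for r, t in zip(rows, targets)) for i in range(3)]
for i in range(3):
    for j in range(i + 1, 3):
        f = A[j][i] / A[i][i]
        for k in range(3):
            A[j][k] -= f * A[i][k]
        bvec[j] -= f * bvec[i]
coef = [0.0, 0.0, 0.0]
for i in (2, 1, 0):
    coef[i] = (bvec[i] - sum(A[i][k]*coef[k] for k in range(i + 1, 3))) / A[i][i]

linear_pred = [sum(c*v for c, v in zip(coef, r)) for r in rows]
resid = [t - p for t, p in zip(targets, linear_pred)]

# --- Stage 2: one hidden layer, two tanh neurons, trained on lagged residuals ---
pairs = [((resid[t-1], resid[t-2]), resid[t]) for t in range(2, len(resid))]
W = [[random.uniform(-0.1, 0.1) for _ in range(2)] for _ in range(2)]
V = [random.uniform(-0.1, 0.1) for _ in range(2)]
b = [0.0, 0.0]
c0 = 0.0

def net(x):
    h = [math.tanh(W[j][0]*x[0] + W[j][1]*x[1] + b[j]) for j in range(2)]
    return V[0]*h[0] + V[1]*h[1] + c0

lr = 0.01
for _ in range(500):
    for x, t in pairs:
        h = [math.tanh(W[j][0]*x[0] + W[j][1]*x[1] + b[j]) for j in range(2)]
        out = V[0]*h[0] + V[1]*h[1] + c0
        d = out - t
        for j in range(2):
            dh = d * V[j] * (1 - h[j]**2)
            V[j] -= lr * d * h[j]
            W[j][0] -= lr * dh * x[0]
            W[j][1] -= lr * dh * x[1]
            b[j] -= lr * dh
        c0 -= lr * d

def rmse(errs):
    return math.sqrt(sum(e*e for e in errs) / len(errs))

ar_rmse = rmse(resid[2:])                          # linear stage alone
hybrid_rmse = rmse([t - net(x) for x, t in pairs])  # after the residual net
```

The hybrid forecast is the AR prediction plus the network's residual correction; in-sample, the correction should not make the fit worse.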


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 711
Author(s):  
Mina Basirat ◽  
Bernhard C. Geiger ◽  
Peter M. Roth

Information plane analysis, describing the mutual information between the input and a hidden layer and between a hidden layer and the target over time, has recently been proposed to analyze the training of neural networks. Since the activations of a hidden layer are typically continuous-valued, this mutual information cannot be computed analytically and must thus be estimated, resulting in apparently inconsistent or even contradicting results in the literature. The goal of this paper is to demonstrate how information plane analysis can still be a valuable tool for analyzing neural network training. To this end, we complement the prevailing binning estimator for mutual information with a geometric interpretation. With this geometric interpretation in mind, we evaluate the impact of regularization and interpret phenomena such as underfitting and overfitting. In addition, we investigate neural network learning in the presence of noisy data and noisy labels.
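The binning estimator the paper builds on can be sketched directly: discretize the continuous activations into equal-width bins and compute mutual information from the empirical bin frequencies. This is a generic illustration of the estimator, not the paper's code; the bin count and toy samples are assumptions.

```python
import math
import random
from collections import Counter

def binned_mutual_information(xs, ys, n_bins=10):
    """Estimate I(X;Y) in bits by discretizing each sample list into
    equal-width bins and plugging the empirical frequencies into the
    discrete mutual-information formula."""
    def bin_index(v, lo, hi):
        if hi == lo:
            return 0
        i = int((v - lo) / (hi - lo) * n_bins)
        return min(i, n_bins - 1)          # put the max value in the top bin

    lo_x, hi_x = min(xs), max(xs)
    lo_y, hi_y = min(ys), max(ys)
    bx = [bin_index(v, lo_x, hi_x) for v in xs]
    by = [bin_index(v, lo_y, hi_y) for v in ys]
    n = len(xs)
    pxy = Counter(zip(bx, by))             # joint bin counts
    px = Counter(bx)
    py = Counter(by)
    return sum((c / n) * math.log2((c / n) / ((px[i] / n) * (py[j] / n)))
               for (i, j), c in pxy.items())

# Sanity check: a variable shares maximal information with itself,
# and (up to estimator bias) almost none with independent noise.
random.seed(5)
xs = [random.gauss(0, 1) for _ in range(2000)]
mi_self = binned_mutual_information(xs, xs)
mi_noise = binned_mutual_information(xs, [random.gauss(0, 1) for _ in range(2000)])
```

The small but nonzero value on independent noise is exactly the kind of binning artifact that makes information-plane results sensitive to the estimator, which motivates the geometric interpretation the paper proposes.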

