Recently Published Documents


TOTAL DOCUMENTS: 3 (FIVE YEARS 0)
H-INDEX: 1 (FIVE YEARS 0)

2020 ◽ Vol 1 (2) ◽ pp. 57-68
Author(s): IRWAN IRWAN, ASTRI YUNI HASHARI, HISYAM IHSAN, AHMAD ZAKI

The Self-Organizing Map (SOM) is a form of unsupervised neural network whose learning process does not require an output target. The clusters in this research consist of one or more regency/city areas that share certain characteristics across the variables. Each cluster formation was validated using the Davies-Bouldin Index to identify the best clustering produced by the SOM learning process; the best cluster model is the one with the smallest Davies-Bouldin Index value. This research used 30 variables drawn from the key statistics of South Sulawesi Province People's Prosperity in 2018 published by BPS of South Sulawesi Province. Four cluster formation models were built, ranging from 2 clusters up to 5 clusters. Based on the Davies-Bouldin Index, the 5-cluster model had the minimum value of 0.17.
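The model-selection procedure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the SOM below is a toy 1-D map written in NumPy, the data are synthetic stand-ins for the 30 prosperity variables, and only `davies_bouldin_score` from scikit-learn corresponds directly to the validation measure named in the abstract.

```python
import numpy as np
from sklearn.metrics import davies_bouldin_score

def train_som(data, n_units, n_iter=500, lr=0.5, seed=0):
    """Toy 1-D SOM: each unit serves as one cluster prototype."""
    rng = np.random.default_rng(seed)
    # initialise prototypes at distinct random data points
    weights = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        # best-matching unit = closest prototype to the sample
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
        decay = np.exp(-t / n_iter)                 # shrinking learning/neighbourhood factor
        dist = np.abs(np.arange(n_units) - bmu)     # lattice distance to the BMU
        h = np.exp(-(dist ** 2) / (2 * decay ** 2)) # neighbourhood function
        weights += lr * decay * h[:, None] * (x - weights)
    return weights

def assign(data, weights):
    """Label each sample with its nearest prototype."""
    return np.argmin(np.linalg.norm(data[:, None] - weights[None], axis=2), axis=1)

# Synthetic stand-in data: five 2-D blobs instead of the 30 BPS variables.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(loc, 0.3, (40, 2))
                  for loc in ([0, 0], [3, 3], [0, 3], [3, 0], [6, 1.5])])

# Compare 2- to 5-cluster SOM models by Davies-Bouldin index (lower = better),
# mirroring the selection procedure in the abstract.
scores = {}
for k in range(2, 6):
    labels = assign(data, train_som(data, k))
    if len(set(labels)) > 1:          # DB index needs at least two clusters
        scores[k] = davies_bouldin_score(data, labels)
best = min(scores, key=scores.get)    # model with the smallest DB index wins
```

The same loop applies unchanged to the real 30-variable regency/city data; only the `data` array would differ.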


2015 ◽ Vol 2 (1) ◽ pp. 28
Author(s): Dahriani Hakim Tanjung

This study aims to predict asthma using a pattern recognition technique, namely an artificial neural network trained with the backpropagation method. The asthma assessment data refer to a person's history of asthma. The network is built by determining the number of units in each layer, with a binary sigmoid activation function. Testing was carried out in MATLAB across several network architectures. The best configuration consists of 18 input units, 8 hidden units, and 4 output units, with a learning rate of 0.5 and an error tolerance of 0.001, reaching a maximum of 4707 epochs and an MSE of 0.00100139. These parameters were chosen as the best because they reached the error tolerance in a reasonable number of iterations and produced the smallest MSE among the tested architectures. The binary sigmoid function is used for neural networks trained with the backpropagation method; its output lies in the range 0 to 1, so it is often used for networks whose output values must fall in the interval 0 to 1.
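The training setup described above (18-8-4 architecture, binary sigmoid, learning rate 0.5, error tolerance 0.001) can be sketched in NumPy. This is a hedged illustration, not the authors' MATLAB code: the asthma records are not public, so `X` and `T` below are random stand-in data with the stated dimensions.

```python
import numpy as np

def sigmoid(x):
    # binary sigmoid activation: output in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Hypothetical stand-in for the asthma assessment records:
# 20 samples, 18 input features, 4 one-hot output classes.
X = rng.random((20, 18))
T = np.eye(4)[rng.integers(0, 4, 20)]

# Weights for the 18-8-4 network; learning rate and tolerance from the abstract.
W1 = rng.normal(0.0, 0.5, (18, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 4));  b2 = np.zeros(4)
lr, tol, n = 0.5, 0.001, len(X)

mse0 = None
for epoch in range(10000):
    H = sigmoid(X @ W1 + b1)      # forward pass: hidden layer
    Y = sigmoid(H @ W2 + b2)      # forward pass: output layer
    mse = np.mean((T - Y) ** 2)
    if mse0 is None:
        mse0 = mse                # record the starting error
    if mse < tol:                 # stop once the error tolerance is reached
        break
    # backward pass: propagate the error through the sigmoid derivatives
    dY = (Y - T) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= lr * (H.T @ dY) / n; b2 -= lr * dY.mean(0)
    W1 -= lr * (X.T @ dH) / n; b1 -= lr * dH.mean(0)
```

Gradients here are averaged over the batch for stability; with the real assessment data the stopping epoch and final MSE would of course differ from the 4707/0.00100139 reported above.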


Author(s): Vsevolod Kapatsinski

Russian velar palatalization changes velars into alveopalatals before certain suffixes, including the stem extension -i and the diminutive suffixes -ok and -ek/ik. While velar palatalization always applies before the relevant suffixes in the established lexicon, it often fails with nonce loanwords before -i and -ik but not before -ok or -ek. This is shown to be predicted by the Minimal Generalization Learner (MGL), a model of rule induction and weighting developed by Albright and Hayes (Cognition 90: 119–161, 2003), by a novel version of Network Theory (Bybee, Morphology: A study of the relation between meaning and form, John Benjamins, 1985, Phonology and language use, Cambridge University Press, 2001), which uses competing unconditional product-oriented schemas weighted by type frequency and paradigm uniformity constraints, and by stochastic Optimality Theory with language-specific constraints learned using the Gradual Learning Algorithm (GLA, Boersma, Proceedings of the Institute of Phonetic Sciences of the University of Amsterdam 21: 43–58, 1997). The successful models are shown to predict that a morphophonological rule will fail if the triggering suffix comes to attach to inputs that are not eligible to undergo the rule. This prediction is confirmed in an artificial grammar learning experiment. Under either model, the choice between generalizations or output forms is shown to be stochastic, which requires retrieving known word-forms from the lexicon as wholes, rather than generating them through the grammar. Furthermore, MGL and GLA are shown to succeed only if the suffix and the stem shape are chosen simultaneously, as opposed to the suffix being chosen first and then triggering (or failing to trigger) a stem change.
In addition, the GLA is shown to require output-output faithfulness to be ranked above markedness at the beginning of learning (Hayes, Phonological acquisition in Optimality Theory: the early stages, Cambridge University Press, 2004) to account for the present data.
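The stochastic choice between competing product-oriented schemas can be sketched as follows. This is a toy illustration in the spirit of the type-frequency-weighted competition described above, not the paper's actual model; the schema labels and frequencies are invented for the example.

```python
import random

# Two competing output schemas for a nonce velar-final stem before -ik:
# the type frequencies are hypothetical, chosen only for illustration.
schemas = {
    "palatalized (…tʃ-ik)": 30,  # hypothetical type frequency
    "faithful (…k-ik)": 70,      # hypothetical type frequency
}

# Each nonce form selects a schema with probability proportional to the
# schema's type frequency, so the outcome is stochastic rather than rule-like.
random.seed(0)
outputs = random.choices(list(schemas), weights=list(schemas.values()), k=1000)
share_faithful = outputs.count("faithful (…k-ik)") / len(outputs)
```

Under these made-up weights roughly 70% of nonce forms would surface without palatalization, mirroring the kind of gradient failure the abstract reports for -i and -ik.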

