Analysis of Artificial Neural Network Activation Functions for Detecting the Waveform Characteristics of Pig and Cow Spectra

CAUCHY ◽  
2012 ◽  
Vol 2 (3) ◽  
pp. 154
Author(s):  
Shofwan Ali Fauzi

Artificial Neural Networks (ANNs) are gradually taking over tasks once reserved for experts; an ANN can even serve as a tool in place of a doctor. One kind of ANN is the backpropagation network, which can be trained to recognize whether a wave spectrum comes from a pig or a cow. Determining the output in backpropagation training requires a suitable activation function, so this research compares several activation functions that can be used in training. Each activation function was tested with the ratio test to determine its interval of convergence. The ratio test identified the best activation function for backpropagation network training, because it has a weight range that satisfies the method used to determine the weights. When tested on the data, this activation function correctly recognized all trial data. Future research is expected to examine weights that make the training interval converge quickly and with little error.
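The abstract above does not name the winning activation function, but the functions usually compared in backpropagation studies are the sigmoid and the hyperbolic tangent. As an illustrative sketch (not the article's code), here they are with the derivatives that drive the weight updates:

```python
import math

# Two activation functions commonly compared in backpropagation studies
# (illustrative only; the article does not name its winning function here).

def sigmoid(x):
    """Binary sigmoid, range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_prime(x):
    """Derivative of the sigmoid: s(x) * (1 - s(x))."""
    s = sigmoid(x)
    return s * (1.0 - s)

def tanh_prime(x):
    """Derivative of tanh: 1 - tanh(x)^2."""
    t = math.tanh(x)
    return 1.0 - t * t

# Derivatives peak at 0, which affects weight-update magnitudes.
print(round(sigmoid(0.0), 4))        # 0.5
print(round(sigmoid_prime(0.0), 4))  # 0.25
print(round(tanh_prime(0.0), 4))     # 1.0
```

The derivative's range matters for convergence: tanh's larger gradient near zero typically yields faster early training than the sigmoid's.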

2021 ◽  
Vol 26 (jai2021.26(1)) ◽  
pp. 32-41
Author(s):  
Bodyanskiy Y ◽  
Antonenko T

Modern deep neural networks face a number of issues related to the learning process and computational cost. This article considers an architecture grounded in an alternative approach to the basic unit of the neural network. The approach optimizes the calculations and offers an alternative way to address the vanishing- and exploding-gradient problems. The main subject of the article is a deep stacked neo-fuzzy system, which uses a generalized neo-fuzzy neuron to optimize the learning process. Because the approach is non-standard from a theoretical point of view, the paper presents the necessary mathematical derivations and describes the practical intricacies of using this architecture. The network learning process is fully disclosed theoretically, and all calculations required to apply the backpropagation algorithm to network training are derived. A feature of the network is the rapid calculation of the derivative of the neurons' activation functions, achieved through the use of fuzzy membership functions. The paper shows that the derivative of such a function is a constant, which supports the claim of a higher optimization rate compared with neural networks that use more common activation functions (ReLU, sigmoid). The paper also highlights the main points that can be improved in further theoretical work on this topic; in general, these concern the calculation of the activation function. The proposed methods address these points and allow approximation using the network, and the authors already have theoretical justification for improving the network's speed and approximation properties. The results of comparing the proposed network with standard neural network architectures are shown.
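The constant-derivative property claimed above can be seen with a triangular membership function, a standard choice for neo-fuzzy neurons. This is a minimal sketch (function names are illustrative, not from the paper):

```python
# With triangular membership functions, the derivative inside each segment
# is a constant, unlike sigmoid/ReLU-style activations, so backpropagation
# needs no per-point derivative evaluation.

def tri_membership(x, a, b, c):
    """Triangular membership with support [a, c] and peak 1.0 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def tri_derivative(x, a, b, c):
    """Piecewise-constant derivative: 1/(b-a) rising, -1/(c-b) falling."""
    if x <= a or x >= c:
        return 0.0
    if x < b:
        return 1.0 / (b - a)
    return -1.0 / (c - b)

# On the rising slope the derivative is the same constant everywhere:
print(tri_derivative(0.1, 0.0, 0.5, 1.0))  # 2.0
print(tri_derivative(0.4, 0.0, 0.5, 1.0))  # 2.0
```

Because the derivative is a stored constant per segment, the backward pass avoids the exponentials that sigmoid-based networks must evaluate.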


Author(s):  
Hüseyin Gürbüz

Activation functions are among the most significant properties of artificial neural networks (ANNs) because they are directly related to an ANN's ability to learn or model a system or function. Moreover, determining the optimal activation function in an ANN is significant because it is directly related to the network's level of success. In this experimental study, the effects of different types of wire electrodes, cooling techniques, and workpiece materials on surface roughness (Ra) and cutting speed (Vc) in wire electrical discharge machining (WEDM) were investigated by modelling them in ANNs with trainable activation functions (AFt). Several methods were applied to the data set in order to predict the Ra and Vc results optimally. Among these methods, a randomized ANN with AFt was found to be the best for robust prediction according to RMSE values: 0.280 for Vc and 0.2104 for Ra. The optimal activation functions for Ra and Vc were first- and third-degree trainable functions, respectively.
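The study's exact parameterization of its trainable activation functions is not given in this abstract; a common form is a polynomial whose coefficients are learned alongside the weights. A hypothetical first- and third-degree sketch:

```python
# Hedged sketch of "trainable" polynomial activations: the coefficients w
# are parameters updated during training rather than a fixed shape.
# The exact form used in the study is an assumption here.

def af_trainable_deg1(x, w):
    # first-degree: w0 + w1*x
    return w[0] + w[1] * x

def af_trainable_deg3(x, w):
    # third-degree: w0 + w1*x + w2*x^2 + w3*x^3
    return w[0] + w[1] * x + w[2] * x**2 + w[3] * x**3

w1 = [0.0, 1.0]              # starts as the identity
w3 = [0.0, 1.0, 0.0, -0.1]   # mild saturation via the cubic term
print(af_trainable_deg1(2.0, w1))  # 2.0
print(af_trainable_deg3(2.0, w3))  # ~1.2
```

A higher-degree trainable function can fit more curvature in the response (here, Vc) at the cost of more parameters per neuron.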


2019 ◽  
Author(s):  
Margaret Gullick ◽  
James R. Booth

Crossmodal integration is a critical component of successful reading, and yet it has been less studied than reading’s unimodal subskills. Proficiency with the sounds of a language (i.e., the phonemes) and with the visual representations of these sounds (graphemes) are both important and necessary precursors for reading, but the formation of a stable integrated representation that combines and links these aspects, and subsequent fluent and automatic access to this crossmodal representation, is unique to reading and is required for its success. Indeed, individuals with specific difficulties in reading, as in dyslexia, demonstrate impairments not only in phonology and orthography but also in integration. Impairments in only crossmodal integration could result in disordered reading via disrupted formation of or access to phoneme–grapheme associations. Alternately, the phonological deficits noted in many individuals with dyslexia may lead to reading difficulties via issues with integration: children who cannot consistently identify and manipulate the sounds of their language will also have trouble matching these sounds to their visual representations, resulting in the manifested deficiencies. We here discuss the importance of crossmodal integration in reading, both generally and as a potential specific causal deficit in the case of dyslexia. We examine the behavioral, functional, and structural neural evidence for a crossmodal, as compared to unimodal, processing issue in individuals with dyslexia in comparison to typically developing controls. We then present an initial review of work using crossmodal- versus unimodal-based reading interventions and training programs aimed at the amelioration of reading difficulties. Finally, we present some remaining questions reflecting potential areas for future research into this topic.


2019 ◽  
Vol 12 (3) ◽  
pp. 156-161 ◽  
Author(s):  
Aman Dureja ◽  
Payal Pahwa

Background: Activation functions play an important role in building a deep neural network, and the choice of activation function affects the network's optimization and the quality of its results. Several activation functions have been introduced in machine learning for many practical applications, but which activation function to use in the hidden layers of deep neural networks had not been identified. Objective: The primary objective of this analysis was to determine which activation function should be used in the hidden layers of deep neural networks to solve complex non-linear problems. Methods: The comparative model was configured on a two-class dataset (Cat/Dog). The network used 3 convolutional layers, each followed by a pooling layer. The dataset was divided into two parts: the first 8000 images were used for training the network and the remaining 2000 images for testing it. Results: The experimental comparison was performed by analyzing the network with different activation functions (ReLU, Tanh, SELU, PReLU, ELU) in the hidden layers, measuring validation error and accuracy on the Cat/Dog dataset. Overall, ReLU gave the best performance, with a validation loss of 0.3912 and a validation accuracy of 0.8320 at the 25th epoch. Conclusion: A CNN model with ReLU in the hidden layers (3 hidden layers here) gives the best results and improves overall performance in terms of accuracy and speed. These advantages of ReLU in CNN hidden layers support effective and fast retrieval of images from databases.
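The five activations compared above differ mainly in how they treat negative inputs. A plain-Python sketch (SELU's constants are the standard published values; PReLU's slope is a learned parameter, fixed to 0.25 here for illustration):

```python
import math

# The five activations compared in the study, stated explicitly.

def relu(x):
    return max(0.0, x)

def tanh_(x):
    return math.tanh(x)

def elu(x, alpha=1.0):
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

def selu(x):
    # standard self-normalizing constants (Klambauer et al.)
    scale, alpha = 1.0507, 1.67326
    return scale * (x if x > 0 else alpha * (math.exp(x) - 1.0))

def prelu(x, a=0.25):
    # slope `a` is learned in practice; 0.25 is an illustrative value
    return x if x > 0 else a * x

for f in (relu, tanh_, elu, selu, prelu):
    print(f.__name__, round(f(-1.0), 4), round(f(1.0), 4))
```

ReLU's zero cost for negative inputs and unsaturated positive branch are the usual explanations for the speed advantage reported in the abstract.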


Author(s):  
Tera D. Letzring

This chapter identifies several well-established findings and overarching themes within personality trait accuracy research, and highlights especially promising directions for future research. Topics include (1) theoretical frameworks for accuracy, (2) moderators of accuracy and the context or situation in which judgments are made, (3) the important consequences of accuracy, (4) interventions and training programs to increase judgmental ability and judgability, (5) the generalizability of previous findings, and (6) standardized tests of the accuracy of judging personality traits. The chapter ends by stating that it is an exciting time to be a researcher studying the accuracy of personality trait judgments.


Author(s):  
Volodymyr Shymkovych ◽  
Sergii Telenyk ◽  
Petro Kravets

This article introduces a method for realizing the Gaussian activation function of radial-basis-function (RBF) neural networks in hardware on field-programmable gate arrays (FPGAs). Results of modeling the Gaussian function on FPGA chips of different families are presented, and RBF neural networks of various topologies have been synthesized and investigated. The hardware component implemented by this algorithm is an RBF neural network with four hidden-layer neurons and one neuron with a sigmoid activation function, realized on an FPGA using 16-bit fixed-point numbers and occupying 1193 lookup tables (LUTs). Each hidden-layer neuron of the RBF network is implemented on the FPGA as a separate computing unit. The total delay of the RBF network's combinational circuit was 101.579 ns. The implementation of the Gaussian activation functions of the hidden layer occupies 106 LUTs, with a delay of 29.33 ns and an absolute error of ±0.005. The Spartan-3 family of chips was used to obtain these results; modeling on chips of other series is also presented in the article. Hardware implementation of RBF neural networks at such speeds allows them to be used in real-time control systems for high-speed objects.
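The 16-bit fixed-point setting can be mirrored in software to see the quantization error at work. This is a sketch under assumed parameters (the Q8.8 format and helper names are not from the article; an FPGA would typically use a lookup table rather than `exp()`):

```python
import math

FRAC_BITS = 8          # assumed Q8.8: 8 integer bits, 8 fractional bits
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Quantize a float to a Q8.8 integer."""
    return int(round(x * SCALE))

def to_float(q):
    """Convert a Q8.8 integer back to a float."""
    return q / SCALE

def gaussian_fixed(x_q, c_q, sigma=1.0):
    """Gaussian activation exp(-(x-c)^2 / (2*sigma^2)), fixed-point I/O."""
    d = to_float(x_q - c_q)
    return to_fixed(math.exp(-(d * d) / (2.0 * sigma * sigma)))

# Error vs. the floating-point reference stays within one quantization step,
# consistent in spirit with the small absolute error reported above.
x, c = 0.5, 0.0
approx = to_float(gaussian_fixed(to_fixed(x), to_fixed(c)))
exact = math.exp(-x * x / 2.0)
print(abs(approx - exact) <= 1.0 / SCALE)  # True
```

With 8 fractional bits the quantization step is 1/256 ≈ 0.0039, of the same order as the ±0.005 absolute error reported for the hardware.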


2021 ◽  
Vol 11 (15) ◽  
pp. 6704
Author(s):  
Jingyong Cai ◽  
Masashi Takemoto ◽  
Yuming Qiu ◽  
Hironori Nakajo

Despite being heavily used in the training of deep neural networks (DNNs), multipliers are resource-intensive and unavailable or insufficient in many scenarios. Previous work has shown the advantage of computing activation functions, such as the sigmoid, with shift-and-add operations, although these approaches fail to remove multiplications from training altogether. In this paper, we propose an approach that converts all multiplications in the forward and backward inferences of DNNs into shift-and-add operations. Because the model parameters and backpropagated errors of a large DNN model are typically clustered around zero, these values can be approximated by their sine values. Multiplications between the weights and error signals are transferred to multiplications of their sine values, which can be replaced with simpler operations using the product-to-sum formula. In addition, a rectified sine activation function is used to convert layer inputs into sine values. In this way, the original multiplication-intensive operations can be computed through simple shift-and-add operations. This trigonometric approximation method provides an efficient training and inference alternative for devices without sufficient hardware multipliers. Experimental results demonstrate that the method achieves performance close to that of classical training algorithms. The proposed approach sheds new light on future hardware customization research for machine learning.
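The product-to-sum identity the abstract relies on is sin(a)·sin(b) = [cos(a−b) − cos(a+b)] / 2. A minimal sketch of the trick, assuming (as the paper states) values clustered near zero so that w ≈ sin(w):

```python
import math

# For near-zero values, a weight-times-error product becomes a product of
# sines, which the product-to-sum identity turns into a difference of
# cosines -- no multiplier needed once cosine is tabulated in hardware.

def mul_via_sines(w, e):
    # sin(w) * sin(e) = (cos(w - e) - cos(w + e)) / 2
    return 0.5 * (math.cos(w - e) - math.cos(w + e))

w, e = 0.05, -0.02               # typical near-zero weight and error signal
approx = mul_via_sines(w, e)
exact = w * e
print(abs(approx - exact) < 1e-6)  # True for values this close to zero
```

The approximation error grows with |w| and |e| (since sin(x) ≈ x only near zero), which is why the method leans on the observed clustering of parameters and errors around zero.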


2021 ◽  
Vol 11 (11) ◽  
pp. 4754
Author(s):  
Assia Aboubakar Mahamat ◽  
Moussa Mahamat Boukar ◽  
Nurudeen Mahmud Ibrahim ◽  
Tido Tiwa Stanislas ◽  
Numfor Linda Bih ◽  
...  

Earth-based materials have shown promise in the development of ecofriendly and sustainable construction materials. However, their unconventional usage in the construction field makes the estimation of their properties difficult and inaccurate, since their properties are often determined using procedures designed for conventional materials. To obtain more accurate estimates, a support vector machine (SVM), an artificial neural network (ANN), and linear regression (LR) were used to predict the compressive strength of alkali-activated termite soil. In this study, factors such as activator concentration, Si/Al ratio, initial curing temperature, water absorption, weight, and curing regime were used as input parameters because of their significant effect on compressive strength. The experimental results show that SVM outperforms ANN and LR in terms of R2 score and root mean square error (RMSE).
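The two metrics used to rank the models can be computed in a few lines of plain Python. The numbers below are illustrative only (the paper's data are not reproduced here):

```python
import math

# RMSE and R2: the two model-ranking metrics named in the abstract.

def rmse(y_true, y_pred):
    """Root mean square error: lower is better."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1.0 is a perfect fit."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical measured vs. predicted compressive strengths (MPa):
y_true = [10.0, 12.0, 14.0, 16.0]
y_pred = [10.5, 11.5, 14.5, 15.5]
print(round(rmse(y_true, y_pred), 4))      # 0.5
print(round(r2_score(y_true, y_pred), 4))  # 0.95
```

Ranking by both metrics together, as the study does, guards against a model that minimizes average error while explaining little of the variance.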


Toxins ◽  
2019 ◽  
Vol 11 (3) ◽  
pp. 133 ◽  
Author(s):  
Annika Jagels ◽  
Viktoria Lindemann ◽  
Sebastian Ulrich ◽  
Christoph Gottschalk ◽  
Benedikt Cramer ◽  
...  

The genus Stachybotrys produces a broad diversity of secondary metabolites, including macrocyclic trichothecenes, atranones, and phenylspirodrimanes. Although the class of the phenylspirodrimanes is the major one and consists of a multitude of metabolites bearing various structural modifications, few investigations have been carried out. Thus, the presented study deals with the quantitative determination of several secondary metabolites produced by distinct Stachybotrys species for comparison of their metabolite profiles. For that purpose, 15 of the primarily produced secondary metabolites were isolated from fungal cultures and structurally characterized in order to be used as analytical standards for the development of an LC-MS/MS multimethod. The developed method was applied to the analysis of micro-scale extracts from 5 different Stachybotrys strains, which were cultured on different media. In that process, spontaneous dialdehyde/lactone isomerization was observed for some of the isolated secondary metabolites, and novel stachybotrychromenes were quantitatively investigated for the first time. The metabolite profiles of Stachybotrys species are considerably influenced by time of growth and substrate availability, as well as the individual biosynthetic potential of the respective species. Regarding the reported adverse effects associated with Stachybotrys growth in building environments, combinatory effects of the investigated secondary metabolites should be addressed and the role of the phenylspirodrimanes re-evaluated in future research.


Mathematics ◽  
2020 ◽  
Vol 8 (5) ◽  
pp. 766
Author(s):  
Rashad A. R. Bantan ◽  
Ramadan A. Zeineldin ◽  
Farrukh Jamal ◽  
Christophe Chesneau

The Deanship of Scientific Research (DSR) established by King Abdulaziz University (KAU) provides research programs for its staff and researchers and encourages them to submit proposals. The distinct research study (DRS) is one of these programs; it is available all year, and KAU staff can submit up to three proposals at the same time. The rules of the DRS program are simple and easy, so it contributes to increasing KAU's international rank. Authors are offered financial and moral rewards after publishing articles from these proposals in Thomson-ISI journals. In this paper, a multilayer perceptron (MLP) artificial neural network (ANN) is employed to determine the factors that most affect the number of ISI-published articles. The study used real data on projects completed from 2011 to April 2019.

