Forecasting water demand using back propagation networks in the operation of reservoirs in the Citarum cascade, West Java, Indonesia

Author(s):  
Mulya R. Mashudi

This study investigates the use of Neural Networks (NN) as a potential means of more accurately forecasting water demand in the Citarum River basin cascade. Neural Networks have the ability to recognise nonlinear patterns when sufficiently trained with historical data. The study constructs a NN model of the cascade, based on Back Propagation Networks (BPN). Data representing physical characteristics and meteorological conditions in the Citarum River basin from 1989 through 1995 were used to train the BPN. Nonlinear activation functions (sigmoid, tangent, and Gaussian functions) and hidden layers in the BPN were chosen for the study.
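A one-hidden-layer back-propagation network with a selectable nonlinear activation, as the abstract describes, can be sketched as follows. The toy data, layer sizes, and learning rate are illustrative assumptions, not the actual Citarum basin dataset or architecture.

```python
import numpy as np

# Activation functions and their derivatives (derivative may use the
# pre-activation z, the output a, or both).
ACTS = {
    "sigmoid":  (lambda z: 1 / (1 + np.exp(-z)), lambda z, a: a * (1 - a)),
    "tanh":     (lambda z: np.tanh(z),           lambda z, a: 1 - a ** 2),
    "gaussian": (lambda z: np.exp(-z ** 2),      lambda z, a: -2 * z * a),
}

def train_bpn(X, y, hidden=8, act="tanh", lr=0.1, epochs=2000, seed=0):
    """Full-batch gradient descent (back propagation) on 0.5*MSE."""
    rng = np.random.default_rng(seed)
    f, df = ACTS[act]
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    for _ in range(epochs):
        z1 = X @ W1 + b1
        a1 = f(z1)
        out = a1 @ W2 + b2              # linear output for regression
        err = out - y                   # dL/dout for 0.5*MSE
        gW2 = a1.T @ err / len(X); gb2 = err.mean(0)
        d1 = (err @ W2.T) * df(z1, a1)  # back-propagated hidden error
        gW1 = X.T @ d1 / len(X);   gb1 = d1.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    return lambda Xn: f(Xn @ W1 + b1) @ W2 + b2

# Toy nonlinear "demand" curve to show the network can fit it.
X = np.linspace(-1, 1, 64).reshape(-1, 1)
y = np.sin(2 * X) + 0.3 * X ** 2
model = train_bpn(X, y, act="tanh")
mse = float(np.mean((model(X) - y) ** 2))
print(mse)
```

Swapping `act="tanh"` for `"sigmoid"` or `"gaussian"` exercises the other activation choices mentioned in the abstract.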

2009
Vol. 19 (06)
pp. 437-448
Author(s):  
MD. ASADUZZAMAN ◽  
MD. SHAHJAHAN ◽  
KAZUYUKI MURASE

Multilayer feed-forward neural networks are widely trained by minimizing an error function. Back propagation (BP) is a well-known training method for multilayer networks, but it often suffers from slow convergence. To make learning faster, we propose 'Fusion of Activation Functions' (FAF), in which different conventional activation functions (AFs) are combined to compute the final activation. This has not yet been studied extensively. One subgoal of the paper is to examine the role of linear AFs in the combination. We investigate whether FAF can make learning faster. The validity of the proposed method is examined through simulations on nine challenging real benchmark classification and time-series prediction problems. FAF has been applied to the 2-bit, 3-bit and 4-bit parity, Breast Cancer, Diabetes, Heart Disease, Iris, Wine, Glass and Soybean classification problems. The algorithm is also tested on the Mackey-Glass chaotic time-series prediction problem. It is shown to work better than AFs used independently in BP, such as sigmoid (SIG), arctangent (ATAN) and logarithmic (LOG).
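The fused activation idea can be sketched as a weighted combination of conventional AFs, including a linear term. The mixing weights and the symmetric-log form below are illustrative assumptions; the paper determines the combination in its own way.

```python
import numpy as np

# Conventional activation functions named in the abstract.
def sig(z):  return 1 / (1 + np.exp(-z))         # sigmoid (SIG)
def atan(z): return np.arctan(z)                 # arctangent (ATAN)
def logf(z): return np.sign(z) * np.log1p(np.abs(z))  # logarithmic (LOG)
def lin(z):  return z                            # linear AF

def faf(z, weights=(0.4, 0.3, 0.2, 0.1)):
    """Fusion of Activation Functions: weighted sum of several AFs."""
    parts = (sig(z), atan(z), logf(z), lin(z))
    return sum(c * p for c, p in zip(weights, parts))

z = np.linspace(-3, 3, 7)
print(faf(z))
```

Because each component AF is smooth, the fused unit remains differentiable, so standard BP gradients apply to it unchanged.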


2016
Vol. 12 (9)
pp. 108
Author(s):  
Muhammad Tayyab ◽  
Jianzhong Zhou ◽  
Xiaofan Zeng ◽  
Rana Adnan

Flood prediction methods play an important role in providing early warnings to government offices. The ability to predict future river flows helps people anticipate and plan for upcoming flooding, preventing deaths and decreasing property destruction. Different hydrological models supporting these predictions have different characteristics, driven by the available data and the research area. This study applied three different types of Artificial Neural Networks (ANN) and an autoregressive (AR) model to study the Jinsha River basin (JRB), in the upper part of the Yangtze River in China. The three ANN techniques are feedforward back propagation neural networks (FFBPNN), generalized regression neural networks (GRNN), and radial basis function neural networks (RBFNN). The ANN models showed a great deal of accuracy compared to the statistical AR model, because a linear statistical model cannot simulate nonlinear patterns. The results varied across the cases used in the study; given the available data and the study area, FFBPNN showed the best applicability compared to the other techniques.
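The linear-vs-nonlinear point can be sketched by contrasting an AR(1) fit with a small RBF network on a synthetic nonlinear "flow" series. The series, centers, and bandwidth are illustrative assumptions, not the JRB data or the study's models.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(300)
flow = np.sin(0.1 * t) ** 2 + 0.05 * rng.standard_normal(300)  # nonlinear signal
x, y = flow[:-1], flow[1:]                                     # one-step-ahead target

# AR(1): y ~ a*x + b, fitted by least squares.
A = np.column_stack([x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
ar_mse = float(np.mean((A @ coef - y) ** 2))

# RBFNN sketch: Gaussian basis at fixed centers plus the linear terms,
# with the output layer solved by least squares.
centers = np.linspace(x.min(), x.max(), 10)
Phi = np.exp(-((x[:, None] - centers) ** 2) / (2 * 0.1 ** 2))
Phi = np.column_stack([Phi, x, np.ones_like(x)])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
rbf_mse = float(np.mean((Phi @ w - y) ** 2))

print(ar_mse, rbf_mse)
```

Since the RBF design matrix contains the AR(1) features as a subset, its least-squares fit can never be worse on the training data; the gap it opens reflects the nonlinearity the linear AR model cannot capture.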


Author(s):  
Sherif S. Ishak ◽  
Haitham M. Al-Deek

Pattern recognition techniques such as artificial neural networks continue to offer potential solutions to many of the existing problems associated with freeway incident-detection algorithms. This study focuses on the application of Fuzzy ART neural networks to incident detection on freeways. Unlike back-propagation models, Fuzzy ART is capable of fast, stable learning of recognition categories. It is an incremental approach that has the potential for on-line implementation. Fuzzy ART is trained with traffic patterns that are represented by 30-s loop-detector data of occupancy, speed, or a combination of both. Traffic patterns observed at the incident time and location are mapped to a group of categories. Each incident category maps incidents with similar traffic pattern characteristics, which are affected by the type and severity of the incident and the prevailing traffic conditions. Detection rate and false alarm rate are used to measure the performance of the Fuzzy ART algorithm. To reduce the false alarm rate that results from occasional misclassification of traffic patterns, a persistence time period of 3 min was arbitrarily selected. The algorithm performance improves when the temporal size of traffic patterns increases from one to two 30-s periods for all traffic parameters. An interesting finding is that the speed patterns produced better results than did the occupancy patterns. However, when combined, occupancy–speed patterns produced the best results. When compared with California algorithms 7 and 8, the Fuzzy ART model produced better performance.
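The Fuzzy ART choice/match cycle the abstract relies on can be sketched as follows: inputs are complement-coded, each stored category is scored by the choice function, and a category is committed or updated only if it passes the vigilance test. The parameters (alpha, rho, beta) and the toy two-feature patterns (e.g. normalized occupancy and speed) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def complement_code(a):
    """Fuzzy ART complement coding: [a, 1 - a]."""
    return np.concatenate([a, 1.0 - a])

class FuzzyART:
    def __init__(self, alpha=0.001, rho=0.75, beta=1.0):
        self.alpha, self.rho, self.beta = alpha, rho, beta
        self.w = []                        # one weight vector per category

    def train(self, a):
        I = complement_code(np.asarray(a, float))
        # Rank categories by the choice function T_j = |I ^ w_j| / (alpha + |w_j|),
        # where ^ is the fuzzy AND (element-wise min) and |.| the L1 norm.
        order = sorted(range(len(self.w)),
                       key=lambda j: -np.minimum(I, self.w[j]).sum()
                                     / (self.alpha + self.w[j].sum()))
        for j in order:
            m = np.minimum(I, self.w[j])
            if m.sum() / I.sum() >= self.rho:            # vigilance test
                # Learning rule: w_j <- beta*(I ^ w_j) + (1 - beta)*w_j
                self.w[j] = self.beta * m + (1 - self.beta) * self.w[j]
                return j
        self.w.append(I.copy())                          # commit new category
        return len(self.w) - 1

art = FuzzyART()
cats = [art.train(p) for p in ([0.1, 0.9], [0.12, 0.88], [0.8, 0.2])]
print(cats, len(art.w))   # two similar patterns share a category
```

With beta = 1 this is the fast-learning mode; because weights only shrink under the fuzzy AND, learning is stable in the sense the abstract claims, and new traffic patterns can be added incrementally online.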


2019
Vol. 12 (3)
pp. 156-161
Author(s):  
Aman Dureja ◽  
Payal Pahwa

Background: Activation functions play an important role in building deep neural networks, and their choice affects both optimization and the quality of the results. Several activation functions have been introduced in machine learning for many practical applications, but which activation function should be used in the hidden layers of deep neural networks has not been established. Objective: The primary objective of this analysis was to determine which activation function should be used in the hidden layers of deep neural networks to solve complex nonlinear problems. Methods: The comparative model was configured using a dataset of two classes (Cat/Dog). The network used three convolutional layers, each followed by a pooling layer. The dataset was divided into two parts: the first 8000 images were used for training the network and the remaining 2000 images for testing it. Results: The experimental comparison was carried out by analyzing the network with different activation functions at each CNN layer. The validation error and accuracy on the Cat/Dog dataset were analyzed for the activation functions (ReLU, Tanh, SELU, PReLU, ELU) across the hidden layers. Overall, ReLU gave the best performance, with a validation loss of 0.3912 and a validation accuracy of 0.8320 at the 25th epoch. Conclusion: A CNN model with ReLU in its hidden layers (three hidden layers here) gives the best results and improves overall performance in terms of accuracy and speed. These advantages of ReLU across the hidden layers help in effective and fast retrieval of images from databases.
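The activation functions compared in the study can be written out directly; the PReLU slope and SELU constants below are the commonly used default values (assumptions here, since the abstract does not state them), and the CNN itself is not reproduced.

```python
import numpy as np

def relu(z):          return np.maximum(0.0, z)
def prelu(z, a=0.25): return np.where(z > 0, z, a * z)
def elu(z, a=1.0):    return np.where(z > 0, z, a * (np.exp(z) - 1))
def selu(z, a=1.6733, s=1.0507):
    return s * np.where(z > 0, z, a * (np.exp(z) - 1))

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for f in (relu, prelu, elu, selu, np.tanh):
    print(f.__name__, f(z))

# One reason ReLU trains faster: its derivative is exactly 1 for z > 0,
# while tanh's derivative (1 - tanh^2) shrinks toward 0 away from the
# origin, attenuating gradients through deep stacks.
print(1 - np.tanh(2.0) ** 2)   # small value in tanh's saturating region
```

The saturation contrast in the last line is the standard explanation for the speed advantage the conclusion attributes to ReLU.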

