Flood Hydrograph Prediction Using Machine Learning Methods

Water ◽  
2018 ◽  
Vol 10 (8) ◽  
pp. 968 ◽  
Author(s):  
Gokmen Tayfur ◽  
Vijay Singh ◽  
Tommaso Moramarco ◽  
Silvia Barbetta

Machine learning (soft computing) methods have a wide range of applications in many disciplines, including hydrology. These methods were first applied in hydrology in the 1990s and have since been extensively employed. Flood hydrograph prediction is important in hydrology and is generally done using linear or nonlinear Muskingum (NLM) methods or the numerical solutions of the St. Venant (SV) flow equations or their simplified forms; however, soft computing methods are also utilized. This study discusses the application of the artificial neural network (ANN), genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO) methods to flood hydrograph prediction. Flow field data recorded on an equipped reach of the Tiber River, central Italy, are used for training the ANN and for finding the optimal values of the parameters of the rating curve method (RCM) by the GA, ACO, and PSO methods. Real hydrographs are satisfactorily predicted by these methods, with errors in peak discharge and time to peak not exceeding, on average, 4% and 1%, respectively. In addition, the parameters of the nonlinear Muskingum model (NMM) are optimized by the same methods for flood routing in an artificial channel. Flood hydrographs generated by the NMM are compared against those obtained by the numerical solutions of the St. Venant equations. The results reveal that the machine learning models (ANN, GA, ACO, and PSO) are powerful tools that can be gainfully employed for flood hydrograph prediction: they require less, and more easily measured, data and pose no significant parameter-estimation problems.
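For illustration, the sketch below routes an inflow hydrograph through the nonlinear Muskingum storage relation S = K[xI + (1 - x)O]^m and fits the three parameters (K, x, m) with a minimal particle swarm optimizer. The synthetic hydrograph, parameter bounds, and all names are assumptions made for the sketch, not the paper's data or implementation.

```python
import numpy as np

def route_nmm(params, inflow, dt=1.0):
    """Route an inflow hydrograph with the nonlinear Muskingum model,
    S = K * (x*I + (1 - x)*O)**m (illustrative parameter names)."""
    K, x, m = params
    out = np.empty_like(inflow)
    out[0] = inflow[0]                           # assume steady state at t = 0
    S = K * (x * inflow[0] + (1 - x) * out[0]) ** m
    for t in range(1, len(inflow)):
        out[t] = ((S / K) ** (1.0 / m) - x * inflow[t]) / (1.0 - x)
        S += dt * (inflow[t] - out[t])           # storage continuity dS/dt = I - O
    return out

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best particle swarm optimizer."""
    lo, hi = np.array(bounds, dtype=float).T
    pos = lo + np.random.rand(n_particles, len(bounds)) * (hi - lo)
    vel = np.zeros_like(pos)
    pbest, pcost = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(2, *pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        cost = np.array([objective(p) for p in pos])
        better = cost < pcost
        pbest[better], pcost[better] = pos[better], cost[better]
        gbest = pbest[pcost.argmin()].copy()
    return gbest

# Synthetic "observed" outflow generated from known parameters, then recovered.
inflow = np.array([22, 23, 35, 71, 103, 111, 109, 100, 86, 71,
                   59, 47, 39, 32, 28, 24, 22, 21], dtype=float)
observed = route_nmm((0.9, 0.25, 1.2), inflow)

def sse(p):
    routed = route_nmm(p, inflow)
    return np.sum((routed - observed) ** 2) if np.all(np.isfinite(routed)) else np.inf

print("fitted K, x, m:", pso(sse, bounds=[(0.01, 2.0), (0.01, 0.49), (1.0, 2.5)]))
```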

1985 ◽  
Vol 16 (1) ◽  
pp. 1-10 ◽  
Author(s):  
V. P. Singh ◽  
C. Corradini ◽  
F. Melone

The geomorphological instantaneous unit hydrograph (IUH) proposed by Gupta et al. (1980) was compared with the IUHs derived by the commonly used time-area and Nash methods. The comparison was performed by analyzing the effective rainfall-direct runoff relationship for four large basins in central Italy ranging in area from 934 to 4,147 km². The Nash method was found to be the most accurate of the three. The geomorphological method, with only one parameter estimated in advance from the observed data, was found to be only slightly less accurate than the Nash method, which has two parameters determined from observations. Furthermore, when the geomorphological and Nash methods employed the same information, represented by basin lag, they produced similar accuracy, provided the other Nash parameter, expressed by the product of peak flow and time to peak, was empirically assessed within a wide range of values. It was concluded that the geomorphological method is more appropriate for ungaged basins and the Nash method for gaged basins.
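For reference, the Nash IUH is the gamma-distribution unit hydrograph u(t) = (1/(KΓ(n)))(t/K)^(n-1)e^(-t/K), whose two parameters n and K are the ones estimated from observations above. The sketch below evaluates it and convolves it with an effective-rainfall pulse to produce direct runoff; the parameter values and rainfall burst are illustrative assumptions only.

```python
import numpy as np
from math import gamma

def nash_iuh(t, n, K):
    """Nash IUH for a cascade of n linear reservoirs with storage constant K:
    u(t) = (t/K)**(n-1) * exp(-t/K) / (K * Gamma(n))."""
    return (t / K) ** (n - 1) * np.exp(-t / K) / (K * gamma(n))

dt = 0.5                                     # time step in hours (illustrative)
t = np.arange(0.0, 48.0, dt)
u = nash_iuh(t, n=3.0, K=4.0)                # hypothetical parameter values
rain = np.zeros_like(t)
rain[:6] = 10.0                              # a 3 h burst of effective rainfall (mm/h)
runoff = np.convolve(rain, u)[:len(t)] * dt  # direct runoff by convolution
print("time to peak (h):", t[runoff.argmax()])
```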


1970 ◽  
Vol 7 ◽  
pp. 60-64 ◽  
Author(s):  
Ruchi Khare ◽  
Vishnu Prasad ◽  
Sushil Kumar

The testing of physical turbine models is costly, time consuming, and subject to the limitations of laboratory setups meeting International Electrotechnical Commission (IEC) standards. Computational fluid dynamics (CFD) has emerged as a powerful tool for finding numerical solutions of a wide range of flow equations whose analytical solutions are not feasible, and it minimizes the requirement for model testing. The present work deals with the simulation of 3D flow in a mixed-flow (Francis) turbine passage, i.e., stay vane, guide vane, runner, and draft tube, using ANSYS CFX 10 software to study the flow pattern within the turbine space and to compute various losses and the efficiency at different operating regimes. The computed values and variation of the performance parameters are found to bear close comparison with experimental results.
Keywords: Hydraulic turbine; Performance; Computational fluid dynamics; Efficiency; Losses
DOI: 10.3126/hn.v7i0.4239
Hydro Nepal: Journal of Water, Energy and Environment, Vol. 7, July 2010, pp. 60-64
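Since the study reports computed losses and efficiency, a one-line reminder of the overall efficiency definition may help: η = P_shaft/(ρgQH). A trivial sketch with illustrative numbers, not the paper's test data:

```python
RHO, G = 1000.0, 9.81   # water density (kg/m^3) and gravity (m/s^2)

def hydraulic_efficiency(shaft_power_w, discharge_m3s, net_head_m):
    """Overall turbine efficiency: shaft power over hydraulic input rho*g*Q*H."""
    return shaft_power_w / (RHO * G * discharge_m3s * net_head_m)

print(f"eta = {hydraulic_efficiency(8.2e5, 1.0, 95.0):.3f}")  # ~0.88
```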


2020 ◽  
Author(s):  
Mazin Mohammed ◽  
Karrar Hameed Abdulkareem ◽  
Mashael S. Maashi ◽  
Salama A. Mostafa ◽  
Abdullah Baz ◽  
...  

BACKGROUND: In recent times, global concern has been raised by the coronavirus disease (COVID-19), which is considered a global health threat due to its rapid spread across the globe. Machine learning (ML) is a computational method that can be used to automatically learn from experience and improve the accuracy of predictions.
OBJECTIVE: In this study, machine learning was applied to a coronavirus dataset of 50 X-ray images to enable the development of detection modalities and directions with risk causes. The dataset contains a wide range of samples of COVID-19 cases alongside SARS, MERS, and ARDS. The experiment was carried out using a total of 50 X-ray images, of which 25 were positive COVID-19 cases and the other 25 were normal cases.
METHODS: The Orange tool was used for data manipulation. To classify patients as coronavirus carriers or non-carriers, this tool was employed to develop and analyse seven types of predictive models: artificial neural network (ANN), support vector machine (SVM) with linear kernel and radial basis function (RBF), k-nearest neighbour (k-NN), decision tree (DT), and CN2 rule inducer. Furthermore, the standard InceptionV3 model was used for feature extraction.
RESULTS: The various machine learning techniques were trained on the coronavirus disease 2019 (COVID-19) dataset with improved parameter settings. The dataset was divided into two parts, training and testing: the models were trained on 70% of the dataset, while the remaining 30% was used for testing. The results show that the improved SVM achieved an F1 score of 97% and an accuracy of 98%.
CONCLUSIONS: In this study, seven models were developed to aid the detection of coronavirus. In such cases, learning performance can be improved through knowledge transfer, whereby time-consuming data-labelling efforts are not required. The evaluations of all the models were done in terms of different parameters. It can be concluded that all the models performed well, but the SVM demonstrated the best result on the accuracy metric. Future work will compare classical approaches with deep learning ones and try to obtain better results.
CLINICALTRIAL: None
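A minimal sketch of the reported pipeline, InceptionV3 features feeding an RBF-kernel SVM, is given below. The loader `load_xray_dataset`, the file handling, and the hyperparameters are hypothetical placeholders, not the study's setup.

```python
import numpy as np
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.applications.inception_v3 import preprocess_input
from tensorflow.keras.preprocessing import image
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, f1_score

extractor = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def features(path):
    """2048-d InceptionV3 feature vector for one X-ray image."""
    img = image.load_img(path, target_size=(299, 299))
    arr = preprocess_input(np.expand_dims(image.img_to_array(img), 0))
    return extractor.predict(arr, verbose=0)[0]

paths, labels = load_xray_dataset()   # hypothetical loader: 50 paths, 0/1 labels
X = np.stack([features(p) for p in paths])
X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.3, stratify=labels, random_state=0)

clf = SVC(kernel="rbf", C=10.0).fit(X_tr, y_tr)   # RBF-kernel SVM
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred), "F1:", f1_score(y_te, pred))
```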


2020 ◽  
Vol 9 (1) ◽  
pp. 1374-1377

Rainfall is one of the major sources of livelihood in this world. Every organism needs water to survive in its own living conditions. As rainfall is the main source of water and its importance to agriculture is inevitable, there arises a necessity to analyze rainfall patterns. The main aim of this paper is to predict rainfall from various factors such as temperature, pressure, cloud cover, wind speed, pollution, and precipitation. Various ideas and methodologies have been proposed to predict rainfall; the proposed concept is based on machine learning because of its wide range of development and preferability nowadays. Among the various techniques built on machine learning (ML), the feed-forward neural network (FFNN), the simplest form of artificial neural network (ANN), is preferred because this model learns the complex relationships among the various input parameters and models them easily. In the proposed model, rainfall is predicted using the different parameters influencing rainfall, along with their combinations and patterns. The experimental results show that the proposed FFNN-based model achieves suitable accuracy.
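As a sketch of the FFNN idea (synthetic data and a hypothetical feature ordering, not the paper's dataset), a small multilayer perceptron can be trained on the six weather inputs named above:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Columns: temperature, pressure, cloud cover, wind speed, pollution, precipitation
X = np.random.rand(500, 6)    # placeholder weather records
y = X @ np.array([0.5, -0.2, 0.8, 0.1, -0.1, 1.2]) + 0.05 * np.random.randn(500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(
    StandardScaler(),         # scale inputs before the network
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```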


1995 ◽  
Vol 117 (2) ◽  
pp. 234-241 ◽  
Author(s):  
V. C. Patel ◽  
J. Y. Yoon

Principal results of classical experiments on the effects of sand-grain roughness are briefly reviewed, along with various models that have been proposed to account for these effects in numerical solutions of the fluid-flow equations. Two models that resolve the near-wall flow are applied to the flow in a two-dimensional, rough-wall channel. Comparisons with the analytical results embodied in the well-known Moody diagram show that the k–ω model of Wilcox performs remarkably well over a wide range of roughness values, while a modified two-layer k–ε model requires further refinement. The k–ω model is then applied to water flow over a fixed sand dune for which extensive experimental data are available. The solutions are found to be in agreement with the data, including the flow in the separation eddy and its recovery after reattachment. The results suggest that this modeling approach may be extended to other types of surface roughness, and to more complex flows.
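For reference, the analytical baseline underlying the Moody diagram is the Colebrook-White relation 1/√f = -2 log₁₀(ε/(3.7D) + 2.51/(Re√f)); a fixed-point solver for it (illustrative initial guess and inputs) is sketched below:

```python
import numpy as np

def colebrook_f(re, rel_rough, iters=50):
    """Darcy friction factor from the Colebrook-White equation, iterated in
    the form 1/sqrt(f) = -2*log10(eps/(3.7*D) + 2.51/(Re*sqrt(f)))."""
    inv_sqrt_f = 8.0                  # initial guess for 1/sqrt(f)
    for _ in range(iters):
        inv_sqrt_f = -2.0 * np.log10(rel_rough / 3.7 + 2.51 * inv_sqrt_f / re)
    return 1.0 / inv_sqrt_f ** 2

# Re = 1e6, eps/D = 1e-3 lands near the Moody-chart value (~0.0199)
print(colebrook_f(1e6, 1e-3))
```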


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5519 ◽  
Author(s):  
Kenneth E. Schackart ◽  
Jeong-Yeol Yoon

Since their inception, biosensors have frequently employed simple regression models to calculate analyte composition based on the biosensor's signal magnitude. Traditionally, bioreceptors provide excellent sensitivity and specificity to the biosensor. Increasingly, however, bioreceptor-free biosensors have been developed for a wide range of applications. Without a bioreceptor, maintaining strong specificity and a low limit of detection has become the major challenge. Machine learning (ML) has been introduced to improve the performance of these biosensors, effectively replacing the bioreceptor with modeling to gain specificity. Here, we present how ML has been used to enhance the performance of these bioreceptor-free biosensors. Particularly, we discuss how ML has been used for imaging, e-nose and e-tongue, and surface-enhanced Raman spectroscopy (SERS) biosensors. Notably, principal component analysis (PCA) combined with support vector machine (SVM) and various artificial neural network (ANN) algorithms have shown outstanding performance in a variety of tasks. We anticipate that ML will continue to improve the performance of bioreceptor-free biosensors, especially with the prospects of sharing trained models and cloud computing for mobile computation. To facilitate this, the biosensing community would benefit from increased contributions to open-access data repositories for biosensor data.
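The PCA-plus-SVM pattern the review highlights can be sketched in a few lines; the synthetic data below stands in for high-dimensional sensor readings (e.g., SERS spectra or e-nose channels) and is an assumption, not data from any cited biosensor:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for 500-channel sensor readings over 200 samples
X, y = make_classification(n_samples=200, n_features=500,
                           n_informative=20, random_state=0)

pipe = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
print("5-fold CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```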


Author(s):  
Tatsuya Yokoi ◽  
Kosuke Adachi ◽  
Sayuri Iwase ◽  
Katsuyuki Matsunaga

To accurately predict grain boundary (GB) atomic structures and their energetics in CdTe, the present study constructs an artificial-neural-network (ANN) interatomic potential. To cover a wide range of atomic environments,...


2021 ◽  
Vol 2021 ◽  
pp. 1-21 ◽  
Author(s):  
Majid Niazkar

In this study, two machine learning (ML) models, an artificial neural network (ANN) and genetic programming (GP), were applied to the optimum design of canals with circular cross-sections. In this application, the earthwork and lining costs were considered as the objective function, while Manning's equation was utilized as the hydraulic constraint. Two different scenarios were considered for Manning's coefficient: (1) a constant Manning's coefficient and (2) the experimentally observed variation of Manning's coefficient with water depth. The design problem was solved for a wide range of the dimensionless variables involved to produce a sufficiently large database. The first part of these data was used to train the ML models, while the second part was used to compare the performance of ANN and GP in the optimum design of circular channels with that of the explicit design relations available in the literature. The comparison clearly indicated that the ML models improved the accuracy of circular channel design from 55% to 91% based on two performance evaluation criteria. Finally, the application of the ML models to the optimum design of circular channels demonstrates a considerable improvement over the explicit design equations available in the literature.
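The hydraulic constraint referred to above is Manning's equation, Q = (1/n)AR^(2/3)√S. A sketch for a circular section at a given flow depth (all values illustrative) follows:

```python
import numpy as np

def circular_discharge(depth, diameter, n_manning, slope):
    """Manning's equation Q = (1/n) * A * R**(2/3) * sqrt(S) for a circular
    section flowing at the given depth (SI units; names are illustrative)."""
    theta = 2.0 * np.arccos(1.0 - 2.0 * depth / diameter)  # wetted angle (rad)
    area = diameter ** 2 / 8.0 * (theta - np.sin(theta))   # flow area
    perimeter = diameter * theta / 2.0                     # wetted perimeter
    radius = area / perimeter                              # hydraulic radius
    return area * radius ** (2.0 / 3.0) * np.sqrt(slope) / n_manning

# e.g. a 2 m pipe flowing half full on a 0.1% slope with n = 0.013
print(f"Q = {circular_discharge(1.0, 2.0, 0.013, 0.001):.2f} m^3/s")
```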


2018 ◽  
Author(s):  
Sherif Tawfik ◽  
Olexandr Isayev ◽  
Catherine Stampfl ◽  
Joseph Shapter ◽  
David Winkler ◽  
...  

Materials constructed from different van der Waals two-dimensional (2D) heterostructures offer a wide range of benefits, but these systems have been little studied because of their experimental and computational complexity, and because of the very large number of possible combinations of 2D building blocks. The simulation of the interface between two different 2D materials is computationally challenging due to the lattice mismatch problem, which sometimes necessitates the creation of very large simulation cells for performing density-functional theory (DFT) calculations. Here we use a combination of DFT, linear regression, and machine learning techniques in order to rapidly determine the interlayer distance between two different 2D materials stacked in a bilayer heterostructure, as well as the band gap of the bilayer. Our work provides an excellent proof of concept by quickly and accurately predicting a structural property (the interlayer distance) and an electronic property (the band gap) for a large number of hybrid 2D materials. This work paves the way for rapid computational screening of the vast parameter space of van der Waals heterostructures to identify new hybrid materials with useful and interesting properties.
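The regression step of such a workflow can be sketched generically: map simple per-layer descriptors to an interlayer distance with an off-the-shelf regressor. The descriptors, targets, and model choice below are assumptions made for illustration, not the authors' pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical bilayer descriptors: lattice constants, monolayer gaps, etc.
X = rng.random((300, 8))
# Placeholder interlayer distances (angstrom) with a learnable dependence
y = 3.0 + 0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.02 * rng.standard_normal(300)

model = RandomForestRegressor(n_estimators=200, random_state=0)
print("5-fold CV R^2:", cross_val_score(model, X, y, cv=5).mean())
```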


2020 ◽  
Author(s):  
Sina Faizollahzadeh Ardabili ◽  
Amir Mosavi ◽  
Pedram Ghamisi ◽  
Filip Ferdinand ◽  
Annamaria R. Varkonyi-Koczy ◽  
...  

Several outbreak prediction models for COVID-19 are being used by officials around the world to make informed decisions and enforce relevant control measures. Among the standard models for COVID-19 global pandemic prediction, simple epidemiological and statistical models have received more attention from authorities, and they are popular in the media. Due to a high level of uncertainty and lack of essential data, standard models have shown low accuracy for long-term prediction. Although the literature includes several attempts to address this issue, the essential generalization and robustness abilities of existing models need to be improved. This paper presents a comparative analysis of machine learning and soft computing models to predict the COVID-19 outbreak as an alternative to SIR and SEIR models. Among a wide range of machine learning models investigated, two showed promising results: the multi-layered perceptron (MLP) and the adaptive network-based fuzzy inference system (ANFIS). Based on the results reported here, and due to the highly complex nature of the COVID-19 outbreak and the variation in its behavior from nation to nation, this study suggests machine learning as an effective tool to model the outbreak. This paper provides an initial benchmarking to demonstrate the potential of machine learning for future research, and further suggests that real novelty in outbreak prediction can be realized by integrating machine learning and SEIR models.
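A minimal sketch of MLP-based outbreak forecasting on lagged daily counts is given below; the synthetic epidemic curve and lag length are assumptions, not the paper's data or its MLP/ANFIS configurations:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(120)
cases = 1000 * np.exp(-((t - 60) / 15.0) ** 2) + rng.normal(0, 10, 120)  # synthetic wave

LAGS = 7    # predict day t from days t-7 .. t-1
X = np.array([cases[i:i + LAGS] for i in range(len(cases) - LAGS)])
y = cases[LAGS:]

split = int(0.8 * len(X))   # train on the early part, test on the tail
mlp = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
mlp.fit(X[:split], y[:split])
print("held-out R^2:", mlp.score(X[split:], y[split:]))
```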

