TDNN with logical values for hydrologic modeling in a cold and snowy climate

2008 ◽  
Vol 10 (4) ◽  
pp. 289-300 ◽  
Author(s):  
Yonas B. Dibike ◽  
Paulin Coulibaly

Watershed runoff in areas with heavy seasonal snow cover is usually estimated using physically based conceptual hydrologic models. Such simulation models normally require a snowmelt algorithm, consisting of a surface energy balance and some accounting of internal snowpack processes, to be part of the modeling system. Artificial neural networks, on the other hand, are flexible mathematical structures capable of identifying such complex nonlinear relationships between input and output datasets from historical precipitation, temperature and streamflow records. This paper presents the findings of a study on using a form of time-delayed neural network, namely the time-lagged feedforward neural network (TLFN), that implicitly accounts for snow accumulation and snowmelt processes through the use of logical values and tapped delay lines. The logical values (in the form of symbolic inputs) implicitly include seasonal information in the TLFN model. The proposed method was successfully applied to improved precipitation–runoff modeling of both the Chute-du-Diable reservoir inflows and the Serpent River flows in northeastern Canada, where river flows and reservoir inflows are strongly influenced by seasonal snowmelt. The study demonstrates that the TLFN with logical values can model the precipitation–runoff process in a cold and snowy climate by relying on ‘logical input values’ and tapped delay lines to implicitly recognize the temporal input–output patterns in the historical data. The results also show that, once the appropriate input patterns are identified, the time-lagged neural network based models performed quite well, especially for spring peak flows, and achieved performance comparable to that of a physically based hydrological model, namely HBV, in simulating the precipitation–runoff process.
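The mechanism the abstract describes, a tapped delay line of recent inputs plus a binary ‘logical value’ flagging the snowmelt season, amounts to a feature-construction step ahead of an ordinary feedforward network. A minimal sketch in Python (the lag count, flag encoding, and toy data are illustrative assumptions, not the authors' configuration):

```python
import numpy as np

def tapped_delay_features(series, n_lags, season_flags):
    """Build a TLFN-style input matrix: each row holds the current value
    plus n_lags past values (the tapped delay line), followed by a
    binary 'logical value' flagging the snowmelt season."""
    rows = []
    for t in range(n_lags, len(series)):
        window = series[t - n_lags:t + 1]          # past values + current
        rows.append(np.concatenate([window, [season_flags[t]]]))
    return np.array(rows)

# toy daily flow series with a melt-season flag raised on days 3..5
flow = np.array([1.0, 1.2, 1.1, 3.5, 4.0, 3.8, 1.5])
melt = np.array([0, 0, 0, 1, 1, 1, 0])
X = tapped_delay_features(flow, n_lags=2, season_flags=melt)
print(X.shape)  # (5, 4): 5 samples, 2 lags + current + 1 logical input
```

Each row then feeds a standard feedforward regressor; the final column lets the network learn different input–output mappings for melt and non-melt periods.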

2009 ◽  
Author(s):  
Richard F. Olson ◽  
William J. Braselton ◽  
Richard D. Mohlere

1992 ◽  
Vol 26 (9-11) ◽  
pp. 2109-2112
Author(s):  
J. G. Cleary ◽  
T. J. Boehm ◽  
R. J. Geary

Schoeller Technical Papers, Inc. (Schoeller), which manufactures photographic and other specialty papers, is located in Pulaski, New York. The wastewater treatment system consists of a primary clarifier and two settling lagoons. Secondary treatment using a biotower was proposed to meet the new New York State Pollutant Discharge Elimination System (SPDES) discharge limits for BOD and TSS. The effluent from each basin is discharged directly to the Salmon River at an approximate average flow of 1.6 million gallons/day (mgd). A biotower pilot study was performed to evaluate the suitability of a biotower treatment process for treating the total effluent from Schoeller's facility. The pilot study was used to select the media for the full-scale biotower and to confirm its design loading; design of the full-scale unit proceeded in parallel with the pilot study because of schedule constraints. Two pilot systems were operated to compare conventional cross-flow media with vertical media. Test data were collected to evaluate the performance of each pilot treatment system over a range of loading conditions and to develop the design loading information for the full-scale plant. The pilot units were operated for a period of 10 months. BOD concentrations to the pilot units averaged 58 mg/l with a peak of 210 mg/l; approximately 80% of the BOD was soluble. BOD loadings averaged 21 lb BOD/day/1,000 cubic feet with a peak of 77 lb BOD/day/1,000 cubic feet. Both pilot units achieved excellent BOD removals exceeding 75%, with average effluent soluble BOD concentrations of less than 10 mg/l and average effluent TSS concentrations of 12 mg/l. The two media achieved comparable performance throughout most of the pilot study.
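The loading figures quoted above follow from the standard conversion that 1 mgd of flow at 1 mg/l carries about 8.34 lb/day. A quick sketch (the media volume is a hypothetical value back-calculated for illustration; the abstract does not report it):

```python
def bod_loading(flow_mgd, bod_mg_l, media_volume_ft3):
    """Volumetric BOD loading in lb BOD/day/1,000 cubic feet.
    Standard conversion: lb/day = mgd * mg/L * 8.34."""
    lb_per_day = flow_mgd * bod_mg_l * 8.34
    return lb_per_day / (media_volume_ft3 / 1000.0)

# the reported average of 1.6 mgd at 58 mg/l, over an assumed
# ~36,900 ft3 of media, reproduces roughly the 21 lb/day/1,000 ft3 figure
print(round(bod_loading(1.6, 58.0, 36900.0), 1))  # 21.0
```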


Author(s):  
Klaus Rollmann ◽  
Aurea Soriano-Vargas ◽  
Forlan Almeida ◽  
Alessandra Davolio ◽  
Denis Jose Schiozer ◽  
...  

1981 ◽  
Vol 29 (6) ◽  
pp. 923-936 ◽  
Author(s):  
J. Mayhan ◽  
A. Simmons ◽  
W. Cummings

2021 ◽  
Author(s):  
Mohammad Al Kadem ◽  
Ali Al Ssafwany ◽  
Ahmed Abdulghani ◽  
Hussain Al Nasir

Abstract. Stabilization time is essential to pressure-measurement accuracy: obtaining representative pressure points in build-up tests of pressure-sensitive reservoirs depends on optimizing it. In this study, an artificial intelligence technique was applied to pressure-sensitive reservoirs tested with pressure gauges. Stabilization time, a function of reservoir characteristics, is conventionally calculated from the diffusivity equation, which honors rock and fluid properties. An artificial neural network (ANN) was used to predict and optimize stabilization time from readily available inputs and parameters. The values obtained from the diffusivity equation and from the ANN were then compared against actual values measured by pressure gauges in the reservoirs. Spreading the datasets fed to the network across the whole input range, rather than clustering them, proved essential. A total of about 3,000 pressure-derivative samples from the wells were used for training, validation, and testing of the ANN; the datasets were divided into three fractional parts, and their number was optimized by monitoring ANN performance. Optimizing the stabilization time is essential and improves the ANN learning process. The sensitivity analysis shows that the combined diffusivity-equation and ANN approach, compared against the actual datasets, optimized the time with an average absolute relative error of 3.67%. The ANN results remained close to the measured values, especially when the technique was tested on known and easily available parameters. Time optimization is essential because discrete points or datasets alone would not work in either the ANN technique or the formula.
The study was expected to provide additional data and information, since stabilization time is essential to obtaining a representative pressure map. The ANN technique allowed proper optimization of time as a parameter and can predict reservoir log data with good accuracy. The method used in the study demonstrates the value of reducing pressure stabilization time, and the results can therefore be applied in reservoir testing to achieve optimal results.
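The 3.67% figure is an average absolute relative error (AARE), a standard metric for comparing predicted against gauge-measured values. A minimal sketch with toy numbers (the data are illustrative, not from the study):

```python
import numpy as np

def aare(actual, predicted):
    """Average absolute relative error in percent: the mean of
    |predicted - actual| / |actual| over all samples, times 100."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs(predicted - actual) / np.abs(actual))

# toy check: a uniform 5% over-prediction gives an AARE of 5%
print(round(aare([10.0, 20.0, 40.0], [10.5, 21.0, 42.0]), 2))  # 5.0
```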


2021 ◽  
pp. 1-15
Author(s):  
Wenjun Tan ◽  
Luyu Zhou ◽  
Xiaoshuo Li ◽  
Xiaoyu Yang ◽  
Yufei Chen ◽  
...  

BACKGROUND: The distribution of pulmonary vessels in computed tomography (CT) and computed tomography angiography (CTA) images of the lung is important for diagnosing disease, formulating surgical plans and pulmonary research. PURPOSE: Based on the pulmonary vascular segmentation task of the International Symposium on Image Computing and Digital Medicine 2020 challenge, this paper reviews 12 different pulmonary vascular segmentation algorithms for lung CT and CTA images and then objectively evaluates and compares their performance. METHODS: First, we present the annotated reference dataset of lung CT and CTA images. A subset of the dataset, consisting of 7,307 slices for training and 3,888 slices for testing, was made available to participants. Second, by comparing the performance of the convolutional neural networks submitted by 12 different institutions for pulmonary vascular segmentation, the reasons for some defects and improvements are summarized. The models are mainly based on U-Net, attention mechanisms, GANs, and multi-scale fusion networks. Performance is measured in terms of Dice coefficient, over-segmentation ratio and under-segmentation rate. Finally, we discuss several proposed methods to improve pulmonary vessel segmentation results using deep neural networks. RESULTS: Compared with the annotated ground truth from both lung CT and CTA images, most of the 12 deep neural network algorithms do an admirable job of pulmonary vascular extraction and segmentation, with Dice coefficients ranging from 0.70 to 0.85; the Dice coefficients of the top three algorithms are about 0.80. CONCLUSIONS: Study results show that integrating methods that consider spatial information, fuse multi-scale feature maps, or apply strong post-processing into the deep neural network training and optimization process is significant for further improving the accuracy of pulmonary vascular segmentation.
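The Dice coefficient used above measures overlap between a predicted vessel mask and the annotated ground truth. A minimal sketch on toy binary masks:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|).
    Returns 1.0 by convention when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, truth), 3))  # 2*2/(3+3) -> 0.667
```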


2010 ◽  
Vol 7 (1) ◽  
pp. 1103-1141 ◽  
Author(s):  
X. Fang ◽  
J. W. Pomeroy ◽  
C. J. Westbrook ◽  
X. Guo ◽  
A. G. Minke ◽  
...  

Abstract. The eastern Canadian Prairies are dominated by cropland, pasture, woodland and wetland areas. The region is characterized by many poorly developed, internal drainage systems and large amounts of surface water storage. Consequently, basins here have proven challenging for hydrological model predictions that assume good drainage to stream channels. The Cold Regions Hydrological Modelling platform (CRHM) is an assembly system that can be used to set up physically based, flexible, object-oriented models. CRHM was used to create a prairie hydrological model for the externally drained Smith Creek Research Basin (~400 km2), east-central Saskatchewan. Physically based modules were sequentially linked in CRHM to simulate snow processes, frozen soils, variable contributing area, and wetland storage and runoff generation. Five "representative basins" (RBs) were used, each divided into seven hydrological response units (HRUs): fallow, stubble, grassland, river channel, open water, woodland, and wetland, as derived from a supervised classification of SPOT 5 imagery. Two modelling approaches, calibrated and uncalibrated, were set up for the 2007/08 and 2008/09 simulation periods. For the calibrated modelling, only the surface depression capacity of the upland area was calibrated, in the 2007/08 simulation period, by comparing simulated and observed hydrographs; all other model parameters, and all parameters in the uncalibrated modelling, were estimated from field observations of soils and vegetation cover, SPOT 5 imagery, analysis of drainage network and wetland GIS datasets, and DEMs derived from topographic maps and LiDAR. All parameters except the initial soil properties and antecedent wetland storage were kept the same in the 2008/09 simulation period. The model performance in predicting snowpack, soil moisture and streamflow was evaluated, and comparisons were made between the calibrated and uncalibrated modelling for both simulation periods.
Calibrated and uncalibrated predictions of snow accumulation were very similar and compared fairly well with the distributed field observations for the 2007/08 period with slightly poorer results for the 2008/09 period. Soil moisture content at a point during the early spring was adequately simulated and very comparable between calibrated and uncalibrated results for both simulation periods. The calibrated modelling had somewhat better performance in simulating spring streamflow in both simulation periods, whereas the uncalibrated modelling was still able to capture the streamflow hydrographs with good accuracy. This suggests that prediction of prairie basins without calibration is possible if sufficient data on meteorology, basin landcover and physiography are available.
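Per-HRU outputs in such a setup are typically aggregated to the basin scale by area weighting. A minimal sketch (the HRU fractions and runoff depths below are hypothetical values chosen only to illustrate the aggregation; they are not from the study):

```python
# hypothetical area fractions for the seven HRUs of one representative basin
hrus = {"fallow": 0.15, "stubble": 0.30, "grassland": 0.20,
        "river channel": 0.02, "open water": 0.05,
        "woodland": 0.13, "wetland": 0.15}

def basin_weighted(values, fractions):
    """Area-weight per-HRU outputs (e.g. runoff depth in mm) into a
    single basin-scale value; fractions must sum to 1."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9
    return sum(values[k] * fractions[k] for k in fractions)

# hypothetical spring runoff depth (mm) simulated for each HRU
runoff_mm = {"fallow": 12.0, "stubble": 10.0, "grassland": 8.0,
             "river channel": 0.0, "open water": 0.0,
             "woodland": 5.0, "wetland": 20.0}
print(round(basin_weighted(runoff_mm, hrus), 2))
```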


2016 ◽  
Vol 20 (1) ◽  
pp. 27
Author(s):  
Hongyi Li ◽  
Di Zhao ◽  
Shaofeng Xu ◽  
Pidong Wang ◽  
Jiaxin Chen

In this paper, we study the spectral characteristics and global representations of strongly nonlinear, non-stationary electromagnetic interference (EMI), which is of great significance for the mathematical modelling of electromagnetic compatibility (EMC) in large-scale integrated systems. We first propose using a Self-Organizing Feature Map (SOM) neural network to cluster EMI signals. To tackle the high dimensionality of EMI signals, we combine dimension-reduction and clustering approaches and identify the global features of different interference factors, in order to provide precise mathematical simulation models for EMC design, analysis, forecasting and evaluation. Experimental results demonstrate the validity and effectiveness of the proposed method.
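A self-organizing map clusters high-dimensional signals by repeatedly pulling the best-matching node, and its map neighbours, toward each sample. A minimal 1-D SOM sketch on synthetic data (the node count, learning schedule, and toy clusters are assumptions, not the paper's configuration):

```python
import numpy as np

def train_som(data, n_nodes=4, epochs=50, lr0=0.5, seed=0):
    """Minimal 1-D self-organizing map. For each sample, the
    best-matching unit (BMU) and its neighbours on the 1-D map are
    moved toward the sample; learning rate and neighbourhood radius
    both decay over the epochs."""
    rng = np.random.default_rng(seed)
    # initialize node weights from randomly chosen data samples
    weights = data[rng.choice(len(data), n_nodes, replace=False)].copy()
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        radius = max(1.0, (n_nodes / 2) * (1 - epoch / epochs))
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            for j in range(n_nodes):
                # Gaussian neighbourhood on the 1-D map topology
                h = np.exp(-((j - bmu) ** 2) / (2 * radius ** 2))
                weights[j] += lr * h * (x - weights[j])
    return weights

# two well-separated synthetic "interference" clusters in 8 dimensions
rng = np.random.default_rng(1)
a = rng.normal(0.0, 0.1, (20, 8))
b = rng.normal(5.0, 0.1, (20, 8))
som = train_som(np.vstack([a, b]), n_nodes=4)
print(som.shape)  # (4, 8): one weight vector per map node
```

In practice the signals would first be projected to a lower-dimensional space (e.g. by PCA) before SOM training, matching the combined dimension-reduction-plus-clustering approach the abstract describes.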

