Impact of uncertainties in discharge determination on the parameter estimation and performance of a hydrological model

2012 ◽ Vol 44 (3) ◽ pp. 454-466 ◽ Author(s): Sander P. M. van den Tillaart, Martijn J. Booij, Maarten S. Krol

Uncertainties in discharge determination may have serious consequences for hydrological modelling and resulting discharge predictions used for flood forecasting, climate change impact assessment and reservoir operation. The aim of this study is to quantify the effect of discharge errors on parameters and performance of a conceptual hydrological model for discharge prediction applied to two catchments. Six error sources in discharge determination are considered: random measurement errors without autocorrelation; random measurement errors with autocorrelation; systematic relative measurement errors; systematic absolute measurement errors; hysteresis in the discharge–water level relation and effects of an outdated discharge–water level relation. Assuming realistic magnitudes for each error source, results show that systematic errors and an outdated discharge–water level relation have a considerable influence on model performance, while other error sources have a small to negligible effect. The effects of errors on parameters are large if the effects on model performance are large as well and vice versa. Parameters controlling the water balance are influenced by systematic errors and parameters related to the shape of the hydrograph are influenced by random errors. Large effects of discharge errors on model performance and parameters should be taken into account when using discharge predictions for flood forecasting and impact assessment.
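The first four error sources can be mimicked by perturbing a discharge series; the sketch below is our own illustration (the error magnitudes, 5 %, ρ = 0.8 and 2 m³/s, are assumptions, not the values used in the study):

```python
import numpy as np

rng = np.random.default_rng(42)
q = 50 + 10 * np.sin(np.linspace(0, 6 * np.pi, 365))  # synthetic discharge (m3/s)

# Random relative errors without autocorrelation (assumed 5 % standard deviation)
q_random = q * (1 + rng.normal(0.0, 0.05, q.size))

# Random errors with AR(1) autocorrelation (assumed coefficient rho = 0.8)
rho, eps = 0.8, rng.normal(0.0, 0.05, q.size)
e = np.zeros(q.size)
for t in range(1, q.size):
    e[t] = rho * e[t - 1] + eps[t]
q_auto = q * (1 + e)

# Systematic relative error (+5 %) and systematic absolute error (+2 m3/s)
q_sys_rel = q * 1.05
q_sys_abs = q + 2.0
```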

Water ◽ 2021 ◽ Vol 13 (15) ◽ pp. 2032 ◽ Author(s): Pâmela A. Melo, Lívia A. Alvarenga, Javier Tomasella, Carlos R. Mello, Minella A. Martins, ...

Landform classification is important for representing soil physical properties that vary continuously across the landscape and for understanding many hydrological processes in watersheds. Accordingly, this study aims to use a geomorphological map (Geomorphons) as an input to a physically based hydrological model (the Distributed Hydrology Soil Vegetation Model, DHSVM) in a mountainous headwater watershed. A sensitivity analysis of five soil parameters was carried out for streamflow simulation in each Geomorphons feature. As infiltration and saturation excess overland flow are important mechanisms for streamflow generation in complex terrain watersheds, the model’s input soil parameters were most sensitive in the “slope”, “hollow”, and “valley” features. The simulated streamflow was then compared with observed data for calibration and validation. The model performance was satisfactory and equivalent to previous simulations in the same watershed using pedological survey and moisture zone maps. Therefore, the results from this study indicate that a geomorphologically based map is applicable and representative for spatially distributing hydrological parameters in the DHSVM.
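A one-at-a-time sensitivity analysis of the kind described can be sketched generically; the toy response function and parameter names below are hypothetical stand-ins, not DHSVM's actual soil parameters:

```python
import numpy as np

def toy_runoff(params):
    """Stand-in for one hydrological model run; returns a scalar response."""
    k, s_max, f = params
    return k * (1 - np.exp(-1.0 / s_max)) + 0.1 * f

base = np.array([0.5, 100.0, 2.0])  # baseline parameter vector (illustrative)

# One-at-a-time sensitivity: perturb each parameter by +/-10 % around the
# baseline and record the normalized change in the simulated response.
q0 = toy_runoff(base)
sensitivity = {}
for i, name in enumerate(["k", "s_max", "f"]):
    hi, lo = base.copy(), base.copy()
    hi[i] *= 1.10
    lo[i] *= 0.90
    sensitivity[name] = (toy_runoff(hi) - toy_runoff(lo)) / (0.2 * q0)
```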


2008 ◽ Vol 5 (3) ◽ pp. 1641-1675 ◽ Author(s): A. Bárdossy, S. K. Singh

Abstract. The estimation of hydrological model parameters is a challenging task. With increasing capacity of computational power several complex optimization algorithms have emerged, but none of the algorithms gives a unique and very best parameter vector. The parameters of hydrological models depend upon the input data. The quality of input data cannot be assured as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on the half space depth was used. The depth of the set of N randomly generated parameters was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used for this study) for each parameter vector. Based on the depth of parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.


1982 ◽ Vol 26 ◽ pp. 11-24 ◽ Author(s): Allan Brown

Different procedures used in precision measurements of lattice parameters are, strictly, only valid if they can be shown to give results that are mutually reproducible. For this purpose reproducibility is defined in terms of the lattice parameters a° and the standard deviations σ obtained for X-ray specimens of one or more reference materials. The requirement is that all systematic errors should be minimized to a level below that of the random measurement errors. Where these have a Gaussian distribution, the significance of the difference, Δa°, between two measurements can then be tested by evaluating the ratio K of Δa° to its combined standard deviation. Thus, if K < 2 the difference, Δa°, cannot be distinguished from the effects of random measurement errors. This condition should be met for specimens of the same sample if reproducibility is good. For K ≥ 3 the value of Δa° is then taken to reflect real differences in the crystalline lattice of two X-ray specimens of a given compound. A basis is thus created for the study of solid solubility and for the precise characterization of crystalline compounds.
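A minimal sketch of the significance test, assuming K is the ratio of the lattice-parameter difference to the quadrature-combined standard deviation of the two measurements (the numerical values are illustrative only):

```python
import math

def k_ratio(a1, s1, a2, s2):
    """Ratio of the lattice-parameter difference to its combined
    standard deviation, assuming independent Gaussian errors."""
    return abs(a1 - a2) / math.sqrt(s1**2 + s2**2)

# K < 2: difference indistinguishable from random measurement error;
# K >= 3: taken to reflect a real difference between the two lattices.
k = k_ratio(5.4310, 0.0003, 5.4312, 0.0003)
```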


2014 ◽ Vol 10 (1) ◽ pp. 77-90 ◽ Author(s): Péter Torma, Borbála Széles, Géza Hajnal

Abstract. This study aims to test and compare the applicability and performance of two different hydrological model concepts on a small Hungarian watershed. The lumped model of HEC-HMS and the semi-distributed TOPMODEL have been implemented to predict the streamflow of Bükkös Creek. The models were calibrated against the highest flood event recorded in the basin, in May 2010. Validation was done over an extended interval in which smaller floods were observed. Acceptable results can be achieved with the semi-distributed approach. Model comparison is made by means of a sensitivity analysis of the model parameters. For TOPMODEL the effect of the spatial resolution of the digital terrain model was further explored, while for HEC-HMS the complexity of the model setup was investigated. The results were quantified with model performance indices.
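The abstract does not name the performance indices used; the Nash-Sutcliffe efficiency is a common choice for streamflow and can be sketched as:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the
    simulation is no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# A perfect simulation scores exactly 1.
assert nash_sutcliffe([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]) == 1.0
```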


2010 ◽ Vol 10 (9) ◽ pp. 4145-4165 ◽ Author(s): D. F. Baker, H. Bösch, S. C. Doney, D. O'Brien, D. S. Schimel

Abstract. We quantify how well column-integrated CO2 measurements from the Orbiting Carbon Observatory (OCO) should be able to constrain surface CO2 fluxes, given the presence of various error sources. We use variational data assimilation to optimize weekly fluxes at a 2°×5° resolution (lat/lon) using simulated data averaged across each model grid box overflight (typically every ~33 s). Grid-scale simulations of this sort have been carried out before for OCO using simplified assumptions for the measurement error. Here, we more accurately describe the OCO measurements in two ways. First, we use new estimates of the single-sounding retrieval uncertainty and averaging kernel, both computed as a function of surface type, solar zenith angle, aerosol optical depth, and pointing mode (nadir vs. glint). Second, we collapse the information content of all valid retrievals from each grid box crossing into an equivalent multi-sounding measurement uncertainty, factoring in both time/space error correlations and data rejection due to clouds and thick aerosols. Finally, we examine the impact of three types of systematic errors: measurement biases due to aerosols, transport errors, and mistuning errors caused by assuming incorrect statistics. When only random measurement errors are considered, both nadir- and glint-mode data give error reductions over the land of ~45% for the weekly fluxes, and ~65% for seasonal fluxes. Systematic errors reduce both the magnitude and spatial extent of these improvements by about a factor of two, however. Improvements nearly as large are achieved over the ocean using glint-mode data, but are degraded even more by the systematic errors. Our ability to identify and remove systematic errors in both the column retrievals and atmospheric assimilations will thus be critical for maximizing the usefulness of the OCO data.
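The quoted error reductions can be expressed as the fractional shrinkage of the posterior flux uncertainty relative to the prior; a minimal sketch (our formulation, with illustrative numbers):

```python
import numpy as np

def error_reduction(sigma_prior, sigma_post):
    """Fractional reduction of flux uncertainty achieved by the
    assimilation: 1 - sigma_post / sigma_prior, in percent."""
    return 100.0 * (1.0 - np.asarray(sigma_post) / np.asarray(sigma_prior))

# A posterior spread of 0.55 times the prior corresponds to a 45 % reduction.
r = error_reduction(1.0, 0.55)
```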


2008 ◽ Vol 12 (6) ◽ pp. 1273-1283 ◽ Author(s): A. Bárdossy, S. K. Singh

Abstract. The estimation of hydrological model parameters is a challenging task. With increasing capacity of computational power several complex optimization algorithms have emerged, but none of the algorithms gives a unique and very best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half space depth was used. The depth of the set of N randomly generated parameters was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used for this study) for each parameter vector. Based on the depth of parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
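Tukey's half space depth can be approximated by Monte Carlo over random projection directions; the sketch below is our own construction (not the authors' code), using a synthetic 2-D parameter cloud:

```python
import numpy as np

def halfspace_depth(x, cloud, n_dir=1000, rng=None):
    """Monte Carlo approximation of Tukey's half space depth of point x
    with respect to a cloud of parameter vectors: the minimum, over
    random directions, of the fraction of cloud points on either side
    of the hyperplane through x."""
    if rng is None:
        rng = np.random.default_rng(0)
    u = rng.normal(size=(n_dir, cloud.shape[1]))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    proj = (cloud - x) @ u.T                  # shape (n_points, n_dir)
    frac = (proj >= 0).mean(axis=0)           # fraction on the positive side
    return np.minimum(frac, 1 - frac).min()   # shallowest cut over directions

rng = np.random.default_rng(1)
good = rng.normal(size=(500, 2))              # stand-in for well-performing vectors
# A central candidate has higher depth (is more robust) than an outlying one.
d_center = halfspace_depth(np.zeros(2), good, rng=rng)
d_edge = halfspace_depth(np.array([5.0, 5.0]), good, rng=rng)
```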


2019 ◽ Author(s): Henrique Freitas, Celso Luiz Mendes

The Roofline model gives insights into the performance behavior of applications bounded by either memory or processor limits, providing useful guidelines for performance improvements. This work uses the Roofline model in the analysis of the MGB model, which simulates hydrological processes in large-scale watersheds. Real-world input data are used to characterize the performance on two multicore architectures, one with only CPUs and one with CPUs and a GPU. The MGB model performance is improved with optimizations for better memory use, and also with shared-memory (OpenMP) and GPU (OpenACC) parallelism. CPU performance achieves 42.51% and 50.17% of each system’s peak, whereas GPU performance is low due to overheads caused by the MGB model structure.
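The Roofline bound itself is a one-line formula: attainable performance is the minimum of the compute peak and memory bandwidth times arithmetic intensity. A sketch with illustrative (assumed) machine numbers:

```python
def roofline(peak_gflops, bandwidth_gbs, arithmetic_intensity):
    """Attainable performance under the Roofline model:
    min(peak compute, memory bandwidth * arithmetic intensity)."""
    return min(peak_gflops, bandwidth_gbs * arithmetic_intensity)

# Illustrative machine: 100 GFLOP/s peak, 20 GB/s memory bandwidth.
# At 2 FLOP/byte the kernel is memory-bound (40 GFLOP/s attainable);
# at 8 FLOP/byte it hits the compute roof (100 GFLOP/s).
low = roofline(100.0, 20.0, 2.0)
high = roofline(100.0, 20.0, 8.0)
```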


Author(s): W.J. de Ruijter, Sharma Renu

Established methods for measurement of lattice spacings and angles of crystalline materials include X-ray diffraction, microdiffraction and HREM imaging. Structural information from HREM images is normally obtained off-line with the traveling table microscope or by the optical diffractogram technique. We present a new method for precise measurement of lattice vectors from HREM images using an on-line computer connected to the electron microscope. It has already been established that an image of crystalline material can be represented by a finite number of sinusoids. The amplitude and the phase of these sinusoids are affected by the microscope transfer characteristics, which are strongly influenced by the settings of defocus, astigmatism and beam alignment. However, the frequency of each sinusoid is solely a function of the overall magnification and the periodicities present in the specimen. After proper calibration of the overall magnification, lattice vectors can be measured unambiguously from HREM images. Measurement of lattice vectors is a statistical parameter estimation problem similar to the amplitude, phase and frequency estimation of sinusoids in 1-dimensional signals as encountered, for example, in radar, sonar and telecommunications. It is important to properly model the observations, the systematic errors and the non-systematic errors. The observations are modelled as a sum of (2-dimensional) sinusoids. In the present study the components of the frequency vector of the sinusoids are the only parameters of interest. Non-systematic errors in recorded electron images are described as white Gaussian noise. The most important systematic error is geometric distortion. Lattice vectors are measured using a two-step procedure. First, a coarse estimate is obtained using a Fast Fourier Transform on an image section of interest. Prior to Fourier transformation, the image section is multiplied with a window which gradually falls off to zero at the edges.
The user indicates interactively the periodicities of interest by selecting spots in the digital diffractogram. A fine search for each selected frequency is implemented using a bilinear interpolation, which depends on the window function. It is possible to refine the estimate even further using non-linear least squares estimation; the first two steps provide proper starting values for the numerical minimization (e.g. Gauss-Newton). This third step increases the precision by 30%, to the highest theoretically attainable (the Cramér-Rao lower bound). In the present studies we use a Gatan 622 TV camera attached to the JEM 4000EX electron microscope. Image analysis is implemented on a MicroVAX II computer equipped with a powerful array processor and real-time image processing hardware. The typical precision, as defined by the standard deviation of the distribution of measurement errors, is found to be <0.003 Å measured on single-crystal silicon and <0.02 Å measured on small (10-30 Å) specimen areas. These values are about ten times larger than predicted by theory. Furthermore, the measured precision is observed to be independent of the signal-to-noise ratio (determined by the number of averaged TV frames). Evidently, the precision is restricted by geometric distortion, mainly caused by the TV camera. For this reason, we are replacing the Gatan 622 TV camera with a modern high-grade CCD-based camera system. Such a system not only has negligible geometric distortion, but also high dynamic range (>10,000) and high resolution (1024×1024 pixels). The geometric distortion of the projector lenses can then be measured and corrected through re-sampling of the digitized image.
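The coarse-search-then-refine idea can be sketched for a 1-D signal; note that the refinement below uses parabolic interpolation on log magnitudes, a common substitute for the window-dependent bilinear interpolation the authors describe:

```python
import numpy as np

def estimate_frequency(signal, fs):
    """Coarse-to-fine frequency estimate of a dominant sinusoid:
    FFT peak search, then parabolic interpolation on the log magnitude
    of the three bins around the peak."""
    n = signal.size
    win = np.hanning(n)                      # taper falling to zero at the edges
    spec = np.abs(np.fft.rfft(signal * win))
    k = int(np.argmax(spec[1:])) + 1         # coarse peak (skip the DC bin)
    a, b, c = np.log(spec[k - 1 : k + 2])
    delta = 0.5 * (a - c) / (a - 2 * b + c)  # sub-bin offset from the parabola
    return (k + delta) * fs / n

fs, f_true = 1000.0, 123.4
t = np.arange(2048) / fs
f_est = estimate_frequency(np.sin(2 * np.pi * f_true * t), fs)
```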


1993 ◽ Vol 27 (3-4) ◽ pp. 1-13 ◽ Author(s): Arie H. Havelaar, Siem H. Heisterkamp, Janneke A. Hoekstra, Kirsten A. Mooijman

The general concept of measurement errors is applied to quantitative bacteriological counts on membrane filters or agar plates. The systematic errors of these methods are related to the growth characteristics of the medium (recovery of target organisms and inhibition of non-target organisms) and to its differential characteristics (sensitivity and specificity). Factors that influence the precision of microbiological counts are the variation between replicates, within samples, between operators and between laboratories. It is also affected by the linearity of the method, the verification rate and, where applicable, the number of colonies subcultured for verification. Repeatability (r) and reproducibility (R) values can be calculated on the logarithmic scale.
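The abstract does not give the r and R formulas; under the common ISO 5725 convention (an assumption here), the repeatability limit is r = 2.8 s_r, applied to log10-transformed counts:

```python
import numpy as np

def repeatability_limit(s_r):
    """Repeatability limit r = 2.8 * s_r (ISO 5725 convention, where
    2.8 ~ 1.96 * sqrt(2) gives a 95 % limit on the difference between
    two results under repeatability conditions)."""
    return 2.8 * s_r

# Illustrative replicate plate counts, worked on the logarithmic scale.
log_counts = np.log10([110.0, 95.0, 120.0, 105.0])
r = repeatability_limit(np.std(log_counts, ddof=1))
```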


Water ◽ 2021 ◽ Vol 13 (6) ◽ pp. 872 ◽ Author(s): Vesna Đukić, Ranka Erić

Due to the improvement of computational power, considerable progress has been made in recent decades in the development of complex hydrological models. On the other hand, simple conceptual models have also been advanced. Previous studies on rainfall–runoff models have shown that model performance depends very much on the model structure. The purpose of this study is to determine whether the use of a complex hydrological model leads to more accurate results or not, and to analyze whether some model structures are more efficient than others. Different configurations of two models of different complexity, the Système Hydrologique Européen TRANsport (SHETRAN) and the Hydrologic Modeling System (HEC-HMS), were compared and evaluated in simulating flash flood runoff for the small (75.9 km²) Jičinka River catchment in the Czech Republic. The two models were compared with respect to runoff simulations at the catchment outlet and soil moisture simulations within the catchment. The results indicate that the more complex SHETRAN model outperforms the simpler HEC-HMS model in the case of runoff, but not for soil moisture. It can be concluded that models with higher complexity do not necessarily provide better performance, and that the reliability of hydrological model simulations can vary depending on the hydrological variable under consideration.

