Bayesian joint inference of hydrological and generalized error models with the enforcement of Total Laws

Author(s):  
Mario R. Hernández-López ◽  
Félix Francés

Abstract. Over the years, Standard Least Squares (SLS) has been the most commonly adopted criterion for the calibration of hydrological models, despite the fact that the errors of these models generally do not fulfill the assumptions made by the SLS method: very often the errors are autocorrelated, heteroscedastic, biased and/or non-Gaussian. Similarly to recent papers, which suggest more appropriate models for the errors in hydrological modeling, this paper addresses the challenging problem of jointly estimating hydrological and error model parameters (joint inference) in a Bayesian framework, trying to solve some of the problems found in previous related research. This paper performs a Bayesian joint inference through the application of different inference models, such as the well-known SLS and WLS error models and the new GL++ and GL++Bias error models. These inferences were carried out on two lumped hydrological models, which were forced with daily hydrometeorological data from a basin of the MOPEX project. The main finding of this paper is that a joint inference, to be statistically correct, must take into account the joint probability distribution of the state variable to be predicted and its deviation from the observations (the errors). Consequently, the relationship between the marginal and conditional distributions of this joint distribution must be taken into account in the inference process. This relationship is defined by two general statistical expressions called the Total Laws (TLs): the Total Expectation Law and the Total Variance Law. Only simple error models, such as SLS, do not explicitly require the implementation of the TLs. An important consequence of enforcing the TLs is the reduction of the degrees of freedom in the inference problem, namely the reduction of the dimension of the parameter space. This research demonstrates that non-fulfillment of the TLs produces incorrect error and hydrological parameter estimates and unreliable predictive distributions. The target of a (joint) inference must be to fulfill the error model hypotheses rather than to achieve the best possible fit to the observations. Consequently, for a given hydrological model, the resulting performance of the prediction, the reliability of its predictive uncertainty, and the robustness of the parameter estimates will be conditioned exclusively by the degree to which the errors fulfill the error model hypotheses.
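For reference, the two Total Laws invoked here are the standard tower identities of probability theory, written below for an observed variable $Q$ and the model prediction $\hat{Q}$ on which it is conditioned (notation mine, not the authors'):

$$
\mathrm{E}[Q] \;=\; \mathrm{E}_{\hat{Q}}\big[\mathrm{E}[Q \mid \hat{Q}]\big],
\qquad
\mathrm{Var}[Q] \;=\; \mathrm{E}_{\hat{Q}}\big[\mathrm{Var}[Q \mid \hat{Q}]\big] \;+\; \mathrm{Var}_{\hat{Q}}\big[\mathrm{E}[Q \mid \hat{Q}]\big].
$$

Enforcing these identities ties the moments of the error model (the conditional distribution) to the moments of the marginal distribution of the predicted variable, which is what removes degrees of freedom from the parameter space during the joint inference.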

2019 ◽  
Vol 23 (4) ◽  
pp. 2147-2172 ◽  
Author(s):  
Lorenz Ammann ◽  
Fabrizio Fenicia ◽  
Peter Reichert

Abstract. The widespread application of deterministic hydrological models in research and practice calls for suitable methods to describe their uncertainty. The errors of those models are often heteroscedastic, non-Gaussian and correlated due to the memory effect of errors in state variables. Still, residual error models are usually highly simplified, often neglecting some of the mentioned characteristics. This is partly because general approaches to account for all of those characteristics are lacking, and partly because the benefits of more complex error models in terms of achieving better predictions are unclear. For example, the joint inference of autocorrelation of errors and hydrological model parameters has been shown to lead to poor predictions. This study presents a framework for likelihood functions for deterministic hydrological models that considers correlated errors and allows for an arbitrary probability distribution of observed streamflow. The choice of this distribution reflects prior knowledge about non-normality of the errors. The framework was used to evaluate increasingly complex error models with data of varying temporal resolution (daily to hourly) in two catchments. We found that (1) the joint inference of hydrological and error model parameters leads to poor predictions when conventional error models with stationary correlation are used, which confirms previous studies; (2) the quality of these predictions worsens with higher temporal resolution of the data; (3) accounting for a non-stationary autocorrelation of the errors, i.e. allowing it to vary between wet and dry periods, largely alleviates the observed problems; and (4) accounting for autocorrelation leads to more realistic model output, as shown by signatures such as the flashiness index. Overall, this study contributes to a better description of residual errors of deterministic hydrological models.
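As a rough illustration of this kind of likelihood, the sketch below combines a marginal distribution of the observations with an AR(1) Gaussian copula for serial dependence of the errors. It is a minimal sketch under my own simplifying assumptions (Gaussian margins with a heteroscedastic spread, constant correlation), not the authors' exact formulation, which allows an arbitrary marginal distribution and lets the correlation differ between wet and dry periods.

```python
import numpy as np
from scipy import stats

def ar1_copula_loglik(q_obs, q_sim, sigma0, sigma1, rho):
    """Log-likelihood of observed streamflow given deterministic model
    output: heteroscedastic margins linked by an AR(1) Gaussian copula.
    Parameter names are illustrative, not the authors' notation."""
    sd = sigma0 + sigma1 * q_sim            # error spread grows with flow
    marg = stats.norm(loc=q_sim, scale=sd)
    ll = marg.logpdf(q_obs).sum()           # marginal contribution
    # map observations to standard-normal scores via the marginal CDF
    eta = stats.norm.ppf(np.clip(marg.cdf(q_obs), 1e-12, 1.0 - 1e-12))
    # AR(1) copula density correction for correlated errors
    z = (eta[1:] - rho * eta[:-1]) / np.sqrt(1.0 - rho**2)
    ll += (stats.norm.logpdf(z) - 0.5 * np.log(1.0 - rho**2)
           - stats.norm.logpdf(eta[1:])).sum()
    return ll

# toy usage with synthetic data
rng = np.random.default_rng(42)
q_sim = 5.0 + 2.0 * np.sin(np.linspace(0.0, 10.0, 500))
q_obs = q_sim + 0.5 * rng.standard_normal(500)
print(ar1_copula_loglik(q_obs, q_sim, sigma0=0.1, sigma1=0.1, rho=0.6))
```

Swapping `stats.norm` in the `marg` line for a skewed or heavy-tailed distribution is where prior knowledge about non-normality of the errors would enter.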


2018 ◽  
Author(s):  
Lorenz Ammann ◽  
Peter Reichert ◽  
Fabrizio Fenicia

Abstract. The widespread application of deterministic hydrological models in research and practice calls for suitable methods to describe their uncertainty. The errors of those models are often heteroscedastic, non-Gaussian and correlated due to the memory effect of errors in state variables. Still, the residual error models used to describe them are usually highly simplified, often neglecting some of the mentioned characteristics. This is partly because general approaches to account for all of those characteristics are lacking, and partly because the benefits of more complex error models in terms of achieving better predictions are unclear. For example, the joint inference of autocorrelation and hydrological model parameters has been shown to lead to poor predictions. This study presents a framework for likelihood functions for deterministic hydrological models that considers correlated errors and allows for an arbitrary probability distribution of observed streamflow. The choice of this distribution reflects prior knowledge about the non-normality of the errors. The framework was used to evaluate increasingly complex error models with data of varying temporal resolution (daily to hourly) in two catchments. We found that (1) the joint inference of hydrological and error model parameters leads to poor predictions when conventional error models with stationary correlation are used, which confirms previous studies; (2) the quality of these predictions worsens with higher temporal resolution of the data; (3) accounting for a non-stationary autocorrelation of the errors, i.e. allowing it to vary between wet and dry periods, largely alleviates the observed problems; and (4) accounting for autocorrelation leads to more realistic model output, as shown by signatures such as the flashiness index. Overall, this study contributes to a better description of the residual errors of deterministic hydrological models.


Author(s):  
Rodric Mérimé Nonki ◽  
André Lenouo ◽  
Christopher J. Lennard ◽  
Raphael M. Tshimanga ◽  
Clément Tchawoua

Abstract. Potential Evapotranspiration (PET) plays a crucial role in water management, including the design and management of irrigation systems, and it is an essential input to hydrological models. Direct measurement of PET is difficult, time-consuming and costly; therefore, a number of different methods are used to compute this variable. This study compares the two sensitivity analysis approaches generally used to assess the impact of PET on hydrological model performance. We conducted the study in the Upper Benue River Basin (UBRB) located in northern Cameroon, using two lumped conceptual rainfall-runoff models and nineteen PET estimation methods. A Monte-Carlo procedure was implemented to calibrate the hydrological models for each PET input while considering the same objective functions. Although there were notable differences between the PET estimation methods, the performance of the hydrological models was satisfactory for each PET input in both the calibration and validation periods. The optimized model parameters were significantly affected by the PET input, especially the parameter responsible for transforming PET into actual ET. The performance of the hydrological models was insensitive to the PET input under a dynamic sensitivity approach, while it was significantly affected under a static sensitivity approach. This means that the over- or under-estimation of PET is compensated for by the model parameters during model recalibration. Model performance was insensitive to rescaled PET inputs under both the dynamic and static sensitivity approaches. These results demonstrate that the effect of the PET input on model performance depends on the sensitivity analysis approach used, and suggest that the dynamic approach is more suitable from a hydrological modeling perspective.
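The difference between the two sensitivity approaches can be sketched in a few lines. This is a schematic with a deliberately trivial one-parameter model (all names and numbers invented for illustration): static sensitivity holds the parameters from one reference calibration fixed while swapping PET inputs, whereas dynamic sensitivity recalibrates for each PET input, letting the ET parameter compensate for PET bias.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(theta, pet, rain):
    """Toy 'hydrological' model: actual ET is a fraction theta of PET,
    runoff is the remainder of rainfall (illustrative only)."""
    return np.maximum(rain - theta * pet, 0.0)

def nse(q_sim, q_obs):
    """Nash-Sutcliffe efficiency."""
    return 1.0 - np.sum((q_sim - q_obs) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

def calibrate(pet, rain, q_obs, grid=np.linspace(0.1, 2.0, 200)):
    """Grid-search calibration of the single parameter."""
    return max(grid, key=lambda th: nse(toy_model(th, pet, rain), q_obs))

# synthetic forcing and "observed" flow generated with a reference PET
n = 365
rain = rng.gamma(2.0, 3.0, n)
pet_ref = rng.gamma(4.0, 1.0, n)
q_obs = toy_model(0.8, pet_ref, rain) + rng.normal(0.0, 0.2, n)

# a second PET method that systematically over-estimates PET
pet_inputs = {"method_a": pet_ref, "method_b": 1.3 * pet_ref}

# static sensitivity: parameters fixed from one reference calibration
theta_ref = calibrate(pet_inputs["method_a"], rain, q_obs)
static = {k: nse(toy_model(theta_ref, p, rain), q_obs) for k, p in pet_inputs.items()}

# dynamic sensitivity: recalibrate for every PET input, so the ET
# parameter absorbs the PET bias during recalibration
dynamic = {k: nse(toy_model(calibrate(p, rain, q_obs), p, rain), q_obs)
           for k, p in pet_inputs.items()}
print(static, dynamic)
```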


Environments ◽  
2019 ◽  
Vol 6 (12) ◽  
pp. 124
Author(s):  
Johannes Ranke ◽  
Stefan Meinecke

In the kinetic evaluation of chemical degradation data, degradation models are fitted to the data by varying degradation model parameters to obtain the best possible fit. Today, constant variance of the deviations of the observed data from the model is frequently assumed (error model "constant variance"). Allowing for a different variance for each observed variable ("variance by variable") has been shown to be a useful refinement. On the other hand, experience gained in analytical chemistry shows that the absolute magnitude of the analytical error often increases with the magnitude of the observed value, which can be explained by an error component that is proportional to the true value. Therefore, kinetic evaluations of chemical degradation data using a two-component error model with a constant component (absolute error) and a component increasing with the observed values (relative error) are newly proposed here as a third possibility. In order to check which of the three error models is most adequate, they have been used in the evaluation of datasets obtained from pesticide evaluation dossiers published by the European Food Safety Authority (EFSA). For quantitative comparisons of the fits, the Akaike information criterion (AIC) was used, as the commonly used error level defined by the FOrum for the Coordination of pesticide fate models and their USe (FOCUS) is based on the assumption of constant variance. A set of fitting routines that allows for robust fitting of all three error models was developed within the mkin software package. Comparisons using parent-only degradation datasets, as well as datasets with the formation and decline of transformation products, showed that in many cases, the two-component error model proposed here provides the most adequate description of the error structure. While it was confirmed that the variance by variable error model often provides an improved representation of the error structure in kinetic fits with metabolites, it could be shown that in many cases, the two-component error model leads to a further improvement. In addition, it can be applied to parent-only fits, potentially improving the accuracy of the fit towards the end of the decline curve, where concentration levels are lower.
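mkin itself is an R package, so the snippet below is only a language-neutral sketch of the proposed error model: the standard deviation of each observation is modeled as sigma(y_hat) = sqrt(sigma_low^2 + (rsd_high * y_hat)^2), combining an absolute and a relative component, and the degradation and error parameters are fitted jointly by maximum likelihood. Data and starting values are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def sfo(t, parent0, k):
    """Single first-order (SFO) decay curve."""
    return parent0 * np.exp(-k * t)

def neg_loglik(params, t, y):
    """Negative log-likelihood under the two-component error model:
    sigma(y_hat) = sqrt(sigma_low**2 + (rsd_high * y_hat)**2)."""
    parent0, k, sigma_low, rsd_high = params
    y_hat = sfo(t, parent0, k)
    sigma = np.sqrt(sigma_low**2 + (rsd_high * y_hat) ** 2)
    return -norm.logpdf(y, loc=y_hat, scale=sigma).sum()

# synthetic parent-only dataset (days, % of applied amount)
t = np.array([0, 1, 3, 7, 14, 28, 56, 90], dtype=float)
y = np.array([101.2, 86.5, 66.2, 40.1, 17.8, 3.5, 0.8, 0.3])

res = minimize(neg_loglik, x0=[100.0, 0.1, 1.0, 0.05], args=(t, y),
               method="Nelder-Mead")
# model comparison as in the paper: AIC = 2 * n_params + 2 * neg_loglik
aic = 2 * 4 + 2 * res.fun
print(res.x, aic)
```

Refitting with `sigma = sigma_low` alone (the "constant variance" model) and comparing the two AIC values reproduces, in miniature, the model selection exercise described in the abstract.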


1990 ◽  
Vol 47 (12) ◽  
pp. 2315-2327 ◽  
Author(s):  
Terrance J. Quinn II ◽  
Richard B. Deriso ◽  
Philip R. Neal

We review techniques for estimating the abundance of migratory populations and develop a new technique based on catch-age data from geographic regions and our earlier technique, catch-age analysis with auxiliary information (Deriso et al. 1985, 1989). Data requirements are catch-age data over several years, some auxiliary information, and migration rates among regions. The model, containing parameters for year-class abundance, age selectivity, full-recruitment fishing mortality, and catchability, is fitted to data with a nonlinear least squares algorithm. We present a measurement error model and a process error model, and favor the process error model because all model parameters can be jointly estimated. When applied to data on Pacific halibut, the process error model converges readily and produces estimates with no significant bias. These estimates have relatively high precision compared to those from analyses that did not incorporate migration information. The error structure used in a model has a more significant impact on parameter estimates than the migration rates. A sensitivity study of migration rates shows a sensitivity of the order of the rates themselves.
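In generic terms (these are textbook forms, not necessarily the authors' exact specification), the two formulations differ in where the stochasticity enters: a measurement error model treats the population dynamics as exact and the catch observations as noisy, whereas a process error model places the noise in the dynamics themselves:

$$
\text{measurement error:}\;\; C_{a,t} = \hat{C}_{a,t}(\theta)\,e^{\epsilon_{a,t}},
\qquad
\text{process error:}\;\; N_{a+1,t+1} = N_{a,t}\,e^{-Z_{a,t}}\,e^{\eta_{a,t}},
$$

where $C_{a,t}$ is the catch at age $a$ in year $t$, $N_{a,t}$ the abundance, $Z_{a,t}$ the total mortality, and $\epsilon$, $\eta$ the respective error terms.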


2020 ◽  
Author(s):  
Wenyan Qi ◽  
Jie Chen ◽  
Lu Li ◽  
Chong-yu Xu ◽  
Jingjing Li ◽  
...  

Abstract. To provide an accurate estimate of global water resources and to help formulate water allocation policies, global hydrological models (GHMs) have been developed. However, it is difficult to obtain parameter values for GHMs, which results in large uncertainty in the estimation of the global water balance components. In this study, a framework is developed for building GHMs based on the parameter regionalization of catchment-scale conceptual hydrological models: an appropriate global-scale regionalization scheme (GSRS) and conceptual hydrological models are used to simulate runoff at the grid scale globally, and the network response function (NRF) routing method is used to aggregate the grid runoff into catchment streamflow. To achieve this, five regionalization methods (i.e. the global mean method, the spatial proximity method, the physical similarity method, the physical similarity method considering distance, and the regression method) are first tested for four conceptual hydrological models over thousands of medium-sized catchments (2500–50,000 km²) around the world to find an appropriate global-scale regionalization scheme. The selected GSRS is then used to regionalize the conceptual model parameters for global land grids at a 0.5° × 0.5° latitude-longitude resolution. The results show that: (1) the spatial proximity method with inverse distance weighting (IDW) and the output average option (SPI-OUT) offers the best regionalization solution, and the greatest gains of the SPI-OUT method were achieved when the mean distance between the donor catchments and the target catchment was no more than 1500 km; (2) a Kling-Gupta efficiency (KGE) value of 0.5 is a good threshold for selecting donor catchments; and (3) the four GHMs established with the framework were able to produce reliable streamflow simulations. Overall, the proposed framework can be used with any conceptual hydrological model for estimating global water resources, even though uncertainty remains with respect to the choice of conceptual model.
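A minimal sketch of the SPI-OUT idea (function names mine, not the authors' code): parameters are transferred from several nearby donor catchments, the model is run once per donor parameter set, and the simulated outputs, not the parameters, are averaged with inverse-distance weights.

```python
import numpy as np

def idw_weights(dist_km, power=1.0, eps=1e-9):
    """Inverse-distance weights for donor catchments."""
    w = 1.0 / (np.asarray(dist_km, dtype=float) + eps) ** power
    return w / w.sum()

def spi_out(model, donor_params, dist_km, forcing):
    """SPI-OUT regionalization: average the *outputs* obtained with each
    donor's parameter set, rather than averaging the parameters first."""
    w = idw_weights(dist_km)
    sims = np.stack([model(p, forcing) for p in donor_params])
    return w @ sims  # weighted average over donors, per time step

# toy usage: a linear "model" with one parameter per donor
model = lambda p, x: p * x
forcing = np.linspace(0.0, 1.0, 10)
print(spi_out(model, donor_params=[0.8, 1.2, 1.0],
              dist_km=[50.0, 300.0, 900.0], forcing=forcing))
```

Averaging outputs rather than parameters avoids running the model with a blended parameter set that no donor catchment actually supports.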


2020 ◽  
Vol 14 (3) ◽  
pp. 369-379
Author(s):  
Kanglin Xing ◽  
◽  
J. R. R. Mayer ◽  
Sofiane Achiche

The scale and master ball artefact (SAMBA) method allows the estimation of the inter- and intra-axis error parameters, as well as the volumetric errors (VEs), of a five-axis machine tool by using simple ball artefacts and the machine tool's own touch-trigger probe. The SAMBA method can use two different machine error models, named after the number of model parameters, i.e., the “13” and “84” machine error models, to estimate the VEs. In this study, we compare these two machine error models when using VE vector directions and values to monitor the machine tool condition for three cases of machine malfunction: 1) a C-axis encoder fault, 2) an induced X-axis linear positioning error, and 3) a simulated induced straightness error fault. The results show that the “13” machine error model produces more concentrated VE directions but smaller VE values when compared with the “84” machine error model; furthermore, although both models can recognize the three faults and are effective in monitoring the machine tool condition, the “13” machine error model achieves a better recognition rate of the machine condition. This paper provides guidelines for selecting machine error models for the SAMBA method when using VEs to monitor the machine tool condition.


2017 ◽  
Vol 8 (4) ◽  
pp. 557-575 ◽  
Author(s):  
Manjula Devak ◽  
C. T. Dhanya

Abstract. Different hydrological models provide diverse perspectives of the system being modeled and, inevitably, are imperfect representations of reality. Irrespective of the choice of model, the major source of error in any hydrological modeling exercise is the uncertainty in the determination of model parameters, owing to the mismatch between model complexity and available data. Sensitivity analysis (SA) methods help to identify the parameters that have a strong impact on the model outputs and hence influence the model response. In addition, SA assists in analyzing the interactions between parameters, their preferable ranges and their spatial variability, all of which influence the model outcomes. Various methods are available to perform SA, and their perturbation techniques vary widely. This study attempts to categorize SA methods according to the assumptions and methodologies they involve, and the pros and cons associated with each SA method are discussed. The sensitivity of model results to the choice of space and time resolutions is highlighted, and the applicability of different SA approaches for various purposes is examined. This study further elaborates on the objectives behind the selection and application of SA approaches in hydrological modeling, hence providing valuable insights into limitations, knowledge gaps, and future research directions.
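As one concrete example of the simplest class of methods surveyed in such reviews (local, one-at-a-time perturbation), the sketch below approximates a relative sensitivity index for each parameter; the function names and the toy model are mine, for illustration only.

```python
import numpy as np

def oat_sensitivity(f, theta0, rel_step=0.05):
    """Local one-at-a-time (OAT) sensitivity: relative change in model
    output per relative perturbation of each parameter."""
    y0 = f(theta0)
    s = np.empty(len(theta0))
    for i, th in enumerate(theta0):
        pert = theta0.copy()
        pert[i] = th * (1.0 + rel_step)       # perturb one parameter
        s[i] = ((f(pert) - y0) / y0) / rel_step
    return s

# toy example: output depends strongly on the first two parameters only
f = lambda th: th[0] ** 2 + 3.0 * th[1] + 0.1 * th[2]
print(oat_sensitivity(f, np.array([1.0, 1.0, 1.0])))
```

Global methods (e.g. variance-based ones) replace the single perturbed point with sampling over the whole parameter space, at a correspondingly higher computational cost.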


2014 ◽  
Vol 11 (9) ◽  
pp. 10787-10828 ◽  
Author(s):  
R. P. Bartholomeus ◽  
J. H. Stagge ◽  
L. M. Tallaksen ◽  
J. P. M. Witte

Abstract. Hydrological modeling frameworks require an accurate representation of evaporation fluxes for an appropriate quantification of, e.g., the soil moisture budget, droughts, recharge and groundwater processes. Many frameworks have used the concept of potential evaporation, often estimated for different vegetation classes by multiplying the evaporation from a reference surface ("reference evaporation") by crop-specific scaling factors ("crop factors"). Though this two-step potential evaporation approach undoubtedly has practical advantages, the empirical nature of both the reference evaporation methods and the crop factors limits its usability in extrapolations and under non-stationary climatic conditions. In this paper we assess the sensitivity of potential evaporation estimates for different vegetation classes obtained with the two-step approach when it is calibrated under a non-stationary climate. We used the past century's time series of observed climate, containing non-stationary signals of multi-decadal atmospheric oscillations, global warming, and global dimming/brightening, to evaluate the sensitivity of potential evaporation estimates to the choice and length of the calibration period. We show that using empirical coefficients outside their calibration range may lead to systematic differences between process-based and empirical reference evaporation methods, and to systematic errors in the estimated potential evaporation components. Such extrapolations of time-variant model parameters are relevant not only for the calculation of potential evaporation, but for hydrological modeling in general, and they may limit the temporal robustness of hydrological models.
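In its generic form, the two-step approach referred to here computes potential evaporation for a vegetation class $v$ as

$$
E_{\mathrm{pot},v} \;=\; k_{c,v} \cdot E_{\mathrm{ref}},
$$

where $E_{\mathrm{ref}}$ is the evaporation of a reference surface and $k_{c,v}$ is an empirically derived crop factor. Both ingredients are empirical, which is exactly what makes the estimates sensitive to the climate period over which they were calibrated.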


2008 ◽  
Vol 10 (2) ◽  
pp. 153-162 ◽  
Author(s):  
B. G. Ruessink

When a numerical model is to be used as a practical tool, its parameters should preferably be stable and consistent, that is, possess a small uncertainty and be time-invariant. Using data and predictions of alongshore mean currents flowing on a beach as a case study, this paper illustrates how parameter stability and consistency can be assessed using Markov chain Monte Carlo. Within a single calibration run, Markov chain Monte Carlo estimates the posterior probability density function of the parameters, its mode being the best-fit parameter set. Parameter stability is investigated by stepwise adding new data to a calibration run, while consistency is examined by calibrating the model on different datasets of equal length. The results for the present case study indicate that various tidal cycles with strong (say, >0.5 m/s) currents are required to obtain stable parameter estimates, and that the best-fit model parameters and the underlying posterior distribution are strongly time-varying. This inconsistent parameter behavior may reflect unresolved variability of the processes represented by the parameters, or may represent compensatory behavior for temporal violations of specific model assumptions.
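The stability check described here can be emulated with a minimal random-walk Metropolis sampler. This is a generic sketch (toy likelihood and names mine), not the authors' implementation: the posterior is re-estimated on growing subsets of the data, and stable parameters show up as estimates that stop drifting as data are added.

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis(log_post, theta0, n_iter=20000, step=0.1):
    """Minimal random-walk Metropolis sampler. The mode of the returned
    chain approximates the best-fit parameter set."""
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# parameter stability: re-estimate the posterior on growing data subsets
data = rng.normal(0.5, 1.0, 800)                   # toy observations
for n in (100, 200, 400, 800):
    log_post = lambda th, n=n: -0.5 * np.sum((data[:n] - th[0]) ** 2)
    chain = metropolis(log_post, theta0=[0.0])
    print(n, chain[len(chain) // 2:].mean())       # mean after burn-in
```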

