Characterizing and reducing equifinality by constraining a distributed catchment model with regional signatures, local observations, and process understanding

2017 ◽  
Vol 21 (7) ◽  
pp. 3325-3352 ◽  
Author(s):  
Christa Kelleher ◽  
Brian McGlynn ◽  
Thorsten Wagener

Abstract. Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explore how each of these constraint classes reduced the number of behavioral parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the distributed hydrology–soil–vegetation model (DHSVM). As a demonstrative exercise, we investigated model performance across 10 000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal measurements of snow water equivalent time series reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating pattern expectations of groundwater table depth across the catchment. Our results suggest that utilizing a hierarchical approach based on regional datasets, observations, and expert knowledge to identify behavioral parameter sets can reduce equifinality and bolster more careful application and simulation of spatiotemporal processes via distributed modeling at the catchment scale.
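The hierarchical screening idea described above can be sketched in a few lines. A minimal, illustrative example follows, with a toy exponential-recession model standing in for a full DHSVM run and invented threshold values for the constraints; none of the numbers are the paper's.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(params, t):
    """Toy 'model': a damped recession whose shape depends on two parameters.
    Stands in for a full DHSVM run, which is far too costly to show here."""
    k, s0 = params
    return s0 * np.exp(-k * t)

# Synthetic 'observed' streamflow from a known truth plus noise
t = np.arange(100)
q_obs = simulate((0.05, 10.0), t) + rng.normal(0, 0.1, t.size)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect, <0 is worse than the mean."""
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Step 1: Monte Carlo sampling of parameter sets (10 000 in the paper)
params = np.column_stack([rng.uniform(0.01, 0.2, 2000),   # recession constant
                          rng.uniform(5.0, 15.0, 2000)])  # initial storage
scores = np.array([nse(simulate(p, t), q_obs) for p in params])

# Step 2: hierarchical screening -- each constraint class keeps only the
# sets that pass it, mimicking regional -> hydrograph -> internal filters
behavioural = params[scores > 0.7]                  # hydrograph-fit constraint
behavioural = behavioural[behavioural[:, 0] < 0.1]  # e.g. a regional signature bound
print(f"{len(behavioural)} behavioural sets remain")
```

Each successive filter can only shrink the behavioural set, which is how the paper's constraint classes progressively reduce equifinality.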


2020 ◽  
Author(s):  
Ondrej Hotovy ◽  
Michal Jenicek

<p>Seasonal snowpack significantly influences catchment runoff and thus represents an important component of the hydrological cycle. Due to climate change, changes in precipitation distribution and intensity, as well as a shift from snowfall to rain, are expected in the future. As a result, rain-on-snow events, which are considered one of the main causes of winter and spring floods, may occur more frequently.</p><p>The objectives of this study are 1) to evaluate the frequency, inter-annual variability and extremity of past rain-on-snow events based on existing measurements and 2) to simulate the effect of the predicted increase in air temperature on the occurrence of rain-on-snow events in the future. We selected 59 near-natural mountain catchments in Czechia with a significant snow influence on runoff and with long time series (>35 years) of daily hydrological and meteorological variables. A semi-distributed conceptual model, HBV-light, was used to simulate the individual components of the water cycle at the catchment scale. The model was calibrated for each study catchment using 100 calibration trials, which resulted in a corresponding number of optimized parameter sets. Model performance was evaluated against observed runoff and snow water equivalent. Defining rain-on-snow events by threshold values for air temperature, snow depth, rain intensity and decrease in snow water equivalent allowed us to analyse inter-annual variations and trends in rain-on-snow events during the study period 1980-2014 and to explain the role of different catchment attributes.</p><p>The preliminary results show that a significant change in rain-on-snow events related to increasing air temperature is not clearly evident. Since both air temperature and elevation appear to be important drivers of rain-on-snow events, their occurrence during the winter season increases as the snowfall fraction decreases. In contrast, a decrease in the total number of events was observed due to the shortening of the period with snow cover on the ground. The modelling approach also raised further questions related to model structure and parameterization, specifically how well individual model procedures and parameters represent real natural processes. Understanding potential model artefacts is important when using HBV or similar bucket-type models for impact studies, such as modelling the impact of climate change on catchment runoff.</p>
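The threshold-based event definition described above translates directly into a boolean mask over daily data. A minimal sketch follows; the data and the threshold values (5 mm of rain, 100 mm snow depth, air temperature above 0 °C, decreasing SWE) are illustrative, not the study's.

```python
import numpy as np
import pandas as pd

# Daily inputs for one winter (synthetic): the real study uses >35-year
# station records; the thresholds below are illustrative only.
days = pd.date_range("2013-11-01", periods=10, freq="D")
df = pd.DataFrame({
    "t_air": [ -3, -1,  2,  4,  1, -2,  3,  5,  0, -4],   # deg C
    "rain":  [  0,  0, 12,  8,  0,  0,  6, 15,  0,  0],   # mm/day
    "swe":   [ 40, 45, 44, 35, 35, 38, 37, 25, 25, 28],   # mm
    "depth": [200,220,215,180,180,195,190,140,140,155],   # mm snow depth
}, index=days)

# A rain-on-snow day: rain above a threshold falling on an existing
# snowpack, with SWE decreasing (rain contributes to melt/outflow)
ros = (
    (df["rain"] >= 5)          # rain intensity threshold
    & (df["depth"] >= 100)     # snowpack present
    & (df["t_air"] > 0)        # liquid precipitation likely
    & (df["swe"].diff() < 0)   # snowpack is losing water
)
print(df.index[ros].date)
```

Counting `ros` per winter then gives the inter-annual event series on which trends are assessed.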


2019 ◽  
Vol 12 (3) ◽  
pp. 933-953 ◽  
Author(s):  
Jeremy C. Ely ◽  
Chris D. Clark ◽  
David Small ◽  
Richard C. A. Hindmarsh

Abstract. Earth's extant ice sheets are of great societal importance given their ongoing and potential future contributions to sea-level rise. Numerical models of ice sheets are designed to simulate ice-sheet behaviour in response to climate changes, but improving them requires validation against observations. The direct observational record of extant ice sheets is limited to a few recent decades, but there is a large and growing body of geochronological evidence spanning millennia constraining the behaviour of palaeo-ice sheets. Hindcasts can be used to improve model formulations and study interactions between ice sheets, the climate system and landscape. However, ice-sheet modelling results have inherent quantitative errors stemming from parameter uncertainty and their internal dynamics, leading many modellers to perform ensemble simulations, while uncertainty in geochronological evidence necessitates expert interpretation. Quantitative tools are essential to examine which members of an ice-sheet model ensemble best fit the constraints provided by geochronological data. We present the Automated Timing Accordance Tool (ATAT version 1.1), used to quantify differences between model results and geochronological data on the timing of ice-sheet advance and/or retreat. To demonstrate its utility, we perform three simplified ice-sheet modelling experiments of the former British–Irish ice sheet. These illustrate how ATAT can be used to quantify model performance, either by using the discrete locations where the data originated together with dating constraints or by comparing model outputs with empirically derived reconstructions that have used these data along with wider expert knowledge. The ATAT code is made available and can be used by ice-sheet modellers to quantify the goodness of fit of hindcasts. ATAT may also be useful for highlighting data inconsistent with glaciological principles or reconstructions that cannot be replicated by an ice-sheet model.
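The core of a timing-accordance comparison is simple: at each dated site, difference the modelled deglaciation time against the dated constraint and aggregate. The sketch below is not ATAT itself; the site names, grid, and ages are invented for illustration.

```python
import numpy as np

# Dated retreat constraints: a grid cell and a calibrated deglaciation age
# (ka BP). Values are invented; ATAT reads real geochronological databases.
sites = {
    "site_A": {"cell": (3, 4), "age_ka": 18.2},
    "site_B": {"cell": (7, 2), "age_ka": 15.6},
}

# One ensemble member: a grid holding the modelled deglaciation time (ka BP)
# of every cell, e.g. extracted from an ice-sheet model's thickness history
model_deglac = np.full((10, 10), 20.0)
model_deglac[3, 4] = 17.5
model_deglac[7, 2] = 16.4

def timing_rmse(deglac_grid, sites):
    """Root-mean-square mismatch between modelled and dated deglaciation."""
    diffs = [deglac_grid[s["cell"]] - s["age_ka"] for s in sites.values()]
    return float(np.sqrt(np.mean(np.square(diffs))))

print(f"timing RMSE = {timing_rmse(model_deglac, sites):.2f} ka")
```

Ranking ensemble members by such a score identifies which hindcasts best honour the geochronological constraints.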


2017 ◽  
Vol 49 (1) ◽  
pp. 41-59
Author(s):  
Torsten Starkloff ◽  
Jannes Stolte ◽  
Rudi Hessel ◽  
Coen Ritsema

Abstract Shallow (<1 m deep) snowpacks on agricultural areas are an important hydrological component in many countries, as they determine how much meltwater is potentially available for overland flow, causing soil erosion and flooding at the end of winter. It is therefore important to understand the development of shallow snowpacks in a spatially distributed manner. This study combined field observations with spatially distributed snow modelling using the UEBGrid model for three consecutive winters (2013–2015) in southern Norway. Model performance was evaluated by comparing the spatially distributed snow water equivalent (SWE) measurements over time with the simulated SWE. UEBGrid replicated SWE development at the catchment scale with satisfactory accuracy for all three winters. The different calibration approaches that were necessary for winters 2013 and 2015 showed the delicacy of modelling changes in shallow snowpacks. In particular, the refreezing of meltwater, and the prevention of meltwater runoff and infiltration by frozen soils and ice layers, can make simulations of shallow snowpacks challenging.
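UEBGrid solves a full energy balance per grid cell, which is beyond a short example. As a much simpler stand-in, the sketch below shows the basic bookkeeping of a simulated SWE time series (accumulate snowfall, remove melt) using a toy degree-day scheme with invented parameter values, followed by the kind of SWE-versus-observation RMSE evaluation used in the study.

```python
import numpy as np

def degree_day_swe(temp, precip, ddf=3.0, t_melt=0.0, t_snow=1.0):
    """Toy degree-day snowpack: accumulate snowfall below t_snow, melt at
    ddf mm/degC/day above t_melt. UEBGrid solves a full energy balance;
    this is only a minimal stand-in with illustrative parameters."""
    swe, out = 0.0, []
    for t, p in zip(temp, precip):
        if t <= t_snow:                      # precipitation falls as snow
            swe += p
        melt = max(0.0, ddf * (t - t_melt))  # potential daily melt
        swe = max(0.0, swe - melt)
        out.append(swe)
    return np.array(out)

temp   = np.array([-5, -3, -1, 0, 2, 4, 1, -2, 3, 6], dtype=float)  # deg C
precip = np.array([10,  5,  8, 0, 0, 2, 0,  4, 0, 0], dtype=float)  # mm/day
swe_sim = degree_day_swe(temp, precip)

# Evaluate against (synthetic) observed SWE, as the study does over time
swe_obs = swe_sim + np.random.default_rng(1).normal(0, 1, swe_sim.size)
rmse = np.sqrt(np.mean((swe_sim - swe_obs) ** 2))
print(swe_sim, f"RMSE = {rmse:.2f} mm")
```

In a distributed setting this evaluation is repeated per cell, so the spatial pattern of SWE error can be mapped as well as its evolution in time.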




2020 ◽  
Author(s):  
Ondrej Nedelcev ◽  
Michal Jenicek

<p>Seasonal snowpack is an important part of the water cycle and has a large influence on the runoff regime in mountain catchments of Central Europe. However, snow water equivalent (SWE) has been decreasing in many mountain regions over recent decades, and spring snowmelt occurs earlier in the year. This study aimed 1) to analyse long-term changes and trends in selected snowpack characteristics, such as SWE, snow cover duration, snowmelt onset and melt-out, in 40 mountain catchments in Czechia in the period 1965–2014 and 2) to relate the detected changes to changes in air temperature and snowfall fraction at different elevations. Since the availability of measured SWE time series at the catchment scale is limited, the conceptual semi-distributed hydrological model HBV-light was used to simulate daily SWE for defined elevation zones. Besides SWE, the model simulated other water balance components, such as runoff, soil moisture and groundwater recharge. An integrated multi-variable calibration procedure was used to calibrate the model, and both observed runoff and SWE were used to evaluate model performance. Seasonal and monthly means of SWE, as well as snow cover duration, snowmelt onset, snowmelt rates and melt-out, were calculated for individual catchments and elevation zones. The non-parametric Mann-Kendall test was used to detect potential trends in the simulated time series. The results showed significant decreasing trends in snowfall fraction for all catchments and elevations in the study period, mostly due to an increase in air temperature. This resulted in decreasing snow storage in most catchments, especially in the western parts of Czechia. However, many regional differences exist, and no trends in SWE were detected in some catchments. Decreasing trends in snow cover duration were detected as well, mostly because of earlier snowmelt onset and melt-out.</p>
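The Mann-Kendall test used above is straightforward to implement. A minimal version follows (without the tie correction that a production implementation would include), applied to an invented declining SWE series.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Non-parametric Mann-Kendall trend test (no tie correction here).
    Returns the S statistic, the Z score and a two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = x.size
    # S counts concordant minus discordant pairs over all i < j
    s = 0.0
    for i in range(n - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p

# A declining synthetic SWE series (mm), one value per winter:
# the kind of trend the study detects in snow storage
swe = [320, 300, 310, 280, 260, 270, 230, 220, 200, 190]
s, z, p = mann_kendall(swe)
print(f"S={s}, Z={z:.2f}, p={p:.4f}")
```

A negative S with a small p-value flags a significant decreasing trend, as reported for snowfall fraction and snow cover duration.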


2020 ◽  
Vol 15 (4) ◽  
pp. 351-361
Author(s):  
Liwei Huang ◽  
Arkady Shemyakin

Skewed t-copulas have recently become popular as a tool for modeling non-linear dependence in statistics. In this paper we consider three different versions of skewed t-copulas, introduced by Demarta and McNeil; Smith, Gan and Kohn; and Azzalini and Capitanio. Each of these versions represents a generalization of the symmetric t-copula model, allowing for a different treatment of the lower and upper tails. Each of them has certain advantages in mathematical construction, inferential tools and interpretability. Our objective is to apply models based on the different types of skewed t-copulas to the same financial and insurance applications. We consider comovements of stock index returns and times-to-failure of related vehicle parts under the warranty period. In both cases the treatment of the lower and upper tails of the joint distributions is of special importance. Skewed t-copula model performance is compared to the benchmark cases of Gaussian and symmetric Student t-copulas. Instruments of comparison include information criteria, goodness of fit and tail dependence. Special attention is paid to methods of estimating the copula parameters. Some technical problems with the implementation of maximum likelihood estimation and the method of moments suggest the use of Bayesian estimation. We discuss the accuracy and computational efficiency of Bayesian estimation versus MLE. A Metropolis-Hastings algorithm with block updates is suggested to deal with the intractability of the conditionals.
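Tail dependence, one of the comparison instruments named above, can be estimated empirically from pseudo-observations. The sketch below computes lower- and upper-tail conditional exceedance probabilities on data simulated from a Gaussian copula (which is tail-independent and symmetric); the skewed-t fits in the paper would instead show asymmetric tails. The threshold q = 0.05 is an illustrative choice.

```python
import numpy as np

def empirical_tail_dependence(u, v, q=0.05):
    """Empirical tail-dependence estimates for pseudo-observations u, v in
    (0,1): P(V < q | U < q) and P(V > 1-q | U > 1-q)."""
    u, v = np.asarray(u), np.asarray(v)
    lower = np.mean(v[u < q] < q) if np.any(u < q) else 0.0
    upper = np.mean(v[u > 1 - q] > 1 - q) if np.any(u > 1 - q) else 0.0
    return lower, upper

# Simulate from a Gaussian copula via correlated normals, then
# rank-transform the margins to uniforms (pseudo-observations)
rng = np.random.default_rng(0)
n, rho = 100_000, 0.7
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
u = (np.argsort(np.argsort(z[:, 0])) + 0.5) / n
v = (np.argsort(np.argsort(z[:, 1])) + 0.5) / n

lower, upper = empirical_tail_dependence(u, v)
print(f"lower = {lower:.3f}, upper = {upper:.3f}")
```

For the Gaussian copula the two estimates are close to each other and shrink toward zero as q decreases; a skewed t-copula would exhibit a clear lower/upper asymmetry.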


Atmosphere ◽  
2021 ◽  
Vol 12 (2) ◽  
pp. 238
Author(s):  
Pablo Contreras ◽  
Johanna Orellana-Alvear ◽  
Paul Muñoz ◽  
Jörg Bendix ◽  
Rolando Célleri

The Random Forest (RF) algorithm, a decision-tree-based technique, has become a promising approach for applications addressing runoff forecasting in remote areas. This machine learning approach can overcome the limitations of scarce spatio-temporal data and of the physical parameters needed by process-based hydrological models. However, the influence of RF hyperparameters is still uncertain and needs to be explored. The aim of this study is therefore to analyze the sensitivity of RF runoff forecasting models of varying lead time to the hyperparameters of the algorithm. For this, models were trained using (a) default and (b) extensive hyperparameter combinations through a grid-search approach that allows the optimal set to be reached. Model performance was assessed based on the R2, %Bias, and RMSE metrics. We found that: (i) the most influential hyperparameter is the number of trees in the forest; however, the combination of the tree-depth and number-of-features hyperparameters produced the highest variability (instability) in the models. (ii) Hyperparameter optimization significantly improved model performance for longer lead times (12 and 24 h). For instance, the performance of the 12 h forecasting model under default RF hyperparameters improved to R2 = 0.41 after optimization (a gain of 0.17). However, for short lead times (4 h) there was no significant model improvement (0.69 < R2 < 0.70). (iii) There is a range of values for each hyperparameter in which model performance is not significantly affected but remains close to the optimum, so trade-offs among hyperparameter values can produce similarly high model performance. The improvements after optimization can be explained from a hydrological point of view: for lead times longer than the concentration time of the catchment, generalization ability tends to rely more on hyperparameter tuning than on what the models can learn from the input data. This insight can help in the development of operational early warning systems.
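The grid-search procedure over the three hyperparameters highlighted above (number of trees, tree depth, features per split) can be sketched with scikit-learn. The synthetic lagged-feature data and the grid values below are illustrative, not the study's.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Synthetic regression data standing in for lagged precipitation/runoff
# features; the real study forecasts runoff at several lead times
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = X[:, 0] * 2 + X[:, 1] + rng.normal(0, 0.3, 500)

# Grid over the hyperparameters the study found most influential
grid = {
    "n_estimators": [50, 200],    # number of trees in the forest
    "max_depth": [3, None],       # depth of each tree
    "max_features": [2, "sqrt"],  # features tried per split
}
search = GridSearchCV(RandomForestRegressor(random_state=0), grid,
                      scoring="r2", cv=3).fit(X, y)
print(search.best_params_, round(search.best_score_, 2))
```

Comparing `best_score_` against the score of a default-hyperparameter model reproduces, in miniature, the default-versus-optimized comparison the study performs per lead time.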


2021 ◽  
Vol 9 (5) ◽  
pp. 467
Author(s):  
Mostafa Farrag ◽  
Gerald Corzo Perez ◽  
Dimitri Solomatine

Many grid-based spatial hydrological models suffer from the complexity of setting up a coherent spatial structure to calibrate such a complex, highly parameterized system. Several essential aspects of model building, namely the spatial resolution, the limitations of the routing equation, and the calibration of spatial parameters and their influence on modeling results, are decisions that are often made without adequate analysis. In this research, we experimentally analyze the grid discretization level, the integration of processes, and the routing concepts. The HBV-96 model is set up for each cell, and the cells are then integrated into an interlinked modeling system (Hapi). The Jiboa River Basin in El Salvador is used as a case study. The first concept tested is the temporal response of the model structure, which is highly linked to runoff dynamics. By changing the description of the runoff generation model, we explore the responses to events. Two routing models are considered: Muskingum, which routes the runoff from each cell along the river network, and Maxbas, which routes the runoff directly to the outlet. The second concept is the spatial representation, where the model is built and tested at different spatial resolutions (500 m, 1 km, 2 km, and 4 km). The results show that the sensitivity to spatial resolution is highly linked to the routing method; routing sensitivity influenced model performance more than the spatial discretization, and allowing for a coarser discretization makes the model simpler and computationally faster. A slight performance improvement is gained by using different parameter values for each cell, and the 2 km cell size yields the lowest model error. The proposed hydrological modeling codes have been published as open source.
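The Muskingum scheme mentioned above routes flow through a reach with the classic recursion O2 = C0·I2 + C1·I1 + C2·O1. A minimal single-reach sketch follows; the storage time k, weighting factor x, and the inflow hydrograph are illustrative values, not those calibrated for the Jiboa basin.

```python
import numpy as np

def muskingum_route(inflow, k=2.0, x=0.2, dt=1.0):
    """Single-reach Muskingum routing: O2 = C0*I2 + C1*I1 + C2*O1.
    k (storage time) and x (weighting factor) are illustrative values;
    note C0 + C1 + C2 = 1, so mass is conserved."""
    denom = k - k * x + 0.5 * dt
    c0 = (0.5 * dt - k * x) / denom
    c1 = (0.5 * dt + k * x) / denom
    c2 = (k - k * x - 0.5 * dt) / denom
    out = [inflow[0]]  # start routed flow at the initial inflow
    for i2, i1 in zip(inflow[1:], inflow[:-1]):
        out.append(c0 * i2 + c1 * i1 + c2 * out[-1])
    return np.array(out)

# A simple triangular inflow hydrograph (m3/s)
inflow = np.array([10, 20, 40, 60, 50, 35, 25, 18, 14, 12, 10], dtype=float)
outflow = muskingum_route(inflow)
print(outflow.round(1))
```

The routed peak is attenuated and delayed relative to the inflow peak; in a gridded model this recursion is applied reach by reach down the river network, whereas Maxbas applies a single transfer function straight to the outlet.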

