How can expert knowledge increase the realism of conceptual hydrological models? A case study in the Swiss Pre-Alps


2018 ◽  
Vol 22 (8) ◽  
pp. 4425-4447 ◽  
Author(s):  
Manuel Antonetti ◽  
Massimiliano Zappa

Abstract. Both modellers and experimentalists agree that using expert knowledge can improve the realism of conceptual hydrological models. However, their use of expert knowledge differs at each step of the modelling procedure, which involves hydrologically mapping the dominant runoff processes (DRPs) occurring in a given catchment, parameterising these processes within a model, and allocating the model's parameters. Modellers generally use very simplified mapping approaches and apply their knowledge to constrain the model by defining parameter and process relational rules. In contrast, experimentalists usually prefer to invest all their detailed and qualitative process knowledge in obtaining as realistic a spatial distribution of DRPs as possible, and in defining narrow value ranges for each model parameter.

Runoff simulations are affected by equifinality and numerous other sources of uncertainty, which challenge the assumption that the more expert knowledge is used, the better the results will be. To test the extent to which expert knowledge can improve simulation results under uncertainty, we therefore applied a total of 60 modelling chain combinations, forced by five rainfall datasets of increasing accuracy, to four nested catchments in the Swiss Pre-Alps. These datasets include hourly precipitation data from automatic stations interpolated with Thiessen polygons and with the inverse distance weighting (IDW) method, as well as different spatial aggregations of Combiprecip, a combination of ground measurements and quantitative radar estimates of precipitation. To map the spatial distribution of the DRPs, three mapping approaches with different levels of involvement of expert knowledge were used to derive so-called process maps. Finally, both a typical modellers' top-down set-up relying on parameter and process constraints and an experimentalists' set-up based on bottom-up thinking and field expertise were implemented using a newly developed process-based runoff generation module (RGM-PRO). To quantify the uncertainty originating from the forcing data, the process maps, the model parameterisation, and the parameter allocation strategy, an analysis of variance (ANOVA) was performed.

The simulation results showed that (i) the modelling chains based on the most complex process maps performed slightly better than those based on less expert knowledge; (ii) the bottom-up set-up performed better than the top-down one when simulating short-duration events, but similarly to the top-down set-up when simulating long-duration events; (iii) the differences in performance arising from the different forcing data were due to compensation effects; and (iv) the bottom-up set-up can help identify uncertainty sources, but is prone to overconfidence, whereas the top-down set-up seems to accommodate uncertainties in the input data best. Overall, modellers' and experimentalists' concepts of model realism differ. This means that the level of detail a model needs in order to accurately reproduce the expected DRPs must be agreed on in advance.
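The ANOVA step above lends itself to a short illustration. The sketch below is ours, not the study's code: it assumes a hypothetical results table with one performance score per modelling chain combination, and decomposes the variance of the scores into contributions from the forcing data, the process map, and the set-up. All names and values are dummy placeholders.

```python
# A minimal sketch of ANOVA-based uncertainty attribution, assuming a
# hypothetical results table (dummy values, not results from the study).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One row per modelling chain combination: which forcing dataset,
# process map, and set-up were used, and the resulting score.
results = pd.DataFrame({
    "forcing":  ["thiessen", "idw", "combiprecip"] * 4,
    "proc_map": ["simple"] * 6 + ["detailed"] * 6,
    "setup":    (["top_down"] * 3 + ["bottom_up"] * 3) * 2,
    "score":    [0.61, 0.63, 0.70, 0.64, 0.66, 0.72,
                 0.65, 0.66, 0.71, 0.68, 0.70, 0.74],
})

# Fit a linear model with the factors as categorical predictors and
# decompose the variance of the scores into per-factor contributions.
model = ols("score ~ C(forcing) + C(proc_map) + C(setup)", data=results).fit()
print(sm.stats.anova_lm(model, typ=2))  # sum_sq column: variance share per factor
```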


2013 ◽  
Vol 44 (4) ◽  
pp. 673-689
Author(s):  
A. Wood ◽  
K. J. Beven

A number of hydrological models use a distribution function to represent the non-linear rainfall–runoff response of a catchment. In this study, the beta function is applied to represent a distribution of soil moisture storages in conjunction with fast and slow pathway routing. The BETA3 and BETA4 modules, presented in this paper, have a distribution of discrete storage elements whose water levels vary and are redistributed at each time step. PDM-BETA5 is an analytical solution with a structure similar to the commonly used probability distributed model (PDM). Model testing was performed on three catchments in the Northern Pennine region of England, and the performance of the BETA models was compared with a commonly used formulation of the PDM. The BETA models performed marginally better than the PDM in calibration, and parameter estimation was better with the BETA models than with the PDM. The BETA models also had a small advantage in validation on the hydrologically fast-responding test catchments.
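To make the distribution-function idea concrete, here is a minimal sketch of a probability-distributed store in the spirit of the PDM, with point storage capacities drawn from a beta distribution. The parameter values and the crude explicit time step are our own illustrative assumptions, not the BETA3, BETA4, or PDM-BETA5 formulations.

```python
# A minimal sketch of a probability-distributed soil moisture store:
# point storage capacities across the catchment follow a beta
# distribution, and rain falling on the saturated fraction becomes
# fast runoff. All parameter values are illustrative only.
from scipy.stats import beta

A, B = 2.0, 3.0   # beta shape parameters (hypothetical)
C_MAX = 100.0     # maximum point storage capacity in mm (hypothetical)

def saturated_fraction(c_star):
    """Fraction of the catchment whose capacity is below c_star (mm)."""
    return beta.cdf(c_star / C_MAX, A, B)

def step(c_star, rain):
    """One crude explicit time step: rain on the saturated fraction runs
    off fast; the rest fills the stores (drainage, evaporation, and the
    slow pathway, present in the real modules, are omitted here)."""
    fast_runoff = rain * saturated_fraction(c_star)
    c_star_new = min(c_star + rain, C_MAX)
    return c_star_new, fast_runoff

# Example: three hours of 5 mm/h rain on a half-filled distribution.
c_star = 50.0
for hour in range(3):
    c_star, q = step(c_star, 5.0)
    print(f"hour {hour}: critical capacity {c_star:.0f} mm, fast runoff {q:.2f} mm")
```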


2019 ◽  
Vol 19 (1) ◽  
pp. 19-40 ◽  
Author(s):  
Manuel Antonetti ◽  
Christoph Horat ◽  
Ioannis V. Sideris ◽  
Massimiliano Zappa

Abstract. Flash floods evolve rapidly during and after heavy precipitation events and represent a potential risk for society. To predict the timing and magnitude of the runoff peak, it is common to couple meteorological and hydrological models in a forecasting chain. However, hydrological models rely on strong simplifying assumptions and hence need to be calibrated. This makes their application difficult in catchments where no direct observation of runoff is available. To address this gap, a flash-flood forecasting chain is presented based on (i) a nowcasting product which combines radar and rain gauge rainfall data (CombiPrecip); (ii) meteorological data from state-of-the-art numerical weather prediction models (COSMO-1, COSMO-E); (iii) operationally available soil moisture estimations from the PREVAH hydrological model; and (iv) a process-based runoff generation module with no need for calibration (RGM-PRO). This last component uses information on the spatial distribution of dominant runoff processes from so-called maps of runoff types, which can be derived with mapping approaches involving increasing levels of expert knowledge. RGM-PRO is event-based and parametrised a priori based on the results of sprinkling experiments. This prediction chain was evaluated using data from April to September 2016 in the Emme catchment, a medium-sized flash-flood-prone basin in the Swiss Prealps. Two novel forecasting chains were set up with two different maps of runoff types, which allowed the sensitivity of the forecast performance to the mapping approach to be analysed. Furthermore, special emphasis was placed on the predictive power of the new forecasting chains in nested subcatchments when compared with a prediction chain including an original version of the runoff generation module of PREVAH calibrated for one event. Results showed a low sensitivity of the predictive power to the amount of expert knowledge included in the mapping approach: the forecasting chain including a map of runoff types with high involvement of expert knowledge did not guarantee more skill. In the larger basins of the Emme region, the process-based forecasting chains showed skill comparable to that of a prediction system including a conventional hydrological model. In the small nested subcatchments, although the process-based forecasting chains outperformed the original runoff generation module, no forecasting chain showed satisfactory skill in the sense of being useful for decision makers. Despite the short period available for evaluation, the preliminary outcomes of this study show that operational flash-flood predictions in ungauged basins can benefit from the use of information on runoff processes, as no long-term runoff measurements are needed for calibration.
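The abstract does not state which skill score underlies these comparisons; as a generic illustration of how a forecasting chain's runoff predictions might be scored against observations, the sketch below uses the Nash–Sutcliffe efficiency, a common choice in runoff modelling. All data are dummy values.

```python
# A minimal sketch of scoring simulated against observed runoff with the
# Nash-Sutcliffe efficiency (NSE): 1 is a perfect fit, 0 is no better
# than the mean of the observations, negative values are worse.
import numpy as np

def nse(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# Dummy hourly runoff (m^3/s) for a single event, for illustration only.
obs = [1.2, 2.5, 6.8, 9.1, 7.4, 4.3, 2.9]
sim = [1.0, 2.9, 6.1, 8.4, 7.9, 4.8, 3.2]
print(f"NSE = {nse(obs, sim):.2f}")
```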


2015 ◽  
Vol 28 (1) ◽  
pp. 237-240
Author(s):  
Gary Chartier
Keyword(s):  
Top Down ◽  

Peter T. Leeson's Anarchy Unbound offers an interesting collection of historical and theoretical arguments for the view that bottom-up social order is perfectly possible and at least sometimes preferable to order imposed from the top down.


2022 ◽  
Vol 76 (1) ◽  
Author(s):  
A. Panebianco ◽  
P. F. Gregorio ◽  
N. M. Schroeder ◽  
A. Marozzi ◽  
R. Ovejero ◽  
...  



Semantic Web ◽  
2020 ◽  
Vol 11 (6) ◽  
pp. 979-1005
Author(s):  
Sabrina Kirrane ◽  
Marta Sabou ◽  
Javier D. Fernández ◽  
Francesco Osborne ◽  
Cécile Robin ◽  
...  

The identification of research topics and trends is an important scientometric activity, as it can help guide the direction of future research. In the Semantic Web area, topic and trend detection was initially performed primarily through qualitative, top-down approaches that rely on expert knowledge. More recently, data-driven, bottom-up approaches have been proposed that offer a quantitative analysis of the evolution of a research domain. In this paper, we aim to provide a broader and more complete picture of Semantic Web topics and trends by adopting a mixed-methods design, which allows for the combined use of qualitative and quantitative approaches. Concretely, we build on a qualitative analysis of the main seminal papers, which adopts a top-down approach, and on quantitative results derived with three bottom-up, data-driven approaches (Rexplore, Saffron, PoolParty) applied to a corpus of Semantic Web papers published between 2006 and 2015. In this process, we use the latter both to “fact-check” the former and to derive key findings about the strengths and weaknesses of top-down and bottom-up approaches to research topic identification. Although we provide a detailed study of the past decade of Semantic Web research, the findings and the methodology are relevant not only for the Semantic Web community but for other research fields as well.
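As a toy illustration of the bottom-up, data-driven side of this comparison, the sketch below counts occurrences of candidate topic terms in paper titles per year. The corpus and term list are hypothetical stand-ins for the tool-based analyses (Rexplore, Saffron, PoolParty) used in the paper.

```python
# A minimal sketch of bottom-up topic trend detection: count how often
# candidate topic terms appear in paper titles per year. The corpus and
# term list are dummy placeholders.
from collections import Counter

papers = [
    (2006, "Reasoning over OWL ontologies"),
    (2010, "Publishing linked data on the Web"),
    (2014, "Linked data quality assessment"),
    (2015, "Embedding knowledge graphs"),
]
terms = ["ontologies", "linked data", "knowledge graphs"]

trend = Counter()
for year, title in papers:
    for term in terms:
        if term in title.lower():
            trend[term, year] += 1

for (term, year), n in sorted(trend.items()):
    print(f"{year}: {term!r} appears in {n} title(s)")
```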


Author(s):  
Edyta Sasin ◽  
Daryl Fougnie

Abstract. Does the strength of representations in long-term memory (LTM) depend on which type of attention is engaged? We tested participants' memory for objects seen during visual search. We compared implicit memory for two types of objects: related-context nontargets that grabbed attention because they matched the target-defining feature (i.e., color; top-down attention), and salient distractors that captured attention only because they were perceptually distracting (bottom-up attention). In Experiment 1, the salient distractor flickered, while in Experiment 2, the luminance of the salient distractor was alternated. Critically, salient and related-context nontargets produced equivalent attentional capture, yet related-context nontargets were remembered far better than salient distractors (and salient distractors were not remembered better than unrelated distractors). These results suggest that LTM depends not only on the amount of attention but also on the type of attention. Specifically, top-down attention is more effective in promoting the formation of memory traces than bottom-up attention.


2017 ◽  
Author(s):  
Markus Hrachowitz ◽  
Martyn Clark

Abstract. In hydrology, the two somewhat competing modelling philosophies of bottom-up and top-down approaches are the basis of most process-based models. Differing mostly (1) in their respective degree of detail in resolving the modelling domain and (2) in their respective degree of explicitly treating conservation laws, these two philosophies suffer from similar limitations. Nevertheless, a better understanding of their respective bases (i.e. micro-scale vs. macro-scale) as well as their respective shortcomings offers the potential to identify the complementary value of the two philosophies for improving our models. In this manuscript we analyse several frequently communicated beliefs and assumptions in order to identify, discuss and emphasize the functional similarity of the two modelling philosophies. We argue that deficiencies in model applications largely do not depend on the modelling philosophy but rather on the way a model is implemented. Based on the premises that top-down models can be implemented at any desired degree of detail and that any type of model remains to some degree conceptual, we argue that a convergence of the two modelling strategies may hold value for progressing the development of hydrological models.

