An insight into flood frequency for design floods

Author(s):
M. Mohssen

2020
Author(s):
Gang Zhao
Paul Bates
Jeffrey Neal
Bo Pang

Abstract. Design flood estimation is a fundamental task in hydrology. In this research, we propose a machine-learning-based approach to estimate design floods globally. This approach involves three stages: (i) estimating at-site flood frequency curves for global gauging stations using the Anderson-Darling test and a Bayesian MCMC method; (ii) clustering these stations into subgroups with a K-means model based on twelve globally available catchment descriptors; and (iii) developing a regression model in each subgroup for regional design flood estimation using the same descriptors. A total of 11793 stations globally were selected for model development, and three widely used regression models were compared for design flood estimation. The results showed that: (1) the proposed approach achieved the highest accuracy for design flood estimation when using all twelve descriptors for clustering, and the performance of the regression was improved by considering more descriptors during training and validation; (2) a support vector machine regression provided the highest prediction performance among all regression models tested, with a root mean square normalised error of 0.708 for 100-year return period flood estimation; (3) 100-year design floods in tropical, arid, temperate, cold and polar climate zones could be reliably estimated, with relative mean biases (RBIAS) of −0.199, −0.233, −0.169, 0.179 and −0.091 respectively; (4) this machine-learning-based approach shows considerable improvement over the index-flood-based method introduced by Smith et al. (2015, https://doi.org/10.1002/2014WR015814) for design flood estimation at global scales, with an average RBIAS in estimation of less than 18 % for 10-, 20-, 50- and 100-year design floods. We conclude that the proposed approach is a valid method to estimate design floods anywhere on the global river network, improving our prediction of the flood hazard, especially in ungauged areas.
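As an illustration, the two error metrics reported in this abstract can be computed as below. The exact definitions the authors use may differ slightly; this is a sketch under the common convention that RBIAS averages the relative errors and the root mean square normalised error is their root mean square:

```python
def rbias(estimated, observed):
    """Relative mean bias: average of (est - obs) / obs over stations."""
    return sum((e - o) / o for e, o in zip(estimated, observed)) / len(observed)

def rmsne(estimated, observed):
    """Root mean square normalised error over stations."""
    n = len(observed)
    return (sum(((e - o) / o) ** 2
                for e, o in zip(estimated, observed)) / n) ** 0.5
```

For example, estimates of 110 and 90 against observations of 100 and 100 give an RBIAS of 0 (the over- and underestimation cancel) but an RMSNE of 0.1, which is why both metrics are reported.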


2021
Author(s):
Anne Fangmann
Uwe Haberlandt

Flood frequency analysis (FFA) has long been the standard procedure for obtaining design floods for all kinds of purposes. Ideally, the data at the basis of the statistical operations have a high temporal resolution, in order to facilitate a full account of the observed flood peaks and hence a precise model fitting and flood quantile estimation.

Unfortunately, high-resolution flows are rarely available. Often, average daily flows are the only available and sufficiently long basis for flood frequency analysis. This averaging naturally causes a significant smoothing of the flood wave, such that the “instantaneous” peak can no longer be observed. As a possible consequence, design floods derived from these data may be severely underestimated.

How strongly the original peaks are flattened, and how this influences the design flood estimation, depends on a variety of factors and varies from gauge to gauge. In this study we examine a range of errors arising from the use of daily instead of instantaneous flow data. These include differences in the observed individual flood peaks and mean annual maximum floods, as well as in the estimated distribution parameters and flood quantiles. The aim is to identify catchment-specific factors that influence the magnitude of these errors, and ultimately to provide a means for error assessment on the mere basis of local hydrological conditions, specifically where no high-resolution data are available.

The analyses are carried out on an all-German dataset of discharge gauges for which high-resolution data are available for at least 30 years. The classical FFA approach of fitting distributions to annual maximum series is utilized for error assessment. For identification of influencing factors, both the discharge series themselves and a catalogue of climatic and physiographic catchment descriptors are screened.
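The classical FFA step described above, fitting a distribution to an annual maximum series and reading off a quantile, can be sketched as follows. The choice of a Gumbel (EV1) distribution fitted by the method of moments is an assumption for illustration, not necessarily the authors' exact procedure; applying the same fit to instantaneous and daily-averaged annual maxima would expose the underestimation discussed above:

```python
import math

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

def gumbel_mom(ams):
    """Fit a Gumbel (EV1) distribution to an annual-maximum series
    by the method of moments; returns (location u, scale beta)."""
    n = len(ams)
    mean = sum(ams) / n
    var = sum((x - mean) ** 2 for x in ams) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    u = mean - EULER_GAMMA * beta
    return u, beta

def gumbel_quantile(u, beta, T):
    """Design flood for return period T (annual exceedance prob. 1/T)."""
    return u - beta * math.log(-math.log(1.0 - 1.0 / T))
```

Fitting both the instantaneous and the daily-averaged series and comparing, say, the 100-year quantiles then quantifies the averaging error for a given gauge.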


2000
Vol 4 (3)
pp. 463-482
Author(s):
A. M. Hashemi
M. Franchini
P. E. O’Connell

Abstract. Regionalized and at-site flood frequency curves exhibit considerable variability in their shapes, but the factors controlling the variability (other than sampling effects) are not well understood. An application of the Monte Carlo simulation-based derived distribution approach is presented in this two-part paper to explore the influence of climate, described by simulated rainfall and evapotranspiration time series, and basin factors on the flood frequency curve (ffc). The sensitivity analysis conducted in the paper should not be interpreted as reflecting possible climate changes, but the results can provide an indication of the changes to which the flood frequency curve might be sensitive. A single site Neyman Scott point process model of rainfall, with convective and stratiform cells (Cowpertwait, 1994; 1995), has been employed to generate synthetic rainfall inputs to a rainfall runoff model. The time series of the potential evapotranspiration (ETp) demand has been represented through an AR(n) model with seasonal component, while a simplified version of the ARNO rainfall-runoff model (Todini, 1996) has been employed to simulate the continuous discharge time series. All these models have been parameterised in a realistic manner using observed data and results from previous applications, to obtain ‘reference’ parameter sets for a synthetic case study. Subsequently, perturbations to the model parameters have been made one-at-a-time and the sensitivities of the generated annual maximum rainfall and flood frequency curves (unstandardised, and standardised by the mean) have been assessed. 
Overall, the sensitivity analysis described in this paper suggests that the soil moisture regime, and, in particular, the probability distribution of soil moisture content at the storm arrival time, can be considered as a unifying link between the perturbations to the several parameters and their effects on the standardised and unstandardised ffcs, thus revealing the physical mechanism through which their influence is exercised. However, perturbations to the parameters of the linear routing component affect only the unstandardised ffc. In Franchini et al. (2000), the sensitivity analysis of the model parameters has been assessed through an analysis of variance (ANOVA) of the results obtained from a formal experimental design, where all the parameters are allowed to vary simultaneously, thus providing deeper insight into the interactions between the different factors. This approach allows a wider range of climatic and basin conditions to be analysed and reinforces the results presented in this paper, which provide valuable new insight into the climatic and basin factors controlling the ffc.
Keywords: stochastic rainfall model; rainfall runoff model; simulation; derived distribution; flood frequency; sensitivity analysis
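The derived distribution idea can be caricatured with a toy continuous simulation: synthetic daily rainfall drives a single soil store, and the annual maximum runoff series yields an empirical flood frequency curve. All models and parameters here are hypothetical stand-ins, far simpler than the Neyman-Scott rainfall and ARNO rainfall-runoff models used in the paper:

```python
import random

def simulate_annual_maxima(n_years, wet_prob=0.3, mean_rain=20.0,
                           soil_capacity=50.0, seed=1):
    """Toy derived-distribution experiment: exponentially distributed
    daily rainfall feeds a saturation-excess soil store; runoff is the
    spill over capacity, and each year's maximum runoff is recorded."""
    rng = random.Random(seed)
    maxima = []
    storage = 0.0
    for _ in range(n_years):
        peak = 0.0
        for _ in range(365):
            rain = rng.expovariate(1.0 / mean_rain) if rng.random() < wet_prob else 0.0
            inflow = storage + rain
            runoff = max(inflow - soil_capacity, 0.0)     # saturation excess
            storage = min(inflow, soil_capacity) * 0.95   # slow drainage
            peak = max(peak, runoff)
        maxima.append(peak)
    return maxima

def empirical_ffc(maxima):
    """Empirical flood frequency curve via Weibull plotting positions:
    (return period, discharge) pairs, largest flood first."""
    ranked = sorted(maxima, reverse=True)
    n = len(ranked)
    return [((n + 1) / rank, q) for rank, q in enumerate(ranked, start=1)]
```

Perturbing a parameter such as `soil_capacity` and regenerating the curve is the one-at-a-time sensitivity experiment described above, in miniature.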


2021
Vol 25 (11)
pp. 5981-5999
Author(s):
Gang Zhao
Paul Bates
Jeffrey Neal
Bo Pang

Abstract. Design flood estimation is a fundamental task in hydrology. In this research, we propose a machine-learning-based approach to estimate design floods globally. This approach involves three stages: (i) estimating at-site flood frequency curves for global gauging stations using the Anderson–Darling test and a Bayesian Markov chain Monte Carlo (MCMC) method; (ii) clustering these stations into subgroups using a K-means model based on 12 globally available catchment descriptors; and (iii) developing a regression model in each subgroup for regional design flood estimation using the same descriptors. A total of 11 793 stations globally were selected for model development, and three widely used regression models were compared for design flood estimation. The results showed that (1) the proposed approach achieved the highest accuracy for design flood estimation when using all 12 descriptors for clustering; and the performance of the regression was improved by considering more descriptors during training and validation; (2) a support vector machine regression provided the highest prediction performance amongst all regression models tested, with a root mean square normalised error of 0.708 for 100-year return period flood estimation; (3) 100-year design floods in tropical, arid, temperate, cold and polar climate zones could be reliably estimated (i.e. <±25 % error), with relative mean bias (RBIAS) values of −0.199, −0.233, −0.169, 0.179 and −0.091 respectively; (4) the machine-learning-based approach developed in this paper showed considerable improvement over the index-flood-based method introduced by Smith et al. (2015, https://doi.org/10.1002/2014WR015814) for design flood estimation at global scales; and (5) the average RBIAS in estimation is less than 18 % for 10-, 20-, 50- and 100-year design floods. 
We conclude that the proposed approach is a valid method to estimate design floods anywhere on the global river network, improving our prediction of the flood hazard, especially in ungauged areas.
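Stages (ii) and (iii) can be sketched compactly with scikit-learn, assuming it is available: descriptors are standardised, stations are clustered with K-means, and one support vector regression is trained per subgroup. The data below are randomly generated stand-ins, not the paper's 11 793-station dataset:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical stand-in data: rows are gauging stations, columns the 12
# catchment descriptors; y plays the role of the at-site 100-year flood
# obtained from the Bayesian MCMC frequency fit (stage i).
rng = np.random.default_rng(0)
X = rng.random((200, 12))
y = 100.0 + 50.0 * X[:, 0] + 10.0 * rng.standard_normal(200)

scaler = StandardScaler().fit(X)
Xs = scaler.transform(X)

# Stage (ii): cluster the stations into subgroups on the descriptors.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(Xs)
labels = kmeans.labels_

# Stage (iii): train one SVR per subgroup on the same descriptors.
models = {k: SVR(kernel="rbf", C=100.0).fit(Xs[labels == k], y[labels == k])
          for k in range(4)}

# An ungauged site is assigned to its nearest cluster, then regressed.
site = scaler.transform(rng.random((1, 12)))
cluster = int(kmeans.predict(site)[0])
q100_new = float(models[cluster].predict(site)[0])
```

The cluster count and SVR hyperparameters here are placeholders; the paper selects them during training and validation.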


2021
Author(s):
Yanlai Zhou
Shenglian Guo
Chong-Yu Xu
Lihua Xiong
Hua Chen
...

Abstract. Quantifying the uncertainty of non-stationary flood frequency analysis is crucial for the planning and design of water engineering projects, but fundamentally challenging, especially in the presence of high climate variability and reservoir regulation. This study proposed an integrated approach that combines the Generalized Additive Model for Location, Scale and Shape parameters (GAMLSS) method, the Copula function and the Bayesian Uncertainty Processor (BUP) technique to make reliable probabilistic interval estimations of design floods. The reliability and applicability of the proposed approach were assessed using flood datasets collected from two hydrological monitoring stations located on the Hanjiang River of China. The precipitation and the reservoir index were selected as the explanatory variables for modeling the time-varying parameters of marginal and joint distributions using long-term (1954–2018) observed datasets. First, the GAMLSS method was employed to model and fit the time-varying characteristics of parameters in marginal and joint distributions. Second, the Copula function was employed to execute the point estimations of non-stationary design floods. Finally, the BUP technique was employed to perform the interval estimations of design floods based on the point estimations obtained from the Copula function. The results demonstrated that the proposed approach can provide reliable probabilistic interval estimations of design floods while reducing the uncertainty of non-stationary flood frequency analysis. Consequently, the integrated approach is a promising way to offer an indication of how design values can be estimated in a high-dimensional problem.
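The practical effect of a GAMLSS-style time-varying parameter can be illustrated with a nonstationary Gumbel margin whose location depends linearly on a covariate such as the reservoir index. The functional form and coefficients below are hypothetical; the paper's actual marginal distributions, covariates and Copula/BUP steps are considerably richer:

```python
import math

def nonstat_gumbel_quantile(T, covariate, a0, a1, beta):
    """Nonstationary Gumbel margin: location mu(t) = a0 + a1 * covariate(t),
    constant scale beta. Returns the T-year design flood for a given
    covariate value (e.g. annual precipitation or a reservoir index)."""
    mu = a0 + a1 * covariate
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))
```

With a negative coefficient `a1` (regulation damping floods), the 100-year design flood shrinks as the reservoir index rises, which is exactly why a single stationary design value is misleading in this setting.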


2020
Vol 24 (11)
pp. 5595-5619
Author(s):
Kolbjørn Engeland
Anna Aano
Ida Steffensen
Eivind Støren
Øyvind Paasche

Abstract. The Glomma River is the largest in Norway, with a catchment area of 154 450 km2. People living near the shores of this river are frequently exposed to destructive floods that impair local cities and communities. Unfortunately, design flood predictions are hampered by uncertainty since the standard flood records are much shorter than the requested return period and the climate is also expected to change in the coming decades. Here we combine systematic historical and paleo information in an effort to improve flood frequency analysis and better understand potential linkages to both climate and non-climatic forcing. Specifically, we (i) compile historical flood data from the existing literature, (ii) produce high-resolution X-ray fluorescence (XRF), magnetic susceptibility (MS), and computed tomography (CT) scanning data from a sediment core covering the last 10 300 years, and (iii) integrate these data sets in order to better estimate design floods and assess non-stationarities. Based on observations from Lake Flyginnsjøen, receiving sediments from Glomma only when it reaches a certain threshold, we can estimate flood frequency in a moving window of 50 years across millennia revealing that past flood frequency is non-stationary on different timescales. We observe that periods with increased flood activity (4000–2000 years ago and <1000 years ago) correspond broadly to intervals with lower than average summer temperatures and glacier growth, whereas intervals with higher than average summer temperatures and receding glaciers overlap with periods of reduced numbers of floods (10 000 to 4000 years ago and 2200 to 1000 years ago). The flood frequency shows significant non-stationarities within periods with increased flood activity, as was the case for the 18th century, including the 1789 CE (“Stor-Ofsen”) flood, the largest on record for the last 10 300 years at this site. 
Using the identified non-stationarities in the paleoflood record allowed us to estimate non-stationary design floods. In particular, we found that the design flood was 23 % higher during the 18th century than today, and that long-term trends in flood variability are intrinsically linked to the availability of snow in late spring, linking climate change to adjustments in flood frequency.
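The moving-window flood frequency count described above can be sketched as follows, given a list of dated flood layers from the sediment core; the window logic is generic and the example years are invented:

```python
def flood_counts(flood_years, start, end, window=50):
    """Flood frequency in a moving window: for each window start year w0,
    count dated flood layers with w0 <= year < w0 + window, stepping the
    window one year at a time across the record."""
    return [(w0, sum(1 for y in flood_years if w0 <= y < w0 + window))
            for w0 in range(start, end - window + 1)]
```

Plotting these counts against the window start year is what reveals the multi-millennial non-stationarity in flood activity.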


Water SA
2018
Vol 44 (3 July)
Author(s):
JJ Nathanael
JC Smithers
MJC Horan

In engineering and flood hydrology, the estimation of a design flood associates the magnitude of a flood with a level of exceedance, or return period, for a given site. The use of a regional flood frequency analysis (RFFA) approach improves the accuracy and reliability of estimates of design floods. However, no RFFA method is currently widely used in South Africa, despite a number of RFFA studies having been undertaken in Africa that include South Africa in their study areas. Hence, the performance of the current RFFA approaches needs to be assessed in order to determine the best approaches to use, and whether a new RFFA approach needs to be developed for use in South Africa. Through a review of the relevant literature it was found that the Meigh et al. (1997) method, the Mkhandi et al. (2000) method, the Görgens (2007) Joint Peak-Volume (JPV) method and the Haile (2011) method are available for application in a nationwide study. The results of the study show that the Haile method generally performs better than the other RFFA methods; however, it also consistently underestimates design floods. Due to the poor overall performance of the RFFA methods assessed, it is recommended that a new RFFA method be developed for application in design flood practice in South Africa.
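For context, the index-flood idea underlying several of the RFFA methods compared here can be sketched as: a dimensionless regional growth curve scales an at-site index flood (commonly the mean annual flood) to the target return period. The growth factors below are invented for illustration, not taken from any of the cited methods:

```python
def index_flood_estimate(index_flood, growth_curve, T):
    """Index-flood RFFA estimate: Q_T = growth factor(T) x at-site index
    flood. growth_curve maps return period (years) to a dimensionless
    regional growth factor derived from pooled station data."""
    return index_flood * growth_curve[T]

# Hypothetical regional growth curve and a site with a mean annual
# flood of 350 m^3/s:
growth_curve = {10: 1.6, 20: 1.9, 50: 2.3, 100: 2.6}
q100 = index_flood_estimate(350.0, growth_curve, 100)
```

Comparing such regional estimates against at-site fits at gauged locations is how studies of this kind score the competing RFFA methods.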


2020
Author(s):
Lei Yan
Lihua Xiong
Lingqi Li
Gusong Ruan
Chong-Yu Xu
...

In traditional flood frequency analysis, researchers typically assume that flood events result from a homogeneous flood population. In reality, however, flood events are likely to be generated by distinct flood generation mechanisms (FGMs), such as snowmelt-induced floods and rainfall-induced floods. To address this problem, the most popular current practice for mixture modeling of flood events is to use two-component mixture distributions (TCMD) without a priori classification of distinct FGMs, which can result in component distributions without physical reality or lead to a larger standard error of the estimated quantiles. To improve mixture distribution modeling in Norway, we first classify the flood series of 34 watersheds into snowmelt-induced long-duration floods and rainfall-induced short-duration floods based on an index named the flood timescale (FT), defined as the ratio of the flood volume to the peak value. A total of ten types of mixture distributions are considered in the application of the FT-based TCMD to model the flood events in Norway. The results indicate that the FT-based TCMD model can reduce the uncertainty in the estimation of design floods. The improved predictive ability of the FT-based TCMD model is largely due to its explicit recognition of distinct FGMs, enabling the determination of the weighting coefficient without optimization.
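A minimal sketch of the FT-based classification and the resulting two-component mixture, assuming Gumbel components purely for illustration (the paper tests ten mixture types): the mixture weight is simply the observed proportion of snowmelt-induced events, so no optimisation is needed:

```python
import math

def gumbel_cdf(x, u, beta):
    """Gumbel (EV1) CDF with location u and scale beta."""
    return math.exp(-math.exp(-(x - u) / beta))

def classify_by_timescale(volumes, peaks, threshold):
    """Flood timescale FT = volume / peak; FT above the threshold marks
    a snowmelt-induced (long-duration) flood, below it a rainfall flood."""
    return [v / p > threshold for v, p in zip(volumes, peaks)]

def tcmd_cdf(x, w, snowmelt_params, rainfall_params):
    """Two-component mixture CDF; w is the observed proportion of
    snowmelt-induced events, fixed by the FT classification rather
    than fitted by optimisation."""
    return (w * gumbel_cdf(x, *snowmelt_params)
            + (1 - w) * gumbel_cdf(x, *rainfall_params))
```

Design quantiles then come from inverting `tcmd_cdf` numerically; the component parameters (here hypothetical Gumbel pairs) would be fitted separately to each classified subsample.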


1966
Vol 24
pp. 322-330
Author(s):  
A. Beer

The investigations which I should like to summarize in this paper concern recent photo-electric luminosity determinations of O and B stars. Their final aim has been the derivation of new stellar distances, and some insight into certain patterns of galactic structure.

