Flood frequency analysis for the Red River at Winnipeg

2001 ◽  
Vol 28 (3) ◽  
pp. 355-362 ◽  
Author(s):  
Donald H Burn ◽  
N K Goel

This paper reviews the flood frequency characteristics of the Red River at Winnipeg. The impacts of persistence in the flood series on estimates of flood quantiles and their associated confidence intervals are examined. This is done by generating a large number of data sequences using a mixed noise model that preserves the short-term and long-term correlation structures of the observed flood series. The results reveal that persistence in the data series can lead to a slight increase in the expected flood magnitude for a given return period. More importantly, persistence is shown to dramatically increase the uncertainty associated with estimated flood quantiles. The 117-year flood series for the Red River at Winnipeg is demonstrated to be equivalent to roughly 45 years of independent data. Key words: flood frequency, extreme events, simulation, historical data.
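The equivalence between a persistent record and a shorter independent one can be sketched with a lag-1 autocorrelated (AR(1)) surrogate; the AR(1) form, the correlation value of 0.45 and the standard-normal margins are illustrative assumptions standing in for the paper's mixed noise model, not the authors' fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

def ar1_series(n, rho, rng):
    """Lag-1 autocorrelated (AR(1)) standard-normal series -- a crude
    stand-in for the paper's mixed noise model."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for t in range(1, n):
        x[t] = rho * x[t - 1] + np.sqrt(1.0 - rho**2) * rng.standard_normal()
    return x

def effective_record_length(n, rho):
    """Effective number of independent observations when estimating
    the mean of an AR(1) process with lag-1 correlation rho."""
    return n * (1.0 - rho) / (1.0 + rho)

n, rho = 117, 0.45        # rho is a hypothetical lag-1 correlation
print(round(effective_record_length(n, rho), 1))

# Monte Carlo check: persistence inflates the variance of sample
# statistics, i.e. widens the confidence intervals of estimates
means_iid = [ar1_series(n, 0.0, rng).mean() for _ in range(2000)]
means_ar1 = [ar1_series(n, rho, rng).mean() for _ in range(2000)]
print(np.var(means_ar1) > np.var(means_iid))
```

With a lag-1 correlation of 0.45, the formula gives roughly 44 effective years for a 117-year record, of the same order as the paper's 45 years; the actual figure depends on the full correlation structure, not only the lag-1 term.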

2014 ◽  
Vol 14 (5) ◽  
pp. 1283-1298 ◽  
Author(s):  
D. Lawrence ◽  
E. Paquet ◽  
J. Gailhard ◽  
A. K. Fleig

Abstract. Simulation methods for extreme flood estimation represent an important complement to statistical flood frequency analysis because a spectrum of catchment conditions potentially leading to extreme flows can be assessed. In this paper, stochastic, semi-continuous simulation is used to estimate extreme floods in three catchments located in Norway, all of which are characterised by flood regimes in which snowmelt often has a significant role. The simulations are based on SCHADEX, which couples a precipitation probabilistic model with a hydrological simulation such that an exhaustive set of catchment conditions and responses is simulated. The precipitation probabilistic model is conditioned by regional weather patterns, and a bottom–up classification procedure was used to define a set of weather patterns producing extreme precipitation in Norway. SCHADEX estimates for the 1000-year (Q1000) discharge are compared with those of several standard methods, including event-based and long-term simulations which use a single extreme precipitation sequence as input to a hydrological model, statistical flood frequency analysis based on the annual maximum series, and the GRADEX method. The comparison suggests that the combination of a precipitation probabilistic model with a long-term simulation of catchment conditions, including snowmelt, produces estimates for given return periods which are more in line with those based on statistical flood frequency analysis, as compared with the standard simulation methods, in two of the catchments. In the third case, the SCHADEX method gives higher estimates than statistical flood frequency analysis and further suggests that the seasonality of the most likely Q1000 events differs from that of the annual maximum flows. 
The semi-continuous stochastic simulation method highlights the importance of considering the joint probability of extreme precipitation, snowmelt rates and catchment saturation states when assigning return periods to floods estimated by precipitation-runoff methods. The SCHADEX methodology, as applied here, is dependent on observed discharge data for calibration of a hydrological model, and further study to extend its application to ungauged catchments would significantly enhance its versatility.


2011 ◽  
Vol 15 (3) ◽  
pp. 819-830 ◽  
Author(s):  
S. Das ◽  
C. Cunnane

Abstract. Flood frequency analysis is a necessary and important part of flood risk assessment and management studies. Regional flood frequency methods, in which flood data from groups of catchments are pooled together in order to enhance the precision of flood estimates at project locations, are an accepted part of such studies. This enhancement of precision rests on the assumption that the catchments pooled together are homogeneous in their flood-producing properties. If homogeneity is assured, a homogeneous pooling group of sites leads to a reduction in the error of quantile estimates relative to estimators based on the at-site data series alone. Homogeneous pooling groups are selected by using a previously nominated rule, and this paper examines how effective one such rule is in selecting homogeneous groups. The study, based on annual maximum series from 85 Irish gauging stations, examines how successful a common method of identifying pooling group membership is in selecting groups that actually are homogeneous. Each station has its own unique pooling group, selected by use of a Euclidean distance measure in catchment descriptor space, commonly denoted dij, with a minimum of 500 station-years of data in the pooling group. It was found that dij could be effectively defined in terms of catchment area, mean rainfall and baseflow index. The study then investigated how effective this selection method is in identifying groups of catchments that are actually homogeneous, as indicated by their L-CV values. The sampling distribution of L-CV (t2) in each pooling group and the 95% confidence limits about the pooled estimate of t2 were obtained by simulation. The t2 values of the selected group members were compared with these confidence limits both graphically and numerically. 
Of the 85 stations, only one station's pooling group has all of its members' t2 values within the confidence limits, while 7, 33 and 44 stations have one, two, and three or more t2 values, respectively, outside the confidence limits. The outcomes are also compared with the heterogeneity measures H1 and H2. The H1 values show an upward trend with the range of t2 values in the pooling group, whereas the H2 values show no such dependency. A selection of 27 pooling groups found to be heterogeneous was further examined with the help of box plots of catchment descriptor values, and one particular case is considered in detail. Overall, the results show that even with a carefully considered selection procedure, it is not certain that perfectly homogeneous pooling groups will be identified.
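The selection rule can be sketched as follows: standardise the catchment descriptors, rank candidate sites by Euclidean distance dij from the subject site, and add sites in order of increasing dij until the group holds at least 500 station-years. The descriptor values and record lengths below are invented for illustration; the L-CV estimator is the standard probability-weighted-moment form, not code from the paper.

```python
import numpy as np

def l_cv(x):
    """Sample L-CV t2 = l2 / l1 from probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b0 = x.mean()
    b1 = np.sum(np.arange(n) * x) / (n * (n - 1))
    l1, l2 = b0, 2.0 * b1 - b0
    return l2 / l1

# Hypothetical catchment descriptors: ln(AREA), ln(mean rainfall), BFI
desc = np.array([
    [5.2, 6.9, 0.45],   # site 0 = subject site
    [5.0, 7.0, 0.50],
    [6.1, 6.7, 0.40],
    [4.8, 7.1, 0.55],
])
record_years = np.array([210, 180, 160, 140])

# Standardise each descriptor, then Euclidean distance to the subject site
z = (desc - desc.mean(axis=0)) / desc.std(axis=0)
dij = np.linalg.norm(z - z[0], axis=1)

# Grow the pooling group in order of increasing dij until >= 500 station-years
group, years = [], 0
for j in np.argsort(dij):
    group.append(j)
    years += record_years[j]
    if years >= 500:
        break
print(sorted(group), years)
```

In the paper the homogeneity of the resulting group is then judged by whether each member's t2 falls within simulated 95% confidence limits about the pooled t2.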


2020 ◽  
Author(s):  
Alexandra Fedorova ◽  
Nataliia Nesterova ◽  
Olga Makarieva ◽  
Andrey Shikhov

In June 2019, an extreme flash flood formed on rivers of the Irkutsk region originating in the East Sayan mountains. This flood became the most hazardous in the region in the 80-year history of observations.

The greatest rise in water level was recorded on the Iya River at the town of Tulun (more than 9 m in three days). The recorded water level was more than 5 m above the dangerous mark of 850 cm and more than 2.5 m above the historical maximum observed in 1984.

The flood led to the catastrophic inundation of the town of Tulun; 25 people died and 8 went missing. According to a preliminary assessment, economic damage from the flood amounted to about half a billion euros.

Among the reasons discussed for the extreme flood of June 2019 are heavy rains as a result of climate change, melting of snow and glaciers in the East Sayan mountains, deforestation of river basins due to clearings and fires, and others.

The aim of the study was to analyze the factors that led to the formation of the catastrophic flood of June 2019 and to estimate the maximum discharge of the Iya River. For the calculations, the deterministic distributed hydrological model Hydrograph was applied, using observed data from meteorological stations and forecast values from the global weather forecast model ICON. The estimated discharge exceeded the previously observed maximum by about 50%.

The results of the study show that the recent flood damage was caused mainly by unprepared infrastructure. The safety dam built in the town of Tulun only ten years earlier was 2 m lower than the maximum water level observed in 2019. This case, and many others in Russia, suggests that flood frequency analysis of even long-term historical records may mislead design engineers into significantly underestimating the probability and magnitude of flash floods. There is evidence of precipitation regime transformations that directly contribute to the formation of dangerous hydrological phenomena. The details of the study for the Irkutsk region will be presented.


2013 ◽  
Vol 1 (6) ◽  
pp. 6785-6828 ◽  
Author(s):  
D. Lawrence ◽  
E. Paquet ◽  
J. Gailhard ◽  
A. K. Fleig

Abstract. Simulation methods for extreme flood estimation represent an important complement to statistical flood frequency analysis because a spectrum of catchment conditions potentially leading to extreme flows can be assessed. In this paper, stochastic, semi-continuous simulation is used to estimate extreme floods in three catchments located in Norway, all of which are characterised by flood regimes in which snowmelt often has a significant role. The simulations are based on SCHADEX, which couples a precipitation probabilistic model with a hydrological simulation such that an exhaustive set of catchment conditions and responses is simulated. The precipitation probabilistic model is conditioned by regional weather patterns, and a "bottom-up" classification procedure was used for defining a set of weather patterns producing extreme precipitation in Norway. SCHADEX estimates for the 1000 yr (Q1000) discharge are compared with those of several standard methods, including event-based and long-term simulations which use a single extreme precipitation sequence as input to a hydrological model, with statistical flood frequency analysis based on the annual maximum series, and with the GRADEX method. The comparison suggests that the combination of a precipitation probabilistic model with a long-term simulation of catchment conditions, including snowmelt, produces estimates for given return periods which are more in line with those based on statistical flood frequency analysis, as compared with the standard simulation methods, in two of the catchments. In the third case, the SCHADEX method gives higher estimates than statistical flood frequency analysis and further suggests that the seasonality of the most likely Q1000 events differs from that of the annual maximum flows. 
The semi-continuous stochastic simulation method highlights the importance of considering the joint probability of extreme precipitation, snowmelt rates and catchment saturation states when assigning return periods to floods estimated by precipitation-runoff methods. The SCHADEX methodology, as applied here, is dependent on observed discharge data for calibration of a hydrological model, and further study to extend its application to ungauged catchments would significantly enhance its versatility.


2019 ◽  
Vol 11 (4) ◽  
pp. 966-979
Author(s):  
Nur Amalina Mat Jan ◽  
Ani Shabri ◽  
Ruhaidah Samsudin

Abstract Non-stationary flood frequency analysis (NFFA) plays an important role in addressing the stationarity assumption (an independent and identically distributed flood series) that is no longer valid in infrastructure design. This motivates the development of new statistical models that identify changes in the probability distribution over time and provide a consistent flood estimation method under non-stationarity. The method of trimmed L-moments (TL-moments) with a time covariate is compared with the L-moment method for stationary and non-stationary generalized extreme value (GEV) models. The aims of the study are to investigate the behavior of the proposed TL-moments method under non-stationarity and to apply the method with the GEV distribution. The methods are compared by Monte Carlo simulation and a bootstrap-based method. The simulation study showed better performance of the TL-moments method at most trimming levels, TL(η,0) with η = 2, 3, 4, than of the L-moment method for all models (GEV1, GEV2, and GEV3). The TL-moments method also provides more efficient quantile estimates than the other methods for flood quantiles at higher return periods. The TL-moments method can produce better estimation results because it trims the lowest values and gives more weight to the largest values, which carry the important information.
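Sample TL-moments can be computed with the unbiased estimator of Elamir and Seheult, of which ordinary L-moments are the untrimmed special case. The sketch below shows the stationary TL(2,0) variant on a synthetic Gumbel annual-maximum series (invented parameters); the paper's non-stationary version additionally makes the GEV parameters functions of a time covariate, which is not reproduced here.

```python
from math import comb
import numpy as np

def tl_moment(x, r, t1=0, t2=0):
    """Unbiased sample TL-moment of order r with t1 lowest and t2
    highest conceptual observations trimmed (Elamir & Seheult
    estimator); t1 = t2 = 0 recovers the ordinary L-moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    total = 0.0
    for i in range(1, n + 1):
        w = sum((-1) ** k * comb(r - 1, k)
                * comb(i - 1, r + t1 - 1 - k) * comb(n - i, t2 + k)
                for k in range(r))
        total += w * x[i - 1]
    return total / (r * comb(n, r + t1 + t2))

# Synthetic annual-maximum series (Gumbel parent; values are invented)
rng = np.random.default_rng(7)
am = rng.gumbel(loc=100.0, scale=25.0, size=60)

l1, l2 = tl_moment(am, 1), tl_moment(am, 2)                # L-moments
tl1, tl2 = tl_moment(am, 1, t1=2), tl_moment(am, 2, t1=2)  # TL(2,0)
print(round(l2 / l1, 3), round(tl2 / tl1, 3))
```

Trimming only from below (t1 = η, t2 = 0) downweights the smallest observations, which is why the first TL-moment exceeds the ordinary sample mean and why the method is less sensitive to low outliers.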


2011 ◽  
Vol 8 (2) ◽  
pp. 3305-3351 ◽  
Author(s):  
S. Ahilan ◽  
J. J. O'Sullivan ◽  
M. Bruen

Abstract. This study explores influences which result in shifts of flood frequency distributions in Irish rivers. Generalised Extreme Value (GEV) type I distributions are recommended in Ireland for estimating flood quantiles. This paper presents the findings of an investigation that identified the GEV statistical distributions that best fit the annual maximum (AM) data series extracted from 172 gauging stations on 126 rivers in Ireland. Of these 126 rivers, 25 have multiple gauging stations. Analysis of this data was undertaken to explore hydraulic and hydro-geological factors that influence flood frequency distributions and whether shifts in distributions occur in the down-river direction. The methodology involved determining the shape parameter of GEV distributions that were fitted to AM data at each site and statistically testing this shape parameter to determine whether a type I, type II or type III distribution was valid. The classification of these distributions was further supported by moment and L-moment diagrams and probability plots. Results indicated that of the 143 stations with flow records exceeding 25 yr, data for 92 were best represented by GEV type I distributions, while those for a further 12 and 39 stations followed type II and type III distributions, respectively. The spatial, hydraulic and hydro-geological influences on flood frequency distributions were assessed by incorporating results on an Arc-GIS platform with individual layers showing karst features, flood attenuation polygons and lakes. These data reveal that type I distributions are spatially well represented throughout the country. The majority of type III distributions appear in four distinct clusters in well-defined geographical areas where attenuation influences from floodplains and lakes appear to be influential. The majority of type II distributions appear in a single cluster in a region in the west of the country that is characterised by a karst landscape. 
The presence of karst in river catchments would be expected to provide additional subsurface storage, and in this regard type III distributions might be expected. The prevalence of type II distributions in this area reflects the finite nature of this storage and its effect in extreme conditions, when the karst is saturated and further storage is no longer available. Results therefore indicate that in some instances assuming type I distributions is incorrect and may result in erroneous estimates of flood quantiles in these regions. Where actual data follow a type II distribution, flood quantiles may be underestimated, and for type III distributions overestimates may be expected.
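The core of the classification step can be sketched with scipy: fit a GEV to an AM series by maximum likelihood and read the type off the sign of the shape parameter. The fixed tolerance used here as a stand-in for a type I decision is an illustrative assumption; the paper applies a formal statistical test to the shape parameter, supported by (L-)moment diagrams and probability plots.

```python
import numpy as np
from scipy import stats

def classify_gev(am, tol=0.05):
    """Fit a GEV to an annual-maximum series by maximum likelihood and
    classify it by the sign of the shape parameter.  Note that scipy's
    genextreme uses c = -xi, so c < 0 is the heavy-tailed type II and
    c > 0 the upper-bounded type III."""
    c, loc, scale = stats.genextreme.fit(am)
    if abs(c) < tol:          # crude stand-in for a formal test
        return "type I", c
    return ("type III" if c > 0 else "type II"), c

# Synthetic AM series drawn from a heavy-tailed (type II) parent
am = stats.genextreme.rvs(-0.25, loc=100.0, scale=30.0,
                          size=300, random_state=42)
label, c = classify_gev(am)
print(label)
```

A heavy-tailed parent, as found in the karst cluster, yields a negative fitted c (positive shape in the EV2 sense), while attenuation-dominated catchments would drive c positive towards a bounded type III tail.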


2015 ◽  
Vol 10 (2) ◽  
pp. 698-706
Author(s):  
Bagher Heidarpour ◽  
Bahram Saghafian ◽  
Saeed Golian

The term "outlier" generally refers to single data points that appear to depart significantly from the trend of the other data. Outliers are classified into three types: incorrect observations, rare events resulting from essentially the same phenomenon as the other maxima, and rare events resulting from a different phenomenon. Flood frequency analysis was first performed on the complete data series (including the outlier) and then on the series with the outlier removed. Results revealed that omission of the outlier did not change the selected probability distribution (log-Pearson type III), but the design discharge for the 10 000-year return period fell by 60 percent, from 3320 m3/s to 1340 m3/s. Furthermore, the method proposed by the U.S. Water Resources Council (WRC) and the HEC-SSP software were applied in order to combine the outlier with the other systematic data and to modify the parameters of the statistical distribution. Using the WRC method, the estimated 10 000-year flood was 1907 m3/s when the outlier was assigned a 200-year return period and the parameters of the log-Pearson type III distribution were revised; this is a decrease of about 43 percent relative to the scenario that retains the outlier in the systematic record.
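The WRC (Bulletin 17B) screening that flags such a point in the first place can be sketched as the Grubbs-Beck high-outlier test on log-transformed discharges. The discharge values below are invented, and the K_N formula is the commonly used polynomial approximation to the 10% one-sided table, not the table itself.

```python
import numpy as np

def grubbs_beck_threshold(q):
    """High-outlier threshold of the Grubbs-Beck test as used in
    USWRC Bulletin 17B, applied to log10 discharges.  K_N is a common
    approximation to the 10% one-sided critical values."""
    y = np.log10(np.asarray(q, dtype=float))
    n = len(y)
    kn = -0.9043 + 3.345 * np.sqrt(np.log10(n)) - 0.4046 * np.log10(n)
    return 10.0 ** (y.mean() + kn * y.std(ddof=1))

# Hypothetical AM series (m3/s) with one suspiciously large flood
q = [310, 280, 450, 390, 520, 610, 295, 430, 480, 355,
     405, 500, 365, 440, 330, 475, 560, 415, 385, 2950]
thr = grubbs_beck_threshold(q)
print(max(q) > thr)   # the largest flood is flagged as a high outlier
```

Once flagged, Bulletin 17B does not simply discard the point; as in the paper, it is treated as historical information (here, a 200-year event) and the log-Pearson type III parameters are adjusted accordingly.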


2010 ◽  
Vol 7 (4) ◽  
pp. 4253-4290
Author(s):  
B. Guse ◽  
T. Hofherr ◽  
B. Merz

Abstract. A novel approach to consider additional spatial information in flood frequency analyses, especially for the estimation of discharges with recurrence intervals larger than 100 years, is presented. For this purpose, large flood quantiles, i.e. pairs of a discharge and its corresponding recurrence interval, as well as an upper bound discharge, are combined within a mixed bounded distribution function. Large flood quantiles are derived using probabilistic regional envelope curves (PRECs) for all sites of a pooling group. These PREC flood quantiles are introduced into an at-site flood frequency analysis by assuming that they are representative for the range of recurrence intervals which is covered by PREC flood quantiles. For recurrence intervals above a certain inflection point, a Generalised Extreme Value (GEV) distribution function with a positive shape parameter is used. This GEV asymptotically approaches an upper bound derived from an empirical envelope curve. The resulting mixed distribution function is composed of two distribution functions, which are connected at the inflection point. This method is applied to 83 streamflow gauges in Saxony, Germany. Our analysis illustrates that the presented mixed bounded distribution function adequately considers PREC flood quantiles as well as an upper bound discharge. The introduction of both into an at-site flood frequency analysis improves the quantile estimation. A sensitivity analysis reveals that, for the target recurrence interval of 1000 years, the flood quantile estimation is less sensitive to the selection of an empirical envelope curve than to the selection of the PREC discharges and of the inflection point of the mixed bounded distribution function.
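The two-piece idea can be sketched as a quantile function that switches from an at-site GEV to an upper-bounded GEV at the inflection return period. All parameters below are hypothetical, not the fitted Saxony values, and for simplicity the two pieces are not matched at the inflection point, whereas the paper connects the two distribution functions there.

```python
from scipy import stats

def mixed_quantile(T, lower_dist, upper_dist, T_inflect):
    """Return-period quantile from a two-piece model: an at-site GEV
    below the inflection return period, an upper-bounded GEV above it."""
    dist = lower_dist if T <= T_inflect else upper_dist
    return dist.ppf(1.0 - 1.0 / T)

# Hypothetical parameters.  In scipy's genextreme, c > 0 gives an
# upper bound at loc + scale / c -- here the envelope-curve discharge.
at_site = stats.genextreme(c=-0.05, loc=300.0, scale=90.0)
bounded = stats.genextreme(c=0.20, loc=400.0, scale=150.0)
upper_bound = 400.0 + 150.0 / 0.20   # 1150 m3/s

for T in (50, 100, 1000):
    print(T, round(mixed_quantile(T, at_site, bounded, T_inflect=100)))
```

The bounded piece guarantees that even very large recurrence intervals never produce a discharge above the empirical envelope, which is the property the mixed distribution is designed to enforce.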


2018 ◽  
Author(s):  
Yenan Wu ◽  
Upmanu Lall ◽  
Carlos H.R. Lima ◽  
Ping-an Zhong

Abstract. We develop a hierarchical, multilevel Bayesian model for reducing uncertainties in local (at-site) and regional (ungauged or short-record sites) flood frequency analysis. This model is applied to the annual maximum streamflow of 17 gauged sites in the Huaihe River basin, China. A Generalized Extreme Value (GEV) distribution is considered for each site, and its location and scale parameters depend on the site's drainage area. We assume the hyper-parameters have non-informative (independent, uniform) prior distributions and sample from the posterior distribution by MCMC using Gibbs sampling. For comparison, ordinary GEV fitting by maximum likelihood estimation (MLE) and the index flood method fitted by L-moments are also applied. The local simulation results show that for most sites the 95% credible intervals simulated by the hierarchical Bayesian model are narrower than the at-site GEV outputs, thus reducing uncertainty. By comparison, the homogeneity assumption of the index flood method often leads to large deviations from the empirical flood frequency curve. Cross-validated flood quantiles and associated uncertainty intervals are also derived. These results show that the proposed model can better estimate the flood quantiles and their uncertainty than the index flood method.
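The structural backbone of such a model is the dependence of at-site GEV parameters on drainage area. A minimal non-Bayesian sketch of that idea, with entirely synthetic areas, exponents and GEV parameters, fits each site separately and then recovers the area-scaling of the location parameter by log-log regression; the actual paper replaces this two-stage fit with joint MCMC sampling under the hierarchical priors.

```python
import numpy as np
from scipy import stats

# Synthetic region: location parameter scales with drainage area as
# mu = a * A**b, the kind of relation the hierarchy pools across sites.
rng = np.random.default_rng(0)
areas = np.array([500.0, 1200.0, 3000.0, 8000.0, 20000.0])  # km2

locs = []
for A in areas:
    mu = 2.0 * A ** 0.6          # "true" location for this site
    am = stats.genextreme.rvs(-0.1, loc=mu, scale=0.3 * mu,
                              size=40, random_state=rng)
    c, loc, scale = stats.genextreme.fit(am)   # at-site ML fit
    locs.append(loc)

# Recover the scaling exponent b from the at-site fits (log-log)
slope, intercept = np.polyfit(np.log(areas), np.log(locs), 1)
print(round(slope, 2))   # close to the true exponent 0.6
```

Because the regression borrows strength from all sites, the same relation can be evaluated at an ungauged site's drainage area, which is what makes the hierarchical formulation useful for regional estimation.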

