The effect of clustering of flood peaks on a flood risk analysis for the Red River

1985 ◽  
Vol 12 (1) ◽  
pp. 150-165 ◽  
Author(s):  
C. Booy ◽  
D. R. Morgan

The nearly 100 year record of spring flood peaks on the Red River at Winnipeg, Manitoba, shows a clustering of high annual peak flows that is possibly, but not likely, due to chance. A similar degree of clustering has been observed in other long-term geophysical records. It can be measured by means of the Hurst statistic. Clustering increases the uncertainty in the parameters of the probability distribution of peak flows estimated from the record. As such it profoundly affects the weight that must be given to the unusually high historical floods that preceded the period of record, in particular the 1826 and the 1852 floods. Incorporating this historical information in the probability analysis requires a time series model that tends to produce the appropriate degree of clustering. A fractional noise model was adopted for this purpose. Bayes' theorem was then used to update the distribution parameters, obtained from the record, with the additional information about the historical floods. The result shows the flood risk to the City of Winnipeg and the Red River Valley to be substantially higher than was estimated by conventional methods that assume serial independence of the peak flows. Key words: Red River floods, flood risk, historical floods, Hurst phenomenon, fractional noise, Bayesian probability distribution, Bayesian updating, time series.
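The Hurst statistic mentioned above can be illustrated with a minimal rescaled-range (R/S) sketch in pure Python; the white-noise series, the seed, and the crude K-type estimator below are illustrative assumptions, not the paper's actual procedure:

```python
import math
import random
import statistics

def rescaled_range(x):
    # R/S statistic: range of the cumulative mean-adjusted sums,
    # divided by the standard deviation of the series
    m = statistics.fmean(x)
    cum, s = [], 0.0
    for v in x:
        s += v - m
        cum.append(s)
    return (max(cum) - min(cum)) / statistics.pstdev(x)

def hurst_k(x):
    # Hurst's original crude estimator K, from R/S ~ (n/2)^K;
    # serially independent noise gives K near 0.5 (with a known
    # small-sample upward bias), persistent series give K > 0.5
    n = len(x)
    return math.log(rescaled_range(x)) / math.log(n / 2)

random.seed(42)
white = [random.gauss(0, 1) for _ in range(1000)]
print(round(hurst_k(white), 2))  # around 0.5-0.6 for white noise
```

A persistent (clustered) series such as fractional noise would push the estimate well above 0.5, which is the signature the paper exploits.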

1986 ◽  
Vol 13 (3) ◽  
pp. 365-374 ◽  
Author(s):  
C. Booy ◽  
L. M. Lye

The well-established occurrence of exceptionally high floods on the Red River prior to the record of annual peak flows at Winnipeg is an important factor in the flood risk assessment for that city and for the entire Red River valley. But the weight given to this occurrence is quite dependent on the autocorrelation structure assumed for the spring peak time series. It is therefore important to decide whether the clustering of high peak flows, which can be observed in the record, is a mere chance phenomenon or indeed a characteristic of the runoff process. In earlier studies this clustering was found to be significant in a statistical sense. The present study aims at finding a physical explanation for this particular type of correlation structure. It presents the accumulated basin storage (ABS) as a physically based parameter that measures average soil moisture conditions in the drainage basin. The reconstructed record of ABS values just prior to the spring runoff shows a very high first-order autocorrelation coefficient. Relatively wet and relatively dry soil conditions therefore tend to persist over long periods. Since the magnitude of the spring peak is significantly affected by soil moisture conditions prior to snowmelt, the structure of the annual ABS time series can be expected to be reflected in the peak flow time series. This was found to be the case. The study thus supports earlier conclusions based on statistical evidence that the conventional assumption of serially independent spring peak floods seriously underestimates the flood risk for the City of Winnipeg and the Red River valley. Key words: accumulated basin storage, Red River floods, simulation, time series, clustering, streamflow persistence, serial correlation, flood risk.
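The first-order autocorrelation coefficient central to the argument above can be computed directly; the AR(1) surrogate series and its persistence parameter `phi` below are invented for illustration, not reconstructed ABS data:

```python
import random
import statistics

def lag1_autocorr(x):
    # first-order (lag-1) sample autocorrelation coefficient
    m = statistics.fmean(x)
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(len(x) - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

# AR(1) surrogate with strong persistence, mimicking a highly
# autocorrelated ABS-like series (phi = 0.9 is an assumed value)
random.seed(1)
phi, x = 0.9, [0.0]
for _ in range(2000):
    x.append(phi * x[-1] + random.gauss(0, 1))
print(round(lag1_autocorr(x), 2))  # close to phi
```

A coefficient near zero would instead support the conventional serial-independence assumption that the paper argues against.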


2021 ◽  
Vol 13 (14) ◽  
pp. 2783
Author(s):  
Sorin Nistor ◽  
Norbert-Szabolcs Suba ◽  
Kamil Maciuk ◽  
Jacek Kudrys ◽  
Eduard Ilie Nastase ◽  
...  

This study evaluates the EUREF Permanent Network (EPN) station position time series of approximately 200 GNSS stations included in the Repro 2 reprocessing campaign, in order to characterize the dominant noise types and amplitudes and their impact on estimated velocities and their associated uncertainties. How well the different noise models represent the analysed data was inspected visually using the power spectral density of the residuals together with the estimated noise model, and the results are consistent with the calculated Allan deviation (ADEV), which indicates white plus flicker noise. The velocities resulting from the dominant noise model are compared with those obtained using the Median Interannual Difference Adjusted for Skewness (MIDAS). The results show that only 3 stations exhibit a dominant random-walk noise model, as opposed to the flicker and power-law noise models, for the horizontal and vertical components. We concluded that the velocities for the horizontal and vertical components are similar for MIDAS and maximum likelihood estimation (MLE), but we also found that the associated uncertainties from MIDAS are higher than those from MLE. Additionally, we concluded that there is a spatial correlation in noise amplitude, as well as in the differences in velocity uncertainties for the Up component.
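The Allan deviation used to discriminate noise types can be sketched for a toy series; the simple non-overlapping estimator and the synthetic white-noise input below are illustrative assumptions, not the EPN processing chain:

```python
import math
import random

def adev(x, m):
    # non-overlapping Allan deviation at averaging factor m:
    # RMS of half the squared differences of consecutive block means
    n = len(x) // m
    means = [sum(x[i * m:(i + 1) * m]) / m for i in range(n)]
    diffs = [means[i + 1] - means[i] for i in range(n - 1)]
    return math.sqrt(0.5 * sum(d * d for d in diffs) / len(diffs))

random.seed(7)
white = [random.gauss(0, 1) for _ in range(4096)]
# For white noise the ADEV falls as 1/sqrt(m); flicker noise
# would instead produce a flat ADEV curve across m.
a1, a16 = adev(white, 1), adev(white, 16)
print(round(a1 / a16, 1))  # roughly sqrt(16) = 4 for white noise
```

Plotting ADEV against the averaging factor on log-log axes gives the slope-based noise identification that complements the power-spectral-density inspection described above.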


2020 ◽  
Vol 3 (1) ◽  
pp. 37
Author(s):  
Toyi Maniki Diphagwe ◽  
Bernard Moeketsi Hlalele ◽  
Dibuseng Priscilla Mpakathi

The 2019/20 Australian bushfires burned over 46 million acres of land, killed 34 people, and left 3500 individuals homeless. The majority of the deaths and destroyed buildings were in New South Wales, while the Northern Territory accounted for approximately one third of the burned area. Many of the lost buildings were farm buildings, adding to the challenge of an agricultural recovery already complicated by ash-covered farmland and historic levels of drought. The present research therefore aimed at characterising veldfire risk in the study area using the Keetch-Byram Drought Index (KBDI). A 39-year time series was obtained from an online NASA database. Homogeneity and stationarity were checked with the non-parametric Pettitt's test and the Dickey-Fuller test, respectively. The two-tailed Mann-Kendall trend test was non-significant, with a p-value of 0.789 at the 0.05 significance level. A suitable probability distribution was fitted to the annual KBDI time series; both Kolmogorov-Smirnov and Chi-square tests identified Gamma (1) as the best-fitting distribution. Return level computation from the Gamma (1) distribution, using the XLSTAT software, indicated a cumulative 40-year return period for moderate to high fire risk potential. Given this low probability and the 40-year return level, the study found the area to be less prone to fire risks detrimental to animal and crop production. More agribusiness investments can therefore safely be made in the Northern Territory without high risk aversion.
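A Mann-Kendall trend test of the kind reported above fits in a few lines; the sketch below uses the tie-free variance formula and an invented monotone series for illustration, not the KBDI data:

```python
import math

def mann_kendall(x):
    # Mann-Kendall trend test (no tie correction): S statistic,
    # normal-approximation z score, and two-tailed p-value
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / math.sqrt(var)
    elif s < 0:
        z = (s + 1) / math.sqrt(var)
    else:
        z = 0.0
    p = math.erfc(abs(z) / math.sqrt(2))  # two-tailed p-value
    return s, z, p

trend = list(range(30))          # strictly increasing toy series
s, z, p = mann_kendall(trend)
print(s, p < 0.05)               # S = 435, trend is significant
```

For a trend-free series such as the study's KBDI record, S stays near zero and the p-value is large (as with the reported p = 0.789).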


2013 ◽  
Vol 20 (6) ◽  
pp. 1071-1078 ◽  
Author(s):  
E. Piegari ◽  
R. Di Maio ◽  
A. Avella

Abstract. Reasonable prediction of landslide occurrences in a given area requires the choice of an appropriate probability distribution of recurrence time intervals. Although landslides are widespread and frequent in many parts of the world, complete databases of landslide occurrences over long periods are missing, and such natural disasters are often treated as processes uncorrelated in time and, therefore, Poisson distributed. In this paper, we examine the recurrence time statistics of landslide events simulated by a cellular automaton model that reproduces well the actual frequency-size statistics of landslide catalogues. The complex time series are analysed by varying both the threshold above which the time between events is recorded and the values of the key model parameters. The synthetic recurrence time probability distribution is shown to depend strongly on the rate at which instability is approached, providing a smooth crossover from a power-law regime to a Weibull regime. Moreover, a Fano factor analysis shows clear evidence of different degrees of correlation in landslide time series. This finding supports, at least in part, a recent first analysis of a historical landslide time series spanning fifty years.
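The Fano factor analysis mentioned above can be sketched as follows; the Poisson surrogate catalogue (exponential inter-event times) is an assumption for illustration, since an uncorrelated Poisson process should give F near 1, while clustered catalogues give F > 1:

```python
import random
import statistics

def fano_factor(event_times, window):
    # Fano factor: variance-to-mean ratio of event counts in
    # equal-length time windows; F ~ 1 for a Poisson process,
    # F > 1 signals clustering / temporal correlation
    t_max = max(event_times)
    nwin = int(t_max // window)
    counts = [0] * nwin
    for t in event_times:
        k = int(t // window)
        if k < nwin:
            counts[k] += 1
    return statistics.variance(counts) / statistics.fmean(counts)

# surrogate catalogue: 5000 events with unit-rate exponential gaps
random.seed(3)
t, events = 0.0, []
for _ in range(5000):
    t += random.expovariate(1.0)
    events.append(t)

f = fano_factor(events, 10.0)
print(round(f, 2))  # near 1 for the Poisson surrogate
```

Repeating the computation over a range of window lengths, as in the paper's analysis, reveals at which timescales the correlations appear.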


2018 ◽  
Vol 25 (3) ◽  
pp. 565-587 ◽  
Author(s):  
Mohamed Jardak ◽  
Olivier Talagrand

Abstract. Data assimilation is considered as a problem in Bayesian estimation, viz. determine the probability distribution for the state of the observed system, conditioned by the available data. In the linear and additive Gaussian case, a Monte Carlo sample of the Bayesian probability distribution (which is Gaussian and known explicitly) can be obtained by a simple procedure: perturb the data according to the probability distribution of their own errors, and perform an assimilation on the perturbed data. The performance of that approach, called here ensemble variational assimilation (EnsVAR), also known as ensemble of data assimilations (EDA), is studied in this two-part paper on the non-linear low-dimensional Lorenz-96 chaotic system, with the assimilation being performed by the standard variational procedure. In this first part, EnsVAR is implemented first, for reference, in a linear and Gaussian case, and then in a weakly non-linear case (assimilation over 5 days of the system). The performances of the algorithm, considered either as a probabilistic or a deterministic estimator, are very similar in the two cases. Additional comparison shows that the performance of EnsVAR is better, both in the assimilation and forecast phases, than that of standard algorithms for the ensemble Kalman filter (EnKF) and particle filter (PF), although at a higher cost. Globally similar results are obtained with the Kuramoto–Sivashinsky (K–S) equation.
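The perturbed-data procedure described above can be sketched for a scalar linear Gaussian toy problem; all variances below are assumed values, and this is deliberately not the Lorenz-96 or K-S setup of the paper:

```python
import random
import statistics

random.seed(0)
B, R = 4.0, 1.0                 # background / observation error variances (assumed)
x_truth = 2.0
xb = x_truth + random.gauss(0, B ** 0.5)   # one background draw
y = x_truth + random.gauss(0, R ** 0.5)    # one observation
K = B / (B + R)                            # scalar Kalman gain

# EnsVAR / EDA idea: perturb background and observation according
# to their own error statistics, redo the (here trivial) analysis,
# and collect the resulting ensemble of analyses
ens = []
for _ in range(20000):
    xb_p = xb + random.gauss(0, B ** 0.5)
    y_p = y + random.gauss(0, R ** 0.5)
    ens.append(xb_p + K * (y_p - xb_p))

# in the linear Gaussian case the ensemble variance should match
# the exact Bayesian posterior variance (1 - K) * B
print(round(statistics.pvariance(ens), 2), round((1 - K) * B, 2))
```

In the paper the analysis step is a full variational minimisation rather than this closed-form scalar update, but the sampling principle is the same.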


2014 ◽  
Vol 18 (1) ◽  
pp. 353-365 ◽  
Author(s):  
U. Haberlandt ◽  
I. Radtke

Abstract. Derived flood frequency analysis allows the estimation of design floods with hydrological modeling for poorly observed basins considering change and taking into account flood protection measures. There are several possible choices regarding precipitation input, discharge output and consequently the calibration of the model. The objective of this study is to compare different calibration strategies for a hydrological model considering various types of rainfall input and runoff output data sets and to propose the most suitable approach. Event based and continuous, observed hourly rainfall data as well as disaggregated daily rainfall and stochastically generated hourly rainfall data are used as input for the model. As output, short hourly and longer daily continuous flow time series as well as probability distributions of annual maximum peak flow series are employed. The performance of the strategies is evaluated using the obtained different model parameter sets for continuous simulation of discharge in an independent validation period and by comparing the model derived flood frequency distributions with the observed one. The investigations are carried out for three mesoscale catchments in northern Germany with the hydrological model HEC-HMS (Hydrologic Engineering Center's Hydrologic Modeling System). The results show that (I) the same type of precipitation input data should be used for calibration and application of the hydrological model, (II) a model calibrated using a small sample of extreme values works quite well for the simulation of continuous time series with moderate length but not vice versa, and (III) the best performance with small uncertainty is obtained when stochastic precipitation data and the observed probability distribution of peak flows are used for model calibration. 
This outcome suggests calibrating a hydrological model directly on probability distributions of observed peak flows, using stochastic rainfall as input, if the model is intended for derived flood frequency analysis.
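Comparing a model-derived flood frequency distribution with the observed one typically starts from empirical plotting positions; below is a minimal sketch using Weibull plotting positions on an invented annual-maximum series (not the study's catchments):

```python
def plotting_positions(peaks):
    # Weibull plotting positions for an annual-maximum series:
    # rank m of n gives exceedance probability m / (n + 1) and
    # empirical return period (n + 1) / m
    ranked = sorted(peaks, reverse=True)
    n = len(ranked)
    return [(q, m / (n + 1), (n + 1) / m)
            for m, q in enumerate(ranked, start=1)]

# invented annual peak flows, purely for illustration
peaks = [820, 640, 1010, 530, 760, 900, 480, 700, 590, 870]
pp = plotting_positions(peaks)
for q, p_exceed, T in pp[:3]:
    print(q, round(p_exceed, 2), round(T, 1))
```

Plotting these empirical return periods against the model-derived distribution gives the visual check of calibration strategy (III) described above.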

