On Selection of Probability Distributions for Representing Annual Extreme Rainfall Series

Author(s):  
Van-Thanh-Van Nguyen ◽  
Diana Tao ◽  
Alain Bourque

2018 ◽
Vol 8 (1) ◽  
pp. 2537-2541
Author(s):  
A. H. Syafrina ◽  
A. Norzaida ◽  
O. Noor Shazwani

A weather generator is a numerical tool that uses existing meteorological records to produce series of synthetic weather data. The AWE-GEN (Advanced Weather Generator) model has been successful in producing weather variables across a broad range of temporal scales, from high-frequency hourly values to low-frequency inter-annual variability. In Malaysia, AWE-GEN has produced reliable projections of extreme rainfall events for some parts of Peninsular Malaysia. This study uses the AWE-GEN model to assess rainfall distribution in Kelantan, situated in the northeast of the Peninsula, a region highly susceptible to flooding. Embedded within the AWE-GEN model is the Neyman-Scott process, whose parameters represent physical rainfall characteristics. Using the correct probability distributions to represent these parameters is imperative for producing reliable results. This study compares the performance of two probability distributions, Weibull and Gamma, in representing rainfall intensity; the better distribution was subsequently used to simulate the hourly rainfall series. Thirty years of hourly meteorological data from two stations in Kelantan were used in model construction. Results indicate that both probability distributions replicate the rainfall series at both stations very well; however, numerical evaluations suggest that Gamma performs better. Despite not being a heavy-tailed distribution, Gamma is able to replicate the key characteristics of the rainfall series, particularly the extreme values. The overall simulation results show that the AWE-GEN model is capable of generating tropical rainfall series, which could be beneficial for flood-preparedness studies in areas vulnerable to flooding.
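The Weibull-versus-Gamma comparison described above can be sketched with `scipy.stats`; the synthetic intensity sample, its parameter values, and the AIC-based ranking below are illustrative assumptions, not the study's actual data or selection procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical hourly rainfall intensities (mm/h), standing in for station records.
intensity = rng.gamma(shape=0.8, scale=5.0, size=2000)

def aic(loglik, k):
    """Akaike information criterion: lower is better."""
    return 2 * k - 2 * loglik

# Fit Gamma and Weibull with location fixed at zero (intensities are positive).
g_shape, g_loc, g_scale = stats.gamma.fit(intensity, floc=0)
w_shape, w_loc, w_scale = stats.weibull_min.fit(intensity, floc=0)

aic_gamma = aic(stats.gamma.logpdf(intensity, g_shape, g_loc, g_scale).sum(), 2)
aic_weibull = aic(stats.weibull_min.logpdf(intensity, w_shape, w_loc, w_scale).sum(), 2)

better = "Gamma" if aic_gamma < aic_weibull else "Weibull"
print(f"AIC Gamma={aic_gamma:.1f}, Weibull={aic_weibull:.1f} -> {better}")
```

The same comparison could equally use a Kolmogorov-Smirnov or chi-square statistic; AIC is chosen here only because it falls directly out of the fitted log-likelihoods.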


Water ◽  
2020 ◽  
Vol 12 (5) ◽  
pp. 1397 ◽  
Author(s):  
Óscar E. Coronado-Hernández ◽  
Ernesto Merlano-Sabalza ◽  
Zaid Díaz-Vergara ◽  
Jairo R. Coronado-Hernández

Frequency analysis of extreme events is used to estimate the maximum rainfall associated with different return periods and to plan hydraulic structures. When carrying out this type of analysis in engineering projects, the hydrological distributions that best fit the trend of maximum 24 h rainfall data are unknown. This study collected maximum 24 h rainfall records from 362 stations distributed throughout Colombia, with the goal of guiding hydraulic planners by suggesting the probability distributions they should use before beginning their analysis. The generalized extreme value (GEV) probability distribution, fitted with the weighted moments method, provided the best fit in the frequency analysis of maximum daily precipitation for various return periods at the selected rainfall stations in Colombia.
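A minimal sketch of GEV-based frequency analysis follows, assuming a synthetic annual-maximum series; note that scipy's `genextreme.fit` uses maximum likelihood by default, not the weighted-moments method the study applied.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical annual maximum 24 h rainfall (mm) for one station, 40 years.
annual_max = stats.genextreme.rvs(c=-0.1, loc=80, scale=20, size=40, random_state=rng)

# Fit a GEV (scipy parameterizes the shape as c = -xi).
c, loc, scale = stats.genextreme.fit(annual_max)

# Design rainfall for standard return periods T: the quantile at 1 - 1/T.
for T in (5, 10, 25, 50, 100):
    x_T = stats.genextreme.ppf(1 - 1 / T, c, loc, scale)
    print(f"T = {T:>3} yr -> {x_T:.1f} mm")
```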


2002 ◽  
Vol 45 (2) ◽  
pp. 55-61 ◽  
Author(s):  
G. Vaes ◽  
P. Willems ◽  
J. Berlamont

In 1999 the digitisation of old rainfall records measured at Uccle (Belgium) was completed, resulting in a unique rainfall series of 100 years (1898-1997). This is an ideal opportunity to search for trends in the rainfall over the last century. Large variations in rainfall probability over the century have been observed. For small aggregation levels there is a small decrease in extreme rainfall events over the century; for large aggregation levels there is a more pronounced increase in extreme rainfall. Because rainfall at the seasonal aggregation level has increased only slightly, the increase in extreme rainfall events at aggregation levels between a few days and a few months can only be due to stronger clustering. The final conclusion, however, is that no significant trend can be observed: purely random variation of the rainfall can cause equally large fluctuations. This does not exclude a possible trend in flooding frequency, owing to the strong increase in urbanisation over the last century.
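The trend-versus-random-variation question can be probed with a Mann-Kendall test; below is a minimal sketch on a synthetic century of annual totals (the series and its parameters are invented, not the Uccle record):

```python
import numpy as np
from scipy import stats

def mann_kendall(x):
    """Mann-Kendall trend test: S statistic and two-sided p-value via the
    normal approximation (tie correction omitted for brevity)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = 0.0
    for i in range(n - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()  # concordant minus discordant pairs
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, 2.0 * (1.0 - stats.norm.cdf(abs(z)))

rng = np.random.default_rng(0)
# Hypothetical century of annual rainfall totals (mm) with no imposed trend.
annual = rng.normal(780.0, 120.0, size=100)
s, p = mann_kendall(annual)
print(f"S = {s:.0f}, p = {p:.3f}")  # a large p is consistent with "no significant trend"
```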


2019 ◽  
Vol 23 (5) ◽  
pp. 2225-2243 ◽  
Author(s):  
Guo Yu ◽  
Daniel B. Wright ◽  
Zhihua Zhu ◽  
Cassia Smith ◽  
Kathleen D. Holman

Abstract. Floods are the product of complex interactions among processes including precipitation, soil moisture, and watershed morphology. Conventional flood frequency analysis (FFA) methods such as design storms and discharge-based statistical methods offer few insights into these process interactions and how they “shape” the probability distributions of floods. Understanding and projecting flood frequency in conditions of nonstationary hydroclimate and land use require deeper understanding of these processes, some or all of which may be changing in ways that will be undersampled in observational records. This study presents an alternative “process-based” FFA approach that uses stochastic storm transposition to generate large numbers of realistic rainstorm “scenarios” based on relatively short rainfall remote sensing records. Long-term continuous hydrologic model simulations are used to derive seasonally varying distributions of watershed antecedent conditions. We couple rainstorm scenarios with seasonally appropriate antecedent conditions to simulate flood frequency. The methodology is applied to the 4002 km2 Turkey River watershed in the Midwestern United States, which is undergoing significant climatic and hydrologic change. We show that, using only 15 years of rainfall records, our methodology can produce accurate estimates of “present-day” flood frequency. We found that shifts in the seasonality of soil moisture, snow, and extreme rainfall in the Turkey River exert important controls on flood frequency. We also demonstrate that process-based techniques may be prone to errors due to inadequate representation of specific seasonal processes within hydrologic models. If such mistakes are avoided, however, process-based approaches can provide a useful pathway toward understanding current and future flood frequency in nonstationary conditions and thus be valuable for supplementing existing FFA practices.
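The coupling of transposed storm scenarios with seasonally varying antecedent conditions can be caricatured by a Monte Carlo sketch; the toy runoff relation and every distribution below are illustrative assumptions, not the study's stochastic storm transposition setup or its hydrologic model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000  # number of synthetic flood "scenarios"

# Hypothetical storm depths (mm), resampled/transposed from a short record.
storm_depth = rng.gamma(shape=2.0, scale=25.0, size=n)

# Seasonally varying antecedent wetness (fraction of saturation):
# wetter in spring, drier otherwise - a stand-in for model-derived distributions.
season = rng.choice(["spring", "summer", "autumn"], size=n, p=[0.4, 0.35, 0.25])
wetness = np.where(season == "spring", rng.beta(5, 2, n), rng.beta(2, 5, n))

# Toy runoff coefficient: wetter watersheds convert more rain to flow.
peak = storm_depth * (0.2 + 0.7 * wetness)  # pseudo flood-peak index

# Empirical flood frequency: the level associated with a 100-year return period.
q100 = np.quantile(peak, 1 - 1 / 100)
print(f"100-year flood index: {q100:.1f}")
```

The point of the sketch is the structure, not the numbers: flood frequency emerges from the joint distribution of rainfall and antecedent state rather than from rainfall alone.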


2008 ◽  
Vol 14 (3) ◽  
pp. 388-401 ◽  
Author(s):  
Aleksandras Krylovas ◽  
Natalja Kosareva

In this paper a mathematical model for obtaining the probability distribution of knowledge-testing results is proposed. Differences and similarities between this model and the Item Response Theory (IRT) logistic model are discussed. Probability distributions of the results of a 10-item test for low-, middle-, and high-ability populations are obtained by selecting characteristic functions for various combinations of item difficulties. Entropy function values for these item combinations are computed. These results make it possible to formulate recommendations for selecting test items for various testing groups according to their attainment level. A method for selecting a suitable item characteristic function, based on the Kolmogorov goodness-of-fit test, is proposed and illustrated by applying it to a discrete mathematics test item.
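The score-distribution idea can be sketched by convolving independent Bernoulli items under a 2PL logistic characteristic function; the item parameters and ability levels below are invented for illustration, not taken from the paper:

```python
import numpy as np

def icc(theta, a, b):
    """2PL logistic item characteristic function: P(correct | ability theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def score_distribution(theta, items):
    """Exact distribution of the number-correct score, built by
    convolving the independent Bernoulli items."""
    dist = np.array([1.0])
    for a, b in items:
        p = icc(theta, a, b)
        dist = np.convolve(dist, [1 - p, p])
    return dist

# Ten hypothetical items of mixed difficulty b (discrimination a fixed at 1).
items = [(1.0, b) for b in np.linspace(-2, 2, 10)]

for theta, label in [(-1.0, "low"), (0.0, "middle"), (1.0, "high")]:
    dist = score_distribution(theta, items)
    mean = np.dot(np.arange(len(dist)), dist)
    print(f"{label:>6} ability: expected score {mean:.2f}")
```

From such a distribution one can also compute the entropy of the score, which is the quantity the paper uses to compare item combinations.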


2019 ◽  
Vol 65 (3) ◽  
pp. 101-112 ◽  
Author(s):  
M. Rogalska ◽  
J. Żelazna-Pawlicka

Abstract. The paper evaluates how the choice of probability density function affects the construction price and the price of the building's life cycle, relative to deterministic cost estimates based on minimum, mean, and maximum values. The deterministic cost estimates were made using the minimum, mean, and maximum prices of labour rates, indirect costs, profit, and the cost of equipment and materials. The net construction prices obtained were then assigned different probability density distributions based on the minimum, mean, and maximum values. Twelve kinds of probability distribution were used: triangular, normal, lognormal, beta-PERT, gamma, beta, exponential, Laplace, Cauchy, Gumbel, Rayleigh, and uniform. The results of calculations with event probabilities from 5% to 95% were subjected to a statistical comparative analysis. Dependencies between the calculation results obtained under the different probability density distributions of price factors were determined, and the distributions were assigned to six price-level groups on the basis of the t-test. It was shown that each of the analyzed distributions is suitable for use; however, the choice has consequences for the final result. The lowest final price is obtained with the gamma distribution; the highest with the beta, beta-PERT, normal, and uniform distributions.
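How the distribution choice shifts the resulting price percentiles can be sketched by Monte Carlo sampling; the price bounds and the gamma parameters below are illustrative assumptions, not the paper's data or its full twelve-distribution comparison:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Hypothetical min/mean/max unit prices for one cost item (e.g. a labour rate).
lo, mean, hi = 80.0, 100.0, 130.0

samples = {
    # Triangular with its mode placed at the mean price (a common simplification).
    "triangular": rng.triangular(lo, mean, hi, n),
    "uniform": rng.uniform(lo, hi, n),
    # Gamma matched to the mean with an assumed spread (illustrative only).
    "gamma": rng.gamma(shape=16.0, scale=mean / 16.0, size=n),
}

for name, s in samples.items():
    p5, p95 = np.percentile(s, [5, 95])
    print(f"{name:>10}: P5={p5:.1f}  P95={p95:.1f}  mean={s.mean():.1f}")
```

Even with identical central tendencies, the 5% and 95% price levels differ by distribution, which is exactly the effect the paper quantifies across its six groups.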


Author(s):  
Uwe Haberlandt ◽  
Christian Berndt

Abstract. Pure radar rainfall, station rainfall, and radar-station merging products are analysed with regard to extreme rainfall frequencies for durations from 5 min to 6 h and return periods from 1 to 30 years. Partial duration series of the extremes are derived from the data and probability distributions are fitted. The performance of the design rainfall estimates is assessed by cross-validation against observed station points, which serve as the reference. For design rainfall estimation using the pure radar data, the pixel value at the station location is taken; for the merging products, spatial interpolation methods are applied. The results show that pure radar data are not suitable for the estimation of extremes: they usually lead to an overestimation compared to the observations, which is opposite to the usual behaviour of radar rainfall. The merging products of radar and station data, on the other hand, usually lead to an underestimation; they outperform the station observations only for longer durations. The main obstacle to a good estimation of extremes appears to be the poor radar data quality.
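Partial-duration-series fitting can be sketched as a peaks-over-threshold analysis with a generalized Pareto distribution; the synthetic event record and the threshold choice below are assumptions for illustration, not the study's station or radar data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
years = 30
# Hypothetical rainfall event depths (mm) pooled over the record, one per event.
events = rng.exponential(scale=4.0, size=years * 40)

# Partial duration series: keep exceedances above a high threshold.
threshold = np.quantile(events, 0.95)
excess = events[events > threshold] - threshold
rate = len(excess) / years  # average exceedances per year

# Fit a generalized Pareto distribution to the excesses (location fixed at 0).
c, loc, scale = stats.genpareto.fit(excess, floc=0)

# Design depth for return period T from the POT/GPD model.
for T in (1, 5, 10, 30):
    x_T = threshold + stats.genpareto.ppf(1 - 1 / (rate * T), c, loc, scale)
    print(f"T = {T:>2} yr -> {x_T:.1f} mm")
```

The same machinery applies whether the events come from gauges, radar pixels, or merged products; only the input series changes, which is why the data quality dominates the result.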

