An algorithm for computing moments-based flood quantile estimates when historical flood information is available

1997 ◽  
Vol 33 (9) ◽  
pp. 2089-2096 ◽  
Author(s):  
T. A. Cohn ◽  
W. L. Lane ◽  
W. G. Baier

2021 ◽  
pp. 135481662110300
Author(s):  
Usamah F Alfarhan ◽  
Khaldoon Nusair ◽  
Hamed Al-Azri ◽  
Saeed Al-Muharrami ◽  
Nan Hua

Tourism expenditures are determined by a set of antecedents that reflect tourists’ willingness and ability to spend, and by the de facto incremental monetary outlays at which that willingness and ability are transformed into total expenditures. Based on the neoclassical theoretical argument of utility-constrained expenditure minimization, we extend the current literature by applying a sustainability-based segmentation criterion, namely the Legatum Prosperity Index™, to the decomposition of a total expenditure differential into tourists’ relative willingness to spend and an upper bound of third-degree price discrimination, using mean-level and conditional quantile estimates. Our results indicate that understanding the price–quantity composition of international inbound tourism expenditure differentials assists agents in the tourism industry in their quest for profit maximization.
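
As a rough illustration of the decomposition idea described above, the following sketch (not the authors' method; all variable names and data are synthetic) splits a mean expenditure differential between two tourist segments into an explained part attributable to characteristics and an unexplained part attributable to differing returns, in the spirit of an Oaxaca–Blinder decomposition.

```python
# Hedged sketch of a mean-level Oaxaca-Blinder-style decomposition of an
# expenditure differential between two tourist segments. Everything here
# (segments, covariate, coefficients) is synthetic and purely illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000

def simulate(intercept, slope):
    x = rng.normal(5.0, 1.0, n)                          # e.g. log income (synthetic)
    y = intercept + slope * x + rng.normal(0, 0.5, n)    # log total expenditure
    return sm.add_constant(x), y

X_a, y_a = simulate(1.0, 0.60)    # segment A (e.g. high-prosperity origin, hypothetical)
X_b, y_b = simulate(0.8, 0.55)    # segment B

beta_a = sm.OLS(y_a, X_a).fit().params
beta_b = sm.OLS(y_b, X_b).fit().params

gap = y_a.mean() - y_b.mean()
explained = (X_a.mean(axis=0) - X_b.mean(axis=0)) @ beta_b   # characteristics at B's returns
unexplained = X_a.mean(axis=0) @ (beta_a - beta_b)           # differences in returns
print(f"gap {gap:.3f} = explained {explained:.3f} + unexplained {unexplained:.3f}")
```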


2021 ◽  
Author(s):  
Ilaria Prosdocimi ◽  
Thomas Kjeldsen

The potential for changes in hydrometeorological extremes is routinely investigated by fitting change-permitting extreme value models to long-term observations, allowing one or more distribution parameters to change as a function of time or of some physically motivated covariate. In most practical extreme value analyses, though, the main quantities of interest are the upper quantiles of the distribution rather than the parameter values. This study focuses on the changes in quantile estimates under different change-permitting models. First, metrics which measure the impact of changes in parameters on changes in quantiles are introduced. The mathematical structure of these change metrics is investigated for several models based on the Generalised Extreme Value (GEV) distribution. It is shown that for the most commonly used models, the predicted changes in the quantiles are a non-intuitive function of the distribution parameters, leading to results which are difficult to interpret. Next, it is posited that commonly used change-permitting GEV models do not preserve a constant coefficient of variation, a property that is typically assumed to hold and that is related to the scaling properties of extremes. To address these shortcomings, a new (parsimonious) model is proposed: it assumes a constant coefficient of variation, allowing the location and scale parameters to change simultaneously. The proposed model results in more interpretable changes in the quantile function. The consequences of the different modelling choices on quantile estimates are exemplified using a dataset of extreme peak river flow measurements.
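
A minimal sketch of the contrast discussed above, assuming illustrative GEV parameter values rather than anything fitted to the paper's data: it compares a high quantile under a hypothetical location-only trend with the same trend under a constant coefficient of variation, where the scale changes proportionally with the location.

```python
# Contrast two hypothetical change-permitting GEV models: one where only the
# location mu changes with time, and one where mu and sigma change together so
# that sigma/mu (a rough proxy for the coefficient of variation) stays constant.
# Parameter values and the trend are made up for illustration.
import numpy as np
from scipy.stats import genextreme

def gev_quantile(p, mu, sigma, xi):
    """GEV quantile; scipy's shape c equals -xi in the usual hydrological sign convention."""
    return genextreme.ppf(p, c=-xi, loc=mu, scale=sigma)

mu0, sigma0, xi = 100.0, 30.0, 0.1   # illustrative baseline parameters
trend = 0.5                          # assumed linear trend in the location (units per year)
p = 0.99                             # roughly the 100-year quantile

for t in range(0, 51, 10):
    mu_t = mu0 + trend * t
    q_loc_only = gev_quantile(p, mu_t, sigma0, xi)               # scale held fixed
    q_const_cv = gev_quantile(p, mu_t, sigma0 * mu_t / mu0, xi)  # scale tracks location
    print(f"t={t:2d}  location-only: {q_loc_only:7.1f}   constant-CV: {q_const_cv:7.1f}")
```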


2016 ◽  
Vol 20 (12) ◽  
pp. 4717-4729 ◽  
Author(s):  
Martin Durocher ◽  
Fateh Chebana ◽  
Taha B. M. J. Ouarda

Abstract. This study investigates the use of hydrological information in regional flood frequency analysis (RFFA) to enforce desired properties for a group of gauged stations. Neighbourhoods are particular types of regions that are centred on target locations. A challenge for using neighbourhoods in RFFA is that hydrological information is not available at target locations and cannot be completely replaced by the available physiographic information. Instead of using the available physiographic characteristics to define the centre of a target location, this study proposes to introduce estimates of reference hydrological variables to ensure better homogeneity. These reference variables capture nonlinear relations with the site characteristics, obtained by projection pursuit regression, a nonparametric regression method. The resulting neighbourhoods are investigated in combination with commonly used regional models: the index-flood model and regression-based models. The complete approach is illustrated in a real-world case study with gauged sites from the southern part of the province of Québec, Canada, and is compared with traditional approaches such as the region of influence and canonical correlation analysis. The evaluation focuses on the neighbourhood properties as well as prediction performances, with special attention devoted to problematic stations. Results show clear improvements in neighbourhood definitions and quantile estimates.
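
The following sketch illustrates the neighbourhood idea in schematic form; it is not the authors' implementation, uses synthetic data, and substitutes a generic nonlinear regressor for projection pursuit regression, which has no standard scikit-learn implementation.

```python
# Rough sketch: predict a reference hydrological variable at a target site from
# its physiographic descriptors, then pick the nearest gauged sites in that
# predicted-variable space. Projection pursuit regression is replaced here by a
# generic nonlinear regressor as a stand-in; all arrays are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
n_sites, n_desc = 60, 3
X = rng.normal(size=(n_sites, n_desc))     # physiographic descriptors (synthetic)
y_ref = X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n_sites)  # reference variable

model = GradientBoostingRegressor().fit(X, y_ref)   # stand-in for projection pursuit regression

x_target = rng.normal(size=(1, n_desc))     # descriptors of an ungauged target site
ref_target = model.predict(x_target)        # estimated reference variable at the target

# neighbourhood: gauged sites closest to the target in reference-variable space
dist = np.abs(y_ref - ref_target)
neighbourhood = np.argsort(dist)[:15]
print("selected gauged sites:", neighbourhood)
```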


2019 ◽  
Vol 20 (1) ◽  
pp. 106-123 ◽  
Author(s):  
Mustafizur Rahman ◽  
Md. Al-Hasan

This article examines Bangladesh’s latest available Quarterly Labour Force Survey 2015–2016 data to draw in-depth insights on the gender wage gap and wage discrimination in the Bangladesh labour market. The mean wage decomposition shows that on average a woman in Bangladesh earns a 12.2 per cent lower wage than a man, and about half of the wage gap can be explained by labour market discrimination against women. Quantile counterfactual decomposition shows that women are subject to a higher wage penalty at the lower deciles of the wage distribution, with the wage gap varying between 8.3 per cent and 19.4 per cent across deciles. We find that at the lower deciles, a significant part of the gender wage gap is on account of the relatively larger presence of informal employment. Conditional quantile estimates further reveal that formally employed female workers earn higher wages than their male counterparts at the first decile but suffer a wage penalty at the top deciles. JEL: C21, J31, J46, J70
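
As an illustration of conditional quantile estimation of a wage gap at different deciles, the sketch below fits quantile regressions on a synthetic data set; the variable names (female, informal, experience, log_wage) are invented for the example and do not correspond to the survey's variables.

```python
# Hedged illustration of estimating a conditional gender wage gap at different
# deciles with quantile regression. The data frame is entirely synthetic, with a
# built-in larger penalty for informally employed women, just to show the mechanics.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "informal": rng.integers(0, 2, n),
    "experience": rng.uniform(0, 30, n),
})
df["log_wage"] = (2.0 + 0.02 * df["experience"]
                  - 0.12 * df["female"]
                  - 0.10 * df["informal"]
                  - 0.08 * df["female"] * df["informal"]
                  + rng.normal(scale=0.3, size=n))

for q in (0.1, 0.5, 0.9):
    fit = smf.quantreg("log_wage ~ female * informal + experience", df).fit(q=q)
    print(f"decile {q:.1f}: female coefficient = {fit.params['female']:+.3f}")
```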


2017 ◽  
Vol 17 (9) ◽  
pp. 1623-1629 ◽  
Author(s):  
Berry Boessenkool ◽  
Gerd Bürger ◽  
Maik Heistermann

Abstract. High precipitation quantiles tend to rise with temperature, following the so-called Clausius–Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverses at very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases at higher temperatures, rainfall intensities of the magnitude dictated by CC scaling are less likely to be recorded there than at moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. These have in common that their largest representable return period is given by the sample size, so in small samples high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L-moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
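
The contrast between empirical and parametric quantile estimates can be sketched as follows; this is not the paper's code, the sample is synthetic, and the GPD is fitted by a textbook L-moment estimator with the location fixed at zero.

```python
# Compare an empirical quantile (bounded by the sample's largest observations)
# with a GPD quantile fitted by L-moments (Hosking's parameterization, location
# fixed at zero). With a small synthetic sample, the empirical estimate cannot
# exceed the sample range, while the parametric fit extrapolates beyond it.
import numpy as np

def gpd_lmom_quantile(sample, prob):
    """Fit a zero-location GPD by L-moments and return the quantile at `prob`."""
    x = np.sort(sample)
    n = len(x)
    b0 = x.mean()
    b1 = np.sum(np.arange(n) / (n - 1) * x) / n   # probability-weighted moment b1
    l1, l2 = b0, 2 * b1 - b0                      # first two sample L-moments
    kappa = l1 / l2 - 2.0                         # Hosking's shape (sign opposite to the usual xi)
    alpha = (1.0 + kappa) * l1                    # scale
    return alpha / kappa * (1.0 - (1.0 - prob) ** kappa)

rng = np.random.default_rng(1)
sample = rng.gamma(shape=2.0, scale=5.0, size=30)   # small synthetic intensity sample
prob = 0.99                                         # beyond the sample's largest return period

empirical = np.quantile(sample, prob)               # capped near the sample maximum
parametric = gpd_lmom_quantile(sample, prob)
print(f"empirical: {empirical:.1f}   GPD/L-moments: {parametric:.1f}")
```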


2019 ◽  
Vol 72 (4) ◽  
pp. 517-541 ◽  
Author(s):  
Hilary I. Okagbue ◽  
Muminu O. Adamu ◽  
Timothy A. Anake ◽  
Ashiribo S. Wusu

2011 ◽  
Vol 15 (3) ◽  
pp. 819-830 ◽  
Author(s):  
S. Das ◽  
C. Cunnane

Abstract. Flood frequency analysis is a necessary and important part of flood risk assessment and management studies. Regional flood frequency methods, in which flood data from groups of catchments are pooled together in order to enhance the precision of flood estimates at project locations, are an accepted part of such studies. This enhancement of precision rests on the assumption that the catchments pooled together are homogeneous in their flood-producing properties. If homogeneity is assured, a homogeneous pooling group of sites leads to a reduction in the error of quantile estimates relative to estimators based on a single at-site data series alone. Homogeneous pooling groups are selected by using a previously nominated rule, and this paper examines how effective one such rule is in selecting homogeneous groups. The study, based on annual maximum series from 85 Irish gauging stations, examines how successful a common method of identifying pooling group membership is in selecting groups that actually are homogeneous. Each station has its own unique pooling group, selected by use of a Euclidean distance measure in catchment descriptor space, commonly denoted dij, with a minimum of 500 station-years of data in the pooling group. It was found that dij could be effectively defined in terms of catchment area, mean rainfall and baseflow index. The study then investigated how effective this selection method is in choosing groups of catchments that are actually homogeneous as indicated by their L-CV values. The sampling distribution of L-CV (t2) in each pooling group and the 95% confidence limits about the pooled estimate of t2 are obtained by simulation. The t2 values of the selected group members are compared with these confidence limits both graphically and numerically. Of the 85 stations, only 1 station's pooling group members have all their t2 values within the confidence limits, while 7, 33 and 44 of them have 1, 2, and 3 or more t2 values outside the confidence limits, respectively. The outcomes are also compared with the heterogeneity measures H1 and H2. The H1 values show an upward trend with the range of t2 values in the pooling group, whereas the H2 values do not show any such dependency. A selection of 27 pooling groups, found to be heterogeneous, was further examined with the help of box-plots of catchment descriptor values, and one particular case is considered in detail. Overall, the results show that even with a carefully considered selection procedure, it is not certain that perfectly homogeneous pooling groups are identified.
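
A schematic sketch of the two ingredients described above: a Euclidean distance dij in standardized catchment-descriptor space and the sample L-CV (t2) of an annual maximum series. The descriptor values and flows are synthetic placeholders, and the simple nearest-station rule below stands in for the paper's 500-station-year pooling criterion.

```python
# Two building blocks of the pooling-group approach, in schematic form:
# (1) rank candidate stations by Euclidean distance d_ij in standardized
#     catchment-descriptor space (area, mean rainfall, baseflow index);
# (2) compute the sample L-CV t2 of an annual maximum series.
# All numbers are synthetic placeholders.
import numpy as np

def sample_lcv(ams):
    """Sample L-CV t2 = l2 / l1 of an annual maximum series."""
    x = np.sort(ams)
    n = len(x)
    b0 = x.mean()
    b1 = np.sum(np.arange(n) / (n - 1) * x) / n
    l1, l2 = b0, 2 * b1 - b0
    return l2 / l1

rng = np.random.default_rng(7)
descriptors = rng.lognormal(size=(85, 3))              # area, mean rainfall, BFI for 85 stations (synthetic)
logd = np.log(descriptors)
z = (logd - logd.mean(axis=0)) / logd.std(axis=0)      # standardized descriptor space

target = 0                                             # index of the subject station
d_ij = np.sqrt(((z - z[target]) ** 2).sum(axis=1))     # distance of every station to the target
pooling_group = np.argsort(d_ij)[1:21]                 # nearest stations, excluding the target itself

ams = rng.gumbel(loc=100, scale=30, size=45)           # one station's synthetic annual maxima
print("pooling group:", pooling_group[:5], "...   t2 =", round(sample_lcv(ams), 3))
```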

