The Development of Estimators for the Peaks-Over-Threshold Method

Author(s):  
A. Naess ◽  
P. H. Clausen

The paper discusses the accuracy and efficiency of some of the standard estimators used in conjunction with the Peaks-Over-Threshold (POT) method. A comparison is made between some commonly adopted estimators and two types of estimators proposed by the authors. The comparison is based on an extensive set of synthetic data simulated from a range of different statistical distribution functions that have been assumed to describe wind speed processes.
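The abstract does not reproduce the authors' estimators; as background, a minimal sketch of the standard POT workflow it benchmarks against, using synthetic Weibull "wind speed" data and a maximum-likelihood Generalized Pareto fit (all names and parameter values below are illustrative assumptions):

```python
# Sketch of the standard Peaks-Over-Threshold (POT) workflow (not the
# authors' proposed estimators): fit a Generalized Pareto distribution
# to exceedances over a high threshold and extrapolate a return level.
import numpy as np
from scipy.stats import genpareto, weibull_min

rng = np.random.default_rng(0)
# Synthetic wind speeds, Weibull-distributed as often assumed for wind.
speeds = weibull_min.rvs(c=2.0, scale=10.0, size=20_000, random_state=rng)

u = np.quantile(speeds, 0.95)      # threshold at the 95th percentile
excess = speeds[speeds > u] - u    # exceedances over the threshold

# Maximum-likelihood fit of the GP distribution to the excesses.
shape, loc, scale = genpareto.fit(excess, floc=0.0)

# T-observation return level: the value exceeded on average once per
# T observations, given the exceedance rate lam.
lam = excess.size / speeds.size
T = 100_000
x_T = u + genpareto.ppf(1.0 - 1.0 / (lam * T), shape, loc=0.0, scale=scale)
```

The choice of threshold `u` is exactly the kind of tuning decision whose effect on estimator accuracy and efficiency papers like this one examine.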

Author(s):  
A. Naess ◽  
E. Haug

The paper describes a new method for predicting the appropriate extreme value distribution derived from an observed time series. The method is based on introducing a cascade of conditioning approximations to the exact extreme value distribution. This allows for a rational way of capturing dependence effects in the time series. The performance of the method is compared with that of the peaks over threshold method.
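The abstract gives no formulas for the cascade of conditioning approximations; as a hedged illustration of the underlying idea, here is only the second conditioning level, estimated empirically on a dependent AR(1) series (the series, level, and constants are all assumptions for illustration):

```python
# Hedged sketch of one conditioning level for the extreme value
# distribution of a dependent series (the authors' full cascade is not
# described in the abstract; this shows only the k = 2 level).
import numpy as np

rng = np.random.default_rng(1)
# AR(1) series: successive values are dependent, so raw exceedance
# counts overstate the number of independent extreme events.
n = 50_000
x = np.empty(n)
x[0] = rng.standard_normal()
for i in range(1, n):
    x[i] = 0.7 * x[i - 1] + rng.standard_normal()

def eps2(series, eta):
    """Empirical P(X_j > eta | X_{j-1} <= eta): conditioning on the
    previous value being below the level removes same-cluster repeats."""
    prev_below = series[:-1] <= eta
    up_cross = (series[1:] > eta) & prev_below
    return up_cross.sum() / prev_below.sum()

eta = 5.0
# Approximate distribution of the sample maximum at this level:
# P(max <= eta) is roughly exp(-(n - 1) * eps2(eta)).
p_max_below = np.exp(-(n - 1) * eps2(x, eta))
```

Higher conditioning levels would condition on longer runs of previous values staying below the level, capturing longer-range dependence.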


Author(s):  
V.P. Evstigneev ◽  
V.A. Naumova ◽  
N.A. Lemeshko ◽  
...  

In the paper, the statistical distribution of the highest wind speed per year in the Azov and Black Sea region was analyzed using data from 33 meteorological stations for 1958-2013. A statistical estimation of the wind speed extremes was carried out by approximating the empirical sample with a Generalized Extreme Value (GEV) distribution function and extrapolating it to the low-probability region. We used two methodologies and the statistical distribution functions corresponding to them. The first method is based on the assumption that the parameters of the GEV function are stationary. The second is based on the non-stationary assumption that the location parameter μ depends on time. It was found that for 13 out of the 33 stations in the region, the non-stationary GEV function was adequate for describing extreme wind speeds.
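The two methodologies can be sketched on synthetic annual maxima: a stationary GEV fit, then a non-stationary fit in which the location parameter varies linearly in time, estimated by maximizing the likelihood directly (the data, trend, and starting values are illustrative assumptions, not the paper's):

```python
# Sketch of stationary vs non-stationary GEV fitting for annual maxima.
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

rng = np.random.default_rng(2)
years = np.arange(56)                       # e.g. 1958-2013
# Synthetic annual wind-speed maxima with a slowly drifting location.
true_mu = 20.0 + 0.05 * years
maxima = genextreme.rvs(c=0.1, loc=true_mu, scale=2.0, random_state=rng)

# Stationary fit: all three GEV parameters constant.
c0, mu0, sig0 = genextreme.fit(maxima)

# Non-stationary fit: location mu(t) = a + b*t, scale and shape constant.
def nll(theta):
    a, b, log_sig, c = theta
    mu_t = a + b * years
    return -genextreme.logpdf(maxima, c, loc=mu_t,
                              scale=np.exp(log_sig)).sum()

res = minimize(nll, x0=[mu0, 0.0, np.log(sig0), c0], method="Nelder-Mead")
a_hat, b_hat = res.x[0], res.x[1]           # estimated mu(t) = a + b*t
```

A likelihood-ratio comparison of the two fits is the usual way to decide, station by station, whether the non-stationary model is the adequate one.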




2012 ◽  
Vol 610-613 ◽  
pp. 1033-1040
Author(s):  
Wei Dai ◽  
Jia Qi Gao ◽  
Bo Wang ◽  
Feng Ouyang

Effects of weather conditions, including temperature, relative humidity, wind speed, and wind direction, on PM2.5 were studied using statistical methods. PM2.5 samples were collected during summer and winter in a suburb of Shenzhen. Correlations, hypothesis tests, and the statistical distributions of PM2.5 and the meteorological data were then analyzed with IBM SPSS predictive analytics software. Seasonal and daily variations of PM2.5 were found, and these resulted mainly from the weather effects.
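The study's correlation and hypothesis-test step (done there in IBM SPSS) can be sketched with an open-source stand-in; the data below are fabricated purely to illustrate the analysis shape, with a hypothetical negative wind-dispersion effect:

```python
# Illustrative stand-in for the SPSS correlation analysis: Pearson
# correlation of PM2.5 against one meteorological variable, with a
# two-sided significance test. Data are synthetic, not the study's.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
wind_speed = rng.uniform(0.5, 8.0, size=200)           # m/s, assumed range
# Hypothetical relationship: stronger wind disperses PM2.5.
pm25 = 60.0 - 4.0 * wind_speed + rng.normal(0.0, 6.0, size=200)

r, p_value = pearsonr(wind_speed, pm25)   # correlation + two-sided test
```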


Author(s):  
Amr Khaled Khamees ◽  
Almoataz Y. Abdelaziz ◽  
Ziad M. Ali ◽  
Mosleh M. Alharthi ◽  
Sherif S.M. Ghoneim ◽  
...  

2021 ◽  
Author(s):  
Xiao Pan ◽  
Ataur Rahman

Abstract: Flood frequency analysis (FFA) enables fitting of distribution functions to observed flow data for the estimation of flood quantiles. Two main approaches, annual maximum (AM) and peaks-over-threshold (POT), are adopted for FFA. The POT approach is under-employed due to its complexity and the uncertainty associated with threshold selection and the independence criteria for selecting peak flows. This study evaluates the POT and AM approaches using data from 188 gauged stations in south-east Australia. The POT approach adopted in this study applies different average numbers of events per year, fitted with the Generalised Pareto (GP) distribution and an automated threshold detection method. The POT model extends its parametric approach to the Maximum Likelihood Estimator (MLE) and the Point Moment Weighted Unbiased (PMWU) method. The Generalised Extreme Value (GEV) distribution with the L-moments estimator is used for the AM approach. It has been found that there is a large difference in design flood estimates between the AM and POT approaches for smaller average recurrence intervals (ARIs), with a median difference of 25% for the 1.01-year ARI and 5% for the 50- and 100-year ARIs.
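The AM-vs-POT comparison can be sketched on synthetic daily flows: GEV fitted to annual maxima versus GP fitted to threshold exceedances, each yielding a 100-year design estimate (MLE is used throughout here; the paper's AM approach uses L-moments, which scipy does not provide, and its automated threshold detection is replaced by a fixed quantile):

```python
# Sketch of AM vs POT design-flood estimation on synthetic daily flows.
import numpy as np
from scipy.stats import genextreme, genpareto

rng = np.random.default_rng(4)
n_years, per_year = 60, 365
flows = rng.gamma(shape=2.0, scale=50.0, size=(n_years, per_year))

# AM approach: one maximum per year, GEV fitted by MLE.
am = flows.max(axis=1)
c, loc, scale = genextreme.fit(am)
q100_am = genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=scale)

# POT approach: exceedances over a high threshold, GP fitted by MLE.
u = np.quantile(flows, 0.995)               # stand-in for auto detection
excess = flows[flows > u] - u
lam = excess.size / n_years                 # average events per year
cg, locg, scg = genpareto.fit(excess, floc=0.0)
q100_pot = u + genpareto.ppf(1 - 1 / (lam * 100), cg, loc=0.0, scale=scg)
```

Comparing `q100_am` and `q100_pot` across many stations and ARIs is the shape of the study's evaluation; at small ARIs the POT estimate uses far more events per year than AM, which is where the abstract reports the largest differences.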


2005 ◽  
Vol 225 (5) ◽  
Author(s):  
Sandra Gottschalk

Summary: Nonparametric resampling is a method for generating synthetic microdata and is introduced as a procedure for microdata disclosure limitation. Theoretically, re-identification of individuals or firms is not possible with synthetic data. The resampling procedure creates datasets (the resamples) which nearly have the same empirical cumulative distribution functions as the original survey data and thus permit econometricians to calculate meaningful regression results. The idea of nonparametric resampling, in particular, is to draw from univariate or multivariate empirical distribution functions without having to estimate these explicitly. Until now, the resampling procedure shown here has only been applicable to variables with continuous distribution functions. Monte Carlo simulations and applications with data from the Mannheim Innovation Panel show that the results of linear and nonlinear regression analyses can be reproduced quite precisely by nonparametric resamples. A univariate and a multivariate resampling version are examined. Both the univariate version and the multivariate version, which uses the correlation structure of the original data as a scaling instrument, turn out to be able to retain the coefficients of model estimations. Furthermore, multivariate resampling best reproduces regression results if all variables are anonymised.
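The core claim, that a resample drawn from the empirical joint distribution reproduces regression coefficients, can be checked with a deliberately simplified sketch (plain row resampling stands in for the paper's procedure, which perturbs records for anonymisation rather than copying them; data and coefficients below are fabricated):

```python
# Simplified sketch: draw from the empirical joint distribution (whole
# rows, preserving the correlation structure) and verify that an OLS
# slope estimated on the resample matches the original estimate.
import numpy as np

rng = np.random.default_rng(5)
n = 5_000
x = rng.normal(size=n)
y = 1.5 * x + rng.normal(scale=0.5, size=n)   # assumed true slope 1.5
data = np.column_stack([x, y])

# Multivariate resample: sampling rows keeps the x-y dependence intact.
rows = rng.integers(0, n, size=n)
resample = data[rows]

def slope(d):
    """OLS slope of column 1 regressed on column 0."""
    xc = d[:, 0] - d[:, 0].mean()
    return (xc * (d[:, 1] - d[:, 1].mean())).sum() / (xc ** 2).sum()

b_orig, b_res = slope(data), slope(resample)  # should nearly coincide
```

The paper's actual procedure additionally perturbs the drawn values so that no record equals an original one, which is what makes the output usable for disclosure limitation.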

