Use of simulation–extrapolation estimation in catch–effort analyses

1999 ◽  
Vol 56 (7) ◽  
pp. 1234-1240
Author(s):  
W R Gould ◽  
L A Stefanski ◽  
K H Pollock

All catch-effort estimation methods implicitly assume catch and effort are known quantities, whereas in many cases they have been estimated and are subject to error. We evaluate the application of a simulation-based estimation procedure for measurement error models (J.R. Cook and L.A. Stefanski. 1994. J. Am. Stat. Assoc. 89: 1314-1328) in catch-effort studies. The technique involves a simulation component and an extrapolation step, hence the name SIMEX estimation. We describe SIMEX estimation in general terms and illustrate its use with applications to real and simulated catch and effort data. Correcting for measurement error with SIMEX estimation resulted in population size and catchability coefficient estimates that were, in some cases, substantially smaller than the naive estimates that ignore measurement error. In a simulation of the procedure, we compared SIMEX estimators with "naive" estimators that ignore measurement errors in catch and effort to determine the ability of SIMEX to produce bias-corrected estimates. The SIMEX estimators were less biased than the naive estimators but in some cases were also more variable. Despite the bias reduction, the SIMEX estimator had a larger mean squared error than the naive estimator for one of the two artificial populations studied. However, our results suggest that the SIMEX estimator may outperform the naive estimator in terms of bias and precision for larger populations.
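The two steps named in the abstract can be sketched on a toy problem. This is a minimal illustration of SIMEX on a simple linear regression with error in the covariate, not the catch-effort model of the paper; all numbers are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: y = beta * x + e, but x is observed only through
# w = x + u, with known measurement-error variance su2.
n, beta, su2 = 5000, 2.0, 0.25
x = rng.normal(0.0, 1.0, n)
w = x + rng.normal(0.0, np.sqrt(su2), n)
y = beta * x + rng.normal(0.0, 0.5, n)

def slope(wv, yv):
    """Least-squares slope of yv on wv (both centred, so no intercept)."""
    wv = wv - wv.mean()
    yv = yv - yv.mean()
    return np.dot(wv, yv) / np.dot(wv, wv)

naive = slope(w, y)  # attenuated toward zero by the measurement error

# Simulation step: deliberately add extra error of variance lam * su2
# and record how the naive estimate degrades as lam grows.
lams = np.array([0.5, 1.0, 1.5, 2.0])
B = 200  # simulated data sets per lambda
sims = []
for lam in lams:
    est = [slope(w + rng.normal(0.0, np.sqrt(lam * su2), n), y)
           for _ in range(B)]
    sims.append(np.mean(est))

# Extrapolation step: fit a quadratic in lambda through the naive
# estimate (lam = 0) and the simulated means, then evaluate at
# lam = -1, i.e., at zero total measurement-error variance.
coefs = np.polyfit(np.r_[0.0, lams], np.r_[naive, sims], deg=2)
simex = np.polyval(coefs, -1.0)
```

The extrapolated value moves most of the way back from the attenuated naive slope toward the true coefficient, which is the bias correction the abstract describes.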

1997 ◽  
Vol 54 (4) ◽  
pp. 898-906
Author(s):  
W R Gould ◽  
L A Stefanski ◽  
K H Pollock

We have investigated the consequences of using imprecise catch and effort estimates in closed-population catch-effort analyses using traditional regression techniques and maximum likelihood to estimate the catchability coefficient and population size parameters. Our simulation study involved adding known amounts of measurement error to error-free catch and effort data to determine the effects of using such estimates of catch and effort rather than the true, and in many cases unknown, quantities. Our results suggest that naive estimation using estimates of catch and effort as true values may bias estimates of population size and the catchability coefficient. In most cases, the effects of measurement error in catch and effort were to inflate the parameter estimates, the magnitude of inflation being dependent on the size of the measurement error variance. Maximum likelihood estimation proved to be the estimation procedure most robust to the errors in measurement, but still displayed the need for correction of the measurement-error-induced bias. A recently developed simulation-extrapolation method of inference (J.R. Cook and L.A. Stefanski. 1994. J. Am. Stat. Assoc. 89: 1314-1328) is suggested as a possible means for making bias adjustments.
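As background to the regression techniques mentioned above, a minimal closed-population catch-effort estimator can be sketched with the classical Leslie regression; this is an illustrative reconstruction with assumed numbers, not the authors' simulation design.

```python
import numpy as np

# Leslie method for a closed population: catch per unit effort declines
# linearly with cumulative catch,
#   C_t / E_t = q * (N - K_t),
# where K_t is cumulative catch before period t, q the catchability
# coefficient, and N the initial population size.
q, N, E = 0.001, 10_000.0, 100.0
periods = 8

catch, cum = [], [0.0]
for _ in range(periods):
    c = q * E * (N - cum[-1])   # deterministic, error-free catch
    catch.append(c)
    cum.append(cum[-1] + c)

cpue = np.array(catch) / E
K = np.array(cum[:-1])

# Regress CPUE on cumulative catch: slope = -q, intercept = q * N.
slope, intercept = np.polyfit(K, cpue, 1)
q_hat = -slope
N_hat = intercept / q_hat
```

Replacing the error-free `catch` values with noisy estimates before the regression reproduces the naive-estimation setting whose biases the abstract investigates.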


2009 ◽  
Vol 131 (3) ◽  
Author(s):  
Joshua A. Nordquist ◽  
M. L. Hull

Because instrumented spatial linkages (ISLs) are commonly used to measure joint rotations, and must be calibrated before the device can be used with confidence, a calibration device design and an associated method for quantifying calibration device error would be useful. The objectives of the work reported in this paper were to (1) design an ISL calibration device and demonstrate the design for a specific application, (2) describe a new method for calibrating the device that minimizes measurement error, and (3) quantify the measurement error of the device using the new method. Relative translations and orientations of the device were calculated via a series of transformation matrices containing inherent fixed and variable parameters. These translations and orientations were verified with a coordinate measuring machine, which served as a gold standard. The inherent fixed parameters of the device were optimized to minimize measurement error. After parameter optimization, accuracy was determined. The root mean squared error (RMSE) was 0.175 deg for orientation and 0.587 mm for position. All RMSE values were less than 0.8% of their respective full-scale ranges. These errors are comparable to published measurement errors of ISLs for position and lower by at least a factor of 2 for orientation, and they persist despite the many steps taken in design and manufacturing to achieve high accuracy. Because it is challenging to achieve the accuracy required for a custom calibration device to serve as a viable gold standard, it is important to verify that a calibration device provides sufficient precision to calibrate an ISL.
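The "optimize inherent fixed parameters against a gold standard" step can be sketched on a hypothetical planar two-link analogue of an ISL; the link lengths, poses, and noise level below are assumed for illustration only.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Fixed parameters: link lengths (l1, l2). Variable parameters: joint
# angles (a, b). A coordinate measuring machine (the gold standard)
# supplies endpoint positions for known joint configurations.
l_true = np.array([0.10, 0.15])  # metres (assumed values)

def endpoint(links, a, b):
    l1, l2 = links
    return np.array([l1 * np.cos(a) + l2 * np.cos(a + b),
                     l1 * np.sin(a) + l2 * np.sin(a + b)])

# Calibration poses and "CMM" measurements with small noise.
angles = [(a, b) for a in np.linspace(0, np.pi / 2, 6)
                 for b in np.linspace(0, np.pi / 2, 6)]
meas = np.array([endpoint(l_true, a, b) + rng.normal(0, 1e-5, 2)
                 for a, b in angles])

def residuals(links):
    pred = np.array([endpoint(links, a, b) for a, b in angles])
    return (pred - meas).ravel()

# Optimize the fixed parameters to minimize measurement error, then
# report the residual accuracy, analogous to the RMSE quoted above.
fit = least_squares(residuals, x0=np.array([0.12, 0.12]))
l_hat = fit.x
rmse = np.sqrt(np.mean(residuals(l_hat) ** 2))
```

The residual RMSE after optimization plays the same role as the position and orientation RMSE figures reported in the abstract.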


2011 ◽  
Vol 133 (2) ◽  
Author(s):  
Nathaniel J. Williams ◽  
E. Ernest van Dyk ◽  
Frederik J. Vorster

With the high cost of grid extension and approximately 1.6 billion people still living without electrical services, the solar home system is an important technology in the alleviation of rural energy poverty across the developing world. The performance monitoring and analysis of these systems provide insights leading to improvements in system design and implementation in order to ensure high quality and robust energy supply in remote locations. Most small solar home systems now use charge controllers using pulse width modulation (PWM) to regulate the charge current to the battery. A rapid variation in current and voltage resulting from PWM creates monitoring challenges, which, if not carefully considered in the design of the monitoring system, can result in the erroneous measurement of photovoltaic (PV) power. In order to characterize and clarify the measurement process during PWM, a mathematical model was developed to reproduce and simulate measured data. The effects of matched scan and PWM frequency were studied with the model, and an algorithm was devised to select appropriate scan rates to ensure that a representative sample of measurements is acquired. Furthermore, estimation methods were developed to correct for measurement errors due to factors such as nonzero “short circuit” voltage and current/voltage peak mismatches. A more sophisticated algorithm is then discussed to more accurately measure PV power using highly programmable data loggers. The results produced by the various methods are compared and reveal a significant error in the measurement of PV power without corrective action. Estimation methods prove to be effective in certain cases but are susceptible to error during conditions of variable irradiance. The effect of the measurement error has been found to depend strongly on the duty cycle of PWM as well as the relationship between scan rate and PWM frequency. 
The energy measurement error over one day depends on insolation and system conditions as well as on system design. On a sunny day, under a daily load of about 20 A h, the net error in PV energy is found to be 1%, whereas a system with a high initial battery state of charge under similar conditions and no load produced an error of 47.6%. This study shows the importance of data logger selection and programming in accurately monitoring the energy provided by solar home systems. When these factors are appropriately considered, measurement errors can be avoided or reduced without investment in more expensive measurement equipment.
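The scan-rate problem described above can be reproduced in a few lines: sampling a PWM waveform at a rate locked to the PWM frequency reads the same phase every scan and badly biases the average, while an incommensurate rate spreads the sample phases over the period. The frequency, duty cycle, and amplitude below are illustrative, not values from the paper.

```python
import numpy as np

# PWM current sketch: a square wave at f_pwm with duty cycle `duty`.
# (f_pwm is chosen as a power of two so matched-rate sample times are
# exact in floating point.)
f_pwm, duty, amp = 128.0, 0.3, 1.0

def current(t):
    return np.where((t * f_pwm) % 1.0 < duty, amp, 0.0)

true_avg = amp * duty
n = 10_000

# Scan rate exactly equal to the PWM frequency: phase-locked samples
# that always land at the start of the "on" interval.
t_matched = np.arange(n) / f_pwm
avg_matched = current(t_matched).mean()

# Incommensurate scan rate: sample phases spread over the PWM period,
# so the sample mean approaches the duty-weighted average.
t_spread = np.arange(n) / (f_pwm * np.pi)
avg_spread = current(t_spread).mean()
```

This is the failure mode the paper's scan-rate selection algorithm is designed to avoid: `avg_matched` reads only the "on" level, while `avg_spread` is close to the true average power.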


Author(s):  
Анастасия Юрьевна Тимофеева

Purpose. The purpose of this paper is to study methods for estimating copolymerization reactivity ratios based on the differential composition equation. Methodology. Most estimation methods reduce the differential composition equation to a linear form. They are based on the least squares method and do not take the measurement error in the input variable into account, and therefore lead to statistically incorrect results. When analyzing the problem on the basis of the errors-in-variables model in the classical case, additional information is required to determine the magnitude of the errors in measuring the concentration of monomers in the mixture and in the copolymer. Including the measurement error in the input variable in the model as a Berkson error is more consistent with the actual conditions of the experiments. It allows the reactivity ratios and the variances of the measurement errors to be estimated simultaneously using the maximum likelihood method. Results. An algorithm has been developed for estimating reactivity ratios without additional information. An empirical study of the estimation methods has been carried out using the example of the copolymerization of vinyl esters. Findings. It is shown that the method based on symmetric equations gives incorrect results. The estimates produced by the proposed algorithm are closest to those obtained by the nonlinear least squares method.
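The nonlinear least squares benchmark mentioned above can be sketched directly on the Mayo-Lewis differential composition equation; the reactivity ratios and noise level below are assumed values, not the vinyl-ester data.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# Mayo-Lewis instantaneous copolymer composition equation: with monomer
# feed fraction f1 (f2 = 1 - f1), the copolymer fraction of monomer 1 is
#   F1 = (r1 f1^2 + f1 f2) / (r1 f1^2 + 2 f1 f2 + r2 f2^2).
def mayo_lewis(f1, r1, r2):
    f2 = 1.0 - f1
    num = r1 * f1**2 + f1 * f2
    den = r1 * f1**2 + 2.0 * f1 * f2 + r2 * f2**2
    return num / den

r1_true, r2_true = 0.7, 0.3
f1 = np.linspace(0.05, 0.95, 19)
F1 = mayo_lewis(f1, r1_true, r2_true) + rng.normal(0.0, 0.005, f1.size)

# Nonlinear least squares on the composition equation itself -- the
# benchmark against which the Berkson-error ML estimates are compared.
(r1_hat, r2_hat), _ = curve_fit(mayo_lewis, f1, F1, p0=(1.0, 1.0))
```

Unlike the linearized schemes criticized in the abstract, this fit works on the untransformed equation, which is why it serves as the reference.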


Author(s):  
Jan Pablo Burgard ◽  
Joscha Krause ◽  
Dennis Kreber ◽  
Domingo Morales

The connection between regularization and min-max robustification in the presence of unobservable covariate measurement errors in linear mixed models is addressed. We prove that regularized model parameter estimation is equivalent to robust loss minimization under a min-max approach. Using the examples of the LASSO, Ridge regression, and the Elastic Net, we derive uncertainty sets that characterize the feasible noise that can be added to a given estimation problem. These sets allow us to determine measurement error bounds without distributional assumptions. A conservative jackknife estimator of the mean squared error in this setting is proposed. We further derive conditions under which min-max robust estimation of model parameters is consistent. The theoretical findings are supported by a Monte Carlo simulation study under multiple measurement error scenarios.
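For readers unfamiliar with the regularized estimators named above, a minimal numerical sketch of Ridge regression under covariate measurement error follows; the dimensions, penalty, and noise levels are assumed, and nothing here reproduces the paper's equivalence proof.

```python
import numpy as np

rng = np.random.default_rng(3)

# Fixed-effects core of the setting: the design matrix is observed with
# measurement error, and a Ridge penalty is applied. Closed form:
#   beta_ridge = (W'W + lam I)^{-1} W'y
n, p, lam = 500, 5, 10.0
beta = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
X = rng.normal(0, 1, (n, p))
y = X @ beta + rng.normal(0, 1, n)
W = X + rng.normal(0, 0.3, (n, p))   # covariates observed with error

def ridge(A, b, lam):
    """Closed-form Ridge estimate; lam = 0 gives ordinary least squares."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

ols = ridge(W, y, 0.0)
reg = ridge(W, y, lam)
```

The penalty shrinks the coefficient vector, which is the mechanism the min-max interpretation recasts as guarding against the worst feasible noise in an uncertainty set.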


Author(s):  
Parisa Torkaman

The generalized inverted exponential distribution is introduced as a lifetime model with good statistical properties. In this paper, estimation of the probability density function and the cumulative distribution function of this distribution is considered using five estimation methods: uniformly minimum variance unbiased (UMVU), maximum likelihood (ML), least squares (LS), weighted least squares (WLS), and percentile (PC) estimators. The performance of these procedures is compared by numerical simulation on the basis of mean squared error (MSE). The simulation studies show that the UMVU estimator performs better than the others, and that when the sample size is large enough the ML and UMVU estimators are almost equivalent and more efficient than the LS, WLS, and PC estimators. Finally, results for a real data set are analyzed.
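A minimal sketch of the ML route for this distribution, under the common parametrization F(x) = 1 - (1 - e^{-λ/x})^α for x > 0; the parameter values and sample size below are assumed for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Generalized inverted exponential distribution (assumed parametrization):
#   F(x) = 1 - (1 - exp(-lam / x))**alpha,  x > 0.
alpha_t, lam_t = 1.5, 2.0
u = rng.uniform(size=5000)
# Inverse-CDF sampling: x = -lam / log(1 - (1 - u)**(1 / alpha)).
x = -lam_t / np.log(1.0 - (1.0 - u) ** (1.0 / alpha_t))

def nll(theta):
    """Negative log-likelihood of the GIE density
    f(x) = (a*l / x^2) e^{-l/x} (1 - e^{-l/x})^{a-1}."""
    a, l = theta
    if a <= 0 or l <= 0:
        return np.inf
    z = np.exp(-l / x)
    return -np.sum(np.log(a) + np.log(l) - 2.0 * np.log(x)
                   - l / x + (a - 1.0) * np.log1p(-z))

fit = minimize(nll, x0=(1.0, 1.0), method="Nelder-Mead")
alpha_hat, lam_hat = fit.x
```

Plugging the ML estimates into the density and distribution functions gives the plug-in pdf and cdf estimators whose MSE the paper compares against the UMVU, LS, WLS, and PC alternatives.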


2017 ◽  
Vol 928 (10) ◽  
pp. 58-63 ◽  
Author(s):  
V.I. Salnikov

The initial subject of study is sums of measurement errors. The errors are assumed to follow the normal law, but with a limitation on the value of the marginal error, Δpred = 2m. It is known that for each number of terms ni there is a confidence interval within which the sum takes the value zero. The paradox is that the probability of this event is zero, so it is impossible to determine the value of ni at which the sum becomes zero. The article proposes instead to consider the event that a sum of errors remains within the 2m limits with a confidence level of 0.954. Within this group, all sums have a limiting error. These tolerances are proposed for use as discrepancy limits in geodesy instead of 2m√(ni). The concept of "the law of the truncated normal distribution with Δpred = 2m" is suggested to be introduced.
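The two numbers in the abstract are easy to verify: the ±2m limits cover about 95.4% of a normal error distribution, and truncating at those limits reduces the error variance below m². A short check, using m = 1 as the unit:

```python
from scipy import stats

# Marginal error Delta_pred = 2m; work in units of m (so m = 1).
a, b = -2.0, 2.0

# Coverage of the +/- 2m interval under the untruncated normal law.
coverage = stats.norm.cdf(b) - stats.norm.cdf(a)   # about 0.954

# Variance of the normal law truncated at +/- 2m (in units of m^2):
# truncation discards the tails, so the variance drops below m^2.
var_trunc = stats.truncnorm.var(a, b, loc=0.0, scale=1.0)
```

The truncated variance comes out near 0.77 m², which is the quantitative content of the proposed "truncated normal law with Δpred = 2m".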


2021 ◽  
pp. 1-22
Author(s):  
Daisuke Kurisu ◽  
Taisuke Otsu

This paper studies the uniform convergence rates of Li and Vuong’s (1998, Journal of Multivariate Analysis 65, 139–165; hereafter LV) nonparametric deconvolution estimator and its regularized version by Comte and Kappus (2015, Journal of Multivariate Analysis 140, 31–46) for the classical measurement error model, where repeated noisy measurements on the error-free variable of interest are available. In contrast to LV, our assumptions allow unbounded supports for the error-free variable and measurement errors. Compared to Bonhomme and Robin (2010, Review of Economic Studies 77, 491–533) specialized to the measurement error model, our assumptions do not require existence of the moment generating functions of the square and product of repeated measurements. Furthermore, by utilizing a maximal inequality for the multivariate normalized empirical characteristic function process, we derive uniform convergence rates that are faster than the ones derived in these papers under such weaker conditions.
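The identity underlying all of the deconvolution estimators discussed above is the factorization of characteristic functions in the classical measurement error model; a minimal numerical check, with assumed Gaussian distributions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Classical measurement error model: W = X + U with X and U independent,
# so the characteristic functions factor as phi_W(t) = phi_X(t) phi_U(t).
# Deconvolution estimators recover features of X from this identity;
# here we only verify the factorization on the empirical characteristic
# function of the observed variable.
n = 20_000
x = rng.normal(0.0, 1.0, n)        # error-free variable (unobserved)
u = rng.normal(0.0, 0.5, n)        # measurement error
w = x + u                          # observed measurement

def ecf(sample, t):
    """Empirical characteristic function at frequency t."""
    return np.mean(np.exp(1j * t * sample))

t = 0.8
phi_w_hat = ecf(w, t)
# Theoretical phi_X(t) * phi_U(t) for the Gaussian choices above.
phi_theory = np.exp(-0.5 * t**2) * np.exp(-0.5 * (0.5 * t) ** 2)
```

The uniform behavior of exactly this kind of normalized empirical characteristic function process, over ranges of t, is what drives the convergence rates derived in the paper.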


Econometrics ◽  
2020 ◽  
Vol 8 (4) ◽  
pp. 40
Author(s):  
Erhard Reschenhofer ◽  
Manveer K. Mangat

For typical sample sizes occurring in economic and financial applications, the squared bias of estimators for the memory parameter is small relative to the variance. Smoothing is therefore a suitable way to improve the performance in terms of the mean squared error. However, in an analysis of financial high-frequency data, where the estimates are obtained separately for each day and then combined by averaging, the variance decreases with the sample size but the bias remains fixed. This paper proposes a method of smoothing that does not entail an increase in the bias. This method is based on the simultaneous examination of different partitions of the data. An extensive simulation study is carried out to compare it with conventional estimation methods. In this study, the new method outperforms its unsmoothed competitors with respect to the variance and its smoothed competitors with respect to the bias. Using the results of the simulation study for the proper interpretation of the empirical results obtained from a financial high-frequency dataset, we conclude that significant long-range dependencies are present only in the intraday volatility but not in the intraday returns. Finally, the robustness of these findings against daily and weekly periodic patterns is established.
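The variance-versus-bias point in the abstract can be made concrete with a stylized model of daily estimation: each day's estimate carries the same fixed bias plus day-level noise, so averaging across days shrinks the variance but not the bias. All numbers below are assumed.

```python
import numpy as np

rng = np.random.default_rng(6)

# Stylized daily estimates of the memory parameter d: fixed bias plus
# independent day-level noise.
d_true, bias, sd = 0.4, 0.1, 0.2
days = 250
daily = d_true + bias + rng.normal(0.0, sd, days)

pooled = daily.mean()            # combined estimate over all days
var_daily = daily.var()          # dispersion of single-day estimates
var_pooled = var_daily / days    # variance of the average shrinks ...
# ... but E[pooled] stays at d_true + bias: the bias is untouched.
```

This is why the paper looks for a smoothing scheme that improves the variance without adding bias: averaging alone can never remove the bias term.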

