Corrigendum to “When continuous observations just won't do: Developing accurate and efficient sampling strategies for the laying hen” [Behav. Process. 103 (2014) 58–66]

2016 ◽  
Vol 130 ◽  
pp. 86
Author(s):  
Courtney L. Daigle ◽  
Janice M. Siegford

2007 ◽  
Vol 4 (3) ◽  
pp. 1069-1094
Author(s):  
M. Rivas-Casado ◽  
S. White ◽  
P. Bellamy

Abstract. River restoration appraisal requires the implementation of monitoring programmes that assess the river site before and after the restoration project. However, little work has yet been done on designing effective and efficient sampling strategies. Three main variables need to be considered when designing monitoring programmes: space, time and scale. The aim of this paper is to describe the methodology applied to analyse the variation of depth in space, scale and time so that more comprehensive monitoring programmes can be developed. Geostatistical techniques were applied to study the spatial dimension (sampling strategy and density), spectral analysis was used to study the scale at which depth shows cyclic patterns, and descriptive statistics were used to assess the temporal variation. A brief set of guidelines is summarised in the conclusion.
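The two core analyses named in the abstract can be illustrated with a minimal sketch. This is not the authors' code: the synthetic depth transect, the 25 m riffle-pool wavelength, and all variable names are illustrative assumptions. It shows an empirical semivariogram (the geostatistical step) and a periodogram whose peak recovers the dominant cyclic scale (the spectral step).

```python
import numpy as np

# Synthetic depth transect: a riffle-pool cycle with a 25 m wavelength (illustrative).
rng = np.random.default_rng(0)
x = np.arange(200.0)                 # 1 m spacing along the channel
depth = 1.0 + 0.3 * np.sin(2 * np.pi * x / 25.0) + rng.normal(0.0, 0.05, x.size)

def semivariogram(z, max_lag):
    """Empirical semivariance gamma(h) for lags 1..max_lag on a regular 1-unit grid."""
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

gamma = semivariogram(depth, 40)     # rises from the nugget toward half the cycle length

# Spectral analysis: the periodogram peak identifies the scale of the cyclic pattern.
power = np.abs(np.fft.rfft(depth - depth.mean())) ** 2
freqs = np.fft.rfftfreq(depth.size, d=1.0)
wavelength = 1.0 / freqs[1:][np.argmax(power[1:])]   # ~25 m here
```

In a real appraisal the semivariogram range would inform sampling density, while the periodogram wavelength would inform the reach length needed to capture full morphological cycles.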


2003 ◽  
Vol 5 (1) ◽  
pp. 11-25 ◽  
Author(s):  
Gayathri Gopalakrishnan ◽  
Barbara S. Minsker ◽  
David E. Goldberg

A groundwater management model has been developed that predicts human health risks and uses a noisy genetic algorithm to identify promising risk-based corrective action (RBCA) designs. Noisy genetic algorithms are simple genetic algorithms that operate in noisy environments. The noisy genetic algorithm uses a type of noisy fitness function (objective function) called the sampling fitness function, which utilises Monte-Carlo-type sampling to find robust designs. Unlike Monte Carlo simulation modelling, however, the noisy genetic algorithm is highly efficient and can identify robust designs with only a few samples per design. For hydroinformatic problems with complex fitness functions, it is important that the sampling be as efficient as possible. In this paper, methods for identifying efficient sampling strategies are investigated and their performance evaluated using a case study of an RBCA design problem. Guidelines for setting the parameter values used in these methods are also developed. Applying these guidelines to the case study resulted in highly efficient sampling strategies that found RBCA designs with 98% reliability using as few as 4 samples per design. Moreover, these designs were identified with fewer simulation runs than would likely be required to identify designs using trial-and-error Monte Carlo simulation. These findings show considerable promise for applying these methods to complex hydroinformatic problems where substantial uncertainty exists but extensive sampling cannot feasibly be done.
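The sampling fitness function idea can be sketched on a toy problem. This is not the authors' groundwater model: the one-variable objective, the noise level, and the GA settings below are all illustrative assumptions. Each candidate design is scored by averaging a small number of Monte Carlo evaluations of a noisy objective, which is the core of the noisy GA.

```python
import random

rng = random.Random(42)

def noisy_objective(x):
    """Toy cost surrogate with true optimum at x = 3, plus simulation noise."""
    return (x - 3.0) ** 2 + rng.gauss(0.0, 0.3)

def sampling_fitness(x, k=5):
    """Sampling fitness function: average k Monte-Carlo evaluations per design."""
    return sum(noisy_objective(x) for _ in range(k)) / k

# Minimal noisy GA: truncation selection plus Gaussian mutation, parents retained.
pop = [rng.uniform(0.0, 10.0) for _ in range(30)]
for _ in range(50):
    parents = sorted(pop, key=sampling_fitness)[:10]
    pop = parents + [p + rng.gauss(0.0, 0.3)
                     for p in parents for _ in range(2)]

best = min(pop, key=lambda x: sampling_fitness(x, k=50))  # converges near x = 3
```

The key trade-off the paper investigates is the choice of k: a handful of samples per design already averages out enough noise for selection to work, far fewer evaluations than exhaustive Monte Carlo screening of each design.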


2020 ◽  
Vol 221 (Supplement_5) ◽  
pp. S554-S560 ◽  
Author(s):  
Claudio Fronterre ◽  
Benjamin Amoah ◽  
Emanuele Giorgi ◽  
Michelle C Stanton ◽  
Peter J Diggle

Abstract. As neglected tropical diseases approach elimination status, there is a need to develop efficient sampling strategies for confirmation (or not) that elimination criteria have been met. This is an inherently difficult task because the relative precision of a prevalence estimate deteriorates as prevalence decreases, and classic survey sampling strategies based on random sampling therefore require increasingly large sample sizes. More efficient strategies for survey design and analysis can be obtained by exploiting any spatial correlation in prevalence within a model-based geostatistics framework. This framework can be used for constructing predictive probability maps that can inform in-country decision makers of the likelihood that their elimination target has been met, and where to invest in additional sampling. We evaluated our methodology using a case study of lymphatic filariasis in Ghana, demonstrating that a geostatistical approach outperforms approaches currently used to determine an evaluation unit’s elimination status.
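The claim that relative precision deteriorates as prevalence falls follows directly from the binomial variance, and a short calculation makes the sample-size consequence concrete. The function below is an illustrative sketch for simple random sampling only, not the paper's geostatistical method, which exists precisely to beat these numbers.

```python
import math

def sample_size_for_cv(p, cv):
    """Simple-random-sampling size needed so the prevalence estimate has
    coefficient of variation cv: Var(p_hat) = p(1-p)/n, so
    CV = sqrt((1-p)/(n*p))  =>  n = (1-p) / (p * cv**2)."""
    return math.ceil((1.0 - p) / (p * cv ** 2))

# Holding 20% relative precision fixed, n grows roughly like 1/p as
# prevalence approaches elimination levels:
n_high = sample_size_for_cv(0.10, 0.2)   # 10% prevalence ->  225
n_low  = sample_size_for_cv(0.01, 0.2)   #  1% prevalence -> 2475
```

A more than tenfold increase in sample size for a tenfold drop in prevalence is what motivates borrowing strength across space via model-based geostatistics instead of scaling up random sampling.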


2021 ◽  
Vol 204 ◽  
pp. 304-311
Author(s):  
Eduardo Rosa ◽  
Julio Mosquera ◽  
Haritz Arriaga ◽  
Gema Montalvo ◽  
Pilar Merino

1995 ◽  
Vol 17 (3) ◽  
pp. 221-229 ◽  
Author(s):  
Hajime Nakashima ◽  
Ronald Lieberman ◽  
Atsuya Karato ◽  
Hitoshi Arioka ◽  
Hironobu Ohmatsu ◽  
...  

1993 ◽  
Vol 116 (1) ◽  
pp. 195-226 ◽  
Author(s):  
Richard J. Lipton ◽  
Jeffrey F. Naughton ◽  
Donovan A. Schneider ◽  
S. Seshadri

1992 ◽  
Vol 22 (2) ◽  
pp. 239-247 ◽  
Author(s):  
H.T. Schreuder ◽  
Z. Ouyang

Our extensive effort to find an optimal sampling strategy clearly superior to the alternatives across a range of linearity conditions and variance structures for linear models showed that several sampling strategies were equally efficient. Each of these stratified the population to the maximum extent feasible, i.e., used n strata based on a covariate. Which of the two ways of stratification was used, and how units in each stratum were selected (simple random sampling or sampling with probability proportional to size), did not seem to matter much. Two regression estimators, one using both probability and variance weights (Ŷgr) and one using only probability weights (Ŷpi), are the preferred estimators with the five efficient sampling selection schemes that select one unit per stratum with either equal or unequal probability sampling. The bootstrap variance estimator is generally the least biased, yet conservative, variance estimator and yields reliable coverage rates for 95% confidence intervals in most populations studied.
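The efficiency of maximal stratification on a covariate can be demonstrated with a small simulation. This is an illustrative sketch, not the paper's design: the synthetic linear population, equal-size strata, and equal-probability selection of one unit per stratum are assumptions, and only a probability-weighted expansion estimator (a rough analogue of Ŷpi) is shown, not the bootstrap variance estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 1000, 50                                  # population size; n strata = sample size
xcov = np.sort(rng.uniform(0.0, 10.0, N))        # covariate, sorted for stratification
y = 2.0 * xcov + rng.normal(0.0, 1.0, N)         # linear model with noise
strata = np.array_split(np.arange(N), n)         # n equal-size strata on the covariate

def stratified_total(rng):
    """One unit per stratum, equal-probability selection; weight = stratum size."""
    return sum(y[rng.choice(s)] * s.size for s in strata)

def srs_total(rng):
    """Simple random sample of n units, expansion estimator, for comparison."""
    return y[rng.choice(N, n, replace=False)].mean() * N

reps = 300
strat = np.array([stratified_total(rng) for _ in range(reps)])
srs = np.array([srs_total(rng) for _ in range(reps)])
# Both are unbiased for the population total, but under a linear model the
# one-unit-per-stratum design has a much smaller variance than SRS.
```

Because each stratum spans only a narrow slice of the covariate, within-stratum variation in y is small, which is exactly why the paper finds maximal stratification to dominate regardless of the within-stratum selection rule.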


2011 ◽  
Vol 104 (4) ◽  
pp. 739-748 ◽  
Author(s):  
Kenichi Ozaki ◽  
Katsuhiko Sayama ◽  
Akira Ueda ◽  
Masato Ito ◽  
Ken Tabuchi ◽  
...  
