The effect of sample size and plot stratification on the precision of the wheel-point method of estimating botanical composition in clustered plant communities.

1979 ◽  
Vol 1 (4) ◽  
pp. 346 ◽  
Author(s):  
GM Lodge ◽  
AC Gleeson

A natural pasture in which there was a contagious distribution of the species was sampled using a single wheel-point apparatus on which the interpoint distance exceeded the size of individual plants but was less than that of the plant clusters. The standard error of mean basal cover calculated from repeated independent samples was lower than that expected from a binomial distribution. The standard errors for five levels of sampling, with and without stratification of the plot, are presented; these can be used to predict the sampling intensity needed to achieve an acceptable standard error for each mean basal cover. Either an increase in the number of points sampled over the whole plot or stratification of points within the plot reduced the standard error of the mean estimate of basal cover. At all levels of sampling, stratification of point samples gave a substantially lower standard error, and was more efficient in terms of field sampling time, than an increase in the number of points sampled.
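The gain from stratification can be illustrated with a small simulation (not the authors' apparatus or data; a minimal sketch using a synthetic 200 x 200 clustered cover map and made-up cluster sizes): points are placed either completely at random over the plot or one per equal-area stratum, and the standard error of the estimated cover is read off repeated samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "plot": a 200 x 200 grid in which cover occurs in clusters,
# a rough stand-in for a contagiously distributed species.
grid = np.zeros((200, 200), dtype=bool)
for _ in range(30):                                  # 30 plant clusters
    cy, cx = rng.integers(0, 200, size=2)
    yy, xx = np.ogrid[:200, :200]
    grid |= (yy - cy) ** 2 + (xx - cx) ** 2 < rng.integers(5, 15) ** 2

true_cover = grid.mean()

def sample_cover(n_points, stratified):
    """Estimate cover from point quadrats, with or without stratification."""
    if stratified:
        # One random point in each cell of a regular k x k grid of strata.
        k = int(np.sqrt(n_points))
        edges = np.linspace(0, 200, k + 1)
        hits = []
        for i in range(k):
            for j in range(k):
                y = rng.uniform(edges[i], edges[i + 1])
                x = rng.uniform(edges[j], edges[j + 1])
                hits.append(grid[min(int(y), 199), min(int(x), 199)])
        return np.mean(hits)
    ys = rng.integers(0, 200, n_points)
    xs = rng.integers(0, 200, n_points)
    return grid[ys, xs].mean()

for stratified in (False, True):
    estimates = [sample_cover(100, stratified) for _ in range(2000)]
    print(f"stratified={stratified}: true cover={true_cover:.3f}, "
          f"SE of estimate={np.std(estimates, ddof=1):.4f}")
```

With a clustered cover map of this kind the stratified design typically gives a noticeably smaller standard error for the same number of points, which is the pattern the authors report in the field.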

1991 ◽  
Vol 65 (03) ◽  
pp. 263-267 ◽  
Author(s):  
A M H P van den Besselaar ◽  
R M Bertina

Summary. In a collaborative trial of eleven laboratories, performed mainly within the framework of the European Community Bureau of Reference (BCR), a second reference material for thromboplastin, rabbit, plain, was calibrated against its predecessor RBT/79. This second reference material (coded CRM 149R) has a mean International Sensitivity Index (ISI) of 1.343 with a standard error of the mean of 0.035. The standard error of the ISI was determined by combining the standard errors of the ISI of RBT/79 and of the slope of the calibration line in this trial. The BCR reference material for thromboplastin, human, plain (coded BCT/099) was also included in this trial to assess the long-term stability of its relationship with RBT/79. The results indicated that this relationship has not changed over a period of 8 years. The interlaboratory variation of the slope of the relationship between CRM 149R and RBT/79 was significantly lower than the variation of the slope of the relationship between BCT/099 and RBT/79. In addition to the manual technique, a semi-automatic coagulometer according to Schnitger & Gross was used to determine prothrombin times with CRM 149R. The mean ISI of CRM 149R was not affected by replacement of the manual technique with this particular coagulometer. Two lyophilized plasmas were included in this trial. The mean slope of the relationship between RBT/79 and CRM 149R based on the two lyophilized plasmas was the same as the corresponding slope based on fresh plasmas. However, the mean slope of the relationship between RBT/79 and BCT/099 based on the two lyophilized plasmas was 4.9% higher than the mean slope based on fresh plasmas. Thus, the use of these lyophilized plasmas induced a small but significant bias in the slope of the relationship between these thromboplastins of different species.
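The abstract does not give the exact formula used, but for an ISI transferred as the product of the predecessor's ISI and the calibration slope c, a first-order error propagation (an assumption on my part, not a statement of the BCR protocol) combines the two relative standard errors:

```latex
\[
\mathrm{ISI}_{\mathrm{CRM149R}} \;=\; \mathrm{ISI}_{\mathrm{RBT/79}} \cdot c,
\qquad
\frac{\mathrm{SE}(\mathrm{ISI}_{\mathrm{CRM149R}})}{\mathrm{ISI}_{\mathrm{CRM149R}}}
\;\approx\;
\sqrt{\left(\frac{\mathrm{SE}(\mathrm{ISI}_{\mathrm{RBT/79}})}{\mathrm{ISI}_{\mathrm{RBT/79}}}\right)^{2}
      + \left(\frac{\mathrm{SE}(c)}{c}\right)^{2}}
\]
```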


1993 ◽  
Vol 156 ◽  
pp. 1-10
Author(s):  
J. Kovalevsky ◽  
M. Froeschlé

In a first part, the present status of the HIPPARCOS mission is described. Despite the degradation and failures of the gyroscopes, it is still hoped that a 4 1/2 year mission duration will be reached. The first year of data has been reduced by both the FAST and NDAC consortia. For the best 46200 observed stars, the distribution of standard errors in positions has a maximum of 1.5 mas in latitude and 1.8 mas in longitude, and the mean standard error for parallaxes is of the order of 3 mas. The comparison of results obtained by the two consortia shows that the differences are small and quite consistent with the announced internal precisions. Magnitude measurements are precise to 0.02 magnitude for a 4-second observation. The precision to be expected for double star observations is also given. The main new result is that the magnitudes of the components are obtained with a precision of a few hundredths of a magnitude. This makes it possible to devise a new method of mass determination based upon the parallax and a recalibrated mass-luminosity diagram. The parallax dependence of the results is much more favourable than in the case of the classical determination of masses using orbital motions.
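The "more favourable parallax dependence" can be made explicit with the standard relations (a sketch of textbook formulae, not the authors' derivation): the dynamical mass from an orbit scales with the inverse cube of the parallax, whereas a mass read off a mass-luminosity relation inherits only the square of the parallax through the luminosity.

```latex
\[
\underbrace{\,M_1 + M_2 \;=\; \frac{a''^{\,3}}{P^{2}\,\pi^{3}}\,}_{\text{dynamical (orbital) masses}}
\qquad\text{versus}\qquad
\underbrace{\,L \;\propto\; \frac{1}{\pi^{2}},\qquad M = f(L)\,}_{\text{mass--luminosity estimate}}
\]
```

Here a'' is the angular semi-major axis and π the parallax (both in arcsec), P the period in years, and masses are in solar units; a given relative parallax error therefore enters the dynamical mass three times over, but the photometric mass only through the much flatter mass-luminosity mapping.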


1988 ◽  
Vol 18 (5) ◽  
pp. 649-652 ◽  
Author(s):  
Gilles P. Delisle ◽  
Paul M. Woodard ◽  
Stephen J. Titus ◽  
Allen F. Johnson

This study assessed the variability of sample estimates for downed and dead woody fuel weight in natural lodgepole pine (Pinus contorta Dougl.) stands using line-intersect sampling procedures. Equilateral triangles (30 m/side) were established at each of 40 sample sites with variable length transects on each side to estimate fuel weights by diameter class. Regardless of the number of sides measured, the standard error for fuels less than 7.0 cm was at most 20% of the mean. Even measuring only one side of the triangle, using a single transect instead of the triangular sample unit, still achieved standard errors less than 20% of the mean. Standard errors for classes greater than 7.0 cm were all greater than 20% of the mean. For these classes, more samples are required to achieve the 20% standard error limit; however, depending on costs, the triangular sample unit may not be the best solution. In this study, intracluster correlations were above 0.7 for the fuel diameter classes greater than 7.0 cm, suggesting that multiple transects at a given sample location contribute little new information. This effect, although less pronounced, was also observed with the smaller diameter classes.
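As a rough illustration of the two quantities reported here (not the authors' data or code; a minimal sketch with invented transect estimates), the standard error as a percentage of the mean and a one-way ANOVA estimate of the intracluster correlation for transects nested within sites could be computed as follows.

```python
import numpy as np

# Hypothetical fuel-weight estimates (t/ha): 40 sites x 3 transects per site.
rng = np.random.default_rng(1)
site_means = rng.gamma(shape=4.0, scale=2.0, size=40)          # between-site variation
fuel = site_means[:, None] + rng.normal(0, 1.0, size=(40, 3))  # within-site variation

# Standard error of the overall mean as a percentage of the mean,
# treating the 40 site means as the independent sampling units.
per_site = fuel.mean(axis=1)
se_pct = per_site.std(ddof=1) / np.sqrt(len(per_site)) / per_site.mean() * 100
print(f"SE as % of mean: {se_pct:.1f}%")

# One-way ANOVA estimate of the intracluster (intraclass) correlation:
# ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), with k transects per site.
n_sites, k = fuel.shape
grand = fuel.mean()
msb = k * ((per_site - grand) ** 2).sum() / (n_sites - 1)
msw = ((fuel - per_site[:, None]) ** 2).sum() / (n_sites * (k - 1))
icc = (msb - msw) / (msb + (k - 1) * msw)
print(f"intracluster correlation: {icc:.2f}")
```

A high intracluster correlation, as the authors found for the larger diameter classes, means the transects within a triangle are nearly redundant, so measuring additional sides adds little precision.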


2008 ◽  
Vol 32 (3) ◽  
pp. 203-208 ◽  
Author(s):  
Douglas Curran-Everett

Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This series in Advances in Physiology Education provides an opportunity to do just that: we will investigate basic concepts in statistics using the free software package R. Because this series uses R solely as a vehicle with which to explore basic concepts in statistics, I provide the requisite R commands. In this inaugural paper we explore the essential distinction between standard deviation and standard error: a standard deviation estimates the variability among sample observations whereas a standard error of the mean estimates the variability among theoretical sample means. If we fail to report the standard deviation, then we fail to fully report our data. Because it incorporates information about sample size, the standard error of the mean is a misguided estimate of variability among observations. Instead, the standard error of the mean provides an estimate of the uncertainty of the true value of the population mean.
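The article's central distinction can be reproduced with a few lines of simulation (the published series uses R; the sketch below is an independent Python illustration, not the authors' commands): the standard deviation describes spread among the observations in one sample, while the standard error of the mean matches the spread of means across many repeated samples.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 100.0, 15.0, 25       # population mean, SD, and sample size

sample = rng.normal(mu, sigma, n)
sd = sample.std(ddof=1)              # variability among observations
sem = sd / np.sqrt(n)                # estimated variability among sample means

# Check the SEM against the empirical spread of many sample means.
means = rng.normal(mu, sigma, size=(10_000, n)).mean(axis=1)
print(f"sample SD: {sd:.2f} (population SD is {sigma})")
print(f"SEM from one sample: {sem:.2f}")
print(f"SD of 10,000 sample means: {means.std(ddof=1):.2f}  (theory: {sigma/np.sqrt(n):.2f})")
```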


2016 ◽  
Author(s):  
Jordan Anaya

GRIMMER (Granularity-Related Inconsistency of Means Mapped to Error Repeats) builds upon the GRIM test and allows for testing whether reported measures of variability are mathematically possible. GRIMMER relies upon the statistical phenomenon that variances display a simple repetitive pattern when the data is discrete, i.e. granular. This observation allows for the generation of an algorithm that can quickly identify whether a reported statistic of any size or precision is consistent with the stated sample size and granularity. My implementation of the test is available at PrePubMed (http://www.prepubmed.org/grimmer) and currently allows for testing variances, standard deviations, and standard errors for integer data. It is possible to extend the test to other measures of variability such as deviation from the mean, or apply the test to non-integer data such as data reported to halves or tenths. The ability of the test to identify inconsistent statistics relies upon four factors: (1) the sample size; (2) the granularity of the data; (3) the precision (number of decimals) of the reported statistic; and (4) the size of the standard deviation or standard error (but not the variance). The test is most powerful when the sample size is small, the granularity is large, the statistic is reported to a large number of decimal places, and the standard deviation or standard error is small (variance is immune to size considerations). This test has important implications for any field that routinely reports statistics for granular data to at least two decimal places because it can help identify errors in publications, and should be used by journals during their initial screen of new submissions. The errors detected can be the result of anything from something as innocent as a typo or rounding error to large statistical mistakes or, unfortunately, even fraud. In this report I describe the mathematical foundations of the GRIMMER test and the algorithm I use to implement it.
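To make the idea concrete, here is a brute-force consistency check in the spirit of the test (a simplified sketch written for this summary, not the PrePubMed implementation): for integer data, both the sum S and the sum of squares Q must be integers of matching parity, so a reported mean and standard deviation, each rounded to a given number of decimals, are possible only if some compatible (S, Q) pair exists.

```python
import math

def grimmer_possible(mean, sd, n, decimals=2):
    """Could n integers produce a mean and sample SD that round to the reported
    values?  A simplified necessary-condition check, not the full GRIMMER test."""
    half = 0.5 * 10 ** (-decimals)

    # Candidate integer sums S consistent with the reported (rounded) mean (GRIM step).
    s_lo = math.ceil((mean - half) * n)
    s_hi = math.floor((mean + half) * n)

    for S in range(s_lo, s_hi + 1):
        # For integer data the sum of squares Q is an integer and
        # sd^2 = (Q - S^2/n) / (n - 1), so Q = sd^2 * (n - 1) + S^2 / n.
        lo = max(sd - half, 0.0)
        q_lo = lo ** 2 * (n - 1) + S ** 2 / n
        q_hi = (sd + half) ** 2 * (n - 1) + S ** 2 / n
        for Q in range(math.ceil(q_lo), math.floor(q_hi) + 1):
            if (Q - S) % 2 == 0:     # sum(x^2) and sum(x) share parity for integers
                return True
    return False

# Illustrative values (not taken from any publication):
print(grimmer_possible(5.50, 3.03, 10))   # True  -- e.g. the integers 1..10
print(grimmer_possible(3.50, 1.22, 10))   # False -- no 10 integers can yield this pair
```

As in the full test, the check bites hardest when n is small and the statistic is reported to several decimals, because the admissible (S, Q) window then rarely contains a valid integer pair.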


2012 ◽  
Vol 8 (2) ◽  
pp. 433-450 ◽  
Author(s):  
M. Carré ◽  
J. P. Sachs ◽  
J. M. Wallace ◽  
C. Favier

Abstract. Quantitative reconstructions of the past climate statistics from geochemical coral or mollusk records require quantified error bars in order to properly interpret the amplitude of the climate change and to perform meaningful comparisons with climate model outputs. We introduce here a more precise categorization of reconstruction errors, differentiating the error bar due to the proxy calibration uncertainty from the standard error due to sampling and variability in the proxy formation process. Then, we propose a numerical approach based on Monte Carlo simulations with surrogate proxy-derived climate records. These are produced by perturbing a known time series in a way that mimics the uncertainty sources in the proxy climate reconstruction. A freely available algorithm, MoCo, was designed to be parameterized by the user and to calculate realistic systematic and standard errors of the mean and the variance of the annual temperature, and of the mean and the variance of the temperature seasonality reconstructed from marine accretionary archive geochemistry. In this study, the algorithm is used for sensitivity experiments in a case study to characterize and quantitatively evaluate the sensitivity of systematic and standard errors to sampling size, stochastic uncertainty sources, archive-specific biological limitations, and climate non-stationarity. The results of the experiments yield an illustrative example of the range of variations of the standard error and the systematic error in the reconstruction of climate statistics in the Eastern Tropical Pacific. Thus, we show that the sample size and the climate variability are the main sources of the standard error. The experiments allowed the identification and estimation of systematic bias that would not otherwise be detected because of limited modern datasets. Our study demonstrates that numerical simulations based on Monte Carlo analyses are a simple and powerful approach to improve the understanding of the proxy records. We show that the standard error for the climate statistics linearly increases with the climate variability, which means that the accuracy of the error estimated by MoCo is limited by the climate non-stationarity.
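The general approach (not the MoCo algorithm itself; a minimal sketch with assumed noise levels, growth-season window, and sampling parameters) is easy to illustrate: perturb a known monthly temperature series the way a proxy record would, resample a small number of "shells" per record, and read the systematic and standard errors of the reconstructed mean directly off the ensemble of simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" climate: 50 years of monthly temperatures with a seasonal cycle (deg C).
months = np.arange(50 * 12)
true_monthly = 24 + 3 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 0.8, months.size)
true_mean = true_monthly.mean()

def surrogate_reconstruction(n_shells=10, analytical_noise=0.5):
    """One surrogate record: each 'shell' samples one random year, but only
    months from a fixed growth season, plus analytical (measurement) noise."""
    recon = []
    for _ in range(n_shells):
        year = rng.integers(0, 50)
        grown = rng.choice(8, size=6, replace=False)   # growth limited to months 0-7
        sampled = true_monthly[year * 12 + grown] + rng.normal(0, analytical_noise, 6)
        recon.append(sampled.mean())
    return np.mean(recon)

ensemble = np.array([surrogate_reconstruction() for _ in range(5000)])
print(f"systematic error (bias): {ensemble.mean() - true_mean:+.3f} C")
print(f"standard error:          {ensemble.std(ddof=1):.3f} C")
```

The bias produced by the truncated growth season is the kind of systematic error that, as the authors note, would be hard to detect from limited modern calibration data alone, while the ensemble spread plays the role of the standard error.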


2012 ◽  
pp. 66-77 ◽  
Author(s):  
I. A. Lavrinenko ◽  
O. V. Lavrinenko ◽  
D. V. Dobrynin

The satellite images show that the area of marshes in the Kolokolkova bay was not stable during the period from 1973 to 2011. Until 2010 it varied from 357 to 636 ha. After a severe storm on July 24–25, 2010 the total area of marshes was reduced to 43–50 ha. The mean value of NDVI for the studied marshes, reflecting the green biomass, varied from 0.13 to 0.32 before the storm in 2010; after the storm the NDVI decreased to 0.10, and in 2011 to 0.03. A comparative analysis of the species composition and structure of plant communities described in 2002 and 2011 allowed the vegetation changes of marshes of the different topographic levels to be evaluated. They are as follows: a total destruction of plant communities of the ass. Puccinellietum phryganodis and ass. Caricetum subspathaceae on low and middle marshes; an increasing role of halophytic species in plant communities of the ass. Caricetum glareosae vic. Calamagrostis deschampsioides subass. typicum on middle marshes; some changes in species composition and structure of plant communities of the ass. Caricetum glareosae vic. Calamagrostis deschampsioides subass. festucetosum rubrae on high marshes and ass. Parnassio palustris–Salicetum reptantis in the transition zone between marshes and tundra, without changes in their syntaxonomy; and the death of moss cover in plant communities of the ass. Caricetum mackenziei var. Warnstorfia exannulata on brackish coastal bogs. The possible reasons for this dramatic vegetation dynamics are discussed. The dating of the storm makes it possible to observe the directions and rates of succession of the marsh vegetation.


1953 ◽  
Vol 43 (1) ◽  
pp. 77-88 ◽  
Author(s):  
H. D. Patterson

An experiment, designed to test different ways of using straw with fertilizers, and involving a three-course rotation of crops, was carried out at Rothamsted between 1933 and 1951. The methods of analysis developed for this experiment are described in the present paper and demonstrated using yields of potatoes. Treatment effects of interest are given by the mean yields over all years and the linear regressions of yield on time. These estimates are straightforward but the evaluation of their errors is complicated by the existence of correlations due to the recurrence of treatments on the same plots. Further complications are introduced when, as frequently happens in long-term experiments, treatment effects show real variation from year to year. A method is given for estimating standard errors which include a contribution from this variation. The various relationships between yields and the uncontrolled seasonal factors can also be examined; in the present experiment there is some indication that the effects of treatments on yields of potatoes are influenced by the dates of planting. In other circumstances the analysis requires modifications, some of which are briefly considered.
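As a toy version of the two quantities Patterson extracts (not his data or his error model, which must also account for the recurrence of treatments on the same plots), the mean yield over years and the linear regression of yield on time for one treatment could be computed as below; the naive slope standard error shown here ignores the plot correlations and the year-to-year treatment variation that the paper's method is designed to handle.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1933, 1952)

# Hypothetical potato yields (t/ha) for one treatment: a mean level,
# a slow trend, and year-to-year variation.
yields = 20 + 0.15 * (years - years[0]) + rng.normal(0, 2.0, years.size)

mean_yield = yields.mean()

# Linear regression of yield on time (centred years) and the naive slope SE.
x = years - years.mean()
slope = (x * (yields - mean_yield)).sum() / (x ** 2).sum()
resid = yields - (mean_yield + slope * x)
s2 = (resid ** 2).sum() / (len(years) - 2)       # residual variance
slope_se = np.sqrt(s2 / (x ** 2).sum())

print(f"mean yield over years: {mean_yield:.1f} t/ha")
print(f"trend: {slope:+.3f} t/ha per year (naive SE {slope_se:.3f})")
```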


1. It is widely felt that any method of rejecting observations with large deviations from the mean is open to some suspicion. Suppose that by some criterion, such as Peirce's and Chauvenet's, we decide to reject observations with deviations greater than 4σ, where σ is the standard error, computed from the standard deviation by the usual rule; then we reject an observation deviating by 4·5σ, and thereby alter the mean by about 4·5σ/n, where n is the number of observations, and at the same time we reduce the computed standard error. This may lead to the rejection of another observation deviating from the original mean by less than 4σ, and if the process is repeated the mean may be shifted so much as to lead to doubt as to whether it is really sufficiently representative of the observations. In many cases, where we suspect that some abnormal cause has affected a fraction of the observations, there is a legitimate doubt as to whether it has affected a particular observation. Suppose that we have 50 observations. Then there is an even chance, according to the normal law, of a deviation exceeding 2·33σ. But a deviation of 3σ or more is not impossible, and if we make a mistake in rejecting it the mean of the remainder is not the most probable value. On the other hand, an observation deviating by only 2σ may be affected by an abnormal cause of error, and then we should err in retaining it, even though no existing rule will instruct us to reject such an observation. It seems clear that the probability that a given observation has been affected by an abnormal cause of error is a continuous function of the deviation; it is never certain or impossible that it has been so affected, and a process that completely rejects certain observations, while retaining with full weight others with comparable deviations, possibly in the opposite direction, is unsatisfactory in principle.
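The objection, that each rejection shifts the mean and shrinks the computed standard error so that the cutoff itself moves, can be illustrated directly (a minimal sketch applying a fixed 4-sigma rule to simulated data, not any particular historical criterion):

```python
import numpy as np

rng = np.random.default_rng(3)
obs = rng.normal(0.0, 1.0, 50)
obs[0] = 6.0                        # one grossly deviating observation

data = obs.copy()
for step in range(1, 6):
    mean, sd = data.mean(), data.std(ddof=1)
    keep = np.abs(data - mean) <= 4 * sd        # fixed 4-sigma rejection rule
    print(f"step {step}: n={data.size:2d}  mean={mean:+.3f}  sd={sd:.3f}  "
          f"SE of mean={sd/np.sqrt(data.size):.3f}  rejected={(~keep).sum()}")
    if keep.all():
        break
    data = data[keep]
```

Each pass recomputes the mean and the standard error from the surviving observations, so the rejection threshold tightens; whether a second observation then falls outside it depends on the particular sample, which is exactly the instability described above.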

