Sample size and variability of fuel weight estimates in natural stands of lodgepole pine

1988 ◽  
Vol 18 (5) ◽  
pp. 649-652 ◽  
Author(s):  
Gilles P. Delisle ◽  
Paul M. Woodard ◽  
Stephen J. Titus ◽  
Allen F. Johnson

This study assessed the variability of sample estimates for downed and dead woody fuel weight in natural lodgepole pine (Pinus contorta Dougl.) stands using line-intersect sampling procedures. Equilateral triangles (30 m/side) were established at each of 40 sample sites, with variable-length transects on each side, to estimate fuel weights by diameter class. Regardless of the number of sides measured, the standard error for fuels less than 7.0 cm was at most 20% of the mean. Even measuring only one side of the triangle, i.e., using a single transect instead of the triangular sample unit, still achieved standard errors less than 20% of the mean. Standard errors for classes greater than 7.0 cm were all greater than 20% of the mean. For these classes, more samples are required to achieve the 20% standard error limit; however, depending on costs, the triangular sample unit may not be the best solution. In this study, intracluster correlations were above 0.7 for the fuel diameter classes greater than 7.0 cm, suggesting that multiple transects at a given sample location contribute little new information. This effect, although less pronounced, was also observed for the smaller diameter classes.
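The intracluster-correlation finding can be made concrete with the standard design-effect formula, which converts k correlated transects into an effective number of independent ones. The sketch below uses the study's setup of three triangle sides and an ICC of 0.7, but it is a generic illustration, not the authors' analysis.

```python
def effective_transects(k, icc):
    """Effective number of independent transects in a cluster of k
    transects whose measurements have intracluster correlation icc.
    Uses the design effect: deff = 1 + (k - 1) * icc."""
    return k / (1 + (k - 1) * icc)

# Three sides of the triangle with an ICC of 0.7 behave like only
# 1.25 independent transects, so the extra sides add little information.
print(effective_transects(3, 0.7))
```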

1991 ◽  
Vol 65 (03) ◽  
pp. 263-267 ◽  
Author(s):  
A M H P van den Besselaar ◽  
R M Bertina

Summary
In a collaborative trial of eleven laboratories, performed mainly within the framework of the European Community Bureau of Reference (BCR), a second reference material for thromboplastin, rabbit, plain, was calibrated against its predecessor RBT/79. This second reference material (coded CRM 149R) has a mean International Sensitivity Index (ISI) of 1.343 with a standard error of the mean of 0.035. The standard error of the ISI was determined by combining the standard error of the ISI of RBT/79 with the standard error of the slope of the calibration line in this trial. The BCR reference material for thromboplastin, human, plain (coded BCT/099), was also included in this trial to assess the long-term stability of its relationship with RBT/79. The results indicated that this relationship has not changed over a period of 8 years. The interlaboratory variation of the slope of the relationship between CRM 149R and RBT/79 was significantly lower than the variation of the slope of the relationship between BCT/099 and RBT/79. In addition to the manual technique, a semi-automatic coagulometer according to Schnitger & Gross was used to determine prothrombin times with CRM 149R. The mean ISI of CRM 149R was not affected by replacement of the manual technique with this particular coagulometer. Two lyophilized plasmas were included in this trial. The mean slope of the relationship between RBT/79 and CRM 149R based on the two lyophilized plasmas was the same as the corresponding slope based on fresh plasmas. However, the mean slope of the relationship between RBT/79 and BCT/099 based on the two lyophilized plasmas was 4.9% higher than the mean slope based on fresh plasmas. Thus, the use of these lyophilized plasmas induced a small but significant bias in the slope of the relationship between these thromboplastins of different species.
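The error combination described above can be sketched as ordinary propagation of independent relative errors: the new ISI is the predecessor's ISI times the calibration slope, and their relative standard errors add in quadrature. Every number below except the reported mean ISI of 1.343 and its standard error of 0.035 is a hypothetical placeholder chosen only to be consistent with those two figures.

```python
import math

# Hypothetical inputs: predecessor ISI and calibration slope with their SEs.
isi_rbt79, se_isi = 1.14, 0.022    # assumed ISI of RBT/79 and its SE
slope, se_slope = 1.178, 0.020     # assumed calibration-line slope and its SE

isi_crm149r = isi_rbt79 * slope
# Combine relative errors in quadrature (independent error sources):
se_crm149r = isi_crm149r * math.sqrt((se_isi / isi_rbt79) ** 2
                                     + (se_slope / slope) ** 2)
print(round(isi_crm149r, 3), round(se_crm149r, 3))
```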


1993 ◽  
Vol 156 ◽  
pp. 1-10 ◽  
Author(s):  
J. Kovalevsky ◽  
M. Froeschlé

In a first part, the present status of the HIPPARCOS mission is described. Despite the degradations and failures of gyroscopes, it is still hoped that a 4 1/2 year mission duration will be reached. The first year of data has been reduced by both the FAST and NDAC consortia. For the best 46200 observed stars, the distribution of standard errors in position has a maximum of 1.5 mas in latitude and 1.8 mas in longitude, and the mean standard error for parallaxes is of the order of 3 mas. The comparison of results obtained by the two consortia shows that the differences are small and quite consistent with the announced internal precisions. Magnitude measurements are precise to 0.02 magnitude for a 4-second observation. The precision to be expected for double star observations is also given. The main new result is that the magnitudes of the components are obtained with a precision of a few hundredths of a magnitude. This makes it possible to devise a new method of mass determination based upon the parallax and a recalibrated mass-luminosity diagram. The parallax dependence of the results is much more favourable than in the case of the classical determination of masses using orbital motions.
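The parallax-based mass method can be outlined as follows: a Hipparcos parallax fixes the distance, hence the absolute magnitude, and a mass-luminosity relation then yields the mass. The relation below is a crude textbook power law, not the recalibrated diagram the authors describe, and all numbers are purely illustrative.

```python
import math

def absolute_magnitude(m_apparent, parallax_mas):
    """Absolute magnitude from apparent magnitude and parallax:
    M = m + 5 + 5 * log10(parallax in arcsec)."""
    return m_apparent + 5 + 5 * math.log10(parallax_mas / 1000.0)

def mass_from_magnitude(M_abs, M_sun=4.83, exponent=3.5):
    """Invert a crude main-sequence law L/Lsun ~ (mass/Msun)**exponent
    to get a mass in solar units; illustrative only."""
    lum_ratio = 10 ** ((M_sun - M_abs) / 2.5)
    return lum_ratio ** (1 / exponent)

# A star of apparent magnitude 5.0 with a 100 mas parallax:
M = absolute_magnitude(5.0, 100.0)
print(round(mass_from_magnitude(M), 2))
```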



2008 ◽  
Vol 32 (3) ◽  
pp. 203-208 ◽  
Author(s):  
Douglas Curran-Everett

Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This series in Advances in Physiology Education provides an opportunity to do just that: we will investigate basic concepts in statistics using the free software package R. Because this series uses R solely as a vehicle with which to explore basic concepts in statistics, I provide the requisite R commands. In this inaugural paper we explore the essential distinction between standard deviation and standard error: a standard deviation estimates the variability among sample observations whereas a standard error of the mean estimates the variability among theoretical sample means. If we fail to report the standard deviation, then we fail to fully report our data. Because it incorporates information about sample size, the standard error of the mean is a misguided estimate of variability among observations. Instead, the standard error of the mean provides an estimate of the uncertainty of the true value of the population mean.
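The series itself demonstrates these ideas with R commands; the same distinction can be sketched with hypothetical data in Python. The SD describes the spread of the observations, while the SEM, which shrinks with sample size, describes the uncertainty of the sample mean.

```python
import math
import statistics

data = [4.2, 4.8, 5.1, 5.6, 6.3]       # hypothetical observations

sd = statistics.stdev(data)             # variability among the observations
sem = sd / math.sqrt(len(data))         # uncertainty of the sample mean

print(round(sd, 3), round(sem, 3))      # -> 0.797 0.356
```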


2016 ◽  
Author(s):  
Jordan Anaya

GRIMMER (Granularity-Related Inconsistency of Means Mapped to Error Repeats) builds upon the GRIM test and allows for testing whether reported measures of variability are mathematically possible. GRIMMER relies upon the statistical phenomenon that variances display a simple repetitive pattern when the data is discrete, i.e. granular. This observation allows for the generation of an algorithm that can quickly identify whether a reported statistic of any size or precision is consistent with the stated sample size and granularity. My implementation of the test is available at PrePubMed (http://www.prepubmed.org/grimmer) and currently allows for testing variances, standard deviations, and standard errors for integer data. It is possible to extend the test to other measures of variability such as deviation from the mean, or apply the test to non-integer data such as data reported to halves or tenths. The ability of the test to identify inconsistent statistics relies upon four factors: (1) the sample size; (2) the granularity of the data; (3) the precision (number of decimals) of the reported statistic; and (4) the size of the standard deviation or standard error (but not the variance). The test is most powerful when the sample size is small, the granularity is large, the statistic is reported to a large number of decimal places, and the standard deviation or standard error is small (variance is immune to size considerations). This test has important implications for any field that routinely reports statistics for granular data to at least two decimal places because it can help identify errors in publications, and should be used by journals during their initial screen of new submissions. The errors detected can be the result of anything from something as innocent as a typo or rounding error to large statistical mistakes or unfortunately even fraud. In this report I describe the mathematical foundations of the GRIMMER test and the algorithm I use to implement it.
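The fast algorithm described in the abstract is not reproduced here, but the consistency question it answers can be illustrated with a naive brute-force stand-in: enumerate every possible sample of n integers from the allowed range and check whether any reproduces both reported statistics after rounding. All the numbers in the example are hypothetical.

```python
from itertools import combinations_with_replacement
from statistics import mean, stdev

def consistent(reported_mean, reported_sd, n, values, decimals=2):
    """Brute-force GRIMMER-style check: can any multiset of n integers
    drawn from `values` reproduce both the reported mean and SD after
    rounding to `decimals` places? Feasible only for small n."""
    return any(
        round(mean(s), decimals) == reported_mean
        and round(stdev(s), decimals) == reported_sd
        for s in combinations_with_replacement(values, n)
    )

# n = 5 Likert responses on a 1-7 scale (hypothetical report):
print(consistent(2.60, 1.14, 5, range(1, 8)))  # possible, e.g. [1, 2, 3, 3, 4]
print(consistent(2.61, 1.14, 5, range(1, 8)))  # impossible: 5 * 2.61 is not an integer
```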


2012 ◽  
Vol 8 (2) ◽  
pp. 433-450 ◽  
Author(s):  
M. Carré ◽  
J. P. Sachs ◽  
J. M. Wallace ◽  
C. Favier

Abstract. Quantitative reconstructions of the past climate statistics from geochemical coral or mollusk records require quantified error bars in order to properly interpret the amplitude of the climate change and to perform meaningful comparisons with climate model outputs. We introduce here a more precise categorization of reconstruction errors, differentiating the error bar due to the proxy calibration uncertainty from the standard error due to sampling and variability in the proxy formation process. Then, we propose a numerical approach based on Monte Carlo simulations with surrogate proxy-derived climate records. These are produced by perturbing a known time series in a way that mimics the uncertainty sources in the proxy climate reconstruction. A freely available algorithm, MoCo, was designed to be parameterized by the user and to calculate realistic systematic and standard errors of the mean and the variance of the annual temperature, and of the mean and the variance of the temperature seasonality reconstructed from marine accretionary archive geochemistry. In this study, the algorithm is used for sensitivity experiments in a case study to characterize and quantitatively evaluate the sensitivity of systematic and standard errors to sampling size, stochastic uncertainty sources, archive-specific biological limitations, and climate non-stationarity. The results of the experiments yield an illustrative example of the range of variations of the standard error and the systematic error in the reconstruction of climate statistics in the Eastern Tropical Pacific. Thus, we show that the sample size and the climate variability are the main sources of the standard error. The experiments allowed the identification and estimation of systematic bias that would not otherwise be detected because of limited modern datasets. 
Our study demonstrates that numerical simulations based on Monte Carlo analyses are a simple and powerful approach to improve the understanding of the proxy records. We show that the standard error for the climate statistics linearly increases with the climate variability, which means that the accuracy of the error estimated by MoCo is limited by the climate non-stationarity.
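The Monte Carlo strategy can be sketched in miniature: perturb and subsample a known series many times, then read the systematic error off the mean of the surrogate estimates and the standard error off their spread. Everything below (the series, sample size, and noise level) is a hypothetical toy, not the MoCo algorithm itself.

```python
import math
import random
import statistics

random.seed(0)
# A known "true" monthly temperature series (hypothetical, degC):
true_series = [20 + 3 * math.sin(2 * math.pi * t / 12) for t in range(120)]

def surrogate_mean(series, n_samples, noise_sd):
    """One surrogate reconstruction: subsample the series and perturb
    each value, mimicking sampling and calibration uncertainty."""
    picks = random.sample(series, n_samples)
    return statistics.mean(x + random.gauss(0, noise_sd) for x in picks)

means = [surrogate_mean(true_series, 10, 0.5) for _ in range(2000)]
bias = statistics.mean(means) - statistics.mean(true_series)  # systematic error
se = statistics.stdev(means)                                  # standard error
print(round(bias, 2), round(se, 2))
```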


1979 ◽  
Vol 1 (4) ◽  
pp. 346 ◽  
Author(s):  
GM Lodge ◽  
AC Gleeson

A natural pasture in which there was a contagious distribution of the species was sampled using a single wheel-point apparatus on which the interpoint distance exceeded the size of individual plants but was less than that of the plant clusters. The standard error of mean basal cover calculated from repeated independent samples was lower than that expected from a binomial distribution. The standard errors for five levels of sampling, with and without stratification of the plot, are presented, and these can be used to predict the sampling intensity needed to achieve an acceptable standard error for each mean basal cover. Either an increase in the number of points sampled over the whole plot or stratification of points within the plot reduced the standard error of the mean estimate of basal cover. At all levels of sampling, stratification of point samples gave a substantially lower standard error, and was more efficient in terms of field sampling time, than an increase in the number of points sampled.
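The binomial benchmark used above also supplies the usual back-of-envelope predictor of sampling intensity for point sampling. A minimal sketch, with purely illustrative numbers:

```python
import math

def points_needed(cover, target_se):
    """Number of independent point samples needed so that the binomial
    standard error of mean basal cover falls below target_se:
    SE = sqrt(p * (1 - p) / n)  =>  n = p * (1 - p) / SE**2."""
    return math.ceil(cover * (1 - cover) / target_se ** 2)

# e.g. 5% basal cover estimated with a 1% (absolute) standard error:
print(points_needed(0.05, 0.01))   # -> 475
```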


1990 ◽  
Vol 66 (6) ◽  
pp. 596-599 ◽  
Author(s):  
Allen F. Johnson ◽  
Paul M. Woodard ◽  
Stephen J. Titus

Regression equations predicting foliage and roundwood biomass by diameter class (0.0-0.5 cm, 0.5-1.0 cm, 1.0-3.0 cm, 3.0-5.0 cm, 5.0-7.0 cm, and 7.0-10.0 cm) from diameter at breast height (dbh) were developed for lodgepole pine and white spruce, two species common to the Prairie Provinces. The allometric model y = a·dbh^b fit the data well for all component categories except the roundwood classes >3.0 cm. The r² values generally exceeded 0.80 and the standard errors of estimate (SEE) were small. The larger size classes are best predicted by multiplying the number of trees affected by a constant. The management value of this new information is significant when viewed from an ecological perspective.
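A model of the form y = a·dbh^b is conventionally fitted by ordinary least squares on log-transformed data. The sketch below uses invented observations; the fitted coefficients are not those of the study.

```python
import math

# Hypothetical (dbh cm, biomass kg) observations, for illustration only:
obs = [(5, 1.2), (10, 4.1), (15, 8.9), (20, 15.8), (25, 24.5)]

# Fit y = a * dbh**b by least squares on the log-transformed data:
# log y = log a + b * log dbh is a straight line.
xs = [math.log(d) for d, _ in obs]
ys = [math.log(w) for _, w in obs]
n = len(obs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))
a = math.exp(ybar - b * xbar)
print(round(a, 3), round(b, 3))
```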


Soil Research ◽  
1984 ◽  
Vol 22 (1) ◽  
pp. 81 ◽  
Author(s):  
DK Friesen ◽  
GJ Blair

Soil testing programs are often brought into disrepute by unexplained variability in the data. The deposition of dung and urine onto grazed pasture brings about marked variation in the chemical status of soils, which contributes to this variability. A study was undertaken to compare a range of sampling procedures for estimating Colwell-P, Bray-1 P, bicarbonate K, and pH levels in adjacent low and high P status paddocks. The sampling strategies used consisted of 75 by 50 m grids, and whole and stratified paddock zig-zag and cluster (monitor plot) samplings. Soil test means for the various parameters did not vary among sampling methods. The number of grid samples required to estimate within 10% of the mean varied from 121 for Bray-1 P down to 1 for soil pH. Sampling efficiencies were higher for cluster sampling than for whole-paddock zig-zag path sampling. Stratification generally did not improve sampling efficiency in these paddocks. Soil test means declined as sampling depth increased, but the coefficient of variation remained constant for Colwell-P and pH. The results indicate that cluster sampling (monitor plots) is the most appropriate procedure for estimating the nutrient status of grazed pastures. This sampling method enables a more accurate measure of the nutrient status of a paddock and should allow more reasonable estimates of the temporal variations in soil tests.
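Required sample numbers of this kind are commonly derived from the coefficient of variation under a normal approximation. A minimal sketch; the CV below is a hypothetical placeholder, chosen only to be of the magnitude that yields a figure like the 121 samples reported for Bray-1 P.

```python
import math

def samples_needed(cv, rel_error=0.10, z=1.96):
    """Samples required to estimate a mean to within +/- rel_error of
    its true value at ~95% confidence, given the coefficient of
    variation (normal approximation: n = (z * cv / rel_error)**2)."""
    return math.ceil((z * cv / rel_error) ** 2)

# A hypothetical CV of 56%:
print(samples_needed(0.56))    # -> 121
```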


1971 ◽  
Vol 1 (4) ◽  
pp. 262-266 ◽  
Author(s):  
D. F. W. Pollard

Biomass (stems and branches) increased from 17 000 kg ha−1 in the 4th year to 34 000 kg ha−1 in the 7th year of development of an aspen sucker stand. The bulk of the biomass was distributed in the middle and upper diameter classes of shoots; net annual increases occurred only in the upper classes. About 80% of the shoots dying in the 3 years of study were less than 2 cm dbh; the biomass lost in these amounted to 200 kg ha−1 or less each year. The remaining 20% of mortality occurred in the 7th year among shoots 2–5 cm dbh infected with Diplodia tumefaciens. Biomass lost in these larger shoots amounted to 4 900 kg ha−1; this was close to the discrepancy between net production (stems and branches) in the 7th year (2 600 kg ha−1 per annum) and net production in the 5th and 6th years (about 7 000 kg ha−1 per annum). Results suggest that although high rates of net annual production are obtainable in short rotations, the mean annual production is strongly influenced by disease because of insufficient time for enhanced growth of survivors.

