The use of point quadrats, with special reference to stem-like organs.

1966 ◽  
Vol 14 (1) ◽  
pp. 105 ◽  
Author(s):  
JR Philip

Refinement of point quadrat techniques leads to three integral equations. (A) relates the variation of contact frequency with quadrat angle, f(β), to the distribution of foliage density with foliage angle, g(α) (Philip 1965a). (B), applicable to stems or stem-like organs, relates f(β) to the distribution of foliage (stem surface) density with axial angle, h(γ); and (C) connects g(α) and h(γ). A trio of integral equations analogous to (A), (B), (C) holds for any class of axisymmetrical organs whose members are geometrically similar. The utility of these equations in practice depends on the differential order of their solutions: the higher the order, the greater the amplification of errors. The order is 2½ for (A) and 3 for (B). Reliable results on the distribution of stem axial angles thus require very accurate data (and hence a great deal of labour). The kernels entering (A), (B), and (C) are basic, not only to "integral equation" studies of the problem, but also to less ambitious approaches. Data on these kernels are therefore presented. They are used to illustrate the inherent difficulties in estimating h(γ). Simple methods are developed for estimating foliage density for stems from quadrat observations at one, two, or three angles. These are appreciably more accurate than the similar formulae (for foliage in general) developed by Warren Wilson (1960, 1963). The reason for this is indicated. The latter sections of the paper deal with some statistical aspects of the use of point quadrats. For a given "relative variance" the accuracy of any f(β) observation depends solely on the number of quadrat contacts, N. The relative variance is typically of order unity, and it follows that the relative standard error of f(β) is of order N^(-½). The accuracy of f(β) observations may therefore be determined a priori by fixing minimum contact numbers rather than by fixing quadrat numbers. Practical implementation of procedures of this type is discussed.
Optimal strategies for simple estimates of foliage density are considered, the criterion being maximum accuracy for a given quantity of observational labour. Accuracy may be improved markedly by proper distribution of contact numbers amongst the quadrat angles. The optimal distribution is indicated. A basis for the choice between one-, two-, and three-angle formulae is developed. The accuracy of alternative formulae depends on the total variance arising from (i) sampling error in the observations, and (ii) intrinsic error in the formula. The method is arbitrary in the sense that a rule is required to distinguish between the labour needed to observe a fixed total number of contacts at one, two, and three quadrat angles. The approach is illustrated by applying it to Warren Wilson's formulae. It may be used also for the corresponding "stem" formulae and for formulae involving f(0°) and f(90°), which are better adapted to give estimates of "mean" foliage or axial angle as well. The errors in estimates of "mean" foliage and axial angles due to sampling errors in f(0°) and f(90°) are examined. The determination of "mean" axial angle (even if the assumption of a uniform angle were valid) is inherently rather inaccurate, especially for small values of the angle.
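The N^(-½) scaling of the relative standard error can be checked with a small simulation. This is an illustrative sketch assuming Poisson-distributed contact counts (one simple way to obtain a relative variance of order unity), not the paper's own derivation:

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_se_of_f(mean_contacts_per_quadrat, n_quadrats, n_trials=2000):
    """Monte Carlo estimate of the relative standard error of the mean
    contact frequency f(beta), assuming Poisson-distributed contacts
    per quadrat (relative variance of order unity)."""
    counts = rng.poisson(mean_contacts_per_quadrat,
                         size=(n_trials, n_quadrats))
    f_hat = counts.mean(axis=1)
    return f_hat.std() / f_hat.mean()

# Quadrupling the total number of contacts N should halve the relative SE
se_small = relative_se_of_f(2.0, 100)   # N ~ 200 contacts in total
se_large = relative_se_of_f(2.0, 400)   # N ~ 800 contacts in total
print(se_small, se_large, se_small / se_large)
```

With N ≈ 200 the relative SE comes out near 1/√200 ≈ 0.07, consistent with fixing accuracy a priori through minimum contact numbers.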

1965 ◽  
Vol 13 (2) ◽  
pp. 357 ◽  
Author(s):  
JR Philip

Estimation of the distribution of foliage density with foliage angle from contact frequency data for a number of quadrat inclinations involves solution of a Fredholm integral equation of the first kind. The kernel is known from the work of Warren Wilson and Reeves, and the observed contact frequencies constitute the given function f(β). The solution is g(α), the foliage angle density function. f(β) is known at only a finite number of points, and each value contains inevitable sampling errors. The structure of the solution is such that g(α) is consequently subject to serious errors. A technique involving smoothing of the data is developed with the aim of minimizing this difficulty. The technique is critically discussed and applied to observations of Warren Wilson on lucerne leaves. The analysis indicates that the distribution of leaf angle is roughly symmetrical about the mean angle, with a standard deviation of about 15°.
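The error amplification described here can be reproduced with a discretized Fredholm equation of the first kind. The kernel, grid, and Tikhonov-type smoothing below are illustrative assumptions, not the actual Warren Wilson–Reeves kernel or the paper's smoothing technique:

```python
import numpy as np

# Discretize a first-kind Fredholm equation f = K g on a uniform grid.
# The smooth Gaussian kernel here is illustrative only.
n = 40
h = (np.pi / 2) / n                          # quadrature weight
alpha = np.linspace(0.0, np.pi / 2, n)       # "foliage angle" grid
beta = np.linspace(0.0, np.pi / 2, n)        # "quadrat angle" grid
K = np.exp(-np.subtract.outer(beta, alpha) ** 2 / (2 * 0.15 ** 2)) * h

g_true = np.exp(-((alpha - 0.8) ** 2) / 0.05)    # synthetic g(alpha)
rng = np.random.default_rng(1)
f_obs = K @ g_true + 0.01 * rng.standard_normal(n)  # sampling errors

def tikhonov(K, f, lam):
    """Regularized least squares: min ||K g - f||^2 + lam ||g||^2,
    a stand-in for the paper's data-smoothing step."""
    return np.linalg.solve(K.T @ K + lam * np.eye(K.shape[1]), K.T @ f)

g_naive = np.linalg.lstsq(K, f_obs, rcond=None)[0]  # error amplification
g_reg = tikhonov(K, f_obs, 1e-3)                    # smoothed solution

err_naive = np.linalg.norm(g_naive - g_true)
err_reg = np.linalg.norm(g_reg - g_true)
print(err_naive, err_reg)   # the unregularized error is enormous
```

Tiny sampling errors in f(β) are amplified by the small singular values of the kernel; smoothing (here, regularization) trades a little bias for a large reduction in that amplification.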


1988 ◽  
Vol 78 (1) ◽  
pp. 155-161 ◽  
Author(s):  
J. Van Sickle

Abstract Several published reports have presented estimates of the rate of increase, r, based on sampled ovarian age distributions from Glossina populations throughout Africa. These estimates are invalid, because an age distribution sampled at one point in time can be equated to a survivorship curve only if r = 0. When such a survivorship curve and a corresponding fecundity schedule are then used to estimate r via the Euler-Lotka equation, the result is a value of r near zero, regardless of the population's true rate of increase. Synthetic sampling from a hypothetical tsetse population confirmed that estimates computed in this fashion are entirely the products of sampling error. Valid estimates of r can sometimes be obtained from an age distribution, using an alternative method, but such estimates are highly sensitive to sampling errors in the distribution.
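The core argument can be reproduced numerically. The survivorship and fecundity schedules below are hypothetical; the point is that feeding a stable age distribution back into the Euler-Lotka equation returns r ≈ 0 whatever the true rate of increase:

```python
import numpy as np

ages = np.arange(1, 11)                   # reproductive ages (hypothetical)
survival = 0.8 ** ages                    # survivorship l(x) (hypothetical)
fecundity = np.full_like(survival, 0.9)   # fecundity m(x) (hypothetical)

def solve_r(l, m, ages, lo=-1.0, hi=1.0):
    """Solve the Euler-Lotka equation sum_x exp(-r x) l(x) m(x) = 1
    by bisection; the left side is decreasing in r, and the bracket
    [lo, hi] is assumed to contain the root."""
    def euler_lotka(r):
        return np.sum(np.exp(-r * ages) * l * m) - 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if euler_lotka(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r_true = solve_r(survival, fecundity, ages)

# Mistake the stable age distribution, c(x) proportional to exp(-r x) l(x),
# for a survivorship curve and feed it back into Euler-Lotka:
pseudo_survivorship = np.exp(-r_true * ages) * survival
r_biased = solve_r(pseudo_survivorship, fecundity, ages)

print(r_true, r_biased)   # r_biased is near zero regardless of r_true
```

Algebraically the exp(-r x) factor in the sampled age distribution cancels the one in the Euler-Lotka sum, so the "estimate" collapses to r ≈ 0.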


2016 ◽  
Vol 9 (4) ◽  
pp. 1653-1669 ◽  
Author(s):  
Hui Wang ◽  
Rebecca J. Barthelmie ◽  
Sara C. Pryor ◽  
Gareth Brown

Abstract. Doppler lidars are frequently operated in a mode referred to as arc scans, wherein the lidar beam scans across a sector with a fixed elevation angle and the resulting measurements are used to derive an estimate of the n-minute horizontal mean wind velocity (speed and direction). Previous studies have shown that the uncertainty in the measured wind speed originates from turbulent wind fluctuations and depends on the scan geometry (the arc span and the arc orientation). This paper is designed to provide guidance on optimal scan geometries for two key applications in the wind energy industry: wind turbine power performance analysis and annual energy production prediction. We present a quantitative analysis of the retrieved wind speed uncertainty derived using a theoretical model with the assumption of isotropic and frozen turbulence, and observations from three sites: onshore with flat terrain, onshore with complex terrain, and offshore. The results from both the theoretical model and observations show that the uncertainty scales with the turbulence intensity such that the relative standard error on the 10 min mean wind speed is about 30 % of the turbulence intensity. The uncertainty in both retrieved wind speeds and derived wind energy production estimates can be reduced by aligning lidar beams with the dominant wind direction, increasing the arc span and lowering the number of beams per arc scan. Large arc spans should be used at sites with high turbulence intensity and/or large wind direction variation.
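A toy retrieval illustrates why the uncertainty scales with turbulence intensity. The geometry and noise model below are strong simplifications (uncorrelated fluctuations per beam rather than frozen isotropic turbulence), so the numerical factor will differ from the 30 % reported:

```python
import numpy as np

rng = np.random.default_rng(2)

def arc_scan_relative_se(azimuths_deg, elevation_deg, u, v, ti,
                         n_trials=1000):
    """Toy arc-scan retrieval: simulate radial velocities with
    uncorrelated turbulent fluctuations of magnitude ti * speed,
    fit (u, v) by least squares, and return the relative standard
    error of the retrieved wind speed."""
    az = np.radians(azimuths_deg)
    el = np.radians(elevation_deg)
    # horizontal projection of each beam's unit vector (az from north)
    A = np.cos(el) * np.column_stack([np.sin(az), np.cos(az)])
    speed = np.hypot(u, v)
    speeds = np.empty(n_trials)
    for i in range(n_trials):
        vr = A @ np.array([u, v]) + ti * speed * rng.standard_normal(len(az))
        uv, *_ = np.linalg.lstsq(A, vr, rcond=None)
        speeds[i] = np.hypot(*uv)
    return speeds.std() / speed

arc = np.arange(150, 211, 10)   # 60-degree arc centred on the wind
se_low = arc_scan_relative_se(arc, 15.0, u=0.0, v=10.0, ti=0.05)
se_high = arc_scan_relative_se(arc, 15.0, u=0.0, v=10.0, ti=0.20)
print(se_low, se_high)   # uncertainty grows roughly linearly with TI
```

Even in this crude model the retrieved-speed error is proportional to turbulence intensity, which is the qualitative result the paper quantifies.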


2008 ◽  
Vol 52 (11) ◽  
pp. 4043-4049 ◽  
Author(s):  
K. C. Wade ◽  
D. Wu ◽  
D. A. Kaufman ◽  
R. M. Ward ◽  
D. K. Benjamin ◽  
...  

ABSTRACT Fluconazole is being increasingly used to prevent and treat invasive candidiasis in neonates, yet dosing is largely empirical due to the lack of adequate pharmacokinetic (PK) data. We performed a multicenter population PK study of fluconazole in 23- to 40-week-gestation infants less than 120 days of age. We developed a population PK model using nonlinear mixed effect modeling (NONMEM). Covariate effects were predefined and evaluated based on estimation precision and clinical significance. We studied fluconazole PK in 55 infants who at enrollment had a median (range) weight of 1.02 (0.440 to 7.125) kg, a gestational age at birth (BGA) of 26 (23 to 40) weeks, and a postnatal age (PNA) of 2.3 (0.14 to 12.6) weeks. The final data set contained 357 samples; 217/357 (61%) were collected prospectively at prespecified time intervals, and 140/357 (39%) were scavenged from discarded clinical specimens. Fluconazole population PK was best described by a one-compartment model with covariates normalized to median values. The population mean clearance (CL) can be derived for this population by the equation CL (liter/h) = 0.015 · (weight/1)^0.75 · (BGA/26)^1.739 · (PNA/2)^0.237 · SCRT^−4.896 (when serum creatinine (SCRT) is >1.0 mg/dl), with a volume of distribution (V) (liter) of 1.024 · (weight/1). The relative standard error around the fixed effects point estimates ranged from 3 to 24%. CL doubles between birth and 28 days of age, from 0.008 to 0.016 and from 0.010 to 0.022 liter/kg/h for typical 24- and 32-week-gestation infants, respectively. This population PK model of fluconazole discriminated the impact of BGA, PNA, and creatinine on drug CL. Our data suggest that dosing in young infants will require adjustment for BGA and PNA to achieve targeted systemic drug exposures.
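The covariate model can be written as a small function for typical-value calculations. This is a sketch of the published equation as stated above; the behaviour when SCRT ≤ 1.0 mg/dl (no creatinine adjustment) is an assumption, and the function returns population-typical values only, ignoring interindividual variability:

```python
def fluconazole_clearance(weight_kg, bga_weeks, pna_weeks, scrt_mg_dl):
    """Population mean clearance CL (liter/h) from the covariate model,
    with covariates normalized to the study medians (1 kg, 26 weeks BGA,
    2 weeks PNA).  The SCRT term is applied only when SCRT > 1.0 mg/dl
    (assumed: no adjustment otherwise)."""
    cl = (0.015
          * (weight_kg / 1) ** 0.75
          * (bga_weeks / 26) ** 1.739
          * (pna_weeks / 2) ** 0.237)
    if scrt_mg_dl > 1.0:
        cl *= scrt_mg_dl ** -4.896
    return cl

def volume_of_distribution(weight_kg):
    """Volume of distribution V (liter), linear in weight."""
    return 1.024 * weight_kg

# A reference infant at the covariate medians recovers the base CL of 0.015
cl_ref = fluconazole_clearance(1.0, 26, 2, 0.8)
print(cl_ref, volume_of_distribution(1.0))
```

The power terms make the clinical point directly: CL rises steeply with BGA and PNA, and elevated creatinine sharply reduces it.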


2015 ◽  
Vol 82 (2) ◽  
pp. 177-184 ◽  
Author(s):  
Sema Demirci Çekiç ◽  
Aslı Demir ◽  
Kevser Sözgen Başkan ◽  
Esma Tütem ◽  
Reşat Apak

Most milk-applied antioxidant assays in the literature are based on the isolation and quantification of individual antioxidative compounds, whereas total antioxidant capacity (TAC) gives a more holistic picture due to the cooperative action of antioxidants. Recently, the cupric reducing antioxidant capacity (CUPRAC) method has been modified to measure the antioxidant capacities of thiol-containing proteins, where the classical ammonium acetate buffer (which may otherwise precipitate proteins) was replaced with concentrated urea buffer adjusted to pH 7.0, able to expose embedded thiol groups of proteins to oxidative attack. Thus, the antioxidant capacity of milk was investigated with two competing TAC assays, namely CUPRAC and ABTS (2,2′-azinobis(3-ethylbenzothiazoline-6-sulphonic acid))/persulphate, because only these assays were capable of evaluating the protein contribution to the observed TAC value. As milk fat caused turbidity, experiments were carried out with skim milk or defatted milk samples. To determine TAC, the modified CUPRAC method was applied to whole milk, to separated and redissolved protein fractions, and to the remaining liquid phase after the necessary operations. Both TAC methods were investigated for their dilution sensitivity and for antioxidant power assessment of separate milk fractions such as casein and whey. Proteins like β-lactoglobulin and casein (but not simple thiols) exhibited enhanced CUPRAC reactivity upon surfactant (SDS) addition. Addition of milk protein fractions to whole skim milk produced significant 'negative-biased' deviations (up to −26% relative standard error) from TAC absorbance additivity in the application of the ABTS method; the CUPRAC method, being less affected by chemical deviations from Beer's law, produced much smaller deviations from additivity (additivity holds when the measured TAC of a mixture equals the sum of the individual antioxidant capacities of its constituents).
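The additivity criterion reduces to a one-line calculation. The values below are made up to echo the −26% figure, not data from the study:

```python
def additivity_deviation(measured_mixture_tac, component_tacs):
    """Relative deviation (%) of a measured mixture TAC from the sum of
    its components' individually measured capacities; 0 means perfect
    additivity (the measured mixture equals the sum of its parts)."""
    expected = sum(component_tacs)
    return 100.0 * (measured_mixture_tac - expected) / expected

# Illustrative (hypothetical) TAC values in arbitrary absorbance units:
dev = additivity_deviation(0.74, [0.50, 0.50])
print(dev)   # a 'negative-biased' deviation of the kind seen with ABTS
```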


Author(s):  
Nhan Phan-Thien ◽  
Sangtae Kim

Analytical solutions to a set of boundary integral equations are rare, even with simple geometries and boundary conditions. To make any reasonable progress, a numerical technique must be used. There are basically four issues that must be addressed in any numerical scheme dealing with integral equations. The first and most basic is how numerical integration can be effected, together with an effective way of dealing with singular kernels of the type encountered in elastostatics. Numerical integration is usually termed numerical quadrature, meaning mathematical formulae for numerical integration. The second issue is boundary discretization: integration over the whole boundary is replaced by a sum of integrations over individual patches on the boundary. Each patch would be a finite element, or in our case, a boundary element on the surface. Obviously a high-order integration scheme could be devised for the whole domain, thus eliminating the need for boundary discretization, but such a scheme would be problem dependent and therefore not very useful to us. The third issue has to do with the fact that we are constrained by the very nature of the numerical approximation process to search for solutions within a certain subspace of L2, say the space of piecewise constant functions, in which the unknowns are considered to be constant over a boundary element. It is the order of this subspace, together with the order and the nature of the interpolation of the geometry, that gives rise to the names of the various boundary element schemes. Finally, one is faced with the task of solving a set of linear algebraic equations which is usually dense (the system matrix is fully populated) and potentially ill-conditioned. A direct solver such as Gauss elimination may be very efficient for small- to medium-sized problems but becomes impractical in a large-scale simulation, where the only feasible solution strategy is an iterative method.
In fact, iterative solution strategies lead naturally to a parallel algorithm under a suitable parallel computing environment. This chapter will review various issues involved in the practical implementation of the CDL-BIEM on a serial computer and on a distributed computing environment.
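A minimal sketch of the direct-versus-iterative trade-off on a dense system. The matrix here is made diagonally dominant so that a simple Jacobi iteration converges; actual CDL-BIEM iterations rely instead on the spectral properties of the completed double-layer operator:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
# Dense matrix standing in for a discretized boundary integral operator
# (diagonal dominance is an assumption made for this toy example).
A = rng.standard_normal((n, n)) / n + 2.0 * np.eye(n)
b = rng.standard_normal(n)

# Direct solve: Gauss elimination, O(n^3) work on a fully populated matrix
x_direct = np.linalg.solve(A, b)

# Iterative solve (Jacobi): each sweep needs only a matrix-vector product,
# which is the operation that parallelizes naturally across processors.
x = np.zeros(n)
D = np.diag(A)
for _ in range(100):
    x = x + (b - A @ x) / D

print(np.linalg.norm(x - x_direct))   # iterates converge to the direct answer
```

For small n the direct solve wins; for the large dense systems of a big simulation, only the matrix-vector-product structure of the iterative path remains feasible, which is what motivates the parallel implementation discussed in this chapter.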


2019 ◽  
Vol 148 (3) ◽  
pp. 1229-1249 ◽  
Author(s):  
Tobias Necker ◽  
Martin Weissmann ◽  
Yvonne Ruckstuhl ◽  
Jeffrey Anderson ◽  
Takemasa Miyoshi

Abstract State-of-the-art ensemble prediction systems usually provide ensembles with only 20–250 members for estimating the uncertainty of the forecast and its spatial and spatiotemporal covariance. Given that the degrees of freedom of atmospheric models are several orders of magnitude higher, the estimates are substantially affected by sampling errors. For error covariances, spurious correlations lead not only to random sampling errors but also to a systematic overestimation of the correlation. A common approach to mitigate the impact of sampling errors in data assimilation is to localize correlations. However, this is a challenging task given that physical correlations in the atmosphere can extend over long distances. Besides data assimilation, sampling errors pose an issue for the investigation of spatiotemporal correlations using ensemble sensitivity analysis. Our study evaluates a statistical approach for correcting sampling errors. The applied sampling error correction is a lookup table–based approach and therefore computationally very efficient. We show that this approach substantially improves both the estimates of spatial correlations for data assimilation and of spatiotemporal correlations for ensemble sensitivity analysis. The evaluation is performed using the first convective-scale 1000-member ensemble simulation for central Europe. Correlations of the 1000-member ensemble forecast serve as truth to assess the performance of the sampling error correction for smaller subsets of the full ensemble. The sampling error correction strongly reduced both random and systematic errors for all evaluated variables, ensemble sizes, and lead times.
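The sampling problem itself is easy to reproduce. The sketch below draws sample correlations from a bivariate normal and shows that a small "ensemble" both randomly scatters the estimate and systematically inflates |r|; it does not implement the lookup-table correction itself:

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_correlations(true_r, n_members, n_trials=4000):
    """Sample correlation coefficients between two variables with true
    correlation true_r, as estimated from an n_members ensemble."""
    L = np.linalg.cholesky(np.array([[1.0, true_r], [true_r, 1.0]]))
    z = rng.standard_normal((n_trials, n_members, 2)) @ L.T
    zc = z - z.mean(axis=1, keepdims=True)
    num = (zc[:, :, 0] * zc[:, :, 1]).sum(axis=1)
    den = np.sqrt((zc[:, :, 0] ** 2).sum(axis=1)
                  * (zc[:, :, 1] ** 2).sum(axis=1))
    return num / den

true_r = 0.2
r40 = sample_correlations(true_r, 40)       # typical ensemble size
r1000 = sample_correlations(true_r, 1000)   # the "truth" ensemble size

print(r40.std(), r1000.std())                       # random sampling error
print(np.abs(r40).mean(), np.abs(r1000).mean())     # |r| inflated at n=40
```

A lookup-table correction of the kind evaluated here maps each sample correlation (given the ensemble size) to a statistically corrected value, shrinking exactly this random-plus-systematic error.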


2013 ◽  
Vol 6 (2) ◽  
pp. 3545-3579 ◽  
Author(s):  
S. Dohe ◽  
V. Sherlock ◽  
F. Hase ◽  
M. Gisi ◽  
J. Robinson ◽  
...  

Abstract. The Total Carbon Column Observing Network (TCCON) has been established to provide ground-based remote sensing measurements of the column-average dry air mole fractions of key greenhouse gases. To ensure network-wide consistency, biases between Fourier transform spectrometers at different sites have to be well controlled. In this study we investigate a fundamental correction scheme for errors in the sampling of the interferogram. This is a two-step procedure in which the laser sampling error (LSE) is quantified using a subset of suitable interferograms and then used to resample all the interferograms in the time series. Time series of measurements acquired at the TCCON sites Izaña and Lauder are used to demonstrate the method. At both sites the sampling error histories show changes in LSE due to instrument interventions. Estimated LSEs are in good agreement with sampling errors inferred from lamp measurements of the ghost-to-parent ratio (Lauder). The LSEs introduce retrieval biases which are minimised when the interferograms are resampled. The original time series of Xair and XCO2 at both sites show discrepancies of 0.2–0.5% due to changes in the LSE associated with instrument interventions or changes in the measurement sample rate. After resampling, discrepancies are reduced to 0.1% at Lauder and 0.2% at Izaña. In the latter case, coincident changes in interferometer alignment may also contribute to the residual difference.
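The two-step idea (estimate the sampling offset, then resample) can be sketched on a toy signal. A single sinusoid, a constant offset, and linear interpolation stand in for the actual interferograms and the TCCON estimation procedure:

```python
import numpy as np

n = 2048
x_nominal = np.arange(n)
lse = 0.3                     # true sampling offset, in sample units (toy)

def reference(x):
    """Toy stand-in for the expected interferogram shape."""
    return np.cos(2 * np.pi * 0.01 * x)

igram = reference(x_nominal + lse)   # signal recorded on a shifted grid

# Step 1: quantify the sampling error by minimising misfit to the model
offsets = np.linspace(-0.5, 0.5, 1001)
errs = [np.sum((reference(x_nominal + d) - igram) ** 2) for d in offsets]
lse_est = offsets[np.argmin(errs)]

# Step 2: resample the recorded values back onto the nominal grid
resampled = np.interp(x_nominal, x_nominal + lse_est, igram)
residual = np.abs(resampled[1:-1] - reference(x_nominal)[1:-1]).max()
print(lse_est, residual)   # offset recovered; residual small after resampling
```

In the real procedure the LSE is estimated from a subset of suitable interferograms and the resampling removes the ghost artefacts that bias the retrievals.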


2020 ◽  
Vol 142 (9) ◽  
Author(s):  
M. Chilla ◽  
G. Pullan ◽  
S. Gallimore

Abstract The effects of blade row interactions on stator-mounted instrumentation in axial compressors are investigated using unsteady numerical calculations. The test compressor is an eight-stage machine representative of an aero-engine core compressor. For the unsteady calculations, a 180-deg sector (half-annulus) model of the compressor is used. It is shown that the time-mean flow field in the stator leading edge planes is circumferentially nonuniform. The circumferential variations in stagnation pressure and stagnation temperature, respectively, reach 4.2% and 1.1% of the local mean. Using spatial wave number analysis, the incoming wakes from the upstream stator rows are identified as the dominant source of the circumferential variations in the front and middle of the compressor, while toward the rear of the compressor, the upstream influence of the eight struts in the exit duct becomes dominant. Based on three circumferential probes, the sampling errors for stagnation pressure and stagnation temperature are calculated as a function of the probe locations. Optimization of the probe locations shows that the sampling error can be reduced by up to 77% by circumferentially redistributing the individual probes. The reductions in the sampling errors translate to reductions in the uncertainties of the overall compressor efficiency and inlet flow capacity by up to 50%. Recognizing that data from large-scale unsteady calculations are rarely available in the instrumentation phase for a new test rig or engine, a method for approximating the circumferential variations with single harmonics is presented. The construction of the harmonics is based solely on the knowledge of the number of stators in each row and a small number of equispaced probes. It is shown how excursions in the sampling error are reduced by increasing the number of circumferential probes.
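The single-harmonic approximation can be sketched as a least-squares fit to a handful of equispaced probes. The stator count and probe angles below are illustrative, not the paper's compressor:

```python
import numpy as np

def fit_single_harmonic(theta, values, n_stators):
    """Least-squares fit of mean + one harmonic of order n_stators to
    circumferential probe data: the approximation suggested when
    full-annulus unsteady CFD data are unavailable.  Returns
    [mean, cosine coefficient, sine coefficient]."""
    A = np.column_stack([np.ones_like(theta),
                         np.cos(n_stators * theta),
                         np.sin(n_stators * theta)])
    coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coeffs

# Synthetic circumferential pattern dominated by an upstream-stator
# signature (38 stators and a 4.2% peak variation are assumed numbers)
n_stators = 38
probe_theta = np.radians([20.0, 140.0, 260.0])   # three equispaced probes
probe_vals = 1.0 + 0.042 * np.cos(n_stators * probe_theta - 0.7)

mean, c, s = fit_single_harmonic(probe_theta, probe_vals, n_stators)
amplitude = np.hypot(c, s)
print(mean, amplitude)   # recovers the mean and the harmonic amplitude
```

With three probes and three unknowns the fit is exact when the probe phases are distinct; choosing probe positions so those phases spread out is precisely the circumferential redistribution that reduces the sampling error.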


Author(s):  
M. D. Pandey ◽  
H. J. Sutherland

Robust estimation of wind turbine design loads for service lifetimes of 30 to 50 years that are based on field measurements of a few days is a challenging problem. Estimating the long-term load distribution involves the integration of conditional distributions of extreme loads over the mean wind speed and turbulence intensity distributions. However, the accuracy of the statistical extrapolation is fairly sensitive to both model and sampling errors. Using measured inflow and structural data from the LIST program, this paper presents a comparative assessment of extreme loads using three distributions: namely, the Gumbel, Weibull and Generalized Extreme Value distributions. The paper uses L-moments, in place of traditional product moments, to reduce the sampling error. The paper discusses the application of extreme value theory and highlights its practical limitations. The proposed technique has the potential of improving estimates of the design loads for wind turbines.
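Sample L-moments are simple to compute from the sorted sample via probability-weighted moments. This sketch checks the first two L-moments against their known values for a Gumbel distribution (an assumed example, not the LIST data):

```python
import numpy as np

def l_moments(sample):
    """First two unbiased sample L-moments from probability-weighted
    moments of the sorted sample.  l2 is a scale measure that weights
    extreme order statistics linearly, which is what makes L-moment
    fits less sensitive to sampling error than product-moment fits."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    b0 = x.mean()
    b1 = np.sum(np.arange(n) * x) / (n * (n - 1))
    return b0, 2.0 * b1 - b0        # l1 (location), l2 (scale)

rng = np.random.default_rng(5)
# Gumbel extremes with known L-moments: l1 = mu + 0.5772*beta, l2 = beta*ln 2
mu, beta = 10.0, 2.0
sample = rng.gumbel(mu, beta, size=200_000)
l1, l2 = l_moments(sample)
print(l1, l2)
```

Fitting the Gumbel (or Weibull, or GEV) by matching these L-moments instead of the mean and variance is the substitution the paper makes to stabilize the extreme-load extrapolation.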

