Forcing additivity of biomass tables: use of the generalized least squares method

1985 ◽  
Vol 15 (1) ◽  
pp. 23-28 ◽  
Author(s):  
T. Cunia ◽  
R. D. Briggs

The generalized least squares procedure is applied to sample tree data for which additive biomass tables are required. The procedure is proposed as an alternative to ordinary weighted least squares, to account for the fact that several biomass components are measured on the same sample trees. The biomass tables generated by the generalized and the ordinary least squares methods are very similar; the confidence intervals are sometimes wider and sometimes narrower, but the prediction intervals are always narrower for the generalized least squares method.
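The contrast between the two estimators can be sketched as follows. This is an illustrative numpy implementation of the textbook GLS estimator, not the authors' exact fitting procedure; the design matrix, covariance matrix and data below are hypothetical.

```python
import numpy as np

def gls(X, y, S):
    """GLS estimate for y = X beta + e with cov(e) = S:
    beta = (X' S^-1 X)^-1 X' S^-1 y."""
    S_inv = np.linalg.inv(S)
    XtSi = X.T @ S_inv
    return np.linalg.solve(XtSi @ X, XtSi @ y)

# Toy data: 5 observations, intercept + one predictor.
X = np.column_stack([np.ones(5), np.arange(5.0)])
# Equicorrelated errors (hypothetical): variance 1, correlation 0.5.
S = 0.5 * np.eye(5) + 0.5
y = X @ np.array([1.0, 2.0])   # noise-free, so GLS recovers [1, 2] exactly

beta = gls(X, y, S)
```

With correlated components measured on the same trees, `S` is no longer diagonal, which is exactly when GLS and ordinary weighted least squares diverge.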

1985 ◽  
Vol 15 (2) ◽  
pp. 331-340 ◽  
Author(s):  
T. Cunia ◽  
R. D. Briggs

To construct biomass tables for various tree components that are consistent with each other, one may use linear regression techniques with dummy variables. When the biomass of these components is measured on the same sample trees, one should also use the generalized rather than the ordinary least squares method. A procedure is shown that allows the estimation of the covariance matrix of the sample biomass values and circumvents the problem of storing and inverting large covariance matrices. Applied to 20 sets of sample tree data, the generalized least squares regressions generated estimates which, on average, were slightly higher (by about 1%) than the sample data. The confidence and prediction bands about the regression function were wider, sometimes considerably wider, than those estimated by the ordinary weighted least squares method.
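The dummy-variable device for additivity can be sketched as follows: each component gets its own intercept and slope through dummy variables in one stacked regression, and the total-biomass table uses the sums of the component coefficients, so the tables are additive by construction. This is an illustrative sketch, not the authors' 20-data-set procedure; the size variable, coefficients and error correlations below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 30, 3                      # 30 sample trees, 3 biomass components
x = rng.uniform(5.0, 40.0, n)     # a hypothetical tree-size variable

# Simulated component biomasses with errors correlated across components,
# since all components are measured on the same trees.
true = np.array([[1.0, 0.4], [0.5, 0.2], [0.2, 0.1]])   # (intercept, slope)
errs = rng.multivariate_normal(np.zeros(K),
                               [[1.0, 0.5, 0.3],
                                [0.5, 1.0, 0.4],
                                [0.3, 0.4, 1.0]], n)
Y = true[:, 0] + np.outer(x, true[:, 1]) + errs          # shape (n, K)

# Stack components with dummy variables: one (intercept, slope) pair per
# component in a single regression, block-diagonal design.
X = np.zeros((n * K, 2 * K))
y = Y.T.ravel()
for k in range(K):
    X[k*n:(k+1)*n, 2*k] = 1.0
    X[k*n:(k+1)*n, 2*k + 1] = x

beta = np.linalg.lstsq(X, y, rcond=None)[0]

# Additivity check: fitting total biomass directly on (1, x) gives the same
# coefficients as the sums of the component coefficients, because OLS is
# linear in y and all components share the same regressors.
A = np.column_stack([np.ones(n), x])
beta_total = np.linalg.lstsq(A, Y.sum(axis=1), rcond=None)[0]
```

With ordinary least squares the additivity falls out automatically; the point of the paper is to keep this structure while replacing OLS by GLS under the estimated cross-component covariance.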


1982 ◽  
Vol 60 (15) ◽  
pp. 1978-1981 ◽  
Author(s):  
John W. Lorimer

A generalized least-squares method is described for finding the point of intersection of a family of straight lines, each of which is defined by two experimental points. It is shown that the method of the least-squares triangle (Can. J. Chem. 59, 3076 (1981)) is a good first approximation to the general method. An example demonstrates the method of iteration of both parameters and observations for a problem involving evaluation of solid phase compositions from solubility measurements.
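A simplified version of the core computation can be sketched as the point minimizing the sum of squared perpendicular distances to the lines. This is only a first approximation in the spirit of the least-squares triangle: Lorimer's full method additionally iterates on both the parameters and the observations and weights by the experimental uncertainties. The example lines below are hypothetical.

```python
import numpy as np

def closest_point_to_lines(P, Q):
    """Least-squares 'intersection' of lines in the plane, each line given
    by two points P[i], Q[i]: minimizes the sum of squared perpendicular
    distances by solving the 2x2 normal equations."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, q in zip(P, Q):
        d = (q - p) / np.linalg.norm(q - p)
        M = np.eye(2) - np.outer(d, d)   # projector onto the line's normal
        A += M
        b += M @ p
    return np.linalg.solve(A, b)

# Three lines constructed to pass exactly through (1, 2):
P = np.array([[0.0, 2.0], [1.0, 0.0], [0.0, 1.0]])
Q = np.array([[2.0, 2.0], [1.0, 5.0], [2.0, 3.0]])
pt = closest_point_to_lines(P, Q)   # recovers (1, 2)
```

When the lines do not meet exactly, as with real solubility data, the same normal equations return the compromise point, which is then refined by the iterative scheme.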


Author(s):  
Anatolii Omelchenko ◽  
Oleksandr Vinnichenko ◽  
Pavel Neyezhmakov ◽  
Oleksii Fedorov ◽  
Volodymyr Bolyuh

In order to develop optimal data-processing algorithms for ballistic laser gravimeters operating under correlated interference, the method of generalized least squares is applied. The interference is described by a mathematical model of an autoregressive process, for which the inverse correlation matrix is banded and is expressed through the values of the autoregression coefficients. To convert the “path-time” data from the output of the coincidence circuit of ballistic laser gravimeters into a process uniform in time, local quadratic interpolation is used. Algorithms for data processing in a ballistic gravimeter, developed on the basis of the weighted least squares method using orthogonal Hahn polynomials, are considered. To implement a symmetric measurement method, the symmetric Hahn polynomials, characterized by one parameter, are used. Mathematical modelling is used to study the gain in accuracy of measuring the gravitational acceleration with the synthesized algorithms in comparison with an algorithm based on the ordinary least squares method. It is shown that auto-seismic interference in ballistic laser gravimeters with a symmetric measurement method can be significantly reduced by using a second-order autoregressive model in the generalized least squares method. A comparative analysis of the characteristics of the algorithms developed with the generalized, weighted and ordinary least squares methods is carried out.
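The banded structure that makes this tractable can be shown in its simplest form. The paper uses a second-order autoregressive model, whose inverse correlation matrix has bandwidth 2; the AR(1) sketch below (with a hypothetical coefficient) illustrates the same property with a tridiagonal inverse known in closed form, so the GLS normal equations never require storing a dense inverse.

```python
import numpy as np

n, rho = 8, 0.6
idx = np.arange(n)

# AR(1) correlation matrix: Sigma[i, j] = rho**|i - j|.
Sigma = rho ** np.abs(idx[:, None] - idx[None, :])

# Its inverse is tridiagonal, with closed-form entries:
# diagonal (1, 1 + rho^2, ..., 1 + rho^2, 1) and off-diagonal -rho,
# all scaled by 1 / (1 - rho^2).
P = np.zeros((n, n))
P[idx, idx] = 1.0 + rho**2
P[0, 0] = P[-1, -1] = 1.0
P[idx[:-1], idx[1:]] = -rho
P[idx[1:], idx[:-1]] = -rho
Sigma_inv = P / (1.0 - rho**2)
```

For an AR(2) model the same construction yields a pentadiagonal inverse expressed through the two autoregression coefficients, which is what the banded GLS formulation exploits.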


Author(s):  
Jean-Pierre Florens ◽  
Velayoudom Marimoutou ◽  
Anne Peguin-Feissolle ◽  
Josef Perktold ◽  
Marine Carrasco

2010 ◽  
Vol 62 (4) ◽  
pp. 875-882 ◽  
Author(s):  
A. Dembélé ◽  
J.-L. Bertrand-Krajewski ◽  
B. Barillon

Regression models are among the most frequently used models to estimate pollutant event mean concentrations (EMCs) in wet-weather discharges from urban catchments. Two main questions concerning the calibration of EMC regression models are investigated: (i) the sensitivity of the models to the size and content of the data sets used for their calibration; (ii) the change in modelling results when the models are re-calibrated as data sets grow and change over time with newly collected experimental data. Based on an experimental data set of 64 rain events monitored in a densely urbanised catchment, four TSS EMC regression models (two log-linear and two linear) with two or three explanatory variables were derived and analysed. Model calibration with the iteratively reweighted least squares method is less sensitive and leads to more robust results than the ordinary least squares method. Three calibration options were investigated: two accounting for the chronological order of the observations, and one using random samples of events from the whole available data set. Results obtained with the best-performing non-linear model clearly indicate that the model is highly sensitive to the size and content of the data set used for its calibration.
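A minimal sketch of iteratively reweighted least squares with Huber-type weights shows why it is less sensitive to individual events than OLS: observations with large residuals are progressively downweighted before refitting. This is illustrative only; the paper's exact weighting scheme and models are not reproduced, and the data below are simulated with a few gross outliers.

```python
import numpy as np

def irls(X, y, c=1.345, n_iter=50):
    """Iteratively reweighted least squares with Huber weights."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS starting point
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12    # robust scale estimate
        u = np.abs(r) / (c * s)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)         # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 10.0, 40)
X = np.column_stack([np.ones_like(x), x])
y = 3.0 + 2.0 * x + rng.normal(0.0, 0.1, 40)
y[:3] += 25.0                                        # three gross outliers
beta = irls(X, y)                                    # close to (3, 2)
```

An OLS fit of the same data would be pulled toward the outlying events, which mirrors the calibration sensitivity the study reports.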

