Estimating Detection Limits in Chromatography from Calibration Data: Ordinary Least Squares Regression vs. Weighted Least Squares

Separations ◽  
2018 ◽  
Vol 5 (4) ◽  
pp. 49 ◽  
Author(s):  
Juan Sanchez

It is necessary to determine the limit of detection when validating any analytical method. For methods with a linear response, a simple, low-effort procedure is to use the linear regression parameters obtained in the calibration to estimate the blank standard deviation from the residual standard deviation (sres) or the intercept standard deviation (sb0). In this study, multiple experimental calibrations are evaluated, applying both ordinary and weighted least squares. Moreover, replicated blank matrices, spiked at 2–5 times the lowest limit values calculated with the two regression methods, are analyzed to obtain the standard deviation of the blank. The limits of detection obtained with ordinary least squares, weighted least squares, the signal-to-noise ratio, and replicate blank measurements are then compared. Ordinary least squares, the simplest and most commonly applied calibration regression methodology, always overestimates the standard deviations at the lower levels of the calibration range. As a result, its detection limits are up to one order of magnitude greater than those obtained with the other approaches studied, which all gave similar limits.
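The residual-based estimate described above can be sketched in a few lines. This is not the authors' code: the calibration data are invented, and the common ICH-style convention LOD = 3.3·s_res/slope is assumed as the working definition.

```python
import math

def ols_lod(x, y, k=3.3):
    """Detection limit from an OLS calibration fit: k * s_res / slope,
    where s_res is the residual standard deviation of the regression."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    s_res = math.sqrt(ss_res / (n - 2))  # blank std estimated from residuals
    return k * s_res / slope

# Hypothetical calibration: concentration vs. instrument response.
conc = [0, 1, 2, 3, 4, 5]
resp = [1.0, 3.1, 4.9, 7.2, 9.0, 11.1]
lod = ols_lod(conc, resp)  # ~0.186 concentration units
```

Because s_res pools scatter from the whole calibration range, this OLS estimate inherits the overestimation the study reports whenever the response variance grows with concentration.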

2009 ◽  
Vol 2009 ◽  
pp. 1-8 ◽  
Author(s):  
Janet Myhre ◽  
Daniel R. Jeske ◽  
Michael Rennie ◽  
Yingtao Bi

A heteroscedastic linear regression model is developed from plausible assumptions that describe the time evolution of performance metrics for equipment. The motivation for the associated weighted least squares analysis, inherited from these assumptions, is an essential and attractive selling point to engineers interested in equipment surveillance methodologies. A simple test for the significance of the heteroscedasticity suggested by a data set is derived, and a simulation study is used to evaluate the power of the test and compare it with several other applicable tests designed under different contexts. Tolerance intervals within the context of the model are derived, thus generalizing well-known tolerance intervals for ordinary least squares regression. Use of the model and its associated analyses is illustrated with an aerospace application in which hundreds of electronic components are continuously monitored by an automated system that flags components suspected of unusual degradation patterns.
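A minimal sketch of the weighted least squares fit such a heteroscedastic model calls for (illustrative only; the data and the assumed error standard deviations below are invented, with weights w_i = 1/σ_i²):

```python
def wls_fit(x, y, sigma):
    """Weighted least-squares fit of y = b0 + b1*x with weights 1/sigma_i**2."""
    w = [1.0 / s ** 2 for s in sigma]
    sw = sum(w)
    xw = sum(wi * xi for wi, xi in zip(w, x)) / sw   # weighted means
    yw = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxy = sum(wi * (xi - xw) * (yi - yw) for wi, xi, yi in zip(w, x, y))
    sxx = sum(wi * (xi - xw) ** 2 for wi, xi in zip(w, x))
    b1 = sxy / sxx
    return yw - b1 * xw, b1

# Toy performance metric whose noise grows over time (heteroscedastic).
t = [1, 2, 3, 4, 5]
metric = [5.0, 8.0, 11.0, 14.0, 17.0]   # exactly 3*t + 2
sigma = [0.1, 0.2, 0.4, 0.8, 1.6]       # assumed error std, growing with t
b0, b1 = wls_fit(t, metric, sigma)      # recovers (2.0, 3.0)
```

On data lying exactly on a line the weights do not change the fit; on noisy data they keep the high-variance late-time points from dominating the estimates.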


Talanta ◽  
2010 ◽  
Vol 80 (3) ◽  
pp. 1102-1109 ◽  
Author(s):  
Rosilene S. Nascimento ◽  
Roberta E.S. Froes ◽  
Nilton O.C. e Silva ◽  
Rita L.P. Naveira ◽  
Denise B.C. Mendes ◽  
...  

Author(s):  
Daniel Hoechle

I present a new Stata program, xtscc, that estimates pooled ordinary least-squares/weighted least-squares regression and fixed-effects (within) regression models with Driscoll and Kraay (Review of Economics and Statistics 80: 549–560) standard errors. By running Monte Carlo simulations, I compare the finite-sample properties of the cross-sectional dependence–consistent Driscoll–Kraay estimator with the properties of other, more commonly used covariance matrix estimators that do not account for cross-sectional dependence. The results indicate that Driscoll–Kraay standard errors are well calibrated when cross-sectional dependence is present, whereas erroneously ignoring cross-sectional correlation in the estimation of panel models can lead to severely biased statistical results. I illustrate the xtscc program with an application from empirical finance. In doing so, I also propose a Hausman-type test for fixed effects that is robust to general forms of cross-sectional and temporal dependence.
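The construction behind Driscoll–Kraay standard errors can be shown in a stripped-down form. The sketch below is not xtscc: it handles only the mean of a balanced panel (an intercept-only model) on invented data, but it illustrates the two ingredients, cross-sectional averaging of the scores followed by a Bartlett (Newey–West) kernel over the averaged series.

```python
import math

def dk_se_of_mean(panel, lag):
    """Driscoll-Kraay-style standard error for the mean of a balanced panel
    (intercept-only model). `panel` maps each period to its list of
    cross-sectional observations."""
    periods = sorted(panel)
    T = len(periods)
    obs = [v for t in periods for v in panel[t]]
    mu = sum(obs) / len(obs)
    # Averaging within each period absorbs arbitrary cross-sectional
    # dependence; only the time series of averaged scores remains.
    h = [sum(panel[t]) / len(panel[t]) - mu for t in periods]
    # Bartlett kernel over the averaged scores handles serial correlation.
    S = sum(ht * ht for ht in h)
    for l in range(1, lag + 1):
        w = 1.0 - l / (lag + 1)
        S += 2.0 * w * sum(h[t] * h[t - l] for t in range(l, T))
    return math.sqrt(S) / T

# Toy balanced panel: 2 units observed over 3 periods.
panel = {1: [0.5, 1.5], 2: [2.5, 1.5], 3: [2.0, 4.0]}
se = dk_se_of_mean(panel, lag=1)
```

A naive i.i.d. standard error on the pooled observations would ignore the common within-period shocks that this construction is designed to capture.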


Author(s):  
Michael A. Gebers

Since 1964 the California Department of Motor Vehicles has issued several monographs on driver characteristics and accident risk factors as part of a series of analyses known as the California driver record study. A number of regression analyses were conducted on driving record variables measured over a 6-year time period (1986 to 1991). The techniques presented consist of ordinary least squares, weighted least squares, Poisson, negative binomial, linear probability, and logistic regression models. The objective of the analyses was to compare the results obtained from several different regression techniques under consideration for use in the in-progress California driver record study. The results are informative in determining whether the various regression methods produce similar results for different sample sizes and in exploring whether reliance on ordinary least squares techniques in past California driver record study analyses has produced biased significance levels and parameter estimates. The results indicate that, for these data, the use of different regression techniques does not improve individual accident prediction beyond that obtained through application of ordinary least squares regression. The methods produce almost identical results in terms of the relative importance and statistical significance of the independent variables. It therefore appears safe to employ ordinary least squares multiple regression techniques on driver accident count distributions of the type represented by California driver records, at least when the sample sizes are large.
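One of the count-data alternatives compared above, Poisson regression, can be sketched via iteratively reweighted least squares (Newton's method on the log-likelihood). This is illustrative only: a single covariate on invented data, not any of the study's driver-record variables.

```python
import math

def poisson_irls(x, y, iters=25):
    """Poisson regression of y on x (log link) fitted by iteratively
    reweighted least squares; returns (intercept, slope)."""
    a, b = math.log(sum(y) / len(y)), 0.0
    for _ in range(iters):
        mu = [math.exp(a + b * xi) for xi in x]
        # Newton step: solve (X'WX) delta = X'(y - mu) with W = diag(mu).
        s0 = sum(mu)
        s1 = sum(m * xi for m, xi in zip(mu, x))
        s2 = sum(m * xi * xi for m, xi in zip(mu, x))
        g0 = sum(yi - m for yi, m in zip(y, mu))
        g1 = sum((yi - m) * xi for yi, m, xi in zip(y, mu, x))
        det = s0 * s2 - s1 * s1
        a += (s2 * g0 - s1 * g1) / det
        b += (s0 * g1 - s1 * g0) / det
    return a, b

# Responses generated exactly from log-rate 0.5 + 0.3*x are recovered.
x = [0, 1, 2, 3, 4]
y = [math.exp(0.5 + 0.3 * xi) for xi in x]
a, b = poisson_irls(x, y)
```

With large samples of low-count outcomes, the fitted means from a model like this and from an OLS fit often rank predictors very similarly, which is the pattern the study reports.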


1987 ◽  
Vol 109 (1) ◽  
pp. 103-112
Author(s):  
C. R. Mischke

In estimating the cumulative distribution function (CDF) of data, investigators selectively transform the data and their order statistics in order to achieve rectification of the data string. Ordinary least-squares regression procedures no longer apply because of the transformations. Investigators often seek a fifty-percent (median) locus, which least-squares methods do not ordinarily produce. A weighted least-squares regression procedure is presented that establishes an estimate of the mean CDF line and, through appropriate rotation, provides an estimate of the median CDF line. Examples from common distributions follow a general development.
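A common concrete instance of the transform-and-rectify idea is Weibull probability plotting with Benard's median-rank approximation. The sketch below fits the transformed points with a plain (unweighted) least-squares line, so it illustrates the setup rather than the paper's weighted and rotated procedure; the sample and parameters are invented.

```python
import math

def weibull_plot_fit(sample):
    """Fit a probability-plot line to a sample assuming a Weibull model.

    Each ordered observation t_(i) is paired with Benard's median-rank
    estimate F_i = (i - 0.3) / (n + 0.4), so the plotted points track a
    median locus. Returns the (shape, scale) implied by the fitted line."""
    ts = sorted(sample)
    n = len(ts)
    xs = [math.log(t) for t in ts]                       # ln t
    ys = [math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4)))
          for i in range(1, n + 1)]                      # ln(-ln(1 - F))
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return slope, math.exp(-intercept / slope)           # Weibull (beta, eta)

# Synthetic sample placed exactly on the line for beta = 2, eta = 10.
sample = [10.0 * (-math.log(1.0 - (i - 0.3) / 5.4)) ** 0.5
          for i in range(1, 6)]
shape, scale = weibull_plot_fit(sample)   # recovers (2.0, 10.0)
```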


Water ◽  
2021 ◽  
Vol 13 (17) ◽  
pp. 2380
Author(s):  
Fenli Chen ◽  
Shengjie Wang ◽  
Xixi Wu ◽  
Mingjun Zhang ◽  
Athanassios A. Argiriou ◽  
...  

The local meteoric water lines (LMWLs) reflect water sources and the degree of sub-cloud evaporation at a specific location. Lanzhou is a semi-arid city located at the margin of the Asian monsoon, and the isotope composition of precipitation in this region has attracted attention in hydrological and paleoclimate studies. Based on an observation network of stable isotopes in precipitation in Lanzhou, LMWLs at four stations (Anning, Yuzhong, Gaolan and Yongdeng) are calculated using event-based and monthly data and six regression methods (i.e., ordinary least squares, reduced major axis and major axis regressions, and their counterparts weighted by precipitation amount). Compared with the global meteoric water line, the slope and intercept of the LMWL in Lanzhou are smaller. The slopes and intercepts calculated using the different methods differ slightly. Among these methods, precipitation-weighted least squares regression (PWLSR) usually has the minimum average root mean sum of squared error (rmSSEav), indicating that the precipitation-weighted result is relatively stable. On an annual scale, higher precipitation amounts and lower air temperatures result in larger slopes and intercepts, unlike the pattern observed in summer.
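Three of the six slope estimators compared above have simple closed forms and can be sketched directly (illustrative only; the δ-values are invented and the precipitation amounts serve as PWLSR weights):

```python
import math

def lmwl_slopes(d18o, d2h, precip):
    """Slopes of delta-D vs delta-18O by OLS, reduced major axis (RMA),
    and precipitation-weighted least squares (PWLSR)."""
    n = len(d18o)
    xbar, ybar = sum(d18o) / n, sum(d2h) / n
    sxx = sum((x - xbar) ** 2 for x in d18o)
    syy = sum((y - ybar) ** 2 for y in d2h)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(d18o, d2h))
    ols = sxy / sxx
    rma = math.copysign(math.sqrt(syy / sxx), sxy)  # ratio of std devs
    p = sum(precip)
    xw = sum(pi * x for pi, x in zip(precip, d18o)) / p
    yw = sum(pi * y for pi, y in zip(precip, d2h)) / p
    pwlsr = (sum(pi * (x - xw) * (y - yw)
                 for pi, x, y in zip(precip, d18o, d2h))
             / sum(pi * (x - xw) ** 2 for pi, x in zip(precip, d18o)))
    return ols, rma, pwlsr

# Invented samples lying exactly on the GMWL (dD = 8*d18O + 10); with no
# scatter all three estimators agree on a slope of 8.
d18o = [-10.0, -8.0, -6.0, -4.0]
d2h = [8 * x + 10 for x in d18o]
amounts = [5.0, 20.0, 10.0, 2.0]
slopes = lmwl_slopes(d18o, d2h, amounts)
```

On real data the estimators diverge: RMA exceeds OLS whenever the points scatter off the line, and precipitation weighting pulls the fit toward high-amount (less evaporated) events.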


2020 ◽  
Vol 50 (4) ◽  
pp. 1252-1259 ◽  
Author(s):  
Grant S. Galloway ◽  
Victoria M. Catterson ◽  
Craig Love ◽  
Andrew Robb ◽  
Thomas Fay
