A distributed minimum variance estimator for sensor networks

2008 ◽  
Vol 26 (4) ◽  
pp. 609-621 ◽  
Author(s):  
A. Speranzon ◽  
C. Fischione ◽  
K. Johansson ◽  
A. Sangiovanni-Vincentelli
1981 ◽  
Vol 8 (5) ◽  
pp. 695-702 ◽  
Author(s):  
Michael H. Buonocore ◽  
William R. Brody ◽  
Albert Macovski

2021 ◽  
Vol 9 ◽  
Author(s):  
S. Toepfer ◽  
Y. Narita ◽  
D. Heyner ◽  
U. Motschmann

The error propagation of Capon’s minimum variance estimator resulting from measurement errors and position errors is derived within a linear approximation. It turns out that Capon’s estimator exhibits the same error propagation as the conventionally used least-squares fit method. The shape matrix, which describes the location dependence of the measurement positions, is the key parameter for the error propagation, since the condition number of the shape matrix determines how the errors are amplified. Furthermore, the error resulting from a finite number of data samples is derived by regarding Capon’s estimator as a special case of the maximum likelihood estimator.
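The role of the condition number described in this abstract can be illustrated with a minimal sketch. The code below is not the paper’s derivation; it uses a hypothetical affine fit over made-up sensor positions to show the standard perturbation bound in which the condition number of the design ("shape") matrix caps the error amplification.

```python
import numpy as np

# Illustrative sketch, not the paper's implementation: the condition number
# of the shape (design) matrix R bounds how measurement errors propagate
# into the fitted parameters of a linear least-squares problem.
rng = np.random.default_rng(0)

# Hypothetical sensor positions; R is the design matrix of an affine model.
positions = rng.uniform(-1.0, 1.0, size=(8, 3))
R = np.column_stack([np.ones(len(positions)), positions])

kappa = np.linalg.cond(R)  # condition number of the shape matrix

# Assumed "true" parameters, exact measurements b, and a small error db.
m_true = np.array([1.0, 0.5, -0.3, 0.2])
b = R @ m_true
db = 1e-3 * rng.standard_normal(len(b))

m_est, *_ = np.linalg.lstsq(R, b + db, rcond=None)

# For a zero-residual problem, ||dm||/||m|| <= kappa * ||db||/||b||:
# the condition number sets the worst-case error amplification.
rel_param_err = np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true)
bound = kappa * np.linalg.norm(db) / np.linalg.norm(b)
assert rel_param_err <= bound
```

A nearly singular configuration of measurement positions inflates `kappa`, so the same measurement noise produces a much larger parameter error, which is the amplification effect the abstract attributes to the shape matrix.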


2001 ◽  
Vol 45 (1) ◽  
pp. 92-94 ◽  
Author(s):  
Ralph C. Allen ◽  
Jack H. Stone

When stating the Gauss-Markov theorem, undergraduate econometric textbooks generally imply that the ordinary least squares (OLS) estimator has minimum variance. However, the proof of the Gauss-Markov theorem indicates that the weights produced by the OLS estimator, not the formula per se, produce the unique minimum-variance estimator. An example demonstrates that other linear unbiased estimators can yield the same variance as the OLS estimator; however, the weights from such formulas are identical to the OLS weights. To avoid this pedagogical confusion, the statement of the theorem should emphasize the weights rather than the estimator itself.
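The weights-versus-formula distinction in this abstract can be sketched numerically. The example below (with made-up data, not the article’s own example) writes the simple-regression slope two algebraically different ways and shows that both place identical weights on the observations, so they necessarily share the same variance.

```python
import numpy as np

# Hedged illustration of the abstract's point: the minimum-variance
# property lives in the OLS weights, not in any particular formula.
x = np.array([1.0, 2.0, 4.0, 5.0, 7.0])  # hypothetical regressor values
n, xbar = len(x), x.mean()
Sxx = np.sum((x - xbar) ** 2)

# Formula 1: matrix OLS, beta_hat = (X'X)^{-1} X' y.  The slope estimator's
# weights on y are the second row of (X'X)^{-1} X'.
X = np.column_stack([np.ones(n), x])
w_matrix = np.linalg.solve(X.T @ X, X.T)[1]

# Formula 2: b = sum_i x_i (y_i - ybar) / Sxx.  Expanding the ybar term
# spreads a weight of -xbar/Sxx across every observation, giving
# w_i = (x_i - xbar) / Sxx on each y_i.
w_alt = x / Sxx - xbar / Sxx

# The "different" linear unbiased formula carries the same weights,
# hence the same variance sigma^2 * sum(w_i^2): it is OLS in disguise.
assert np.allclose(w_matrix, w_alt)
```

Unbiasedness of the slope weights can also be checked directly: they sum to zero and satisfy sum(w_i * x_i) = 1, which is exactly the constraint set within which the OLS weights are the unique variance minimizer.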

