SENSOR‐ARRAY PROCESSING WITH CHANNEL‐RECURSIVE BAYES TECHNIQUES

Geophysics ◽  
1971 ◽  
Vol 36 (5) ◽  
pp. 822-834 ◽  
Author(s):  
Edward J. Farrell

Arrays of seismometers, hydrophones, and electromagnetic receivers have several signal processing problems in common. This paper is concerned primarily with source location and secondarily with signal extraction. The basic problem can be described as follows: A transient signal from an event is detected in the outputs of the sensor array. We determine the location of the source from the temporal positions of the signal in the array outputs. Further, if the signal is unknown, we estimate it. The approach taken here differs from previous investigations in three ways: (i) a Bayes estimation approach is used, (ii) the estimates are evaluated recursively with respect to channels, and (iii) a time‐domain approach is used, as opposed to a frequency‐domain approach. The proposed estimation technique is optimum with respect to a large class of loss functions, since it is based on the expectation of the posterior distribution. Recursive evaluation of the posterior expectation has several advantages. At each step we have the optimum estimate of the unknown parameters and the corresponding covariance matrix. A channel selection rule and stopping rule are defined in terms of the covariance matrix. Further, having an optimum estimate at each step permits simplification of the processing; e.g., the search interval may be limited to the most probable region of the parameter space. Such techniques significantly decrease the processing time and increase the rate of convergence. Equations are developed for the known‐signal case with planar and spherical wavefronts, and results are presented from a computer simulation. Subsequently, equations for the unknown‐signal case are presented with simulation results.
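The channel-recursive idea can be illustrated with a small sketch: each sensor's arrival time is folded into the posterior one channel at a time, yielding an updated mean and covariance of the source position at every step. This is a linearized (extended-Kalman-style) Python sketch for the spherical-wavefront case, not the paper's exact equations; the sensor geometry, propagation speed `c`, and noise variance are assumed for illustration.

```python
import numpy as np

def recursive_bayes_location(sensors, arrivals, c, mu0, P0, noise_var):
    """Fold in one channel (sensor) at a time, updating the posterior
    mean and covariance of the source position (spherical wavefront)."""
    mu, P = np.asarray(mu0, float).copy(), np.asarray(P0, float).copy()
    for p, t in zip(sensors, arrivals):
        r = np.linalg.norm(mu - p)          # current range to sensor p
        H = (mu - p) / (c * r)              # gradient of travel time wrt position
        S = H @ P @ H + noise_var           # innovation variance (scalar)
        K = P @ H / S                       # gain vector
        mu = mu + K * (t - r / c)           # posterior mean update
        P = P - np.outer(K, H @ P)          # posterior covariance update
    return mu, P
```

A channel-selection rule in the spirit of the paper would pick the next sensor so as to shrink the covariance fastest, and a stopping rule would terminate once the covariance is small enough; here the channels are simply processed in order.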

Author(s):  
Marta Savkina

This paper studies a regression model with heteroscedastic independent deviations whose regression function has the form $f(x) = ax^2+bx+c$, where $a$, $b$, and $c$ are unknown parameters. Approximate values (observations) of $f(x)$ are recorded at equidistant points of a line segment. The theorem proved in the paper gives a sufficient condition on the variances of the deviations under which the Aitken estimate of the parameter $a$ coincides with its least-squares (LS) estimate, in the case of an odd number of observation points and a bisymmetric covariance matrix. Under this condition, the Aitken and LS estimates of $b$ and $c$ do not coincide. The proof of the theorem consists of the following steps. First, the original system of polynomial equations is simplified to a system of second-degree polynomials. The unknowns of both systems are the variances of the deviations; each solution of the original system yields a set of variances for which the Aitken and LS estimates of the parameter $a$ coincide. Next, solving the original polynomial system is reduced to solving a single equation in three unknowns, with all remaining unknowns expressed through these three. Finally, it is proved that there exist positive, pairwise distinct values of these three unknowns that solve the resulting equation, and that substituting these values into the expressions for the remaining unknowns yields positive values as well.
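To make the objects of the theorem concrete, the following Python sketch computes both the LS and the Aitken (generalized least-squares) estimates of $(a, b, c)$ on equidistant points with a bisymmetric diagonal covariance. The variance values are hypothetical illustrations and are not claimed to satisfy the theorem's condition, so no coincidence of estimates is implied.

```python
import numpy as np

# Equidistant design points (odd count) on a segment, and a bisymmetric
# diagonal covariance of the deviations; the variance values are hypothetical.
x = np.linspace(-1.0, 1.0, 7)
sigma2 = np.array([4.0, 2.0, 1.0, 0.5, 1.0, 2.0, 4.0])
X = np.vstack([x**2, x, np.ones_like(x)]).T   # design matrix: columns for a, b, c

def ls_estimate(X, y):
    # Ordinary least squares: solve the normal equations X'X beta = X'y.
    return np.linalg.solve(X.T @ X, X.T @ y)

def aitken_estimate(X, y, sigma2):
    # Aitken (generalized LS): weight by the inverse covariance Sigma^{-1}.
    W = np.diag(1.0 / sigma2)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

With noisy observations the two estimators generally differ; the theorem characterizes variance patterns under which they agree for $a$ specifically.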


Entropy ◽  
2020 ◽  
Vol 22 (9) ◽  
pp. 985 ◽
Author(s):  
Youngsaeng Lee ◽  
Jeong-Soo Park

The approximated nonlinear least squares (ALS) method has been used to estimate unknown parameters in complex computer codes that are very time-consuming to execute. The ALS calibrates or tunes the computer code by minimizing the squared difference between real observations and computer output, using a surrogate such as a Gaussian process model. When the differences (residuals) are correlated or heteroscedastic, the ALS may result in distorted code tuning with a large estimation variance. Another potential drawback of the ALS is that it does not account for the uncertainty introduced by approximating the computer model with a surrogate. To address these problems, we propose a generalized ALS (GALS) built on the covariance matrix of the residuals: the residuals are weighted by the inverse of this covariance matrix, and the resulting criterion is minimized with respect to the tuning parameters. In addition, we consider an iterative version of the GALS, called the max-minG algorithm, in which the parameters are repeatedly re-estimated and updated by maximum likelihood estimation and the GALS, using both computer and experimental data, until convergence. The iteratively re-weighted ALS method (IRWALS) is also considered for comparison. Five test functions under different conditions are examined in a comparative analysis of the four methods. Based on this study, we find that both the bias and the variance of the estimates obtained from the proposed methods (the GALS and the max-minG) are smaller than those from the ALS and IRWALS methods. In particular, the max-minG outperforms the others, including the GALS, on the relatively complex test functions. Lastly, an application to a nuclear fusion simulator is presented, showing that the abnormal residual pattern produced by the ALS can be resolved by the proposed methods.
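A minimal sketch of the GALS criterion follows, assuming a toy analytic function as a stand-in for the Gaussian-process surrogate and an AR(1)-style residual covariance; both are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize

def gals_objective(theta, y_obs, surrogate, Sigma_inv):
    """GALS criterion: r' Sigma^{-1} r, with r the observation-surrogate residual."""
    r = y_obs - surrogate(theta)
    return r @ Sigma_inv @ r

# Toy analytic stand-in for the GP surrogate of an expensive simulator (assumed).
t = np.linspace(0.0, 1.0, 20)
def surrogate(theta):
    return np.sin(2.0 * np.pi * t * theta[0])

# AR(1)-style covariance to mimic correlated residuals (assumed for illustration).
rho = 0.6
idx = np.arange(t.size)
Sigma = rho ** np.abs(np.subtract.outer(idx, idx))
Sigma_inv = np.linalg.inv(Sigma)

# Synthetic "observations" from a true parameter, with correlated noise.
rng = np.random.default_rng(0)
true_theta = np.array([0.7])
y_obs = surrogate(true_theta) + 0.05 * (np.linalg.cholesky(Sigma) @ rng.standard_normal(t.size))

result = minimize(gals_objective, x0=[0.5], args=(y_obs, surrogate, Sigma_inv))
```

Setting `Sigma_inv` to the identity recovers the plain ALS criterion; the max-minG algorithm would additionally re-estimate the covariance by maximum likelihood and alternate with this minimization until convergence.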

