The three‐parameter equation: An efficient tool to enhance the stack

Geophysics ◽  
1994 ◽  
Vol 59 (2) ◽  
pp. 297-308 ◽  
Author(s):  
Pierre D. Thore ◽  
Eric de Bazelaire ◽  
Marisha P. Rays

We compare the three‐term equation to the normal moveout (NMO) equation for several synthetic data sets to determine whether the additional computational effort in the stacking process is worthwhile in various exploration contexts. We selected two evaluation criteria: (1) the quality of the stacked image, and (2) the reliability of the stacking parameters and their usefulness for further computation, such as interval velocity estimation. We simulated the stacking process very precisely, despite using only the traveltimes and not the full waveform data. The procedure searches for maximum coherency along the traveltime curve rather than performing a least‐squares regression to it. This technique, which we call the Gaussian‐weighted least squares, avoids most of the shortcomings of the least‐squares method. Our conclusions are as follows: (1) The three‐term equation gives a better stack than regular NMO; the increase in stacking energy can exceed 30 percent. (2) The calculation of interval velocities using a Dix formula rewritten for the three‐parameter equation is much more stable and accurate than the standard Dix formula. (3) The search for the three parameters is feasible in an efficient way, since the shifted hyperbola requires only static corrections rather than dynamic ones. (4) Noise alters the parameters of the maximum‐energy stack in a way that depends on the noise type. The estimates obtained remain accurate enough for interval velocity estimation (where only two parameters are needed), but using all three parameters in direct inversion may be hazardous because of noise corruption. These conclusions should, however, be verified on real data examples.
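The contrast between the two moveout curves can be sketched numerically. The shifted-hyperbola form below follows de Bazelaire's three-parameter idea, but the symbol names (t0, tp, vp) and all numeric values are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def t_nmo(x, t0, v_nmo):
    """Two-parameter hyperbolic NMO traveltime."""
    return np.sqrt(t0**2 + (x / v_nmo)**2)

def t_shifted(x, t0, tp, vp):
    """Three-parameter shifted hyperbola: a hyperbola with apex
    time tp, shifted along the time axis so that t(0) = t0."""
    return (t0 - tp) + np.sqrt(tp**2 + (x / vp)**2)

offsets = np.linspace(0.0, 2000.0, 5)               # offsets in m (assumed)
t2 = t_nmo(offsets, t0=1.0, v_nmo=2000.0)           # times in s
t3 = t_shifted(offsets, t0=1.0, tp=0.8, vp=2000.0)
# Both curves pass through t0 at zero offset but diverge with offset,
# which is what the extra parameter buys in flattening far-offset events.
print(t2[0], t3[0])   # 1.0 1.0
```

Because the shifted hyperbola differs from an ordinary hyperbola only by a constant time shift at each offset, applying it during stacking amounts to static rather than dynamic corrections, as noted in the abstract.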

2012 ◽  
Vol 538-541 ◽  
pp. 2622-2626
Author(s):  
Zi Jing Yang ◽  
Li Gang Cai ◽  
Gui You Chi ◽  
Gen Mao Yu ◽  
Li Xin Gao

A redundant lifting wavelet packet analysis based on variable parameters is presented and used to extract the weak fault features of a bearing submerged in background noise. By choosing different parameters in the predictor design formula, which is based on least-squares fitting, six asymmetric wavelets with various characteristics are constructed; each is then used for layer-by-layer redundant lifting wavelet packet decomposition of the signal. The decomposition results are used to establish a minimum-norm objective function, through which the optimal wavelet that best matches the feature information is selected for each node signal. The node signals from the final decomposition level are used for wavelet packet energy analysis, and the node signal with maximum energy is chosen for single-branch reconstruction and envelope spectrum analysis. The proposed method is applied to engineering data from a faulty bearing, and the good results obtained validate its effectiveness.
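The final step of the method, envelope spectrum analysis of the maximum-energy node, can be illustrated in isolation. The signal model, sampling rate, and frequencies below are assumptions for demonstration, not the paper's engineering data:

```python
import numpy as np
from scipy.signal import hilbert

fs = 2000.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
# Toy bearing-like signal: a 250 Hz carrier amplitude-modulated at 20 Hz,
# standing in for resonance ringing excited at the fault repetition rate.
fault_f, carrier_f = 20.0, 250.0
x = (1.0 + 0.8 * np.cos(2 * np.pi * fault_f * t)) * np.sin(2 * np.pi * carrier_f * t)
x += 0.1 * np.random.default_rng(0).standard_normal(t.size)

envelope = np.abs(hilbert(x))                    # analytic-signal envelope
env = envelope - envelope.mean()                 # remove DC before the FFT
spectrum = np.abs(np.fft.rfft(env)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]        # skip the residual DC bin
print(peak)   # dominant line sits at the modulation (fault) frequency
```

The envelope spectrum exposes the modulation frequency directly, which is why it is the standard readout after the wavelet-packet stage has isolated the resonance band.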


2015 ◽  
Vol 77 (17) ◽  
Author(s):  
Herman Wahid ◽  
Mohd. Hakimi Othman ◽  
Ruzairi Abdul Rahim

In geophysical subsurface surveys, the measurements obtained from the equipment are difficult to interpret: the data do not indicate the subsurface condition directly and deviate from the expected standard owing to numerous features. Computing data from the laws of physics is known as the forward problem, while reconstructing a model from sets of measurements is known as the inverse problem. Researchers have proposed multiple estimation techniques to address the inverse problem and produce estimates close to the actual model. In this work, we investigate the feasibility of using artificial neural networks (ANN) to solve two-dimensional (2-D) direct current (DC) resistivity mapping for subsurface investigation, with algorithms based on the radial basis function (RBF) model and the multi-layer perceptron (MLP) model. The conventional least squares (LS) method is used as a benchmark for comparison with the proposed algorithms. To train the proposed algorithms, several synthetic data sets are generated using the RES2DMOD software based on hybrid Wenner-Schlumberger configurations. Results from the proposed algorithms and the least squares method are compared in terms of effectiveness and error variation from the actual values. The proposed algorithms offer better performance, in terms of minimum error difference from the actual model, than the least squares method. Simulation results demonstrate that the proposed algorithms can solve the inverse problem, as illustrated by means of 2-D graphical mapping.
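The RBF idea can be reduced to a few lines: a layer of Gaussian basis functions whose output weights are solved by linear least squares. This is a generic sketch of the technique, not the paper's trained network; the toy target function, centers, and width are all assumptions:

```python
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian RBF design matrix: one column per hidden unit."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 40)
y = np.sin(3 * x) + 0.05 * rng.standard_normal(x.size)  # noisy toy "measurements"

centers = np.linspace(-1, 1, 10)       # fixed hidden-unit centers (assumed)
Phi = rbf_design(x, centers, width=0.3)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)             # output-layer weights

y_hat = Phi @ w
rms = np.sqrt(np.mean((y_hat - np.sin(3 * x)) ** 2))
print(rms)   # small: the RBF layer reproduces the smooth underlying mapping
```

In a resistivity-inversion setting the scalar input would be replaced by apparent-resistivity feature vectors and the output by model resistivities, but the least-squares solve for the output layer is the same.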


2020 ◽  
Vol 62 (8) ◽  
pp. 1107-1120
Author(s):  
Pedro Miraldo ◽  
João R. Cardoso

Abstract This paper addresses the problem of finding the closest generalized essential matrix from a given $$6\times 6$$ matrix, with respect to the Frobenius norm. To the best of our knowledge, this nonlinear constrained optimization problem has not been addressed in the literature yet. Although it can be solved directly, it involves a large number of constraints, and any optimization method to solve it would require much computational effort. We start by deriving a couple of unconstrained formulations of the problem. After that, we convert the original problem into a new one, involving only orthogonal constraints, and propose an efficient algorithm of steepest descent type to find its solution. To test the algorithms, we evaluate the methods with synthetic data and conclude that the proposed steepest descent-type approach is much faster than the direct application of general optimization techniques to the original formulation with 33 constraints and to the unconstrained ones. To further motivate the relevance of our method, we apply it in two pose problems (relative and absolute) using synthetic and real data.
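The orthogonal-constraints idea has a simple classical analog: the Frobenius-nearest orthogonal matrix to a given matrix is obtained from its SVD (the orthogonal Procrustes solution). This minimal numpy sketch illustrates that building block only; it is not the authors' generalized-essential-matrix algorithm:

```python
import numpy as np

def closest_orthogonal(M):
    """Frobenius-nearest orthogonal matrix to M: keep the singular
    vectors of M and replace all singular values by ones."""
    U, _, Vt = np.linalg.svd(M)
    return U @ Vt

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))     # arbitrary 6x6 input (assumed)
Q = closest_orthogonal(M)
print(np.allclose(Q @ Q.T, np.eye(6)))   # True: Q satisfies the constraint
```

The generalized essential matrix adds coupled structure on the blocks beyond plain orthogonality, which is why the paper needs a steepest-descent scheme rather than a single closed-form projection like this one.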


Author(s):  
Denis Ndanguza ◽  
Jean Pierre Muhirwa ◽  
Anatholie Uwimana

Predator-prey interactions are important in ecology, and in most analyses the two antagonists are assumed to form a closed system. The aim of this study is to model an unclosed predator-prey system. The model is built, and simulated data are computed by adding noise to the deterministic solution. Model parameters are then estimated using the least squares method. We compute the two critical points and carry out a stability analysis; the results show that the population is stable at one critical point and unstable at (0,0). The model fits the synthetic data with a coefficient of determination R2 = 0.9693, equivalent to 96.93%. Residual analysis is used to test the validity of the model, and it shows no pattern among the residuals. To strengthen the validation, Markov chain Monte Carlo algorithms are used as an alternative parameter estimation method. Diagnostics confirm the chains' convergence, which indicates an accurate model. In conclusion, the model is accurate and can be applied to real data.

Keywords: predator-prey, spatial distribution, parameters, Metropolis-Hastings algorithm, model diagnostic, stability analysis


2020 ◽  
Vol 2 (2) ◽  
pp. 18-20
Author(s):  
Rona Dwi Rahmah

Abstract. Earthquakes are natural disasters caused by shaking of the earth due to faulting and the sudden movement of the tectonic rocks that make up the earth's crust. The study of earthquakes becomes more interesting when explored from the perspective of the Qur'an, which contains many verses that speak of earthquakes, as explained in Surah Al-Zalzalah, verses 1-8. From February 14 to February 23, 2016, aftershocks occurred in the Klagon Village area, Saradan District, Madiun. The least squares method was used to estimate when the aftershocks would end, together with the relationships between aftershock frequency and time given by the Omori, Mogi 1, Mogi 2, and Utsu methods. The study concludes that the Mogi 2 method, with a correlation coefficient of r = 0.195 (on the scale -1 ≤ r ≤ 1), best matches the comparison between the calculated and actual aftershock frequencies, with the aftershocks predicted to end on day 464. Fundamentally, the term earthquake in the Qur'an need not be read as a single word whose meaning is limited to the earthquake itself; it can also serve as a brief account of aftershocks from the Qur'anic perspective.
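The least-squares step for the basic Omori relation can be sketched by linearization. The parameter values below are hypothetical, and the Mogi and Utsu variants used in the paper would change the model function:

```python
import numpy as np

# Synthetic aftershock frequencies following the basic Omori law
# n(t) = K / (t + c); K and c are hypothetical values.
K_true, c_true = 120.0, 2.0
t = np.arange(1.0, 31.0)                 # days after the mainshock
n = K_true / (t + c_true)

# Linearization: 1/n = t/K + c/K, so an ordinary least-squares line
# through the points (t, 1/n) recovers both parameters at once.
slope, intercept = np.polyfit(t, 1.0 / n, 1)
K_est, c_est = 1.0 / slope, intercept / slope
print(K_est, c_est)   # recovers 120.0 and 2.0 for noise-free data
```

With real counts, the same line fit yields the decay parameters, and extrapolating n(t) down to a chosen threshold gives the predicted day on which aftershock activity ends.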


Author(s):  
Salih Djilali ◽  
Soufiane Bentout ◽  
Sunil Kumar ◽  
Tarik Mohammed Touaoula

In this research, we discuss the evolution of COVID-19 infection cases and predict the spread of COVID-19 in Algeria and India. To this aim, we approximate the transmission rate in terms of the measures taken by the governments. The least squares method is used, with an accuracy of 95%, to fit the model solution to the real data declared by the WHO, in order to approximate the density of asymptomatic individuals with COVID-19. As a result, we obtain the different values of the basic reproduction number (BRN) corresponding to each measure taken by the governments. Moreover, we estimate the number of asymptomatic infected persons at the epidemic peak for each country, and determine the needed ICU (intensive care unit) beds and regular treatment beds. We also assess the outcome of governmental strategies in reducing the spread of the disease. Combining all these components, we offer some suggestions about the necessity of using the recently developed vaccines, such as Pfizer/BioNTech and Moderna, to limit the spread of COVID-19 in the studied countries.
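A reduced version of the fitting step is a log-linear least-squares estimate of the early growth rate, from which a crude reproduction-number figure follows via the textbook relation BRN ≈ 1 + r·Tg. The case counts, growth rate, and generation time below are all assumptions, not the paper's data or model:

```python
import numpy as np

# Hypothetical early-epidemic daily case counts growing exponentially.
days = np.arange(20)
r_true = 0.15                              # daily growth rate (assumed)
cases = 10.0 * np.exp(r_true * days)

# Least-squares fit of log(cases) = log(C0) + r*t recovers the growth rate;
# with an assumed mean generation time Tg, BRN ~ 1 + r*Tg is a rough proxy.
r_est, logC0 = np.polyfit(days, np.log(cases), 1)
Tg = 5.0                                   # generation time in days (assumed)
brn = 1.0 + r_est * Tg
print(round(brn, 2))   # 1.75 for these synthetic numbers
```

The paper's compartmental model yields one BRN per intervention period; this sketch shows only the least-squares mechanics behind a single such estimate.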


Geophysics ◽  
2007 ◽  
Vol 72 (1) ◽  
pp. S33-S40 ◽  
Author(s):  
S. Rentsch ◽  
S. Buske ◽  
S. Lüth ◽  
S. A. Shapiro

We propose a new approach to the location of seismic sources using a technique inspired by Gaussian-beam migration of three-component data. The approach requires only preliminary picking of time intervals around a detected event, is much less sensitive to picking precision than standard location procedures, and is characterized by a high degree of automation. The polarization information of the three-component data is estimated and used to perform initial-value ray tracing. By weighting the energy of the signal using Gaussian beams around these rays, the stacking is restricted to physically relevant regions only. Event locations correspond to regions of maximum energy in the resulting image. We have successfully applied the method to synthetic data examples with 20%–30% white noise and to real data from a hydraulic-fracturing experiment, where events with comparatively small magnitudes were recorded.
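The Gaussian weighting of stacked energy can be caricatured in one dimension. This toy sketch is only an analog of the beam idea; the distances, energies, and beam width are assumed values, not the authors' implementation:

```python
import numpy as np

# Toy version of the weighting idea: energy from each receiver is stacked
# into a candidate source image, but each contribution is down-weighted by
# its distance d from the ray predicted by the polarization, using a
# Gaussian beam of half-width sigma (all values below are assumed).
def gaussian_weighted_stack(energy, distances, sigma):
    weights = np.exp(-0.5 * (distances / sigma) ** 2)
    return np.sum(weights * energy)

energy = np.array([1.0, 0.9, 0.2, 0.1])
near = gaussian_weighted_stack(energy, np.array([0.0, 5.0, 40.0, 60.0]), sigma=10.0)
far = gaussian_weighted_stack(energy, np.array([50.0, 55.0, 40.0, 60.0]), sigma=10.0)
print(near > far)   # True: candidates close to the rays accumulate more energy
```

Scanning candidate locations and keeping the maximum of this weighted stack is what makes the event location emerge as an energy maximum in the image, without precise traveltime picks.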


Author(s):  
Mohamed Ibrahim Mohamed

In this work, we introduce a new extension of the Fréchet distribution. A sufficient set of its mathematical and statistical properties is derived. Parameter estimation is carried out using several different estimation methods, whose performances are studied by Monte Carlo simulations. The potential of the proposed model is analyzed through two data sets. The weighted least squares method is the best method for modelling the breaking stress data, and the least squares method is the best method for modelling the strengths data; all other methods also performed well for both data sets. Moreover, the new model gives the best fits among all fitted extensions of the Fréchet model for these data, so it can be chosen as the best model for modelling the breaking stress and strengths real data.
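The (ordinary) least-squares estimation method mentioned here minimizes the squared distance between the model CDF and the empirical plotting positions i/(n+1). A sketch for the plain two-parameter Fréchet distribution, not the proposed extension, with assumed true parameter values:

```python
import numpy as np
from scipy.optimize import least_squares

# Two-parameter Frechet CDF: F(x) = exp(-(scale/x)**shape) for x > 0.
def frechet_cdf(x, shape, scale):
    return np.exp(-((scale / x) ** shape))

rng = np.random.default_rng(3)
shape_true, scale_true = 3.0, 2.0          # assumed "true" values
# Inverse-Weibull sampling: a Frechet draw is scale over a Weibull draw.
sample = np.sort(scale_true / rng.weibull(shape_true, 500))

n = sample.size
pp = np.arange(1, n + 1) / (n + 1)         # empirical plotting positions

# Ordinary least-squares distribution fit on the CDF scale.
res = least_squares(lambda p: frechet_cdf(sample, *p) - pp,
                    x0=[2.0, 1.5], bounds=([0.1, 0.1], [20.0, 20.0]))
print(res.x)   # shape and scale estimates near the assumed 3.0 and 2.0
```

The weighted variant divides each residual by an estimate of its variance, F(x)(1-F(x))/(n+2), which is the change that makes it preferable for the breaking-stress data in the study.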


Geophysics ◽  
1996 ◽  
Vol 61 (1) ◽  
pp. 138-150 ◽  
Author(s):  
Michael Jervis ◽  
Mrinal K. Sen ◽  
Paul L. Stoffa

We describe here methods of estimating interval velocities based on two nonlinear optimization methods: very fast simulated annealing (VFSA) and a genetic algorithm (GA). The objective function is defined using prestack seismic data after depth migration. This inverse problem involves optimizing the lateral consistency of reflectors between adjacent migrated shot records; in effect, the normal moveout correction in velocity analysis is replaced by prestack depth migration. When the least-squares difference between each pair of migrated shots is at a minimum, the true velocity model has been found. Our model is parameterized using cubic B-splines distributed on a rectangular grid. The main advantages of using migrated data are that they require no traveltime picking, no knowledge of the source wavelet, and no expensive computation of synthetic waveform data to assess the degree of data-model fit. Nonlinear methods allow automated determination of the global minimum without relying on estimates of the gradient of the objective function or the starting model, and without making assumptions about the nature of the objective function itself. For the velocity estimation problem, VFSA converges 4 to 5 times faster than the GA for both a 2-D synthetic example and a structurally complex real-data example from the Gulf of Mexico. Though computationally intensive, this problem requires few model parameters, and the use of a fast traveltime code for Kirchhoff migration makes the algorithm tractable for real earth problems.
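A stripped-down, one-dimensional VFSA loop conveys the mechanics: a Cauchy-like, temperature-dependent move generator and a fast-decaying temperature schedule in the style of Ingber's algorithm. The toy misfit, bounds, schedule constants, and seed are all assumptions, not the paper's objective or tuning:

```python
import numpy as np

def objective(v):
    return (v - 2500.0) ** 2 / 1e6        # toy "misfit", minimum at v = 2500

rng = np.random.default_rng(4)
lo, hi = 1500.0, 4000.0                   # velocity search bounds (assumed)
v = 3500.0                                # starting model
T0, c = 1.0, 3.0                          # schedule constants (assumed)
f_cur = objective(v)
best_v, best_f = v, f_cur

for k in range(1, 2001):
    T = T0 * np.exp(-c * np.sqrt(k))      # VFSA-style schedule, 1 dimension
    u = rng.uniform()
    # Ingber-style move: long jumps at high T, local refinement at low T.
    y = np.sign(u - 0.5) * T * ((1.0 + 1.0 / T) ** abs(2.0 * u - 1.0) - 1.0)
    v_new = np.clip(v + y * (hi - lo), lo, hi)
    f_new = objective(v_new)
    # Metropolis acceptance on the annealed misfit.
    if f_new < f_cur or rng.uniform() < np.exp(-(f_new - f_cur) / max(T, 1e-300)):
        v, f_cur = v_new, f_new
        if f_cur < best_f:
            best_v, best_f = v, f_cur
print(best_v)
```

In the paper, the scalar v becomes the vector of cubic B-spline coefficients and the misfit is the lateral inconsistency of adjacent migrated shots, but the accept/reject loop is the same.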


Geophysics ◽  
2001 ◽  
Vol 66 (2) ◽  
pp. 627-636 ◽  
Author(s):  
Pantelis M. Soupios ◽  
Constantinos B. Papazachos ◽  
Christopher Juhlin ◽  
Gregory N. Tsokas

This paper deals with the problem of nonlinear seismic velocity estimation from first-arrival traveltimes obtained from crosshole and downhole experiments in three dimensions. A standard tomographic procedure is applied, based on dividing the crosshole area into a number of cells, each assigned an initial slowness. For the forward modeling, the raypath matrix is computed using the revisited ray-bending method, supplemented by an approximate computation of the first Fresnel zone at each point of the ray, hence using physical and not only mathematical rays. Since 3-D ray tracing is incorporated, the inversion technique is nonlinear. Velocity images are obtained by a constrained least-squares inversion scheme using both “damping” and “smoothing” factors, whose appropriate values are chosen using criteria such as the L-curve. The tomographic approach is improved by incorporating a priori information about the media to be imaged into the inversion scheme: a desirable solution is projected onto the null space of the inversion, and this null-space contribution is included with the standard non-null-space inversion solution. The efficiency of the inversion scheme is tested through a series of tests with synthetic data. Moreover, application to real data from the area of the Ural Mountains demonstrates that the proposed technique produces more realistic velocity models than those obtained by other standard approaches.
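The damped and smoothed least-squares update can be written as a single regularized normal-equations solve. This is a generic sketch of that step, with a toy random raypath matrix and assumed damping/smoothing weights, not the paper's Fresnel-zone forward model:

```python
import numpy as np

# Solve (G^T G + eps*I + mu*L^T L) m = G^T d, where G holds raypath
# lengths per cell, L is a first-difference roughening operator, and
# eps ("damping") and mu ("smoothing") are assumed regularization weights.
rng = np.random.default_rng(5)
n_rays, n_cells = 80, 30
G = rng.random((n_rays, n_cells))                 # toy raypath-length matrix
m_true = np.ones(n_cells)
m_true[10:20] = 1.5                               # slowness anomaly to recover
d = G @ m_true + 0.01 * rng.standard_normal(n_rays)   # noisy traveltimes

L = np.diff(np.eye(n_cells), axis=0)              # first-difference operator
eps, mu = 0.01, 0.1
A = G.T @ G + eps * np.eye(n_cells) + mu * L.T @ L
m_est = np.linalg.solve(A, G.T @ d)
print(np.mean(np.abs(m_est - m_true)))   # small residual: anomaly recovered
```

In practice eps and mu would be scanned and chosen from the trade-off (L-curve) between data misfit and model roughness, as the abstract describes, rather than fixed in advance.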

