Dip-scanning coherence algorithm using eigenstructure analysis and supertrace technique

Geophysics ◽  
2006 ◽  
Vol 71 (3) ◽  
pp. V61-V66 ◽  
Author(s):  
Yandong Li ◽  
Wenkai Lu ◽  
Huanqin Xiao ◽  
Shanwen Zhang ◽  
Yanda Li

The eigenstructure-based coherence algorithms are robust to noise and able to produce enhanced coherence images. However, the original eigenstructure coherence algorithm does not implement dip scanning; therefore, it produces less satisfactory results in areas with strong structural dips. The supertrace technique also improves the robustness of coherence algorithms by concatenating multiple seismic traces to form a supertrace. In addition, the supertrace data cube preserves the structural-dip information contained in the original seismic data cube; thus, dip scanning can be performed effectively using a number of adjacent supertraces. We combine eigenstructure analysis with the dip-scanning supertrace technique to obtain a new coherence-estimation algorithm. Application to a real data set shows that the new algorithm provides good coherence estimates in areas with strong structural dips. Furthermore, the algorithm is computationally efficient because of the small covariance matrix [Formula: see text] used for the eigenstructure analysis.
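The abstract does not reproduce the algorithm itself, but the core ideas — coherence as the dominant-eigenvalue fraction of a local covariance matrix, and a scan over trial dips that keeps the maximum coherence — can be sketched roughly as below. This is a minimal NumPy illustration, not the authors' implementation: the function names are mine, zero-mean traces are assumed, and integer sample shifts stand in for a proper dip scan over supertraces.

```python
import numpy as np

def eigenstructure_coherence(traces):
    """Eigenstructure coherence of a window of J traces (n_samples x J).

    Coherence = largest eigenvalue of the J x J covariance matrix
    divided by its trace (the sum of all eigenvalues); it equals 1
    when all traces are identical. Traces are assumed zero-mean.
    """
    C = traces.T @ traces                 # J x J covariance matrix
    eigvals = np.linalg.eigvalsh(C)       # ascending order
    return eigvals[-1] / eigvals.sum()

def dip_scan_coherence(traces, max_shift=3):
    """Scan integer time shifts per trace (a crude stand-in for dip
    scanning) and keep the maximum coherence over all trial dips."""
    n, J = traces.shape
    best = 0.0
    for s in range(-max_shift, max_shift + 1):
        # apply a linear moveout of s samples per trace, then measure
        shifted = np.column_stack(
            [np.roll(traces[:, j], s * j) for j in range(J)])
        best = max(best, eigenstructure_coherence(shifted))
    return best
```

On data with a dipping but otherwise coherent event, the plain estimate is depressed while the dip-scanned estimate recovers a value near 1, which is the behavior the paper targets.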

Geophysics ◽  
2005 ◽  
Vol 70 (3) ◽  
pp. P13-P18 ◽  
Author(s):  
Wenkai Lu ◽  
Yandong Li ◽  
Shanwen Zhang ◽  
Huanqin Xiao ◽  
Yanda Li

This article proposes a new higher-order-statistics-based coherence-estimation algorithm, which we denote HOSC. The traditional crosscorrelation-based C1 coherence algorithm sequentially estimates correlation in the inline and crossline directions and uses their geometric mean as the coherence estimate at the analysis point. In contrast, our method exploits three seismic traces simultaneously: it calculates a 2D slice of their normalized fourth-order moment with one zero-lag correlation and then searches for the maximum correlation point on that slice as the coherence estimate. To include more seismic traces in the coherence estimation, we introduce a supertrace technique that constructs a new data cube by rearranging several adjacent seismic traces into a single supertrace. Combining the supertrace technique with the C1 and HOSC algorithms yields two efficient coherence-estimation algorithms, which we call ST-C1 and ST-HOSC. Application to a real data set shows that our algorithms reveal more detail in the structural and stratigraphic features than the traditional C1 algorithm while preserving its computational efficiency.
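The two building blocks named in the abstract — the crosscorrelation-based C1 estimate and the supertrace rearrangement — are standard enough to sketch. The snippet below is an illustrative reading, not the paper's code: function names are mine, circular lags approximate windowed correlation, and positively correlated traces are assumed so the geometric mean is well defined.

```python
import numpy as np

def max_norm_xcorr(a, b, max_lag=5):
    """Maximum normalized crosscorrelation of two traces over small
    integer lags (circular shifts for simplicity)."""
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        bb = np.roll(b, lag)
        r = np.dot(a, bb) / (np.linalg.norm(a) * np.linalg.norm(bb))
        best = max(best, r)
    return best

def c1_coherence(center, inline, xline, max_lag=5):
    """C1-style estimate: geometric mean of the inline and crossline
    maximum crosscorrelations at the analysis point (assumes both
    correlations are positive)."""
    return np.sqrt(max_norm_xcorr(center, inline, max_lag)
                   * max_norm_xcorr(center, xline, max_lag))

def make_supertrace(cube, i, j, half=1):
    """Concatenate the (2*half+1)^2 traces around position (i, j) of a
    data cube (inline x crossline x time) into one long supertrace."""
    block = cube[i - half:i + half + 1, j - half:j + half + 1, :]
    return block.reshape(-1)
```

Running C1 on three identical traces returns 1, and a 3 x 3 neighborhood collapses into a single supertrace nine times the original length, which is how the supertrace cube folds lateral neighbors into each trace.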


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced in which only a limited number of diagonals of the operators involved are computed. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at approximately two orders of magnitude less cost, but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to conventionally datumed data.


Geophysics ◽  
2006 ◽  
Vol 71 (3) ◽  
pp. S99-S110
Author(s):  
Daniel A. Rosales ◽  
Biondo Biondi

A new partial-prestack migration operator to manipulate multicomponent data, called converted-wave azimuth moveout (PS-AMO), transforms converted-wave prestack data with an arbitrary offset and azimuth to equivalent data with a new offset and azimuth position. This operator is a sequential application of converted-wave dip moveout and its inverse. As expected, PS-AMO reduces to the known expression of AMO for the extreme case when the P velocity is the same as the S velocity. Moreover, PS-AMO preserves the resolution of dipping events and internally applies a correction for the lateral shift between the common midpoint and the common-reflection/conversion point. An implementation of PS-AMO in the log-stretch frequency-wavenumber domain is computationally efficient. The main applications of the PS-AMO operator are geometry regularization, data reduction through partial stacking, and interpolation of unevenly sampled data. We test our PS-AMO operator by solving 3D acquisition geometry-regularization problems for multicomponent, ocean-bottom seismic data. The geometry-regularization problem is posed as a regularized least-squares objective function whose regularization term uses the PS-AMO operator to preserve the resolution of dipping events. Application of this methodology to a portion of the Alba 3D multicomponent, ocean-bottom seismic data set shows that we can satisfactorily obtain an interpolated data set that honors the physics of converted waves.


Geophysics ◽  
2010 ◽  
Vol 75 (4) ◽  
pp. V51-V60 ◽  
Author(s):  
Ramesh (Neelsh) Neelamani ◽  
Anatoly Baumstein ◽  
Warren S. Ross

We propose a complex-valued curvelet transform-based (CCT-based) algorithm that adaptively subtracts from seismic data those noises for which an approximate template is available. The CCT decomposes a geophysical data set in terms of small reflection pieces, with each piece having a different characteristic frequency, location, and dip. One can precisely change the amplitude and shift the location of each seismic reflection piece in a template by controlling the amplitude and phase of the template's CCT coefficients. Based on these insights, our approach uses the phase and amplitude of the data's and template's CCT coefficients to correct misalignment and amplitude errors in the noise template, thereby matching the adapted template with the actual noise in the seismic data, reflection event by event. We also extend our approach to subtract noises whose approximation requires several templates. By itself, the method can only correct small misalignment errors ([Formula: see text] in [Formula: see text] data) in the template; it relies on conventional least-squares (LS) adaptation to correct large-scale misalignment errors, such as wavelet mismatches and bulk shifts. Synthetic and real-data results illustrate that the CCT-based approach improves upon the LS approach and a curvelet-based approach described by Herrmann and Verschuur.
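The key mechanism — nudging the amplitude and phase of each complex template coefficient toward the data's, within caps, before subtracting — can be demonstrated in any complex transform domain. The sketch below uses a plain FFT as a stand-in for the curvelet transform (a deliberate simplification: it has no dip or location selectivity), and the function name and cap parameters are my own assumptions.

```python
import numpy as np

def adapt_and_subtract(data, template, max_gain=2.0, max_phase=np.pi / 4):
    """Transform-domain adaptive subtraction, illustrated with an FFT in
    place of the complex curvelet transform. Each template coefficient's
    amplitude and phase are nudged toward the data's, within caps, before
    the adapted template is subtracted."""
    D, T = np.fft.rfft(data), np.fft.rfft(template)
    # amplitude correction, capped so the template cannot blow up
    gain = np.clip(np.abs(D) / np.maximum(np.abs(T), 1e-12),
                   1.0 / max_gain, max_gain)
    # phase correction, wrapped to (-pi, pi] then capped (small shifts only)
    dphi = np.angle(D) - np.angle(T)
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi
    dphi = np.clip(dphi, -max_phase, max_phase)
    T_adapted = T * gain * np.exp(1j * dphi)
    return data - np.fft.irfft(T_adapted, n=len(data))
```

When the noise is a slightly shifted, rescaled copy of the template and both errors fall within the caps, the residual is essentially zero; larger shifts exceed the phase cap, which is exactly why the paper defers bulk misalignment to a conventional LS adaptation.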


Geophysics ◽  
1992 ◽  
Vol 57 (12) ◽  
pp. 1623-1632 ◽  
Author(s):  
Richard E. Duren ◽  
Stan V. Morris

Null steering refers to the removal (or zeroing) of interferences at specified dips by creating receiving patterns with nulls that are aligned on the interferences. This type of beamforming is more effective than forming a simple crossline array and can be applied to both multistreamer and swath data for reducing out‐of‐plane interferences (sideswipe, boat interference, etc.) that corrupt two‐dimensional (2-D) data (the desired signal). Many beamforming techniques lead to signal cancellation when the interferences are correlated with the desired signal; however, the beamforming technique developed here remains effective in the presence of signal-correlated interferences, and it extends naturally to prestack and poststack seismic data. The number of interferences and their dips are identified by a visual examination of the plotted data. This information is used to design filters that are applied to the total data set. The resulting 2-D data set is free of crossline interference, with the inline 2-D data left unaltered. Model and real data comparisons between null steering and simple crossline array summation show that: (1) null steering significantly attenuates crossline interference, and (2) 2-D inline data, masked by sideswipe, can be revealed once sideswipe is attenuated by null steering. The real data examples show the identification and effective attenuation of interferences that could easily be interpreted as inline 2-D data: (1) an apparent steeply dipping event, and (2) an apparent flat “bright spot.”
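The essence of null steering — a unit response at the signal dip and exact zeros at the interfering dips — can be shown with a narrowband linear-constraint sketch. This is a textbook minimum-norm construction under my own naming, not the paper's broadband filter design: dips are expressed as slownesses, and a single temporal frequency is assumed.

```python
import numpy as np

def null_steer_weights(x, freq, p_signal, p_interf):
    """Minimum-norm array weights with unit response at the signal dip
    and exact nulls at interfering dips (narrowband illustration).

    x: sensor positions (m); freq: temporal frequency (Hz); dips given
    as slownesses p (s/m), with steering vector a(p)_m = exp(-2i*pi*freq*p*x_m).
    Solves C w = f with f = [1, 0, ..., 0] via w = C^H (C C^H)^{-1} f."""
    dips = [p_signal] + list(p_interf)
    C = np.array([np.exp(-2j * np.pi * freq * p * x) for p in dips])
    f = np.zeros(len(dips)); f[0] = 1.0
    return C.conj().T @ np.linalg.solve(C @ C.conj().T, f)

def array_response(w, x, freq, p):
    """Beam response of weights w toward dip (slowness) p."""
    return np.exp(-2j * np.pi * freq * p * x) @ w
```

Evaluating the response confirms the constraints: exactly 1 toward the desired dip and 0 toward each interferer, which is why events at the nulled dips vanish while the inline data pass unaltered.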


2016 ◽  
Vol 13 (3) ◽  
pp. 529-538 ◽  
Author(s):  
Tao Yang ◽  
Jing-Huai Gao ◽  
Bing Zhang ◽  
Da-Xing Wang

Geophysics ◽  
2021 ◽  
pp. 1-67
Author(s):  
Hossein Jodeiri Akbari Fam ◽  
Mostafa Naghizadeh ◽  
Oz Yilmaz

Two-dimensional seismic surveys often are conducted along crooked line traverses due to the inaccessibility of rugged terrains, logistical and environmental restrictions, and budget limitations. The crookedness of line traverses, irregular topography, and complex subsurface geology with steeply dipping and curved interfaces could adversely affect the signal-to-noise ratio of the data. The crooked-line geometry violates the straight-line survey assumption that is a basic principle behind the 2D multifocusing (MF) method and leads to crossline spread of midpoints. Additionally, the crooked-line geometry can give rise to potential pitfalls and artifacts, thus leading to difficulties in imaging and velocity-depth model estimation. We develop a novel multifocusing algorithm for crooked-line seismic data and revise the traveltime equation accordingly to achieve better signal alignment before stacking. Specifically, we present a 2.5D multifocusing reflection traveltime equation, which explicitly takes into account the midpoint dispersion and cross-dip effects. The new formulation corrects for normal, inline, and crossline dip moveouts simultaneously, which is significantly more accurate than removing these effects sequentially; applying NMO, DMO, and CDMO separately tends to introduce significant errors, especially at large offsets. The 2.5D multifocusing method can be applied automatically through a coherence-based global optimization search on the data. We investigated the accuracy of the new formulation by testing it on different synthetic models and a real seismic data set. Applying the proposed approach to the real data led to a high-resolution seismic image with a significant quality improvement over the conventional method. Numerical tests show that the new formula can accurately focus the primary reflections at their correct locations, remove anomalous dip-dependent velocities, and extract true dips from seismic data for structural interpretation. The proposed method efficiently projects and extracts valuable 3D structural information when applied to crooked-line seismic surveys.
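The 2.5D multifocusing traveltime equation itself is not reproduced in the abstract, but the "anomalous dip-dependent velocities" it removes are a classical effect worth making concrete: a 2D dipping reflector stacks at the apparent velocity v/cos(dip) rather than the true medium velocity (Levin's well-known result). A minimal sketch, with my own function names:

```python
import numpy as np

def nmo_time(t0, x, v):
    """Hyperbolic normal-moveout traveltime: t(x) = sqrt(t0^2 + (x/v)^2)."""
    return np.sqrt(t0**2 + (x / v)**2)

def dip_stacking_velocity(v, dip_deg):
    """Apparent (anomalous) stacking velocity of a 2D dipping reflector:
    v_stack = v / cos(dip). Correcting dip moveout restores the true v,
    which is the effect the 2.5D formulation handles jointly with NMO
    and crossline dip moveout."""
    return v / np.cos(np.radians(dip_deg))
```

For a 30-degree dip and v = 2000 m/s the stacking velocity is inflated to about 2309 m/s; velocities picked without dip correction are therefore biased high, exactly the anomaly the abstract says the new formula removes.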


Geophysics ◽  
2012 ◽  
Vol 77 (6) ◽  
pp. N17-N24 ◽  
Author(s):  
Zhaoyun Zong ◽  
Xingyao Yin ◽  
Guochen Wu

The fluid term in the Biot-Gassmann equation plays an important role in reservoir fluid discrimination. The density term embedded in the fluid term, however, is difficult to estimate because it is less sensitive to seismic amplitude variations. We combined poroelasticity theory, amplitude variation with offset (AVO) inversion, and identification of P- and S-wave moduli to present a stable and physically meaningful method to estimate the fluid term from prestack seismic data, with no need for density information. We used poroelasticity theory to express the fluid term as a function of P- and S-wave moduli, which makes the derivation physically meaningful and natural. Then we derived an AVO approximation in terms of these moduli, which can be directly inverted from seismic data. Furthermore, this practical and robust AVO-inversion technique was developed in a Bayesian framework. The objective was to obtain the maximum a posteriori solution for the P-wave modulus, S-wave modulus, and density. Gaussian and Cauchy distributions were used for the likelihood and a priori probability distributions, respectively. The introduction of a low-frequency constraint and statistical probability information to the objective function rendered the inversion more stable and less sensitive to the initial model. Tests on synthetic data showed that all the parameters can be estimated well when no noise is present and that the estimated P- and S-wave moduli remain reasonable with moderate noise and rather smooth initial model parameters. A test on a real data set showed that the estimated fluid term was in good agreement with the results of drilling.
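The Bayesian setup named in the abstract — Gaussian likelihood, Cauchy prior, maximum a posteriori solution — corresponds to a standard objective function that can be sketched generically. The code below is an illustrative toy under my own naming (plain gradient descent on a small linear forward operator), not the paper's inversion, and it omits the low-frequency constraint.

```python
import numpy as np

def map_objective(m, G, d, sigma=0.1, s=1.0):
    """MAP objective for a Gaussian likelihood and Cauchy prior:
    J(m) = ||d - G m||^2 / (2 sigma^2) + sum_i log(1 + m_i^2 / s^2)."""
    r = d - G @ m
    return 0.5 * np.dot(r, r) / sigma**2 + np.sum(np.log1p((m / s)**2))

def map_gradient_descent(G, d, m0, sigma=0.1, s=1.0, lr=1e-3, iters=2000):
    """Minimize the MAP objective by gradient descent. The Cauchy prior's
    gradient is 2 m_i / (s^2 + m_i^2), which shrinks small coefficients
    strongly but barely penalizes large ones (sparsity-promoting)."""
    m = m0.astype(float).copy()
    for _ in range(iters):
        grad = -G.T @ (d - G @ m) / sigma**2 + 2.0 * m / (s**2 + m**2)
        m -= lr * grad
    return m
```

The heavy-tailed Cauchy prior is what keeps strong reflectivity contrasts intact while suppressing noise, in contrast to a Gaussian prior, which would shrink everything proportionally.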


2019 ◽  
Author(s):  
Aurora Torrente

Background: The concept of depth induces an ordering from centre outwards in multivariate data. Most depth definitions are unfeasible for dimensions larger than three or four, but the Modified Band Depth (MBD) is a notable exception that has proven to be a valuable tool in the analysis of gene expression data. However, given a notion of depth, there exists no straightforward method to derive a depth-based similarity or dissimilarity measure between observations for use in standard tasks such as clustering or classification. Results: We propose a methodology to assess a data-driven (dis)similarity between two observations, taking advantage of the bands used in the computation of the MBD. To that end, we build binary vectors associated with each observation that record how many times each coordinate is located between the limits of the intervals defined by all possible bands in the set. Those vectors and their Boolean products are used to derive contingency tables from which standard similarity indices can be calculated. Our approach is computationally efficient and can be applied to bands formed by any number of observations from the data set. Conclusions: We evaluated the performance of several similarity indices against that of the Euclidean distance, used as a benchmark, in standard clustering and classification techniques on a variety of simulated and real data sets. Our experiments show that the technique for deriving such measures is very promising, with some of the selected indices outperforming the Euclidean distance. The method is not restricted to these indices; its extension to other similarity coefficients is straightforward.
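The binary-vector construction described in the abstract can be sketched directly for two-observation bands: for each band and each coordinate, record whether the observation lies between the band's limits, then compare two such vectors through a contingency-based index. The snippet below is my own minimal reading (Jaccard as the example index; the paper evaluates several), not the authors' implementation.

```python
import numpy as np
from itertools import combinations

def band_membership(data):
    """For each observation (row of data), build a binary vector with one
    bit per (two-observation band, coordinate) pair: 1 if the coordinate
    lies between the band's pointwise min and max limits."""
    n = data.shape[0]
    vecs = []
    for obs in data:
        bits = []
        for i, j in combinations(range(n), 2):
            lo = np.minimum(data[i], data[j])
            hi = np.maximum(data[i], data[j])
            bits.extend(((lo <= obs) & (obs <= hi)).astype(int))
        vecs.append(bits)
    return np.array(vecs)

def jaccard(u, v):
    """Jaccard similarity from the 2x2 contingency table of two binary
    vectors: matches-on-1 over (matches-on-1 + mismatches)."""
    a = np.sum((u == 1) & (v == 1))
    bc = np.sum(u != v)
    return a / (a + bc) if (a + bc) else 1.0
```

On three nested curves, the middle one sits inside every band (its vector is all ones, mirroring its maximal depth), and the two outer curves are less similar to each other than either is to the middle one — the depth-consistent behavior the measure is designed to capture.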

