A perspective on 3D surface-related multiple elimination

Geophysics ◽  
2010 ◽  
Vol 75 (5) ◽  
pp. 75A245-75A261 ◽  
Author(s):  
Bill Dragoset ◽  
Eric Verschuur ◽  
Ian Moore ◽  
Richard Bisley

Surface-related multiple elimination (SRME) is an algorithm that predicts all surface multiples by a convolutional process applied to seismic field data. Only minimal preprocessing is required. Once predicted, the multiples are removed from the data by adaptive subtraction. Unlike other methods of multiple attenuation, SRME does not rely on assumptions or knowledge about the subsurface, nor does it use event properties to discriminate between multiples and primaries. In exchange for this “freedom from the subsurface,” SRME requires knowledge of the acquisition wavelet and a dense spatial distribution of sources and receivers. Although a 2D version of SRME sometimes suffices, most field data sets require 3D SRME for accurate multiple prediction. All implementations of 3D SRME face a serious challenge: The sparse spatial distribution of sources and receivers available in typical seismic field data sets does not conform to the algorithmic requirements. There are several approaches to implementing 3D SRME that address the data sparseness problem. Among those approaches are pre-SRME data interpolation, on-the-fly data interpolation, zero-azimuth SRME, and true-azimuth SRME. Field data examples confirm that (1) multiples predicted using true-azimuth 3D SRME are more accurate than those using zero-azimuth 3D SRME and (2) on-the-fly interpolation produces excellent results.
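The convolutional prediction and adaptive subtraction described above can be sketched in a few lines. This is an illustrative toy only (a single frequency slice and a one-coefficient matching filter standing in for the usual short adaptive filter), not a production SRME implementation; all array names are hypothetical.

```python
import numpy as np

def predict_surface_multiples(data_fx):
    """Convolutional SRME prediction: for each frequency slice, multiply
    the data matrix (receivers x sources) with itself. Each predicted
    multiple chains two recorded events via every available surface point."""
    # data_fx: complex array of shape (n_freq, n_points, n_points)
    return np.array([p @ p for p in data_fx])

def adaptive_subtract(trace, prediction):
    """Least-squares scalar match of the predicted multiples to the trace,
    then subtraction (a single-coefficient matching filter)."""
    a = np.vdot(prediction, trace).real / np.vdot(prediction, prediction).real
    return trace - a * prediction

# Toy example: one frequency slice, 3 surface points
rng = np.random.default_rng(0)
P = rng.standard_normal((1, 3, 3)) + 1j * rng.standard_normal((1, 3, 3))
M = predict_surface_multiples(P)        # predicted multiples
d = P[0, :, 0] + 0.5 * M[0, :, 0]       # "data" = primary + scaled multiple
out = adaptive_subtract(d, M[0, :, 0])  # multiples removed by LS matching
```

By construction of the least-squares fit, the output is orthogonal to the prediction, which is the sense in which the multiples have been "subtracted".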

Geophysics ◽  
2006 ◽  
Vol 71 (3) ◽  
pp. E25-E33 ◽  
Author(s):  
Anatoly Baumstein ◽  
Mohamed T. Hadidi

The wide success of 2D surface-related multiple elimination (SRME) in attenuating complex multiples in many cases has spurred efforts to apply the method in three dimensions. However, application of 3D SRME to conventional marine data is often impeded by severe crossline aliasing characteristic of marine acquisition geometries. We propose to overcome this limitation using a dip-moveout (DMO)-based procedure consisting of the following steps: resorting the data into common offsets to improve crossline sampling, performing DMO to eliminate azimuth variations in the common-offset domain, and efficiently implementing inverse shot-record DMO to reconstruct densely sampled shot records required for 3D SRME to predict multiples correctly. We use a field data example to demonstrate that the proposed shot reconstruction procedure leads to kinematically accurate reconstruction of primaries but may not be able to simultaneously position multiples correctly. The mispositioning of multiples becomes a problem when second- and higher-order multiples must be predicted. We propose to resolve this difficulty by using a layer-stripping approach to multiple prediction. Alternatively, an approximate algorithm that relies on adaptive subtraction to compensate for inaccurate positioning of predicted multiples can be used. Application of the latter approach is illustrated with a field data example, and its performance is evaluated quantitatively through a measurement of S/N ratio improvement. We demonstrate that a DMO-based implementation of 3D SRME outperforms conventional 2D SRME and can accurately predict and attenuate complex 3D multiples.
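The performance evaluation mentioned above rests on an S/N ratio improvement measurement. The abstract does not give the exact metric, but one common form compares residual multiple energy before and after attenuation against a multiple-free reference; the sketch below uses that assumed form, with invented names.

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels from signal and noise energies."""
    return 10.0 * np.log10(np.sum(signal**2) / np.sum(noise**2))

def snr_improvement(reference, noise_before, noise_after):
    """S/N improvement (dB): how much the residual multiple energy drops,
    relative to a multiple-free reference, before vs. after attenuation."""
    return snr_db(reference, noise_after) - snr_db(reference, noise_before)

# Toy example: attenuation shrinks the multiple residual 10x in amplitude
reference = np.ones(100)
residual_before = 0.5 * np.ones(100)
residual_after = 0.05 * np.ones(100)
gain = snr_improvement(reference, residual_before, residual_after)  # 20 dB
```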


2021 ◽  
Vol 18 (4) ◽  
pp. 492-502
Author(s):  
Dongliang Zhang ◽  
Constantinos Tsingas ◽  
Ahmed A Ghamdi ◽  
Mingzhong Huang ◽  
Woodon Jeong ◽  
...  

In the last decade, a significant shift in the marine seismic acquisition business has been made where ocean bottom nodes gained a substantial market share from streamer cable configurations. Ocean bottom node (OBN) acquisition can acquire wide-azimuth seismic data over geographical areas with challenging deep and shallow bathymetries and complex subsurface regimes. When the water bottom is rugose and has significant elevation differences, OBN data processing faces a number of challenges, such as denoising of the vertical geophone, accurate wavefield separation, redatuming the sparse receiver nodes from ocean bottom to sea level, and multiple attenuation. In this work, we review a number of challenges using real OBN data illustrations. We demonstrate corresponding solutions using processing workflows comprising denoising the vertical geophones by using all four recorded nodal components, cross-ghosting the data or using the direct wave to design calibration filters for up- and down-going wavefield separation, performing one-dimensional reversible redatuming for stacking QC and multiple prediction, and designing cascaded model- and data-driven multiple elimination applications. The optimum combination of the mentioned technologies produced cleaner, higher-resolution migration images, mitigating the risk of false interpretations.
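The up- and down-going wavefield separation step can be illustrated by classic PZ summation, in which the hydrophone record and a calibrated vertical geophone record are combined. The sketch below uses a single scalar in place of the calibration filter (which, per the text, would be designed by cross-ghosting or from the direct wave), and the sign convention is an assumption, since conventions vary.

```python
import numpy as np

def pz_separation(hydrophone, geophone_z, calibration=1.0):
    """Up/down-going wavefield separation by PZ summation.
    A scalar stands in for the calibration filter applied to the geophone.
    Assumed sign convention: upgoing = (P + Z)/2, downgoing = (P - Z)/2."""
    z = calibration * geophone_z
    up = 0.5 * (hydrophone + z)
    down = 0.5 * (hydrophone - z)
    return up, down

# Toy example: an upgoing event has the same polarity on P and Z,
# while a downgoing (ghost) event flips polarity on the geophone.
up_true = np.array([0.0, 1.0, 0.0, 0.0])
down_true = np.array([0.0, 0.0, 0.0, -0.8])
P = up_true + down_true
Z = up_true - down_true
up, down = pz_separation(P, Z)
```

With an exact calibration, the summation cancels the downgoing energy in the upgoing estimate and vice versa; in practice the quality of the calibration filter controls the separation.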


Geophysics ◽  
1998 ◽  
Vol 63 (2) ◽  
pp. 772-789 ◽  
Author(s):  
William H. Dragoset ◽  
Željko Jeričević

The surface multiple attenuation algorithm discussed in this paper is a prestack inversion of a surface‐recorded, 2-D wavefield that aims to remove all orders of all surface multiples present within the wavefield. Although the algorithm requires no assumptions or modeling regarding the positions and reflection coefficients of the multiple‐causing reflectors, it does require complete internal physical consistency between primary and multiple events—something that exists only in ideal 2-D data sets. In field data sets the physical consistency between primaries and multiples is disturbed by phenomena such as variations in the acquisition wavelet, cable feathering, cross‐line dip, a finite near offset, and unequal or too coarse spatial sampling in source and receiver coordinates. Careful survey design can minimize the impact of those phenomena on surface multiple attenuation. If it is not too large, trace extrapolation can solve the finite near‐offset problem. Minor adjustments to the algorithm allow processing of data for which the source and receiver intervals differ by an integer multiple, although for those and other acquisition geometries, trace interpolation may be preferred. In the f-x domain, surface multiple attenuation can be formulated as an equation whose straightforward solution involves the inversion of a large matrix that is a function of the acquisition wavelet. Since that wavelet is generally unknown, solving this matrix equation becomes an optimization problem. Many matrix inversions are needed to estimate the acquisition wavelet that leads to the best multiple suppression, rendering the straightforward solution to the surface multiple attenuation equation quite costly. We offer two alternative approaches. In our first approach we compute an eigenvalue decomposition of the large matrix, allowing the equation to be recast so that the wavelet dependency appears in a diagonal matrix for which repetitive inversion is trivial. 
In our second approach we begin by using the surface multiple attenuation algorithm with a fixed, approximately correct wavelet to compute the surface multiple wavefield. We then filter the predicted multiples adaptively to match the actual multiples in the original wavefield and subtract these filtered multiples from the original wavefield. The second approach is relatively inexpensive and to some extent can cope with physical inconsistencies between primaries and multiples caused by field data set imperfections.
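The eigenvalue trick in the first approach can be illustrated with a simplified form of the inversion. Assuming the unknown wavelet enters as a scalar `a` per frequency (a deliberate simplification of the paper's matrix equation), each trial wavelet requires inverting `(I + aP)`; after one eigendecomposition of the data matrix `P`, only a diagonal changes with `a`, so repeated inversion becomes trivial.

```python
import numpy as np

# Since P = V diag(lam) V^-1, we have (I + a P)^-1 = V diag(1/(1 + a*lam)) V^-1:
# the expensive factorization is done once, and each trial value of `a`
# costs only a diagonal update plus two matrix multiplies.

rng = np.random.default_rng(1)
P = rng.standard_normal((4, 4))            # stand-in for a frequency-slice data matrix
lam, V = np.linalg.eig(P)
Vinv = np.linalg.inv(V)

def invert_with_eig(a):
    """(I + a P)^-1 via the precomputed eigendecomposition of P."""
    return (V * (1.0 / (1.0 + a * lam))) @ Vinv

a = 0.1                                    # one trial wavelet scalar
fast = invert_with_eig(a)
direct = np.linalg.inv(np.eye(4) + a * P)  # reference: explicit inversion
```

Scanning many candidate wavelets in the optimization then amortizes the single eigendecomposition, which is the cost saving the first approach exploits.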


Geophysics ◽  
2008 ◽  
Vol 73 (3) ◽  
pp. V29-V36 ◽  
Author(s):  
Sam T. Kaplan ◽  
Kristopher A. Innanen

We present a three-stage algorithm for adaptive separation of free-surface multiples. The free-surface multiple elimination (FSME) method requires, as deterministic prerequisites, knowledge of the source wavelet and deghosted data. In their absence, FSME provides an estimate of free-surface multiples that must be subtracted adaptively from the data. First we construct several orders from the free-surface multiple prediction formula. Next we use the full recording duration of any given data trace to construct filters that attempt to match the data and the multiple predictions. This kind of filter produces adequate phase results, but the order-by-order nature of the free-surface algorithm leaves results that remain insufficient for straightforward subtraction. Then we construct, trace by trace, a mixing model in which the mixtures are the data trace and its orders of multiple predictions. We separate the mixtures through a blind source separation technique, in particular by employing independent component analysis. One of the recovered signals is a data trace without free-surface multiples. This technique sidesteps the subtraction inherent in most adaptive subtraction methods by separating the desired signal from the free-surface multiples. The method was applied to synthetic and field data. We compared the field data to a published method and found comparable results.
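The blind source separation stage can be sketched with FastICA on a toy two-channel mixture standing in for a data trace and its multiple prediction. The sources and mixing weights below are invented for illustration; recovered components come back with arbitrary scale, sign, and order, so a correlation check picks out the primary estimate.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two statistically independent, non-Gaussian sources, mixed linearly:
rng = np.random.default_rng(2)
n = 2000
primary = np.sign(np.sin(2 * np.pi * 3 * np.linspace(0, 1, n)))  # square wave
multiple = rng.laplace(size=n)
trace = primary + 0.8 * multiple        # "recorded" trace
prediction = 0.2 * primary + multiple   # imperfect multiple prediction
X = np.column_stack([trace, prediction])

ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X)                # recovered sources (order/sign/scale ambiguous)

# Identify which recovered component corresponds to the primary
corr = [abs(np.corrcoef(S[:, i], primary)[0, 1]) for i in range(2)]
primary_est = S[:, int(np.argmax(corr))]
```

This mirrors the idea in the abstract: rather than filtering and subtracting the prediction, the trace and its predictions are treated as mixtures, and statistical independence does the separation.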


Geophysics ◽  
2005 ◽  
Vol 70 (3) ◽  
pp. V31-V43 ◽  
Author(s):  
E. J. van Dedem ◽  
D. J. Verschuur

The theory of iterative surface-related multiple elimination holds for 2D as well as 3D wavefields. The 3D prediction of surface multiples, however, requires a dense and extended distribution of sources and receivers at the surface. Since current 3D marine acquisition geometries are very sparsely sampled in the crossline direction, the direct Fresnel summation of the multiple contributions, calculated for those surface positions at which a source and a receiver are present, cannot be applied without introducing severe aliasing effects. In this newly proposed method, the regular Fresnel summation is applied to the contributions in the densely sampled inline direction, but the crossline Fresnel summation is replaced with a sparse parametric inversion. With this procedure, 3D multiples can be predicted using the available input data. The proposed method is demonstrated on a 3D synthetic data set as well as on a 3D marine data set from offshore Norway.
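The abstract does not give the parameterization of the sparse inversion, but the principle, sparsity regularization standing in for the dense crossline sampling the Fresnel summation would need, can be illustrated with a generic iterative soft-thresholding (ISTA) solver on an underdetermined toy problem. The operator and model below are invented for illustration.

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=5000):
    """Iterative soft-thresholding (ISTA) for the lasso problem
    min_x ||A x - y||^2 + lam * ||x||_1. The l1 penalty selects the
    sparse models that make an aliased, underdetermined geometry workable."""
    L = np.linalg.norm(A, 2) ** 2               # spectral norm squared
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L           # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / (2 * L), 0.0)  # shrink
    return x

# Toy example: recover a 2-spike model from fewer measurements than unknowns
rng = np.random.default_rng(3)
A = rng.standard_normal((20, 30))               # underdetermined operator
x_true = np.zeros(30)
x_true[4], x_true[17] = 1.0, -0.5
y = A @ x_true
x_hat = ista(A, y)
```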


2020 ◽  
Vol 501 (1) ◽  
pp. 994-1001
Author(s):  
Suman Sarkar ◽  
Biswajit Pandey ◽  
Snehasish Bhattacharjee

We use an information theoretic framework to analyse data from the Galaxy Zoo 2 project and study if there are any statistically significant correlations between the presence of bars in spiral galaxies and their environment. We measure the mutual information between the barredness of galaxies and their environments in a volume limited sample (Mr ≤ −21) and compare it with the same in data sets where (i) the bar/unbar classifications are randomized and (ii) the spatial distribution of galaxies are shuffled on different length scales. We assess the statistical significance of the differences in the mutual information using a t-test and find that both randomization of morphological classifications and shuffling of spatial distribution do not alter the mutual information in a statistically significant way. The non-zero mutual information between the barredness and environment arises due to the finite and discrete nature of the data set that can be entirely explained by mock Poisson distributions. We also separately compare the cumulative distribution functions of the barred and unbarred galaxies as a function of their local density. Using a Kolmogorov–Smirnov test, we find that the null hypothesis cannot be rejected even at 75 per cent confidence level. Our analysis indicates that environments do not play a significant role in the formation of a bar, which is largely determined by the internal processes of the host galaxy.
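The two statistical tools named here, mutual information between discrete labels and a two-sample Kolmogorov–Smirnov test, can be sketched directly. The toy labels below are synthetic and independent by construction, which also illustrates the abstract's point that a finite, discrete sample yields a small but non-zero mutual information even when no real correlation exists.

```python
import numpy as np
from scipy.stats import ks_2samp

def mutual_information(x, y):
    """Mutual information (in nats) between two discrete label arrays,
    estimated from the joint histogram of label co-occurrences."""
    _, xi = np.unique(x, return_inverse=True)
    _, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((xi.max() + 1, yi.max() + 1))
    np.add.at(joint, (xi, yi), 1.0)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Toy morphology labels vs. environment classes, independent by construction;
# the estimated MI is still slightly positive due to finite-sample effects.
rng = np.random.default_rng(4)
env = rng.integers(0, 3, size=5000)
barred_random = rng.integers(0, 2, size=5000)
mi_indep = mutual_information(barred_random, env)

# KS test comparing local-density samples of two populations
dens_a = rng.normal(0.0, 1.0, size=1000)
dens_b = rng.normal(0.0, 1.0, size=1000)
stat, pval = ks_2samp(dens_a, dens_b)
```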


2019 ◽  
Vol 374 (1769) ◽  
pp. 20180204 ◽  
Author(s):  
Iliana Medina ◽  
Naomi E. Langmore

The spatial distribution of hosts can be a determining factor in the reproductive success of parasites. Highly aggregated hosts may offer more opportunities for reproduction but can have better defences than isolated hosts. Here we connect macro- and micro-evolutionary processes to understand the link between host density and parasitism, using avian brood parasites as a model system. We analyse data across more than 200 host species using phylogenetic comparative analyses and quantify parasitism rate and host reproductive success in relation to spatial distribution using field data collected on one host species over 6 years. Our comparative analysis reveals that hosts occurring at intermediate densities are more likely to be parasitized than colonial or widely dispersed hosts. Correspondingly, our intraspecific field data show that individuals living at moderate densities experience higher parasitism rates than individuals at either low or high densities. Moreover, we show for the first time that the effect of host density on host reproductive success varies according to the intensity of parasitism; hosts have greater reproductive success when living at high densities if parasitism rates are high, but fare better at low densities when parasitism rates are low. We provide the first evidence of the trade-off between host density and parasitism at both macro- and micro-evolutionary scales in brood parasites. This article is part of the theme issue ‘The coevolutionary biology of brood parasitism: from mechanism to pattern’.


Geophysics ◽  
2011 ◽  
Vol 76 (6) ◽  
pp. V115-V128 ◽  
Author(s):  
Ning Wu ◽  
Yue Li ◽  
Baojun Yang

To remove surface waves from seismic records while preserving other seismic events of interest, we introduced a transform and a filter based on recent developments in image processing. The transform can be seen as a weighted Radon transform, in particular along linear trajectories. The weights in the transform are data dependent and designed to introduce large amplitude differences between surface waves and other events such that surface waves could be separated by a simple amplitude threshold. This is a key property of the filter and distinguishes this approach from others, such as conventional ones that use information on moveout ranges to apply a mask in the transform domain. Initial experiments with synthetic records and field data have demonstrated that, with the appropriate parameters, the proposed trace transform filter performs better both in terms of surface wave attenuation and reflected signal preservation than the conventional methods. Further experiments on larger data sets are needed to fully assess the method.
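The transform is described as a weighted Radon transform along linear trajectories. The sketch below implements only the plain (unweighted) slant stack that such a filter generalizes; the data-dependent weighting scheme itself is not specified in the abstract and is therefore omitted. All parameters are illustrative.

```python
import numpy as np

def linear_radon(data, dt, dx, slownesses):
    """Plain slant stack (linear tau-p transform): sum each trace along
    lines t = tau + p * x. The filter described in the text additionally
    applies data-dependent weights along each trajectory (omitted here)."""
    nt, nx = data.shape
    x = np.arange(nx) * dx
    out = np.zeros((nt, len(slownesses)))
    for ip, p in enumerate(slownesses):
        shifts = np.rint(p * x / dt).astype(int)
        for ix in range(nx):
            s = shifts[ix]
            if 0 <= s < nt:
                out[: nt - s, ip] += data[s:, ix]
    return out

# Toy example: one linear event with slowness p0 stacks coherently at p = p0
nt, nx, dt, dx = 100, 20, 0.01, 10.0
p0 = 0.002                                   # s/m
data = np.zeros((nt, nx))
for ix in range(nx):
    data[int(round(p0 * ix * dx / dt)), ix] = 1.0
ps = np.array([0.0, 0.001, 0.002, 0.003])
R = linear_radon(data, dt, dx, ps)
```

Linear events focus to localized maxima in the tau-p panel; the weighted version in the text then drives surface-wave and reflection amplitudes apart so that a simple threshold separates them.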

