Multiple prediction through inversion: Theoretical advancements and real data application

Geophysics, 2007, Vol 72 (2), pp. V33-V39
Author(s): Yanghua Wang

Wave-equation-based multiple attenuation methods can be divided into two distinct phases: multiple modeling and multiple subtraction. The two phases are interrelated, and both must be optimized to produce an optimal final result. The multiple prediction through inversion (MPI) scheme updates the multiple model iteratively, as is usually done in a linearized inverse problem. The scheme models the multiple wavefield without explicit knowledge of the surface and subsurface structures or of the source signature, both of which are generally unknown in seismic surveys. Nevertheless, compared with a conventional surface-related multiple attenuation method, the accuracy of the multiple model is improved both kinematically and dynamically. This is because the MPI scheme implicitly accounts for the spatial variation of the surface reflectivity, the source signature, the detector patterns and receiver ghosts, and other effects included in the so-called surface operator. When the MPI scheme is used in the first phase, it also significantly reduces the nonlinearity of the second phase, which involves attenuating multiples without removing or altering primaries. The effectiveness of the MPI scheme is demonstrated with examples on real marine seismic data.
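The "update the model iteratively, as in a linearized inverse problem" idea can be sketched with a generic Landweber-style gradient loop. The forward operator `A` below is a hypothetical stand-in, not the paper's actual multiple-modeling operator; the sketch only shows the shape of the iteration.

```python
import numpy as np

# Schematic of an iterative linearized-inversion update: given data d and a
# (stand-in) linear forward operator A, refine the model m by gradient steps
# on ||d - A m||^2. MPI applies the same pattern to the multiple wavefield.
def iterative_update(A, d, n_iter=2000, step=None):
    m = np.zeros(A.shape[1])
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # spectral-norm step ensures convergence
    for _ in range(n_iter):
        residual = d - A @ m          # data misfit
        m = m + step * (A.T @ residual)  # Landweber update
    return m

# Toy system: recover a known model from noise-free data.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 5))
m_true = np.arange(1.0, 6.0)
d = A @ m_true
m_est = iterative_update(A, d)
```

With noise-free data and a well-conditioned operator, the iteration converges to the true model.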

Geophysics, 1999, Vol 64 (6), pp. 1806-1815
Author(s): Evgeny Landa, Igor Belfer, Shemer Keydar

The problem of multiple attenuation has been solved only partially. One of the most common approaches to attenuating multiples is based on the Radon transform, and it is commonly accepted that the parabolic Radon transform can only attenuate multiples with significant moveout. We propose a new 2-D method for attenuating both surface-related and interbed multiples in the parabolic τ-p domain. The method is based on predicting a multiple model from the wavefront characteristics of the primary events. Multiple prediction comprises the following steps: 1) for a given multiple code, the angles of emergence and the radii of wavefront curvature are estimated for primary reflections at each receiver in the common-shotpoint gather; 2) the intermediate points that compose a specified multiple event are determined for each shot-receiver pair; and 3) the traveltimes of the multiples are calculated. Wavefields within time windows around the predicted traveltime curves can be treated as multiple model traces, which we use in the multiple attenuation process. Using the predicted multiple traveltimes, we can define the area in the τ-p domain that contains the main energy of the multiple event. The resolution of the parabolic Radon operator can be improved by simply multiplying each sample in τ-p space by a nonlinear semblance function. In this work, we define the multiple reject areas automatically by comparing the energy of the multiple model with that of the original input data in τ-p space. We illustrate the usefulness of this algorithm for attenuating multiples on both synthetic and real data.
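The τ-p mapping that underpins the method can be illustrated with a minimal parabolic Radon stack. The operator below is only the adjoint (a stack of the gather along trial parabolas t = τ + qx²), with made-up geometry and curvature values; an event with parabolic moveout focuses at its (τ, q) point, which is what separation in the τ-p domain exploits.

```python
import numpy as np

# Adjoint of the parabolic Radon transform: for each trial curvature q,
# stack the gather d(t, x) along t = tau + q * x^2 into a tau-q panel.
def parabolic_radon_adjoint(data, dt, offsets, qs):
    nt, nx = data.shape
    model = np.zeros((nt, len(qs)))
    for iq, q in enumerate(qs):
        for ix, x in enumerate(offsets):
            shift = int(round(q * x * x / dt))  # moveout in samples
            if 0 <= shift < nt:
                model[: nt - shift, iq] += data[shift:, ix]
    return model

# Toy gather: a single spike event with parabolic moveout (tau0, q0).
dt, nt = 0.004, 200
offsets = np.linspace(0.0, 1000.0, 40)
tau0, q0 = 0.2, 4e-7
data = np.zeros((nt, len(offsets)))
for ix, x in enumerate(offsets):
    it = int(round((tau0 + q0 * x * x) / dt))
    data[it, ix] = 1.0

qs = np.linspace(0.0, 8e-7, 41)  # q0 sits at index 20 of this grid
m = parabolic_radon_adjoint(data, dt, offsets, qs)
it_peak, iq_peak = np.unravel_index(np.argmax(m), m.shape)
```

The panel's maximum lands at the event's intercept time and curvature, i.e. at row τ0/dt = 50 and the q-grid index of q0.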


Geophysics, 2004, Vol 69 (2), pp. 547-553
Author(s): Yanghua Wang

This paper introduces a fully data-driven concept, multiple prediction through inversion (MPI), for surface-related multiple attenuation (SMA). It builds the multiple model not by spatial convolution, as in conventional SMA, but by updating the attenuated multiple wavefield from the previous iteration to generate the multiple prediction for the next, as is usual in an iterative inverse problem. Because MPI does not use spatial convolution, it minimizes the edge effect that appears in conventional SMA multiple prediction and eliminates the need to synthesize near-offset traces, which a conventional scheme requires, so it can handle seismic data sets with missing near-offset traces. The MPI concept also eliminates the need for an explicit surface operator, which conventional SMA requires and which comprises the inverse source signature and other effects. The method implicitly accounts for the spatial variation of the surface operator in multiple-model building and aims to predict multiples that are accurate not only kinematically but also in phase and amplitude.


Geophysics, 2020, Vol 85 (3), pp. V317-V328
Author(s): Jitao Ma, Guoyang Xu, Xiaohong Chen, Xiaoliu Wang, Zhenjiang Hao

The parabolic Radon transform is one of the most commonly used multiple attenuation methods in seismic data processing. When processing 3D common-depth-point gathers, the 2D Radon transform cannot account for the azimuthal variation of the seismic data, so its results are unreliable and the 3D Radon transform should be applied instead. The theory of the 3D Radon transform is first introduced. To address sparse sampling in the crossline direction, a lower-frequency constraint is introduced to reduce spatial aliasing and improve the resolution of the Radon transform. An orthogonal polynomial transform, which can fit the amplitude variations in different parabolic directions, is combined with the dealiased 3D high-resolution Radon transform to account for the amplitude variation with offset of the seismic data, so the multiple model can be estimated more accurately. Synthetic and real data examples indicate that, although our method has a higher computational cost than existing techniques, it provides better attenuation of multiples for 3D seismic data with amplitude variations.


Geophysics, 2005, Vol 70 (4), pp. V97-V107
Author(s): Antoine Guitton

Primaries (signal) and multiples (noise) often exhibit different kinematics and amplitudes (i.e., patterns) in time and space. Multidimensional prediction-error filters (PEFs) approximate these patterns to separate noise and signal in a least-squares sense. These filters are time-space variant to handle the nonstationarity of multioffset seismic data. PEFs for the primaries and multiples are estimated from pattern models. In an ideal case where accurate pattern models of both noise and signal exist, the pattern-based method recovers the primaries while preserving their amplitudes. In the more general case, the pattern model of the multiples is obtained by using the data as prediction operators. The pattern model of the primaries is obtained by convolving the noise PEFs with the input data. In this situation, 3D PEFs are preferred to separate (in prestack data) the multiples properly and to preserve the primaries. Comparisons of the proposed method with adaptive subtraction with an [Formula: see text] norm demonstrate that for a given multiple model, the pattern-based approach generally attenuates the multiples and recovers the primaries better. In addition, tests on a 2D line from the Gulf of Mexico demonstrate that the proposed technique copes fairly well with modeling inadequacies present in the multiple prediction.
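The core PEF idea, estimating a prediction filter from a pattern model and keeping only the prediction error, can be sketched in 1-D. The paper's filters are multidimensional and time-space variant; this stationary toy only shows the principle, and the lag count and trace are invented for illustration.

```python
import numpy as np

# Estimate a short prediction filter a from a pattern trace by least squares:
# trace[t] is modeled as a weighted sum of its previous n_lags samples.
def estimate_pef(trace, n_lags):
    rows = [trace[t - 1 - np.arange(n_lags)] for t in range(n_lags, len(trace))]
    A = np.array(rows)
    b = trace[n_lags:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

# Apply the PEF (1, -a_1, ..., -a_n): output is the prediction error,
# which suppresses energy matching the pattern the filter was trained on.
def apply_pef(trace, a):
    n = len(a)
    pred = np.zeros_like(trace)
    for t in range(n, len(trace)):
        pred[t] = a @ trace[t - 1 - np.arange(n)]
    return trace - pred

# A perfectly predictable (sinusoidal) "multiple" pattern is annihilated.
t = np.arange(500)
noise_pattern = np.sin(0.2 * t)
a = estimate_pef(noise_pattern, n_lags=4)
residual = apply_pef(noise_pattern, a)
```

A sinusoid satisfies a two-term recursion, so the fitted filter predicts it exactly and the residual vanishes away from the startup samples.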


2021, Vol 18 (4), pp. 492-502
Author(s): Dongliang Zhang, Constantinos Tsingas, Ahmed A Ghamdi, Mingzhong Huang, Woodon Jeong, ...

In the last decade, a significant shift has occurred in the marine seismic acquisition business, with ocean-bottom nodes gaining a substantial market share from streamer cable configurations. Ocean-bottom-node (OBN) acquisition can record wide-azimuth seismic data over geographical areas with challenging deep and shallow bathymetries and complex subsurface regimes. When the water bottom is rugose and has significant elevation differences, OBN data processing faces a number of challenges, such as denoising of the vertical geophone, accurate wavefield separation, redatuming the sparse receiver nodes from the ocean bottom to sea level, and multiple attenuation. In this work, we review a number of these challenges using real OBN data illustrations and demonstrate corresponding solutions with processing workflows that comprise denoising the vertical geophones using all four recorded nodal components, cross-ghosting the data or using the direct wave to design calibration filters for up- and down-going wavefield separation, performing one-dimensional reversible redatuming for stacking QC and multiple prediction, and designing cascaded model- and data-driven multiple elimination applications. The optimum combination of these technologies produced cleaner, higher-resolution migration images, mitigating the risk of false interpretations.


Author(s): Martin Ivarsson, Tony Gorschek

Knowledge management (KM) in software engineering and software process improvement (SPI) are challenging. Most existing KM and SPI frameworks are too expensive to deploy or do not take an organization's specific needs or knowledge into consideration. There is thus a need for scalable improvement approaches that leverage knowledge already residing in the organizations. This paper presents the Practice Selection Framework (PSF), an Experience Factory approach, enabling lightweight experience capture and use by utilizing postmortem reviews. Experiences gathered concern performance and applicability of practices used in the organization, gained from concluded projects. Project managers use these as decision support for selecting practices to use in future projects, enabling explicit knowledge transfer across projects and the development organization as a whole. Process managers use the experiences to determine if there is potential for improvement of practices used in the organization. This framework was developed and subsequently validated in industry to get feedback on usability and usefulness from practitioners. The validation consisted of tailoring and testing the framework using real data from the organization and comparing it to current practices used in the organization to ensure that the approach meets industry needs. The results from the validation are encouraging and the participants' assessment of PSF and particularly the tailoring developed was positive.


Complexity, 2018, Vol 2018, pp. 1-16
Author(s): Yiwen Zhang, Yuanyuan Zhou, Xing Guo, Jintao Wu, Qiang He, ...

The K-means algorithm is one of the ten classic algorithms in data mining and has long been studied by researchers in numerous fields. However, the number of clusters k in the K-means algorithm is not always easy to determine, and the selection of the initial centers is vulnerable to outliers. This paper proposes an improved K-means clustering algorithm called the covering K-means algorithm (C-K-means). The C-K-means algorithm not only produces efficient and accurate clustering results but also self-adaptively determines a reasonable number of clusters based on the data features. It comprises two phases: the initialization of the covering algorithm (CA) and the Lloyd iteration of K-means. The first phase executes the CA, which self-organizes and recognizes the number of clusters k based on the similarities in the data; it requires neither the number of clusters to be prespecified nor the initial centers to be manually selected. It therefore has a "blind" feature, in that k is not preselected. The second phase performs the Lloyd iteration based on the results of the first phase, so the C-K-means algorithm combines the advantages of CA and K-means. Experiments carried out on the Spark platform verify the good scalability of the C-K-means algorithm, which can effectively solve the problem of large-scale data clustering. Extensive experiments on real data sets show that the C-K-means algorithm outperforms existing algorithms in both accuracy and efficiency under both sequential and parallel conditions.
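The two-phase structure can be sketched with a simple radius-based covering pass followed by standard Lloyd iterations. The radius rule below is a hypothetical stand-in for the paper's covering algorithm; it only illustrates how a covering-style first phase can pick k and the initial centers automatically.

```python
import numpy as np

# Phase 1 (stand-in for CA): scan the points; any point not within `radius`
# of an existing center opens a new cluster. k emerges from the data.
def covering_init(X, radius):
    centers = []
    for x in X:
        if all(np.linalg.norm(x - c) > radius for c in centers):
            centers.append(x)
    return np.array(centers)

# Phase 2: standard Lloyd iteration (assign to nearest center, recompute means).
def lloyd(X, centers, n_iter=20):
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return centers, labels

# Two well-separated 2-D blobs: the covering pass should find k = 2.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
               rng.normal(5.0, 0.3, (50, 2))])
centers = covering_init(X, radius=2.0)
centers, labels = lloyd(X, centers)
```

No k was prespecified and no centers were picked by hand, which is the "blind" feature the abstract describes.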


Kybernetes, 2018, Vol 47 (8), pp. 1664-1686
Author(s): Cihan Çetinkaya, Mehmet Kabak, Mehmet Erbaş, Eren Özceylan

Purpose: The aim of this study is to evaluate potential geographic locations for ecotourism activities and to select the best one among the alternatives.
Design/methodology/approach: The proposed model consists of four sequential phases. In the first phase, geographic criteria are identified from the existing literature, and data are gathered using GIS. In the second phase, weighing all criteria equally, alternative locations are determined using GIS. In the third phase, the identified criteria are weighted by various stakeholders of potential ecotourism sites using the analytic hierarchy process (AHP). In the fourth phase, the PROMETHEE method is applied to determine the best alternative based on the weighted criteria.
Findings: A framework comprising four sequential steps is proposed. Using real data from the Black Sea region of Turkey, the authors test the applicability of the evaluation approach and compare the best alternatives obtained by the proposed method across nine cities in the region. The west of Sinop, the east of Artvin, and the south of the Black Sea region are found to be very suitable locations for ecotourism.
Research limitations/implications: One limitation of the study is the number of criteria included; another is the use of deterministic parameters, which do not cope with uncertainty. Further research could determine optimum locations for other types of tourism, e.g., religious tourism, hunting tourism, and golf tourism, for effective tourism planning.
Practical implications: The proposed approach can be applied to any area covered by the considered criteria; it has been tested on nine cities in the Black Sea region of Turkey.
Social implications: Using the proposed approach, decision-makers can identify locations that support environmentally responsible travel to natural areas, promote conservation, keep visitor impact low, and provide beneficial socioeconomic involvement for local people.
Originality/value: To the best of the authors' knowledge, this is the first study to apply a GIS-based multi-criteria decision-making approach to ecotourism site selection.
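The AHP weighting step (phase three) can be sketched as the standard principal-eigenvector computation on a pairwise-comparison matrix. The comparison values below are invented for illustration; they are not the study's actual criteria or judgments.

```python
import numpy as np

# AHP: criteria weights are the normalized principal eigenvector of the
# pairwise-comparison matrix (entry [i, j] = how much more important
# criterion i is than criterion j).
def ahp_weights(pairwise):
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.argmax(vals.real)           # largest eigenvalue
    w = np.abs(vecs[:, principal].real)        # fix the eigenvector sign
    return w / w.sum()                         # normalize to sum to 1

# Three hypothetical criteria: A is 3x as important as B, 5x as important as C.
P = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w = ahp_weights(P)
```

The resulting weights sum to one and preserve the stated ordering of importance, and would then feed PROMETHEE's outranking step.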


Symmetry, 2021, Vol 13 (11), pp. 2164
Author(s): Héctor J. Gómez, Diego I. Gallardo, Karol I. Santoro

In this paper, we present an extension of the truncated positive normal (TPN) distribution to model positive data with high kurtosis. The new model is defined as the quotient of two random variables: a TPN-distributed variable (numerator) and a power of a standard uniform variable (denominator). The resulting model has greater kurtosis than the TPN distribution. We study some properties of the distribution, such as its moments, asymmetry, and kurtosis. Parameter estimation is based on the method of moments, and maximum likelihood estimation uses the expectation-maximization algorithm. We perform simulation studies to assess parameter recovery and illustrate the model with a real data application related to body weight. The computational implementation of this work is included in the tpn package for R.
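The quotient construction can be sketched directly by simulation. Here the TPN numerator is drawn by simple rejection sampling and the parameter values are illustrative; dividing by U**q (with U standard uniform) can only inflate each draw, which is what thickens the right tail.

```python
import numpy as np

# Truncated positive normal via rejection: draw N(mu, sigma) and keep
# only the positive values.
def sample_tpn(mu, sigma, size, rng):
    out = np.empty(0)
    while out.size < size:
        x = rng.normal(mu, sigma, size)
        out = np.concatenate([out, x[x > 0]])
    return out[:size]

rng = np.random.default_rng(2)
n = 100_000
x = sample_tpn(mu=1.0, sigma=1.0, size=n, rng=rng)  # TPN numerator
u = rng.uniform(size=n)                             # standard uniform denominator
q = 0.2                                             # illustrative power
z = x / u**q                                        # the quotient model
```

Since 0 < U < 1 implies U**q < 1, every z exceeds its x, so the quotient stays positive while its right tail grows heavier than the TPN's.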

