Creating a 3D image from 2D data using structurally conformable interpolation: a case study from the Beagle Sub-basin, NWS, Australia

2020, Vol 60 (1), pp. 326
Author(s): Subodh Notiyal, Victoria Seesaha

2D seismic data still provides key information for companies evaluating new permits on offer or entering new basins. However, working with multi-vintage 2D data can be time-consuming for several reasons, including obtaining correct navigation, reconciling variations in physical parameters such as amplitude, time and phase between vintages, and then interpreting the 2D data itself, which often produces gridding artefacts. In a step change from the traditional use of 2D data, TGS has developed a methodology called 'structurally conformable interpolation', also known as 2Dcubed. The volume is created from available 2D migrated stacks and velocities across all vintages. The workflow includes survey matching of the different vintages, data-driven geological model building to interpolate the large distances between existing lines, and a 3D post-stack migration to minimise 2D migration artefacts. Merging these datasets successfully creates a 3D migrated image from legacy 2D data, offering better structure and continuity and increasing confidence in the interpretation. Interpreting a 3D volume is far more efficient than working with 2D data and is free from 2D artefacts. With this methodology TGS has completed a project covering a 40,000 km2 area in the Beagle Sub-basin, north-west Western Australia, using existing 2D data from over 42 different vintages. The resulting output, the 'Beagle Cube' interpolated 3D volume, has been interpreted for major regional trends and structures. The results are very consistent with the original 2D data, but with better definition of the major structures. A further study comparing interpretations of the interpolated 3D volume and a real open-file 3D survey shows excellent preservation of the structural picture in the interpolated volume; it does not match the detail of real 3D, but it gives greater confidence in regional interpretation conducted in areas without 3D coverage. This paper addresses how the interpolation methodology works stage by stage, presents the results of the final product and shows how it supports regional interpretation within a short timeframe.
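The core idea, interpolating between widely spaced 2D lines in a domain that follows structure rather than cutting across it, can be illustrated with a minimal sketch. The Python fragment below is only a conceptual illustration under simplifying assumptions (a single hypothetical guiding horizon and SciPy's generic scattered-data interpolator); it is not TGS's proprietary 2Dcubed workflow, which also involves survey matching and a 3D post-stack migration.

```python
# Conceptual sketch of structure-guided interpolation of sparse 2D-line
# samples into a regular 3D grid. The guiding horizon is a hypothetical
# regional pick used to "flatten" the data before interpolation, so that
# amplitudes are carried along structure rather than across dip.
import numpy as np
from scipy.interpolate import griddata

def conformable_interpolation(samples, horizon_t, out_x, out_y, out_dt):
    """samples: (N, 4) array of (x, y, t, amplitude) picked from 2D migrated stacks.
    horizon_t: callable giving the two-way time of a guiding horizon at (x, y).
    Returns a cube sampled on out_x, out_y and relative (flattened) time out_dt."""
    x, y, t, amp = samples.T

    # 1. Flatten: express each sample's time relative to the guiding horizon.
    dt = t - horizon_t(x, y)

    # 2. Interpolate the sparse samples onto a regular grid in the flattened
    #    (structurally conformable) domain.
    gx, gy, gdt = np.meshgrid(out_x, out_y, out_dt, indexing="ij")
    return griddata((x, y, dt), amp, (gx, gy, gdt), method="linear", fill_value=0.0)

# Toy usage: two synthetic "2D lines" sampling an event on a horizon that dips in x.
horizon = lambda x, y: 1.0 + 0.002 * x                      # hypothetical regional pick
ys = np.linspace(0.0, 100.0, 50)
line = lambda x0: np.column_stack([np.full(50, x0), ys,
                                   horizon(x0, ys) + 0.02 * np.sin(ys / 10.0),
                                   np.ones(50)])
cube = conformable_interpolation(np.vstack([line(10.0), line(90.0)]), horizon,
                                 np.linspace(0, 100, 20), np.linspace(0, 100, 20),
                                 np.linspace(-0.1, 0.1, 11))
```

Because the interpolation runs in the flattened domain, the large gaps between lines are filled along the dipping event rather than smeared horizontally, which is the behaviour a naive gridding of 2D picks typically fails to deliver.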

2016, Vol 5 (3), pp. 47-67
Author(s): Rafika Hajji, Roland Billen

The need for 3D city models grows day by day. However, 3D modeling still faces impediments that prevent it from becoming widespread, so new solutions such as collaboration should be investigated. This paper presents a new vision of collaboration applied to 3D modeling through the definition of the concept of a 3D collaborative model. It highlights the basic questions to be considered in defining and developing such a model, and then argues that the reuse of 2D data is a promising way to reconstruct 3D data and to move towards integrated 3D solutions in the future. The idea is supported by a case study demonstrating how 2D/2.5D data collected from different providers in the Walloon region of Belgium can be integrated and re-engineered to match the specifications of a 3D building model compatible with the CityGML standard.
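As a rough illustration of how 2D data can be reused to reach a CityGML-compatible building representation, the sketch below extrudes a 2D footprint with a height attribute into an LoD1-style block model. All names here (e.g. `extrude_footprint`, `Lod1Building`) are hypothetical, and no actual CityGML serialisation is shown; the paper's own model specification may differ.

```python
# Minimal sketch of "reusing 2D data for 3D": extrude a 2D building footprint
# with a known (or DSM-derived) height into an LoD1-style block, the simplest
# CityGML level of detail. Names and fields are illustrative only.
from dataclasses import dataclass
from typing import List, Tuple

Point2D = Tuple[float, float]
Point3D = Tuple[float, float, float]

@dataclass
class Lod1Building:
    ground: List[Point3D]          # footprint ring at terrain elevation
    roof: List[Point3D]            # same ring lifted by the building height
    walls: List[List[Point3D]]     # one quadrilateral per footprint edge

def extrude_footprint(footprint: List[Point2D],
                      terrain_z: float,
                      height: float) -> Lod1Building:
    """Turn a closed 2D footprint ring into a simple LoD1 block solid."""
    ground = [(x, y, terrain_z) for x, y in footprint]
    roof = [(x, y, terrain_z + height) for x, y in footprint]
    walls = []
    n = len(footprint)
    for i in range(n):
        j = (i + 1) % n
        walls.append([ground[i], ground[j], roof[j], roof[i]])
    return Lod1Building(ground, roof, walls)

# Example: a rectangular footprint from a 2D cadastre, 12 m high, terrain at 85 m.
building = extrude_footprint([(0, 0), (10, 0), (10, 6), (0, 6)],
                             terrain_z=85.0, height=12.0)
```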


First Break, 2018, Vol 36 (12), pp. 99-103
Author(s): Paolo Esestime, Milos Cvetkovic, Jonathan Rogers, Howard Nicholls, Karyna Rodriguez

2015, Vol 2015 (1), pp. 1-5
Author(s): Bee Jik Lim, Denes Vigh, Stephen Alwon, Saeeda Hydal, Martin Bayly, ...

Geophysics, 2008, Vol 73 (5), pp. VE303-VE311
Author(s): Juergen Pruessmann, Sven Frehers, Rodolfo Ballesteros, Alfredo Caballero, Gerardo Clemente

A seismic depth-imaging project starts from an initial depth model of interval velocities. From time processing of reflection seismic data, a set of stacking parameters or kinematic data attributes is usually available for initial model building at little extra effort. Two methods for initial model building from time-processing attributes are compared in this case study, using 3D seismic land data from the coast of the Gulf of Mexico. Conventional normal moveout (NMO)/dip moveout (DMO) time processing performs one-parameter stacking, with stacking velocity as the single parameter. The stacking velocity field can be converted into a depth model by the well-known vertical Dix inversion, which is very fast and robust but degrades with increasing dip. Common-reflection-surface (CRS) time processing, by contrast, is based on the multiparametric CRS stacking approach, providing several volumes of CRS stacking attributes that include the wavefield dip, or horizontal slowness. Inversion of CRS attributes by CRS tomography incorporates this dip information into depth model building. In this case study, CRS or normal-incidence-point (NIP) wave tomography is presented as a model-building link between high-resolution CRS time processing and subsequent depth processing. The CRS tomography model shows a better adaptation to the dipping subsurface structures than the Dix model and a good fit to well data. The smooth tomography model is well suited for further use in poststack and prestack depth migrations and provides a good starting point for iterative model enhancement and salt-body definition in prestack depth migration.
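The "well-known vertical Dix inversion" referred to above can be summarised in a few lines. The sketch below is a generic NumPy implementation of the classical Dix formula, v_int,n² = (v_rms,n² t_n − v_rms,n−1² t_n−1) / (t_n − t_n−1), and is not the authors' production code; as the abstract notes, the result is fast and robust but degrades as dips increase.

```python
# Vertical Dix inversion: convert a stacking (≈ RMS) velocity function picked
# at zero-offset two-way times t0 into interval velocities between the picks.
import numpy as np

def dix_interval_velocities(t0, v_rms):
    """t0: increasing two-way times (s); v_rms: stacking velocities (m/s).
    Returns one interval velocity per layer between consecutive picks."""
    t0 = np.asarray(t0, dtype=float)
    v_rms = np.asarray(v_rms, dtype=float)
    num = np.diff(v_rms**2 * t0)     # v_rms,n^2 t_n - v_rms,n-1^2 t_n-1
    den = np.diff(t0)                # t_n - t_n-1
    # Clip negative arguments that can arise from noisy picks before the sqrt.
    return np.sqrt(np.maximum(num / den, 0.0))

# Example: three picked stacking velocities.
t0 = [0.5, 1.0, 1.8]                       # s, two-way time
v_rms = [1800.0, 2100.0, 2500.0]           # m/s
print(dix_interval_velocities(t0, v_rms))  # ~[2362, 2924] m/s between the picks
```

CRS/NIP-wave tomography replaces this purely vertical, trace-by-trace conversion with an inversion that also honours the picked wavefield dips, which is why it adapts better to dipping structures.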


2020
Author(s): Edy Forlin, Giuseppe Brancatelli, Nicolò Bertone, Anna Del Ben, Riccardo Geletti

Nowadays, depth imaging of seismic data, using different migration schemes (ray-tracing or wave-equation methods) and different techniques for velocity model building (e.g. grid- or layer-based tomography, isotropic or anisotropic velocity fields), is a standard approach for characterization of the Earth's subsurface. When dealing with low-fold vintage data acquired with outdated technologies, modern processing algorithms may fail. On the other hand, reprocessing these old data with modern techniques can improve quality and resolution, allowing a more accurate interpretation of the geological features under investigation. It is important to note that much vintage data was acquired in areas with no recent surveys or that are currently subject to exploration restrictions; available vintage data can therefore be of great importance to all stakeholders involved in geophysical exploration. We present a case study on the reprocessing of low-fold marine seismic data acquired in 1971 in the Otranto Channel (Southern Adriatic Sea, Italy).

The first part of the work consists of a modern broadband processing sequence in the time domain, which allowed us to obtain a pre-stack time-migrated seismic section; in the second part, depth imaging was achieved through pre-stack depth migration (PSDM). A reliable interval P-wave velocity model was obtained using two tomographic approaches, grid tomography and layer-based tomography; for both, we carried out several iterations of the refinement loop consisting of migration, ray tomography, residual velocity analysis and velocity model update.

The results show significant improvements over the original vintage section in terms of resolution and signal-to-noise ratio. Moreover, depth imaging and velocity modeling added further information (e.g. a reliable interval P-wave velocity model, and the real geometry and thickness of the main geological units). This study confirms that applying up-to-date processing and imaging techniques to vintage data enhances and renews their geophysical and geological value at a relatively low cost.
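The refinement loop described in the second paragraph (migration, ray tomography, residual velocity analysis, velocity model update) has a simple iterative structure. The Python skeleton below illustrates that loop with placeholder callables; it is not tied to any particular PSDM or tomography software, and the stopping criterion is an assumption for illustration.

```python
# Skeleton of the velocity-model refinement loop: migrate -> analyse residual
# moveout -> tomographic model update, repeated until the common-image gathers
# are flat (kinematically correct model) or an iteration limit is reached.
def refine_velocity_model(model, gathers, migrate, pick_residuals,
                          tomographic_update, n_iterations=5, tol=0.01):
    """migrate, pick_residuals and tomographic_update are placeholders for
    whatever PSDM / ray-tomography engine is in use."""
    for _ in range(n_iterations):
        # 1. Pre-stack depth migration with the current interval velocity model.
        depth_gathers = migrate(gathers, model)

        # 2. Residual velocity / moveout analysis on the depth-migrated gathers.
        residuals = pick_residuals(depth_gathers)
        if max(abs(r) for r in residuals) < tol:
            break

        # 3. Ray tomography converts the residual moveout into a velocity
        #    update (grid-based or layer-based, as in the two approaches compared).
        model = tomographic_update(model, residuals)
    return model
```

The same loop applies whether the model is parameterised on a grid or by layers; only the `tomographic_update` step changes between the two approaches compared in the study.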

