A three‐dimensional perspective on two‐dimensional dip moveout

Geophysics ◽  
1988 ◽  
Vol 53 (5) ◽  
pp. 604-610 ◽  
Author(s):  
David Forel ◽  
Gerald H. F. Gardner

Prestack migration in a constant‐velocity medium spreads an impulse on any trace over an ellipsoidal surface with foci at the source and receiver positions for that trace. The same ellipsoid can be obtained by migrating a family of zero‐offset traces placed along the line segment from the source to the receiver. The spheres generated by migrating the zero‐offset impulses are arranged to be tangent to the ellipsoid. The resulting nonstandard moveout equation is equivalent to two consecutive moveouts, the first requiring no knowledge of velocity and the second being standard normal moveout (NMO). The first of these is referred to as dip moveout (DMO). Because this DMO-NMO algorithm converts any trace to an equivalent set of zero‐offset traces, it can be applied to any ensemble of traces no matter what the variations in azimuth and offset may be. In particular, this three‐dimensional perspective on DMO can be used with multifold inline data. Then it becomes clear that velocity‐independent DMO operates on radial‐trace profiles and not on constant‐offset profiles. Inline data over a three‐dimensional subsurface will be properly stacked by using DMO followed by NMO.
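In the 2-D constant-velocity case, the ellipsoid described above reduces to an ellipse with foci at the source and receiver, and its defining property (source-to-point-to-receiver path length equals velocity times two-way time) is easy to verify. A minimal sketch with assumed parameters, not the authors' code:

```python
import math

def migration_ellipse_point(src_x, rcv_x, t, v, theta):
    """Parametric point on the constant-velocity prestack migration
    ellipse (2-D section of the ellipsoid described above), with foci
    at the source and receiver positions."""
    cx = 0.5 * (src_x + rcv_x)      # midpoint of source and receiver
    h = 0.5 * abs(rcv_x - src_x)    # half-offset
    a = 0.5 * v * t                 # semi-major axis: total path = v*t
    b = math.sqrt(a * a - h * h)    # semi-minor axis (depth direction)
    return cx + a * math.cos(theta), b * math.sin(theta)

# Defining property of the ellipse: the source-to-point-to-receiver
# path length equals v * t at every point on it.
src, rcv, t, v = 0.0, 1000.0, 1.0, 2000.0
for th in (0.3, 1.1, 2.0):
    x, z = migration_ellipse_point(src, rcv, t, v, th)
    path = math.hypot(x - src, z) + math.hypot(x - rcv, z)
    assert abs(path - v * t) < 1e-6
```

The zero-offset construction in the abstract amounts to replacing this ellipse by a family of circles centered along the source-receiver segment, each tangent to it from the inside.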

Geophysics ◽  
1984 ◽  
Vol 49 (10) ◽  
pp. 1806-1807 ◽  
Author(s):  
Th. Krey

Quite recently Peter Hubral published a short note in which he described a special, very perspicuous stacking method which, starting from the records of a line survey, produces true amplitude reflections for “normal waves,” as defined in his Introduction. In the following I want to supplement Hubral’s note by showing the analytical connection with Hubral’s earlier paper (Hubral, 1983) and the additional short note by Krey (Krey, 1983). My present investigation is two‐dimensional (2-D), as is that in the subject paper; an extension to the three‐dimensional (3-D) case is conceptually easy for the following analytical derivation as well as for Hubral’s note. Besides a basic confirmation of Hubral’s findings, I shall show that the result of Hubral’s method still has to be multiplied by [Formula: see text] in the 2-D case and by [Formula: see text] in the 3-D case in order to obtain the precise result. Here ω is the frequency. Moreover, the angle of emergence α of the zero‐offset raypath has to be taken into account.


Geophysics ◽  
1990 ◽  
Vol 55 (1) ◽  
pp. 10-19 ◽  
Author(s):  
Martin Karrenbach

Three‐dimensional migration of zero‐offset data using a velocity varying with depth can be performed in one pass using Fourier transforms of time slices. The migration process is carried out entirely in the two‐dimensional spatial Fourier domain. The algorithm consecutively filters and adds time slices of the 3-D data volume in a way that is equivalent to summing energy over the diffraction surface of a point scatterer. The partial energy being distributed along a circle in a time slice is properly added in each summation step. Time‐slice migration is based on an integral solution of the acoustic wave equation known as the “Kirchhoff integral.” The wavelet shape in a 3-D data volume is preserved throughout the entire migration process. The frequency characteristics are maintained by summing weighted differences between time slices instead of summing the time slices themselves. Automatic weighting is achieved by time slicing at equal increments of diffraction radius. Tapering the summation operator reduces effects introduced by limiting the summation window. Time‐slice migration preserves the frequency content of a 3-D data volume during summation in a natural way. Since the migration scheme assumes a constant velocity within the entire time slice, it is a local process in time which migrates a 3-D data volume with a constant velocity or with a velocity which varies with depth. The migration algorithm is applied to numerical and physical model data. This method is especially suitable for a migration of a targeted subset of the 3-D data volume.
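The circle geometry behind the slice-by-slice summation can be sketched for a single point scatterer in a constant-velocity medium. Parameters are assumed for illustration; this is not the paper's implementation:

```python
import math

def slice_radius(t, z0, v):
    """Radius of the circle along which the diffraction surface of a
    point scatterer at depth z0 intersects the time slice at two-way
    time t, in a constant-velocity medium (None above the apex)."""
    d = (0.5 * v * t) ** 2 - z0 ** 2
    return math.sqrt(d) if d >= 0.0 else None

# Slicing at equal increments of diffraction radius (the automatic
# weighting mentioned above) means choosing the slice times from the
# radii rather than the other way round.
z0, v = 500.0, 2000.0
radii = [0.0, 100.0, 200.0, 300.0]
times = [2.0 * math.sqrt(z0 ** 2 + r ** 2) / v for r in radii]
recovered = [slice_radius(t, z0, v) for t in times]
assert all(abs(r - q) < 1e-9 for r, q in zip(radii, recovered))
```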


Aerospace ◽  
2021 ◽  
Vol 8 (8) ◽  
pp. 231 ◽  
Author(s):  
Zhanyuan Jiang ◽  
Jianquan Ge ◽  
Qiangqiang Xu ◽  
Tao Yang

The paper proposes a two-dimensional impact time control cooperative guidance law under constant velocity and a three-dimensional impact time control cooperative guidance law under time-varying velocity, both of which improve the penetration ability and combat effectiveness of multi-missile systems and adapt to complex and variable future warfare. First, a more accurate time-to-go estimation method is proposed, and based on it a modified proportional navigation guidance (MPNG) law with an impact time constraint is designed, one that remains effective even when the initial leading angle is zero. Second, adopting a cooperative guidance architecture with centralized coordination, using the MPNG law as the local guidance law and the desired impact time as the coordination variable, a two-dimensional impact time control cooperative guidance law under constant velocity is designed. Finally, a method of solving for the velocity is derived and the analytic function of velocity with respect to time is given; from these, a three-dimensional impact time control cooperative guidance law under time-varying velocity, based on the desired impact time, is designed. Numerical simulation results verify the feasibility and applicability of the methods.
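For orientation, classical proportional navigation and the standard small-lead-angle time-to-go estimate can be sketched as follows. This is the textbook baseline, not the paper's refined estimator or MPNG law, and all parameter values are assumed:

```python
import math

def time_to_go(r, v, lead_angle, n=3.0):
    """Textbook small-lead-angle time-to-go estimate for proportional
    navigation: t_go ~ (R/V) * (1 + lead^2 / (2*(2N - 1))).
    R = range, V = speed, N = navigation gain."""
    return (r / v) * (1.0 + lead_angle ** 2 / (2.0 * (2.0 * n - 1.0)))

def pn_accel(n, closing_speed, los_rate):
    """Classical proportional navigation command: a = N * Vc * lambda_dot,
    with Vc the closing speed and lambda_dot the line-of-sight rate."""
    return n * closing_speed * los_rate

# With zero leading angle the estimate reduces to range over speed,
# the degenerate case the MPNG law above is designed to handle.
assert abs(time_to_go(10000.0, 500.0, 0.0) - 20.0) < 1e-12
```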


Geophysics ◽  
1993 ◽  
Vol 58 (7) ◽  
pp. 1030-1041 ◽  
Author(s):  
Hans A. Meinardus ◽  
Karl L. Schleicher

The standard seismic imaging sequence consists of normal moveout (NMO), dip moveout (DMO), stack, and zero‐offset migration. Conventional NMO and DMO processes remove much of the effect of offset from prestack data, but the constant velocity assumption in most DMO algorithms can compromise the ultimate results. Time‐variant DMO avoids the constant velocity assumption to create better stacks, especially for steeply dipping events. Time‐variant DMO can be implemented as a 3-D, f-k domain process using the dip decomposition method. Prestack data are moved out with a set of NMO velocities corresponding to discrete values of in‐line and crossline dips. The dip‐dependent NMO velocity is computed to remove the trace offset and azimuth dependence of event times for an arbitrary velocity function of depth. After stacking the moved out CMP gathers, a three‐dimensional (3-D) dip filter is applied to select the particular in‐line and crossline dip. The final zero‐offset image is obtained by summing all the dip‐filtered sections. This process generates a saddle‐shaped 3-D impulse response for a constant velocity gradient. The impulse response is more complicated for a general depth‐variable velocity function, where the response exhibits secondary branches, or triplications, at steeper dips. These complicated impulse responses, including amplitude and phase effects, are implicitly produced by the f-k process. The dip‐decomposition method of 3-D time‐variant DMO is an efficient and accurate process to correct for the effect of offset in the presence of an arbitrary velocity variation with depth. The impulse response of this process implicitly contains complex features like a 3-D saddle shape, triplications, amplitude, and phase. Field data from the Gulf of Mexico shows significant improvement on a steep salt flank event.
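For intuition about the dip-dependent NMO velocity, the constant-velocity special case is Levin's relation, in which a reflector dipping at angle θ stacks best with velocity v / cos θ. The paper's algorithm generalizes this to an arbitrary v(z) and to combined in-line and crossline dips; the sketch below covers only the constant-velocity case, with assumed parameters:

```python
import math

def dip_nmo_velocity(v, dip):
    """Constant-velocity dip-dependent NMO velocity (Levin's relation):
    a reflector dipping at angle `dip` (radians) stacks best with
    v / cos(dip).  The paper generalizes this to depth-variable v."""
    return v / math.cos(dip)

def nmo_time(t0, offset, v_nmo):
    """Hyperbolic moveout using the dip-corrected stacking velocity."""
    return math.sqrt(t0 ** 2 + (offset / v_nmo) ** 2)

# A 60-degree dip doubles the stacking velocity in this medium.
v = 2000.0
assert abs(dip_nmo_velocity(v, math.pi / 3) - 2.0 * v) < 1e-9
```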


Geophysics ◽  
1999 ◽  
Vol 64 (3) ◽  
pp. 942-953 ◽  
Author(s):  
Gijs J. O. Vermeer

The theory of spatial resolution has been well‐established in various papers dealing with inversion and prestack migration. Nevertheless, there is a continuing flow of papers being published on spatial resolution, in particular in relation to spatial sampling. This paper continues the discussion, and deals with various factors affecting spatial resolution. The theoretical best‐possible resolution can be predicted using Beylkin’s formula. This formula gives answers on the effect on resolution of frequency, aperture, offset, and acquisition geometry. In this paper, these factors are investigated using a single diffractor in a constant‐velocity medium. Some simple resolution formulas are derived for 2-D zero‐offset data. These formulas are similar to formulas derived elsewhere in a more heuristic way, and which are in common use in the industry. The formulas are extended to 2-D common‐offset data. The width of the spatial wavelet resulting from migration of the diffraction event is compared with the resolution predicted from Beylkin’s formula for various 3-D single‐fold data sets. The measured widths confirm the theoretical prediction that zero‐offset data produce the best possible resolution and 3-D shots the worst, with common‐offset gathers and cross‐spreads scoring intermediate. The effects of sampling and fold cannot be derived directly from Beylkin’s formula; these effects are analyzed by looking at the migration noise rather than at the width of the spatial wavelet. Random coarse sampling may produce somewhat less migration noise than regular coarse sampling, though it cannot be as good as regular dense sampling. The bin‐fractionation technique (which achieves finer midpoint sampling without changing the station spacings) does not achieve higher resolution than conventional sampling. Generally speaking, increasing fold does not improve the theoretically best possible resolution. However, as noise always has a detrimental effect on the resolvability of events, fold—by reducing noise—will improve resolution in practice. This also applies to migration noise as a product of coarse sampling.
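One widely quoted estimate of the kind the paper derives for 2-D zero-offset data ties migrated lateral resolution to frequency and migration aperture. The exact constants differ between derivations, so treat this sketch as indicative only:

```python
import math

def lateral_resolution(v, f, half_aperture):
    """Commonly used migrated lateral-resolution estimate for 2-D
    zero-offset data: dx ~ v / (4 * f * sin(theta)), where theta is
    the migration aperture half-angle.  Constants vary by derivation;
    this is an indicative sketch, not the paper's exact formula."""
    return v / (4.0 * f * math.sin(half_aperture))

# Higher frequency and wider aperture both sharpen the image.
assert lateral_resolution(2000.0, 60.0, math.pi / 3) < \
       lateral_resolution(2000.0, 30.0, math.pi / 6)
```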


Geophysics ◽  
1993 ◽  
Vol 58 (1) ◽  
pp. 47-66 ◽  
Author(s):  
James L. Black ◽  
Karl L. Schleicher ◽  
Lin Zhang

True‐amplitude seismic imaging produces a three‐dimensional (3-D) migrated section in which the peak amplitude of each migrated event is proportional to the reflectivity. For a constant‐velocity medium, the standard imaging sequence consisting of spherical‐divergence correction, normal moveout (NMO), dip moveout (DMO), and zero‐offset migration produces a true‐amplitude image if the DMO step is done correctly. There are two equivalent ways to derive the correct amplitude‐preserving DMO. The first is to improve upon Hale’s derivation of F-K DMO by taking the reflection‐point smear properly into account. This yields a new Jacobian that simply replaces the Jacobian in Hale’s method. The second way is to calibrate the filter that appears in integral DMO so as to preserve the amplitude of an arbitrary 3-D dipping reflector. This latter method is based upon the 3-D acoustic wave equation with constant velocity. The resulting filter amounts to a simple modification of existing integral algorithms. The new F-K and integral DMO algorithms resulting from these two approaches turn out to be equivalent, producing identical outputs when implemented in nonaliased fashion. As dip increases, their outputs become progressively larger than the outputs of either Hale’s F-K method or the integral method generally associated with Deregowski and Rocca. This trend can be observed both on model data and field data. There are two additional results of this analysis, both following from the wave‐equation calibration on an arbitrary 3-D dipping reflector. The first is a proof that the entire imaging sequence (not just the DMO part) is true‐amplitude when the DMO is done correctly. The second result is a handy formula showing exactly how the zero‐phase wavelet on the final migrated image is a stretched version of the zero‐phase deconvolved source wavelet. This result quantitatively expresses the loss of vertical resolution due to dip and offset.


Author(s):  
H.A. Cohen ◽  
T.W. Jeng ◽  
W. Chiu

This tutorial will discuss the methodology of low dose electron diffraction and imaging of crystalline biological objects, the problems of data interpretation for two-dimensional projected density maps of glucose embedded protein crystals, the factors to be considered in combining tilt data from three-dimensional crystals, and finally, the prospects of achieving a high resolution three-dimensional density map of a biological crystal. This methodology will be illustrated using two proteins under investigation in our laboratory, the T4 DNA helix destabilizing protein gp32*I and the crotoxin complex crystal.


Author(s):  
B. Ralph ◽  
A.R. Jones

In all fields of microscopy there is an increasing interest in the quantification of microstructure. This interest may stem from a desire to establish quality control parameters or may have a more fundamental requirement involving the derivation of parameters which partially or completely define the three dimensional nature of the microstructure. This latter category of study may arise from an interest in the evolution of microstructure or from a desire to generate detailed property/microstructure relationships. In the more fundamental studies some convolution of two-dimensional data into the third dimension (stereological analysis) will be necessary.

In some cases the two-dimensional data may be acquired relatively easily without recourse to automatic data collection and, further, it may prove possible to perform the data reduction and analysis relatively easily. In such cases the only recourse to machines may well be in establishing the statistical confidence of the resultant data. Such relatively straightforward studies tend to result from acquiring data on the whole assemblage of features making up the microstructure. In this field data mode, when parameters such as phase volume fraction, mean size, etc. are sought, the main case for resorting to automation is in order to perform repetitive analyses, since each analysis is relatively easily performed.
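The simplest example of such a stereological convolution is the classical point count, in which a fraction measured on the 2-D section estimates a 3-D quantity directly. A sketch with made-up counts, illustrating the principle rather than any particular instrument:

```python
def point_count_volume_fraction(hits, total_points):
    """Classical stereological point count (Delesse/Glagolev principle):
    the fraction of test-grid points landing on a phase in the 2-D
    section estimates that phase's volume fraction, V_V = P_P."""
    return hits / total_points

# Made-up counts for illustration: 155 of 500 grid points on the phase.
assert abs(point_count_volume_fraction(155, 500) - 0.31) < 1e-12
```

Establishing the statistical confidence of such an estimate, as the passage notes, is where automation earns its keep: the counting itself is trivial, but the repetition across many fields is not.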


Author(s):  
Yu Liu

The image obtained in a transmission electron microscope is the two-dimensional projection of a three-dimensional (3D) object. The 3D reconstruction of the object can be calculated from a series of projections by back-projection, but this algorithm assumes that the image is linearly related to a line integral of the object function. However, there are two kinds of contrast in electron microscopy, scattering and phase contrast, of which only the latter is linear with the optical density (OD) in the micrograph. Therefore the OD can be used as a measure of the projection only for thin specimens where phase contrast dominates the image. For thick specimens, where scattering contrast predominates, an exponential absorption law holds, and a logarithm of OD must be used. However, for large thicknesses, the simple exponential law might break down due to multiple and inelastic scattering.
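The contrast rule stated above can be written as a small helper that selects the appropriate measure of the projection before back-projection. Calibration constants and the multiple-scattering breakdown are omitted from this sketch:

```python
import math

def projection_value(od, thick_specimen):
    """Measure of the line integral used by back-projection, following
    the contrast rule stated above: for thin specimens (phase contrast)
    the optical density itself is linear in the projection; for thick
    specimens (scattering contrast, exponential absorption law) the
    logarithm of the optical density is used instead."""
    return math.log(od) if thick_specimen else od

# Thin specimen: the optical density passes through unchanged.
assert projection_value(0.5, False) == 0.5
# Thick specimen: the logarithm undoes the exponential absorption law.
assert abs(projection_value(math.e, True) - 1.0) < 1e-12
```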

