Processing of wide‐angle Vibroseis data

Geophysics ◽  
1988 ◽  
Vol 53 (10) ◽  
pp. 1303-1310 ◽  
Author(s):  
F. Wenzel

In 1984, seismic wide-angle Vibroseis experiments across the Rhine graben in eastern France/southwestern Germany were carried out as a joint French/German venture. Signals generated by French vibrators were recorded on the eastern flank of the graben in the Black Forest. With a 200-channel array spread over 16 km (group spacing: 80 m), data were recorded at offsets between 66 and 82 km. The data are characterized by a low signal-to-noise ratio (S/N) and distortions of seismic phases caused by strong topographic variations. For wide-angle records, conventional static corrections based on the assumption of vertical travelpaths are no longer appropriate; ray parameter-dependent time delays should be used to correctly eliminate topographic and near-surface influences on the data. In addition, delineation of reflector segments requires an increase in S/N. A two-dimensional filter based on forward and inverse slant stacking was designed to handle the time-delay and S/N problems simultaneously. Stacking along a suite of ray parameters allows the use of static corrections that depend on the angle of emergence of the seismic arrivals, and weighting data by their coherence emphasizes spatially correlatable phases. Both procedures significantly increase S/N.
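The forward slant stack (tau-p transform) at the core of such a filter can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' code: the gather, offsets, and the wraparound shift (which stands in for proper zero-padding) are all assumptions, and the survey's 80 m group spacing is reused only to set the offset axis.

```python
import numpy as np

def slant_stack(data, x, p_values, dt):
    """Forward slant stack (tau-p transform) of a 2D gather.
    data: (ntraces, nt) array, x: offsets in m, p_values: ray parameters in s/m."""
    ntr, nt = data.shape
    tau_p = np.zeros((len(p_values), nt))
    for ip, p in enumerate(p_values):
        for itr in range(ntr):
            shift = int(round(p * x[itr] / dt))      # moveout p*x, in samples
            tau_p[ip] += np.roll(data[itr], -shift)  # wraparound in lieu of padding
    return tau_p / ntr

# synthetic gather: one linear event with ray parameter p0 = 2e-4 s/m
dt, nt, ntr = 0.004, 256, 48
x = np.arange(ntr) * 80.0                  # 80 m group spacing, as in the survey
p0 = 2e-4
data = np.zeros((ntr, nt))
for itr in range(ntr):
    data[itr, int(round(p0 * x[itr] / dt)) + 20] = 1.0

p_axis = np.linspace(0.0, 4e-4, 41)
tp = slant_stack(data, x, p_axis, dt)
best_p = p_axis[np.argmax(tp.max(axis=1))]  # stack is strongest at the event's p
```

In the tau-p domain, each trace corresponds to a single ray parameter, so an emergence-angle-dependent static (rather than a vertical-raypath static) can be applied per p before the inverse transform.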

Geophysics ◽  
1989 ◽  
Vol 54 (11) ◽  
pp. 1384-1396
Author(s):  
Howard Renick ◽  
R. D. Gunn

The Triangle Ranch Headquarters Canyon Reef field is long and narrow and lies in an area where near-surface evaporites and associated collapse features degrade seismic data quality and interpretational reliability. Below this disturbed section, the rock structure is similar to the deeper Canyon Reef structure. The shallow structure exhibits very gentle relief and can be mapped by drilling shallow holes on a broad grid. The shallow structural interpretation provides a valuable reference datum for mapping, as well as a basis for planning a seismic program. By computing an isopach between the variable seismic datum and the Canyon Reef reflection and subtracting the isopach map from the datum map, we map Canyon Reef structure; the datum map is extrapolated from the shallow core holes. In the area, near-surface complexities produce seismic noise and severe static variations, so the crux of the exploration problem is to balance seismic signal-to-noise ratio against geologic resolution. Adequate geologic resolution is impossible without understanding the exploration target, and as we understood the target better, we modified our seismic acquisition parameters. Studying examples of data with a high signal-to-noise ratio but poor resolution, and examples of better-defined structure on apparently noisier data, led us to design an acquisition program for resolution and to reduce noise with arithmetic processes that do not reduce structural resolution. Combining acquisition and processing parameters for optimum structural resolution with the isopach mapping method has improved wildcat success from about 1 in 20 to better than 1 in 2. It has also enabled an 80 percent development drilling success ratio, as opposed to slightly over 50 percent in all previous drilling.
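The isopach mapping arithmetic is simple grid subtraction: datum elevation minus datum-to-reef interval thickness gives the reef structure. A minimal sketch with purely illustrative numbers (the grids and values are invented, not from the paper):

```python
import numpy as np

# Hypothetical 2x2 map grids, values in feet (illustrative only).
datum_map = np.array([[1200.0, 1210.0],
                      [1195.0, 1205.0]])   # shallow marker elevation from core holes
isopach = np.array([[4300.0, 4290.0],
                    [4310.0, 4295.0]])     # datum-to-Canyon-Reef interval thickness

# Canyon Reef structure = datum elevation minus interval thickness
canyon_reef_structure = datum_map - isopach
```

Because the shallow datum is mapped from core holes rather than seismic data, static errors in the seismic interval map do not contaminate the reference surface.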


Electronics ◽  
2019 ◽  
Vol 8 (5) ◽  
pp. 573 ◽  
Author(s):  
Zhuo Jia ◽  
Sixin Liu ◽  
Ling Zhang ◽  
Bin Hu ◽  
Jianmin Zhang

Knowledge of the subsurface structure not only provides useful information on lunar geology, but can also quantify potential lunar resources for human beings. The dual-frequency lunar penetrating radar (LPR) aboard the Yutu rover offers a special opportunity to understand the subsurface structure to a depth of several hundred meters using a low-frequency channel (channel 1), as well as the near-surface stratigraphy of the regolith using high-frequency observations (channel 2). The channel 1 data of the LPR have a very low signal-to-noise ratio, so the extraction of weak signals from these data is a problem worth exploring. In this article, we propose a weak-signal extraction method based on local correlation to analyze the LPR CH-1 data and thereby study the lunar regolith structure. First, we build a pre-processing workflow to increase the signal-to-noise ratio (SNR). Second, we apply the K-L transform to separate the horizontal signal and then use the seislet transform (ST) to preserve the continuous signal. Then, the local correlation map is calculated from the two denoising results, and a time-space dependent weighting operator is constructed to suppress the noise residuals. The weak signal after noise suppression may provide a new reference for subsequent data interpretation. Finally, in combination with the regional geology and previous research, we provide some speculative interpretations of the LPR CH-1 data.
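The local-correlation weighting step can be sketched as follows: compute a windowed correlation between the two denoised sections and use it (clipped to [0, 1]) as a time-space weighting operator. This is a simplified stand-in, assuming a plain boxcar window; the synthetic section and noise levels are invented, and the K-L and seislet denoising steps themselves are not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_correlation(a, b, win=9):
    """Windowed (local) correlation of two 2D sections, boxcar smoothing."""
    num = uniform_filter(a * b, win)
    den = np.sqrt(uniform_filter(a ** 2, win) * uniform_filter(b ** 2, win)) + 1e-12
    return num / den

# two hypothetical denoising results of the same section (signal + different noise)
rng = np.random.default_rng(0)
signal = np.outer(np.hanning(64), np.hanning(64))
a = signal + 0.05 * rng.standard_normal((64, 64))
b = signal + 0.05 * rng.standard_normal((64, 64))

c = local_correlation(a, b)
weights = np.clip(c, 0.0, 1.0)          # time-space dependent weighting operator
weighted = weights * 0.5 * (a + b)      # keep coherent energy, damp residual noise
```

Where the two denoising results agree (coherent signal), the correlation is close to 1 and the data pass; where they disagree (residual noise), the weight drops toward zero.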


1996 ◽  
Vol 06 (06) ◽  
pp. 581-591
Author(s):  
MING JIAN ◽  
ALEX C. KOT ◽  
MENG H. ER

In this paper, we address the problem of acoustic source localization using a five-element microphone array system. Time delay estimation of the signal arrival for any given pair of microphones using a least-squares technique is proposed. These estimated time delays are used in a geometric location method to determine the location of the acoustic source, which, in our case, is the position of the talker of interest. Computer simulations are carried out in a teleconferencing-room scenario. It is shown that the location of the acoustic source can be estimated effectively when the signal-to-noise ratio is larger than 20 dB in a highly reverberant environment.
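The pairwise time-delay step can be illustrated with the standard cross-correlation estimator (shown here as a common baseline; the paper's own estimator is least-squares-based, and the signals below are synthetic, not from the simulations):

```python
import numpy as np

def estimate_delay(x, y, fs):
    """Delay of y relative to x, in seconds, from the cross-correlation peak."""
    corr = np.correlate(y, x, mode='full')
    lag = np.argmax(corr) - (len(x) - 1)   # lag in samples; positive = y lags x
    return lag / fs

fs = 16000                                  # 16 kHz sampling, typical for speech
rng = np.random.default_rng(1)
s = rng.standard_normal(4096)               # white stand-in for a talker signal
true_lag = 25                               # samples of inter-microphone delay
x = s
y = np.roll(s, true_lag)                    # delayed copy (circular, for the sketch)
d = estimate_delay(x, y, fs)                # recovers 25 / 16000 s
```

With delays for several microphone pairs, the source position follows from intersecting the corresponding hyperbolic loci, e.g. via a linearized least-squares solve.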


2021 ◽  
Vol 40 (6) ◽  
pp. 460-463
Author(s):  
Lionel J. Woog ◽  
Anthony Vassiliou ◽  
Rodney Stromberg

In seismic data processing, static corrections for near-surface velocities are derived from first-break picking. The quality of the static corrections is paramount to developing an accurate shallow velocity model, a model that in turn greatly impacts the subsequent seismic processing steps. Because even small errors in first-break picking can greatly affect seismic velocity model building, it is necessary to pick high-quality traveltimes. Whereas various artificial intelligence-based methods have been proposed to automate the process for data with a medium to high signal-to-noise ratio (S/N), these methods are not applicable to low-S/N data, which still require intensive labor from skilled operators. We successfully replace 160 hours of skilled human work with 10 hours of processing on a single NVIDIA Quadro P6000 graphics processing unit by reducing the number of human picks from the usual 5%–10% to 0.19% of available gathers. High-quality inferred picks are generated by convolutional neural network-based machine learning trained on the human picks.
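For context, the classical non-ML baseline for first-break picking is an STA/LTA (short-term average / long-term average) trigger. The sketch below is that baseline, not the paper's convolutional network; the trace, noise level, and window lengths are all illustrative assumptions.

```python
import numpy as np

def sta_lta_pick(trace, dt, sta_win=0.01, lta_win=0.1, threshold=10.0):
    """Classical STA/LTA first-break picker. Returns pick time in s, or None.
    Triggers where the short recent energy window exceeds threshold times
    the long preceding energy window."""
    nsta = max(1, int(round(sta_win / dt)))
    nlta = max(1, int(round(lta_win / dt)))
    energy = trace ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    for i in range(nsta + nlta, len(trace) + 1):
        sta = (csum[i] - csum[i - nsta]) / nsta                 # short, recent
        lta = (csum[i - nsta] - csum[i - nsta - nlta]) / nlta   # long, preceding
        if lta > 0.0 and sta / lta > threshold:
            return (i - nsta) * dt          # approximate onset: start of STA window
    return None

# synthetic trace: a 30 Hz arrival at 0.8 s buried in weak noise (illustrative)
dt, nt = 0.002, 1000
rng = np.random.default_rng(2)
trace = 0.05 * rng.standard_normal(nt)
onset = 400
trace[onset:] += np.sin(2 * np.pi * 30.0 * np.arange(nt - onset) * dt)
pick = sta_lta_pick(trace, dt)              # close to 0.8 s
```

On low-S/N gathers this kind of trigger fires on noise bursts or misses emergent onsets, which is exactly the regime where the paper resorts to human picks plus CNN inference.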


2005 ◽  
Vol 18 (10) ◽  
pp. 1513-1523 ◽  
Author(s):  
W. A. Müller ◽  
C. Appenzeller ◽  
F. J. Doblas-Reyes ◽  
M. A. Liniger

The ranked probability skill score (RPSS) is a widely used measure to quantify the skill of ensemble forecasts. The underlying score is defined by the quadratic norm and is comparable to the mean squared error (MSE), but it is applied in probability space. It is sensitive to the shape and the shift of the predicted probability distributions. However, as recently shown, the RPSS exhibits a negative bias for ensemble systems with small ensemble size. Here, two strategies are explored to tackle this flaw of the RPSS. First, the RPSS is examined for different norms L (RPSS_L). It is shown that RPSS_L=1, based on the absolute rather than the squared difference between the forecast and observed cumulative probability distributions, is unbiased; RPSS_L defined with higher-order norms shows a negative bias. However, RPSS_L=1 is not strictly proper in a statistical sense. A second approach is then investigated, which is based on the quadratic norm but accounts for sampling errors in the climatological probabilities of the reference forecasts. This technique is based on strictly proper scores and results in an unbiased skill score, denoted hereafter the debiased ranked probability skill score (RPSS_D). Both newly defined skill scores are independent of the ensemble size, whereas the associated confidence intervals are a function of the ensemble size and the number of forecasts. The RPSS_L=1 and the RPSS_D are then applied to the winter mean [December–January–February (DJF)] near-surface temperature predictions of the ECMWF Seasonal Forecast System 2. The overall structures of the RPSS_L=1 and the RPSS_D are more consistent and largely independent of the ensemble size, unlike the RPSS_L=2. Furthermore, the minimum ensemble size required to predict a climate anomaly at a known signal-to-noise ratio is determined using the new skill scores. For a hypothetical setup comparable to the ECMWF hindcast system (40 members and 15 hindcast years), statistically significant skill scores were found only for signal-to-noise ratios larger than ∼0.3.
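The basic RPSS computation for a categorical forecast can be sketched as follows, with the norm order as a parameter so the L1 and L2 variants can be compared. This is a toy tercile example with invented numbers; the debiasing of the reference forecast (the RPSS_D construction) is not included.

```python
import numpy as np

def rps(cum_fcst, cum_obs, order=2):
    """Ranked probability score with an L^order norm over cumulative categories."""
    return np.sum(np.abs(cum_fcst - cum_obs) ** order, axis=-1)

def rpss(fcst_probs, obs_cat, clim_probs, order=2):
    """RPSS = 1 - <RPS_forecast> / <RPS_climatology>.
    fcst_probs: (n_forecasts, n_categories); obs_cat: observed category indices."""
    n, k = fcst_probs.shape
    obs = np.zeros((n, k))
    obs[np.arange(n), obs_cat] = 1.0            # one-hot observed category
    cf = np.cumsum(fcst_probs, axis=1)          # cumulative forecast distribution
    co = np.cumsum(obs, axis=1)                 # cumulative observed distribution
    cc = np.cumsum(np.tile(clim_probs, (n, 1)), axis=1)
    return 1.0 - rps(cf, co, order).mean() / rps(cc, co, order).mean()

# three-category (tercile) toy example, illustrative numbers only
fcst = np.array([[0.6, 0.3, 0.1],
                 [0.2, 0.5, 0.3],
                 [0.1, 0.3, 0.6]])
obs = np.array([0, 1, 2])                       # observed terciles
clim = np.array([1 / 3, 1 / 3, 1 / 3])          # equiprobable climatology
skill_l2 = rpss(fcst, obs, clim, order=2)       # quadratic-norm RPSS
skill_l1 = rpss(fcst, obs, clim, order=1)       # absolute-norm variant
```

The small-ensemble bias the paper addresses arises when the forecast probabilities themselves are estimated from few members; with exact probabilities, as here, both variants are positive for these skillful forecasts.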


2020 ◽  
Vol 642 ◽  
pp. A193 ◽  
Author(s):  
M. Millon ◽  
F. Courbin ◽  
V. Bonvin ◽  
E. Buckley-Geer ◽  
C. D. Fassnacht ◽  
...  

We present six new time-delay measurements obtained from Rc-band monitoring data acquired at the Max Planck Institute for Astrophysics (MPIA) 2.2 m telescope at La Silla observatory between October 2016 and February 2020. The lensed quasars HE 0047−1756, WG 0214−2105, DES 0407−5006, 2M 1134−2103, PSJ 1606−2333, and DES 2325−5229 were observed almost daily at high signal-to-noise ratio to obtain high-quality light curves in which we can record fast and small-amplitude variations of the quasars. We measured time delays between all pairs of multiple images with only one or two seasons of monitoring, with the exception of the time delays relative to image D of PSJ 1606−2333. The most precise estimate was obtained for the delay between image A and image B of DES 0407−5006, where τ_AB = −128.4 (+3.5/−3.8) d (2.8% precision), including systematics due to extrinsic variability in the light curves. For HE 0047−1756, we combined our high-cadence data with measurements from decade-long light curves from previous COSMOGRAIL campaigns, reaching a precision of 0.9 d on the final measurement. The present work demonstrates the feasibility of measuring time delays in lensed quasars in only one or two seasons, provided high signal-to-noise ratio data are obtained at a cadence close to daily.
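The essence of measuring a delay between two images' light curves is curve shifting: slide one curve over a grid of trial delays and minimize the mismatch. The sketch below uses a plain mean-squared mismatch on noiseless synthetic curves; real campaigns additionally model extrinsic (microlensing) variability and photometric errors, none of which is reproduced here.

```python
import numpy as np

def measure_delay(t, a, b, trial_delays):
    """Delay of light curve b relative to a, by grid search over trial shifts,
    scoring only epochs where the shifted curve is interpolable."""
    best_tau, best_cost = None, np.inf
    for tau in trial_delays:
        shifted = np.interp(t + tau, t, b)              # b evaluated at t + tau
        valid = (t + tau >= t[0]) & (t + tau <= t[-1])  # avoid extrapolated edges
        cost = np.mean((a[valid] - shifted[valid]) ** 2)
        if cost < best_cost:
            best_tau, best_cost = tau, cost
    return best_tau

# synthetic quasar-like variability at ~daily cadence (illustrative)
t = np.linspace(0.0, 100.0, 501)
intrinsic = np.sin(0.3 * t) + 0.5 * np.sin(0.07 * t)
true_delay = 8.0                                 # image B lags image A by 8 days
a = intrinsic
b = np.interp(t, t + true_delay, intrinsic)      # b(t) = a(t - 8)
tau_hat = measure_delay(t, a, b, np.arange(-20.0, 20.25, 0.25))
```

Near-daily cadence matters because the fast, small-amplitude features are what break the degeneracy between neighboring trial delays.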


Geophysics ◽  
1976 ◽  
Vol 41 (2) ◽  
pp. 219-232 ◽  
Author(s):  
Paul C. Wuenschel

There are several advantages to using vertical arrays for recording reflected signals. The signal-to-noise ratio can be controlled to any desired level when the noise is due to scattering from layers shallower than the depth of the array. With vertical arrays, the bandwidth of usable seismic energy can be increased, events can be properly identified, and both the signal that eventually produces near-surface-induced multiples and the direct pulse radiated from the source, with its accompanying ghosts, can be measured. A field test documents these predictions.


Geophysics ◽  
2019 ◽  
Vol 84 (4) ◽  
pp. V233-V243
Author(s):  
Dingyue Chang ◽  
Cai Zhang ◽  
Tianyue Hu ◽  
Dan Wang

Moveout correction for irregular topography has been a longstanding challenge in processing seismic exploration data. Irregular topography usually results in large moveout among traces, a low signal-to-noise ratio (S/N), and difficulty in modeling near-surface velocities. Conventional normal moveout (NMO) corrections and elevation static methods are imprecise and tend to introduce significant errors for large offsets. Over the past two decades, several multiparameter time corrections and stacking techniques to reduce noise and improve resolution have been proposed in place of the classic NMO and common-midpoint stack. These include the common-reflection-surface (CRS), common-offset CRS, nonhyperbolic CRS, implicit CRS, multifocusing (MF), irregular surface MF (IS-MF), spherical MF (SMF), and common-offset MF methods. Various CRS-type operators that consider the top-surface topography have been proposed; among MF-type operators, only IS-MF can be applied directly to irregular topography with no elevation statics required. In this study, we have developed a new MF formulation, modifying the SMF method to account for nonzero elevations of sources and receivers, so that the moveout of nonplanar data is corrected directly without prior elevation static corrections. The proposed extension combines the sensitivity of SMF to spherical reflectors with the applicability of the IS-MF method to irregular topography. We investigated the behavior of the new operator using a physical model data set and compared the results with those from the conventional IS-MF method. The results revealed that the new operator is more robust over a wide range of source and receiver elevations and has advantages on strongly curved interfaces. We also confirmed the potential of the proposed approach by comparing stacking results for a real land data set with a low S/N.
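For reference, the classic hyperbolic NMO correction that the CRS/MF operators generalize can be sketched as follows (a single-velocity toy on a synthetic gather with invented parameters; the multiparameter MF operators and topography handling are not reproduced):

```python
import numpy as np

def nmo_correct(gather, offsets, v_nmo, dt):
    """Classic hyperbolic NMO correction: t(x) = sqrt(t0^2 + (x / v)^2).
    Maps each output time t0 to the source time t(x) by interpolation."""
    ntr, nt = gather.shape
    t0 = np.arange(nt) * dt
    out = np.zeros_like(gather)
    for itr in range(ntr):
        t_src = np.sqrt(t0 ** 2 + (offsets[itr] / v_nmo) ** 2)
        out[itr] = np.interp(t_src, t0, gather[itr], left=0.0, right=0.0)
    return out

# synthetic CMP gather with one hyperbolic event (illustrative parameters)
dt, nt = 0.004, 500
offsets = np.arange(24) * 100.0          # 0-2300 m
v, t0_true = 2000.0, 0.8
gather = np.zeros((24, nt))
for itr, x in enumerate(offsets):
    tx = np.sqrt(t0_true ** 2 + (x / v) ** 2)
    gather[itr, int(round(tx / dt))] = 1.0

corrected = nmo_correct(gather, offsets, v, dt)
stack = corrected.sum(axis=0)            # event aligns near t0 = 0.8 s
```

With irregular topography, the single t0-v pair above is no longer adequate, which motivates the multiparameter CRS/MF traveltime operators and the elevation terms added in the paper's SMF extension.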


Geophysics ◽  
2012 ◽  
Vol 77 (2) ◽  
pp. P23-P31 ◽  
Author(s):  
A. J. Berkhout ◽  
G. Blacquière ◽  
D. J. Verschuur

In traditional seismic surveys, the firing time between shots is chosen such that the shot records do not interfere in time. In the concept of blended acquisition, however, the records do overlap, allowing denser source sampling and wider azimuths in an economical way. Denser shot sampling and wider azimuths mean that each subsurface gridpoint is illuminated from a larger number of angles, which improves the image quality in terms of signal-to-noise ratio and spatial resolution. We show that, even with very simple blending parameters such as time delays, the incident wavefield at a specific subsurface gridpoint represents a dispersed time series with a "complex code". For shot-record migration purposes, this time series must have a stable inverse. Next, we show that the illumination can be further improved by utilizing the surface-related multiples: these multiples can be exploited to improve the incident wavefield by filling angle gaps in the illumination and/or by extending the range of angles. In this way, the energy contained in the multiples contributes to the image rather than degrading its quality. One remarkable consequence of this property is that the benefits obtained from the improved illumination also depend on the detector locations in the acquisition geometry. We show how to quantify the contribution of the blended surface multiples to the illuminating wavefield for a blended source configuration. Results confirm that the combination of blending and multiple scattering increases the illumination energy and will therefore improve the quality of shot-record migration results beyond today's capability.
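Time-delay blending of two shot records can be sketched with simple array arithmetic. This toy (invented record sizes, white-noise stand-ins for the wavefields) shows the blending operator and the "pseudo-deblending" step of realigning the continuous record to each source's firing time, which leaves the other source as crosstalk.

```python
import numpy as np

dt, nt, ntr = 0.004, 300, 16
rng = np.random.default_rng(3)
shot1 = rng.standard_normal((ntr, nt))   # stand-in for the record of source 1
shot2 = rng.standard_normal((ntr, nt))   # stand-in for the record of source 2

# blending operator: fire source 2 with a 0.4 s dither after source 1
delay2 = int(round(0.4 / dt))            # 100 samples
blended = np.zeros((ntr, nt + delay2))
blended[:, :nt] += shot1
blended[:, delay2:delay2 + nt] += shot2  # records overlap in the continuous trace

# pseudo-deblending: realign the blended record to each source's firing time
pseudo1 = blended[:, :nt]                # shot 1 + interfering onset of shot 2
pseudo2 = blended[:, delay2:delay2 + nt] # shot 2 + interfering tail of shot 1
```

The samples of `pseudo1` before the second source fires are uncontaminated, while the rest carries crosstalk; migration therefore needs the blended incident wavefield (the "complex code") to have a stable inverse.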

