THE VERTICAL ARRAY IN REFLECTION SEISMOLOGY—SOME EXPERIMENTAL STUDIES

Geophysics ◽  
1976 ◽  
Vol 41 (2) ◽  
pp. 219-232 ◽  
Author(s):  
Paul C. Wuenschel

There are several advantages to using vertical arrays for recording reflected signals. The signal‐to‐noise ratio can be controlled to any desired level when the noise is due to scattering from layers shallower than the depth of the array. Vertical arrays also increase the bandwidth of usable seismic energy, allow events to be properly identified, and permit measurement of the signal that eventually produces near‐surface induced multiples, as well as of the direct pulse radiated from the source and its accompanying ghosts. A field test documents these predictions.

Geophysics ◽  
1970 ◽  
Vol 35 (2) ◽  
pp. 337-343 ◽  
Author(s):  
Zoltan A. Der

A vertical array of three component (triaxial) seismometers was operated in an abandoned oil well near Grapevine, Texas. The experiment was designed to investigate the effectiveness of teleseismic P‐wave enhancement by utilization of all three components of motion at various depths within the well. Previous experiments with vertical arrays which only recorded the vertical component of motion showed that optimum processors did not significantly improve the signal‐to‐noise ratio (Roden, 1968). The reason for this poor performance was found to be a similarity in the changes of signal and noise properties with depth.


Geophysics ◽  
1965 ◽  
Vol 30 (4) ◽  
pp. 597-608 ◽  
Author(s):  
Robert B. Roden

A model of teleseismic signal and surface‐mode noise is derived from wave‐propagation theory. Optimum Wiener multichannel frequency domain filters are designed to operate on the outputs of six seismometer arrays so as to pass signals and reject noise. The arrays studied include two 19‐element surface arrays, two 19‐element shallow‐buried arrays and two 6‐element vertical arrays where a 20‐db reduction in spatially uncorrelated noise is assumed to result from seismometer burial. It is found that there is very little difference among the outputs of the filter systems designed for the two surface arrays and the two vertical arrays. The performance of the systems designed for the shallow‐buried arrays was found to be considerably better. For one particular array, the predicted signal‐to‐noise improvement resulting from the assumed effect of shallow burial varies from 5 to 15 db. The theoretical results are sensitive to the amount of uncorrelated noise assumed in the model. However, when the levels of incoherent noise are equal, it appears that a surface array will generally possess greater capability for rejection of coherent noise than will a vertical array with the same size and number of receivers. The performance of an array of either type appears to be quite insensitive to changes of geometry if the number of receivers and the maximum dimension are not changed very much. Although a vertical array will always be superior to a single deeply buried seismometer, the improvement in performance which may be obtained by increasing the number of receivers in a vertical array is much less than in the case of a surface array.


2015 ◽  
Author(s):  
Jinjiang Wang ◽  
Robert X. Gao ◽  
Xinyao Tang ◽  
Zhaoyan Fan ◽  
Peng Wang

Data communication through metallic structures is commonly encountered in manufacturing equipment and in process monitoring and control. This paper presents a signal processing technique for enhancing the signal-to-noise ratio and the data transmission rate in ultrasound-based wireless data transmission through metallic structures. A multi-carrier coded-ultrasonic wave modulation scheme is first investigated to achieve a high data rate while reducing inter-symbol interference and data loss due to the inherent signal attenuation, wave diffraction, and reflection in metallic structures. To improve the signal-to-noise ratio, the dual-tree wavelet packet transform (DT-WPT) is investigated to separate multi-carrier signals under noise contamination, given its shift invariance and flexible time-frequency partitioning. A new envelope extraction and threshold setting strategy for selected wavelet coefficients is then introduced to retrieve the coded digital information. Experimental studies are performed to evaluate the effectiveness of the developed signal processing method for manufacturing.
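The envelope-and-threshold decoding step can be illustrated in a few lines. The sketch below is a simplified stand-in for the paper's DT-WPT pipeline: it decodes a single on-off-keyed carrier using a Hilbert-transform envelope and a mid-range threshold; all signal parameters here are hypothetical, not the experimental values.

```python
import numpy as np
from scipy.signal import hilbert

def decode_ook(signal, fs, bit_rate, threshold=None):
    """Recover on-off-keyed bits from a noisy carrier via
    Hilbert-envelope extraction and mid-range thresholding."""
    env = np.abs(hilbert(signal))          # analytic-signal envelope
    spb = int(fs / bit_rate)               # samples per bit
    n_bits = len(signal) // spb
    # average the envelope over each bit interval
    levels = env[:n_bits * spb].reshape(n_bits, spb).mean(axis=1)
    if threshold is None:                  # simple data-driven threshold
        threshold = 0.5 * (levels.max() + levels.min())
    return (levels > threshold).astype(int)

# demo: 8 bits on a 100 kHz carrier with mild additive noise
fs, fc, bit_rate = 1_000_000, 100_000, 10_000
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
t = np.arange(len(bits) * fs // bit_rate) / fs
carrier = np.sin(2 * np.pi * fc * t)
amplitude = np.repeat(bits, fs // bit_rate)
rng = np.random.default_rng(0)
rx = amplitude * carrier + 0.1 * rng.standard_normal(len(t))
print(decode_ook(rx, fs, bit_rate))  # should recover the transmitted pattern
```

In the paper's multi-carrier setting the same envelope/threshold logic would be applied per sub-band after the wavelet-packet separation; here a single band suffices to show the idea.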


Geophysics ◽  
1989 ◽  
Vol 54 (11) ◽  
pp. 1384-1396 ◽  
Author(s):  
Howard Renick ◽  
R. D. Gunn

The Triangle Ranch Headquarters Canyon Reef field is long and narrow and in an area where near‐surface evaporites and associated collapse features degrade seismic data quality and interpretational reliability. Below this disturbed section, the structure of rocks is similar to the deeper Canyon Reef structure. The shallow structure exhibits very gentle relief and can be mapped by drilling shallow holes on a broad grid. The shallow structural interpretation provides a valuable reference datum for mapping, as well as providing a basis for planning a seismic program. By computing an isopach between the variable seismic datum and the Canyon Reef reflection and subtracting the isopach map from the datum map, we map Canyon Reef structure. The datum map is extrapolated from the shallow core holes. In the area, near‐surface complexities produce seismic noise and severe static variations. The crux of the exploration problem is to balance seismic signal‐to‐noise ratio and geologic resolution. Adequate geologic resolution is impossible without understanding the exploration target. As we understood the target better, we modified our seismic acquisition parameters. Studying examples of data with high signal‐to‐noise ratio and poor resolution and examples of better defined structure on apparently noisier data led us to design an acquisition program for resolution and to reduce noise with arithmetic processes that do not reduce structural resolution. Combining acquisition and processing parameters for optimum structural resolution with the isopach mapping method has improved wildcat success from about 1 in 20 to better than 1 in 2. It has also enabled an 80 percent development drilling success ratio as opposed to slightly over 50 percent in all previous drilling.
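The structural mapping described above reduces to a grid subtraction: the Canyon Reef structure map is the shallow datum map minus the datum-to-reflection isopach. The sketch below uses entirely hypothetical elevation and thickness values (in feet, subsea elevations negative) just to show the bookkeeping.

```python
import numpy as np

# hypothetical grids: the shallow core-hole datum and the seismic isopach
datum_map = np.array([[-800.0, -810.0],
                      [-805.0, -820.0]])   # shallow structural datum (ft)
isopach = np.array([[5600.0, 5580.0],
                    [5590.0, 5605.0]])     # datum-to-Canyon Reef interval (ft)

# Canyon Reef structure = variable datum minus the isopach beneath it
canyon_reef_structure = datum_map - isopach
print(canyon_reef_structure)
```

Because the datum is extrapolated from shallow core holes rather than assumed flat, static-related errors in the seismic datum largely cancel out of the final structure map.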


Electronics ◽  
2019 ◽  
Vol 8 (5) ◽  
pp. 573 ◽  
Author(s):  
Zhuo Jia ◽  
Sixin Liu ◽  
Ling Zhang ◽  
Bin Hu ◽  
Jianmin Zhang

Knowledge of the subsurface structure not only provides useful information on lunar geology, but can also quantify potential lunar resources for human beings. The dual-frequency lunar penetrating radar (LPR) aboard the Yutu rover offers a special opportunity to understand the subsurface structure to a depth of several hundred meters using the low-frequency channel (channel 1), as well as the layered near-surface stratigraphic structure of the regolith using the high-frequency observations (channel 2). The channel 1 data of the LPR have a very low signal-to-noise ratio, so the extraction of weak signals from the data is a problem worth exploring. In this article, we propose a weak-signal extraction method based on local correlation to analyze the LPR CH-1 data, to facilitate a study of the lunar regolith structure. First, we build a pre-processing workflow to increase the signal-to-noise ratio (SNR). Second, we apply the K-L transform to separate the horizontal signal and then use the seislet transform (ST) to preserve the continuous signal. Then, the local correlation map is calculated from the two denoising results, and a time–space dependent weighting operator is constructed to suppress the noise residuals. The weak signal after noise suppression may provide a new reference for subsequent data interpretation. Finally, in combination with the regional geology and previous research, we provide some speculative interpretations of the LPR CH-1 data.
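A minimal sketch of the local-correlation weighting idea, reduced to one dimension: two independently denoised versions of the same trace are compared in sliding windows, and samples where they disagree are down-weighted. The signal, noise levels, and window length here are illustrative, not the authors' actual operator.

```python
import numpy as np

def local_correlation(a, b, win):
    """Sliding-window normalized correlation of two equal-length 1-D
    signals (a simplified stand-in for the time-space operator)."""
    out = np.zeros(len(a))
    half = win // 2
    for i in range(len(a)):
        sl = slice(max(0, i - half), min(len(a), i + half + 1))
        x, y = a[sl] - a[sl].mean(), b[sl] - b[sl].mean()
        denom = np.sqrt((x * x).sum() * (y * y).sum())
        out[i] = (x * y).sum() / denom if denom > 0 else 0.0
    return out

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 400)
signal = np.sin(2 * np.pi * 5 * t) * np.exp(-2 * t)    # common weak signal
d1 = signal + 0.05 * rng.standard_normal(t.size)        # e.g. K-L result
d2 = signal + 0.05 * rng.standard_normal(t.size)        # e.g. seislet result

# weighting operator: high where the two denoised results agree
w = np.clip(local_correlation(d1, d2, win=25), 0.0, 1.0)
# suppress residual noise where the results disagree
enhanced = w * 0.5 * (d1 + d2)
```

Where the two denoising paths agree the weight is near one and the signal passes; where only one path retains energy (likely a residual) the weight drops toward zero.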


Geophysics ◽  
1950 ◽  
Vol 15 (2) ◽  
pp. 181-207 ◽  
Author(s):  
Thos. C. Poulter

The Poulter seismic method is outlined and an analysis is made of the frequency distribution of seismic energy from different sources. The effect of different sources upon the directivity of the energy in the ground and the improvement that can be obtained in signal‐to‐noise ratio is discussed. Three different methods are described for controlling the frequency of the seismic impulses being introduced into the ground and the resulting improvement in the quality of records is illustrated. The almost complete elimination of multiple reflections by the method is indicated.


Geophysics ◽  
1988 ◽  
Vol 53 (10) ◽  
pp. 1303-1310 ◽  
Author(s):  
F. Wenzel

In 1984, seismic wide‐angle Vibroseis experiments across the Rhine graben in eastern France/southwestern Germany were carried out as a joint French/German venture. Signals generated by French vibrators were recorded at the eastern flank of the graben in the Black Forest. With a 200‐channel array spread along 16 km (group spacing: 80 m), data were recorded at offsets between 66 and 82 km. The data are characterized by a low signal‐to‐noise ratio (S/N) and distortions of seismic phases caused by strong topographic variations. For wide‐angle records, conventional static corrections based on the assumption of vertical travelpaths are no longer appropriate; ray‐parameter‐dependent time delays should be used to correctly eliminate topographic and near‐surface influences on the data. In addition, delineation of reflector segments requires an increase in S/N. A two‐dimensional filter based on forward and inverse slant stacking was designed to handle the time‐delay and S/N problems simultaneously. Stacking along a suite of ray parameters allows the use of static corrections that depend on the angle of emergence of the seismic arrivals. Weighting data by their coherence emphasizes spatially correlatable phases. Both procedures significantly increase S/N.
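The forward slant stack with coherence weighting can be sketched as follows. This is an illustrative nearest-sample implementation, not the authors' filter, and the geometry (8 traces at 80 m spacing, a single spike event) is only a toy version of the 200-channel spread.

```python
import numpy as np

def slant_stack(gather, offsets, dt, slopes):
    """Forward slant stack (tau-p) with semblance weighting.
    gather: (n_traces, n_samples); slopes: ray parameters in s/m."""
    n_tr, n_s = gather.shape
    taup = np.zeros((len(slopes), n_s))
    for ip, p in enumerate(slopes):
        shifted = np.zeros_like(gather)
        for i, x in enumerate(offsets):
            shift = int(round(p * x / dt))       # nearest-sample delay
            shifted[i] = np.roll(gather[i], -shift)
        stack = shifted.mean(axis=0)
        # semblance: coherent energy over total energy along this slope
        num = n_tr * stack ** 2
        den = (shifted ** 2).sum(axis=0) + 1e-12
        taup[ip] = stack * np.clip(num / den, 0.0, 1.0)  # coherence weighting
    return taup

# demo: a single linear event with slope 0.0005 s/m focuses at that p
dt, n_s = 0.004, 200
offsets = np.arange(8) * 80.0        # 80 m group spacing
gather = np.zeros((8, n_s))
for i, x in enumerate(offsets):
    gather[i, int(round(0.2 / dt + 0.0005 * x / dt))] = 1.0
slopes = np.array([0.0, 0.00025, 0.0005, 0.00075])
taup = slant_stack(gather, offsets, dt, slopes)
print(np.unravel_index(np.argmax(taup), taup.shape))  # peaks at slope index 2
```

The semblance weight passes energy that is coherent along a given ray parameter and attenuates the rest, which is the mechanism that lets emergence-angle-dependent statics and S/N enhancement be handled in one pass.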


Geophysics ◽  
1966 ◽  
Vol 31 (3) ◽  
pp. 501-505 ◽  
Author(s):  
C. S. Clay

Conventional plane wave array theory does not apply to arrays in an inhomogeneous medium. In a stratified waveguide such as the ocean there are many modes of propagation, and each of them is dispersive. As expressed in normal mode formalism, the transmission between vertical arrays can be considered as a filter problem. In this paper we consider the response of the array filters to an ambient noise field, and maximize the signal‐to‐noise ratio for transmission in a noisy waveguide. The resulting optimum, or matched array, filter is given by the conjugate of the product of the source function and the waveguide transmission function. The response of the matched array filter is weighted with the reciprocal of the noise power in each mode and the attenuation.
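In the frequency domain, the matched array filter described above is the conjugate of the product of the source spectrum and the mode's transmission function, weighted by the reciprocal of that mode's noise power. A sketch with made-up source, mode, and noise values (the delays, amplitudes, and noise powers are purely hypothetical):

```python
import numpy as np

freqs = np.linspace(1.0, 50.0, 256)                    # Hz
source = np.exp(-(((freqs - 20.0) / 8.0) ** 2))        # S(f): source spectrum
# G_m(f): per-mode transfer function (delay tau, attenuation a),
# standing in for the waveguide transmission functions
modes = [a * np.exp(-2j * np.pi * freqs * tau)
         for tau, a in [(0.10, 1.0), (0.13, 0.6)]]
noise_power = [1.0, 2.5]                               # N_m: noise power per mode

# matched array filter per mode: conj(S * G_m), weighted by 1 / N_m
filters = [np.conj(source * g) / n for g, n in zip(modes, noise_power)]

# applying a mode's filter to its own arrival cancels the phase,
# so the filtered spectrum is purely real and non-negative
response = filters[0] * (source * modes[0])
```

Because the filter is the conjugate of the signal's spectrum, all frequency components of the matched mode add in phase at the output, which is what maximizes the signal-to-noise ratio against the mode-weighted noise field.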


2005 ◽  
Vol 18 (10) ◽  
pp. 1513-1523 ◽  
Author(s):  
W. A. Müller ◽  
C. Appenzeller ◽  
F. J. Doblas-Reyes ◽  
M. A. Liniger

Abstract The ranked probability skill score (RPSS) is a widely used measure to quantify the skill of ensemble forecasts. The underlying score is defined by the quadratic norm and is comparable to the mean squared error (mse), but it is applied in probability space. It is sensitive to the shape and the shift of the predicted probability distributions. However, the RPSS shows a negative bias for ensemble systems with small ensemble size, as recently shown. Here, two strategies are explored to tackle this flaw of the RPSS. First, the RPSS is examined for different norms L (RPSS_L). It is shown that the RPSS_L=1, based on the absolute rather than the squared difference between forecast and observed cumulative probability distributions, is unbiased; RPSS_L scores defined with higher-order norms show a negative bias. However, the RPSS_L=1 is not strictly proper in a statistical sense. A second approach is then investigated, which is based on the quadratic norm but with sampling errors in climatological probabilities considered in the reference forecasts. This technique is based on strictly proper scores and results in an unbiased skill score, denoted hereafter the debiased ranked probability skill score (RPSS_D). Both newly defined skill scores are independent of the ensemble size, whereas the associated confidence intervals are a function of the ensemble size and the number of forecasts. The RPSS_L=1 and the RPSS_D are then applied to the winter mean [December–January–February (DJF)] near-surface temperature predictions of the ECMWF Seasonal Forecast System 2. The overall structures of the RPSS_L=1 and the RPSS_D are more consistent and largely independent of the ensemble size, unlike the RPSS_L=2. Furthermore, the minimum ensemble size required to predict a climate anomaly with a known signal-to-noise ratio is determined by employing the new skill scores.
For a hypothetical setup comparable to the ECMWF hindcast system (40 members and 15 hindcast years), statistically significant skill scores were only found for a signal-to-noise ratio larger than ∼0.3.
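The debiasing idea can be sketched numerically: score the climatological reference forecast with the same finite ensemble size as the real forecast, so that both suffer the same sampling error. This Monte Carlo version is a simplified stand-in for the authors' RPSS_D; the category count, ensemble size, and sample sizes below are illustrative.

```python
import numpy as np

def rps(prob_fcst, obs_cat, norm=2):
    """Ranked probability score: norm-p distance between forecast and
    observed cumulative category probabilities."""
    cum_f = np.cumsum(prob_fcst)
    cum_o = np.cumsum(np.eye(len(prob_fcst))[obs_cat])
    return np.sum(np.abs(cum_f - cum_o) ** norm)

def rpss_debiased(ens_fcsts, obs_cats, n_cat, rng, n_ref=1000):
    """RPSS_D sketch: the climatological reference is itself sampled
    with the (small) ensemble size, removing the negative bias."""
    m = ens_fcsts.shape[1]                         # ensemble size
    rps_f = np.mean([rps(np.bincount(f, minlength=n_cat) / m, o)
                     for f, o in zip(ens_fcsts, obs_cats)])
    # reference: random m-member draws from a uniform climatology
    rps_ref = np.mean([rps(np.bincount(rng.integers(0, n_cat, m),
                                       minlength=n_cat) / m, o)
                       for o in obs_cats for _ in range(n_ref)])
    return 1.0 - rps_f / rps_ref

rng = np.random.default_rng(0)
obs = np.array([0, 1, 2, 1, 0])                    # observed categories
perfect = np.repeat(obs[:, None], 10, axis=1)      # 10 members, all correct
print(rpss_debiased(perfect, obs, n_cat=3, rng=rng))  # perfect skill -> 1.0
```

A naive RPSS would divide by the RPS of the exact climatological probabilities instead; because a finite ensemble cannot reproduce those probabilities exactly, that ratio is biased low, and the bias grows as the ensemble shrinks.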


2021 ◽  
Vol 18 (6) ◽  
pp. 890-907
Author(s):  
Andrey Bakulin ◽  
Ilya Silvestrov ◽  
Maxim Protasov

Abstract Modern land seismic data are typically acquired with high spatial trace density, using small source and receiver arrays or point sources and sensors. These datasets are challenging to process due to their massive size and relatively low signal-to-noise ratio caused by scattered near-surface noise. Therefore, prestack data enhancement becomes a critical step in the processing flow. Nonlinear beamforming has proved very powerful for 3D land data. However, it requires computationally intensive estimation of local coherency on dense spatial/temporal grids in 3D prestack data cubes. We present an analysis of various estimation methods, focusing on the trade-off between computational efficiency and enhanced data quality. We demonstrate that the popular sequential «2 + 2 + 1» scheme is highly efficient but may lead to unreliable estimation and poor enhancement for data with a low signal-to-noise ratio. We propose an alternative algorithm, called «dip + curvatures», that remains stable for such challenging data. We supplement the new strategy with an additional interpolation procedure in the spatial and time dimensions to reduce the computational cost. We demonstrate that the «dip + curvatures» strategy coupled with an interpolation scheme approaches the efficiency of the «2 + 2 + 1» method while significantly outperforming it in enhanced data quality. We conclude that the new algorithm strikes a practical trade-off between performance and the quality of the enhanced data. These conclusions are supported by synthetic and real 3D land seismic data from challenging desert environments with a complex near surface.
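A sequential dip-then-curvature search can be sketched in one scan dimension at a time: first pick the dip that maximizes semblance with curvature fixed at zero, then scan curvature with that dip held. This toy version (nearest-sample shifts, a single Gaussian event on a 2D gather, hand-picked scan grids) only illustrates the mechanics, not the authors' production algorithm.

```python
import numpy as np

def semblance(gather, offsets, dt, t0_idx, dip, curv):
    """Coherence of a local moveout t = t0 + dip*x + curv*x^2."""
    vals = []
    for i, x in enumerate(offsets):
        idx = t0_idx + int(round((dip * x + curv * x * x) / dt))
        if 0 <= idx < gather.shape[1]:
            vals.append(gather[i, idx])
    vals = np.asarray(vals)
    return float(vals.sum() ** 2 / (len(vals) * (vals ** 2).sum() + 1e-12))

def dip_then_curvature(gather, offsets, dt, t0_idx, dips, curvs):
    """Sequential search: best dip at zero curvature, then best curvature."""
    p = max(dips, key=lambda d: semblance(gather, offsets, dt, t0_idx, d, 0.0))
    c = max(curvs, key=lambda k: semblance(gather, offsets, dt, t0_idx, p, k))
    return p, c

# toy gather: one Gaussian event with dip 4e-4 s/m and curvature 2.5e-7 s/m^2
dt, n_s = 0.004, 256
offsets = np.linspace(-200.0, 200.0, 9)
samples = np.arange(n_s)
gather = np.zeros((len(offsets), n_s))
for i, x in enumerate(offsets):
    t_event = 100.0 + (4e-4 * x + 2.5e-7 * x * x) / dt   # event time in samples
    gather[i] = np.exp(-(((samples - t_event) / 2.0) ** 2))

dips = [2e-4, 3e-4, 4e-4, 5e-4, 6e-4]
curvs = [0.0, 1.25e-7, 2.5e-7, 3.75e-7]
print(dip_then_curvature(gather, offsets, dt, 100, dips, curvs))
```

Scanning dip and curvature sequentially reduces an N_dip x N_curv joint search to N_dip + N_curv semblance evaluations per grid point, which is the efficiency motivation behind both sequential schemes discussed in the abstract.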

