New image reconstruction technique for signal‐to‐noise ratio enhancement in seismic data

1988 ◽  
Author(s):  
Dimitris Agouridis ◽  
Sotiris Kapotas ◽  
Dzung Nguyen ◽  
Xi‐Shuo Wang

2021 ◽  
Vol 11 (1) ◽  
pp. 78
Author(s):  
Jianbo He ◽  
Zhenyu Wang ◽  
Mingdong Zhang

When the signal-to-noise ratio of seismic data is very low, velocity spectrum focusing is poor and the velocity model obtained by conventional velocity analysis methods is not accurate enough, which results in inaccurate migration. For low signal-to-noise ratio (SNR) data, this paper proposes using a partial Common Reflection Surface (CRS) stack to build CRS gathers, making full use of all the reflection information within the first Fresnel zone and improving the signal-to-noise ratio of pre-stack gathers by increasing the fold. Because the CRS parameters (the emergence angle of the zero-offset ray and the radius of curvature of the normal wavefront) are searched for on the zero-offset profile, we use ellipse-evolving stacking to improve the quality of the zero-offset section and thereby the reliability of the CRS parameters. After the CRS gathers are obtained, we apply a principal component analysis (PCA) approach to velocity analysis, which improves its noise immunity. Results on both synthetic models and field data demonstrate the effectiveness of this method.
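The abstract does not detail how PCA enters the velocity analysis. One common way to make a coherence measure noise-robust with PCA (a minimal sketch with synthetic data, not necessarily the authors' exact formulation) is to score each trial moveout correction by the fraction of energy captured by the first principal component of the corrected gather:

```python
import numpy as np

def pca_coherence(gather):
    """Noise-robust coherence of a moveout-corrected gather
    (traces x samples): the fraction of energy captured by the first
    principal component.  A perfectly aligned, noise-free event is
    rank one, so the measure approaches 1."""
    g = gather - gather.mean(axis=1, keepdims=True)  # de-mean each trace
    s = np.linalg.svd(g, compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)

# Hypothetical example: 20 copies of one wavelet (a well-aligned event)
# score higher than the same wavelet progressively shifted
# (a mis-aligned event), even with noise added to both.
rng = np.random.default_rng(0)
wavelet = np.exp(-0.5 * ((np.arange(100) - 50) / 5.0) ** 2)
aligned = np.tile(wavelet, (20, 1)) + 0.1 * rng.standard_normal((20, 100))
shifted = np.array([np.roll(wavelet, 3 * i) for i in range(20)])
shifted += 0.1 * rng.standard_normal((20, 100))
print(pca_coherence(aligned), pca_coherence(shifted))
```

Scanning this score over trial velocities would then play the role of a conventional semblance scan, with the rank-one criterion less sensitive to incoherent noise.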


Geophysics ◽  
2021 ◽  
pp. 1-51
Author(s):  
Chao Wang ◽  
Yun Wang

Reduced-rank filtering is a common method for attenuating noise in seismic data. Because conventional reduced-rank filtering distinguishes signal from noise only according to singular values, it performs poorly when the signal-to-noise ratio is very low or when the data contain high levels of isolated or coherent noise. We therefore developed a novel and robust reduced-rank filtering method based on singular value decomposition in the time-space domain. In this method, noise is recognized and attenuated according to the characteristics of both the singular values and the singular vectors. The left and right singular vectors corresponding to large singular values are selected first. The right singular vectors are then classified into different categories according to their curve characteristics, such as jump, pulse, and smooth. Each kind of right singular vector is related to a type of noise or seismic event and is corrected with a different filtering technique, such as mean filtering, edge-preserving smoothing, or edge-preserving median filtering. The left singular vectors are likewise corrected with filtering methods based on frequency attributes such as the main frequency and the frequency bandwidth. To process seismic data containing a variety of events, local data are extracted along the local dip of each event. The optimal local dip is identified according to the singular values and singular vectors of the data matrices extracted along different trial directions. The new filtering method has been applied to synthetic and field seismic data, and its performance is compared with that of several conventional filtering methods. The results indicate that the new method is more robust for data with a low signal-to-noise ratio, strong isolated noise, or coherent noise. It also overcomes the difficulties associated with selecting an optimal rank.
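The conventional truncated-SVD baseline that this method improves on can be sketched as follows (a minimal illustration on synthetic data; the classification and per-category correction of singular vectors described above are not reproduced here):

```python
import numpy as np

def reduced_rank_filter(data, rank):
    """Conventional reduced-rank (truncated-SVD) filtering of a
    time-space data matrix: keep only the `rank` largest singular
    values, which capture the laterally coherent events."""
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    s[rank:] = 0.0
    return (u * s) @ vt

# Hypothetical example: a flat event repeated across 30 traces is
# rank one, so a rank-1 reconstruction suppresses the random noise.
rng = np.random.default_rng(1)
event = np.sin(2 * np.pi * 0.05 * np.arange(200))
clean = np.tile(event, (30, 1))
noisy = clean + 0.5 * rng.standard_normal(clean.shape)
denoised = reduced_rank_filter(noisy, rank=1)
err_before = np.linalg.norm(noisy - clean)
err_after = np.linalg.norm(denoised - clean)
print(err_before, err_after)
```

The abstract's point is precisely that thresholding on singular values alone, as above, fails for low SNR or coherent noise; inspecting the singular *vectors* supplies the missing discriminating information.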


Geophysics ◽  
2013 ◽  
Vol 78 (6) ◽  
pp. V229-V237 ◽  
Author(s):  
Hongbo Lin ◽  
Yue Li ◽  
Baojun Yang ◽  
Haitao Ma

Time-frequency peak filtering (TFPF) can efficiently suppress random noise and hence improve the signal-to-noise ratio. However, the results are not always satisfactory when the TFPF is applied to fast-varying seismic signals. We begin with an error analysis of the TFPF using the spread factor of the phase and the cumulants of the noise. This analysis shows that the nonlinear signal component and non-Gaussian random noise cause the peaks of the pseudo-Wigner-Ville distribution (PWVD) to deviate from the instantaneous frequency. The deviation introduces signal distortion and random oscillations into the output of the TFPF. We propose a weighted reassigned smoothed PWVD with less deviation than the PWVD. The proposed method adopts a frequency window to smooth away the residual oscillations in the PWVD and incorporates a weight function in the reassignment, which sharpens the time-frequency distribution and reduces the deviation. Because the weight function is determined by the lateral coherence of the seismic data, the smoothed PWVD is assigned to the accurate instantaneous frequency of the desired signal components by weighted frequency reassignment. As a result, the TFPF based on the weighted reassigned PWVD (TFPF_WR) is more effective at suppressing random noise and preserving signal than the TFPF using the PWVD. We test the proposed method on synthetic and field seismic data and compare it with a wavelet-transform method and a [Formula: see text] prediction filter. The results show that the proposed method outperforms the other methods in signal preservation at low signal-to-noise ratios.
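The basic TFPF procedure that the abstract refines can be sketched roughly as follows: encode the noisy trace as the instantaneous frequency (IF) of a unit-amplitude analytic signal, then read the IF back from the peak of a windowed (pseudo) Wigner-Ville distribution. This is a simplified illustration with hypothetical parameters, not the authors' weighted reassigned variant:

```python
import numpy as np

def tfpf(x, mu=0.2, half_win=16):
    """Basic time-frequency peak filtering.  The scale mu keeps the
    encoded IF 2*mu*x below the Nyquist limit of 0.5 cycles/sample."""
    n = len(x)
    z = np.exp(2j * np.pi * mu * np.cumsum(x))   # FM encoding
    zp = np.pad(z, half_win)                     # zero-padded edges
    nfft = 4 * half_win
    window = np.hanning(2 * half_win + 1)
    m = np.arange(-half_win, half_win + 1)
    est = np.empty(n)
    for t in range(n):
        c = t + half_win
        # Windowed instantaneous autocorrelation at time t.
        kernel = window * zp[c + m] * np.conj(zp[c - m])
        spec = np.abs(np.fft.fft(kernel, nfft))
        k = np.argmax(spec[: nfft // 2])
        # The phase advances by 2*pi*(2*mu*x) per unit lag, so peak
        # bin k maps back to the signal value k / (2 * nfft * mu).
        est[t] = k / (2 * nfft * mu)
    return est

# Hypothetical example: a slowly varying positive signal in noise.
rng = np.random.default_rng(2)
t = np.arange(400)
clean = 0.5 + 0.3 * np.sin(2 * np.pi * t / 200.0)
noisy = clean + 0.2 * rng.standard_normal(400)
rec = tfpf(noisy)
core = slice(32, -32)                # ignore edge effects
rmse_before = np.sqrt(np.mean((noisy[core] - clean[core]) ** 2))
rmse_after = np.sqrt(np.mean((rec[core] - clean[core]) ** 2))
print(rmse_before, rmse_after)
```

For a fast-varying `clean`, the windowed autocorrelation's constant-IF assumption breaks down and the PWVD peak deviates from the true IF, which is exactly the error the abstract's weighted reassignment targets.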


Author(s):  
D. BALASUBRAMANIAN ◽  
MURALI C. KRISHNA ◽  
R. MURUGESAN

Low-frequency instrumentation and imaging capabilities have established electron magnetic resonance imaging (EMRI) as an emerging non-invasive imaging technology for mapping free radicals in biological systems. Unlike MRI, EMRI is implemented as a pure phase-encoding technique. The fast bio-clearance of the imaging agent and the requirement to reduce radio-frequency power deposition dictate the collection of reduced k-space samples, compromising the quality and resolution of the EMR images. The present work evaluates various interpolation kernels for generating larger k-space sample sets for image reconstruction from the acquired reduced k-space samples. Using k-space EMR data sets acquired for phantoms as well as live mice, the proposed technique is critically evaluated by computing quality metrics, namely the signal-to-noise ratio (SNR), standard deviation error (SDE), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), contrast-to-noise ratio (CNR), and Lui's error function (F(I)). The quantitative evaluation of 24 different interpolation functions (including piecewise polynomial functions and many windowed sinc functions) for upsampling the k-space data for Fourier EMR image reconstruction shows that, at the expense of a slight increase in computing time, the images reconstructed from data upsampled with the Spline-sinc, Welch-sinc, and Gaussian-sinc kernels are closest to the reference image, with minimal distortion. The support of the interpolating kernel is a characteristic parameter that determines both the quality of the reconstructed image and the time complexity. In this paper, a method to optimize the kernel support using a genetic algorithm (GA) is also explored. Maximization of the fitness function involves two conflicting objectives, so it is approached as a multi-objective optimization problem.
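The windowed-sinc upsampling idea can be illustrated in one dimension (a minimal sketch with a hypothetical Welch-windowed kernel and a synthetic bandlimited line, not the paper's full 2-D reconstruction pipeline). Note how the window support trades accuracy against cost, which is the parameter the paper's GA optimizes:

```python
import numpy as np

def welch_sinc_kernel(t, support=4):
    """Sinc kernel tapered by a Welch (parabolic) window of the given
    half-support; identically zero outside |t| >= support."""
    w = np.where(np.abs(t) < support, 1.0 - (t / support) ** 2, 0.0)
    return np.sinc(t) * w

def upsample(samples, factor, support=4):
    """Interpolate a uniformly sampled line onto a grid `factor`
    times denser using the windowed-sinc kernel."""
    n = len(samples)
    idx = np.arange(n)                       # original sample positions
    fine = np.arange(n * factor) / factor    # fine-grid positions
    out = np.zeros(len(fine), dtype=samples.dtype)
    for i, x in enumerate(fine):
        out[i] = np.sum(samples * welch_sinc_kernel(x - idx, support))
    return out

# Hypothetical example: interpolate a 64-sample sinusoid onto a grid
# twice as dense and compare against the true values (away from the
# boundaries, where the finite support causes edge error).
coarse = np.sin(2 * np.pi * 0.08 * np.arange(64))
dense = upsample(coarse, factor=2)
truth = np.sin(2 * np.pi * 0.08 * np.arange(128) / 2.0)
max_err = np.max(np.abs(dense[16:-16] - truth[16:-16]))
print(max_err)
```

In the EMRI setting the same kernel would be applied along each k-space dimension before the inverse Fourier transform, so a larger support raises cost once per axis per sample.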


Geophysics ◽  
1989 ◽  
Vol 54 (11) ◽  
pp. 1384-1396
Author(s):  
Howard Renick ◽  
R. D. Gunn

The Triangle Ranch Headquarters Canyon Reef field is long and narrow, and it lies in an area where near-surface evaporites and associated collapse features degrade seismic data quality and interpretational reliability. Below this disturbed section, the structure of the rocks is similar to the deeper Canyon Reef structure. The shallow structure exhibits very gentle relief and can be mapped by drilling shallow holes on a broad grid. The shallow structural interpretation provides a valuable reference datum for mapping, as well as a basis for planning a seismic program. By computing an isopach between the variable seismic datum and the Canyon Reef reflection and subtracting the isopach map from the datum map, we map the Canyon Reef structure. The datum map is extrapolated from the shallow core holes. In the area, near-surface complexities produce seismic noise and severe static variations. The crux of the exploration problem is to balance the seismic signal-to-noise ratio against geologic resolution. Adequate geologic resolution is impossible without understanding the exploration target; as we understood the target better, we modified our seismic acquisition parameters. Studying examples of data with a high signal-to-noise ratio but poor resolution, and examples of better-defined structure on apparently noisier data, led us to design an acquisition program for resolution and to reduce noise with arithmetic processes that do not reduce structural resolution. Combining acquisition and processing parameters for optimum structural resolution with the isopach mapping method has improved wildcat success from about 1 in 20 to better than 1 in 2. It has also enabled an 80 percent development drilling success ratio, as opposed to slightly over 50 percent in all previous drilling.
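The isopach mapping step described above is a grid subtraction: the target structure map is the extrapolated datum surface minus the seismically derived interval isopach. A minimal sketch with hypothetical grid values (elevations and thicknesses in feet on a common map grid):

```python
import numpy as np

# Hypothetical 3x3 map grids: `datum` is the shallow structural
# surface extrapolated from the core holes; `isopach` is the seismic
# interval thickness between that variable datum and the Canyon Reef
# reflection.
datum = np.array([[500.0, 505.0, 510.0],
                  [502.0, 508.0, 512.0],
                  [504.0, 509.0, 515.0]])
isopach = np.array([[800.0, 810.0, 805.0],
                    [795.0, 812.0, 806.0],
                    [798.0, 809.0, 810.0]])

# Canyon Reef structure: subtract the isopach map from the datum map.
reef = datum - isopach
print(reef)
```

Because the subtraction is applied cell by cell, errors in the datum extrapolation and in the isopach pick add directly, which is why the paper emphasizes the reliability of the shallow datum.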

